\section{Introduction} \label{intro} In order to release statistics about populations without violating privacy, the US Census Bureau announced in 2018 that it would implement differential privacy (DP) on publicly released data products derived from 2020 decennial census data\cite{doi:10.1073/pnas.200371411}. Given that reporting of statistics at population-scale is not sufficient to ensure the privacy of individuals\cite{10.1007/11787006_1}, DP provides a formal framework for rigorous privacy guarantees via the principled introduction of noise into statistics. That is, DP ensures that at specified population-scales the statistics do not change substantially when a single individual’s information is included/excluded. The Census Bureau's use of DP as a component of its 2020 Disclosure Avoidance System (DAS), which was introduced in response to a report showing that its previous methods permitted larger than expected risks of person re-identification\cite{nationalacademies.org}, is a departure from the disclosure avoidance procedures applied to prior publicly released decennial census data (described by \cite{hotz2022chronicle}). The variant of DP applied to the 2020 census products is a top-down mechanism that infuses random noise into census tabulations at six different nested geolevels (nation, state, county, tract, block group, block)\cite{garfinkel2019deploying,abowd2022tda}. Because the threat to privacy is greater for release of statistics on small populations, the noise injected into small counts is relatively larger than that applied to larger counts\cite{cohen2022private, https://doi.org/10.1111/1475-6773.14000}. The decision to incorporate DP and the necessary subsequent post-processing steps in the 2020 Census DAS has been controversial. Some scholars have voiced concerns about the potential negative impacts of noisy data on public policy and social science research, which critically rely upon census data\cite{doi:10.1126/sciadv.abk3283}. 
Furthermore, there are concerns about impacts on resource allocation, since many programs rely on population counts. Harvard Data Science Review (HDSR) recently released a special issue to document, contextualize, and assess the US Census Bureau’s adoption of DP and presented discussions with key stakeholders about the decision\cite{Gong2022Harnessing}. Within this issue is a guide for researchers on responsible use of 2020 public-release decennial census products\cite{groshen2022disclosure}. Monitoring of social and spatial patterns of disease, a fundamental component of public health research and practice, also requires accurate population counts, which serve as the denominator for estimation of disease rates, and these population denominators are most often obtained from census products. In particular, disease monitoring typically relies on demographically-stratified small-area disease rates (and associated population denominators), which are analyzed to identify and intervene on populations at highest risk. Moreover, populations within small areas tend to be more homogeneous than in larger areas (reflecting impacts of present and past residential racialized and economic segregation and housing costs)\cite{rothstein2017color,trounstine2018segregation,widestrom2015displacing,ellen2019dream,orlando2021keeping}, providing a differential between the socioeconomic and environmental characteristics of areas studied that may aid in detecting relationships between these variables and health data\cite{piel2020small}. Thus, the potential exaggerated effects of DP on census tabulations for small populations are of particular concern to the public health community. 
Although the US Census Bureau will release only the DAS-protected 2020 Census tabulations, it has published several ``demonstration products'' in which sequentially refined variants of the proposed DAS procedures were applied to publicly released 2010 decennial census data in order to collect public comment\cite{2019dp,0527dp}. Through comparisons with the original 2010 public release census data, these products enable researchers and practitioners to assess whether DP adjustments unintentionally induce systematic (instead of random) discrepancies in reported Census statistics or in analyses that utilize them. Scholars across a range of fields have thus evaluated the DAS-affected products and given feedback to the Census Bureau, which has led to modifications and shaped the trade-offs between ``accuracy'' and ``privacy'' in the final 2020 DAS\cite{doi:10.1126/sciadv.abk3283}. In total, three different demonstration products have been released (with increasingly refined DAS procedures applied in each new product) that provide the race and age stratified population counts needed for small-area studies of health inequities. While the demonstration products have been used extensively to assess the potential impacts of the 2020 DAS on redistricting for political representation\cite{kenny2021use,cohen2021,cohen2022private}, few studies have formally assessed its impacts in public health applications. Even fewer have focused on the consequences of using DP population counts in modeling of small-area disease rates for identifying health trends and monitoring disparities. One recent study compared county-level estimates of 2010 racialized group mortality rates produced using the original 2010 census population counts and the first of the Census Bureau's demonstration products\cite{doi:10.1073/pnas.200371411}. 
This work estimated county-level rate differences for rates constructed with the original and demonstration product denominators and summarized the rate differences across county urbanicity strata. Using a similar approach but reporting county-level absolute percent errors in mortality rate estimates (comparing demonstration product vs. original denominators), another study investigated the extent to which DP could distort county-level COVID-19 mortality rates by age-sex/racialized groups\cite{doi:10.1177/2378023121994014}. They employed the second of the demonstration products and concluded that DAS-induced errors in COVID-19 mortality rates were larger for non-white racialized groups (though their use of absolute errors precludes assessment of directionality of errors). Another study compared estimates of premature mortality rates aggregated by racialized group and by census tract quintile of inequality across Massachusetts using denominators from the 2010 census vs. the first DAS demonstration product. Although census tract inequality measures were used to create strata (each stratum included $>200$ census tracts), all mortality rates were computed and compared in aggregate for the strata, i.e., the study did not evaluate DAS impacts on small area rate estimates. They concluded that, for these heavily aggregated metrics, the 2020 DAS procedures may have little impact on estimates of health inequities\cite{Krieger2021}. Given the limited nature of this literature, numerous gaps remain. For instance, none of these studies have (1) examined and compared different DP versions' impacts for estimation of disease rates for small areas (smaller than counties) commonly used for health disparities analysis in public health practice; (2) utilized the March 2022 released demonstration product that better reflects the final 2020 DAS procedures; nor (3) conducted simulation studies to more formally quantify biases introduced by DP. 
Therefore, in this paper we leverage all three of the demonstration products available at this time to evaluate the performance of small-area disease mapping models employing DAS-affected denominators vs. original 2010 decennial census denominators, with an emphasis on accurate characterization of health inequities. Potential biases are illustrated using a pseudo-simulation study and a real data analysis of racialized disparities in premature mortality at the census tract (CT) level in Massachusetts (MA). Our results may help public health researchers and practitioners to determine whether the 2020 DAS-affected publicly released products will yield reliable and actionable results when used for essential public health tasks. \section{Methods} \label{methods} \subsection{Population count data} To help data users study impacts of the proposed 2020 DAS, the Census Bureau has released a set of demonstration data products in which potential variants of the 2020 DAS have been applied to 2010 decennial census (DC) data. Data available include 2010 population counts both without and with DP applied (along with subsequent post-processing), enabling comparisons of the population counts themselves and the results of analyses that rely on them. Here, we use three of these products to evaluate how using new DAS-protected population counts to model disease rates might bias our results if the true population counts are those from the 2010 DC (see Section 4 for discussion of how DC population counts themselves are known to exhibit systematic biases). 
In both the simulations and real data analysis, we use the following four CT level population count data sources for MA: \begin{enumerate} \item \textit{Original 2010 DC Data}\cite{dc}; \item \textit{Demonstration product released in 2019 (DP19): 2010 DC data with the first version of Census Bureau's DAS procedure applied}\cite{2019dp}; \item \textit{Demonstration product released in 2020 (DP20): 2010 DC data with the second version of Census Bureau's DAS procedure applied}\cite{0527dp}; \item \textit{Demonstration product released in 2022 (DP22): 2010 DC data with the third version of Census Bureau's DAS procedure applied}\cite{2022dp}. \end{enumerate} For the original DC data, we use the \texttt{tidycensus} R package\cite{walker2020tidycensus} to extract CT population counts stratified by age and census-defined ``racial'' and ``ethnic'' categories (which we refer to as racialized groups\cite{krieger2000counting}, since these categories are socially constructed) for the state of MA. In our study, we consider these to be ``ground truth'' population counts. For the three demonstration data products, we obtain population counts stratified by age and racialized group for MA CTs from the website of IPUMS (Integrated Public Use Microdata Series) National Historical Geographic Information System (NHGIS)\cite{IPUMS}. \subsubsection{Differences in demonstration products} In the differential privacy algorithm, the accuracy/privacy-loss tradeoff is controlled by the privacy-loss budget (PLB) parameter, $\epsilon$, representing the spectrum from perfect privacy/low accuracy ($\epsilon = 0$) to perfect accuracy/low privacy ($\epsilon = \infty$). The DP19 and DP20 demonstration products considered here use the same value of $\epsilon$ ($\epsilon=6.0$ overall, divided between the population tables, $\epsilon=4.0$, and the housing and household tables, $\epsilon=2.0$), and thus identical implementations of differential privacy. 
The difference between DP19 and DP20 lies only in the post-processing procedures, which are operations involving how the DAS TopDown Algorithm (TDA) converts the formally private noisy tabulations taken from the confidential data into the non-negative integer counts that will be published. The TDA used in DP19 conducted the post-processing of all of the statistics for a particular geographic level at the same time, resulting in distortions when large quantities of statistics with zeros or very small values were processed at the same time. To address and mitigate this issue, the TDA used in DP20\cite{PPMF0527factsheet} conducted the post-processing in a series of passes through all the geographic levels (national level, state level, etc.). Specifically, the first pass processed total population counts, and the second pass processed statistics necessary to inform redistricting. The third pass processed core statistics stratified by age/sex/racialized group, and the final pass processed all remaining counts. In this version of the TDA, output from each pass was constrained to agree with the counts from prior passes. DP22, on the other hand, incorporates modifications to the DP algorithm parameters in response to stakeholder feedback that greater accuracy was needed. Specifically, the Census Bureau tuned the PLB applied to different sets of tabulations\cite{progress22}. The PLBs assigned to person-level and housing unit-level counts in DP22 are $\epsilon =20.82$ and $\epsilon =22.77$, respectively\cite{factsheet2022}, both of which are substantially higher than the analogous PLBs for DP19 and DP20, yielding counts with lower privacy/higher accuracy. DP22 applies the same multi-pass post-processing procedures as DP20, but incorporates additional geographic entities into this post-processing\cite{factsheet2022}. 
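To build intuition for how the PLB governs the accuracy/privacy tradeoff, the toy Python sketch below applies a continuous Laplace mechanism, whose noise scale is sensitivity$/\epsilon$, to a single small count. The count and the smallest $\epsilon$ value are hypothetical (the larger values echo published PLBs), and the actual TDA uses discrete noise distributions plus the post-processing described above, so this is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_count(true_count, epsilon, sensitivity=1.0):
    """Laplace mechanism: add noise with scale sensitivity / epsilon."""
    return true_count + rng.laplace(scale=sensitivity / epsilon)

true_count = 12  # a hypothetical small tract-level count
mean_abs_errors = {}
for eps in [0.5, 4.0, 20.82]:
    draws = np.array([noisy_count(true_count, eps) for _ in range(10_000)])
    mean_abs_errors[eps] = float(np.mean(np.abs(draws - true_count)))
    print(f"epsilon={eps:>5}: mean absolute error ~ {mean_abs_errors[eps]:.3f}")
```

As $\epsilon$ grows, the expected absolute error (which equals the Laplace scale, $1/\epsilon$ here) shrinks toward zero, mirroring the movement from DP19/DP20 toward the much larger DP22 budgets.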
Importantly, DP22 may not represent the final version of the DAS procedures that will be applied to the 2020 census data, as decisions about the final version have not been announced at the time of writing. However, DP22 is the most recently released demonstration product and reflects the latest refinements of the algorithm. \subsection{Real data analysis of inequities in premature mortality} The outcome of interest in our study is premature mortality (death before 65 years old), an important and widely-used metric in health inequities studies. We focus on inequities in risk by racialized group (comparing Black and non-Hispanic White (NHW) populations) and by socioeconomic status. We obtained records of all premature deaths in 2010 from the MA Department of Public Health\cite{MADPH}. These data have been described previously; see Krieger et al.\cite{Krieger2021} for more detail. Briefly, each record contains the age, racialized group, and residential address for the deceased individual. We geocoded the addresses to CTs and created aggregate premature mortality counts stratified by age group (for ages <65), race/ethnicity, and CT. Using each of the four population count data sources described above, we compute age-standardized 2010 premature standardized mortality ratios (SMRs) for both the Black and NHW populations in each MA CT using the indirect standardization method (described below)\cite{boscoe2013geographic}. The SMRs are calculated using the CT/racialized group observed count in the numerator and an expected premature mortality count for the CT, based on its population size and age distribution, in the denominator. \subsubsection{Age-standardization} To perform indirect age-standardization and create expected counts to be used as denominators based on the DC, DP19, DP20, and DP22 population counts, we use the \textit{ageadjust.indirect()} function from the \texttt{epitools} R package\cite{epitools}. 
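The core computation behind indirect age-standardization (the expected count used as the SMR denominator) can be sketched in a few lines of Python; the age groups, statewide rates, tract population, and observed death count below are made-up numbers for illustration only:

```python
import numpy as np

# Hypothetical statewide age group-specific premature mortality rates
# (deaths per person) and one tract's age-stratified population counts.
state_rates = np.array([0.0002, 0.0005, 0.0015, 0.0040])  # e.g., ages 0-24, 25-44, 45-54, 55-64
tract_pop   = np.array([1200,   900,    400,    300])

# Indirect standardization: expected count = sum over age groups of
# (statewide rate in group a) x (tract population in group a).
expected = float(state_rates @ tract_pop)

observed = 4  # hypothetical observed premature deaths in this tract
smr = observed / expected
print(f"expected count = {expected:.2f}, SMR = {smr:.2f}")
```

Dividing a tract's observed deaths by this expected count yields the SMR; in the analysis, the expected counts are computed separately under each denominator source (DC, DP19, DP20, DP22).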
Age-standardization adjusts for differences in age distribution to mitigate possible confounding effects on inequity analyses arising from differing age distributions across groups. We conduct age-standardization, based on empirical MA statewide age group-specific premature mortality rates, separately for each CT and racialized group, to obtain expected premature mortality counts. We compute descriptive statistics for the expected counts constructed from each dataset (DC, DP19, DP20 and DP22), and we specifically compare the DC expected counts, used here as the ``ground truth'', with the DP19, DP20 and DP22 expected counts, which have the DAS applied. \subsubsection{Models} Our four sets of CT-level SMRs correspond to the four different sets of denominator data (DC, DP19, DP20, and DP22). To study the impact of the different denominators in assessing disparities in premature mortality, we fit standard disease mapping models to the CT-level SMRs stratified by racialized group from each denominator source (separately) to examine associations with racialized group and an area-level measure of economic deprivation -- CT proportion below the poverty line (PropPov) -- extracted from the 2008-2012 5-year American Community Survey (ACS) data, which is estimated for the CT as a whole and is not race-specific. The four sets of SMRs are modeled separately, using a multi-level variant of the spatial Poisson regression model\cite{BesagJ1991}: Let $i = 1,\ldots,N$ index CTs in MA and $j \in \{0, 1\}$ index racialized group (0 for NHW and 1 for Black), so that $Y_{ij}$ is the premature mortality count in racialized group $j$ within CT $i$. $I(Black)_{ij}$ is a binary indicator of Black racialized group, and $PropPov_i$ is the proportion in poverty in CT $i$ (centered and scaled). Let $P_{ij}$ be the CT- and racialized group-specific expected number of premature mortalities computed using any of the four datasets described above. 
We assume $Y_{ij} \sim Poisson(\lambda_{ij})$ and fit the following model using each of the four variants of $P_{ij}$ formed from DC, DP19, DP20, and DP22: \begin{equation} \log(\lambda_{ij}) = \beta_0 + \beta_1I(Black)_{ij} + \beta_2PropPov_{i} + \theta_i + \phi_{ij} + \log(P_{ij}), \end{equation} where $\theta_i$ is a CT-specific random effect with a conditionally autoregressive spatial covariance structure\cite{10.1007/978-1-4612-1284-3_4} and $\phi_{ij}$ is an unstructured CT- and racialized group-specific error term that models overdispersion in the disease count. To clarify, $\theta_i$ is a spatial random intercept unique to each CT but shared by racialized groups within a CT, while the $\phi_{ij}$ are both CT- and racialized group-specific. Models are fit using a Bayesian approach implemented in the \texttt{CARBayes} package in R\cite{JSSv055i13}. For each of the four denominator data sources, mortality rate ratio (MRR) estimates based on the exponentiated posterior means of the coefficients and 95\% credible intervals are reported and compared in Section \ref{real pmr}. \subsection{Simulation Study} In addition to the real data analysis, we conduct a simulation study to formally assess the magnitude of biases induced in estimates of health inequities due to using the DAS-protected denominators in standard models. We structure our simulated outcomes to mimic real patterns in premature mortality in MA. Synthetic premature mortality counts are generated for each CT, stratified by racialized group (NHW and Black), using the 2010 DC expected premature mortality counts as the denominator and the real covariate data, and following the model form described in the real data analyses above. We then fit models to the simulated data using the DP19, DP20 and DP22 expected counts, but otherwise correctly specified, and evaluate the resulting bias in key parameters. Further details are given below. 
\subsubsection{Data Generating Process} Using the 2010 DC expected counts for each CT and racialized group, we simulate the outcomes. Formally, premature mortality counts, $Y$, are generated following the model form in equation (1), plugging in the real DC-based expected counts for $P_{ij}$ and using the real CT-level PropPov variable. The coefficient parameter values used in all simulations are $\beta_0=0$, $\beta_1=0.4$, and $\beta_2=0.01$. The conditionally autoregressive spatial effect $\theta_i$ is generated as $$\theta_i |\theta_{-i} \sim N\left(\frac{0.2\sum_{k}w_{ik}\theta_k}{w_{i+}}, \frac{1}{w_{i+}}\right),$$ where $w_{ik}$ is the $(i, k)^{th}$ element of an adjacency matrix $W$, and $w_{i+}$ is the sum of the elements in the $i^{th}$ row of $W$. $\phi_{ij} \sim N(0, 0.25)$ is an unstructured random effect. The hyperparameter values in the distributions of $\theta_i$ and $\phi_{ij}$ were selected to generate data with moderate spatial correlation and an outcome distribution mimicking the empirical distribution of CT premature mortality rates in MA. We simulate 100 datasets from this model, and we fit four models to each simulated dataset -- one plugging in each set of expected counts (DC, DP19, DP20 and DP22) as the denominator. Aside from possible error in the denominators, the fitted models are otherwise correctly specified, to allow us to isolate potential biases due to DAS-induced error in the denominators. \subsubsection{Model Assessment} To investigate the performance of the models fit with different denominator data sources, we evaluate the distribution of the estimated model coefficients and the model-based SMR estimates, relative to the known true values of these quantities. For each coefficient, we compute the simulated bias of the estimator based on each of the four models using different denominator sources. 
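A minimal Python sketch of the data generating process above is given below; it substitutes a small synthetic grid of ``tracts'' with made-up covariates and expected counts for the real MA geography (the paper's analyses use R), and draws the CAR effect jointly from the multivariate normal implied by the conditional specification with spatial dependence parameter $0.2$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 4x4 grid of "tracts" with rook adjacency (assumed, for illustration).
n_side = 4
N = n_side * n_side
W = np.zeros((N, N))
for i in range(n_side):
    for j in range(n_side):
        k = i * n_side + j
        if i + 1 < n_side: W[k, k + n_side] = W[k + n_side, k] = 1
        if j + 1 < n_side: W[k, k + 1] = W[k + 1, k] = 1

# CAR effect: the conditional spec with dependence 0.2 implies joint
# precision Q = D - 0.2 * W, where D = diag of row sums of W.
D = np.diag(W.sum(axis=1))
Q = D - 0.2 * W
theta = rng.multivariate_normal(np.zeros(N), np.linalg.inv(Q))

# Coefficients from the simulation design; covariates/expected counts are toy stand-ins.
beta0, beta1, beta2 = 0.0, 0.4, 0.01
prop_pov = rng.normal(size=N)             # centered/scaled poverty covariate
P = rng.uniform(1.0, 10.0, size=(N, 2))   # stand-in for DC-based expected counts
Y = np.empty((N, 2), dtype=int)
for j in (0, 1):                          # j = 1 indicates the Black racialized group
    phi = rng.normal(scale=0.5, size=N)   # unstructured effect, variance 0.25
    lam = P[:, j] * np.exp(beta0 + beta1 * j + beta2 * prop_pov + theta + phi)
    Y[:, j] = rng.poisson(lam)
print(Y.shape)
```

Repeating this draw 100 times, and refitting with each denominator source plugged in for $P$, reproduces the structure of the simulation study (here with toy inputs).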
We summarize and visualize these coefficient-specific estimates across all 100 simulated datasets to demonstrate how the use of DAS-protected denominators in model fitting (when DC denominators are the ``true'' denominators in the data generating process) impacts assessment of high-level patterns and comparisons of risks across groups. For a given simulated dataset (indexed by $k = 1, \ldots, 100$) and denominator data source $(P_{ij})$, we estimate the model-based SMRs as: \begin{equation} \widehat{SMR}_{ijk} = \frac{\widehat{Y}_{ijk}}{P_{ij}}, \label{smr_dp} \end{equation} where $\widehat{Y}_{ijk}$ is the predicted value of $Y_{ij}$ from the model fit to simulated dataset $k$ using the given denominator data source. We then compute the bias and mean absolute percentage error (MAPE) for each CT and racialized group's SMR estimate for each denominator data source, i.e., \begin{equation} MAPE_{ij} = \frac{1}{100}\sum_{k=1}^{100}\left|\frac{\widehat{SMR}_{ijk}-(\lambda_{ijk}/P_{ij})}{(\lambda_{ijk}/P_{ij})}\right|, \label{mape} \end{equation} \begin{equation} Bias_{ij} = \frac{1}{100}\sum_{k=1}^{100}\left(\widehat{SMR}_{ijk}-(\lambda_{ijk}/P_{ij})\right). \label{bias} \end{equation} We plot and map these metrics for the DC, DP19, DP20, and DP22 denominators (for the racialized groups separately). This enables us to investigate whether small-area spatial patterns in model-smoothed disease/mortality risk estimates are preserved when using the DAS-protected denominators in standard models. In this way, we can assess the extent and direction of biases related to the DP algorithm. \section{Results} \label{results} \subsection{Comparison of denominator data sources} Figure \ref{scatterplot} in the Appendix shows a scatterplot of the racialized group-stratified CT expected premature mortality counts from the DC vs. DP19, DP20 and DP22. 
From this figure, it is clear that Black expected counts are generally much smaller than the NHW counts (Black individuals represented about 7.5\% of the MA population in 2010) and that for both racialized groups, the DP19 data are slightly more noisy than the DP20 data, which are more noisy than the DP22 data. The mean (and standard deviation) of the differences in the CT-level DP and DC expected counts for the NHW population are $0.0029$ (0.562) for DP19, $0.0012$ (0.484) for DP20 and $0.0007$ (0.033) for DP22 and for the Black population are $0.0007$ (0.139) for DP19, $-0.0003$ (0.105) for DP20 and $0.0001$ (0.022) for DP22. This again demonstrates that the DP22 data have less bias and less noise, on average, than the DP20 data, which in turn have less bias and less noise than the DP19 data. The smaller magnitude of bias and noise associated with Black expected counts relative to NHW in all three DP datasets is a result of the smaller scale of the Black counts. In Figure \ref{boxplot_percent_error}, we plot the percent error in the DP19, DP20 and DP22 CT expected premature mortality counts, relative to the DC expected counts (the ``ground truth''), stratified by racialized group. First, we note that the distribution of percent errors in the DAS-protected expected counts for the NHW population is narrow and centered around zero, a result of the generally large NHW populations in most MA CTs. Second, the distribution of percent errors in DAS-protected expected counts for Black populations is much wider than for NHW, a result of the generally small Black populations in most MA CTs. Moreover, for DP19 and DP20, the distribution of percent errors for Black populations is centered well below zero, indicating that Black expected counts tend to be underestimated in the earlier DAS variants. 
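A toy calculation illustrates why noise of similar absolute scale produces a much wider percent-error distribution for the smaller group's counts; the population sizes and noise scale below are hypothetical, and the actual DAS noise is discrete and post-processed:

```python
import numpy as np

rng = np.random.default_rng(3)

# Equal-scale additive noise applied to a large and a small count
# (e.g., NHW vs. Black tract-level populations; numbers are made up).
large_count, small_count = 4000, 80
noise = rng.laplace(scale=2.0, size=100_000)

pct_err_large = 100 * noise / large_count
pct_err_small = 100 * noise / small_count
print(f"SD of percent error: large = {pct_err_large.std():.3f}, "
      f"small = {pct_err_small.std():.3f}")
```

The percent-error spread scales inversely with the count (here, a factor of $4000/80 = 50$), matching the much wider error distributions observed for the smaller Black populations.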
To more thoroughly characterize this under-estimation as well as the comparison between NHW and Black, we include in Table \ref{expected_counts_percent-table} the percent of expected counts that are under-estimated for each demonstration product (relative to the DC). In DP22, with the increased PLB, the distribution of errors for Black populations remains wider than for NHW but is centered around zero. \begin{figure}[htbp] \centering \includegraphics[width=0.8\textwidth]{./pic/boxplot_percent_error.pdf} \caption{Boxplots of the percent error in DP19, DP20 and DP22 CT expected premature mortality counts, relative to DC expected counts (the ``ground truth''), for Black and non-Hispanic white populations.} \label{boxplot_percent_error} \end{figure} \subsection{Real Premature Mortality Modeling Results} \label{real pmr} MRR estimates and 95\% Bayesian credible intervals from our racialized group-stratified models are presented in Table \ref{IRR-table}. The racialized group variable is a group-level binary indicator of Black (versus NHW) and the MRR estimate for the racialized group variable is $>1$ for the models fit with each of the four denominator data sources, indicating that on average CT premature mortality rates for Black populations are higher than rates for NHW populations. Percent of CT residents in poverty is also associated with higher premature mortality rates in the racialized group-stratified models. Inferences about patterns of health disparities from the models using the four different denominator sources are identical, with only very minor differences in the point estimates (and even these may be attributable to randomness in the Bayesian posterior sampling). 
\begin{table}[ht] \caption{Mortality rate ratio estimates (95\% credible intervals) from real premature mortality data analyses with each of the four denominator data sources.} \label{IRR-table} \centering \begin{tabular}{rllll} \toprule \multicolumn{5}{c}{Data Sources} \\ \cmidrule(r){2-5} & DC & DP19 & DP20 & DP22 \\ \midrule Intercept & 1.06 (1.04,1.09) & 1.06 (1.04,1.09) & 1.06 (1.04,1.09) & 1.06 (1.04,1.09) \\ Racialized group & 1.16 (1.06,1.29) & 1.17 (1.06,1.29) & 1.16 (1.05,1.28) & 1.16 (1.07,1.30) \\ Poverty & 1.33 (1.28,1.38) & 1.33 (1.28,1.38) & 1.32 (1.28,1.37) & 1.34 (1.29,1.38) \\ \bottomrule \end{tabular} \end{table} \subsection{Simulation Results} The biases in the coefficient estimates from 100 simulated datasets are summarized in boxplots (Figure \ref{coef_boxplot}), with true parameter values $\beta_0=0, \beta_1=0.4, \beta_2=0.01$. First, we note that all three coefficient parameters are estimated with little to no bias, on average, when using the DC denominators (the ``true'' denominators used to generate the data) for model fitting. On the other hand, we observe that using the DP19 and DP20 denominators in the model fitting leads to overestimation of the racialized inequity parameter ($\beta_1$), with average bias of 0.039 and 0.026, respectively, corresponding to percent biases of 10\% and 6\%, respectively. This is likely a result of the systematic under-estimation of the DAS-protected denominators for Black populations (Figure \ref{boxplot_percent_error}), leading Black premature mortality rates to be over-estimated, a phenomenon which is not mirrored in the NHW population. This issue is largely attenuated in DP22, with an average bias of 0.009 (or 2\%) in the racialized inequity parameter estimate. The intercept and the poverty coefficient are generally estimated with little bias for all denominator data sources, although the DP20 data yield slightly more bias in these parameter estimates than the other denominators. 
\begin{figure}[htbp] \centering \includegraphics[width=0.9\textwidth]{./pic/coef_boxplot4.pdf} \caption{Boxplot of estimated coefficients' biases across simulations using the four different denominator data sources.} \label{coef_boxplot} \end{figure} Figure \ref{smr_mape_bias} shows boxplots of the bias and MAPE in the model-estimated SMRs using each denominator data source. (Note that, in these plots, the data represented are the CT and racialized group-specific bias/MAPEs, averaged across simulations as in equations (3) and (4), as opposed to Figure~\ref{coef_boxplot} where the data are the difference between an estimate and the truth for each individual simulation.) When using the true DC denominators in the models, estimated SMRs are unbiased on average for both racialized groups, but the MAPEs are much larger for the Black SMRs compared to the NHW SMRs. This indicates that even correctly specified models, in the absence of error in the denominators, struggle more to estimate Black vs. NHW SMRs for any given CT simply due to the smaller population sizes and more unstable rates for the Black population in most MA CTs. The use of DP19 and DP20 denominators exacerbates this disparity in SMR estimation accuracy. With these denominators, SMR estimates for the NHW population remain unbiased on average, but for the Black population even the average of the SMR estimates demonstrates an upward bias of 0.128 and 0.090 for DP19 and DP20, respectively. This is further illustrated in Table \ref{percent_upward_bias}, which provides the percent of SMRs biased upwards for both racialized groups from each of the four denominator data sources. This is, again, the result of the systematic under-estimation of the DAS-protected denominators for Black populations (Figure \ref{boxplot_percent_error}). 
There is also a larger DC vs DP19/DP20 differential in the distribution of MAPEs for the Black population (relative to NHW), indicating that the use of these DAS variants worsens the (already poorer) model performance for Black populations more than for NHW populations. We also note that biases/MAPEs in the SMRs are generally smaller when using the DP20 denominators compared to the DP19 denominators. The distributions of SMR biases and MAPEs using DP22 denominators are virtually indistinguishable from those observed when using the DC denominators (average bias of 0.021 for DP22 SMRs), indicating that using the newest DAS variant for SMR estimation gives results with comparable accuracy to the gold standard. \begin{figure}[htbp] \centering \includegraphics[width=1.0\textwidth]{./pic/smr_bias_mape.pdf} \caption{Boxplots of bias and mean absolute percent error (MAPE) in the standardized mortality ratio (SMR) estimates for the Black and non-Hispanic white populations in each Massachusetts census tract from models using each of the four denominator data sources.} \label{smr_mape_bias} \end{figure} The biases and MAPEs for the model-estimated SMRs for each CT and racialized group using the DP20 denominators are mapped in Figures \ref{bias_map} and \ref{mape_map} (the patterns for DP19 are similar and the maps for DP22 are shown in Figures~\ref{bias_map22} and~\ref{mape_map22}). As noted above, the small Black populations in many MA CTs result in generally larger magnitudes of biases and MAPEs for the Black SMRs (indicated by the bolder colors in the maps). Previous studies have suggested that more severe distortions may occur when using DP denominators to characterize health-related patterns in smaller population groups\cite{doi:10.1073/pnas.200371411,doi:10.1177/2378023121994014}. 
Our findings, including the presence of more blue/green hues in the maps of bias in the Black SMRs (Figure \ref{bias_map}), provide further evidence that the use of DP denominators constructed with smaller PLBs tends to favor over-estimation of Black premature mortality rates but not of NHW rates (as described in Figure \ref{smr_mape_bias}). However, as we have consistently reported throughout this section, these distortions are largely eliminated by increasing the PLB to the values applied in DP22. \begin{figure}[htbp] \centering \includegraphics[width=1.0\textwidth]{./pic/map_bias_20_BrBG.pdf} \caption{Biases for DP20 SMRs across MA (top row) and Boston (bottom row) CTs.} \label{bias_map} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=1.0\textwidth]{./pic/maps_mape0527.png} \caption{MAPE for DP20 SMRs across MA (top row) and Boston (bottom row) CTs.} \label{mape_map} \end{figure} \section{Discussion} In this paper, we explored the potential impact of the US Census Bureau's proposed 2020 decennial census DAS procedures, including the use of DP, taking three new steps beyond those in the handful of investigations focusing on this topic in relation to health inequities, by: (1) modeling small-area disease/mortality rates for the purposes of identifying health inequities; (2) including the newly released demonstration product that incorporates the recent Census Bureau refinements to the DAS; and (3) conducting simulation analyses to formally quantify the biases introduced by the DAS in mortality rate estimation for Black and NHW populations separately. Using three DAS-protected 2010 census demonstration products released by the Census Bureau, we conducted a simulation study and an analysis of real small-area premature mortality data from MA to investigate how the DAS procedures impacted racialized group and economic inequity estimates. 
Our results provide evidence that recent changes to the DAS procedures made by the Census Bureau in response to stakeholder feedback (featured in the DP22 data), in particular the increase of the DP PLB, brought about substantial improvements in the accuracy of the DAS-protected denominators that may translate to dramatic decreases in bias in disease mapping and inequity studies employing these denominators, relative to older variants of the DAS procedures. We observed that biases in racialized and economic inequity parameter estimates and model-based SMR estimates from models using the DP22 denominators were virtually indistinguishable from those obtained when using the original DC data, indicating little impact of the DAS on analyses or inference. When using the older demonstration products (DP19 and DP20), which applied DP with a lower PLB, we found that, while high-level patterns in inequities by racialized group and socioeconomic status are preserved, the errors induced in mortality rate estimation were considerably larger for Black than for NHW populations. In particular, in the example examined here, these older DAS variants led to systematic under-estimation of denominators and, therefore, over-estimation of premature mortality rates for Black populations, which was not observed for NHW populations. These relatively larger DAS-induced errors in estimated small-area SMRs for Black populations compound the already worse model performance for Black populations due to small counts and rate instability. Our work demonstrates (1) the profound implications of the US Census Bureau's choice of PLB for the accuracy of future small-area disease mapping and health inequity studies and (2) the extent to which a small PLB, combined with modeling approaches that neglect the DAS-induced error in denominators for small groups, can distort characterizations of inequities.
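The mechanics behind this PLB effect can be made concrete with a deliberately simplified sketch: pure-DP Laplace noise of scale $1/\epsilon$ added to a tract denominator, where $\epsilon$ plays the role of the PLB. This is not the Census TopDown mechanism, and the counts and $\epsilon$ values below are hypothetical, but it illustrates why relative error, and hence SMR distortion, concentrates in small denominators and shrinks as the PLB grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mape(true_count, epsilon, n_draws=20_000):
    """Mean absolute percent error of a count protected with Laplace
    noise of scale 1/epsilon (a pure-DP toy, not the TopDown mechanism)."""
    noisy = true_count + rng.laplace(scale=1.0 / epsilon, size=n_draws)
    noisy = np.maximum(noisy, 1.0)   # crude non-negativity post-processing
    return float(np.mean(np.abs(noisy - true_count) / true_count) * 100.0)

small_pop, large_pop = 50, 4000      # hypothetical tract denominators
for eps in (0.1, 1.0, 10.0):         # larger epsilon = larger PLB, less noise
    print(eps, laplace_mape(small_pop, eps), laplace_mape(large_pop, eps))
```

For a fixed $\epsilon$ the absolute noise is the same for both counts, so the percent error of the small denominator is roughly eighty times that of the large one, mirroring the relatively larger SMR errors we observe for small Black CT populations.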
Our findings regarding the older DAS variants generally agree with previous studies, which have reported that DAS-protected denominators are more problematic for estimation of rates in smaller racialized groups\cite{doi:10.1073/pnas.200371411,doi:10.1177/2378023121994014}. Such mischaracterizations could have real implications for public health practitioners, who rely on these metrics to identify high-risk groups for intervention. For instance, these distortions may lead policy makers to miss opportunities to improve public health in some small populations and/or small areas, which due to myriad social and economic factors are often the groups with the highest health risks. However, encouraging evidence from our investigation of the recent demonstration product suggests that such issues can be largely ameliorated, in this context, by the choice of a larger PLB in the DP implementation, striking a compromise between preservation of privacy and the ability to accurately characterize and advance the health of even small populations and areas. To our knowledge, ours is the first study to compare the impacts of different DAS-protected 2010 census demonstration products on small-area disease modeling and inequity studies, and the first health-focused study to investigate the demonstration product newly released in 2022. The primary weakness of our study is that we only investigate DAS impacts in a single state, MA, and on health inequity estimates for the two largest racialized groups in MA, Black and NHW. The DAS-attributable errors uncovered here may be further exacerbated for other smaller groups, such as Native Americans and Asian Americans/Pacific Islanders. Moreover, our simulation study is conducted utilizing the DC denominators as the true denominators in the data generating process.
However, even the original DC population counts are known to exhibit systematic biases, with a tendency to under-represent non-white individuals, which is particularly troublesome for health inequity studies\cite{o2019differential}. In spite of these limitations, DC denominators are commonly used as a ``gold standard'' for comparison purposes when evaluating alternative denominator data sources\cite{Krieger2021,NETHERY2021100786}. \section{Future Work} As the implementation of DP to preserve privacy in publicly released health and social science data accelerates, the development of statistical methods that adapt standard disease mapping models to DP-injected noise in variables is critical to reduce DP-related systematic errors that can bias health inequity estimates. Future work should also investigate how the Census Bureau's 2020 DAS impacts health inequity studies of smaller populations, such as Native Americans and Asian Americans/Pacific Islanders. \section{Acknowledgements} The authors gratefully acknowledge funding from NIH grants R01HD092580, 1K01ES032458 and P30ES000002. \section{Data and code availability} The real premature mortality data used herein can be obtained upon request to the MA Department of Public Health. Demonstration data products are publicly available on the IPUMS website (\url{https://www.nhgis.org/privacy-protected-2010-census-demonstration-data}). The 2010 decennial census data were imported using the \texttt{tidycensus} R package. Code to reproduce the analyses and simulation studies is available on GitHub at \url{https://github.com/Lyric98/CT_race_diff_privacy}. \newpage \bibliographystyle{unsrt}
\section{Instantaneously Polarizable Systems} Our central working equation (1) in the main paper is equally valid for non-polarizable and instantaneously polarizable systems. As derived in Ref. \cite{deissenbeck}, the fluctuation-dissipation theorem (FDT) takes the following form for the electrode charge: \begin{eqnarray} f dt &=& \underbrace{-\frac{1}{\tau_{\Phi}} (\Phi - \Phi_0)\ dt}_{\text{dissipation}} + \underbrace{\sqrt{\frac{2}{\tau_{\Phi}} \frac{k_B T}{C_0}}\ dW_t}_{\text{fluctuation}}, \label{fdt2} \end{eqnarray} where $\tau_{\Phi}$, $\Phi$, $\Phi_0$ and $C_0$ are the relaxation time constant, the instantaneous potential, the target potential and the capacitance of the bare electrodes in the absence of a dielectric, respectively. We note that the differential $dW_t$ of a Wiener process plays the same conceptual role as $dt$: it represents an integration over time $t$, albeit with an infinitesimal stochastic time step $dW_t$ using It\^{o} integration \cite{gardiner}, cf. Ref. \cite{deissenbeck}. If the system under investigation is instantaneously polarizable, e.g. in the context of Born-Oppenheimer dynamics, the form of Eq. \ref{fdt2} seemingly suggests that the capacitance $C_0$ would need to be corrected accordingly. Indeed, it is possible, in principle, to describe instantaneously polarizable systems by adapting $C_0$ within the fluctuation term. However, this particular choice is inadvisable because it would require advance knowledge of the system's dielectric properties. Instead, the dielectric properties should be an outcome of the simulation, not a parameter entering it. We therefore propose to shift the issue of the unknown dielectric contributions to the capacitance into the time domain and to take into account \emph{any} polarizability, instantaneous or not, \emph{implicitly} within the deterministic term, as described in the following. For simplicity, we set $\Phi_0 = 0$ without loss of generality and demonstrate the derivation of Eq.
\ref{fdt2} explicitly for instantaneously polarizable systems. According to Ohm's law and Kirchhoff's voltage law, the instantaneous current for the setup shown in Fig. 1 in the main manuscript is described by: \begin{equation} \frac{dn}{dt} = -\frac{\Phi}{R}, \label{ohmslaw} \end{equation} where $n$ and $R$ are the electrode charge and an effective resistance, respectively. We now assume that the capacitance is increased to an unknown value $C = \epsilon_r C_0$, where the factor $\epsilon_r$ describes an instantaneous dielectric response. According to the definition of the capacitance, the instantaneous voltage $\Phi$ is: \begin{equation} \Phi = \frac{n}{\epsilon_r C_0}. \label{cap} \end{equation} Substituting Eq. \ref{cap} into Eq. \ref{ohmslaw} and adding a corresponding fluctuation term $\tilde{n}\ dW_t$, we obtain: \begin{equation} dn = - \frac{1}{R C_0} \frac{n}{\epsilon_r} dt + \tilde{n}\ dW_t. \label{dn} \end{equation} In the canonical ensemble at finite temperature $T$, the variance $\sigma_n^2$ of the electrode charge $n$ must satisfy the relation: \begin{equation} \sigma_n^2 = k_B T C = k_B T \epsilon_r C_0. \label{ktc} \end{equation} Therefore, the fluctuation term $\tilde{n}$ in Eq. \ref{dn} must be constructed accordingly. Here we remind the reader that Eq. \ref{dn} formally represents a stochastic differential equation (SDE) of the so-called Ornstein-Uhlenbeck type: \begin{equation} dx = -kx\ dt + \sqrt{D}\ dW_t. \label{OU} \end{equation} The variance of Eq. \ref{OU} has been derived analytically using It\^{o} calculus \cite{gardiner}: \begin{equation} \sigma^2_x = \frac{D}{2k}. \label{OUvar} \end{equation} Hence, using Eq. \ref{OUvar}, we directly obtain an expression for $\tilde{n}$: \begin{eqnarray} dn &=& \underbrace{- \frac{1}{\tau_{\Phi}} \frac{n}{\epsilon_r}\ dt}_{\text{dissipation}} + \underbrace{\sqrt{\frac{2}{\tau_{\Phi}} k_B T C_0}\ dW_t}_{\text{fluctuation}}, \label{dn2} \end{eqnarray} with $\tau_{\Phi} := R C_0$.
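The stationary variance required by Eq. \ref{ktc} can be checked numerically. The following sketch integrates the Ornstein-Uhlenbeck charge dynamics of Eq. \ref{dn2} with a simple Euler-Maruyama scheme in reduced units (the parameter values are illustrative, not those of our simulations) and verifies that the sample variance of $n$ approaches $k_B T \epsilon_r C_0$.

```python
import numpy as np

rng = np.random.default_rng(1)

# reduced units (illustrative values only): k_B T = C_0 = tau = 1, eps_r = 2
kT, C0, tau, eps_r = 1.0, 1.0, 1.0, 2.0
dt, n_steps = 5e-3, 200_000

k = 1.0 / (tau * eps_r)          # effective OU drift rate, cf. Eq. (OU)
D = 2.0 * kT * C0 / tau          # noise intensity from the fluctuation term
sqrtD = np.sqrt(D)
dW = rng.normal(0.0, np.sqrt(dt), n_steps)   # Wiener increments

n, samples = 0.0, []
for i in range(n_steps):
    n += -k * n * dt + sqrtD * dW[i]         # Euler-Maruyama step of Eq. (dn2)
    if i > n_steps // 10:                    # discard the equilibration transient
        samples.append(n)

var = np.var(samples)            # should approach k_B T * eps_r * C0 = 2
print(var)
```

Note that the noise amplitude contains no $\epsilon_r$; the enlarged variance emerges entirely from the weakened dissipation, exactly as argued in the text.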
Clearly, taking the variance of Eq. \ref{dn2} according to Eq. \ref{OUvar} satisfies Eq. \ref{ktc}. Yet, we note that the fluctuation term is free of $\epsilon_r$. By design, the deterministic dissipation term is the only place where $\epsilon_r$ needs to be introduced. We remind the reader that our central working equation (1) in the main manuscript is derived by solving the It\^{o} integral \begin{equation} dn = C_0\ f dt. \end{equation} Multiplying Eq. \ref{fdt2} by $C_0$ in order to obtain $dn$ and substituting for $\Phi$ using Eq. \ref{cap} yields: \begin{eqnarray} dn = C_0\ f dt &=& -\frac{1}{\tau_{\Phi}} C_0 \underbrace{\frac{n}{\epsilon_r C_0}}_{=: \Phi}\ dt + \sqrt{\frac{2}{\tau_{\Phi}} k_B T C_0}\ dW_t. \label{dn3} \end{eqnarray} With Eq. \ref{dn3}, it is now clear that Eqs. \ref{fdt2} and \ref{dn2} are formally identical. Our central working equation (1) therefore already takes the polarizability, instantaneous or not, implicitly into account. \section{Numerical and technical details} We carried out density functional theory (DFT) calculations within the Perdew-Burke-Ernzerhof generalized gradient approximation \cite{PBE}, using plane wave basis sets and projector augmented wave pseudopotentials \cite{PAW} with an energy cutoff of 400 eV. All calculations were performed using the Vienna Ab Initio Simulation Package (VASP) \cite{vasp1,vasp2}. We used the $\Gamma$ point for $\mathbf{k}$-space integration. To integrate the equations of motion in our AIMD simulations and ensure accurate energy conservation over a time scale of several hundred ps, we converged the electronic total energies to $10^{-8}$ eV at each ionic step, used a discrete time step $\Delta t = 0.5$ fs and the second-order leapfrog scheme as implemented in VASP.
The simulation cells contain two computational Ne electrodes \cite{surendralal} with a lateral size of \mbox{14.5 $\times$ 14.5 \AA$^2$}, separated by $d = $ 10.7 \AA, 17.4 \AA\ and 31.4 \AA, respectively, and include 32, 64 and 192 H$_2$O molecules between the electrodes, respectively. The values for the electrode separation $d$ were chosen so that the bulk water density of 1 g/cm$^3$ is reached in the central part of the unit cell after equilibration for 10 ps with the Langevin thermostat and a relaxation time of $\tau = 50$ fs. Thereafter, the thermostat was switched off without exception and we sampled the ensembles for an additional 125 ps. In all simulations, we integrated the equations of motion for the spatial degrees of freedom with the leapfrog scheme in the NVE ensemble. In addition to potentiostating the system, the temperature is actively controlled by our thermopotentiostat, because the atoms are exposed to the fluctuating electric field, so that the simulation samples the NVT$\Phi$ ensemble. Both computational Ne electrodes are charged by equal and opposite amounts. The amount of charge transferred between the two electrodes is controlled by our thermopotentiostat. For that purpose, we use distinct Ne pseudopotentials for the left-hand side and right-hand side Ne electrodes, respectively. The core charges of the pseudopotentials describing the Ne electrodes are adjusted over the course of the simulation at each individual ionic step, according to our central working equation (1). For the thermopotentiostat relaxation time we use a value of $\tau_{\Phi} = 100$ fs. \begin{figure}[t] \centering \includegraphics[width=1\textwidth]{integrationscheme_final_verlet3.jpg} \caption{\label{integrationscheme} Flowchart conceptually showing the integration of the thermopotentiostat into the second-order velocity Verlet scheme.
The potentiostat acts on the charge and positions of the first force calculation and updates the charge according to the thermopotentiostat or any other control logic. The new charge and the new positions are used to calculate the new forces and thereby the new velocities, and the next ionic step is performed.} \end{figure} \section{Velocity Verlet integration scheme} In the main text we provided a flowchart for integration via leapfrog. Velocity Verlet is another widely used integration scheme. Here, the thermopotentiostat must be included in a slightly different way, cf. Fig. \ref{integrationscheme}. After the initialization and a first calculation of the forces, the electrode charge is updated together with the positions. Subsequently, a new calculation of the forces is performed, which includes the updated charge and updated positions. Next, the velocities are integrated and, if necessary, a thermostat can be applied. The integration loop is then closed by a new integration of the positions and electrode charges. \section{Convergence of the bound charges and dielectric profiles} Computing dielectric constants from molecular dynamics simulations is commonly performed using Kirkwood-Fr\"ohlich theory or the theory of polarization fluctuations. Both approaches rely on the variance of the dipole moment fluctuations, typically requiring several nanoseconds of statistical sampling to obtain converged results. Our approach outlined in the main text uses only thermodynamic averages, which converge significantly faster. In order to determine the statistical sampling necessary to converge the dielectric properties, we computed the dielectric constants within the bulk and interfacial water regions as a function of sampling time.
Statistical error bars are obtained as running variances: \begin{equation} \sigma(z,t) := Var\left( \frac{1}{t} \int_0^t dt'\ \epsilon_{\perp}(z,t') \right), \end{equation} where $\epsilon_{\perp}(z,t')$ denotes the dielectric constant at position $z$ at timestep $t'$. The position-dependent error bars of $\epsilon(z)$ for the total sampling time are shown in Fig. 6c in the main text. In Fig. \ref{variance} we show the evolution of the error bars as a function of time for the interfacial and bulk water regions. For interfacial water, the standard error of $\epsilon^{-1}$ falls below 0.1 after a sampling time of 50 ps. Consistent with Fig. 6c in the main text, the dielectric properties of interfacial water converge significantly faster than those of bulk water, since the water reorientation dynamics is less pronounced close to the interface. \begin{figure}[t] \centering \includegraphics[width=0.8\textwidth]{variance.jpg} \caption{\label{variance} Evolution of the variances of the water dielectric constant with increasing statistics. This plot serves as a guideline to estimate the number of steps required to obtain sufficiently accurate dielectric profiles.} \end{figure} \section{Interfacial water structure} In order to probe the orientation of interfacial water in response to the applied electric bias, we computed the probability distributions of the angles enclosed between the surface normal and the water bisector ($\alpha$, Fig. \ref{orient}a) or the water OH-bond ($\theta$, Fig. \ref{orient}b). Solid and dashed lines refer to the left-hand side (negatively charged) and right-hand side (positively charged) electrodes. We consider only the first layer of interfacial water, up to a normal distance of 4 \AA\ with respect to the electrode, corresponding to the density minimum between the first and the second stratified water layer, cf. Fig. 5 in the main text.
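The angle computation described above can be sketched as follows. This is a generic numpy illustration with hypothetical coordinates, not our production analysis code; the surface normal is taken to point from the electrode into the water.

```python
import numpy as np

def water_angles(o, h1, h2, normal=(0.0, 0.0, 1.0)):
    """Angles (deg) between the surface normal and the water bisector
    (alpha) and the two OH bonds (theta1, theta2), given O/H positions."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    oh1 = np.asarray(h1, float) - np.asarray(o, float)
    oh2 = np.asarray(h2, float) - np.asarray(o, float)
    bisector = oh1 / np.linalg.norm(oh1) + oh2 / np.linalg.norm(oh2)

    def angle(v):
        v = v / np.linalg.norm(v)
        return float(np.degrees(np.arccos(np.clip(v @ n, -1.0, 1.0))))

    return angle(bisector), angle(oh1), angle(oh2)

# a water molecule lying flat relative to the surface: all angles are 90 deg,
# matching the planar configurations observed at zero bias
alpha, t1, t2 = water_angles([0, 0, 0], [0.76, 0.59, 0.0], [-0.76, 0.59, 0.0])
```

Histogramming these angles over the first-layer molecules of a trajectory yields distributions of the kind shown in Fig. \ref{orient}.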
\begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{orient3.jpg} \caption{\label{orient} Probability distributions of \textbf{a)} the angle $\alpha$ enclosed between the surface normal and the water bisector and \textbf{b)} the angle $\theta$ enclosed between the surface normal and the water OH-bond. Solid and dashed lines indicate distributions computed for negatively and positively charged surfaces, respectively.} \end{figure} For $\Phi_0 = 0\ \mathrm{V}$, the angle distributions obtained for the left and right hand side electrodes agree within the numerical accuracy (blue solid and dashed lines, Fig. \ref{orient}), reflecting the symmetry of our computational setup. Both the $\alpha$ and $\theta$ distributions are centered around $90^{\circ}$, indicating that for our hydrophobic electrodes interfacial water adopts largely planar configurations on average, where the molecular planes are parallel to the electrode surfaces. At an applied voltage of $\Phi_0 = 4\ \mathrm{V}$, we observe a field-induced reorientation of the interfacial water molecules. On the negatively charged left hand electrode (solid red lines, Fig. \ref{orient}) the interfacial water layer features a clear net dipole moment. The maximum of the probability distribution for the angle $\alpha$ between the water bisector and the surface normal is located at $126^{\circ}$. In agreement with recent findings by Li \emph{et al.} \cite{li2019situ} for Au(111) surfaces, the OH-bond angle distribution becomes bimodal: one OH-bond remains in-plane (maximum at $96^{\circ}$), whereas the other OH-bond is now pointing towards the electrode surface (maximum at $162^{\circ}$) in an H-up configuration. On the positively charged right hand electrode (dashed red lines, Fig. \ref{orient}), in contrast, such a bimodal distribution is absent since here the oxygen atoms of the water molecules are oriented towards the electrode surface. 
We note that Li \emph{et al.} \cite{li2019situ} used explicit counter ions to induce a surface charge. Hence, the interfacial water structure is sampled only for surface charges that amount to an integer number of electrons and, by extension, for potentials that correspond to those integer charges. The thermopotentiostat approach introduced here, in contrast, allows us to perform simulations under potential control for arbitrary continuous potentials.
\section{Introduction} While the notion of anticipation has long been known in psychology, biology and physics, it remains difficult to agree on a standard definition that can account for its multiple facets. For example, in \cite{Grush2004}, the author proposes an analogy between motor control and Kalman filters, where a controller is supposed to produce a signal that is sent both to the plant to be controlled and to an emulator that is then able to produce a prediction of the behavior. In \cite{Riegler2001}, the author refutes this standard definition of anticipatory systems as being based on a predictive model of the system itself and its environment.\\ However, even if there does not exist such a general definition, there is a large consensus on the fundamental role played by anticipation in behavior. Someone deprived of any anticipation abilities would be severely impaired in everyday life, from both a perception and an action point of view. Of course, the deprivation of anticipatory capabilities does not need to be so radical, and we can also imagine a lighter impairment of the system. For instance, let us simply consider the inability to anticipate changes in the visual information resulting from an eye saccade. This anticipation is known to be largely based on unconscious mechanisms that provide us with a feeling of stability while the whole retina is submerged by different information at each saccade: producing a saccade results in a complete change in the visual perception of the outer world. If a system is unable to anticipate its own saccadic movements, it cannot pretend to obtain a coherent view of the world: each image would be totally uncorrelated from the others. A stimulus at one location before a saccade could not easily be identified as being the same stimulus at another location after the saccade.
The aim of this paper is to precisely pinpoint the importance of this visual anticipation in establishing a coherent view of the environment and to propose a computational model that relies on anticipation to efficiently scan a visual scene.\\ After a quick review of the literature demonstrating that visual anticipation is a critical part of the visual system, we introduce a simple experiment of visual search and explain how the model we propose can solve the task by using both anticipation and a dynamic model of working memory. \section{Visual search} Visual search is a cognitive task that most generally involves an active scan of a visual scene to find one or several given targets among distractors. It is deeply anchored in most animal behaviors, from a predator looking for a prey in the environment to the prey looking for a safe place to avoid being seen by the predator. Psychological experiments may be less ecological and may propose, for example, to find a given letter among an array of other letters, measuring the efficiency of the visual search in terms of reaction time (the average time to find the target given the experimental paradigm). In the early eighties, \cite{Treisman1980} suggested that the brain actually extracts some basic features from the visual field in order to perform the search. Among these basic features, recently reviewed by \cite{Wolfe1998}, one can find color, shape, motion and curvature. Finding a target is then equivalent to finding the conjunction of features (that may be unique) that best describes the target. In this sense, \cite{Treisman1980} distinguished two main paradigms (a more tempered point of view can be found in \cite{Duncan1989}). \\ \noindent {\bf Feature search} refers to a search where the target differs from distractors by exactly one feature.\\ {\bf Conjunction search} refers to a search where the target differs from distractors by two or more features.
\\ What best characterizes feature search is a constant search time that does not depend on the number of distractors. The target is sufficiently different from the distractors to pop out. However, in the case of conjunction search, the time to find the target seems to be tightly linked to the number of distractors that share at least one feature with the target (cf. Fig. \ref{fig:search}). These observations lead to the question of how a visual stimulus could be represented in the brain. In \cite{Milner1992}, the authors proposed that visual perception relies on two separate pathways: one dedicated to the extraction of features independently of their spatial positions (the so-called {\em What} pathway), while the other only extracts stimuli positions without any information regarding feature properties (the so-called {\em Where} pathway). In this article, we deal neither with the high-level processing of the visual input (the {\em What} pathway) nor with the difficult problem of the communication between the two pathways, known as the binding problem, and only consider a spatial representation of the visual input, built by applying basic filters. \begin{figure} \begin{center} \includegraphics[width=8.5cm]{./eps/search.eps} \caption{Feature search can be performed very quickly, as illustrated on the left part of the figure; the disc shape literally pops out from the scene. However, as illustrated on the right part of the figure, if the stimuli share at least two features, the pop-out effect is suppressed. Hence, finding the disc shape with the stripes going from up-left to down-right requires an active scan of the visual scene. \label{fig:search}} \end{center} \end{figure} \subsection{Saccadic eye movements} Eye movements may have different behavioral goals, leading to five categories of movements: saccades, vestibulo-ocular reflex, optokinetic reflex, smooth pursuit and vergence.
However, in this article we will only focus on saccades (for a detailed study of eye movements, see \cite{Leigh1999}, \cite{Carpenter1988}).\\ Saccades are fast and frequent eye movements that quickly move the eye from the current point of gaze to a new location in order to center a visual stimulus on the fovea, a small area of the retina where the resolution is highest. The velocity of the eyes depends on the amplitude of the movement and can reach up to 700 degrees per second, with saccades occurring at a frequency of about 3 Hz. The question we would like to address is how the brain may give the illusion of a stable visual space while the visual perception is drastically modified every 200~ms.\\ While the debate over whether or not the brain is blind during a saccade has not been settled (\cite{Kleiser2004}, \cite{Ross2001}), the coherence between the perception before and after a saccade cannot be established accurately on the basis of perception alone. One solution is to consider that the brain may use an efferent copy of the voluntary eye movement to remap the representation it has built of the visual world. Several studies have shed light on pre-saccadic activity in areas such as V4 and LIP, where the locations of relevant stimuli are supposed to be represented. In \cite{Moore1998}, the authors suggest that ``the presaccadic enhancement exhibited by V4 neurons [...] provides a mechanism by which a clear perception of the saccade goal can be maintained during the execution of the saccade, perhaps for the purpose of establishing continuity across eye movements''. In \cite{Merriam2005}, the authors review evidence that LIP neurons, whose receptive field will land on a previously stimulated screen location after a saccade, are excited even if the stimulus disappears during the saccade.
\subsection{Visual attention} The capacity to focus on a given stimulus of the visual scene is tightly linked to visual attention, which has been defined as the capacity to concentrate cognitive resources on a restricted subset of sensory information (\cite{James1890}). In the context of visual attention, only a small subset of the retinal information is available at any given time to elaborate motor plans or cognitive reasoning (cf. the \emph{change blindness} experiments presented in \cite{Regan2001}, \cite{Simons2000}). The selection of a target for an eye movement is then closely related to the notion of spatial attention (\cite{Moore2001}), which is classically divided into two types: {\bfseries overt attention}, which involves a saccade to center an object on the fovea, and {\bfseries covert attention}, in which no eye movement is initiated. These two types of spatial attention were first supposed to be independent (\cite{Posner1990}), but recent accounts such as the premotor theory of attention proposed in \cite{Rizzolatti1987} (see also \cite{Chelazzi1993}, \cite{Kowler1995}, \cite{Craighero1999}) consider that covert and overt attention rely on the same neural structures, with movement simply being inhibited in covert attention. \subsection{Computational models} \label{section:computational} Over the past few years, several attempts at modeling visual attention have been made (\cite{Ullman1985}, \cite{Tsotsos1995}, \cite{Wolfe2000}, \cite{Itti2001}, \cite{Hamker2004}). The basic idea behind most of these models is to find a way to select interesting locations in the visual space, given their behavioral relevance and whether or not they have already been focused. The two central notions in this context were proposed by \cite{Ullman1985} and \cite{Posner1984}: \begin{itemize} \item the saliency map \item inhibition of return (IOR).
\end{itemize} The saliency map is a single spatial map, in retinotopic coordinates, where all the available visual information converges in order to obtain a unified representation of the stimuli according to their behavioral relevance. A winner-take-all algorithm can easily be used to find the most salient stimulus within the visual scene, which is identified as the attentional point of focus. However, in order to be able to move on to the next stimulus, it is important to bias the winner-take-all algorithm in such a way that it prevents returning to an already focused stimulus. The goal of the inhibition of return mechanism is precisely to feed the saliency map with such a bias. The idea is to have another neural map that records focused stimuli and inhibits the corresponding locations in the saliency map. Since an already focused stimulus is actively inhibited by this map, it cannot win the winner-take-all competition, even if it is the most salient.\\ The existence of a single saliency map is still not proven. In \cite{Hamker2004} the author proposes a more distributed representation of these relevance signals, clearly separating the what and the where pathways stated before, with spatial competition occurring in a motor map instead of a perceptive one. The related model exhibits good performance on visual search tasks in natural scenes, but is restricted to covert attention. The authors therefore do not take eye movements into account and the visual scene is supposed to remain stable: scanning is done without any saccade. For the rest of this article, we will stick to the saliency map hypothesis, even though it is debated, in order to illustrate the anticipatory mechanism. \section{A model of visual search with overt attention} \subsection{Experiment} In order to accurately evaluate the model, we set up a simple experimental framework where some identical stimuli are drawn on a blackboard and observed by a camera. The task is to successively focus (i.e.
center) each of the stimuli without focusing twice on any of them. We estimate the performance of the model in terms of how many times each stimulus has been focused. Hence, the point is not to analyze the strategy for deciding which stimulus should be focused next (see \cite{Findlay2006a,Findlay2006b} for details on this matter). In the context of the proposed model, the strategy is simply to go from the most salient stimulus to the least salient one, and to pick one stimulus at random if the remaining ones are equally salient. Figure \ref{fig:scan} illustrates an experiment composed of four identical stimuli in which the visual scan path is drawn. The effect of making a saccade from one stimulus to another is shown, and underlines the difficulty (for a computational model) of matching a stimulus before and after a saccade. Since each stimulus is identical to the others, it is impossible to perform an identification based solely on features. \begin{figure} \begin{center} \includegraphics[width=8.5cm]{./eps/scan.eps} \caption{When scanning a visual scene, going for example from stimulus 1 to stimulus 4, as illustrated on the left of the figure, the image received on the retina changes radically each time a stimulus is centered on the retina, as illustrated on the right of the figure. The difficulty in this situation is to remember which stimuli have already been centered in order to center another one. The numbers on the stimuli are shown for explanatory purposes only and do not appear on the screen; all the stimuli are identical. \label{fig:scan}} \end{center} \end{figure} \subsection{Model} The model is based on three distinct mechanisms (cf. Fig. \ref{fig:SchemaModele} for a schematic view of the model). The first one is a competition mechanism over potential targets represented in a saliency map, previously computed from the visual input.
Second, to be able to focus only once on each stimulus, the locations of the scanned targets are stored in a memory map using retinotopic coordinates. Finally, since we are considering overt attention, the model is required to produce a camera movement that centers the target on the fovea, and this movement is also used to update the working memory. This third mechanism works in conjunction with two inputs, the current memory and the parameters of the next saccade, which allows the model to compute a quite accurate prediction of the future state of the visual space, restricted to the targets that have already been memorized. A version of this model without the anticipatory mechanism can be found in \cite{Vitay2005}.\\ \begin{figure} \begin{center} \includegraphics[width=8.5cm]{./eps/SchemaModele.eps} \caption{Schematic view of the architecture of the model. The image captured by the camera is filtered and represented in the saliency map. This information feeds two pathways: one to the memory and one to the focus map. A competition in the focus map leads to the most salient location, which is the target for the next saccade. The anticipation circuit predicts the future state of the memory from its current content and the programmed saccade. \label{fig:SchemaModele}} \end{center} \end{figure} Moreover, the model uses the computational paradigm of two-dimensional discrete neural fields (the mathematical basis of this paradigm can be found in \cite{Amari1977} for the one-dimensional case, extended to a two-dimensional study in \cite{Taylor1999}). The model consists of six {\em n$\times$n} maps of units, characterized by their position in a map, denoted {\bfseries x} $\in [1..n]^2$, and by their activity as a function of position and time, denoted u({\bfseries x},t). The basic dynamic equation governing the activity of a unit at position {\bfseries x} depends on its input, computed as a weighted sum over input units, and on a weighted influence of the lateral units in the same map.
Equation (\ref{equation_cnft}) is the equation proposed in \cite{Amari1977}, discretized in space, where M is the set of the lateral units, $M'$ the set of the input units, $w_{M}(x-x')$ the lateral connection weight function, and $s(x,y)$ the afferent connection weight function. Usually, the weighting functions $s(x,y)$ and $w_{M}(x-x')$ are chosen as a Gaussian or as a difference of Gaussians, as given by (\ref{function_weight}). \begin{eqnarray} \tau.\frac{\partial u(x,t)}{\partial t} & = & -u(x,t) + \sum_{\mathrm{M}} w_{\mathrm{M}}(x-x')u(x',t) + \sum_{\mathrm{M'}} s(x,y).u(y,t) \label{equation_cnft} \\ \nonumber s(x,y) & = & C.e^{-\frac{{\|x-y\|^2}}{c^2}} \mbox{ with } C,c \in \bbbr^{*+} \\ w_{\mathrm{M}} (x-x') & = & A.e^{-\frac{\|x-x'\|^2}{a^2}}-B.e^{-\frac{\|x-x'\|^2}{b^2}} \mbox{ with } A,B,a,b \in \bbbr^{*+} \label{function_weight} \end{eqnarray} where $u(\textbf{x},t)$ is the activity of the unit at location {\bfseries x} in a map M, $u(\textbf{x'},t)$ the activity of the unit at location {\bfseries x'} in the same map, $u(\textbf{y},t)$ the activity of the unit at location {\bfseries y} in a map M', different from M, and $\tau$ a given parameter that defines the temporal dynamics. A unit whose activity satisfies (\ref{equation_cnft}) will be called a sigma unit in the following. We also introduce sigma-pi units (\cite{Rumelhart1987}) whose activity satisfies (\ref{equation_sigmapi}). While in (\ref{equation_cnft}) the input of a unit is computed as a sum of activities, in (\ref{equation_sigmapi}) it is computed as a sum of products of activities.
\begin{equation} \tau.\frac{\partial u(x,t)}{\partial t} = -u(x,t) + \sum_{\mathrm{M}} w_{\mathrm{M}}(x-x')u(x',t) + \sum_{i \in \mathrm{I}} w_{i}.\prod_{y \in \mathrm{M_{i}'}} u(y,t) \label{equation_sigmapi} \end{equation} In the following, we denote by $I(\textbf{x},t)$ the input of the unit {\bfseries x} at time t, which can be written as: \begin{eqnarray} I(x,t) & = & \sum_{\mathrm{M'}} s(x,y).u(y,t) \mbox{ for sigma units} \\ I(x,t) & = & \sum_{i \in \mathrm{I}} w_{i}.\prod_{y \in \mathrm{M_{i}'}} u(y,t) \mbox{ for sigma-pi units} \end{eqnarray} We will now briefly describe how the different maps interact. Since the scope of this article is the anticipation mechanism, the descriptions of the saliency map, the focus map and the working memory will remain brief; a more detailed explanation, with the appropriate dynamical equations, can be found in \cite{Vitay2005}. \subsubsection{Saliency map} The saliency map is updated by convolving the image captured by the camera of the robot used for the simulation with Gaussian filters. The stimuli we use are easily discriminable from the background on the basis of color information. This computation leads to a representation of the visual stimuli as Gaussian patterns of activity in a single saliency map. We point out again that this is one of our working hypotheses, detailed in section \ref{section:computational}. \subsubsection{Focus} Units in the focus map receive direct excitatory feedforward inputs from the saliency map. The lateral connections are locally excitatory and widely inhibitory, so that a competition between the units within the map leads to the emergence of only one stimulus in the focus map. This stimulus is the next target to focus on, and the movement to perform to center it on the fovea is decoded from this map.
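As an illustration, the competition dynamics of (\ref{equation_cnft}) with the difference-of-Gaussians weights of (\ref{function_weight}) can be sketched numerically. The following minimal Python example (all parameter values are illustrative, not those of the actual model) relaxes a small map receiving two point stimuli of unequal saliency:

```python
import numpy as np

def dog_weights(n, A=1.0, a=1.5, B=0.5, b=20.0):
    """Difference-of-Gaussians lateral kernel w_M(x - x'), cf. Eq. (2);
    parameter values here are illustrative, not the model's."""
    d = np.arange(n)
    dx = d[:, None, None, None] - d[None, None, :, None]
    dy = d[None, :, None, None] - d[None, None, None, :]
    d2 = dx ** 2 + dy ** 2
    return A * np.exp(-d2 / a ** 2) - B * np.exp(-d2 / b ** 2)

def relax(saliency, w, tau=8.0, dt=1.0, steps=80):
    """Euler integration of tau du/dt = -u + lateral input + afferent
    input, cf. Eq. (1), with activities clipped to [0, 1]."""
    u = np.zeros_like(saliency)
    for _ in range(steps):
        lateral = np.tensordot(w, u, axes=([2, 3], [0, 1]))
        u = np.clip(u + (dt / tau) * (-u + lateral + saliency), 0.0, 1.0)
    return u

# Two point stimuli of unequal saliency: after relaxation, the most
# salient location carries the strongest activity.
n = 13
s = np.zeros((n, n))
s[3, 3], s[9, 9] = 0.7, 0.5
u = relax(s, dog_weights(n))
```

With a narrow excitatory centre ($a$) and a wide inhibitory surround ($b$), the location receiving the strongest input wins the competition while the weaker one is largely suppressed, which is the behaviour expected of the focus map.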
\subsubsection{Working memory} Once a stimulus has appeared within the focus map, and because it is also present in the saliency map, it emerges immediately within the working memory. Excitation from both the focus map and the saliency map (at the same location) is necessary for the emergence of the stimulus in the working memory area. If the focused stimulus changes, it will no longer be present in the focus map, so that an additional mechanism is needed to maintain it in the memory. Although this is not shown in the schematic illustration of Fig.~\ref{fig:SchemaModele}, the memory consists of two maps linked by reciprocal excitatory connections: the first map excites the second and the second excites the first, with weights chosen so that the excitation is limited in space. \subsubsection{Memory anticipation} The memory anticipation mechanism aims at predicting, before the movement is initiated, what the state of the working memory should be after the eye movement that centers the stimulus selected in the focus map. The sigma-pi units in the anticipation map have two inputs: the activity of the units of the focus map and the activity of the units of the working memory. If we denote by wm({\bfseries x},t) the activity of the unit {\bfseries x} of the working memory at time t, and by f({\bfseries x},t) the activity of the unit {\bfseries x} of the focus map at time t, we define the input I({\bfseries x},t) of the unit {\bfseries x} in the anticipation map as: \begin{equation} I(\textbf{x},t) = \beta.\sum_{\textbf{y} \in \bbbr^2} wm(\textbf{y},t).f(\textbf{y}-\textbf{x},t) \label{poids_anticipation} \end{equation} The input of each unit in the anticipation map is thus computed as a convolution product of the working memory and the focus map, centered on its coordinates.
To make (\ref{poids_anticipation}) readable, the condition on the sum is written more loosely than it should be: since the input maps are discrete sets of units, the two vectors {\bfseries y} and {\bfseries y}-{\bfseries x} must not exceed the size of the maps.\\ From (\ref{equation_sigmapi}) and (\ref{poids_anticipation}), the activity of the units in the anticipation map, without lateral connections, satisfies (\ref{equation_anticipation}). \begin{equation} \tau.\frac{\partial u(x,t)}{\partial t} = -u(x,t) + \beta.\sum_{\textbf{y} \in \bbbr^2} wm(\textbf{y},t).f(\textbf{y}-\textbf{x},t) \label{equation_anticipation} \end{equation} The shape of activity in the anticipation map therefore converges to the convolution product of the working memory and the focus map. Since the activity in the focus map has a Gaussian shape and the working memory can be written as a sum of Gaussian functions, this convolution product leads to an activity profile that is the working-memory profile translated by the vector represented in the focus map. This profile is the prediction of the future state of the working memory and is then used to slightly excite the working memory. After the eye movement, when the saliency map is updated, the previously scanned stimuli emerge in the working memory as a result of the conjunction of the visual stimuli in the saliency map and the prediction of the working memory. This is the same mechanism as the one by which a stimulus first emerges in the working memory, owing to the conjunction of the activity in the saliency map and the focus map. \subsection{Simulation and results} The visual environment consists of three identical stimuli, each of which the robot is expected to scan exactly once. A stimulus is easily discriminable from the background, namely a green lemon on a white table. A complete activation sequence of the different maps is illustrated in Fig. \ref{Simulation_EPS}.
The saliency map is filled by filtering the image captured from the camera with a green filter in HSV coordinates, which yields three distinct stimuli. At the beginning of the simulation (Fig. \ref{Simulation_EPS}a), only one of the three stimuli emerges in the focus map, thanks to the strong lateral competition that occurs within this map. This stimulus, present both in the focus map and in the saliency map, emerges in the working memory. The activation within the anticipation map reflects what the state of the saliency map should be, restricted to the stimuli that are in the working memory, after the movement that brings the focused one to the center of the visual field. During the eye movement (Fig. \ref{Simulation_EPS}b), no visual information is available, and the parameter $\tau$ in (\ref{equation_cnft}) and (\ref{equation_anticipation}) is adjusted so that only the units in the anticipation map remain active, whereas the activity of the others tends to zero. After the eye movement, as soon as the saliency map is fed with the new visual input, the working memory is updated thanks to the excitation from both the saliency and anticipation maps at the same location: the prediction of the state of the visual memory is compared with the current visual information. A new target can now be elicited in the focus map thanks to a switch mechanism similar to that described in \cite{Vitay2005}.
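The anticipatory remapping can also be checked in isolation. The sketch below implements the sigma-pi input of (\ref{poids_anticipation}) by brute force, with the map origin taken at index $(0,0)$ and terms falling outside the map simply dropped, as noted above; the grid size and bump positions are arbitrary. A memory bump at $(10,10)$ correlated with a focus bump at $(4,4)$ yields a predicted bump at $(6,6)$, i.e. the memorized location translated by the saccade vector:

```python
import numpy as np

def bump(n, cx, cy, sigma=1.2):
    """A Gaussian blob of activity centred on (cx, cy) in an n x n map."""
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / sigma ** 2)

def anticipation_input(wm, focus):
    """I(x) = sum_y wm(y) * focus(y - x), cf. Eq. (5); indices y - x
    falling outside the map are dropped."""
    n = wm.shape[0]
    out = np.zeros_like(wm)
    for x1 in range(n):
        for x2 in range(n):
            acc = 0.0
            for y1 in range(n):
                for y2 in range(n):
                    if 0 <= y1 - x1 < n and 0 <= y2 - x2 < n:
                        acc += wm[y1, y2] * focus[y1 - x1, y2 - x2]
            out[x1, x2] = acc
    return out

n = 15
wm = bump(n, 10, 10)     # one memorized stimulus
focus = bump(n, 4, 4)    # saccade target encoded in the focus map
pred = anticipation_input(wm, focus)
# the peak of pred sits at (10 - 4, 10 - 4) = (6, 6): the memory
# translated by the saccade vector
```

This is exactly the translation property used by the model: the predicted map is then matched against the post-saccadic saliency map to re-instantiate the memory.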
\begin{figure} \begin{minipage}{0.85\linewidth} \centering \begin{tabular}[htbp]{cccc} \includegraphics[width=0.25\linewidth]{./eps/S1.eps} & \includegraphics[width=0.25\linewidth]{./eps/S2.eps} & \includegraphics[width=0.25\linewidth]{./eps/S3.eps} & \includegraphics[width=0.25\linewidth]{./eps/S4.eps}\\ a) & b) & c) & d)\\ \includegraphics[width=0.25\linewidth]{./eps/S5.eps} & \includegraphics[width=0.25\linewidth]{./eps/S6.eps} & \includegraphics[width=0.25\linewidth]{./eps/S7.eps} & \includegraphics[width=0.25\linewidth]{./eps/S8.eps}\\ e) & f) & g) & h)\\ \end{tabular} \end{minipage} \caption {Evolution of the model during an overt visual scan trial. a) One of the three stimuli emerges in the focus map and the anticipation units predict the future state of the visual memory (the maps wm and thal\_wm). b) During the execution of the saccade, only the units in the anticipation map remain active. c) The focused stimulus emerges in the memory since it is present both in the saliency map and in the anticipation map at the same location. d) A new target to focus is elicited. e) The future state of the memory is anticipated. f) The saccade is executed and only the prediction remains. g) The two already focused stimuli emerge in the memory. h) The attentional focus lands on the last target.\label{Simulation_EPS}} \end{figure} \section{Discussion} We have presented a computational model of visual memory anticipation that is able to ensure the coherence of the visual world despite the abrupt changes in perception that occur after each eye movement. The prediction of the future state of the visual memory enriches the perception of the visual world in order to avoid focusing twice on the same stimulus. As we explained previously, saccades are generally too fast for a visual memory to be updated continuously, even if we were not blind during eye movements.
An efferent copy of the eye movement is used to establish the missing link between the pre- and post-saccadic perceptions. This mechanism is clearly an extension of the visual attention models presented in section \ref{section:computational}, where the visual world is purely static.\\ The question of learning the transformation underlying the anticipatory mechanism, namely the convolution product of the focus map and the working memory, remains open and is still under study. We did implement a learning mechanism, under restrictions and strong hypotheses, that relies heavily on the difference between the pre-saccadic prediction and the post-saccadic actual perception. This self-generated signal measures to what extent the prediction is correct, so the weights can easily be modified accordingly. The main difficulty during learning remains the sampling distribution of examples within the input space, which is a well-known problem in information and learning theory. Without an additional motivational system that could bias the examples according to a given task, it is quite unrealistic to rely on a regular distribution of examples.\\ \bibliographystyle{splncs}
\section{Introduction} The inflationary paradigm \cite{Starobinsky:1980te,Guth:1980zm,Sato:1980yn,Linde:1981mu,Albrecht:1982wi,Linde:1983gd,Lyth:1998xn,Riotto:2002yw,Kinney:2003xf,Baumann:2009ds} offers, in its numerous constructions (see e.g. \cite{Martin:2014vha}), a testable \cite{Planck:2015xua,Ade:2015oja} description of the physics of the very early Universe. Inflation addresses several open problems in cosmology, chief among them the question of the origin of cosmological structures. In its simplest realization, the Universe is dominated by the potential energy of a light scalar field, the inflaton, that drives the expansion. In this picture, quantum fluctuations of the scalar field during inflation are precisely the primary source of cosmological perturbations \cite{Mukhanov:1981xt,Starobinsky:1982ee,Guth:1982ec,Bardeen:1983qw,Abbott:1984fp,Abbott:1984qq}. The statistical properties of the Cosmic Microwave Background (CMB) fluctuations and of the Large Scale Structures (LSS) may therefore contain information about the physics of inflation. In addition to scalar density perturbations, inflation generically produces tensor perturbations, resulting in a spectrum of primordial gravitational waves which, via its impact on the CMB and other astronomical sources, reveals information about inflation \cite{Starobinsky:1979ty,Rubakov:1982df,Krauss:1992ke,Krauss:2013pha}. \\ The transition from inflation to later stages of the evolution of the Universe (radiation and matter dominance) is referred to as \textsl{reheating}. During reheating the inflaton field loses its energy, eventually leading to the production of ordinary matter.
Several reheating models have been proposed: the simplest ones involve the perturbative decay of an oscillating inflaton field at the end of inflation \cite{Abbott:1982hn,Dolgov:1982th,Albrecht:1982mp}, while more intricate scenarios include non-perturbative processes such as (broad) parametric resonance decay \cite{Kofman:1994rk,Traschen:1990sw,Kofman:1997yn}, tachyonic instability \cite{Greene:1997ge,Shuhmaher:2005mf,Dufaux:2006ee,Abolhasani:2009nb,Felder:2000hj,Felder:2001kt}, and instant preheating \cite{Felder:1998vq}\footnote{See also \cite{Boyanovsky:1996sv,Bassett:2005xm,Allahverdi:2010xz,Amin:2014eta} for reviews {and, e.g., \cite{Drewes:2013iaa,Drewes:2014pfa} for more studies on reheating}.}. The word \textsl{preheating} denotes the initial stage of reheating, especially in the context where the decay happens exponentially, generating high occupation numbers in select frequency bands. Immediately after preheating, the frequency bands that underwent parametric resonance have extremely high occupation numbers while the rest of the spectrum is essentially unpopulated, a highly non-thermal state. Over time, scattering events spread out the distribution, eventually leading to a blackbody spectrum characterized by a final temperature $T_{re}$, which normally corresponds to the temperature at the beginning of the radiation-dominated era. \\ \indent For some inflationary scenarios and for given interactions between the inflaton field and other matter fields, numerical studies have been performed to derive an effective equation of state (eos), parametrized by a function $w_{re}(t)$, for the Universe during the various stages of reheating. As inflation ends, the eos parameter is equal to $-1/3$. Assuming a massive inflaton, the eos very quickly climbs to 0, the time-averaged eos of a massive harmonic oscillator oscillating between potential dominance (eos of $-1$) and kinetic dominance (eos of $1$).
During this initial phase of reheating, the frequency of oscillations, characterized by the inflaton mass $m$, is larger than the expansion rate. It is therefore correct to approximate the eos of the inflaton as a constant equal to 0. This is the equation of state of the Universe at the beginning of reheating, when the Universe is still dominated by the inflaton field. As the inflaton decays and its decay products make up an increasing fraction of the energy density of the Universe, the eos increases from 0 to $1/3$ at the start of radiation dominance. In \cite{Podolsky:2005bw} it was shown that for a simple chaotic inflation model and for a quartic $g^{2}\phi^{2}\chi^{2}$ interaction ($\phi$ being the inflaton and $\chi$ its decay product), the equation of state right after inflation, characterized by $w_{re}=0$, changes sharply, within a couple of e-folds, to $w_{re}\sim 0.2-0.3$ already during preheating, long before the system reaches thermal equilibrium\footnote{A physical system reaching an effective (macroscopic) state characterized by a nearly constant ratio of pressure over energy density while it is, microscopically, still out of equilibrium (``pre-thermalization'') had previously been investigated in Minkowski spacetime in \cite{Berges:2004ce}.}. The duration of preheating can therefore generally be regarded as ``instantaneous'' in comparison with the remaining stages of reheating. In cases like the ones described in \cite{Podolsky:2005bw} (see also \cite{Kofman:1994rk}), $w_{re}$ may therefore be rightfully treated as a constant throughout the entire reheating era. \\ \indent Aside from its thermalization temperature, $T_{re}$, and effective equation of state, $w_{re}$, reheating is also characterized by its duration, which one may quantify in terms of the number of e-foldings $N_{re}\equiv \ln (a_{re}/a_{end})$ occurring between the time inflation ends, $t_{end}$, and the beginning of the radiation-dominated era, $t_{re}$.
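The claim above that a coherently oscillating massive inflaton has a vanishing time-averaged eos is easy to verify numerically. The sketch below (which neglects Hubble friction, a good approximation when $m\gg H$, and uses illustrative units) averages $w=p/\rho$ for $\phi\propto\cos(mt)$ with $V=m^2\phi^2/2$ over one oscillation period:

```python
import math

def mean_eos_massive_field(m=1.0, samples=400):
    """Period-averaged w = <p>/<rho> for phi = cos(m t), V = m^2 phi^2 / 2,
    ignoring Hubble friction (valid when m >> H)."""
    period = 2.0 * math.pi / m
    p_sum = rho_sum = 0.0
    for k in range(samples):
        t = k * period / samples
        phi = math.cos(m * t)
        dphi = -m * math.sin(m * t)
        p_sum += 0.5 * dphi ** 2 - 0.5 * m ** 2 * phi ** 2    # pressure
        rho_sum += 0.5 * dphi ** 2 + 0.5 * m ** 2 * phi ** 2  # energy density
    return p_sum / rho_sum

w_avg = mean_eos_massive_field()   # ~ 0 to machine precision
```

Analytically, $p=-(m^2/2)\cos(2mt)$ averages to zero over a period while $\rho=m^2/2$ stays constant, so the average eos is exactly 0, matching the matter-like behaviour quoted in the text.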
\\ The reheating era is a difficult one to constrain observationally: except in some non-conventional scenarios (e.g. \cite{Taruya:1997iv,Bassett:1999cg,Finelli:2000ya,Tsujikawa:2002nf,Chambers:2007se,Bond:2009xx,Bassett:1998wg,Bassett:1999mt,Bassett:1999ta,Bethke:2013aba,Easther:2013nga,Moghaddam:2014ksa}), and in the absence of topological defects like monopoles or strings, the fluctuations produced during reheating remain sub-horizon and cannot leave an observable imprint at the level of the CMB or LSS. A lower bound is placed on the reheating temperature by primordial nucleosynthesis (BBN), $T_{BBN}\sim 10^{-2}\,$GeV \cite{Steigman:2007xt}\footnote{{Smaller values may be assigned to the lower bound of the reheating temperature in models such as \cite{Kawasaki:1999na}}.}; the scale of inflation is merely bounded from above (the CMB B-modes recently measured by BICEP2 \cite{Ade:2014xna,Ade:2014gua} do not yet, unfortunately, point to an inflationary signal) and can be as large as $\sim 10^{16}\,$GeV, leaving for $T_{re}$ an allowed range of many orders of magnitude. Aside from the production of metric fluctuations in the aforementioned scenarios, a variety of signatures (or lack thereof) relative to the production of primordial black holes \cite{GarciaBellido:1996qt,Carr:2009jm,Torres-Lomas:2014bua}, magnetic fields \cite{Calzetta:2001cf,DiazGil:2007dy,DiazGil:2008tf}, unwanted relics \cite{Giudice:1999yt,Giudice:2001ep}, and also to mechanisms such as baryo- and leptogenesis \cite{Giudice:1999fb,Krauss:1999ng,GarciaBellido:1999sv,Davidson:2000dw,Copeland:2001qw} (and more, see \cite{Allahverdi:2010xz} for an overview and for a full list of related references), may be traced back to specific preheating/reheating models.
\\ Another possibility for extracting information about reheating is to consider the expansion history of the Universe between the time the observable CMB scales crossed outside the Hubble radius during inflation and the time they later re-entered, in such a way as to define a relation between inflationary and reheating parameters \cite{Liddle:2003as} \begin{equation}\label{prima} \ln\left[\frac{k}{a_{0}H_{0}}\right]=-N_{k}-N_{re}-N_{RD}+\ln\left[\frac{a_{eq}H_{eq}}{a_{0}H_{0}}\right]+\ln\left[\frac{H_{k}}{H_{eq}}\right]. \end{equation} In this equation, $k$ can be chosen as the pivot scale of a specific experiment, $N_{k}$ is the number of e-foldings between the exit time of the modes at this pivot during inflation and the end of inflation, while $N_{re}$ and $N_{RD}$ respectively indicate the number of e-folds between the end of inflation and the end of reheating, and between the end of reheating and the end of the radiation-dominated era. From (\ref{prima}) one realizes that, given the CMB constraints on the primordial power spectrum (which correspond to a prediction for $N_{k}$), for a given inflationary model one can infer the sum of $N_{RD}$ and $N_{re}$. To solve for $N_{re}$ and $N_{RD}$ individually one needs more information. For reheating models that can be parametrized by a constant effective pressure-to-energy ratio $w_{re}$, one can relate the density at the end of inflation to the density at the end of reheating and then, assuming conservation of entropy after reheating, to the temperature today. This way one obtains another equation with the same two unknowns $N_{re}$ and $N_{RD}$, which can be used to solve for each individually, or to rework the equations so as to trade the quantity $N_{re}$ for $T_{re}$, the temperature at the end of reheating. All of this is particularly straightforward for single-field models of inflation that are entirely defined by the form of their potential.
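To make the counting explicit, (\ref{prima}) can be rearranged so that the combination $N_{re}+N_{RD}$ is expressed in terms of the remaining quantities. A trivial numerical sketch follows; all the input values below are purely illustrative placeholders, not fitted numbers:

```python
import math

def reheating_plus_radiation_efolds(ln_k_over_a0H0, N_k,
                                    ln_aeqHeq_over_a0H0, ln_Hk_over_Heq):
    """Rearranged Eq. (1): N_re + N_RD in terms of the other quantities.
    All arguments are model/experiment dependent; callers supply them."""
    return (-ln_k_over_a0H0 - N_k
            + ln_aeqHeq_over_a0H0 + ln_Hk_over_Heq)

# Made-up inputs, only to show the bookkeeping:
total = reheating_plus_radiation_efolds(-45.0, 55.0, 3.8, 100.0)
```

This makes the point in the text concrete: the CMB pins down only this sum, and a second relation (from the reheating equation of state and entropy conservation) is needed to split it into $N_{re}$ and $N_{RD}$.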
In summary, for a given inflationary model and for given equations of state during reheating lying within a reasonable, physically plausible range, one may use the CMB data to place constraints on the reheating temperature and its duration. These techniques have been successfully employed in several studies \cite{Martin:2006rs,Lorenz:2007ze,Martin:2010kz,Adshead:2010mc,Mielczarek:2010ag,Easther:2011yq,Dai:2014jja,Martin:2014nya}. \\ {In the same spirit as \cite{Martin:2006rs,Lorenz:2007ze,Martin:2010kz,Adshead:2010mc,Mielczarek:2010ag,Easther:2011yq,Dai:2014jja,Martin:2014nya}, and using similar techniques as in \cite{Dai:2014jja}} (where the attention was directed specifically to inflation with power-law potentials, $V(\phi)\sim \phi^{\alpha}$), we consider the constraints imposed by reheating on popular single-field inflationary scenarios. We derive predictions for the length of the reheating era and the temperature at the end of reheating for each model, assuming a constant equation of state during reheating. Accounting for the lower bounds on $T_{re}$ imposed by BBN and considering a physically plausible range of values for $w_{re}$ (the average value will likely fall between 0 and $\frac{1}{3}$), we use the relations between reheating and inflationary parameters, together with the constraints on the primordial power spectrum amplitude and tilt from Planck \cite{Planck:2015xua,Ade:2015oja}, to provide new constraints on the parameter space of given inflationary models. This is a useful and relatively new tool for constraining and differentiating between inflation models. Different models might overlap in their predictions for $n_s$ and $r$, but not for the same value of $w_{re}$. As the constraints on $n_s$ get tighter, this will translate into an increasingly narrow allowed range of $w_{re}$ for a given inflation model, and so this technique of constraining models with reheating will become increasingly efficient in ruling out some models in favor of others.
\\ This work is organized as follows: in Sec.~\ref{sec2} we detail the derivation of the reheating duration and of the temperature at the end of reheating as functions of the spectral index, for canonical single-field inflationary models and for reheating scenarios that can be described in terms of a constant effective equation of state; in Sec.~\ref{sec3} we review the analysis of \cite{Dai:2014jja} for a power-law potential and discuss the constraints from reheating on the inflationary parameters; in Secs.~\ref{sec4} through \ref{sec7} we compute the relations between inflationary and reheating parameters in the Starobinsky, Higgs, natural and hilltop inflation models and discuss the bounds placed on some of these models by reheating; in Sec.~\ref{sec8} we present our conclusions. \section{Calculating $N_{re}$ and $T_{re}$} \label{sec2} A reheating model (or class of models) may be characterized by a thermalization temperature, $T_{re}$, a duration, $N_{re}$ (here defined in terms of the number of e-folds counted from the end of inflation), and an equation of state with an effective pressure-to-energy-density ratio, $w_{re}$. The latter should take values larger than $-1/3$ for inflation to come to an end, and is assumed to be smaller than $1$ in order not to violate causality. A variety of reheating scenarios allow for an equation of state that is nearly constant in time. For the purposes of this work we will thus approximate $w_{re}$ as a constant in all our calculations; in our plots for $N_{re}$ and $T_{re}$, we assign to $w_{re}$ sample values in the interval $[-1/3,1]$. We define $N_{re}$ as the time frame from the end of inflation until the equation of state makes a step-function transition from the value $w_{re}$ it had during reheating to $w=1/3$, which we define as the start of radiation dominance. $T_{re}$ is the temperature when this transition occurs.
From this definition, $N_{re}$ and $T_{re}$ are not well defined if the equation of state during reheating is itself equal to $1/3$ (we will discuss this case later). We also assume a standard expansion history after reheating, with a radiation-dominated (RD) era followed by a matter-dominated (MD) one. We derive, following \cite{Martin:2006rs,Lorenz:2007ze,Martin:2010kz,Adshead:2010mc,Mielczarek:2010ag,Easther:2011yq,Dai:2014jja,Martin:2014nya}, an expression for the reheating parameters ($N_{re}$, $T_{re}$ and $w_{re}$) in terms of a set of physical quantities that are specific to inflation and to the cosmological epochs subsequent to reheating. Considering the evolution of the Universe between the Hubble-exit time during inflation (henceforth indicated by $t_{k}$) for observable scales and the time of observation of the same scales ($t_{0}$), one can write matching conditions for the total energy density as well as for the scale factor, $a(t)$, during the intermediate eras. Fig.~\ref{fig:gg} summarizes the evolution of the comoving horizon distance throughout this length of time, marked by the transitions between consecutive epochs at $t_{end}$, the end of inflation, $t_{re}$, the end of reheating/beginning of the RD era, and $t_{eq}$, the beginning of the MD era. In the figure we equate the size of the comoving horizon far back into inflation, corresponding to modes $l=2$, to the size of the horizon today. In order to solve the horizon problem, the span of comoving scales that leave the horizon from $l=2$ to the end of inflation must equal the span of comoving scales that re-enter the horizon after inflation until today. Note that the factor by which the comoving horizon shrinks between scales $l=2$ and the end of inflation (the length of the first line in the figures) is not known. The slope of that line is set by the fact that the equation of state is $\approx -1$ during inflation. Depending on the model, that line could be longer or shorter.
While there is a minimum length required to solve the horizon problem with inflation occurring before BBN, there is no upper bound. The value of $w_{re}$ sets the slope of the second line, i.e. the rate at which modes re-enter the horizon during reheating. In the figure we display the two extreme cases, $w_{re} =1$ and $w_{re} = - 1/3$. Comparing the two plots, one can see that the smaller $w_{re}$ is during reheating, the less efficiently modes re-enter the horizon, and the more e-folds are necessary in the post-inflation period. We consider single-field inflationary models with background field equations $\ddot{\phi}+3H\dot{\phi}+V^{'}=0$ and $3H^{2}M_{P}^{2}\simeq V(\phi)$. We also assume that both $\epsilon$ and $\eta$ remain smaller than 1 throughout the inflationary regime.\\ \begin{figure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.95\linewidth]{wre1_plot.pdf} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.9\linewidth]{wre_13_plot.pdf} \end{subfigure} \caption{Evolution of the comoving horizon distance over time for the two extreme cases of $w_{re}$: the first panel for $w_{re} = 1$ and the second for $w_{re} = - \frac{1}{3}$.} \label{fig:gg} \end{figure} \noindent If one assumes a constant equation of state, the change in the scale factor during reheating is easily related to the change in the energy density. Using $\rho \propto a^{-3(1+w)}$, the reheating epoch is described by \begin{align} \frac{\rho_{end}}{\rho_{re}} = \left(\frac{a_{end}}{a_{re}} \right)^{-3(1+w_{re})}, \end{align} where the subscript $end$ refers to the end of inflation (the start of reheating), and $re$ refers to the end of reheating.
Writing this in terms of e-foldings \begin{align}\label{eqq2} N_{re} = \frac{1}{3(1+w_{re})} \ln \left(\frac{\rho_{end}}{\rho_{re}} \right)= \frac{1}{3(1+w_{re})} \ln \left(\frac{3}{2}\frac{V_{end}}{\rho_{re}} \right), \end{align} where the last step of (\ref{eqq2}) is obtained by replacing $\rho_{end} = (3/2) V_{end}$, derived by setting $w = - 1/3$ at the end of inflation.\\ The temperature is related to the density by \begin{align}\label{eqq3} \rho_{re} = \frac{\pi^2}{30} g_{re} T_{re}^4, \end{align} where $g_{re}$ is the number of relativistic species at the end of reheating. Combining Eqs.~(\ref{eqq2}) and (\ref{eqq3}) one finds \begin{align}\label{eq2} N_{re} = \frac{1}{3(1+w_{re})} \ln \left(\frac{30 \cdot \frac{3}{2} V_{end}}{\pi^2 g_{re} T_{re}^4 } \right). \end{align} Making the standard assumption that entropy is conserved between the end of reheating and today, one can relate the reheating temperature to the temperature today by taking into account the changing number of helicity states in the radiation gas as a function of temperature, \begin{align}\label{eq6} T_{re}= T_0 \left(\frac{a_0}{a_{re}} \right) \left(\frac{43}{11 g_{re}} \right)^{\frac{1}{3}}=T_0 \left(\frac{a_0}{a_{eq}} \right) e^{N_{RD}} \left(\frac{43}{11 g_{re}} \right)^{\frac{1}{3}}, \end{align} where $N_{RD}$ is the length in e-folds of radiation dominance, $e^{-N_{RD}}\equiv a_{re}/a_{eq}$. The ratio $a_{0}/a_{eq}$ can be rewritten as \begin{align}\label{eq3} \frac{a_0}{a_{eq}} = \frac{a_0 H_{k}}{k} e^{-N_{k}} e^{- N_{re}} e^{- N_{RD}}\, , \end{align} where one uses the relation $k=a_{k} H_{k}$ for the time at which the pivot scale $k$\footnote{Note that whenever we refer to the pivot scale in the following, we use Planck's pivot scale of $0.05\,{\rm Mpc}^{-1}$.} crosses outside the Hubble radius and $N_{k}$ is defined as the number of e-foldings between the latter and the time inflation ends.
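Eq.~(\ref{eqq3}) and its inverse can be wrapped as a small numerical utility (a sketch in natural units; function names ours):

```python
import math

def rho_from_T(T_re, g_re=100.0):
    """rho_re = (pi^2/30) g_re T_re^4, the radiation energy density
    at the end of reheating (natural units)."""
    return math.pi**2 / 30.0 * g_re * T_re**4

def T_from_rho(rho_re, g_re=100.0):
    """Inverse relation: T_re = (30 rho_re / (pi^2 g_re))^(1/4)."""
    return (30.0 * rho_re / (math.pi**2 * g_re)) ** 0.25
```

The two functions are exact inverses, and $T_{re}$ scales as $\rho_{re}^{1/4}$, so a sixteen-fold increase in density doubles the reheating temperature.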
Inserting (\ref{eq3}) into (\ref{eq6}) one finds \begin{align}\label{eq8} T_{re} = \left(\frac{43}{11 g_{re}} \right)^{\frac{1}{3}} \left(\frac{a_0 T_0}{k} \right) H_{k} e^{-N_{k}} e^{- N_{re}}. \end{align} Notice that larger values of $N_{re}$ correspond to smaller $T_{re}$ and vice versa. In other words, as expected, the quicker and more efficiently reheating takes place, the larger the temperature. Plugging (\ref{eq8}) into Eq.~(\ref{eq2}) \begin{align}\label{eq4} N_{re} = \frac{4}{3 (1+ w_{re})} \left[ \frac{1}{4} \ln \left(\frac{3^2 \cdot 5}{\pi^2 g_{re}} \right) + \ln \left(\frac{V_{end}^{\frac{1}{4}}}{H_{k}} \right) + \frac{1}{3} \ln \left(\frac{11 g_{re}}{43} \right) + \ln \left(\frac{k}{a_0 T_0} \right) + N_{k} + N_{re} \right]. \end{align} One can first solve for $N_{re}$ assuming $w_{re} \neq \frac{1}{3}$ \begin{align}\label{align} N_{re}= \frac{4}{ (1-3w_{re} )} \left[- \frac{1}{4} \ln \left(\frac{3^2 \cdot 5}{\pi^2 g_{re}} \right) - \frac{1}{3} \ln \left(\frac{11 g_{re}}{43} \right) - \ln \left(\frac{k}{a_0 T_0} \right) - \ln \left(\frac{ V_{end}^{\frac{1}{4}}}{ H_{k} } \right) - N_{k} \right] \, . \end{align} Notice that the values of the last two terms in Eq.~(\ref{align}) depend on the specific inflationary model. Assuming $g_{re} \approx 100$ and using Planck's pivot of $0.05\,{\rm Mpc}^{-1}$\footnote{The convention in the Planck analysis defines the pivot scale such that the comoving momentum $k$ becomes horizon sized when $k a_0 = a H$, where we have been using $k = a H$, so in our conventions $\frac{k}{a_0} = 0.05\,{\rm Mpc}^{-1}$.}, one obtains a simplified expression for $N_{re}$, before specifying a particular inflationary model: \begin{align}\label{eq12} N_{re}= \frac{4}{ (1-3w_{re} )} \left[61.6 - \ln \left(\frac{ V_{end}^{\frac{1}{4}}}{ H_{k} } \right) - N_{k} \right].
\end{align} One can then use Eq.~(\ref{eq8}) to obtain \begin{align}\label{mp1} T_{re}= \left[ \left(\frac{43}{11 g_{re}} \right)^{\frac{1}{3}} \frac{a_0 T_0}{k} H_{k} e^{- N_{k}} \left[\frac{3^2 \cdot 5 V_{end}}{\pi^2 g_{re}} \right]^{- \frac{1}{3(1 + w_{re})}} \right]^{\frac{3(1+ w_{re})}{3 w_{re} -1}}. \end{align} \subsection{Special case $w_{re}=\frac{1}{3}$} The final result for $N_{re}$ in Eq.~(\ref{eq12}) only applies for $w_{re} \neq 1/3$. Going back to Eq.~(\ref{eq4}), notice that if $w_{re} = \frac{1}{3}$, $N_{re}$ cancels from both sides of the equation, and one is left with \begin{align} 0 = \frac{1}{4} \ln \left(\frac{30}{\pi^2 g_{re}} \right) + \frac{1}{4} \ln \left(\frac{3}{2} \right) + \ln \left(\frac{ V_{end}^{\frac{1}{4}}}{H_{k}} \right) + \frac{1}{3} \ln \left(\frac{11 g_{re}}{43} \right) + \ln \left(\frac{k}{a_0 T_0} \right) + N_{k} \,. \end{align} Assuming $g_{re} = 100$, and Planck's pivot scale, this simplifies to: \begin{align}\label{eq5} 61.6 = \ln \left(\frac{ V_{end}^{\frac{1}{4}}}{H_{k}} \right) + N_{k} \,. \end{align} For $w_{re}=1/3$, it is not possible to derive a prediction for $N_{re}$ or $T_{re}$; instead, for a particular inflation model, one finds a prediction for $n_s$. Note that the ambiguity in $N_{re}$ and $T_{re}$ arises because we define the start of radiation dominance as the moment $w_{re}$ reaches $1/3$: if $w_{re}$ is already equal to $1/3$ during reheating, the division between the two regimes is ambiguous. \subsection{Model dependent part} In order to solve for $N_{re}$ in Eq.~(\ref{eq12}) (or to solve for $n_s$ in Eq.~(\ref{eq5}) if $w_{re} = 1/3$) for a particular model, one needs to compute $N_{k}$, $H_{k}$, and $V_{end}$. $N_{k}$ can be calculated starting from the definition of e-foldings: \begin{align}\label{eqq1} \Delta N = \int H dt\,. \end{align} Recasting the r.h.s.
of (\ref{eqq1}) as an integral over $\phi$ and using the background equation of motion for the inflaton, $3 H \dot{\phi} + V' \simeq 0$, and the Friedmann equation, $H^2 \simeq V/(3 M_P^2)$, one finds \begin{align}\label{eq13} N_k \simeq \frac{1}{M_P^2} \int_{\phi_{end}}^{\phi_k} \frac{V}{V'}\, d \phi\,. \end{align} Next, $H_{k}$ can be written as a function of $n_s$. Using the definition of the tensor-to-scalar ratio $r = P_h/P_{\zeta}$ (where $P_h =(2 H^2)/(\pi^2 M_P^2)$ and $P_{\zeta} = A_s$ at the pivot scale) \begin{align} r_{k} = \frac{2 H_k^2}{ \pi^2 M_P^2 A_s}. \end{align} Then using $r =16 \epsilon$ this gives \begin{align}\label{eq14} H_{k} \simeq \pi M_P \sqrt{8 A_s \epsilon_{k}}. \end{align} Once the form of $V(\phi)$ is specified for a given model, one can express $V_{end}$ as a function of model parameters calculated at the pivot scale. The explicit form of $V_{end}$ along with (\ref{eq13}) and (\ref{eq14}) can be plugged into Eqs.~(\ref{eq12}) and (\ref{mp1}) to derive $N_{re}$ and $T_{re}$ as a function of inflationary model parameters (or into Eq.~(\ref{eq5}) in the case $w_{re} = 1/3$). \section{Polynomial potentials} \label{sec3} Consider a polynomial-type potential \begin{align}\label{eqqq1} V = \frac{1}{2} m^{4 - \alpha} \phi^{\alpha}. \end{align} This potential was considered in the context of reheating in \cite{Martin:2006rs,Martin:2010kz,Dai:2014jja,Martin:2014nya,Martin:2014vha}. We quickly review this specific application. At the end of this section, we discuss with some quantitative examples how the constraints from inflation compare with those from reheating.\\ The first step is to calculate the model dependent parameters in Eq.~(\ref{eq12}), i.e. $N_k$, $H_k$, and $V_{end}$. The number of e-folds between the time the pivot scale exited the Hubble radius and the end of inflation can be derived using Eq.~(\ref{eq13}) \begin{align} N_k = \frac{1}{2 \alpha M_P^2} \left(\phi_k^2 - \phi_{end}^2 \right) .
\end{align} The potential in these polynomial models is generally steep enough so that $\phi_k \gg \phi_{end}$ and it is appropriate to approximate \begin{align}\label{eqq33} N_k \approx \frac{1}{2 \alpha M_P^2} \phi_k^2\, . \end{align} We now require $N_k$ as a function of $n_s$. From the expression of the spectral index as a function of the slow-roll parameters, $n_s= 1 - 6 \epsilon + 2 \eta$ (where $\epsilon = (M_P^2/2)(V'/V)^2$ and $\eta = M_P^2 V''/V$), and using (\ref{eqq33}) to rewrite $\epsilon$ and $\eta$ as functions of $N_k$, one finds \begin{align} N_k = \frac{\alpha + 2}{2 (1 - n_{s})}. \end{align} From Eq.~(\ref{eq14}) and using the previous equation, $H_k$ is given by \begin{align} H_k =\pi M_{P}\sqrt{\frac{4\alpha A_{s}}{\alpha+2}(1-n_{s})}. \end{align} Lastly one computes $V_{end}$ in terms of $n_s$ and $A_s$, \begin{align}\label{eqqq2} V_{end}=3 M_P^2 H_k^2 \left(\frac{\phi_{end}}{\phi_k}\right)^{\alpha}= \frac{12\, \alpha}{\alpha+2}\, \pi^2 M_P^4 A_s (1- n_s) \left (\frac{\alpha (1- n_s)}{2 (\alpha + 2)} \right)^{\frac{\alpha}{2}}, \end{align} where the value of the inflaton field at the end of inflation was computed by solving for $\phi_{end}$ from the condition $\epsilon = 1$.\\ Thus $N_k$, $H_k$, and $V_{end}$ are all expressed as functions only of $\alpha$, $n_s$ and $A_s$ and one may plot $N_{re}$ (and $T_{re}$) as a function of $n_{s}$ for some fixed values of $w_{re}$ and $\alpha$. We use $n_{s}=0.9682\pm0.0062$ and Planck's central value $A_{s}=2.196\times 10^{-9}$ (small variations in $A_{s}$ have negligible effects on reheating predictions). \\ \begin{figure} \centering \includegraphics[width=16cm]{polynomialplotnew4.pdf} \caption{Plots of $N_{re}$ and $T_{re}$, the length of reheating and the temperature at the end of reheating respectively, for polynomial potentials with exponent $\alpha$. The solid red line corresponds to $w_{re} = -1/3$, the dashed green line to $w_{re} = 0$, the dotted blue line to $w_{re} = 2/3$, and the dot-dashed black line to $w_{re} =1$.
The pink shaded region corresponds to the $1 \sigma$ bounds on $n_s$ from Planck. The purple shaded region corresponds to the $1 \sigma$ bounds of a future CMB experiment with sensitivity $\pm 10^{-3}$ \cite{Amendola:2012ys,Andre:2013afa}, using the same central $n_s$ value as Planck. Temperatures below the dark green shaded region are ruled out by BBN. The light green shaded region is below the electroweak scale, assumed to be 100 GeV for reference. This region is not disallowed but would be interesting in the context of baryogenesis.} \label{fig:a} \end{figure} We plot in Fig.~\ref{fig:a} $N_{re}$ and $T_{re}$ predictions for $\alpha = 2/3$, 1, 2 and 4. The case $\alpha = 2/3$ is favored by axion-monodromy models, and $\alpha=1$ and $\alpha =2$ give promising predictions when compared with the Planck data. The case $\alpha =4$ is difficult to reconcile with $w_{re}\leq 1$ even considering the 2$\sigma$ bounds on $n_{s}$\footnote{{An exception where $\phi^{4}$ may still be viable is in the context of warm inflation \cite{Bartrum:2013fia,Bastero-Gil:2014oga}.}}. \\ Instantaneous reheating is defined as the limit $N_{re} \rightarrow 0$, visualized in the figure as the point where all the lines converge. Such instantaneous reheating leads to the maximum temperature at the end of reheating, and the equation of state parameter is irrelevant in this limit. \\ (Thus, while not shown, a $w_{re} = \frac{1}{3}$ solution would correspond to a vertical line passing through the instantaneous reheat point.) From Fig.~\ref{fig:a}, $\alpha =2/3$ can be consistent with Planck bounds, but assuming an equation of state $w_{re} \geq 0$, the model would tend to predict smaller reheating temperatures if one considers Planck's $1 \sigma$ bound on $n_{s}$; using Planck's 2$\sigma$ bounds, any reheating temperature up to the maximum instantaneous case is still allowed.
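Putting the pieces together, Eq.~(\ref{eq12}) can be evaluated numerically for the polynomial models. The following sketch (in Planck units, $M_P=1$; the helper names are ours, not from the text) combines $N_k$, $\epsilon_k$, $H_k$ and $V_{end}$ as functions of $\alpha$ and $n_s$:

```python
import math

A_S = 2.196e-9  # Planck central value used in the text

def n_re_poly(alpha, n_s, w_re, A_s=A_S):
    """N_re for V ∝ phi^alpha from Eq. (eq12), in Planck units (M_P = 1).

    Uses N_k = (alpha+2)/(2(1-n_s)), eps_k = alpha(1-n_s)/(2(alpha+2)),
    H_k = pi sqrt(8 A_s eps_k), and V_end = 3 H_k^2 eps_k^(alpha/2),
    since (phi_end/phi_k)^2 = eps_k.  Valid only for w_re != 1/3.
    """
    eps_k = alpha * (1.0 - n_s) / (2.0 * (alpha + 2.0))
    N_k = (alpha + 2.0) / (2.0 * (1.0 - n_s))
    H_k = math.pi * math.sqrt(8.0 * A_s * eps_k)
    V_end = 3.0 * H_k**2 * eps_k ** (alpha / 2.0)
    bracket = 61.6 - math.log(V_end**0.25 / H_k) - N_k
    return 4.0 / (1.0 - 3.0 * w_re) * bracket
```

Note that the bracket is independent of $w_{re}$, so the predictions for two different equations of state differ only by the prefactor $4/(1-3w_{re})$; this is why all curves in Fig.~\ref{fig:a} cross zero (instantaneous reheating) at the same $n_s$.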
For $\alpha = 1$ and $\alpha = 2$ all the lines in Fig.~\ref{fig:a} are shifted towards the central value of $n_{s}$ when compared to the $\alpha=2/3$ case, thus allowing for a wider range of reheating temperatures as well as values of the equation of state parameter.\\ Consider now the case $w_{re} =1/3$. Solving Eq.~(\ref{eq5}) for the polynomial potential, one obtains \begin{align}\label{specific1} 61.6= \frac{1}{4} \ln \left(\frac{3 \alpha}{4 \pi^2 A_s (\alpha + 2)}\right) + \frac{\alpha + 2}{2 (1-n_s)}. \end{align} Using Planck's central value for $A_s$, Eq.~(\ref{specific1}) gives specific predictions for $n_s$ \begin{equation} \begin{cases} n_s = 0.977 \quad\quad \text{for} \quad\alpha = \frac{2}{3}\,,\\ n_s = 0.974 \quad\quad \text{for} \quad\alpha = 1\,,\\ n_s = 0.965 \quad\quad \text{for} \quad\alpha = 2\,.\\ \end{cases} \end{equation} Notice that larger values of $\alpha$ require smaller values of $n_s$. With the $2 \sigma$ bounds on $n_s$ from Planck, $0.956 < n_s < 0.981$, $w_{re} = \frac{1}{3}$ would be consistent with all three values of $\alpha$.\\ \begin{figure}[H] \centering \includegraphics[width=14cm]{phi23new.pdf} \caption{Parameter space for $\phi^{2/3}$ inflation. The figures show $r$ and $N_k$ predictions that give the correct $A_s$ for the plotted $n_s$ at the pivot scale. The green portion of the line comprises the region of parameter space corresponding to reheating models with $w_{re}>1$, the yellow part corresponds to $w_{re} >1/3$, red to $w_{re} <1/3$ and orange to $w_{re}<0$. Note the most likely $w_{re}$, between $0$ and $1/3$, falls in the red region.} \label{fig:b} \end{figure} \begin{figure}[H] \centering \includegraphics[width=14cm]{phi11new.pdf} \caption{Parameter space for $\phi$ inflation. Shading is as for Fig.~(\ref{fig:b}). } \label{fig:b1} \end{figure} \begin{figure}[H] \centering \includegraphics[width=14cm]{phi22new.pdf} \caption{Parameter space for $\phi^2$ inflation. Shading is as for Fig.~(\ref{fig:b}). 
} \label{fig:b2} \end{figure} Figs.~\ref{fig:b}-\ref{fig:b2} show the parameter space in the $r$ and $N_k$ vs. $n_s$ plane, corresponding to the different reheating scenarios. We allow for any $N_k > 19$. We note again that there is no maximum allowed $N_k$. A minimum on $N_k$ is determined by the temperature at the end of reheating, required in order to solve the horizon and flatness problems. One finds that $N > 24.9$ if reheating after inflation is to be above the BBN scale and $N >34.8$ for reheating above the electroweak scale, in order for scales on the order of the horizon today (i.e.\ $l=2$) to have left the horizon during inflation. A simple estimate of the ratio of expansion scales between $l=2$ and Planck's pivot scale, at $l \approx 685.8$, if the expansion rate during inflation were constant, is $\Delta N \approx \ln (l_2/l_1) \approx 5.8$. However, in the large-field models we are considering, the variation in $H$ is not negligible and the exact $\Delta N = \ln ( \frac{k_2\, H_1}{k_1\, H_2})$ is closer to $\Delta N \approx 5.9$. This means that for reheating greater than the BBN scale, one finds $N_k \geq 19$ (or $N_k \geq 29$ for reheating above the electroweak symmetry breaking scale). \\ The green part of the line in Fig.~\ref{fig:b} corresponds to the region of parameter space that requires reheating models with $w_{re}$ larger than one, the yellow part corresponds to $w_{re} >1/3$, red to $w_{re} <1/3$ and orange to $w_{re}<0$.
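The constant-$H$ estimate and the resulting lower bounds on $N_k$ are simple arithmetic; the following sketch reproduces them (the 24.9 and 34.8 thresholds are taken from the text):

```python
import math

# E-folds separating the horizon exit of l = 2 from the Planck pivot
# (l ≈ 685.8), treating H as constant during inflation:
delta_N_const_H = math.log(685.8 / 2.0)   # ≈ 5.8; with the variation of H: ≈ 5.9

# Total e-folds needed after the l = 2 mode exits: 24.9 (BBN) or 34.8
# (electroweak).  The pivot-scale N_k is smaller by Delta N ≈ 5.9:
N_k_min_bbn = 24.9 - 5.9   # ≈ 19
N_k_min_ew = 34.8 - 5.9    # ≈ 29
```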
We stress that a value of $w_{re}$ between $0$ and $1/3$ is most likely and these solutions fall in the red band in Fig.~\ref{fig:b}.\\ One can see that requiring $0\leq w_{re} \leq 1/3$ corresponds to respectively setting an upper and a lower bound on the tensor-to-scalar ratio \begin{equation} \begin{cases} 0.05 \leq r \leq 0.06 \quad\quad \text{for} \quad\alpha = \frac{2}{3}\,,\\ 0.07 \leq r \leq 0.09 \quad\quad \text{for} \quad\alpha = 1\,,\\ 0.14 \leq r \leq 0.18 \quad\quad \text{for} \quad\alpha = 2\,.\\ \end{cases} \end{equation} Since it now appears that the majority of BICEP2's signal is due to dust \cite{Ade:2015tva}, it is difficult to find a viable reheating scenario for $\phi^2$ inflation; if we loosen our restriction to just requiring $w_{re}<1$, then one obtains a bound $r\geq 0.11$, which is just inside the 2$\sigma$ limit \cite{Ade:2015tva}. \\ The assumption $0\leq w_{re}\leq 1/3$ results in tighter constraints on $r$ than Planck's 2$\sigma$ bound on $n_{s}$ alone. For $\phi^2$, the $n_{s}$ 2$\sigma$ bound yields $0.08 \leq r \leq 0.18$. Restricting $w_{re}$ also provides stronger constraints on $N_{k}$: for $\phi^2$, the $n_{s}$ 2$\sigma$ bound yields $45 \leq N_{k}\leq 103$, whereas $0\leq w_{re}\leq 1/3$ yields $44 \leq N_{k}\leq 57$. \section{Starobinsky model} \label{sec4} The action for the Starobinsky model \cite{Starobinsky:1980te} has the form \begin{align}\label{seq1} S = \int d^4 x \sqrt{-g} \left[\frac{M_P^2}{2} (R + \alpha R^2) + \mathcal{L}_{matter} \right], \end{align} where $R$ is the Ricci scalar.
Performing a conformal transformation \cite{Wetterich:1987fk,Kalara:1990ar} \begin{align} \tilde{g}_{\mu \nu} = \omega^2 g_{\mu \nu}, \end{align} where $\omega^2 = 1 + 2 \alpha R$, the action (\ref{seq1}) is rewritten as the canonical Einstein-Hilbert action plus other terms which form a modified $\mathcal{L}_{matter}$ \begin{align} S = \int d^4 x \sqrt{- \tilde{g}} \left[\frac{M_P^2}{2} \left[ \tilde{R} - \frac{\alpha \phi^2}{(1+ 2 \alpha \phi)^2} - \frac{6 \alpha^2}{(1+ 2 \alpha \phi)^2} (\tilde{\partial} \phi)^2 \right] + \mathcal{L}_{matter} \right], \end{align} where what we now call $\phi$ is equal to $R$, the original, untransformed Ricci scalar. Notice that $\tilde{\partial}^{\alpha}$ carries factors of the metric and is therefore not identical to $\partial^{\alpha}$. Next one defines $\bar{\phi}$, a canonically normalized version of $\phi$ \begin{align} \bar{\phi} = \sqrt{\frac{3}{2}} M_P \ln(1+ 2 \alpha \phi). \end{align} Rewriting the action in terms of $\bar{\phi}$ one finds \begin{align} S = \int d^4 x \sqrt{- \tilde{g}} \left[\frac{M_P^2}{2} \left[\tilde{R} - \frac{1}{4 \alpha} \left(1 - e^{- \sqrt{\frac{2}{3}} \frac{\bar{\phi}}{M_P}} \right)^2 \right] - \frac{1}{2} (\tilde{\partial} \bar{\phi})^2 + e^{- 2 \sqrt{\frac{2}{3}} \frac{\bar{\phi}}{M_P}} \mathcal{L}_{matter} \right]. \end{align} If one assumes that the other fields in $\mathcal{L}_{matter}$ are subdominant during inflation and can be ignored, then one can verify that this Einstein-frame action behaves as normal gravity plus a canonical scalar field with the potential \begin{align} V = \frac{M_P^2}{8 \alpha} \left(1- e^{- \sqrt{\frac{2}{3}} \frac{\bar{\phi}}{M_P}} \right)^2\,.
\end{align} Dropping the bar on $\phi$ from now on, but continuing to work with the canonical version of the field, one can easily compute the number of e-foldings between the horizon exit of the pivot scale and the end of inflation \begin{align} N_k = \frac{1}{M_P^2} \int_{\phi_{end}}^{\phi_k} \frac{V}{V'}\, d \phi=\frac{1}{2 M_P^2} \sqrt{\frac{3}{2}} \left[M_P \sqrt{\frac{3}{2}} e^{\sqrt{\frac{2}{3}} \frac{\phi}{M_P}} - \phi \right] \Big|^{\phi_k}_{\phi_{end}}. \end{align} With the approximations $\phi_k \gg \phi_{end}$, and $M_P e^{\sqrt{\frac{2}{3}} \frac{\phi_k}{M_P}} \gg \phi_k$, the previous expression simplifies to \begin{align} N_k = \frac{3}{4} e^{\sqrt{\frac{2}{3}} \frac{\phi_k}{M_P} }, \end{align} which can be inverted for $\phi_k$ \begin{align}\label{eq9} \phi_k = \sqrt{\frac{3}{2}} M_P \ln \left(\frac{4}{3} N_k \right). \end{align} The next step is to compute $\epsilon_k$ and $\eta_k$ in order to derive $N_k$ as a function of $n_s$ using $n_s = 1 - 6 \epsilon + 2 \eta$. The slow-roll parameters have the following form \begin{align}\label{eq33} \epsilon_k \simeq \frac{3}{4 N_k^2},\quad\quad\quad \eta_k = - \frac{1}{N_k}, \end{align} where Eq.~(\ref{eq9}) was used along with the approximation $N_{k}\gg 1$. From Eq.~(\ref{eq33}) one then finds \begin{align}\label{n1} N_k = \frac{2}{1 - n_s}. \end{align} Using the expressions above, one derives $H_k$ and $V_{end}$ as functions of $n_s$ and $A_s$ \begin{eqnarray}\label{n2} & &H_k = \pi M_P \sqrt{\frac{3}{2} A_s} (1- n_s),\\\label{n3} & &V_{end} = \frac{9}{2} \pi^2 M_P^4 A_s (1-n_s)^2 \frac{ \left(\frac{1}{\frac{\sqrt{3}}{2}+1} \right)^2}{ \left(1 - \frac{3}{8}(1-n_s) \right)^2}. \end{eqnarray} Eqs.~(\ref{n1})-(\ref{n3}) are all that is needed to derive the results for the duration and for the temperature of reheating.
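As a numerical consistency sketch (Planck units; function name ours), Eqs.~(\ref{n1})-(\ref{n2}) and the slow-roll relations can be checked directly: $N_k = 2/(1-n_s)$, $r = 16\epsilon_k = 12/N_k^2$, and substituting $\epsilon_k$, $\eta_k$ back into $n_s = 1-6\epsilon+2\eta$ recovers the input $n_s$ up to the $\mathcal{O}(1/N_k^2)$ terms dropped in the approximation:

```python
import math

def starobinsky(n_s, A_s=2.196e-9, mp=1.0):
    """N_k, H_k and r for the Starobinsky model, from Eqs. (n1)-(n2)."""
    N_k = 2.0 / (1.0 - n_s)
    H_k = math.pi * mp * math.sqrt(1.5 * A_s) * (1.0 - n_s)
    r = 12.0 / N_k**2          # r = 16 eps_k with eps_k = 3/(4 N_k^2)
    return N_k, H_k, r

N_k, H_k, r = starobinsky(0.9682)
# n_s reconstructed from the slow-roll parameters of Eq. (eq33):
n_s_back = 1.0 - 6.0 * (3.0 / (4.0 * N_k**2)) + 2.0 * (-1.0 / N_k)
```

Evaluating $r$ at the endpoints $n_s = 0.953$ and $0.964$ reproduces the range $0.004 \lesssim r \lesssim 0.007$ quoted below for $0<w_{re}<1/3$.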
\begin{figure}[H] \centering \includegraphics[width=9cm]{starobinskyplotnew.pdf} \caption{Plots of $N_{re}$ and $T_{re}$, the length of reheating and the temperature at the end of reheating respectively, for Starobinsky and Higgs inflation. All curves and shaded regions are as for Fig.~\ref{fig:a}.} \label{fig:c} \end{figure} Fig.~\ref{fig:c} shows good compatibility with Planck's $1 \sigma$ bounds on $n_s$ for all the possible $w_{re}$ values. Also, if one does not put any restrictions on the value of $w_{re}$, then any temperature between the BBN bound and the instantaneous reheating value is allowed within the 1$\sigma$ bound. \begin{figure}[H] \centering \includegraphics[width=14cm]{star2new.pdf} \caption{Parameter space for Starobinsky inflation. Shading is as for Fig.~\ref{fig:b}. } \label{figd} \end{figure} \noindent We plot in Fig.~\ref{figd} the parameter space in the $r$ and $N_k$ vs. $n_s$ plane for Starobinsky inflation, for different ranges of $w_{re}$\footnote{{See also \cite{Martin:2014vha,Martin:2014nya} for $n_{s}$ vs $r$ plots in the Starobinsky model for $w_{re}=0$.}}. For $0 <w_{re} <1/3$, the corresponding range for the spectral index is $0.953 < n_s < 0.964$. This also corresponds to the range $0.004 \leq r \leq 0.007$ and $42 \leq N_k \leq 56$. \section{Higgs Inflation} \label{sec5} The idea behind Higgs inflation \cite{Bezrukov:2007ep} is to allow the Standard Model Higgs field to be the inflaton by adding a non-minimal coupling to gravity. The Jordan frame action is \begin{align} S = \int d^4 x \sqrt{-g} \left[\frac{M_P^2}{2} R \left(1 + 2 \xi \frac{H^{\dagger} H}{M_P^2} \right) + \mathcal{L}_{matter} \right]\,, \end{align} where $H$ is the Higgs doublet. We may again perform a conformal transformation to write the action in the form of Einstein gravity plus a modified $\mathcal{L}_{matter}$. The transformation is given by $\tilde{g}_{\mu \nu} = \omega^2 g_{\mu \nu}$ with $\omega^2 = 1 + 2 \xi \frac{H^{\dagger} H}{M_P^2}$.
Rewriting the action in terms of the transformed metric, we find \begin{align} S = \int d^4 x \sqrt{- \tilde{g}} \left[\frac{M_P^2}{2} \tilde{R} - \frac{3 \xi^2}{\omega^4 M_P^2} \left(\tilde{\partial} H^{\dagger} H \right)^2 + \frac{1}{\omega^4} \mathcal{L}_{matter} \right]\,. \end{align} Next, one extracts the kinetic and potential terms for the Higgs field contained within $\mathcal{L}_{matter}$. One can use $V_h = \frac{\lambda}{4} (H^{\dagger} H - \frac{\nu^2}{2})^2$, dropping the $\nu$ part (we are interested in inflation scales much larger than the electroweak scale). Ignoring all the Higgs interactions with other fields, and only considering its self coupling (which we assume is the dominant term in the Higgs potential at inflation scales) \begin{align} S = \int d^4 x \sqrt{- \tilde{g}} \left[\frac{M_P^2}{2} \tilde{R} - \frac{3 \xi^2}{\omega^4 M_P^2} \left(\tilde{\partial} H^{\dagger} H \right)^2 - \frac{1}{\omega^4} \left(\partial H^{\dagger} \right)\left(\partial H \right)- \frac{\lambda}{4 \omega^4} \left (H^{\dagger} H \right)^2 + \frac{1}{\omega^4} \mathcal{L}_{matter} \right]\,, \end{align} where now $\mathcal{L}_{matter}$ comprises all the matter fields except the Higgs. Note that one needs to convert $\partial^{\alpha} \rightarrow \omega^2 \tilde{\partial}^{\alpha}$. The Higgs is no longer canonical because of the effect of the non-minimal coupling. To canonically normalize all four of the Higgs degrees of freedom, one must work in unitary gauge, where three of the four degrees of freedom are set to zero, $H = \frac{1}{\sqrt{2}} \left( \begin{array}{cc} 0 \\ h \end{array} \right)$ \begin{align} S = \int d^4 x \sqrt{- \tilde{g}} \left[\frac{M_P^2}{2} \tilde{R} - \frac{3 \xi^2 h^2}{\omega^4 M_P^2} \left(\tilde{\partial} h \right)^2 - \frac{1}{2 \omega^2} \left(\tilde{\partial} h \right)^2 - \frac{\lambda}{4 \omega^4} h^4 + \frac{1}{\omega^4} \mathcal{L}_{matter} \right]\,.
\end{align} The canonically normalized version of $h$ is $\bar{h}$, defined as \begin{align}\label{app1} \frac{\partial \bar{h}}{\partial h} = \frac{1}{ \left(1 + \frac{\xi}{M_P^2} h^2 \right)} \sqrt{1 + \frac{\xi}{M_P^2} h^2 (6 \xi + 1) }. \end{align} Before integrating the previous equation, it is useful to introduce a few approximations. First one uses $6 \xi \gg 1$; to get a successful inflation model, one should require $\xi \approx 10^4$. Next one uses the condition $(6 \xi)/(M_P^2) h^2 \gg 1$; since $h \approx M_P$ when inflation ends, $h> M_P$ for the duration of inflation. This allows one to rewrite Eq.~(\ref{app1}) as \begin{align} \bar{h}= \frac{\sqrt{6} \xi}{M_P} \int dh\, \frac{h}{1 + \frac{\xi h^2}{M_P^2}}\,, \end{align} which integrates to \begin{align} \bar{h}= \sqrt{\frac{3}{2}} M_P \ln \left(1+ \frac{\xi h^2}{M_P^2} \right). \end{align} Rewriting the action in terms of $\bar{h}$, one finds \begin{align} S = \int d^4 x \sqrt{- \tilde{g}} \left[\frac{M_P^2}{2} \tilde{R} - \frac{1}{2 } \left(\tilde{\partial} \bar{h} \right)^2 - \frac{\lambda M_P^4}{4 \xi^2} \left(1 - e^{- \sqrt{\frac{2}{3}} \frac{\bar{h}}{M_P} } \right)^2 + e^{-2 \sqrt{\frac{2}{3}} \frac{\bar{h}}{M_P} } \mathcal{L}_{matter} \right]\,. \end{align} The potential term for the canonical field takes the same form as the Starobinsky potential with the identification $1/(8 \alpha) = (\lambda M_P^2)/(4 \xi^2)$. Since we have a canonical field evolving in the same potential as the Starobinsky case, the Higgs inflation model gives the same predictions for $N_{re}$ and $T_{re}$ (see also \cite{Martin:2014vha,Martin:2014nya}). We note that Starobinsky and Higgs inflation have different low-scale behavior \cite{Bezrukov:2008ut,GarciaBellido:2008ab,Bezrukov:2011gp,Gorbunov:2012ns,Kehagias:2013mya}, and so while the allowed parameter space as a function of $w_{re}$ is the same, the value of $w_{re}$ most likely to be realized may differ between the two models.
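The closed-form $\bar h(h)$ can be checked against a direct numerical integration of the approximate integrand used above. This is an illustrative sketch (function names ours) in units $M_P=1$:

```python
import math

def hbar_closed(h, xi):
    """sqrt(3/2) ln(1 + xi h^2), the large-xi closed form (M_P = 1)."""
    return math.sqrt(1.5) * math.log(1.0 + xi * h * h)

def hbar_numeric(h, xi, n=100000):
    """Midpoint integration of dhbar/dh = sqrt(6) xi h / (1 + xi h^2)."""
    dh, total = h / n, 0.0
    for i in range(n):
        x = (i + 0.5) * dh
        total += math.sqrt(6.0) * xi * x / (1.0 + xi * x * x) * dh
    return total
```

For $\xi = 10^4$ and $h = M_P$, both give $\bar h \approx 11.3\,M_P$, confirming the integration step.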
Tighter constraints could be obtained by considering gravitational, Planck-suppressed couplings in the Starobinsky case, and Standard Model couplings in the Higgs case \cite{Bezrukov:2008ut,GarciaBellido:2008ab,Bezrukov:2011gp}. Of course new physics may modify the running of the couplings or add new couplings at these high scales (see for example \cite{Gorbunov:2012ns}); in this respect, our approach of characterizing an allowed parameter space by assuming a range of $w_{re}$ between 0 and 1/3 can usefully help bracket different allowed scenarios. \section{Natural Inflation} \label{sec6} The potential for natural inflation is \cite{Freese:1990rb} \begin{eqnarray} V(\phi)=\Lambda^{4}\left[1+\cos\left(\frac{\phi}{f}\right)\right]\,. \end{eqnarray} The number of e-folds $N_{k}$ between the time the pivot scale modes crossed outside the horizon and the end of inflation is given by \begin{eqnarray}\label{ef} N_{k}=\left(\frac{f}{M_{P}}\right)^{2}\ln\left[\frac{\sin^{2}\left(\chi_{end}/2 \right)}{\sin^{2}\left(\chi_{in}/2\right)}\right], \end{eqnarray} where $\chi\equiv\phi/f$. The slow-roll parameters have the following form \begin{eqnarray}\label{ep} \epsilon=\frac{1}{2}\left(\frac{M_{P}}{f}\right)^{2}\left[\frac{1-\cos(\chi)}{1+\cos(\chi)}\right]\,,\quad\quad\quad \eta=-\left(\frac{M_{P}}{f}\right)^{2}\frac{\cos\chi}{1+\cos\chi}. \end{eqnarray} The field value at the end of inflation can be determined by setting $\epsilon=1$; this leads to the following equation for $\chi_{end}$ \begin{eqnarray} \frac{1}{2}\frac{M_{P}^{2}}{f^{2}}\frac{\sin^{2}\left(\chi_{end}\right)}{\left[1+\cos\left(\chi_{end}\right)\right]^{2}}=1. \end{eqnarray} The solution is \begin{eqnarray} \cos(\chi_{end})=\frac{-1+b}{1+b}\,,\quad\quad\quad\quad b\equiv\left(\frac{M_{P}}{\sqrt{2}f}\right)^{2}. \end{eqnarray} The number of e-folds in Eq.~(\ref{ef}) can be written as \begin{eqnarray} N_{k}=\left(\frac{f}{M_{P}}\right)^{2}\ln\left[\frac{1-\cos(\chi_{end})}{1-\cos(\chi_{in})}\right]=\left(\frac{f}{M_{P}}\right)^{2}\ln\left[\frac{2}{(1+b)}\frac{1}{(1-\cos(\chi_{in}))}\right]. \end{eqnarray} The value of the field at the pivot scale during inflation is then given by \begin{eqnarray}\label{chi} \cos(\chi_{in})=1-z\,,\quad\quad\quad\quad z\equiv \frac{2}{(1+b)}\exp{\left[-N_{k}\left(\frac{M_{P}}{f}\right)^{2}\right]}. \end{eqnarray} Using (\ref{ep}) and (\ref{chi}), one finds \begin{eqnarray} n_{s}-1= -6\epsilon+2\eta=-\left(\frac{M_{P}}{f}\right)^{2}\left(\frac{2+z}{2-z}\right), \end{eqnarray} which leads to \begin{eqnarray} N_{k}=-\left(\frac{f}{M_{P}}\right)^{2}\ln\left[\left(1+\frac{M_{P}^{2}}{2f^{2}}\right)\left(\frac{(1-n_{s})-\frac{M_{P}^{2}}{f^{2}}}{(1-n_{s})+\frac{M_{P}^{2}}{f^{2}}}\right)\right]. \end{eqnarray} Notice that the previous expression is positive and real only if the argument of the logarithm lies between zero and one \begin{eqnarray}\label{cond2} 0 < \left(1+\frac{M_{P}^{2}}{2f^{2}}\right)\left(\frac{(1-n_{s})-\frac{M_{P}^{2}}{f^{2}}}{(1-n_{s})+\frac{M_{P}^{2}}{f^{2}}}\right)< 1. \end{eqnarray} The conditions (\ref{cond2}) are equivalent to requiring that \begin{eqnarray}\label{cond1} \left(\frac{f}{M_{P}}\right)^{2}>\frac{1}{(1-n_{s})} \quad\quad \text{and} \quad\quad 3+n_{s}+\left(\frac{M_{P}}{f}\right)^{2} > 0 . \end{eqnarray} The second condition in (\ref{cond1}) is always true. The first condition implies a minimum $f$ for each $n_s$, and the bound on $f$ increases with increasing $n_s$.
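The first condition in (\ref{cond1}) translates into a minimum axion decay constant, $f_{min} = M_P/\sqrt{1-n_s}$. A one-line sketch (Planck units; function name ours):

```python
import math

def f_min(n_s):
    """Minimum f (in units of M_P) from the condition (f/M_P)^2 > 1/(1 - n_s)."""
    return 1.0 / math.sqrt(1.0 - n_s)
```

For Planck's central value $n_s = 0.9682$ this gives $f_{min} \approx 5.6\,M_P$, and the bound grows as $n_s \rightarrow 1$.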
Using the central value of the Planck constraint on the spectral index, (\ref{cond1}) gives $f>5.6 \,M_{P}$. \\ \noindent The tensor-to-scalar ratio can be expressed in terms of $n_{s}$ \begin{eqnarray} r=16\,\epsilon = 4\left[(1-n_{s})-\frac{M_{P}^{2}}{f^{2}}\right]. \end{eqnarray} Using $V_{in}=\Lambda^{4}\left[1+\cos(\chi_{in})\right]\simeq 3H^{2}M_{P}^{2}$, one finds \begin{eqnarray} V_{end}=3H^{2}M_{P}^{2}\left[\frac{1+(1-n_{s})\left(\frac{f}{M_{P}}\right)^{2}}{2+4\left(\frac{f}{M_{P}}\right)^{2}}\right]. \end{eqnarray} \begin{figure}[H] \centering \includegraphics[width=16cm]{naturalplotnew.pdf} \caption{We show $N_{re}$ and $T_{re}$, the length of reheating and the temperature at the end of reheating respectively, for natural inflation, for three values of the coupling $f$. Again, shading is as in Fig.~\ref{fig:a}.} \label{fig:e} \end{figure} Fig.~\ref{fig:e} shows $N_{re}$ and $T_{re}$ solutions for various reheating parameters $w_{re}$ and for various couplings $f$ in natural inflation. Unlike polynomial inflation or Starobinsky/Higgs inflation, natural inflation has an extra free parameter, and so one no longer gets a precise prediction for the temperature and length of reheating once a reheating model, $w_{re}$, and $n_s$ are specified. But one can get reasonable bounds on the coupling $f$ such that a viable reheating model exists.\\ One can obtain separate, stronger constraints on $f$ based on the requirement of viable reheating. These constraints are likewise functions of $n_s$, and their effects are displayed in Fig.~\ref{fig:f}\footnote{{See also \cite{Martin:2014vha,Martin:2014nya} for $n_s$ vs $r$ plots in the natural inflation model for $w_{re}=0$.}}. \\ There is no upper limit on $f$. For $f \gtrsim 14 M_P$ the various $w_{re}$ lines reach an asymptotic form. As a result, even for very large $f$, there is a valid solution for each $w_{re}$ value consistent with Planck's $1 \sigma$ bounds.
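The tensor-to-scalar ratio above is easy to evaluate numerically, together with its large-$f$ asymptote (a sketch in Planck units; function name ours):

```python
def r_natural(n_s, f):
    """r = 4[(1 - n_s) - (M_P/f)^2] for natural inflation (M_P = 1)."""
    return 4.0 * ((1.0 - n_s) - 1.0 / f**2)

# As f -> infinity, r -> 4(1 - n_s); the asymptotic regime is reached
# for f ≳ 14 M_P.  For f below f_min = 1/sqrt(1 - n_s), r turns negative,
# signalling that no solution exists there.
```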
The asymptotic solution for large $f$ for $0\leq w_{re} \leq 1/3$ corresponds to a solution for the spectral index in the range $0.956\leq n_{s}\leq 0.965$. \\ \begin{figure}[H] \centering \includegraphics[width=14cm]{naturalplanck2new.pdf} \caption{Representation of the $r$ and $N_k$ vs. $n_s$ plane for natural inflation. The colored region is the entire allowed parameter space that can produce the measured $A_s$ value at the pivot scale, and the $n_s$ value at the pivot scale as plotted. Shading again follows Fig.~\ref{fig:b}} \label{fig:f} \end{figure} Note the minimum on $f$ increases with increasing $n_s$, such that for larger $n_s$, a larger $f$ is needed to find a solution consistent with the reheating model being considered. The top limit in Fig.~\ref{fig:f} A (the bottom limit in Fig.~\ref{fig:f} B) is approached asymptotically for large $f$. The bottom part of the parameter space in Fig.~\ref{fig:f} A (top part in Fig.~\ref{fig:f} B) corresponds to small $f$. Everywhere in the figure $N_k > 19$, such that inflation lasts long enough to allow for BBN.\\ Using Planck's $2 \sigma$ bounds on $n_s$, requiring $w_{re} \leq 1$ gives $r \geq 0.02$, and requiring $w_{re} \leq 1/3$ gives $r \geq 0.05$. For $w_{re}\leq 1/3$, values of $n_{s}$ smaller than Planck's central value would be favored for any $f$. The weakest constraint is for large values $f$, for which $n_{s}\leq 0.965$. \section{Hilltop inflation} \label{sec7} The potential is given by \cite{Linde:1981mu,Linde:1984cd} \begin{eqnarray}} \def\ea {\end{eqnarray} V\left(\phi\right)=M^{4}\left[1-\left(\frac{\phi}{\mu}\right)^{p}\right]. \ea We begin by considering $p > 2$. The exact expression for the number of e-foldings between the time the pivot scale crossed outside the horizon and the end of inflation is \begin{eqnarray}} \def\ea {\end{eqnarray}\label{N} N_{k}=\frac{\mu^{2}}{2p M_{P}^{2}}\left[\chi_{in}^{2}-\chi_{end}^{2}+\frac{2}{p-2}\chi_{in}^{2-p}-\frac{2}{p-2}\chi^{2-p}_{end}\right]. 
\end{eqnarray} where one defines $\chi\equiv\phi/\mu$.\\ The slow-roll parameters are \begin{eqnarray}\label{epsilon} \epsilon=\frac{p^{2}}{2}\frac{M_{P}^{2}}{\mu^{2}}\frac{\chi^{2(p-1)}}{\left(1-\chi^{p}\right)^{2}}\,,\quad\quad\eta\equiv -p(p-1) \frac{M_{P}^{2}}{\mu^{2}} \frac{\chi^{p-2}}{\left(1-\chi^{p}\right)}. \end{eqnarray} Setting $\epsilon=1$ at the end of inflation one derives the equation for $\chi_{end}$ \begin{eqnarray}\label{chiend} \frac{p^{2}}{2}\frac{M_{P}^{2}}{\mu^{2}}\frac{\chi_{end}^{2(p-1)}}{\left(1-\chi_{end}^{p}\right)^{2}}=1. \end{eqnarray} Let us consider the case where $\mu > M_{P}$ and define $q\equiv M_{P}/\mu$. For small values of $q$, one can search for a solution for $\chi_{end}$ in the form of a Taylor expansion around $q=0$ \begin{eqnarray} \chi_{end}=a_{0}+a_{1}\, q+\frac{1}{2}\,a_{2} \,q^{2}+\mathcal{O}(q^{3}). \end{eqnarray} One can show that, up to order $q^{2}$, a solution to Eq.~(\ref{chiend}) is (see e.g. \cite{Martin:2014vha}) \begin{eqnarray}\label{solchiend} \chi_{end}=1-\frac{1}{\sqrt{2}}\,q+\frac{(p-1)}{4}\,q^{2}. \end{eqnarray} Similarly, one can look for a solution for the initial value of the scalar field using (\ref{solchiend}) and (\ref{N}), to find \begin{eqnarray} \chi_{in}=1-\sqrt{\frac{1+4\,N_{k}}{2}}\,q +\mathcal{O}(q^{2}). \end{eqnarray} Using (\ref{solchiend}) in the expression for the slow-roll parameters, (\ref{epsilon}), the spectral index as a function of $N_{k}$ is \begin{eqnarray}\label{ns} n_{s}-1\simeq -\frac{6}{1+4 N_{k}}, \quad\quad\quad\longrightarrow\quad\quad\quad N_{k}\simeq \frac{1}{4}\left(\frac{6}{1-n_{s}}-1\right). \end{eqnarray} The tensor-to-scalar ratio is \begin{eqnarray}\label{r} r\simeq \frac{8}{3}(1-n_{s}). \end{eqnarray} Notice that (\ref{ns}) and (\ref{r}) only apply for small values of $q$, more precisely for \begin{eqnarray} q < \frac{\sqrt{2}}{(p-1)\sqrt{1+4 \,N_{k}}}.
\end{eqnarray} For $p\in (3,8)$ and for $N_{k}\in (30,100)$, the previous condition is satisfied if $q\leq 0.01$.\\ Within the same range of validity, the potential at the end of inflation is given by \begin{eqnarray}\label{pot} V_{end}\simeq \sqrt{\frac{3}{2}}H^{2}M_{P}^{2}\sqrt{1-n_{s}}. \end{eqnarray} For $p\leq 2$ one derives the same results as Eqs.~(\ref{ns})-(\ref{r}) and (\ref{pot}). For $p=2$, however, a new expression for $N_k$ is required. In this case we find \begin{eqnarray}\label{N2} N_{k}=\frac{\mu^{2}}{2 M_{P}^{2}}\left[\frac{\chi_{in}^{2}}{2}-\frac{\chi_{end}^{2}}{2} - \ln \chi_{in} + \ln \chi_{end} \right]. \end{eqnarray} \begin{figure}[H] \centering \includegraphics[width=16cm]{hilltopp2new.pdf} \caption{Plots of $N_{re}$ and $T_{re}$, for hilltop inflation with $p=2$ and for three different values of $\mu$. Shading is as in Fig.~\ref{fig:a}.} \label{fig:g} \end{figure} \begin{figure} \centering \includegraphics[width=16cm]{hilltopp3new.pdf} \caption{Plots of $N_{re}$ and $T_{re}$, for hilltop inflation with $p=3$ and for three different values of $\mu$. Shading is as in Fig.~\ref{fig:a}.} \label{fig:h} \end{figure} \begin{figure} \centering \includegraphics[width=16cm]{hilltopp4new.pdf} \caption{Plots of $N_{re}$ and $T_{re}$, for hilltop inflation with $p=4$ and for three different values of $\mu$. Shading is as in Fig.~\ref{fig:a}.} \label{fig:i} \end{figure} \noindent The plots for hilltop inflation are derived using a numerical procedure and therefore they convey more information than one would obtain with the above analytic results, since they cover a range in which the latter would not apply (i.e.
for smaller values of $\mu$)\footnote{{Notice that hilltop inflation was previously studied in the context of reheating in \cite{Martin:2010kz,Martin:2014vha,Martin:2014nya}.}}.\\ \noindent Note that for $p = 1$ the potential is just a straight line, and so should give the same predictions as the $V \propto \phi$ inflation model considered above. In Figs.~\ref{fig:g}-\ref{fig:i} we therefore plot $N_{re}$ and $T_{re}$ for various reheating scenarios parametrized by $w_{re}$, and various values of $\mu$ for $p = 2$ and larger.\\ \noindent Just as with natural inflation, hilltop inflation has two free parameters, in this case $M$ and $\mu$. This extra freedom means that for each $p$ value, there are $\mu$ values that are readily consistent with Planck data and $\mu$ values that are not. One can give bounds on $\mu$ for each $p$ model such that reasonable reheating solutions exist, and these results are shown in Figures \ref{fig:j1} and \ref{fig:j2}. \\ \begin{figure} \centering \includegraphics[width=14cm]{hilltoppplanck.pdf} \caption{Plot of the parameter space in the $r$ vs. $n_{s}$ plane for the three hilltop models with $p =2$, 3, and 4. Shading is as in Fig.~\ref{fig:b}.} \label{fig:j1} \end{figure} \begin{figure} \centering \includegraphics[width=14cm]{hilltoppplancknk.pdf} \caption{Plot of the parameter space in the $N_k$ vs. $n_s$ plane for the three hilltop models with $p =2$, 3, and 4. Shading is as in Fig.~\ref{fig:b}.} \label{fig:j2} \end{figure} \noindent As with the bound on $f$ for natural inflation, there is a minimum on $\mu$ required for $p=2$ to get any solution at all, even before reheating is considered, and that bound is a function of $n_s$. This $\mu_{min}$ corresponds to $r \rightarrow 0$, so in this case there is no minimum on $r$ before reheating is considered. For $p=3$, $4$ there is no such minimum on $\mu$ to get a solution in the regime $\mu\geq M_{P}$.
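The analytic hilltop expressions above can be spot-checked numerically. The sketch below (not part of the paper's numerical procedure; $p=4$, $q=0.01$ and $n_{s}=0.96$ are illustrative values inside the stated range of validity) verifies that the Taylor solution for $\chi_{end}$ indeed satisfies $\epsilon=1$, and evaluates the relations $N_{k}\simeq\frac{1}{4}(6/(1-n_{s})-1)$ and $r\simeq\frac{8}{3}(1-n_{s})$:

```python
from math import sqrt

def eps(chi, p, q):
    """Slow-roll parameter for V = M^4 [1 - (phi/mu)^p], with chi = phi/mu, q = M_P/mu."""
    return 0.5 * p**2 * q**2 * chi**(2 * (p - 1)) / (1.0 - chi**p)**2

def chi_end(p, q):
    """Taylor solution of eps(chi_end) = 1: chi_end = 1 - q/sqrt(2) + (p-1) q^2 / 4."""
    return 1.0 - q / sqrt(2.0) + (p - 1) * q**2 / 4.0

p, q = 4, 0.01                          # illustrative values with q <= 0.01
print(eps(chi_end(p, q), p, q))          # ~1, as required at the end of inflation

ns = 0.96                                # illustrative spectral index
N_k = 0.25 * (6.0 / (1.0 - ns) - 1.0)    # e-folds implied by n_s
r = (8.0 / 3.0) * (1.0 - ns)             # tensor-to-scalar ratio
print(N_k, r)
```

For $n_{s}=0.96$ this gives $N_{k}=37.25$ and $r\simeq 0.107$, illustrating how a measured $n_{s}$ fixes both $N_{k}$ and $r$ in the small-$q$ regime.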
There appears to be no observational constraint from Planck for very large values of $\mu$. \\ The upper bounds in the plots in Figure \ref{fig:j1} (the lower bounds in Figure \ref{fig:j2}) correspond to larger $\mu$, and the lower bounds in Figure \ref{fig:j1} (or the upper bounds in Figure \ref{fig:j2}) correspond to smaller $\mu$. Using the 2$\sigma$ bounds on $n_s$ and requiring $w_{re} \leq 1/3$ gives the following lower bounds on the tensor-to-scalar ratio: $r \geq 0.02$ (for $p=2$), $r \geq 0.007$ ($p=3$), and $r\geq 0.003$ ($p=4$). Using the central value of $n_s$ and requiring $w_{re} \leq 1/3$ gives the bounds: $r \geq 0.03$ (for $p=2$ and $p=3$) and $ r \geq 0.02$ ($p=4$). The region in parameter space that is associated with these more likely values of $w_{re}$ then allows for fairly small $r$ values. \section{Discussion and Conclusions} \label{sec8} \indent Inflation includes a wide variety of models that give similar predictions for the fairly small number of available inflationary observables. The physics of reheating can provide an additional opportunity to break this degeneracy. While CMB fluctuations themselves do not supply direct probes of the physics during the reheating era, the details of reheating affect the predictions for inflation (and vice versa) because they determine the nature of the cosmic thermal history after inflation (see e.g., \cite{Martin:2006rs,Lorenz:2007ze,Martin:2010kz,Adshead:2010mc,Mielczarek:2010ag,Easther:2011yq,Dai:2014jja,Martin:2014nya}). Although we do not know exactly what occurred during reheating, we can make reasonable assumptions, such as that the average equation of state during reheating was very likely between 0 and $1/3$.
This leads to independent constraints on observables like $n_s$ and $r$, which can then be tested against CMB data.\\ \indent One can parametrize our ignorance about reheating in terms of an equation of state, $w_{re}$, a length $N_{re}$ (measured in terms of the number of e-folds elapsed from the end of inflation), and a final temperature, $T_{re}$. For any given inflationary model, one can write relations between the specific model parameters, the amplitude of the scalar power spectrum $A_{s}$, the spectral index $n_{s}$, and the reheating parameters ($w_{re}$, $N_{re}$ and $T_{re}$). These relations are derived by accounting for the total expansion history between the time the observable CMB modes crossed outside the Hubble radius during inflation and the time of observation, and employing a continuity equation for the energy density during the different cosmological epochs. We also assume that $w_{re}$ is constant. For single-field models the derivation is particularly straightforward. The main results are summarized in Eq.~(\ref{mp1}) for $T_{re}$ and $w_{re}$ (and in Eq.~(\ref{eq12}) for $N_{re}$ and $w_{re}$) as a function of inflationary parameters and observables. \\ {We consider a broad range for the equation of state parameter, $-1/3 \leq w_{re} \leq 1$, and the corresponding limits on CMB observables for different inflationary models. } We notice that a $\phi^2$ potential would favor relatively large values of $r$: a reheating model with $w_{re} \leq 1$ implies $r \geq 0.11$; allowing for a reheating model with $w_{re} \leq 1/3$, which is very probable, requires $r \geq 0.14$. Since it appears that BICEP2's signal is due to dust rather than primordial gravitational waves, it is difficult to reconcile $\phi^2$ inflation with the data. We also consider Starobinsky/Higgs inflation, natural inflation and the hilltop models. For Starobinsky and Higgs inflation, requiring $w_{re}\leq 1/3$ corresponds to $r\geq 0.004$.
Because natural and hilltop inflation models have two free parameters, there are ranges of parameter space that can fit the data well for any value of the reheating parameter $w_{re}$. For natural inflation, we find that Planck's 2$\sigma$ bound on $n_{s}$ favors a tensor-to-scalar ratio $r \geq 0.05$ for $w_{re} \leq 1/3$ (Fig.~\ref{fig:f}). For the same range of $w_{re}$, the hilltop model, on the other hand, allows for smaller $r$ values, specifically $r \geq 0.02$ for $p=2$, $r \geq 0.007$ for $p =3$, or $r \geq 0.003$ for $p =4$ (Fig.~\ref{fig:j1}). \\ We show this parameter space in Fig.~(\ref{fig:aa}), where we recreate a version of Figure 12 in the recent Planck Constraints on Inflation paper, using their 1 and 2 $\sigma$ TT, TE, EE + lowP constraints on $n_s$ and $r$ \cite{Ade:2015oja}. To get their model parameter space they impose $N_k$ between 50 and 60. Here, instead, we do not specify $N_k$ but plot the parameter space for which there exists a reheating solution with $0 \leq w_{re} \leq \frac{1}{3}$. Constraining models in this way, using $w_{re}$, is a model-dependent but straightforward and well-motivated way of representing the parameter space. \\ \begin{figure} \centering \includegraphics[width=11cm]{planckplotnsr.pdf} \caption{We recreate a version of Figure 12 in the recent Planck Constraints on Inflation paper, using their 1 and 2 $\sigma$ TT, TE, EE + lowP constraints on $n_s$ and $r$ \cite{Ade:2015oja}, but plotting the parameter space for models such that there exists a reheating solution for $0 \leq w_{re} \leq \frac{1}{3}$, as opposed to Planck's choice of parameter space for which there is a solution with $N_k$ between 50 and 60.
Following the conventions in Planck's version of the plot, the green line is $\phi^3$, the black is $\phi^2$, the pink is $\phi^{4/3}$, the yellow $\phi$, the red $\phi^{2/3}$, the orange the Starobinsky/Higgs model, the purple region is natural inflation, and the green region is the quartic hilltop model.} \label{fig:aa} \end{figure} \indent {To conclude, we find that considering broad, well-motivated physical constraints on the reheating equation of state indeed allows one to narrow the viable parameter space for inflation models, offering an improvement over merely specifying whether or not an inflation model can reproduce the correct predictions at the pivot scale. These methods will become increasingly effective with future, more precise CMB data.}\\ \noindent{{\it Note added: \\ Our analysis was initially performed considering the Planck 2013 results for the scalar power spectrum parameters. Just after completion, but before submission, Planck released their 2015 data, so we have updated our analysis using the new observational bounds on $n_{s}$ and $A_{s}$. All presented results are now based on the Planck 2015 data. While completing the first version of this work, it was brought to our attention that a similar approach was carried out by \cite{Munoz:2014eqa}. Some of the results on natural inflation were reproduced in \cite{Munoz:2014eqa} and, where there is overlap, we find agreement if we consider the Planck 2013 bounds on the scalar power spectrum parameters. Furthermore, when this work was near completion, two papers concerning Higgs inflation and reheating were released \cite{Cai:2015soa, Gong:2015qha}. We find agreement with the results of \cite{Cai:2015soa} if we consider their pivot scale ($k_{p}=0.002\,{\rm Mpc}^{-1}$ as opposed to $0.05\,{\rm Mpc}^{-1}$). }} \acknowledgments It is a pleasure to thank M. Liguori and J. Wang for useful correspondence. This research was supported in part by a grant from the DOE. \bibliographystyle{JHEPmodplain}
\section{Introduction} \IEEEPARstart{D}{ynamics} of piecewise-smooth systems are encountered in various applications in electrical engineering and physics: the controlled buck converter \cite{deane}, the boost converter in discontinuous mode \cite{tse}, impact oscillators \cite{grebogi}, etc. Significant theoretical understanding has been developed for systems with continuous maps. Theory for piecewise-smooth maps has been partially developed in \cite{bernardo}. Results related to the existence and stability of period-1 and period-2 fixed points in discontinuous maps have been reported in \cite{bernardo} and \cite{dutta}. Analysis of bifurcation in piecewise-smooth systems has been shown in \cite{nusse} and \cite{yorke}. Most research efforts to date have focused on analyzing piecewise-smooth systems through bifurcation diagrams and numerical simulation (e.g. \cite{1d}, \cite{border}, \cite{2d} and \cite{abed}). Analytical studies showing the existence of higher periodic orbits have been developed in \cite{homer}, \cite{avrutina}, \cite{avrutinb} and \cite{avrutinc}. However, complete characterizations of stable periodic orbits for piecewise-smooth systems are yet to be developed. \par In this paper we examine the stable periodic orbits of piecewise-smooth systems analytically. Such systems are often modeled as discrete maps which are divided into regions separated by borderlines. These maps are piecewise smooth and are differentiable everywhere except at the borderlines, where they are discontinuous. The one-dimensional piecewise-smooth map investigated in this paper is defined as \cite{border}: \begin{equation}\label{basic} x_{n+1}=f(x_n,a,b,\mu,l)=\left\{ \begin{array}{lcl} ax_{n}+\mu & for & x_{n} \leq 0\\ bx_{n}+\mu + l & for & x_{n} > 0 \end{array} \right. \end{equation} \par From the stability point of view, `$a$' and `$b$' are assumed to be in the range $(0,\,1)$. The height of the discontinuity is denoted by `$l$' and `$\mu$' is the parameter to be varied.
Let us assume $l>0$ in equation \eqref{basic}. There are three cases as illustrated in Figure \ref{bifur}. \begin{subfigures} \begin{figure} \begin{center} \scalebox{.5}{ \input{bifur.pstricks}} \caption{Graph of the map for $0<a<1$ and $0<b<1$, and $l>0$ \cite{border}} \label{bifur} \end{center} \end{figure} \begin{enumerate} \item[]{\textbf{Case 1:}} For $\mu>0$, there is a stable fixed point on the right-half plane. Location of the fixed point can be obtained from equation \eqref{basic} as $x_{R} =\frac{\mu+l}{1-b}$. \item[]{\textbf{Case 2:}} For $0> \mu>-l$, there are two stable fixed points on both sides of the discontinuity as shown in Figure \ref{bifur}. \item[]{\textbf{Case 3:}} For $\mu<-l$, there is a stable fixed point in the left half plane and it is given by $x_{L}=\frac{\mu}{1-a}$. \end{enumerate} \par It may be observed that the left half of the map intersects the $45^{\circ}$ line for $\mu < 0$ and the right half of the map intersects this line for $\mu > -l$. This implies that the fixed point $x_{L}$ collides with the border at $\mu = 0$ and the fixed point $x_{R}$ collides with the border at $\mu =-l$. Therefore two border collision events are expected as $\mu$ is varied. \par Three additional cases may be observed when $l<0$ (see Figure \ref{bifur1}). \begin{enumerate} \item[]{\textbf{Case 4:}} For $\mu<0$, there is a stable fixed point in the left half plane and it is given by $x_{L}=\frac{\mu}{1-a}$. \item[]{\textbf{Case 5:}} For $-l>\mu>0$, there is no fixed point. \item[]{\textbf{Case 6:}} For $\mu >-l$, there is another stable fixed point in the right half plane: $x_{R} =\frac{\mu+l}{1-b}$. \end{enumerate} \begin{figure} \begin{center} \scalebox{.5}{ \input{negativel.pstricks}} \caption{Graph of the map for $0<a<1$ and $0<b<1$, and $-l>0$} \label{bifur1} \end{center} \end{figure} \end{subfigures} Case 5 is the most interesting amongst the six listed above as it contains no fixed point. This case has been analyzed in detail in this paper. 
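The fixed-point formulas for Cases 1 and 3 can be confirmed by direct iteration of the map in equation (1). A minimal sketch (not from the paper; $a=0.5$, $b=0.3$, $l=1$ are illustrative values with $0<a,b<1$):

```python
def f(x, a, b, mu, l):
    """One step of the piecewise-smooth map of Eq. (1)."""
    return a * x + mu if x <= 0 else b * x + mu + l

def iterate(x, a, b, mu, l, n=200):
    """Iterate n times; since 0 < a, b < 1 each branch is a contraction."""
    for _ in range(n):
        x = f(x, a, b, mu, l)
    return x

a, b, l = 0.5, 0.3, 1.0

# Case 1 (mu > 0): stable fixed point x_R = (mu + l)/(1 - b) in the right half plane.
mu = 0.2
print(iterate(5.0, a, b, mu, l))    # -> (0.2 + 1)/(1 - 0.3) = 1.7142...

# Case 3 (mu < -l): stable fixed point x_L = mu/(1 - a) in the left half plane.
mu = -2.0
print(iterate(-10.0, a, b, mu, l))  # -> -2/(1 - 0.5) = -4.0
```

In both cases the iterates remain on one side of the border, so the corresponding affine branch alone governs the dynamics and contracts onto the fixed point.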
As $l$ is independent of $\mu$, without loss of generality, it may be assumed that $l=-1$. One can see that when $x_{n}\in(x_{p}, 0]$ (which is in the closed left half plane), $x_{n+1}$ belongs to the right half plane. As the system is stable, it can be concluded that after some $k$ steps, the point $x_{n+k}$ returns to the left half plane again. This leads us to the following questions: \begin{itemize} \item Do periodic orbits exist for such systems? \item If yes, then how to characterize them? \end{itemize} These questions are answered in this paper. It may be noted that the range of $\mu$ is crucial to determine the existence of orbits. Only when $0<\mu<-l$ is there a possibility of the existence of orbits, whereas for all other ranges of $\mu$ only fixed points exist. This motivates one to find the range of $\mu$ for the existence of certain specific kinds of orbits of prescribed periodicity. It is shown in this paper that a complete characterization of all orbits based on the range of $\mu$ is possible. \par \pagebreak \section{Preliminaries} First we define the term periodic orbit \cite{aligood}. \begin{definition}\label{periodic} Let $f$ be a map from $\mathbb{R}$ to $\mathbb{R}$. We call $p$ a periodic point of period $k$ if $f^k(p)=p$, where $k$ is the smallest such positive integer. The orbit with initial point $p$ (which consists of $k$ points) is called a periodic orbit of period $k$. We will often use the abbreviated terms period-$k$ point and period-$k$ orbit for a periodic orbit having period-$k$. \end{definition} Let $\mathcal{L}=(-\infty,\,0]$ (the left half plane) and $\mathcal{R}=(0,\,\infty)$ (the right half plane). Given a particular sequence of points $\{x_n\}_{n\geq 0}$ through which the system evolves, one can convert (code) this sequence into a sequence of \cLs and \cRs by indicating which of the two sets ($\mathcal{L}$ or $\mathcal{R}$) the corresponding point belongs to.
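The coding just described can be sketched directly: iterate the map with $l=-1$, discard a transient so that the state settles onto the stable periodic orbit, and read off the repeating string of symbols. The parameter values $a=0.5$, $b=0.3$, $\mu=0.2$ below are assumptions chosen for illustration (with $0<\mu<-l$ so that an orbit, rather than a fixed point, exists):

```python
def f(x, a=0.5, b=0.3, mu=0.2, l=-1.0):
    """One step of the map of Eq. (1); a, b, mu, l are illustrative values."""
    return a * x + mu if x <= 0 else b * x + mu + l

x = 0.0
for _ in range(300):      # transient: converge onto the stable periodic orbit
    x = f(x)

symbols = []
for _ in range(12):       # code the next 12 points as L (x <= 0) or R (x > 0)
    symbols.append('L' if x <= 0 else 'R')
    x = f(x)
s = ''.join(symbols)
print(s)                  # a period-3 repetition of the pattern LLR (up to cyclic shift)
```

For these parameter values the coded string repeats with period 3 and consists of two \cLs and one \cR per period, i.e. the pattern is $\mathcal{LLR}$ up to a cyclic shift of the starting point.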
Clearly, a periodic orbit has a string of \cLs and \cRs that keeps repeating. We call this repeating string a \emph{pattern} and denote it by $\sigma$. The length of the string $\sigma$ is denoted by $|\sigma |$ and gives the number of symbols in the pattern, i.e., the period of the orbit. A periodic orbit with a pattern $\sigma$ is denoted as $\mathcal{O_{\sigma}}$. \ps denotes the interval of the parameter $\mu$ for which the orbit \mo exists. The sum of the geometric series $1+k+k^2+\ldots+k^n$ is denoted by $S_n^k$. \begin{definition} A periodic orbit \mo is termed admissible if \ps$\neq\phi$. The pattern of an admissible orbit is called an admissible pattern. \end{definition} \begin{definition} If a pattern of a periodic orbit \mo consists of only one \cR and multiple \cLs or vice-versa, it is called an atomic pattern. \end{definition} Thus, there are two types of atomic patterns: those with pattern $\overbrace{\mathcal{LLL\cdots\cdots LL}}^n\mathcal{R}$, abbreviated as \ml (termed an \cL-atomic pattern), and those with pattern $\mathcal{L}\overbrace{\mathcal{RRR\cdots\cdots RR}}^n$, abbreviated as \mr (termed an \cR-atomic pattern). The pattern $\mathcal{LR}$ is both \cL-atomic and \cR-atomic. \begin{definition} A pattern is called a molecular pattern if it is made up of a combination of atomic patterns. \end{definition} \begin{example} $\mathcal{LLRLLRLR}$ is a molecular pattern. It is made up by combining the atomic patterns $\mathcal{LLR}$ and $\mathcal{LR}$. \end{example} \section{Analysis of Periodic Orbits} \begin{Lemma}[Atomic Lemma]\label{atomic} An atomic pattern of any period is admissible. \end{Lemma} \begin{IEEEproof} Consider an atomic orbit \oln with period $n+1$.
We write down the inequalities as: \begin{align*} x_{0}&\leq0,\\ x_1&=ax_0+\mu\leq0,\\ x_2&=ax_1+\mu\leq0,\\ &=a^2x_0+(a+1)\mu\leq0,\\ \vdots\nn\\ x_{n-1}&=a^{n-1}x_0+\mu S_{n-2}^a\leq 0,\\ x_n&=a^{n}x_0+\mu S_{n-1}^a>0,\\ x_{n+1}&=x_{0}=bx_n+\mu-1\leq0,\\ \therefore\, x_0 &=\frac{(a^{n-1}b+a^{n-2}b+\ldots+ab+b+1)\mu-1}{1-a^{n}b}. \end{align*} \par Substituting the value of $x_0$ into the list of inequalities above would yield a list of upper bounds for $\mu$ (whenever the point $x_i$ is in \cL) and lower bounds for $\mu$ (when the point $x_i$ is in \cR). We denote upper bounds by $\mu^{upper}_i$ and lower bounds by $\mu^{lower}_i$. We define $\mu_{2}=\underset{i}{min}(\mu^{upper}_{i})$ and $\mu_{1}=\underset{i}{max}(\mu^{lower}_i)$. Therefore, \ps$=(\mu_{1},\,\mu_{2}]$. A simple algebraic manipulation of the inequalities above gives: \ben \mathcal{P_{L^{\mathrm{n}}R}}=\mathbf{\left(\frac{a^{n}}{S_n^a},\quad \frac{a^{n-1}}{a^{n-1}b+S_{n-1}^a}\right]}. \een Let us assume $\mathcal{P_{L^{\mathrm{n}}R}}=\phi$. \begin{align*} \therefore\, & \frac{a^{n}}{S_n^a}>\frac{a^{n-1}}{a^{n-1}b+S_{n-1}^a}.\\ \therefore\, & a^n\times \bigg(a^{n-1}b+S_{n-1}^a\bigg)-a^{n-1}\times \bigg(S_n^a\bigg)>0.\\ \therefore\, & a^{n-1}\bigg[a^nb+aS_{n-1}^a-S_n^a\bigg]>0.\\ \therefore\, & -a^{n-1}(1-a^nb)>0, \end{align*} which is a contradiction as $a,b\in(0,\, 1)$. Hence $\mathcal{P_{L^{\mathrm{n}}R}}\neq\phi$. \par Similarly, consider an atomic orbit \orn. We write down the inequalities as: \begin{align*} x_{0}&\leq0,\\ x_{1}&>0,\\ \vdots\\ x_{n+1}&=x_{0}\leq0.\\ \therefore\,x_{0}&=\frac{(b^{n-1}+b^{n-2}+\ldots+b+1)(\mu-1)+b^n\mu}{1-b^{n}a}. \end{align*} Finding $\mu_{1}$ and $\mu_{2}$ in the way explained above, we get \ben \mathcal{P_{LR^{\mathrm{n}}}}=\mathbf{\left(\frac{ab^{n-1}+S_{n-2}^b}{ab^{n-1}+S_{n-1}^b},\quad \frac{S_{n-1}^b} {S_{n}^b}\right]}. \een Further, it can be easily checked that $\mathcal{P_{LR^{\mathrm{n}}}}\neq\phi$.
\end{IEEEproof} \begin{example} Let us consider an orbit $\mathcal{O_{LR}}$. Here $x_{0}\leq0,\,x_{1}>0$ and $x_{2}=x_{0}$. From equation \eqref{basic} \begin{align} x_{1}&=ax_{0}+\mu.\nn\\ x_{2}&=bx_{1}+\mu-1, \nn\\ &=abx_{0}+(b+1)\mu-1, \nn\\ &=x_{0}.\nn\\ \therefore\,x_{0}&=\frac{(b+1)\mu-1}{1-ab}\leq0 \label{x0}.\\ \therefore\,\mu&\leq\frac{1}{b+1}.\nonumber \end{align} Substituting the value of $x_{0}$ in $x_{1}$ we get: \begin{align} x_{1}&=a\frac{(b+1)\mu-1}{1-ab}+\mu >0 \label{x1}.\\ \therefore\,\mu&>\frac{a}{1+a}.\nonumber \end{align} Hence $\mathcal{P_{LR}}=\left(\frac{a}{1+a},\,\frac{1}{1+b}\right]$. \par For example, if we assume $a=\frac{1}{2}$ and $b=\frac{1}{3}$, then $\mathcal{P_{LR}}=\left(\frac{1}{3},\, \frac{3}{4}\right]$. Assume $\mu=\frac{3}{5}$. We substitute the values of $a,b,l$ and $\mu$ to find $x_{0},x_{1}$ and $x_{2}$. From equations \eqref{x0} and \eqref{x1}: \begin{align*} x_{0}=x_{2}=\frac{-6}{25}.\\ x_{1}=\frac{12}{25}. \end{align*} The above analysis shows that the orbit $\mathcal{O_{LR}}$ has one point in the closed left half plane and the other point in the open right half plane, and hence its pattern is $\sigma=\mathcal{LR}$. Moreover, $x_2=x_0$ shows that this is in fact a period-$2$ orbit. $\mathcal{P_{LR}}=\left(\frac{1}{3},\, \frac{3}{4}\right]$ gives the range of $\mu$ where the orbit $\mathcal{O_{LR}}$ is admissible. \end{example} \begin{note} The map given by equation \eqref{basic} is invariant under the transformation $f(x,a,b,\mu,l)\rightarrow f(-x,b,a,-[\mu+l],l)$. Due to the replacement of $x$ by $-x$, the patterns involved will be inverted (i.e., \cLs will become \cRs and vice-versa). Therefore, for the sake of simplicity, we will only consider \cL-atomic patterns. The results will be directly applicable to \cR-atomic patterns through the transformation mentioned above. \end{note} \subsection{Problem Formulation and Analysis} We have proved that atomic orbits are admissible.
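The worked example above can be reproduced in exact rational arithmetic. The sketch below (an independent check, not from the paper) confirms $x_0=-6/25$, $x_1=12/25$, $x_2=x_0$ and the admissibility interval $\mathcal{P_{LR}}=(1/3,\,3/4]$ for $a=1/2$, $b=1/3$, $l=-1$, $\mu=3/5$:

```python
from fractions import Fraction as F

# Parameters of the worked example: a = 1/2, b = 1/3, l = -1, mu = 3/5.
a, b, mu = F(1, 2), F(1, 3), F(3, 5)

x0 = ((b + 1) * mu - 1) / (1 - a * b)   # closed form from Eq. (x0)
x1 = a * x0 + mu                        # L-branch step of Eq. (1)
x2 = b * x1 + mu - 1                    # R-branch step (l = -1)
print(x0, x1, x2)                       # -6/25 12/25 -6/25

assert x0 == F(-6, 25) and x1 == F(12, 25) and x2 == x0
assert x0 <= 0 < x1                     # one point in L, one in R: pattern LR

# Admissibility interval P_LR = (a/(1+a), 1/(1+b)] and membership of mu.
print(a / (1 + a), 1 / (1 + b))         # 1/3 3/4
assert a / (1 + a) < mu <= 1 / (1 + b)
```

Using `fractions.Fraction` avoids any floating-point ambiguity near the border $x=0$, so the symbol coding of each point is exact.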
Additionally, we have obtained a closed form solution for the range of $\mu$ for these atomic orbits. This leads to the following questions: \begin{enumerate} \item Are atomic orbits the only kind of orbits? For example, can there be an orbit like $\mathcal{O_{LLLRR}}$? \item Can we characterize all the possible types of admissible orbits? \item For a given $n$, how many distinct patterns exist with period $n$? \item Is there any algorithm to generate all the admissible patterns? \end{enumerate} \par In this paper we provide answers to all the above questions. We take the first step towards characterization of all possible types of admissible patterns by proving that certain combinations of \cLs and \cRs cannot appear in any admissible pattern $\sigma$. \begin{Lemma}\label{thm:funda} For any admissible orbit $\mathcal{O_{\sigma}}$, its pattern cannot contain consecutive \cLs and consecutive \cRs simultaneously. \end{Lemma} \begin{IEEEproof} We know, $-l=1$ and $0<\mu<1$. We first find conditions on $\mu$ such that consecutive \cRs do not appear. Let us assume $x_{0}\leq0,x_1>0$ and $x_{2}\leq x_0$. Then from equation \eqref{basic} \begin{align*} x_{1}&=ax_{0}+\mu>0,\\ x_2&=abx_{0}+(b+1)\mu-1\leq x_0,\\ \therefore\,\mu&\leq\frac{(1-ab)x_0+1}{b+1}. \end{align*} Substituting $x_0\leq 0$ in the above equation we get $\mu\leq \frac{1}{b+1}$. \par Now we find conditions on $\mu$ such that consecutive \cLs do not appear. Let us assume $x_{0}>0,x_1\leq 0$ and $x_{2}>x_0$. Then from equation \eqref{basic} \begin{align*} x_{1}&=bx_{0}+\mu-1\leq0,\\ x_2&=abx_0+(a+1)\mu-a>x_0.\\ \therefore\, \mu&>\frac{(1-ab)x_0+a}{a+1}. \end{align*} Substituting $x_0>0$ in the above equation we get $\mu>\frac{a}{a+1}$. Since $a,b\in(0,\,1)\Rightarrow\,\frac{a}{a+1}<\frac{1}{b+1}$. This proves the lemma. \end{IEEEproof} Summarizing, we can say: \begin{itemize} \item When $0<\mu\leq\frac{1}{b+1}$, any \cR is always immediately followed by an \cL.
So in this range, patterns with consecutive \cRs do not exist (see Figure \ref{case3}). \item When $\frac{a}{a+1}<\mu\leq1$, any \cL is immediately followed by an \cR. So in this range, patterns with consecutive \cLs do not exist (see Figure \ref{case2}). \item When $\frac{a}{a+1}<\mu\leq\frac{1}{b+1}$, the only possible pattern is $\mathcal{LR}$. \end{itemize} \begin{subfigures} \begin{figure}[ht!] \begin{center} \scalebox{.7}{ \input{case3.pstricks}} \caption{The range of `$\mu$' where only a singleton \cR is possible.} \label{case3} \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \scalebox{.7}{ \input{case2.pstricks}} \caption{The range of `$\mu$' where only a singleton \cL is possible.} \label{case2} \end{center} \end{figure} \end{subfigures} This lemma helps us predict whether certain patterns are admissible or inadmissible, e.g., $\mathcal{LLRR}$ and $\mathcal{LLRLRRLR}$ are clearly not admissible patterns as they contain both consecutive \cLs and consecutive \cRs simultaneously. An important corollary of Lemma \ref{thm:funda} is that all the admissible patterns are either atomic patterns (\ml or \mr) or molecular patterns made up of purely \cL-atomic patterns or purely \cR-atomic patterns. \par We generalize this lemma to find conditions on $\mu$ for \emph{at most} $n$ consecutive \cLs or \emph{at least} $n$ consecutive \cLs to appear in a pattern. \begin{Lemma}[At Most \& At Least Lemma] When $\mu\leq\frac{a^{n-1}}{a^{n-1}b+S_{n-1}^a}$ then at least $n$ consecutive \cLs appear in the pattern, and when $\mu>\frac{a^{n}}{S_n^a}$ then at most $n$ consecutive \cLs appear in the pattern. \end{Lemma} \begin{IEEEproof} Let us assume $x_0\leq 0,x_1\leq 0, \ldots, x_{n-1}\leq 0, x_n>0$.
From the Atomic Lemma we get \begin{align} x_{n-1}&=a^{n-1}x_0+\mu S_{n-2}^a\leq 0, \nn\\ &\therefore\, x_0\leq-\frac{\mu S_{n-2}^a}{a^{n-1}}, \label{x01}\\ x_n&=a^nx_0+\mu S_{n-1}^a>0, \nn\\ &\therefore\, x_0>-\frac{\mu S_{n-1}^a}{a^n},\label{x02}\\ x_{n+1}&=ba^nx_0+\mu bS_{n-1}^a +\mu-1. \label{x03} \end{align} First we find the condition on $\mu$ such that at least $n$ consecutive \cLs appear in a pattern. For this, we assume $x_{n+1}\leq x_0$. Then from equation \eqref{x03}, \begin{equation}\label{uvalue} \mu\leq \frac{(1-ba^n)x_0+1}{(1+bS_{n-1}^a)}. \end{equation} Substituting \eqref{x01} in \eqref{uvalue} we get \begin{equation} \mathbf{\mu\leq \frac{a^{n-1}}{ba^{n-1}+S_{n-1}^a}}. \end{equation} \par Now we find the condition on $\mu$ such that at most $n$ consecutive \cLs appear in a pattern. Let us assume $x_{n+1}> x_0$. Then from equation \eqref{x03}, \begin{equation}\label{uvalue2} \mu> \frac{(1-ba^n)x_0+1}{(1+bS_{n-1}^a)}. \end{equation} Substituting \eqref{x02} in \eqref{uvalue2} we get \begin{equation} \mathbf{\mu> \frac{a^{n}}{S_{n}^a}}. \end{equation} \end{IEEEproof} The At Most \& At Least Lemma (from now on referred to as the AMAL Lemma) gives us the conditions on $\mu$ for the appearance of at most/at least $n$ consecutive \cLs in a pattern. In a similar way we can find the conditions on $\mu$ such that at most/at least $2n$ or $3n$ or $n-1$ consecutive \cLs appear in a pattern. Note that the at least and at most conditions for consecutive \cRs can be found in the same fashion. It is important to note that all these conditions are placed on the parameter line $\mu$ in a specific order. We now prove that these conditions on $\mu$ are such that the admissible combinations for the molecular patterns are limited. \begin{Lemma} Every molecular pattern is a combination of at most two atomic patterns of successive cardinality. \end{Lemma} \begin{IEEEproof} From the AMAL Lemma we get the at most/at least conditions on $\mu$.
The statement of this lemma is equivalent to showing that on the $\mu$ parameter line (see Figure \ref{examp2}), at any given point, the two active conditions (one at least and one at most) come from successive values of $n$. This is equivalent to showing that $\frac{a^{n}}{S_{n}^a}<\frac{a^{n-1}}{a^{n-1}b+S_{n-1}^a}<\frac{a^{n-1}}{S_{n-1}^a}$ for every $n\geq 2$. From the Atomic Lemma we know that $\frac{a^{n}}{S_{n}^a}<\frac{a^{n-1}}{a^{n-1}b+S_{n-1}^a}$. Hence to prove this lemma it is enough to prove that $\frac{a^{n-1}}{a^{n-1}b+S_{n-1}^a}<\frac{a^{n-1}}{S_{n-1}^a}$. Let us assume \begin{align*} \frac{a^{n-1}}{a^{n-1}b+S_{n-1}^a}&>\frac{a^{n-1}}{S_{n-1}^a},\\ \therefore\, a^{n-1}S_{n-1}^a&>(a^{n-1}b+S_{n-1}^a)a^{n-1},\\ \therefore\, 0&>a^{2(n-1)}b. \end{align*} This is a contradiction as $a,b \in (0,\,1)$. Hence $\frac{a^{n-1}}{a^{n-1}b+S_{n-1}^a}<\frac{a^{n-1}}{S_{n-1}^a}$. \end{IEEEproof} We summarize the above lemma into the following cases: \begin{enumerate} \item[]{\textbf{General Case:} $\mathbf{\mu\in\big(\frac{a^{n}}{S_{n}^a},\, \frac{a^{n-2}}{a^{n-2}b+S_{n-2}^a}\big]}$.} All the patterns consist of either $n-1$ or $n$ consecutive \cLs or a combination of both. \item[]{\textbf{Case a:} $\mathbf{\mu\in\big(\frac{a^{n}}{S_{n}^a},\, \frac{a^{n-1}}{a^{n-1}b+S_{n-1}^a}\big]}$.} The only possible pattern is the pattern having exactly $n$ consecutive \cLs and a single \cR. This is the pattern \ml. The above range of $\mu$ is nothing but the value of $\mathcal{P_{L^\mathrm{n}R}}$. \item[]{\textbf{Case b:} $\mathbf{\mu\in\big(\frac{a^{n-1}}{a^{n-1}b+S_{n-1}^a},\,\frac{a^{n-1}}{S_{n-1}^a}\big]}$.} Only a combination of $n-1$ and $n$ consecutive \cLs can appear in a pattern. Hence atomic orbits cannot exist here. We call this region a \emph{molecular region}.
\item[]{\textbf{Case c:} $\mathbf{\mu\in\big(\frac{a^{n-1}}{S_{n-1}^a},\, \frac{a^{n-2}}{a^{n-2}b+S_{n-2}^a}\big]}$.} The only possible pattern is the pattern having exactly $n-1$ consecutive \cLs and a single \cR. This is the pattern $\mathcal{L^{\mathrm{n-1}}R}$. The above range of $\mu$ is nothing but the value of $\mathcal{P_{L^\mathrm{n-1}R}}$. \end{enumerate} \begin{figure}[ht!] \begin{center} \scalebox{.7}{ \input{blockorbits.pstricks}} \caption{Position of cases with respect to range of parameter $\mu$} \label{examp2} \end{center} \end{figure} \begin{note} Consider an orbit $\mathcal{O_{L^{\mathrm{n}}RL^{\mathrm{n-1}}R}}$. Its pattern is a combination of the patterns \ml and $\mathcal{L^{\mathrm{n-1}}R}$. The range of $\mu$ for the existence of $\mathcal{O_{L^{\mathrm{n}}RL^{\mathrm{n-1}}R}}$, $\mathcal{P_{L^{\mathrm{n}}RL^{\mathrm{n-1}}R}}$, can be calculated as \begin{equation*} \mathbf{\mathcal{P_{L^{\mathrm{n}}RL^{\mathrm{n-1}}R}}=\Big(\frac{a^{2n-1}b+a^{n-1}}{a^{2n-1}b+(a^{n-1}b+1)S_{n-1}^a},\,\frac{a^{2n-2}b+a^{n-1}}{a^{2n-2}b^{2}+(a^{n-1}b+1)S_{n-1}^a}\Big]}. \end{equation*} Case b above tells us that $\mathcal{P_{L^{\mathrm{n}}RL^{\mathrm{n-1}}R}}$ is placed between $\mathcal{P_{L^\mathrm{n}R}}$ and $\mathcal{P_{L^\mathrm{n-1}R}}$, i.e., in the molecular region. This shows that the patterns are not arranged monotonically with respect to their cardinalities. \end{note} \begin{Lemma}\label{repetition} No molecular pattern is a repetition of any single atomic pattern. \end{Lemma} \begin{IEEEproof} Consider an orbit whose pattern is a repetition of one atomic pattern, say $(\mathcal{L^{\mathrm{n}}R})^k$. From equation \eqref{basic}, one can find a relation between $x_0$ and $x_{n+1}$ given by $x_{n+1} = a^nbx_0 + b\mu S^{a}_{n-1} + \mu + l$, which is an affine relation between $x_0$ and $x_{n+1}$. Note that the relation between $x_{n+1}$ and $x_{2n+2}$ is exactly the same as the relation between $x_0$ and $x_{n+1}$.
In general, the relation between $x_{(i-1)(n+1)}$ and $x_{i(n+1)}$ is exactly the same as the affine relation between $x_0$ and $x_{n+1}$. Since we assume that the pattern is $(\mathcal{L^{\mathrm{n}}R})^k$, we have $x_{k(n+1)}=x_{0}$. Since the same affine relation holds between $x_{(i-1)(n+1)}$ and $x_{i(n+1)}$ for $i = 1, 2, \ldots, k$, one can conclude that $x_{i(n+1)}= x_0$ for all $i = 1, 2, \ldots, k$. Thus, the orbit is really an orbit with the pattern \ml. \end{IEEEproof} Putting the last two lemmas together, one can conclude that \begin{Lemma}\label{only2} Every molecular pattern is a combination of exactly two atomic patterns of successive cardinality. \end{Lemma} \par We know that molecular patterns are a combination of only two atomic patterns with successive cardinality. We now show that these possible combinations are restricted. In order to do this, we generalize the map given by equation \eqref{basic} by replacing the symbols \cL and \cR with the atomic blocks $\mathcal{L^{\mathrm{n}}R}$ and $\mathcal{L^{\mathrm{n-1}}R}$ -- a trick that we have already used in the proof of Lemma~\ref{repetition}. \par Assume $\mu\in\big(\frac{a^{n-1}}{a^{n-1}b+S_{n-1}^a},\,\frac{a^{n-1}}{S_{n-1}^a}\big]$. Therefore, by the arguments stated above, the possible patterns are combinations of $\mathcal{L^{\mathrm{n}}R}$ and $\mathcal{L^{\mathrm{n-1}}R}$. Let us denote $\mathcal{L^{\mathrm{n}}R}$ by $\mathcal{L}'$ and $\mathcal{L^{\mathrm{n-1}}R}$ by $\mathcal{R}'$. From equation \eqref{x01}, when $x_0\leq -\frac{\mu S_{n-2}^a}{a^{n-1}}$, $n$ consecutive \cLs appear before a \cR appears. In other words, at least one $\mathcal{L}'$ appears in the pattern. When $x_0>-\frac{\mu S_{n-2}^a}{a^{n-1}}$, $n$ consecutive \cLs cannot appear. In this case, at most $n-1$ \cLs could appear. Since the value of $\mu$ is restricted, one can in fact say that exactly $n-1$ \cLs would appear, thereby guaranteeing at least one $\mathcal{R'}$ in the pattern.
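The symbolic dynamics discussed so far can be checked numerically. The sketch below iterates an assumed explicit form of the map in equation \eqref{basic}, namely $x_{N+1}=ax_{N}+\mu$ for $x_{N}\leq 0$ and $x_{N+1}=bx_{N}+\mu+l$ for $x_{N}>0$ with $l=-1$, and reads off the symbol sequence on the attractor; the branch form and the parameter values are illustrative assumptions consistent with the ranges used in this paper.

```python
# Sketch: iterate the assumed form of the map in Eq. (basic) and read off
# the symbolic pattern (L when x <= 0, R when x > 0).  The explicit branch
# form and parameter values here are illustrative assumptions.
def step(x, a, b, mu, l=-1.0):
    return a * x + mu if x <= 0 else b * x + mu + l

def symbolic_pattern(a, b, mu, n_transient=2000, n_read=12, x0=0.0):
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = step(x, a, b, mu)
    symbols = []
    for _ in range(n_read):               # record symbols on the attractor
        symbols.append('L' if x <= 0 else 'R')
        x = step(x, a, b, mu)
    return ''.join(symbols)

# For a = 0.85, b = 0.8 and mu = 0.5 the stable orbit is the atomic
# 2-cycle, so the recorded symbols alternate as LRLR...
print(symbolic_pattern(0.85, 0.8, 0.5))   # -> 'LRLRLRLRLRLR'
```

For smaller $\mu$ the same routine produces longer runs of \cL, in line with the AMAL lemma.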
Consider $x_{new}=\frac{-\mu S_{n-2}^a}{a^{n-1}}$, which is the border that decides between $\mathcal{L}'$ and $\mathcal{R}'$. We define a new map as: \begin{equation*} \tilde{x}_{N+1}= \left\{ \begin{array}{ccc} \tilde{a}\tilde{x}_{N}+\tilde{\mu} & for & \tilde{x}_{N} \leq x_{new}\\ \tilde{b}\tilde{x}_{N}+\tilde{\mu} + \tilde{l} & for & \tilde{x}_{N} > x_{new} \end{array} \right. \end{equation*} where $\tilde{a}=ba^n,\, \tilde{b}=ba^{n-1},\, \tilde{\mu}=\mu bS_{n-1}^a+\mu-1$ and $\tilde{l}=-\mu ba^{n-1}$. By the coordinate transformation $y_N=\tilde{x}_{N}-x_{new}$, we shift the border to zero. Hence the map equation becomes: \begin{equation*} y_{N+1}=\left\{ \begin{array}{ccc} \tilde{a}y_{N}+\tilde{\mu} - x_{new}(1-\tilde{a}) & for & y_{N} \leq 0\\ \tilde{b}y_{N}+\tilde{\mu} - x_{new}(1-\tilde{b}) + \tilde{l} & for & y_{N} > 0 \end{array} \right. \end{equation*} i.e. \begin{equation}\label{basic2} y_{N+1}=\left\{ \begin{array}{ccc} \tilde{a}y_{N}+\bar{\mu} & for & y_{N} \leq 0\\ \tilde{b}y_{N}+\bar{\mu} + \bar{l} & for & y_{N} > 0 \end{array} \right. \end{equation} Here, $\bar{\mu}=\tilde{\mu}-x_{new}(1-\tilde{a})$ and $\bar{l}=\tilde{l}+x_{new}(\tilde{b}-\tilde{a})$. In order to obtain orbits, this map should satisfy the condition $-\bar{l}>\bar{\mu}>0$ (see case 5 in the introduction of this paper). We show that this is indeed true. Assume $-\bar{l}>\bar{\mu}>0$. Substituting for $\bar{l}$ and $\bar{\mu}$, we get $-\tilde{l}-x_{new}(\tilde{b}-\tilde{a})>\tilde{\mu}-x_{new}(1-\tilde{a})>0$. Substituting $\tilde{a},\tilde{b},\tilde{\mu},\tilde{l}$ and $x_{new}$ and simplifying, we get $\frac{a^{n-1}}{S_{n-1}^a}>\mu>\frac{a^{n-1}}{a^{n-1}b+S_{n-1}^a}$. This is exactly our earlier assumption on the range of $\mu$. \par Now, using Lemma \ref{thm:funda}, one can show that consecutive $\mathcal{L'}$s and consecutive $\mathcal{R'}$s cannot appear simultaneously in any pattern.
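As a consistency check of the block substitution, the sketch below composes $n$ L-steps and one R-step of the assumed base map ($x\mapsto ax+\mu$ on the left branch, $x\mapsto bx+\mu+l$ on the right, with $l=-1$) and compares the result with the affine branches $\tilde{a}x+\tilde{\mu}$ and $\tilde{b}x+\tilde{\mu}+\tilde{l}$ defined above; it also assumes $S_{k}^{a}=1+a+\dots+a^{k}$, which is consistent with the coefficients quoted in the text.

```python
# Numerical check of the block substitution: n L-steps followed by one R-step
# of the assumed base map should reproduce the L'-branch a~ x + mu~, and
# n-1 L-steps plus one R-step should reproduce the R'-branch b~ x + mu~ + l~.
# (Base-map form and S_k^a = 1 + a + ... + a^k are stated assumptions.)
a, b, mu, l, n = 0.85, 0.8, 0.37, -1.0, 4

def L_step(x): return a * x + mu
def R_step(x): return b * x + mu + l

def block(x, k):                      # k L-steps followed by one R-step
    for _ in range(k):
        x = L_step(x)
    return R_step(x)

S = lambda k: sum(a**i for i in range(k + 1))
a_t = b * a**n                        # a~
b_t = b * a**(n - 1)                  # b~
mu_t = mu * b * S(n - 1) + mu - 1.0   # mu~
l_t = -mu * b * a**(n - 1)            # l~

x = 0.123
assert abs(block(x, n)     - (a_t * x + mu_t))       < 1e-12
assert abs(block(x, n - 1) - (b_t * x + mu_t + l_t)) < 1e-12
print("block substitution consistent")
```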
Similarly, using the AMAL lemma, we get conditions on $\bar{\mu}$ for the appearance of at most/at least $n$ consecutive $\mathcal{L'}$s in the pattern. Thus, one gets atomic and molecular patterns involving $\mathcal{L'}$ and $\mathcal{R'}$. Further, using Lemma \ref{only2}, we can conclude that every molecular pattern involving $\mathcal{L'}$ and $\mathcal{R'}$ is made up by combining only two atomic patterns of successive cardinality. One can then again define a new map to investigate the molecular region of patterns involving two atomic patterns of $\mathcal{L'}$ and $\mathcal{R'}$. Continuing in this way, one finally arrives at an atomic pattern in terms of the new symbols defined. This fractal-like process makes the present study even more interesting. \begin{example} Consider the pattern $\mathcal{LLRLLRLR\ LLRLLRLR\ LLRLR}$. This pattern corresponds to a period-$21$ molecular orbit. Let $\mathcal{LLR}$ be denoted by $\mathcal{L}'$ and $\mathcal{LR}$ be denoted by $\mathcal{R}'$. Then the above pattern becomes $\mathcal{L'L'R'L'L'R'L'R'}$. Now let $\mathcal{L'L'R'} \equiv \mathcal{LLRLLRLR}$ be denoted by $\mathcal{L}''$ and $\mathcal{L'R'} \equiv \mathcal{LLRLR}$ by $\mathcal{R}''$. Hence the above pattern can be written as $\mathcal{L''L''R''}$, which is atomic in the symbols $\mathcal{L''}$ and $\mathcal{R''}$. Therefore this is an admissible pattern. \par Now consider another pattern, say $\mathcal{LLRLLRLR\ LLRLR\ LLRLR\ LLRLLRLR}$. It can be represented as $\mathcal{L''R''R''L''}$. This pattern does not correspond to any admissible orbit as it is not atomic or molecular in the new symbols. \end{example} Note that the results above can be put together to obtain an algorithm for generating admissible patterns. Now that we have characterized all admissible patterns, we turn to the question of finding how many different patterns exist for any given period $n$.
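The recursive reduction just described can be sketched as a short admissibility check. The function below treats a pattern as a cyclic word over \cL and \cR, collapses maximal blocks $\mathcal{L^{\mathrm{k}}R}$ into new symbols, and applies the two lemmas above; it is a sketch based on those lemmas, with hypothetical helper names.

```python
# Sketch of an admissibility test based on the recursive block reduction:
# a pattern is a cyclic word over {L, R}; repeatedly collapse maximal blocks
# L^k R and require at most two successive block sizes (molecular lemma),
# rejecting pure repetitions of one atomic block (repetition lemma).
def _block_sizes(p):
    i = p.rfind('R')                       # rotate cyclically so p ends with R
    p = p[i + 1:] + p[:i + 1]
    sizes, run = [], 0
    for s in p:
        if s == 'L':
            run += 1
        else:
            sizes.append(run)
            run = 0
    return sizes

def admissible(p):
    if len(p) == 1:
        return True
    if 'R' not in p or 'L' not in p:       # L^n or R^n with n > 1
        return False
    if p.count('R') > p.count('L'):        # R-dominant: swap roles (symmetry)
        p = ''.join('L' if s == 'R' else 'R' for s in p)
    sizes = _block_sizes(p)
    vals = sorted(set(sizes))
    if len(vals) == 1:
        return len(sizes) == 1             # one atomic block; repetitions fail
    if len(vals) == 2 and vals[1] == vals[0] + 1:
        # larger block -> L', smaller -> R', then recurse on the new word
        return admissible(''.join('L' if k == vals[1] else 'R' for k in sizes))
    return False

# The two patterns from the example above:
print(admissible('LLRLLRLRLLRLLRLRLLRLR'))       # L''L''R''    -> True
print(admissible('LLRLLRLRLLRLRLLRLRLLRLLRLR'))  # L''R''R''L'' -> False
```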
\begin{thm} For any $n$, there exist $\phi(n)$ distinct admissible patterns of cardinality $n$. \end{thm} \begin{IEEEproof} Let $\sigma$ be a pattern with $|\sigma| = n$. Further assume that there are $k$ \cRs in $\sigma$. Hence the number of \cLs in $\sigma$ is $n-k$. Assume, without loss of generality, that $k \leq n-k$. If $k=1$, then the pattern is $\mathcal{L^{\mathrm{n-1}}R}$. If $k\neq 1$, then suppose $k \divides (n-k)$, i.e., $n-k=qk$. One possibility is that all the $k$ atomic blocks are of type $\mathcal{L^{\mathrm{q}}R}$. This pattern is a repetition of $\mathcal{L^{\mathrm{q}}R}$, which is not admissible by Lemma \ref{repetition}. \par Another possibility is that at least one atomic block has more than $q$ \cLs. This would force some other atomic block to have less than $q$ \cLs. By Lemma \ref{only2}, each molecular orbit has only two types of atomic blocks with successive cardinality and therefore such cases are not possible. \par Now suppose $k\notdivides (n-k)$, i.e., $n-k=qk+p$ with $0<p<k$. Using Lemma \ref{only2} we conclude that there are $p$ atomic blocks of type $\mathcal{L^{\mathrm{q+1}}R}$ and $k-p$ atomic blocks of type $\mathcal{L^{\mathrm{q}}R}$, as $n-k=q(k-p)+(q+1)p$. Denoting $\mathcal{L^{\mathrm{q}}R}$ by $\mathcal{L}'$ and $\mathcal{L^{\mathrm{q+1}}R}$ by $\mathcal{R}'$, we are now back to the original problem, with $|\sigma'|=k$, with $p$ $\mathcal{R}'$s and $k-p$ $\mathcal{L}'$s in $\sigma'$. Now we set $k$ as the new $n$ and $\min\{p,\, k-p\}$ as the new $k$. \par This process is repeated until $p=1$ or $p=k-1$. Since $\gcd(k,\,p)=\gcd(k,\,n-k)=\gcd(n,\,k)$ at every stage, this is only possible if the original $n$ and $k$ were co-prime. Thus the numbers of \cLs and \cRs that appear in a period-$n$ orbit have to be co-prime to $n$. Since there are $\phi(n)$ numbers co-prime to $n$, there are $\phi(n)$ distinct admissible patterns. \end{IEEEproof} The proof above in fact gives us an algorithm for generating admissible patterns of any given period $n$.
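The construction read off from this proof can be sketched as a Euclid-style recursion over blocks; the helper names below are illustrative, and the output is one representative rotation of the (cyclic) pattern.

```python
# Sketch of the pattern-generation algorithm from the proof above: to place
# nx copies of block x and ny copies of block y admissibly, split the
# majority blocks among the minority ones Euclid-style and recurse.
def arrange(x, y, nx, ny):
    if ny == 0:
        return x * nx
    if nx == 0:
        return y * ny
    if nx >= ny:
        q, p = divmod(nx, ny)              # nx = q*ny + p
        # p blocks carry q+1 copies of x, ny - p blocks carry q copies
        return arrange(x * (q + 1) + y, x * q + y, p, ny - p)
    q, p = divmod(ny, nx)
    return arrange(x + y * (q + 1), x + y * q, p, nx - p)

def admissible_pattern(n, k):
    """Period-n pattern with k R's (requires gcd(n, k) = 1)."""
    return arrange('L', 'R', n - k, k)

# Period 18 with 5 R's, consistent with the worked example that follows:
print(admissible_pattern(18, 5))           # -> 'LLLRLLLRLLRLLLRLLR'
```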
We demonstrate this with an example: \begin{example} Suppose we need to generate all admissible patterns for $n = 18$. From the theorem above, we know there are $\phi(18) = 6$ distinct admissible patterns. Let us find these admissible patterns. The numbers co-prime to $18$ are $1,5,7,11,13,17$. Thus the $6$ distinct patterns would have $1,5,7,11,13$ and $17$ \cRs in them. The patterns corresponding to $1$ and $17$ are the \cL-atomic and \cR-atomic patterns respectively. Consider the case of $5$ \cRs. Then the pattern must contain $13$ \cLs. As $13 = 2 \times 5 + 3$, we can conclude that there must be three copies of $\mathcal{LLLR}$ and two copies of $\mathcal{LLR}$ in the pattern. Now, following the proof, we look at patterns of length $5$ having two $\mathcal{R'} = \mathcal{LLR}$ and three $\mathcal{L'} = \mathcal{LLLR}$. Again, since $3 = 1 \times 2 + 1$, we conclude that there must be one copy of $\mathcal{L'L'R'} = \mathcal{LLLR LLLR LLR}$ and one copy of $\mathcal{L'R'} = \mathcal{LLLR LLR}$. Thus the pattern corresponding to $5$ is $\mathcal{LLLR LLLR LLR LLLR LLR}$. The case of $13$ \cRs is obtained from this pattern by interchanging \cL and \cR. Finally, consider the case of $7$ \cRs and therefore $11$ \cLs. As $11 = 1 \times 7 + 4$, there should be four copies of $\mathcal{LLR}$ and three copies of $\mathcal{LR}$. Now, looking at patterns of length $7$ with four $\mathcal{L'} = \mathcal{LLR}$ and three $\mathcal{R'} = \mathcal{LR}$, we have $4 = 1 \times 3 + 1$ and so there should be one copy of $\mathcal{L'L'R'} = \mathcal{LLR LLR LR}$ and two copies of $\mathcal{L'R'}= \mathcal{LLR LR}$. Thus, the final pattern is $\mathcal{LLR LR LLR LR LLR LLR LR}$. \end{example} Given a pattern $\sigma$ with $|\sigma| = n$, let us assume that the first symbol in the pattern stands for the point $x_0$. Then one can evaluate $x_n$ and, by setting $x_n = x_0$, one obtains an expression for $x_0$ in terms of the parameters $a,b,\mu, l$.
The value of $x_0$ can then be substituted into the inequalities corresponding to each position (as demonstrated in the Atomic Lemma) to obtain $\mu_1$ and $\mu_2$ such that $\mathcal{P_{\sigma}} = (\mu_1,\mu_2]$. If the period $n$ is large, this method of substitution becomes very cumbersome. Amongst all these inequalities, knowing in advance the precise location of the inequalities that yield $\mu_1$ and $\mu_2$ saves a lot of work. We now state a lemma that finds the precise location of the \cL and the \cR in the pattern where, if one substitutes $x_0$, one gets $\mu_{2}$ and $\mu_{1}$, respectively. Observe that every \cL gives an upper limit for $\mu$ and every \cR gives a lower limit for $\mu$. Given a pattern $\sigma$ with $|\sigma| = n$, we define the binary sequence $\mathfrak{F}(\sigma)$ by substituting $0$ for $\mathcal{L}$ and $1$ for $\mathcal{R}$. Observe that all cyclic shifts of the binary sequence $\mathfrak{F}(\sigma)$ represent the same admissible pattern. Among all the cyclically shifted binary sequences of $\mathfrak{F}(\sigma)$, the sequence that corresponds to the largest binary number is called the \cL-way arranged pattern. Similarly, the cyclically shifted binary sequence that corresponds to the smallest binary number is called the \cR-way arranged pattern. Observe that a \cL-way arranged pattern always begins with a \cR and ends with a \cL, whereas a \cR-way arranged pattern always begins with a \cL and ends with a \cR. \begin{Lemma}\label{arranged} The \cR-way arranged pattern gives the location for determining $\mu_{1}$ and the \cL-way arranged pattern gives the location for determining $\mu_{2}$. \end{Lemma} \begin{proof} Every inequality $x_{i}\leq 0$ gives an upper bound for $\mu$, whereas every inequality $x_{i}> 0$ gives a lower bound for $\mu$. First let us consider a \cL-atomic pattern. From the Atomic Lemma one knows that for a chain of consecutive \cLs, the upper bound becomes tighter with each subsequent \cL.
As a result, the value of the upper bound becomes smaller. For an atomic pattern $\mathcal{L^{\mathrm n}R}$, if we rearrange the symbols with all the \cLs following the \cR, then the last \cL gives the minimum upper bound, i.e., the value of $\mu_2$. This is indeed the \cL-way arrangement for the atomic pattern. Meanwhile, the lower limit for $\mu$ is obtained from the lone \cR in the pattern, and the \cR-way arrangement would have this \cR as the last symbol. A similar argument applies for a \cR-atomic pattern. Let us now consider a molecular pattern made up of \cL-atomic patterns. Following the fractal-like argument that we have used before, this molecular pattern can be recursively rewritten as a pattern of new symbols, until one obtains an atomic pattern in those new symbols. If the new symbols are $\mathcal{L'}$ and $\mathcal{R'}$, then using the first part of this proof, we know that the upper and lower bounds can be found from the specific $\mathcal{L'}$-way arrangement and $\mathcal{R'}$-way arrangement. On expanding these symbols into the original string of \cLs and \cRs, one can then argue that it is indeed the \cL-way arranged pattern and the \cR-way arranged pattern that define the positions of the \cL and \cR that give the tightest upper and lower bounds for $\mu$. \end{proof} \begin{example} Let us consider an example to demonstrate the above lemma. Consider a pattern of the form \newline $\mathcal{RLRLLRLLRLRLLRLRLLRLL}$ with 21 symbols. If we start with $x_0$ in \cR and write out the equations for each $x_i$, then we obtain an expression for $x_0$ in terms of $a,b,\mu,l$ by equating $x_0$ to $x_{21}$. Assuming that we know $a,b,l$, each of the inequalities corresponding to the $x_i$s gives us an upper or lower bound for $\mu$. \begin{subfigures} \begin{figure}[ht!] \begin{center} \input{explain.pstricks} \caption{\cL-way arranged pattern} \label{examp} \end{center} \end{figure} \begin{figure}[ht!]
\begin{center} \input{explain1.pstricks} \caption{\cR-way arranged pattern} \label{examp1} \end{center} \end{figure} \end{subfigures} In Figure \ref{examp}, the pattern is arranged in the \cL-way, whereas in Figure \ref{examp1}, the pattern is arranged in the \cR-way. From these patterns, one can conclude that the upper bound $\mu_2$ for such a pattern is obtained by considering the 5-th \cL in the pattern $\mathcal{RLRLLRLLRLRLLRLRLLRLL}$, that is, by considering the inequality arising from $x_7$ at position $7$. Meanwhile, the lower bound $\mu_1$ for such a pattern is obtained by considering the 7-th \cR that appears in the pattern, that is, the inequality arising from $x_{15}$ at position $15$. For example, if one assumes $a = 0.85, b = 0.8$ and $l = -1$, then one obtains lower bounds at every position having \cR, that is, at positions $0,2,5,8,10,13,15,18$. These values turn out to be $0.3085, 0.3553, 0.3235, 0.3054, 0.3532, 0.3223, 0.3650, 0.3290$ respectively. Thus, the value of $\mu_1 = 0.3650$ comes from the inequality at position $15$ of the original pattern. The cyclic shift that brings the \cR at position $15$ to the last position is indeed the \cR-way arrangement. Similarly, one obtains upper bounds at positions $1,3,4,6,7,9,11,12,14,16,17,19,20$ -- the values obtained are $0.4270, 0.4671, 0.3880, 0.4399, 0.3658, 0.4243, 0.4652, 0.3865, 0.4388, 0.4753, 0.3946, 0.4445, 0.3696$ respectively. Thus, the value of $\mu_2 = 0.3658$ comes from the \cL at position $7$. The cyclic shift that brings the \cL at position $7$ to the last position is the \cL-way arrangement. Thus, the above pattern appears for $\mu$ in the range $(0.3650, 0.3658]$. It was also observed that the inequalities one obtains are exactly the same, no matter which cyclic shift one considers as the original pattern. \end{example} \section{Conclusions} In this paper, we have examined stable periodic orbits of piecewise-smooth systems analytically.
Using a model given in the literature, we first concluded that stable periodic orbits appear only for certain values of the parameters. We considered the case where the parameters $a,b\in (0,\,1)$ and $l = -1$. With these parameters, it turns out that stable periodic orbits exist for $\mu \in (0,1]$. We have shown several interesting results about these periodic orbits. It was shown that stable orbits of any periodicity exist in such a system. The exact patterns for these periodic orbits were determined. It was shown that all periodic orbits can be thought of as a combination of at most two distinct atomic patterns of successive cardinality. Further, it was shown that, given any $n$, there are precisely $\phi(n)$ distinct types (patterns) of periodic orbits with cardinality $n$. We have also given an algorithm for determining the range of $\mu$ where the periodic orbits display a particular pattern. \bibliographystyle{IEEEtran}
\section{INTRODUCTION} Elucidating the formation mechanism of relativistic jets in active galactic nuclei (AGNs) is one of the greatest challenges of astrophysics in this century (e.g., Blandford and Znajek 1977; McKinney 2006; Komissarov et al. 2007; McKinney et al. 2012). The plasma composition of jets is a fundamental but difficult issue (see Begelman et al. 1984 for a review), because the emission timescales of the bulk population, such as low-energy electrons/positrons and protons, are too long. To examine plasma composition, discrete blobs in blazar jets have been utilized over the years. So far, three approaches have been pursued. The first is based on the synchrotron self-absorption limit combined with the total kinetic powers of jets (Reynolds et al. 1996; Hirotani et al. 1999, 2000; Hirotani 2005). The literature indicates the existence of $e^{\pm}$ pair plasma in M 87, 3C 279 and 3C 345. The second is the constraint by the detection of circular polarization. Wardle et al. (1998) and Homan et al. (2009) examined the case of 3C 279 and found that the minimum Lorentz factor of non-thermal electrons/positrons should be much larger than unity for electron-proton (hereafter $e/p$) content. They rather favored an alternative possibility of dominant $e^{\pm}$ pair content with a small minimum Lorentz factor of non-thermal electrons/positrons (see, however, Ruszkowski and Begelman 2002). The third approach is the constraint from the absence of bulk-Compton emission in flat spectrum radio quasars (Sikora and Madejski 2000; Ghisellini \& Tavecchio 2010), and it has been observationally tested for PKS~1510-089 and SWIFT~J0746.3+2548 (Kataoka et al. 2008; Watanabe et al. 2009). The same approach has also been applied to the kiloparsec-scale knots in PKS~0637-752 (Georganopoulos et al. 2005; Uchiyama et al. 2005; Mehta et al. 2009). They claim that jets contain more $e^{+}e^{-}$ pairs than protons, but that jets are dynamically dominated by protons.
However, it should be noted that the estimate of the total kinetic power $L_{\rm j}$ of each blob is difficult because of the existence of invisible components such as low-energy electrons/positrons and protons. Therefore, the assumption of constant $L_{\rm j}$ was made, and $L_{\rm j}$ was inferred from non-thermal emissions. Since plasma composition is sensitive to $L_{\rm j}$, a better estimate of $L_{\rm j}$ is essential. Regarding the estimate of $L_{\rm j}$, it is essential to take into account the thermal component (e.g., Kino and Takahara 2008). Cocoons associated with Fanaroff-Riley~I and ~II (FR~I and FR~II) radio galaxies are also known to be good tools for exploring plasma composition. In contrast to blobs in blazars, investigations using cocoon dynamics allow us a better estimate of the energy injection into the cocoon. The total pressure $P$ can be estimated with fewer uncertainties based on the dynamical interaction between jets and the intra-cluster medium (ICM), and $P$ involves the contributions of invisible components (e.g., Rawlings and Sanders 1991; Fabian et al. 2002). For FR~I radio galaxies, many authors have discussed the ratio of $P$ to the pressure of non-thermal electrons ($P_{-}^{\rm NT}$) for various sources based on observed non-thermal emissions (e.g., Dunn et al. 2005; Croston et al. 2005; Rafferty et al. 2006; De Young 2006; B{\^i}rzan et al. 2008). First of all, we should emphasize that these studies indicate that the total pressure $P$ tends to be larger than that of non-thermal electrons, i.e., $P>P_{-}^{\rm NT}$ (e.g., B{\^i}rzan et al. 2008). This means that a finite pressure of low-energy electrons/positrons and/or protons is required in these sources. The derived $P/P_{-}^{\rm NT}$ values in the previous work extend over a wide range, from the order of unity to thousands (e.g., B{\^i}rzan et al. 2008; Cavagnolo et al. 2010).
For FR~I sources, however, an entrainment process of the surrounding medium via the jet boundary layer could work (e.g., De Young 1993; Bicknell 1984; Rossi et al. 2008), and this process makes jets heavier. Therefore, jets in FR~I sources could undergo severe proton loading during their propagation, and this could cause the large scatter of $P/P_{-}^{\rm NT}$. Instead, in this work, we focus on FR~II radio galaxies (Fig. \ref{fig:cocoon}) because of the important advantage they offer. In contrast to FR~I sources, we know from relativistic hydrodynamic simulations that no significant entrainment appears for FR~II sources (Scheck et al. 2002; Mizuta et al. 2004). Therefore, a plasma composition test for FR~II radio galaxies would allow us to give better constraints on plasma composition in AGN jets without an entrainment effect. Regarding an observational indication of a difference between total and non-thermal pressures in FR~II radio galaxies, Ito et al. (2008) (hereafter I08) recently examined the following sources: Cygnus A, 3C~223, 3C~284 and 3C~219. In I08, they show that the energy density of the total plasma is larger than the energy density of non-thermal electrons by a factor of 4-310 in the case of the minimum-energy condition (e.g., Miley 1980; Kellermann and Pauliny-Toth 1981). This implies that the minimum-energy condition is violated, particle energy is dominant, and low-energy electrons/positrons and/or protons (i.e., cosmic-rays) are required to explain the total $P$ in these FR~II sources. In \S 2, we describe the basic idea and assumptions of our method. In \S 3, we briefly explain the dynamical determination of the total pressure in the cocoon. In \S 4, we express $P$ as a function of the number density ratio of protons to electrons. In \S 5, we explain details of the plasma composition test. It is applied to Cygnus A in \S 6. Summary and discussions are given in \S 7.
\section{Method and problem setting} Here, we describe the basic idea and assumptions of our method. In this work, the number densities of protons ($n_{p}$), positrons ($n_{+}$) and electrons ($n_{-}$) are related using the parameter $\eta$ as follows: \begin{eqnarray} \label{eq:eta} n_{p}&\equiv&\eta n_{-} \nonumber \\ n_{+}&=& (1-\eta)n_{-} \quad (0\leq \eta\leq 1) , \end{eqnarray} where the latter relation is derived from the charge neutrality condition. The case of $\eta=0$ corresponds to a pure $e^{\pm}$ plasma, while $\eta=1$ corresponds to a pure $e/p$ plasma. We write $n_{p}=n_{p}^{\rm T}+n_{p}^{\rm NT}$, $n_{-}=n_{-}^{\rm T}+n_{-}^{\rm NT}$, $n_{+}=n_{+}^{\rm T}+n_{+}^{\rm NT}$, and $n_{\pm}=n_{-}+n_{+}$, where $n_{\pm}$ is the sum of the total number densities of electrons and positrons. Hereafter, the superscripts T and NT represent thermal and non-thermal components, respectively. The distinction between thermal and non-thermal particles may not be trivial for relativistic plasmas. In this paper, we refer to the thermal component as a Maxwellian-like distribution characterized by its temperature, while the non-thermal component consists of particles following a power-law distribution characterized by the power-law index and minimum and maximum energies, as detailed below. Since we focus on relativistic plasmas in the present work, the thermal component correspondingly has a relativistic temperature. Hence one should be cautious, since most observational papers use the term thermal component for non-relativistic plasmas (e.g., Garrington and Conway 1991). The allocation of the partial pressure of each plasma population is the central concern of this paper.
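The bookkeeping of Eq. (\ref{eq:eta}) can be sketched in a few lines; the density value used is purely illustrative, not a measurement.

```python
# Sketch of the composition parametrization in Eq. (eq:eta): given the
# electron number density n_minus and the proton-to-electron ratio eta,
# the positron density follows from charge neutrality
# (n_p + n_plus = n_minus).  The numbers below are illustrative only.
def composition(n_minus, eta):
    assert 0.0 <= eta <= 1.0
    n_p = eta * n_minus                  # protons
    n_plus = (1.0 - eta) * n_minus       # positrons
    return n_p, n_plus

n_minus = 1.0e-6                         # illustrative density (cm^-3)
for eta in (0.0, 0.5, 1.0):              # pure pair ... pure e/p plasma
    n_p, n_plus = composition(n_minus, eta)
    assert abs((n_p + n_plus) - n_minus) < 1e-18   # charge neutrality
print("charge neutrality holds for all eta")
```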
In general, $P$ is decomposed as \begin{eqnarray}\label{eq:p-def} P&=&P_{-}+P_{+}+P_{p}+P_{\rm B} \nonumber \\ &=& P_{-}^{\rm T}+P_{+}^{\rm T} + P^{\rm NT}_{-} + P^{\rm NT}_{+}+ P_{p}^{\rm T}+ P^{\rm NT}_{p} +P_{\rm B}, \end{eqnarray} where $P^{\rm T}_{-}$, $P^{\rm T}_{+}$, $P^{\rm T}_{p}$, $P^{\rm NT}_{-}$, $P^{\rm NT}_{+}$, $P^{\rm NT}_{p}$, and $P_{\rm B}$ are the partial pressures of thermal (T) electrons, thermal positrons, thermal protons, non-thermal (NT) electrons, non-thermal positrons, non-thermal protons, and the magnetic pressure, respectively. We also define $P_{\pm}=P_{-}+P_{+}$ as the sum of the total pressures of electrons and positrons. Throughout this work, we do not include the magnetic pressure because it is sub-dominant in $P$. Isobe et al. (2005) summarize the energy density of energetic electrons as typically being 10 times larger than that of magnetic fields in various radio lobes (e.g., Isobe et al. 2002; Tashiro et al. 1998, 2009; Hardcastle and Croston 2010), and this also holds in Cygnus A (Yaji et al. 2010). \subsection{Basic idea of the method} The essence of our method is as follows. First, the total pressure in the cocoon ($P$) is determined through dynamical considerations following I08, where $P$ was obtained via the comparison of the expanding cocoon model with radio observations. Second, the average energy per particle in the cocoon is evaluated. It is essential that our formulation is based on the basic conservation laws of mass, momentum, and energy in the cocoon. Since this depends on the coupling of protons to the electrons/positrons, we examine several representative cases with different equations of state. Third, $n_{-}$ can be partially constrained by using the absence of thermal bremsstrahlung emission from the cocoon and the supply rate of electrons from the hot spots. Finally, $n_{p}$ and $P_{p}$ can be obtained by inserting the obtained quantities into the equation of state (EOS).
\subsection{On particle distribution functions} Since observational data at low frequencies below GHz are quite limited, it is hard to explore the properties of low-energy electrons (including positrons). Bearing this difficulty in mind, we consider plausible cases for the electron distribution function. As the canonical case, referred to as case (a), we consider two-temperature thermal plasmas, where protons and electrons have different temperatures and the contributions of non-thermal components to the total pressure are negligible. As an alternative, we also examine case (b), where protons and electrons take the same temperature without non-thermal components. We further explore two cases (c) and (d), in which the non-thermal population makes a dominant contribution to the total pressure with a negligible pressure of the thermal population. For the non-thermal population, we assume the power-law distribution functions: \begin{eqnarray}\label{eq:ne-c} n_{-}^{\rm NT}(\gamma_{-}) &\propto& \gamma_{-}^{-s_{e}} (\gamma_{-,\rm min}\le \gamma_{-} \le \gamma_{-,\rm max}), \nonumber \\ n_{p}^{\rm NT}(\gamma_{p}) &\propto& \gamma_{p}^{-s_{p}} (\gamma_{p,\rm min}\le \gamma_{p} \le \gamma_{p,\rm max}), \end{eqnarray} for case (c) with $s_{p}=s_{e}>2$. Observations of the spectral index in the radio lobe of Cygnus A suggest $s_{e}>2$ (e.g., Carilli et al. 1991; Yaji et al. 2010). Lastly, we consider case (d), in which the number spectrum of non-thermal electrons is given by a broken power law: \begin{eqnarray}\label{eq:ne-d} n_{-}^{\rm NT}(\gamma_{-}) &\propto& \left\{ \begin{array}{ll} \gamma_{-}^{-s_{e,1}} & \mbox{$(\gamma_{-,\rm min}\le \gamma_{-} \le \gamma_{-,\rm crit})$}, \\ \gamma_{-,\rm crit}^{s_{e,2}-s_{e,1}}\gamma_{-}^{-s_{e,2}} & \mbox{$(\gamma_{-,\rm crit}\le \gamma_{-} \le \gamma_{-,\rm max})$}, \end{array} \right.
\nonumber \\ n_{p}^{\rm NT}(\gamma_{p}) &\propto& \gamma_{p}^{-s_{p}} (\gamma_{p,\rm min}\le \gamma_{p} \le \gamma_{p,\rm max}), \end{eqnarray} where $s_{e,1}<2$ and $s_{p}>2$ are satisfied. This model is based on Stawarz et al. (2007), who suggest that observed spectra at the jet termination shock (hot spot) of FR II jets (Cygnus A) can be explained by a break at the non-thermal electron energy (hereafter $\gamma_{\pm,\rm crit}$). This type of spectrum may be due to the absorption of electromagnetic waves emitted at the harmonics of the cyclotron frequency of cold protons, as discussed by Hoshino et al. (1992) and Amato and Arons (2006). Some observations of other FR~II sources could also be compatible with this picture (e.g., Perlman et al. 2010 for 3C~445). For cases (c) and (d), the minimum energies of non-thermal electrons/positrons ($\gamma_{\pm,\rm min}m_{e}c^{2}$) and protons ($\gamma_{p,\rm min}m_{p}c^{2}$) are generally assumed to be \begin{eqnarray} \gamma_{\pm,\rm min}\approx \gamma_{p,\rm min}\approx \Gamma_{\rm j} , \end{eqnarray} which is expected when protons and electrons/positrons are separately heated and accelerated at termination shocks. On the other hand, the values of the maximum energies of non-thermal pairs ($\gamma_{\pm,\rm max}m_{e}c^{2}$) and protons ($\gamma_{p,\rm max}m_{p}c^{2}$) are largely uncertain. While $\gamma_{\pm,\rm max}m_{e}c^{2}$ may be significantly affected by radiative cooling, $\gamma_{p,\rm max}m_{p}c^{2}$ may reach the range of the highest-energy cosmic rays (e.g., Takahara 1990; Rachen and Biermann 1993). It is reasonable to suppose that $\gamma_{\pm,\rm max}\gg \gamma_{\pm,\rm min}$ and $\gamma_{p,\rm max}\gg \gamma_{p,\rm min}$. \section{Total pressure $P$} In this section, we briefly describe the basic idea of estimating the total pressure $P$. In Fig. \ref{fig:cocoon} we show a cartoon of the interaction of the jet and the ICM. Heating and acceleration processes work at hot spots, and those particles are injected into the cocoon.
The cocoon model was proposed by Begelman and Cioffi (1989), in which the dissipated energy of the jet bulk motion is the origin of the total pressure of the cocoon, and a cocoon of FR~IIs is expected to be overpressured against the ICM pressure ($P_{\rm ICM}$) with a significant sideways expansion. Therefore, the assumption of $P=P_{\rm ICM}$ is not valid. We have proposed the method of dynamical constraint on $P$ by comparison of the cocoon model with the actually observed morphology of the cocoons (Kino and Kawakatu 2005; I08), and the method has been applied to various radio lobes (e.g., Machalski et al. 2010). We use this model in the present work. The reliability of the expanding cocoon model is well examined in Kawakatu and Kino (2006). The results of relativistic hydrodynamical simulations of Scheck et al. (2002) and Perucho and Marti (2007) support the above analytic model. The mass and energy injections from the jet into the cocoon, which govern the cocoon pressure $P$ and mass density $\rho$ averaged over the source age ($t_{\rm age}$), are written as \begin{eqnarray}\label{eq:pc} \frac{{\hat \gamma}}{{\hat \gamma}-1} \frac{PV}{t_{\rm {age}}}= 2 T^{01}_{\rm j} A_{\rm j} \equiv 2 L_{\rm j}, \quad T_{\rm j}^{01}=\rho_{\rm j}c^{2}\Gamma_{\rm j}^{2}v_{\rm j} , \end{eqnarray} \begin{eqnarray}\label{eq:rho} \frac{\rho V}{t_{\rm age}}= 2 J_{\rm j} A_{\rm j} , \quad J_{\rm j}=\rho_{\rm j}\Gamma_{\rm j}v_{\rm j} , \end{eqnarray} where ${\hat \gamma}$, $V$, $A_{\rm j}$, $T_{\rm j}^{01}$, $J_{\rm j}$, $\rho_{\rm j}$, and $\Gamma_{\rm j}$ are the adiabatic index of the plasma in the cocoon, the volume of the cocoon, the cross-sectional area of the jet, the total energy flux, the rest-mass flux, the rest-mass density, and the bulk Lorentz factor of the jet, respectively. The volume $V$ is evaluated as $V=2(\pi/3){\cal R}^{2}LS^{3}$, where $LS$ and $\cal R$ are the linear size of the cocoon along the jet axis and the aspect ratio of the cocoon, respectively.
Here we denote physical quantities of the jet with the subscript j. Throughout this work, we focus on a relativistic jet. Correspondingly, the shocked plasma has relativistic energy; thus we take $\hat \gamma=4/3$. The $PdV$ work done by the cocoon against the ICM is taken into account in the energy equation, Eq. (\ref{eq:pc}), following I08. For a given $\rho_{\rm ICM}$, we can dynamically estimate the total pressure $P$ by measuring $LS$, ${\cal R}$, and the head cross-sectional area of the cocoon. Here the relations $LS=\beta_{\rm hs}ct_{\rm age}$ and ${\cal R}\equiv l_{\rm c}/LS<1$ hold, where $l_{\rm c}$ and $\beta_{\rm hs}c$ are the lateral size of the cocoon and the advance velocity of the hot spot, respectively. Since ${\cal R}$ and $\beta_{\rm hs}$ have some uncertainties, the actual $P$ is bounded by maximum and minimum values \begin{eqnarray}\label{eq:prange} P_{\rm min}\le P \le P_{\rm max}. \end{eqnarray} Thus we can obtain the total pressure of the cocoon $P$, which includes the partial pressures of non-radiating particles. The estimate of $P$ has actually been carried out by I08 for some FR II sources, and we adopt the $P$ values of I08 in this work. \section{Pressure as a function of $\eta$} In this section, we express $P$ as a sum of the partial pressures and represent it as a function of $\eta$ (we call this the equation of state, EOS) for the respective cases. \subsection{Case (a)} First, we examine the canonical case of a two-temperature thermal plasma. Here we assume that $P_{-}^{\rm NT}=P_{+}^{\rm NT}=P_{p}^{\rm NT}=0$ and $n_{-}^{\rm NT}=n_{+}^{\rm NT}=n_{p}^{\rm NT}=0$. The EOS in the cocoon filled with relativistic plasma is given by \begin{eqnarray}\label{eq:EOS} P&\approx &P^{\rm T}_{\pm}+P^{\rm T}_{p} \nonumber \\ &=&(n_{-}^{\rm T}+n_{+}^{\rm T})kT_{\pm}+n_{p}^{\rm T}kT_{p} , \end{eqnarray} where $T_{\pm}$ and $T_{p}$ are the electron/positron temperature and the proton temperature, respectively.
Hereafter we adopt $T_{\pm}=T_{-}=T_{+}$, where $T_{-}$ and $T_{+}$ are the temperatures of electrons and positrons, respectively. Following Kino et al. (2007), we can obtain $T_{\pm}$ and $T_{p}$ from Eqs. (\ref{eq:pc}), (\ref{eq:rho}), and (\ref{eq:EOS}): \begin{eqnarray} \label{eq:T-two} kT_{\pm} = \frac{\Gamma_{\rm j}m_{e}c^{2}}{4}, \quad kT_{p} = \frac{\Gamma_{\rm j}m_{p}c^{2}}{4} , \end{eqnarray} which are typically given by $kT_{\pm} = 1.3~ \left(\frac{\Gamma_{\rm j}}{10}\right) ~ {\rm MeV}$, and $kT_{p} = 2.3 ~ \left(\frac{\Gamma_{\rm j}}{10}\right) ~ {\rm GeV}$. Here we assume the limit of inefficient $e/p$ coupling, i.e., protons and electrons are separately thermalized so that $kT_{\pm}= (m_{e}/m_{p}) k T_{p}$, since plasma number densities in large-scale jets are conservatively expected to be too dilute to achieve efficient $e/p$ coupling (e.g., Kino et al. 2007 and references therein). The radiative cooling timescale is so long that cooling is negligible. It is worth noting that the geometrical factors in Eqs. (\ref{eq:pc}) and (\ref{eq:rho}) are completely canceled out, and $kT_{\pm}$ and $kT_{p}$ are governed only by $\Gamma_{\rm j}$. Inserting Eq. (\ref{eq:T-two}) into Eq. (\ref{eq:EOS}), we rewrite the total pressure in the cocoon $P$ as \begin{eqnarray}\label{eq:P} P(\eta)&=& 2.05 \times 10^{-6}~ n_{-}^{\rm T}\left[(2-\eta)+\eta\frac{m_{p}}{m_{e}}\right] \left(\frac{\Gamma_{\rm j}}{10}\right) ~\rm erg \ cm^{-3} , \nonumber \\ \end{eqnarray} where the first and second terms in the square bracket correspond to the partial pressures of pairs and protons, respectively. \subsection{Case (b)} As an opposite extreme to case (a), here we consider the case of one-temperature plasma. In this example, some of the proton energy is somehow transferred to electrons/positrons to achieve efficient $e/p$ coupling. Then hotter electrons/positrons and colder protons are produced. From the condition $kT_{\pm}=kT_{p}$, and Eqs. 
(\ref{eq:pc}) and (\ref{eq:rho}), we obtain \begin{eqnarray} kT_{\pm} =kT_{p} =\frac{\Gamma_{\rm j}m_{e}c^{2}}{8} \left[(2-\eta)+\eta \frac{m_{p}}{m_{e}}\right] . \end{eqnarray} In this case, each population (i.e., $p/e^{-}/e^{+}$) has the same kinetic energy. The total pressure is given by Eq. (\ref{eq:P}), the same as in case (a). The essential difference from case (a) is that $kT_{\pm}$ in case (b) is much higher than that in case (a). \subsection{Case (c)} For comparison with the canonical case (a), we examine case (c), in which the cocoon pressure is dominated by non-thermal particles. Case (c) applies when the spectral indices of the non-thermal particle energy distributions satisfy $s_{p}=s_{e}>2$, as some theoretical work on relativistic shocks suggests (e.g., Bednarz and Ostrowski 1998; Kirk et al. 2000; Achterberg et al. 2001; Spitkovsky 2008; Sironi and Spitkovsky 2011) and as the radio lobes of Cygnus A show $s_{e}>2$ (e.g., Carilli et al. 1991; Yaji et al. 2010). In this case, electrons and protons with the lowest energies are the main carriers of energy. Then, the evaluation of the partial pressures of non-thermal plasma is basically the same as in case (a) when we replace $kT_{\pm}$ with $\gamma_{\pm,\rm min}m_{e}c^{2}$ and $kT_{p}$ with $\gamma_{p,\rm min}m_{p}c^{2}$. Then $P$ is given by \begin{eqnarray} P(\eta)= \frac{\Gamma_{\rm j}n_{-}^{\rm NT}m_{e}c^{2}}{3} \frac{s_{e}-1}{s_{e}-2}\left[(2-\eta)+\eta \frac{m_{p}}{m_{e}}\right] . \end{eqnarray} From this, it is clear that we can appropriately evaluate $\eta$ for case (c) by replacing $n_{-}^{\rm T}$ with $n_{-}^{\rm NT}$, in the same way as in case (a). \subsection{Case (d)} Here we examine the pressure of non-thermal electrons when they follow the broken power-law spectrum of Eq. (\ref{eq:ne-d}). Stawarz et al. (2007) indicated $\gamma_{\pm,\rm crit}\sim m_{p}/m_{e}$ for the hot spots in Cygnus A. 
The energy of the electron component is governed by those with the break energy, while the number is dominated by those with the lowest energies. Since $s_{p}>2$ is satisfied, the lowest-energy protons carry the most energy. Therefore, the total pressure $P$ is expressed as \begin{eqnarray}\label{eq:Pcase-e} P(\eta)= \frac{\Gamma_{\rm j}n_{-}^{\rm NT}m_{e}c^{2}}{3} \left[ \frac{s_{e,1}-1}{-s_{e,1}+2} A_{\pm}(2-\eta)+ \frac{s_{p}-1}{s_{p}-2} \eta \frac{m_{p}}{m_{e}}\right] , \nonumber \\ \end{eqnarray} where $A_{\pm}= (\gamma_{\pm,\rm crit} /\gamma_{\pm,\rm min})^{-s_{e,1}+2}$. Thus $\eta$ can be evaluated when we replace $n_{-}^{\rm T}$ with $n_{-}^{\rm NT}$ and include the factor $A_{\pm}$. \section{Testing plasma composition} We explain the method for constraining the plasma composition of AGN jets for the thermal plasma cases (a) and (b) in 5.1, 5.2, and 5.3. The application to the non-thermal plasma cases (c) and (d) can be readily understood and is explained in 5.4. \subsection{Characteristic pressures} First, we define characteristic pressures which divide the number-density/pressure plane into several regions, as shown in Fig. \ref{fig:npelectron}. As a preparation, here we define $\eta_{\rm eq}$ \begin{eqnarray} \eta_{\rm eq}&\equiv& \frac{2}{m_{p}/m_{e}-1}= 1.1\times 10^{-3} \quad (P_{\pm}=P_{p}) . \end{eqnarray} The partial pressure of proton-associated electrons is implicitly neglected, since it is subdominant in the case of inefficient $e/p$ coupling. The line with $n_{-}\approx 1\times 10^{3}n_{\rm p}$ divides the pair-supported and proton-supported cocoons in the limit of inefficient $e/p$ coupling. By definition, the cocoon with $\eta>\eta_{\rm eq}$ is proton-supported (dark gray region in Fig. \ref{fig:npelectron}), while the cocoon with $\eta<\eta_{\rm eq}$ is a pair-supported one (light gray region in Fig. \ref{fig:npelectron}). 
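The boundary value $\eta_{\rm eq}$ quoted above is easy to verify numerically; the minimal check below simply evaluates the text's definition $\eta_{\rm eq}=2/(m_p/m_e-1)$.

```python
# Numerical check of the boundary eta_eq = 2 / (m_p/m_e - 1) defined in the
# text, at which the pair and proton partial pressures balance in case (a).

m_ratio = 1836.15                 # proton-to-electron mass ratio
eta_eq = 2.0 / (m_ratio - 1.0)
print(f"eta_eq = {eta_eq:.2e}")   # ~1.1e-3, as quoted in the text
```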
When $n_{-}$ is bounded by $n_{-, \rm min}$ and $n_{-, \rm max}$, as argued in the next subsection, the allowed region of $n_{-}$ is segmented by characteristic pressures corresponding to the characteristic values of $n_{-}$ and $\eta$, i.e., $n_{-,\rm min}$, $n_{-,\rm max}$, $\eta=0$, $\eta=\eta_{\rm eq}$, and $\eta=1$. Here, we define six characteristic pressures as follows: \begin{eqnarray} P(\eta=0;n_{-}=n_{-,\rm min}) &\equiv& P_{0,\rm min}, \nonumber \\ P(\eta=\eta_{\rm eq};n_{-}=n_{-,\rm min})&\equiv& P_{\rm eq,min}, \nonumber \\ P(\eta=0;n_{-}=n_{-,\rm max}) &\equiv& P_{0,\rm max}, \nonumber \\ P(\eta=\eta_{\rm eq};n_{-}=n_{-,\rm max})&\equiv& P_{\rm eq,max},\nonumber \\ P(\eta=1;n_{-}=n_{-,\rm min}) &\equiv& P_{1,\rm min}, \nonumber \\ P(\eta=1;n_{-}=n_{-,\rm max}) &\equiv& P_{1,\rm max} . \end{eqnarray} Then, by definition, we have the following relations \begin{eqnarray} P_{0,\rm min}:P_{\rm eq,min} : P_{0,\rm max}:P_{\rm eq,max} :P_{1,\rm min} :P_{1,\rm max} \nonumber \\ = 1:2 :\frac{n_{-,\rm max}}{n_{-,\rm min}} :2\frac{n_{-,\rm max}}{n_{-,\rm min}} :\frac{m_{p}}{m_{e}} :\frac{m_{p}}{m_{e}}\frac{n_{-,\rm max}}{n_{-,\rm min}} , \end{eqnarray} where we approximate $2-\eta_{\rm eq}\approx 2$. To evaluate these pressures, we estimate $n_{-,\rm min}$ and $n_{-,\rm max}$ in the next subsection. \subsection{Estimation of $n_{-}$} Here we constrain the number density of electrons in the cocoon ($n_{-}$). We denote the lower and upper limits of $n_{-}$ as $n_{-,\rm min}$ and $n_{-,\rm max}$, respectively. The values of $n_{-,\rm min}$ and $n_{-,\rm max}$ are constrained independently, as we show below. \subsubsection{Lower limit of $n_{-}$} Here we estimate the lower limit of $n_{-}$ and examine the case in which the number density of thermal electrons is larger than that of non-thermal electrons ($n_{-}^{\rm T}\ge n_{-}^{\rm NT}$), since non-thermal electrons are partially injected from the background thermal electrons. 
(The extreme cases of $n_{-}^{\rm T}\le n_{-}^{\rm NT}$, which are identical to cases (c) and (d), will be discussed later.) Since the shocked plasma at the hot spots expands sideways and is injected into the cocoon, we can estimate $n_{-}^{\rm NT}$ by using $n_{\rm hs}^{\rm NT}$, where $n_{\rm hs}^{\rm NT}$ is the number density of non-thermal electrons in a hot spot. We stress that $n_{\rm hs}^{\rm NT}$ is well constrained by the observed non-thermal emission of hot spots for FR II sources (see, e.g., Harris and Krawczynski 2006 for a review). By connecting the number density from the jet to the cocoon based on Eq.~(\ref{eq:rho}) and the shock conditions along the jet axis shown in Kino and Takahara (2004) (hereafter KT04), we obtain \begin{eqnarray} n_{-,\rm min}&=& \frac{n^{\rm NT}_{\rm hs} A_{\rm j} LS} {2V\beta_{\rm hs}} . \end{eqnarray} In general, the number density of non-thermal electrons with a power-law distribution $n^{\rm NT}_{\rm hs} \propto \int^{\gamma_{\rm hs,max}} _{\gamma_{\rm hs,min}}\gamma_{\rm hs}^{-s_{\rm hs}} d\gamma_{\rm hs}$ is given by \begin{eqnarray}\label{eq:nhs} n^{\rm NT}_{\rm hs}\propto \gamma_{\rm hs,min}^{-s_{\rm hs}+1} . \end{eqnarray} We assume the standard values of $s_{\rm hs}\approx 2$ and $\gamma_{\rm hs,min}\approx \Gamma_{\rm j}$. \subsubsection{Upper limit of $n_{-}$} The upper limit of $n_{-}$ can be constrained by the absence of thermal bremsstrahlung from hot electrons in the cocoon/lobes in $X$-ray observations (Wilson et al. 2000, 2006). The observed $X$-ray emissions associated with radio lobes are non-thermal, and there is no evidence for thermal $X$-ray emission from cocoons/lobes (see Harris and Krawczynski 2006 for a review). 
From this, we can safely use the condition $L_{{\rm X, obs}}>L_{\rm brem}(n_{-}^{\rm T},T_{\pm})$, where $L_{\rm brem}/V= \alpha_{\rm f}r_{e}^{2}m_{e}c^{3}(n_{-}^{\rm T})^{2}F_{\pm}(\Theta_{\pm}) ~{\rm erg~s^{-1}~cm^{-3}}$, $F_{\pm}(\Theta_{\pm})= 48\Theta_{\pm}(\ln 1.1\Theta_{\pm}+5/4)$, and $\Theta_{\pm}=kT_{\pm}/m_{e}c^{2}$, for bremsstrahlung at relativistic temperatures (Eq. (22) in Svensson 1982), and $\alpha_{\rm f}$ and $r_{e}$ are the fine structure constant and the classical electron radius, respectively. We then obtain the maximum $n_{-}$ as follows: \begin{eqnarray}\label{eq:Lx} n_{-,\rm max} = \left(\frac{L_{{\rm X, obs}}} {V\alpha_{\rm f}r_{e}^{2}m_{e}c^{3} F_{\pm}(\Theta_{\pm})}\right)^{1/2} . \end{eqnarray} It is worth commenting on whether the upper limit of $n_{-}$ can also be constrained by analysis of the internal depolarization of the radio lobes. Relativistic plasma makes a smaller contribution to Faraday rotation, since the electron inertia increases in the relativistic regime, which suppresses the rotation of the polarization angle (e.g., Ichimaru 1973; Melrose et al. 1997; Quataert and Gruzinov 2000; Huang and Shcherbakov 2011). Therefore, the rotation measure (RM) constraint is not effective in the present work. \subsection{Estimation of $n_{p}$} Once $n_{-}$ is estimated, the proton number density $n_{p}$ can be determined as \begin{eqnarray}\label{eq:nproton} n_{p} =\eta n_{-} , \end{eqnarray} by definition. Here, of course, the conditions $0\le \eta \le 1$ and $n_{-,\rm min}\le n_{-}\le n_{-,\rm max}$ are imposed. In Fig. \ref{fig:np-cartoon}, the allowed region of $n_{p}$ is added to that of $n_{-}$ shown in Fig. \ref{fig:npelectron}. In the same way as in Fig. \ref{fig:npelectron}, the plane is divided into five regions. Finally, the allowed regions of $n_{p}$ and $n_{-}$ can be obtained by imposing the allowed range of $P$: the allowed regions drawn in Fig. \ref{fig:np-cartoon} are bounded by Eq. (\ref{eq:prange}). 
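The bremsstrahlung bound can be sketched numerically with the Svensson (1982) emissivity quoted above. In the sketch below, the X-ray luminosity limit, cocoon volume, and temperature are hypothetical round numbers chosen only to illustrate the order of magnitude, not the Cygnus A values.

```python
import math

# Sketch of n_-,max from the no-thermal-X-ray condition L_X,obs > L_brem,
# with L_brem / V = alpha_f r_e^2 m_e c^3 (n_-)^2 F(Theta) and
# F(Theta) = 48 Theta (ln(1.1 Theta) + 5/4) (Svensson 1982, Eq. 22).
ALPHA_F = 1.0 / 137.036        # fine structure constant
R_E = 2.818e-13                # classical electron radius [cm]
ME = 9.109e-28                 # electron mass [g]
C = 2.998e10                   # speed of light [cm/s]

def n_minus_max(L_x, V, theta):
    """Upper limit on the thermal electron density at Theta = kT / m_e c^2."""
    F = 48.0 * theta * (math.log(1.1 * theta) + 1.25)
    return math.sqrt(L_x / (V * ALPHA_F * R_E**2 * ME * C**3 * F))

# Hypothetical inputs (placeholders, for illustration only):
n_max = n_minus_max(L_x=1.0e42, V=2.0e69, theta=2.5)  # theta ~ Gamma_j/4
print(f"n_-,max ~ {n_max:.1e} cm^-3")
```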
Thus, we can obtain the definitive allowed regions of $n_{p}$ and $n_{-}$. \subsection{Application to cases (c) and (d)} In 5.1, 5.2 and 5.3, we consider physical quantities associated with the thermal plasma of cases (a) and (b). However, these can also be applied to non-thermal plasma with the proper replacements of the number densities and average energies of particles. With regard to the average energies, we have already explained the replacements in the previous section. As for $n_{-,\rm min}$, the estimate shown in 5.2.1 can be applied to both thermal and non-thermal plasma. As for $n_{-,\rm max}$, the estimate shown in 5.2.2 can be applied only to thermal plasma, so we do not use $n_{-,\rm max}$ for cases (c) and (d). Thus we can properly estimate $\eta$ for cases (c) and (d). \section{Application to Cygnus A} Here we apply the above method to Cygnus A ($z=0.0562$), which is one of the best-studied FR II radio galaxies (e.g., Carilli and Barthel 1996; Steenbrugge et al. 2008, 2010; Yaji et al. 2010). The physical quantities of Cygnus A have been well constrained by previous work. To constrain the real values of $P$ and $n_{-}$, we carefully evaluate ${\cal R}$, $\beta_{\rm hs}$, and $\Gamma_{\rm j}$. The term ${\cal R}$ affects $n_{-}$ via the cocoon volume $V$. The pressure $P$ is directly proportional to $\Gamma_{\rm j}$. The term $\beta_{\rm hs}$ controls the source age $t_{\rm age}$, which governs the injection rates of mass and energy into the cocoon. These are summarized in 6.1. The resultant allowed regions of $n_{-}$ and $n_{p}$ are summarized in 6.2. \subsection{Viable ranges of physical quantities} We show the adopted conditions on the model parameters used for deriving the results below. We fix the cross-sectional area of the jet as $A_{\rm j}=\pi R_{\rm hs}^{2}=\pi (2~{\rm kpc})^{2}$ (Wilson et al. 2000) and the number density of the ICM just ahead of the hot spot as $n_{\rm ICM}=0.5 \times10^{-2} ~{\rm cm^{-3}}$ (shell No. 6 in Table 5 of Smith et al. 2002). 
\begin{itemize} \item {\em Cocoon morphology ${\cal R}$.} From images of the Cygnus A cocoon, we can directly constrain ${\cal R}$. The upper limit ${\cal R}\approx0.5$ is determined by the Chandra X-ray image (Wilson et al. 2000, 2006; Yaji et al. 2010). The lower limit ${\cal R}\approx0.25$ is directly measured from the 330 MHz VLA image (see also Carilli et al. 1991; Lazio et al. 2006). Therefore we set \begin{eqnarray} 0.25\le {\cal R} \le 0.5 , \nonumber \end{eqnarray} in the present work. \item {\em Cocoon head velocity $\beta_{\rm hs}$.} The cocoon head velocity, which equals the hot-spot advance velocity ($\beta_{\rm hs}$), is well constrained by the synchrotron aging method. The estimated $\beta_{\rm hs}$ has some uncertainty due to the uncertainty of the magnetic-field strength in the cocoon. From the result of the synchrotron aging diagnosis in Carilli et al. (1991), we adopt the allowed range of $\beta_{\rm hs}$ as \begin{eqnarray} 0.01 \le \beta_{\rm hs} \le 0.06. \nonumber \end{eqnarray} We emphasize that a sufficiently large uncertainty is taken into account here. The adopted value of $\beta_{\rm hs}$ is quite typical for hot spots in FR II radio galaxies (e.g., Scheuer 1995). \item {\em Lorentz factor of the jet $\Gamma_{\rm j}$.} It is difficult to determine the true velocity of the jet. At the least, we may say that the apparent velocities of blobs obtained by VLBI observations give a lower limit on the velocity of the underlying flow. A fast apparent motion of a blob at the jet base of $(0.56\pm 0.28)~c$ has been reported from VLBI observations (Bach et al. 2003). Furthermore, suggestions of superluminal motion have been made (Krichbaum et al. 1998; Bach et al. 2002), although these have not been clearly confirmed. On VLA scales, a clear asymmetry in the brightness distribution of the kpc-scale jet due to relativistic motion is seen (Perley et al. 1984). Overall, therefore, radio observations seem to indicate relativistic motion. 
Bearing this in mind, we assume that the jet is relativistic, and the four-velocity of the jet $\Gamma_{\rm j}\beta_{\rm j}$ is set as \begin{eqnarray} 1 \le \Gamma_{\rm j}\beta_{\rm j} \le 30. \nonumber \end{eqnarray} Here the upper limit is assumed as $\Gamma_{\rm j}\approx 30$ based on the statistical study of radio jets of MOJAVE sources (Lister et al. 2001, 2009; Kellermann et al. 2004). \item {\em Cocoon pressure $P$.} Using the value of $V=1\times 10^{70}{\cal R}^{2}~{\rm cm^{3}}$, we can estimate the total pressure $P$ as \begin{eqnarray}\label{eq:P-cygnusA} 8\times 10^{-11}~{\rm erg \ cm^{-3}} \le P \le 4\times 10^{-9}~{\rm erg \ cm^{-3}}. \end{eqnarray} The lower limit equals the ICM pressure of $8\times 10^{-11}~{\rm erg \ cm^{-3}}$ measured by Arnaud et al. (1984), as required by the overpressured-cocoon condition. Although the upper limit of $P$ is basically adopted from I08, the value $4\times 10^{-9}~{\rm erg \ cm^{-3}}$ is 4 times larger than the original estimate in I08. This is due to the change in the minimum value of ${\cal R}$ from 0.5 to 0.25 based on the 330 MHz VLA image. It should be stressed that our adopted allowed range of $P$ is sufficiently wide compared with all of the previous work (e.g., Carilli 1998 for a review). Note that Yaji et al. (2010) estimate $P_{-}^{\rm NT}$ in the radio lobes as $P_{-}^{\rm NT}\approx (1-2)\times 10^{-9}~{\rm erg~cm^{-3}}$ for $\gamma_{\pm}\approx 1$, which gives $P^{\rm NT}_{-}>P_{\rm min}$. So, if $P$ exactly equals the radio-lobe pressure, then the range $P_{\rm min}\le P<P^{\rm NT}_{-}$ is excluded and the allowed $P$ range becomes narrower. The allowed example with $P_{\rm min}\le P^{\rm NT}_{-} \le P\le P_{\rm max}$ corresponds to cases (c) and (d). \item {\em Non-thermal electron number density $n^{\rm NT}_{\rm hs}$.} The lower limit $n_{-,\rm min}$ largely depends on $n^{\rm NT}_{\rm hs}$. 
For $s_{\rm hs}=2$, the number density of non-thermal electrons in the hot spot can be obtained from \begin{eqnarray}\label{eq:nNT} n^{\rm NT}_{\rm hs} \approx 1 \times 10^{-3} \left(\frac{\gamma_{\rm hs,min}}{10}\right)^{-1} ~{\rm cm^{-3}} , \end{eqnarray} via detailed comparisons of the SSC model with the observed broadband spectrum (Wilson et al. 2000; KT04; Stawarz et al. 2007), where $\gamma_{\rm hs,min}\approx \Gamma_{\rm j}$. We stress that these three independent papers derive similar values of $n^{\rm NT}_{\rm hs}$, although Stawarz et al. (2007) adopt the different electron-distribution function shown in Eq. (\ref{eq:ne-d}). Furthermore, we note the importance of low-frequency radio spectra, since they affect the estimate of $n^{\rm NT}_{\rm hs}$. Regarding low-frequency radio observations, we briefly comment on the work of Lazio et al. (2006). They indicated spectral flattening and a turnover at $\sim 100~{\rm MHz}$. However, it seems difficult to determine these accurately because the spot sizes are smaller than the VLA beam sizes at the above frequencies. The LOw Frequency ARray (LOFAR) (http://www.lofar.org/) and the Square Kilometer Array (SKA) (http://www.skatelescope.org/) will, in the future, tell us the real turnover frequency with sufficiently high resolution. \item {\em Thermal electron number density $n_{-}^{\rm T}$.} Here we comment on the difficulty of constraining $n_{-}^{\rm T}$. We use the absence of bremsstrahlung emission. The $X$-ray observations of Cygnus A show a flux upper limit of $\sim 1 \times 10^{-13}~{\rm erg~s^{-1}~cm^{-2}}$ (e.g., Smith et al. 2002). As already mentioned, the constraint from the intrinsic RM is not available, because the plasma temperature is relativistic in the present work. Even worse, Cygnus A is known for its unusually large RM values, and thus it is not a good example from which to argue about intrinsic depolarization (Dreher et al. 1987; Garrington and Conway 1991). 
No evidence for intrinsic depolarization between 5 and 15 GHz is found, and the origin of the large RM is thought to be the external bow shock which surrounds the radio lobes (Dreher et al. 1987; Carilli et al. 1988). Hence it is not appropriate to use the constraint from the RM for Cygnus A. \end{itemize} \subsection{Results} Below we show the resultant allowed regions of $n_{-}$ and $n_{p}$ for cases (a), (b), (c), and (d). \subsubsection{Case (a)} Considering the uncertainties of $\Gamma_{\rm j}\beta_{\rm j}$ and $\beta_{\rm hs}$, we examine two limiting cases: the High-$n$ case with $\Gamma_{\rm j}\beta_{\rm j}=1$ and $\beta_{\rm hs}=0.01$, and the Low-$n$ case with $\Gamma_{\rm j}\beta_{\rm j}=30$ and $\beta_{\rm hs}=0.06$. For the High-$n$ case, $n_{-}$ is about two orders of magnitude larger than that of the Low-$n$ case. In Fig. \ref{fig:npH}, we show the allowed region of $n_{-}$ and $n_{p}$ for the High-$n$ case. First of all, we find that $n_{-}>n_{p}$ always holds, with $\eta\sim 10^{-2}$ at $P=P_{\rm max}$. This implies that a positron admixture is inevitable. In other words, $P_{1,\rm min}$ is much larger than the $P_{\rm max}$ obtained from the Cygnus A cocoon calorimetry. (If we force $P_{1,\rm min}$ to be smaller, then $\gamma_{\rm min}$ becomes larger, and such a case coincides with case (b).) The allowed regions of $n_{-}$ and $n_{p}$ are further divided into two regions. The pair of light-gray regions shows where $P_{\pm}>P_{p}$ is satisfied, while the pair of dark-gray regions shows where $P_{\pm}<P_{p}$ holds. Interestingly, we find that the regions with $P_{p}<P_{\pm}$ and $P_{p}>P_{\pm}$ are both wide in the range of allowed $P$. Only in the range $P\sim (3-6)\times 10^{-10}~{\rm erg~cm^{-3}}$ is pair dominance ($P_{p}<P_{\pm}$) alone permitted in the High-$n$ case. Fig. \ref{fig:npL} displays the result for the Low-$n$ case. 
Similar to the High-$n$ case, $n_{-}>n_{p}$ always holds, with $\eta\sim 10^{-1}$ at $P=P_{\rm max}$. Due to the decrease in $n_{-,\rm min}$, the number densities in the allowed regions are about two orders of magnitude smaller than those for the High-$n$ case shown in Fig. \ref{fig:npH}. Correspondingly, $P_{0,\rm min}$, $P_{\rm eq, min}$, and $P_{1,\rm min}$ decrease. Since $P_{\rm eq,max}<P_{\rm max}$ is still satisfied, both the region with $P_{p}<P_{\pm}$ and that with $P_{p}>P_{\pm}$ are allowed in this case. In other words, the Low-$n$ case leads qualitatively to the same conclusion as the High-$n$ case. Quantitatively, the upper limit of $n_{p}$ becomes larger when $n_{-,\rm min}$ becomes smaller, and correspondingly the maximum $\eta$ achieved at $P_{\rm max}$ is larger by a factor of $\sim 10$ than that for the High-$n$ case. Summarizing case (a), we find that $\eta<1$ always holds in the allowed range of $P$. In other words, this indicates the existence of $e^{\pm}$ pairs in the cocoon. We find that (i) $e^{\pm}$ pairs are dominant in terms of number density, and (ii) both the ``pair-supported cocoon (i.e., $P_{\pm} >P_{p}$)'' and the ``proton-supported one (i.e., $P_{\pm} <P_{p}$)'' are allowed. The pair-supported cocoon is different from the previously suggested one in which protons are dynamically dominant (e.g., De Young 2006). \subsubsection{Case (b)} For Cygnus A, we face a difficulty in realizing one-temperature plasma. First, let us consider the case of the same $n_{-,\rm min}$ as in Figs. \ref{fig:npH} and \ref{fig:npL}. Then all of these thermal electrons should be heated up to $kT_{\pm}\sim 10^{4}m_{e}c^{2}$ and injected into the lobes in case (b). In the radio lobes, Yaji et al. (2010) evaluate the number density of non-thermal electrons as $\sim 10^{-7}~{\rm cm^{-3}}$ at $\gamma_{-}\sim 10^{4}$. So, if we allow the existence of thermal plasma with the same $n_{-,\rm min}$ in Figs. 
\ref{fig:npH} and \ref{fig:npL} but with $kT_{\pm}=kT_{p}\sim 10^{4}m_{e}c^{2}$, a big thermal bump at $\sim 10^{9}~{\rm Hz}$ should appear. However, there is no such bump in the observed spectra of the radio lobes. Therefore, we can exclude the case of the same $n_{-,\rm min}$ with $kT_{\pm}=kT_{p}\sim 10^{4}m_{e}c^{2}$. Next, we consider a smaller $n_{-,\rm min}$. Using the relation $n_{-,\rm min}\propto \gamma_{\rm hs,min}^{-1}$ in Eq.~(\ref{eq:nhs}), an increase in $\gamma_{\rm hs,min}$ leads to a decrease in $n_{-,\rm min}$ in Figs. \ref{fig:npH} and \ref{fig:npL}; basically, $\gamma_{\rm hs,min}\sim 10^{4}$ is required at the hot spot (e.g., Harris et al. 2000; Hardcastle, Birkinshaw, and Worrall 2001; Blundell et al. 2006; Godfrey et al. 2009). However, in the case of Cygnus A, the model spectra of the hot spots with $\gamma_{\rm hs,min}\ge 2000$ conflict with the observed ones (KT04). Therefore, case~(b) is not likely for Cygnus A. \subsubsection{Case (c)} Let us consider the case of dominant non-thermal pressures and separate acceleration of electrons and protons with steep power-law spectra. This is almost identical to case (a). A slight difference between this case and case (a) is the evaluation of $n_{-,\rm min}$. Since non-thermal pairs are dominant in this case, the allowed region would be limited to around $n_{-}\approx n_{-,\rm min}$ in Figs. \ref{fig:npH} and \ref{fig:npL}. \subsubsection{Case (d)} Let us consider case (d). The factor $A_{\pm}= (\gamma_{\pm,\rm crit} /\gamma_{\pm,\rm min})^{-s_{e,1}+2}$ in Eq. (\ref{eq:Pcase-e}) is the only element that changes the result from case (a). Since $\gamma_{\rm crit,\pm}\sim m_{p}/m_{e}$ is suggested by Stawarz et al. (2007), we can estimate $A_{\pm}$ as $A_{\pm} \approx 14(\Gamma_{\rm j}/10)^{0.5}$ for $s_{e,1}= 1.5$. Therefore, the difference between this case and case (a) is that $P_{\pm}$ is larger by the factor $A_{\pm}$. 
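The numerical value of $A_{\pm}$ quoted above is straightforward to verify at $\Gamma_{\rm j}=10$; the minimal check below evaluates $(\gamma_{\pm,\rm crit}/\gamma_{\pm,\rm min})^{-s_{e,1}+2}$ with $\gamma_{\pm,\rm crit}=m_p/m_e$ and $\gamma_{\pm,\rm min}=\Gamma_{\rm j}$.

```python
# Check of A_pm = (gamma_crit / gamma_min)^(-s_e1 + 2) for case (d), with
# gamma_crit ~ m_p/m_e (Stawarz et al. 2007), gamma_min ~ Gamma_j = 10,
# and s_e1 = 1.5; the result should match the A_pm ~ 14 quoted in the text.

m_ratio = 1836.15
gamma_j = 10.0
s_e1 = 1.5

A_pm = (m_ratio / gamma_j) ** (-s_e1 + 2.0)
print(f"A_pm = {A_pm:.1f}")   # ~13.6, i.e. ~14
```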
Although a spectral break may be suggested from radio observations in case (d), the energy of the non-thermal electrons is dominated by those at the break energy $\gamma_{\rm crit,\pm}m_{e}c^{2}$, and the proton energies are not entirely transferred to electrons. Therefore, the results of case (d) are expected to be intermediate between cases (a) and (b). \section{Summary and discussions} In this work, we propose a new method for testing the plasma composition of AGN jets by using cocoon dynamics. In particular, we properly evaluate the partial pressures of protons and $e^{\pm}$ pairs. The point of the method is that $n_{p}$ and $P_{p}$ can be constrained by considering the global conservation of kinetic energy, mass, and momentum of the shocked plasma in the cocoon. Regarding the particle distribution functions in the cocoon, it is hard to determine them uniquely because of the sparseness of the observational data. Therefore, we examine four typical cases in this work. Cases (a), (b), (c), and (d) respectively represent two-temperature thermal plasma, one-temperature thermal plasma, non-thermal plasma with spectral indices steeper than two, and non-thermal plasma with a broken power-law electron spectrum. The three significant advantages of the present work compared with previous work are summarized as follows: \begin{enumerate} \item The $P$ estimate is based on global cocoon dynamics. Since this is beaming-independent calorimetry of the true amount of energy released by the jet, the estimate of $P$ from cocoon dynamics has fewer uncertainties than blazar studies. \item We focus on powerful FR II sources. Relativistic hydrodynamic simulations tell us that FR II sources show less entrainment than FR I sources. Therefore, FR IIs are better for testing the genuine plasma composition of AGN jets. \item We properly deal with the partial pressure of thermal electrons/positrons $P_{\pm}^{\rm T}$. 
Although $P_{\pm}^{\rm T}$ is a critically important finite quantity, most prior efforts assume $P_{\pm}^{\rm T}=0$ merely for simplicity. \end{enumerate} Applying the method to the best-studied FR II source Cygnus A, we draw the following conclusions, which primarily indicate the existence of numerous $e^{\pm}$ pairs in the cocoon of Cygnus A. \begin{itemize} \item Cases (a), (c), and (d), in which the average energy of electrons and positrons is significantly lower than that of protons ($\eta <10^{-1}$ for the Low-$n$ case; $\eta <10^{-2}$ for the High-$n$ case), are allowed without violating the observational constraints. The results in cases (a) and (c) are almost the same, except that the lowest-energy electrons are thermal in case (a) and non-thermal in case (c). Cases (a) and (d) also show similar results, except that $P_{\pm}$ in case (d) is larger by a factor of $\sim 14$ than in case (a). \item We can rule out case (b), in which electrons and positrons are heated up to the proton temperature of $\sim 10^{4}m_{e}c^{2}$, because there is no thermal bump due to such hot thermal plasma. \item For cases (a), (c), and (d), we find that the number density of $e^{\pm}$ is larger than $n_{p}$ for any allowed $P$, and the obtained $n_{+}$ is always more than 10 times larger than $n_{p}$. We conclude that a pure $e/p$ plasma is excluded and that an $e^{\pm}$-proton mixture is realized in the Cygnus A jet. Therefore, further studies on the $e^{\pm}$ pair loading problem extending previous ones (e.g., Blandford \& Levinson 1995; Li \& Liang 1996; Thompson 1997; Beloborodov 1999; Yamasaki, Takahara \& Kusunose 1999) will be more important, and studies of the bulk acceleration of $e^{\pm}$ outflows (Iwamoto and Takahara 2002, 2004; Asano and Takahara 2007, 2009) will also be highly motivated. \item We find that both the $e/p$-plasma-supported and the $e^{\pm}$-pair-supported scenarios are permitted within the limits of current observational constraints. 
We quantitatively show the allowed regions of $P_{p}>P_{\pm}$ and $P_{p}<P_{\pm}$ by our new method (see Figs. \ref{fig:npH} and \ref{fig:npL}). \end{itemize} Lastly, we add a brief comment on $P_{p}^{\rm NT}$. Recently, Atoyan and Dermer (2008) have suggested the possibility of secondary emission induced by high-energy protons in Cygnus A. The luminosity of the secondary emission depends on $P_{p}^{\rm NT}$. If the emission is detected in the future, it will provide us with a new direct constraint on $P_{p}^{\rm NT}$. It could also give us a new constraint on cosmic-ray propagation influenced by the Galactic magnetic field (Dermer et al. 2009). \section*{Acknowledgments} We thank the referee for useful suggestions that led to major improvement of the original manuscript. We also thank H. Ito for helpful discussions. This work is supported in part by Ministry of Education, Culture, Sports, Science, and Technology (MEXT) Research Activity Start-up 2284007 (NK).
\section{Introduction} Even though a number of microlensing projects have yielded RR Lyrae light curves with excellent phase coverage \citep[see, for example,][]{soz08}, phase coverage is more of a problem when studying RR Lyraes in external galaxies. In order to overcome this, a number of methods have been developed to use the observed data to obtain a smooth approximation to the actual RR Lyrae light curve that captures any real physical bumps or dips and eliminates any that are caused by numerics. The traditional method has been Fourier decomposition, wherein the observed data are fit by an expression of the form $$m(t) = A_0 + \sum_{k=1}^{N}A_k\sin (k{\omega}t + {\phi}_k). \eqno(1)$$ Here $N$ is the order of the fit, $A_0$ is the mean magnitude, ${\omega} = 2{\pi}/P$, $P$ is the pulsation period, and $t$ is the time of observation. Usually, a least-squares fitting procedure yields $A_0$, $A_k$, and ${\phi}_k.$ A major problem with this technique is ringing: the Fourier curve given by equation (1) exhibits a series of unphysical bumps and dips. This can occur even when the original data are well distributed in phase but exhibit a large scatter, and it is exacerbated when there are noticeable gaps in phase coverage. The usual remedy is to reduce the order of the fit, but this loses real features of the light curve. Another method is that of Principal Component Analysis \citep[PCA, see ][for examples]{kn04,deb09}. Here, instead of sine functions, the data themselves determine the basis functions, and the resulting light curve is expressed as a sum of these basis functions. PCA has the advantage that a very good approximation of the light curve can be realised with significantly fewer parameters than required by the Fourier method. 
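Because each term $A_k\sin(k\omega t+\phi_k)$ can be rewritten as $a_k\sin(k\omega t)+b_k\cos(k\omega t)$, the fit of equation (1) is linear in its coefficients for a fixed period. The sketch below performs such a least-squares Fourier fit; the period, sampling, and "observations" are synthetic and purely illustrative.

```python
import numpy as np

# Least-squares fit of the Fourier model of equation (1) to time-series data.
# Rewriting A_k sin(k w t + phi_k) = a_k sin(k w t) + b_k cos(k w t) makes
# the model linear in its coefficients for a fixed period P.

def fourier_fit(t, m, period, order):
    """Return the mean magnitude A_0 and (A_k, phi_k) for k = 1..order."""
    w = 2.0 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, order + 1):
        cols += [np.sin(k * w * t), np.cos(k * w * t)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, m, rcond=None)
    A0 = coef[0]
    a, b = coef[1::2], coef[2::2]       # sine and cosine coefficients
    A = np.hypot(a, b)                  # amplitude of each harmonic
    phi = np.arctan2(b, a)              # phase of each harmonic
    return A0, A, phi

# Synthetic, irregularly sampled "observations" (illustrative only):
rng = np.random.default_rng(0)
P = 0.57                                # days, hypothetical period
t = rng.uniform(0.0, 30.0, 200)
m = 15.0 + 0.4 * np.sin(2 * np.pi * t / P) + 0.01 * rng.normal(size=200)
A0, A, phi = fourier_fit(t, m, P, order=3)
```

The recovered $A_0$ and first-harmonic amplitude agree with the injected values to within the noise level.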
However, while a stringent comparison of the two methods is beyond the scope of this paper, we can say that, at least in the case of RR$c$ stars, the parameters of a fit using the methods developed here do have some physical interpretation. \citet{ak94} studied the method of cubic splines in approximating variable star light curves. A cubic spline is a series of cubic polynomials pieced together such that, at the junction of two such polynomials, the curve is required to be continuous up to the second derivative. Our method uses cubic polynomials but does not require continuity up to the second derivative. Looking at the graphs for LCB12 and LCR12 in figure 4 of \citet{ak94}, we again see ringing as a result of having 16 parameters; as our results show, this is a clear reason why cubic polynomials, as opposed to cubic splines, are to be preferred in approximating RR Lyrae light curves. One motivation for this study is that requiring continuity up to the second derivative seems too stringent. Pulsation shocks are dramatic events with sudden reversals. There is no reason to suppose any fitted curve is twice continuously differentiable. This is particularly true of fundamental-mode RR Lyraes at phases close to minimum light, where the star suddenly starts to get brighter. In this paper we examine the use of cubic polynomials to fit the light curves of RR Lyraes by requiring continuity only up to the first derivative. \section{Preliminaries} We define $t \bmod P$ (for any real number $t$ and positive real $P$) to be the unique number $x$ satisfying both $0\le x < P$ and $(t-x)/P$ is an integer. We are interested in approximating data points $(t_1,y_1),\ldots, (t_n,y_n)$ by a periodic function $y=f(t)$ of period $P$. The residuals from our fit are $r_i = y_i - f(t_i).$ We define $PD$ to be the proportion of the period when the luminosity is decreasing (which, when magnitudes are used, is the proportion of the period when the $y_i$ are increasing). 
Let $\bar{y}$ be the average of the $y_i.$ Define the Total Sum of Squares, $SST$, to be $$SST = \sum_{i=1}^n(y_i - \bar{y})^2,$$ and the Error Sum of Squares, $SSE$, as $$SSE = \sum_{i=1}^n r_i^2.$$ The quantity $R^2$ is defined as $$R^2 = {{(SST-SSE)}\over{SST}},$$ and is the proportion of the variation of the $y_i$ explained by our model $f(t)$. The adjusted $R^2$, denoted by $RA$, is $$R^2(adj) = 1 - (n-1)(1-R^2)/(n-r-1),$$ where there are $r$ parameters. The best fit is then given by that combination of parameters such that $SSE$ is a minimum. For any function which is twice differentiable except for a finite set of points, we define the total bending ($TB$) as follows: divide the domain ($[0,P]$ if periodic with period $P$) into intervals so that on each interval the function is only concave up or down. On each interval, the bending is the angle between the tangent lines at the two endpoints (not through the vertical) and, at each nondifferentiable point, the angle between the left and right sided tangent lines. $TB$ is the sum of all these angles. This provides a good measure of ``ringing''. We use an $F$ test \citep{w80} to test for the significance of having a more/less complex model. This $F$ statistic is $$\frac{[SSE(N) - SSE(A)]/[df(N)-df(A)]}{SSE(A)/df(A)},$$ where $SSE$ is the error sum of squares, and $N$ and $A$ stand for the null hypothesis (less complex model) and alternative hypothesis (more complex model) respectively. The expression $df$ stands for degrees of freedom, which is the number of data points minus the number of parameters. \section{Our Approximation} Our goal is to obtain good approximations which reflect the true shape of the light curve, yet are simple and without ringing or other anomalies. Our method is based on the observation that an increasing or decreasing portion of the $\sin(t)$ function can be remarkably well approximated by a cubic polynomial.
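The fit statistics used throughout can be computed directly; a minimal sketch using the standard definitions (function names are ours):

```python
import numpy as np

def fit_statistics(y, f, r_params):
    """SSE, R^2, and adjusted R^2 (RA) for fitted values f(t_i)
    with r_params free parameters."""
    y = np.asarray(y, dtype=float)
    f = np.asarray(f, dtype=float)
    n = len(y)
    sse = np.sum((y - f) ** 2)              # Error Sum of Squares
    sst = np.sum((y - y.mean()) ** 2)       # Total Sum of Squares
    r2 = (sst - sse) / sst
    ra = 1.0 - (n - 1) * (1.0 - r2) / (n - r_params - 1)
    return sse, r2, ra

def f_statistic(sse_null, df_null, sse_alt, df_alt):
    """F test comparing a simpler (null) and a more complex (alternative)
    model; df = number of data points minus number of parameters."""
    return ((sse_null - sse_alt) / (df_null - df_alt)) / (sse_alt / df_alt)
```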
We note also that a cubic polynomial is uniquely determined by the $y$ coordinate and the slope at two points. RR Lyrae light curves come in basically two varieties: types $ab$ and $c$, corresponding to fundamental and overtone modes respectively. For the overtone $c$ type, both increasing and decreasing portions are roughly half the period of a sine curve (of different periods) and each can be approximated by a cubic. For the fundamental $ab$ type, the increasing portion is roughly half of a period of a sine curve, while the decreasing part is similar to the bottom half of the decreasing half of a sine curve, though near the minimum there is a noticeable dip before it starts to increase again. In this paragraph, we outline the method in general terms and describe the details in the following paragraphs. Essentially, we approximate RR Lyrae light curves by either 2 cubics or 3 cubics. When we use 2 cubics, for example when trying to model RR$c$ curves, the parameters of our fit are the period, the shift (phase point at which the first observation occurs), the maximum, the minimum, the proportion of the curve that is decreasing and the slope at maximum: a total of 6 parameters. In this case, it is clear that these parameters have a physical meaning. We choose these parameters in order to minimize the $SSE.$ In this case the fitted curve is continuous, as is its derivative except perhaps at maximum. When we use 3 cubics, we choose 4 points, with the understanding that the first and last points are the same, and 3 time intervals. Note the sum of these three time intervals is the period. We have the $x$ and $y$ values at these points and the slopes. This is a total of 9 parameters, together with the shift as defined in the case of using 2 cubics. We choose these 10 parameters to minimize the $SSE$ as before. With this in mind, we define two different piecewise defined functions and their periodic extensions.
$S_1(t)$ is defined on $0 \le t \le T_1$ to be that cubic which passes through the points $(0,M)$ and $(T_1,m)$, with derivative equal to $D$ at $t=0$ and with zero derivative at $T_1.$ Here $M$ and $m$ are the maximum and minimum, respectively, of the curve to be fitted. Furthermore, $S_1(t)$ is defined on $T_1 \le t \le P$ to be that cubic which passes through the points $(T_1,m)$ and $(P,M),$ with derivative equal to zero at $t=T_1$ and $P.$ The periodic extension $S_1(t \bmod P)$ is said to be of type $2C.$ This is continuous since $S_1(0) = S_1(P).$ Note that for sinusoidal curves, $D$ will equal 0 ($2C$ will be differentiable) while for fundamental $ab$ type curves, $D$ will be positive and $2C$ will not be differentiable. In all that follows, when we approximate a type $c$ by $2C$, we require $D=0$ and denote this by $2C-0.$ Our parameters here are the period, the shift (how far into the period the first data point is), $D$, $m$, $M$, and $PD.$ Again each of these parameters has some physical meaning. Similarly, $S_2(t)$ is piecewise defined by dividing $[0,P]$ into three intervals, using a cubic on each and requiring continuity and differentiability where any two cubics meet. As before, its periodic extension is of type $3C.$ Our parameters here are the period, the shift, the two time values where continuity is required between two cubics, three $y$ values and three slopes. Because of the simplicity of both $2C-0$ and $3C$, there is no ringing. Consequently, no noise is added when obtaining residuals. Furthermore, having fewer parameters improves the power of an $F$ test. In practice, we compute the mean and standard deviation of the original data. Points more than 2.5 standard deviations away from the mean are considered outliers and omitted. We need good initial guesses to best approximate our data.
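The construction of $S_1$ rests on the fact, noted above, that a cubic is uniquely determined by its value and slope at two points (cubic Hermite interpolation). A minimal sketch, with our own function names, assembling the $2C$ curve exactly as described:

```python
import numpy as np

def hermite_cubic(x0, y0, d0, x1, y1, d1):
    """Coefficients (a, b, c, d) of the unique cubic a*x^3 + b*x^2 + c*x + d
    with value y0 and slope d0 at x0, and value y1 and slope d1 at x1."""
    # Solve the 4x4 linear system: two value conditions, two slope conditions
    A = np.array([
        [x0**3,     x0**2, x0,  1.0],
        [x1**3,     x1**2, x1,  1.0],
        [3.0*x0**2, 2.0*x0, 1.0, 0.0],
        [3.0*x1**2, 2.0*x1, 1.0, 0.0],
    ])
    return np.linalg.solve(A, np.array([y0, y1, d0, d1]))

def s1(t, P, T1, M, m, D):
    """2C curve: value M with slope D at t=0, value m with zero slope at T1,
    value M with zero slope at t=P; evaluated at t mod P."""
    t = np.asarray(t, dtype=float) % P
    c1 = hermite_cubic(0.0, M, D, T1, m, 0.0)   # first branch, [0, T1]
    c2 = hermite_cubic(T1, m, 0.0, P, M, 0.0)   # second branch, [T1, P]
    return np.where(t <= T1, np.polyval(c1, t), np.polyval(c2, t))
```

By construction the two branches meet at $(T_1, m)$ with matching (zero) slope, and the periodic extension is continuous since $S_1(0)=S_1(P)=M$.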
For $2C$, our initial approximation comes from a Fourier series of 7 terms and $D=0.$ We use $T_1=PD\cdot P$ and replace each data point $(t_i,y_i)$ by $(t_i-\mathrm{shift},y_i).$ We are then in a position to minimize $SSE.$ From the Fourier fit, we find the maximum as the point $(\mathrm{shift}, M)$, followed by the minimum as $(\mathrm{shift} + T_1, m)$. For $3C$, our initial approximation uses $2C$ and we define $T_0$ to be between $60\%$ and $90\%$ of $T_1$ (see later) and use the intervals $0\le t\le T_0$, $T_0 \le t \le T_1,$ and $T_1 \le t \le P$ for our three cubics. $P$ is obtained by maximizing the power function. We obtain both the $y$ coordinate and slope at $0$, $T_0$ and $T_1$ from $2C$, from which we obtain $3C.$ For $3C$, we take $T_0$ to be $60\%$, $70\%$, $80\%$ and then $90\%$ of $T_1$. For each case, we find the minimum $SSE$ and keep the best (smallest $SSE$) set of parameters. We continue our removal of outliers. For the residuals, removing outliers based on the Fourier series is not appropriate because of ringing. We look at residuals using both $2C$ and $3C$. For each, we compute the mean and standard deviation. We remove any points more than 3 standard deviations out using both criteria. When comparing various approximations, we compare all on this same final data set. We minimize $SSE$ by looping through the parameters with successively smaller step sizes, $s$. For a fixed parameter, in addition to having the current value of $SSE,$ we evaluate $SSE$ at this parameter plus $s$ and at this parameter minus $s$. Finally, we fit these three points by a quadratic polynomial, find its minimum and evaluate $SSE$ at this point. Of the current four estimates of our parameter, we select the one giving the smallest $SSE$ as the parameter's new value. We continue this until we have a good approximation of a relative minimum.
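The refinement step just described (three $SSE$ evaluations plus the minimum of the interpolating parabola) can be sketched as follows; `sse_of` stands for whatever routine evaluates the fit for a trial parameter value, and the function name is ours:

```python
def refine_parameter(sse_of, x, s):
    """One refinement step for a single parameter x with step size s:
    evaluate SSE at x, x-s, x+s, add the vertex of the parabola through
    those three points (when it opens upward), and return whichever of
    the candidates gives the smallest SSE."""
    candidates = [x, x - s, x + s]
    f0, fm, fp = sse_of(x), sse_of(x - s), sse_of(x + s)
    # Vertex of the parabola through (x-s, fm), (x, f0), (x+s, fp)
    denom = fm - 2.0 * f0 + fp
    if denom > 0:                        # parabola opens upward: has a minimum
        candidates.append(x + 0.5 * s * (fm - fp) / denom)
    return min(candidates, key=sse_of)
```

Looping this over all parameters with successively smaller $s$ gives a simple coordinate-wise descent toward a relative minimum of $SSE$.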
\section{Test Data} \begin{figure} \vspace{0cm} \centering \epsfxsize=8.0cm{\epsfbox{compare_lc.eps}} \caption{Results of fitting to data (open squares) drawn from a known (dashed curves) function using Fourier (top panel, solid curve) and cubic polynomial (bottom panel, solid curve) methods.} \label{UNIQUE_LABEL} \end{figure} We first test our method on a known function, where the known function is taken from the RR Lyrae light curve templates developed by \citet{lay98}. We draw a random number of points from this function and independently add Gaussian noise to each phase point. Specifically, we add noise normally distributed with mean zero and standard deviation 0.1 to the synthetic data. In our tests, this synthetic data is not a ``train'' of data but just covers approximately one period. We then try to reproduce the original curve using our cubic polynomial method and the traditional Fourier technique. Figure 1 presents our results. Here the known function (chosen to resemble a typical RR$ab$ type light curve) is the dashed curve. The open squares with error bars are the points drawn randomly from this function and to which Gaussian noise has been added independently. The solid dark lines represent the Fourier and cubic polynomial fits (top and bottom panels respectively) to these open square points. The curves are plotted as a function of phase, going from 0 to 1. However, we allow the two methods to ``rediscover'' this periodicity. The Fourier fit of order 6 yielded considerable ringing, a period of 1.14 and an $SSE$ of $0.027$. We emphasize that by a period of 1.14, we do not mean a period of 1.14 days, but that this signifies a change in the period as reported by the Fourier method when compared to the period of the original template curve from which the data were drawn. A reported period of 1 signifies no difference between the estimated and original period.
Approximating by a pair of cubics ($2C$), we obtained a slightly different period of 0.997, $D=2.93$, $PD=0.905$ and $SSE=0.050$. These values of $D$ and $PD$ both indicate Bailey type $ab$. Using a $3C$ (differentiable) approximation, we obtained a period of 1.00004 and $SSE=0.04$. We see that our method does a very good job at mimicking the known function and has little to no ringing. In contrast, a Fourier fit to the same points produces noticeable and significant ringing. Further, the period obtained by the cubic polynomial method matches almost exactly that of the original curve. Next we tested our method on real data. The data were taken from \citet{br04} and consist of $HST$ observations of RR Lyrae stars in the Andromeda halo, along the southeast minor axis of M31, about $51'$ from the nucleus. The data are available at two wavelengths, F606W and F814W. In what follows we report results based on both bands (F606W and F814W are referred to as the first and second bands respectively). \citet{br04} used a fast algorithm based on the Lomb-Scargle periodogram \citep{sc82} to search for periodicities in their time series data after data reduction and photometry. \citet{br04} analyzed these data and found 169 variables, of which 55 were clearly RR Lyraes. Of these 55 stars, \citet{br04} classified 29 as RR$ab$, 25 as RR$c$ and 1 as RR$d$. It is the data for these 55 stars which we discuss in this paper. Note we start with the photometry for these 55 stars as published in \citet{br04}. As pointed out by the referee, the intrinsic precision of this dataset is about 0.03 and 0.04 in $V$ and $I$ respectively. This measurement error is not accounted for in our fits, but for the purposes of this paper this is appropriate since we are presenting a differential comparison between our method and that of Fourier series. Our data are not corrected for reddening. \citet{br04} found a ratio of RR$c$ to RR$abc$ of 0.46, and mean periods of RR$c$ and RR$ab$ stars of 0.316 and 0.594 days respectively.
In figures 2-4, the label ``normalized magnitude'' just refers to the fact that the magnitudes are scaled to lie between 0 and 1. We note that these are HST data and as such somewhat immune to the 1-day aliasing problems arising when RR Lyraes are observed from the ground. The sampling rate was 250 exposures over a 41 day period \citep{br04}, with a cadence that should be random enough to offset other aliasing problems. Observations in the two bands were made at slightly different times: this again helps to counteract aliasing. \section{Results} \begin{figure} \centering \epsfxsize=8.0cm{\epsfbox{V100.eps}} \caption{Results for RR$c$ star V100: open squares are data, thin/thick dashed curves use $2C$ and Fourier respectively, solid curve uses $3C.$ The $y$ axis is scaled such that the range of magnitudes goes from 0 to 1.} \label{UNIQUE_LABEL} \end{figure} A major finding of our work is that when using the cubic polynomial method on the dataset mentioned, we find 23 RR$c$ stars with a mean period of 0.312 and 29 RR$ab$ stars with a mean period of 0.594. This leads to a ratio of RR$c$ to RR$abc$ stars very similar to previous work: 0.442. This ratio is lower than that reported in \citet{br04} because there were two RR$c$ stars which were reclassified as RR$d$ in our work. In almost all cases, the periods discovered by our method are very close to those published by \citet{br04}. An intriguing result is that our method reveals significantly more multimode stars than previously discovered; we discuss this later in this section. First overtone stars all have periods less than 0.39 whilst fundamental mode stars have periods greater than 0.44. A nice result is that the type $c$ stars all have $0.51 < PD < 0.76$ while all type $ab$ stars have $PD > 0.76.$ We observe three other imperfect tests for Bailey type. Type $c$ has $D$ (first band) less than 0.76 except for V76 and V58.
$PD$ (second band) is less than 0.76 except for V120; $D$ (second band) is less than 0.25 except for V40 and V120. Also, type $ab$ has $D$ (first band) greater than 0.25 except for V78; $PD$ (second band) greater than 0.76 except for V122; and $D$ (second band) greater than 0.25 except for V66, V71 and V82. In every case, both bands have $PD$ greater than 0.51, with two exceptions, both of which are in the second band, where there is generally more noise. For V76, $PD=0.36$ while for V157, $PD=0.48.$ Generally, the $PD$'s of the two bands correlate nicely, as do the $D$'s. Further, type $c$ all have $D < 0.2$ while all type $ab$ except V78 have $D > 0.5.$ This provides a good way to distinguish between types $ab$ and $c.$ Type $c$ can be approximated about equally well by a Fourier series of order 2-4, or by $2C$ or $3C$. The advantages of using $2C$ are its simplicity, minimal $TB$ and the use of only parameters that have physical meaning. Further, we can generate $PD$ and $D$, whose importance has already been established. The 23 RR$c$ data sets can be summarized on average as follows. We call the average $RA$ for $2C$ and $3C$ $R2c$ and $R3c$, respectively. A similar quantity for an order 2 Fourier series is labelled $Rf2c.$ Likewise, the average $TB$ for $2C$ is denoted $TB2c$, and so on. We have $R2c=0.906$, $R3c=0.907$ and $Rf2c=0.902$, while $TB2c=5.2,$ $TB3c=8.2$ and $TBf2c=5.2$. While $3C$ may give slightly better approximations, the extra bending strongly suggests it is not worth the effort. Fourier series give somewhat worse approximations than $2C$ with no reduction in bending. Finally, $2C$ is simplest and only involves parameters with physical meaning, which clearly makes it superior. Figure 2 displays a typical example of type $c$ using star V100. For this star, $R2=0.951$, $R3=0.951$, $Rf2=0.950$, $TB2=5.5$, $TB3=10.5$ and $TBf2=5.5$. The 29 type $ab$ stars can be summarized on average as follows.
Using similar nomenclature to that specified for the RR$c$ stars above, we have $R2ab = 0.954$, $R3ab = 0.962$ and $Rf8ab = 0.962$, while $TB2ab = 5.4$, $TB3ab=8.4$ and $TBf8ab = 16.4.$ Since $3C$ gives about as good an approximation as a Fourier series with 8 terms but with much less bending and fewer parameters, we prefer $3C$. Comparing $R2ab$ to $R3ab$, we see that $RA$ has gone from 0.954 to 0.962, which means the ratio of errors is $(1-R2ab)/(1-R3ab) = 1.21$: a $21\%$ reduction in error, so $3C$ is clearly superior. The increase in bending simply means we are twisting more to fit the data, as can be seen especially well in figures 1 to 3, which are typical examples of the sort of approximations possible with cubic polynomials. Further, figure 3 presents typical examples of type $ab$ using V57 and V136. V57 (left panel of figure 3) is an example (of 6 or 7 stars) where the decreasing portion seems to momentarily increase before a final dip to the minimum, while V136 (right panel of figure 3) does not show this behavior. For V57 we have $R2=0.953$, $R3=0.969$, $Rf8=0.970$, $TB2=5.8$, $TB3=10.4$ and $TBf8=16.2$, and for V136 we have $R2=0.933$, $R3=0.937$, $Rf8=0.938$, $TB2=4.7$, $TB3=7.1$ and $TBf8=13.8$. The proportion decreasing using $3C$ is not the same as $PD$ (using $2C$). However, it is usually within $0.001.$ The periods obtained by $2C$ and $3C$ are usually within $0.0001$ of each other, while they differ from the period using Fourier series by perhaps $0.001$. Worse yet, the optimal period for a Fourier series depends on the order of the series. There are three stars, listed in Table 1, which have two periods. These were analyzed as follows. We removed outliers in the data as before. We then fit by $2C$ or $2C-0$ depending on type, and removed outliers based on their residuals.
This gave an initial period equal to $prd1$, $RA=RA1$ and $SSE=SSE1.$ We then obtained the next period and its power, $pwr.$ We subsequently refit by $2C-0$ if necessary, then fit the residuals by another $2C-0$ and finally obtained a combined best fit (with both $D=0$). This gives $prd2$, $RA2$ and $SSE2$, from which we calculated an $F$ statistic. These are all listed in Table 1. For each star, the first and second lines correspond to the first (F606W) and second (F814W) bands, respectively. The $F$ statistic tells us that we are more than $99.95\%$ certain that the second $2C-0$ is significant, and so the second period is significant. A different approach is based on Scargle's analysis. Using his equation (18), if it is possible to select $N$ possible periods a priori, then in order to conclude that, with greater than $99\%$ certainty, the best one is valid, its power must exceed the threshold $-\ln [1-0.99^{(1/N)}].$ However, there is no way of knowing in advance what the true period is. If we assume it lies between 0.250 and 0.800 and round to three decimal places, there are 551 possibilities, which gives a threshold of 10.9. Each of the powers listed in Table 1 is above this except for V95 in the second band. If a second period is present it should be present in each band. We conducted an extensive search in the first band, which has higher amplitude, and then checked some periods near the predicted period - the second power was conclusive. For a number of stars, using only one band, we discovered we could obtain a much better approximation using two periods differing by only about 0.001. We assume these are anomalies and ignore them, though more data might lead to different results. Several stars had significant evidence of a second period in one band but not in the other and these were ignored. These stringent criteria leave the three stars in Table 1 with two periods. In each case the period ratio is 0.75.
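The quoted threshold can be checked directly from the formula above (the function name is ours):

```python
import math

def scargle_threshold(n_periods, confidence=0.99):
    """Power threshold -ln[1 - confidence^(1/N)] for N a priori candidate periods."""
    return -math.log(1.0 - confidence ** (1.0 / n_periods))

# 551 candidate periods between 0.250 and 0.800 at 0.001 resolution
print(round(scargle_threshold(551), 1))  # 10.9
```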
Figure 4 displays a graph of V1 over 4 primary periods (which equal 3 secondary periods). This also shows the interaction of the two periods. These three stars have primary periods between 0.353 and 0.383. \begin{figure*} \vspace{0cm} \hbox{\hspace{0.5cm} \epsfxsize=8.0cm \epsfbox{V57.eps} \epsfxsize=8.0cm \epsfbox{V136.eps} } \vspace{0cm} \caption{Results for RR$ab$ stars V57 (left panel) and V136 (right panel): open squares are data, thin/thick dashed curves use $2C$ and Fourier, solid curves use $3C$.} \label{UNIQUE_LABEL} \end{figure*} \section{Maximum and Minimum Light Colors} Recent work has focused on the properties of RR Lyraes at minimum light as a possible way to estimate reddening \citep{lub77,cle95,kf05}. The theoretical basis for this has been established by \citet{skm93}, \citet{kan95} and \citet{kp96}. These authors showed the importance of Period-Color (PC) relations at maximum light. Cepheids have flatter PC relations at maximum light and a definite relation at minimum light, such that higher amplitude Cepheids are driven to cooler and hence redder colors. In the case of RR Lyraes this is reversed, with a flat PC relation at minimum light and a discernible relation at maximum light. Figure 5 presents PC relations at maximum and minimum light for the \citet{br04} data, calculated using both Fourier series (open circles) and cubic polynomials (solid black squares) to approximate the data. Firstly, we see broad support for the contention that PC relations at minimum light are much flatter than those at maximum light. Secondly, as pointed out by the referee, we notice that somewhat tighter and flatter relations are present for the PC relation at minimum light when using a cubic polynomial fit. \section{Conclusions} We have found a new way to approximate the light curve of an RR Lyrae star by fitting cubic polynomials to the data. This method can fit the data with fewer parameters than Fourier series and suffers virtually no ringing.
It can also estimate periodicities in the data. When we apply this method to the RR Lyrae data in the Andromeda halo, we find, in addition to the multiperiodic star V90 reported by \citet{br04}, an additional 2 multiperiodic stars (V1 and V95, previously classified as type RR$c$) in the data sample: here we require this multiperiodicity to be present in both bands. The ratio of the number of RR$c$ stars to the number of RR$abc$ stars is then 0.442, as opposed to \citet{br04}, who found a ratio of 0.462. In this ratio, \citet{br04} do not count RR$d$ stars in either numerator or denominator. The ratio of the number of RR$c$ to the number of RR$ab$ stars, where we include RR$d$ stars together with the RR$c$ stars, is in this case 0.473. \begin{table*} \centering \caption{Stars with multi-periodic components} \label{NOLABEL} \begin{tabular}{ccccccccccc}\hline V & Band & Period1 & $RA1$ & $SSE1$ & Period2 & $RA2$ & $SSE2$ & $pwr$ & $F$ & Number of points\\ \hline 1 & F606W & 0.3815 & 0.856 & 0.252 & 0.5104 & 0.940 & 0.096 & 13.6 & 26.3 & 74 \\ 1 & F814W & 0.3816 & 0.779 & 0.227 & 0.5108 & 0.865 & 0.133 & 16.4 & 14.6 & 91 \\ 90 & F606W & 0.3533 & 0.823 & 0.579 & 0.4742 & 0.919 & 0.252 & 18.8 & 27.3 & 93\\ 90 & F814W & 0.3533 & 0.572 & 0.616 & 0.4747 & 0.809 & 0.263 & 26.2 & 30.2 & 99\\ 95 & F606W & 0.3616 & 0.781 & 0.429 & 0.4855 & 0.910 & 0.164 & 19.0 & 36.4 & 99\\ 95 & F814W & 0.3614 & 0.697 & 0.379 & 0.4855 & 0.744 & 0.309 & 9.7 & 6.0 & 115\\ \hline \end{tabular} \end{table*} \begin{figure} \centering \epsfxsize=8.0cm{\epsfbox{V1.eps}} \caption{Results for V1, an example of a multiperiodic star: the two dominant periods are $P_1 = 0.382$ and $P_0 = 0.510$, with a period ratio of 0.75.} \label{UNIQUE_LABEL} \end{figure} \begin{figure} \centering \epsfxsize=8.0cm{\epsfbox{pc_minmax.eps}} \caption{PC results at maximum (top) and minimum (bottom) light using a sixth order Fourier fit (open circles) and cubic polynomials (solid black squares).}
\label{UNIQUE_LABEL} \end{figure} \section*{Acknowledgments} CCN thanks the National Science Council (of Taiwan) for funding under contract NSC 98-2112-M-008-013-MY3. The authors thank Karen Kinemuchi for helpful discussions during the preparation of this manuscript. The authors also thank the referee, Jan Lub, for very useful comments.
\section{Introduction} The most important challenge in word learning is often thought to be \emph{referential uncertainty}. Quine \cite{quine2013word} famously framed referential uncertainty as a general problem everybody faces when trying to learn an unknown language. Suppose an anthropologist studies an isolated tribe. When members of the tribe see a rabbit, they shout ``gavagai''. The anthropologist hears the word for the first time and has no idea what the word means. In principle, the meaning of the word could be anything from perceptual features of objects present (or not present!), to features of the environment, to social and historical facts, etc. The space of possible meanings is essentially infinite and the question is how the anthropologist can solve this puzzle. Many researchers claim that children face the same problem when trying to learn language. Models that deal with word learning try to capture referential uncertainty in various ways. Schematically, we can distinguish between the following approaches to meaning and corresponding models (see Fig. \ref{f:models}). \paragraph{Word-Object Mapping Models (WOM)} Some models approach word learning as a discrete mapping problem from words to objects, e.g. \cite{fontanari2009cross,kachergis2012associative}. How difficult the mapping problem is depends on factors such as the number of objects in the context of an interaction or the feedback given by the caregiver. If there is only one object, then the mapping can be learned instantaneously and no problem exists. If there are multiple objects, then referential uncertainty does exist if there is ambiguous, unreliable or no feedback from the caregiver. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{figs/overview-representations} \end{center} \caption{Left: models of word-object mappings learn associations between words and a priori known objects. Middle: symbolic feature based models learn associations to symbolic feature representations.
Right: word meanings are organizations of $n$ dimensional, continuous feature vectors.} \label{f:models} \end{figure} \paragraph{Combinations of Feature Models (CFM)} Some researchers \cite{siskind1996computational,deBeule2006cross-situational,wellens2008coping} have modeled word meaning as combinations of symbolic features. The features themselves are known prior to word learning. In CFM, the meaning space is as large as the set of all possible combinations of features; it grows combinatorially with the number of features. There are fundamental differences between WOM and CFM. In CFM, a word can be linked to multiple features. The learner, upon hearing a word and seeing a single object, cannot know which features of the object the word refers to. Consequently, referential uncertainty can occur in single-object contexts. This is different from WOM, where referential uncertainty is exclusively related to the number of objects in the context. \paragraph{Continuous Meaning Space Models (CMS)} Few models address the learning of words related to representations in continuous vector spaces \cite{belpaeme2012acs,spranger2013acquisition}. The problem of referential uncertainty in continuous meaning spaces is large and depends chiefly on the number of objects in a learning context, the number of dimensions of the meaning vectors, and whether or not words can refer to subspaces of the meaning space (color space etc.). In that sense CMS behave similarly to CFM. The difference from CFM is that the meaning space is an infinite continuous vector space by default. The few available CMS models are often shown to work only in low dimensions and with few words. In this paper we survey the state of the art in word learning algorithms for high-dimensional continuous meaning spaces with referential uncertainty caused by words referring to features of objects (rather than caused by multiple objects in the context).
As it turns out, this problem has been dealt with in the Machine Learning (ML) community. We first analyze the problem of word learning from the viewpoint of machine learning in $n$-dimensional spaces. We then test various state-of-the-art ML methods on simulated and grounded data sets, analyze the dynamics of online learning and the impact of varying degrees of referential uncertainty. Lastly, we discuss how all this fits into the general landscape of language learning models. \section{Description Games} \label{s:tutor} One interaction pattern (game) often used in models of word learning is the \emph{description game} (DG). The basic structure of DG is the following. The learner observes a particular situation (context) of $l$ objects ($l=1$ for this paper) and also observes $k$ words uttered by the tutor ($k=5$ for this paper). So, for instance, both learner and tutor observe one object. The tutor says ``block, bright, red, \ldots'' (order does not matter). The learner then has to learn the meaning of these words by integrating information over various trials. The success of the learner is measured by testing which words the learner produces for objects and how these overlap with the words produced by the tutor. An important aspect of these games is the \emph{tutor strategy} - the representation and algorithm the tutor uses for producing $k$ words for an object. In this paper, objects are represented by $n$ dimensional feature vectors $o\in[0,1]^{n}$. The tutor represents each word using a prototype $p\in[0,1]^{n}$ and a weight vector $w\in[0,1]^{n}$. Prototypes for tutors are randomly drawn from a uniform distribution $\mathrm{U}(0,1)^n$. Weights are drawn from a binomial distribution $\mathrm{B}(1,0.5)^n$. If a weight vector is all 0, we randomly set one of the weights to 1.
For an object $o\in[0,1]^{n}$, the tutor first computes a weighted Euclidean distance $\operatorname{wd}$ $$\operatorname{wd}_{w,p}(o) = \sqrt{ \sum_{i = 1}^{n} w_i (o_i - p_i)^2}$$ The tutor then chooses the $k$ closest words for the description of an object, i.e. $$\argmin_{w,p \in P}(\operatorname{wd}_{w,p}(o))$$ This representation and word production strategy has the following desirable properties. The strategy models the feature dimension sensitivity of words/prototypes. For instance, there could be a word that is sensitive only to the brightness dimension. Other words could be sensitive to blue and red channels. Secondly, words are not produced uniformly - similar to human language. Some words are used often, others occur only a few times in the training set. Lastly, the tutor produces $k$ words for any object. Together with the object distribution in $[0,1]^n$, this leads to interesting, non-linear interactions between words and objects. Let us briefly examine the difficulty of the learning task. The learner has to learn to produce (predict) the same words as the tutor based on examples provided by the tutor. Suppose that $m=|W|$ denotes the number of words the tutor knows. In principle, chance performance is equal to choosing $k$ out of $m$ words without repetition, where order does not matter. For experiments with $k=5$ and $m=100$, this amounts to ${100 \choose 5} = 75,287,520$ possibilities, assuming a uniform distribution of words. \section{Description Games and Machine Learning} \label{s:learner} In machine learning terms, DG is a \emph{supervised}, \emph{multi-class} (multiple words), \emph{multi-label} (multiple words per object), \emph{online} classification problem. There is an immense literature and an abundance of algorithms that can be tested on this problem \cite{zhang2014review,tsoumakas2009mining}.
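The tutor strategy above can be sketched as follows; the function names and random seed are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_lexicon(m, n):
    """m words over an n-dimensional space: prototypes drawn from U(0,1)^n,
    weight vectors from B(1, 0.5)^n with at least one nonzero weight."""
    prototypes = rng.uniform(0.0, 1.0, size=(m, n))
    weights = rng.binomial(1, 0.5, size=(m, n)).astype(float)
    for w in weights:
        if not w.any():
            w[rng.integers(n)] = 1.0  # avoid all-zero weight vectors
    return prototypes, weights

def describe(obj, prototypes, weights, k=5):
    """Indices of the k words with the smallest weighted Euclidean distance."""
    wd = np.sqrt(np.sum(weights * (obj - prototypes) ** 2, axis=1))
    return np.argsort(wd)[:k]
```

With $m=100$ and $k=5$, guessing the tutor's exact description uniformly at random succeeds with probability $1/\binom{100}{5}$, which is the chance level quoted above.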
For this paper, we test roughly a dozen different learning methods, from linear models to ensemble classifiers, Bayesian learning and neural networks, that solve the description game learning problem. The following paragraphs give a (brief) overview of the various methods we tested. \paragraph{Nearest neighbor (NN)} One of the simplest and often best performing methods is nearest neighbor - also called KNN or, in this paper, \emph{KNeighbors} \cite{cover1967nearest}. KNeighbors is a \emph{non-parametric} method that stores all samples ever encountered. New samples are classified based on the classes of their $k$ nearest neighbors (stored examples). The algorithm's simplicity and the (often out of the box) success of this method have led to its widespread adoption. We also use a related algorithm from the same family: \emph{Nearest Centroid (NC)}. NC represents classes using centroids of the corresponding samples. New samples are classified based on the shortest distance to the centroids. This method is among the most widely used in word learning because it corresponds nicely with ideas in psychology \cite{rosch1975cognitive}. \paragraph{Generalized Linear Models (GLM)} GLM describe a family of algorithms that all model predictions as linear combinations of input variables. All models furthermore assume that predictions (dependent variables) are generated from exponential-family probability distributions (Gaussian, binomial, gamma etc.). Learners also differ in terms of regularization and learning regime (closed-form, stochastic etc.). We use various classifiers: \emph{Logistic Regression}, \emph{Online Passive Aggressive} (PA) \cite{crammer2006online} and \emph{Stochastic Gradient Descent} (SGD) \cite{bottou2010large}. \paragraph{Ensemble methods (ENM)} ENM are meta-algorithms that try to improve classification results by combining the results of sets (ensembles) of classifiers (NN, linear or others). There are basically two types of ensemble methods.
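A minimal nearest-centroid classifier of the kind described (our own sketch, not a particular library's implementation):

```python
import numpy as np

class NearestCentroidClassifier:
    """Each class is represented by the mean (centroid) of its training
    samples; a new sample gets the class of the closest centroid."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Distances from every sample to every centroid: (n_samples, n_classes)
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[np.argmin(d, axis=1)]
```

For the multi-label description game, one straightforward reduction is binary relevance: train one such binary classifier per word and predict the $k$ words whose classifiers respond most strongly.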
The first relies on weak, underfitting classifiers, each often only slightly better than random choice. There are two main methods in this group: \emph{AdaBoost} \cite{freund1995desicion} and \emph{Gradient Boosting} \cite{friedman2001greedy}. AdaBoost works by fitting a series of weak learners. At each step (boosting iteration), the training data is reweighted to focus on samples that were not correctly classified in the previous step. Successive classifiers therefore essentially encode the classification of various aspects of the data. New samples are classified by computing the majority estimate of the classifiers. Gradient Boosting is an extension of boosting for optimizing any differentiable loss function. The second type of ensemble method takes the opposite approach and relies on sets of complex, overfitting classifiers. Here, we use \emph{Random Forest} \cite{breiman2001random} and \emph{Extra Trees} \cite{geurts2006extremely}, which use ensembles of decision trees. Decision trees are a non-parametric method that learns binary decisions (nodes) and arranges them in a binary tree \cite{rokach2014data}. Both RandomForest and ExtraTrees build multiple overfitting classifiers on random subsets of samples and features. \paragraph{Bayesian Methods (BM)} BM rely on Bayes' theorem to transform the classification problem into one of estimating probability distributions. New samples are classified based on the prior probabilities of classes, as well as posterior estimates of sample probabilities and the probability of observing samples given classes. Bayesian classifiers primarily differ in the assumptions they make about the probability distributions that need to be estimated. We use \emph{Gaussian Naive Bayes} classifiers \cite{jordan2002discriminative} (normal distributions, independent features) and \emph{Multinomial Naive Bayes} (multinomial distributions, independent features). Parameters for the probability distributions are estimated using expectation-maximization.
\paragraph{Neural networks (NN)} Recently, neural networks have pushed the state of the art in many classification problems such as face recognition and image labeling. The current trend is to stack multiple layers of neurons (mostly non-linear functions) and train them (often one layer at a time) using variants of the backpropagation algorithm. NN can take various forms in terms of network topologies, transfer functions, training regimes and learning rules. For the purpose of this paper, we used a multi-layer perceptron (\emph{MLP}) with 2 layers of rectified linear units and a final sigmoid layer for classification. \begin{figure} \begin{center} \includegraphics[width=.9\columnwidth]{figs/objects} \end{center} \caption{Objects used in evaluating different algorithms.} \label{f:objects} \end{figure} \section{Experimental Setup} \subsection{Datasets} We compare the different learners on simulated and robot data sets. The robot data sets consist of 20 different objects (see Figure \ref{f:objects}) of various colors, shapes and sizes. We recorded approximately 1000 scenes in which two robots observe objects from different perspectives. Scenes differ in the positions of objects, which also alters the objects' perceived shape and color features. For each object, $n=17$ feature dimensions are extracted: x, y, z position, YUV color values (mean, min, max), width, height, length. Data is scaled between 0 and 1 using a linear scaler and, for some classifiers, to zero mean and unit variance. \paragraph{GRO1} This dataset consists of all object observations ever made by the two robots, collected in one matrix ($4532 \times 17$). GRO1 is used for experiments in which tutor and learner have the same object perception, in particular, the same feature estimates for objects. \paragraph{GRO2} This dataset is a grounded robot data set. It consists of two matrices, one for each robot (each of size $4532 \times 17$).
Each row corresponds to the perception of the same physical object from the viewpoints of the two different robots. This data is used in the following way. The tutor produces words from the perspective of robot 1 (matrix 1). The learner learns from the observation of the same object, but from the perspective of the other robot (matrix 2). \emph{GRO2} is used to evaluate what happens if there is \emph{perceptual deviation} \cite{spranger2012deviation}. That is, tutor and learner see the scene from different viewpoints and therefore have different feature estimates for objects. For instance, the tutor robot might observe different x, y positions for an object, since it sees the object from a different perspective. But color and shape features will also be slightly different. \paragraph{SIM} This dataset is simulated and consists of 4532 object observations of $n=17$ feature dimensions drawn from a uniform distribution $\mathrm{U}(0,1)$. \subsection{Methods} Learners are trained on samples of objects paired with binary vector encodings of the words produced by the tutor. Each classifier has to predict (produce) the correct set of $k$ words given (new) object vectors, by predicting a binary vector encoding of word activations. We then measure the difference between the production of the tutor and the prediction of the learner. For learning algorithms that predict probabilities $p$ for words (e.g. MLP), a word counts as predicted if $p>0.5$. For all experiments here, we draw $|W|=100$ prototypes and weights for the tutor (according to the description in Section \ref{s:tutor}) and perform 4-fold cross-validation on data sets each consisting of 4532 samples. This means that training happens on roughly 3400 samples and testing on 1100 new samples.
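The evaluation protocol can be sketched as follows. The helper names and the contiguous fold layout are illustrative assumptions rather than the exact implementation; with 4532 samples and 4 folds, each split trains on 3399 samples and tests on 1133.

```python
def k_fold_indices(n_samples, n_folds=4):
    """Yield (train, test) index lists for contiguous k-fold cross-validation."""
    fold = n_samples // n_folds
    for f in range(n_folds):
        start = f * fold
        stop = (f + 1) * fold if f < n_folds - 1 else n_samples
        test = list(range(start, stop))
        train = list(range(0, start)) + list(range(stop, n_samples))
        yield train, test

def words_from_probs(probs, threshold=0.5):
    """Turn per-word probabilities into a predicted word set (p > 0.5 counts)."""
    return {i for i, p in enumerate(probs) if p > threshold}

splits = list(k_fold_indices(4532, 4))
print(len(splits[0][0]), len(splits[0][1]))   # 3399 train, 1133 test
print(words_from_probs([0.9, 0.4, 0.7, 0.2, 0.6]))  # {0, 2, 4}
```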
In summary, the standard parameters for our evaluations are $n=17$ (number of features), $|W|=100$ (number of words and tutor prototypes), $k=5$ (number of words uttered by the tutor) and $p=0.5$ (for the binomial distribution of tutor weights). Classifiers not supporting multi-class, multi-label classification by default were trained using one-vs-rest \cite{spolaor2015systematic}. The exceptions are RandomForest, KNeighbors, NearestCentroid and MLP. Most classifiers rely on various hyper-parameters. We optimized hyper-parameters using grid searches on a separate simulated dataset \emph{SIM-DEVELOP} (same characteristics as \emph{SIM}). Hyper-parameters were optimized once on \emph{SIM-DEVELOP} and then fixed for all results reported here. \subsection{Measures} In this paper we use a single performance measure: the \emph{f-score} \cite{sokolova2009systematic}. The f-score is defined as the harmonic mean of \emph{precision} and \emph{recall}. There are various definitions of precision and recall depending on the classification problem (binary, multi-class, multi-label). Generally speaking, precision measures how many of the words predicted by the learner are correct; recall measures how many of the tutor's words were predicted by the learner. We use a particular f-score variant called \emph{example-based} (or \emph{sample}) f-score, which does not take unbalanced word distributions into account. An f-score of 100 means that all words, and only those words, uttered by the tutor are uttered by the learner.
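A minimal implementation of the example-based f-score might look as follows. This is our own sketch (libraries such as scikit-learn provide an equivalent via sample-averaged f1): per sample, compute precision and recall over word sets, take their harmonic mean, and average over samples.

```python
def sample_f_score(true_sets, pred_sets):
    """Example-based (sample-averaged) f-score over predicted word sets, in percent."""
    total = 0.0
    for t, p in zip(true_sets, pred_sets):
        if not t and not p:
            total += 1.0          # empty truth and empty prediction: perfect
            continue
        overlap = len(t & p)
        if overlap == 0:
            continue              # no overlap contributes 0 to the average
        precision = overlap / len(p)
        recall = overlap / len(t)
        total += 2 * precision * recall / (precision + recall)
    return 100.0 * total / len(true_sets)

print(sample_f_score([{1, 2, 3}], [{1, 2, 3}]))   # 100.0
print(sample_f_score([{1, 2, 3, 4}], [{1, 2}]))   # two of four tutor words found
```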
\section{Results} \begin{table}[t] \begin{center} \begin{tabular}{| l | c | c | c |} \hline & SIM & GRO1 & GRO2 \\\hline \emph{Nearest Neighbor} & & & \\\hline KNeighbors & 67.89 & 74.06 & 61.25\\\hline NearestCentroidOvR & 65.81 & 83.44 & 63.87\\\hline \emph{Linear Models} & & & \\\hline SGD & 79.49 & 87.93 & 65.27\\\hline PassiveAggressive & 63.32 & 84.70 & 65.04\\\hline LogisticRegression & 76.61 & 86.17 & 61.95\\\hline \emph{Ensemble Methods} & & & \\\hline RandomForest & 80.84 & 87.91 & 65.79\\\hline ExtraTrees & 67.89 & 74.06 & 61.25\\\hline AdaBoost & 79.49 & 87.93 & 65.27\\\hline GradientBoosting & 80.83 & {\bf 92.58} & 65.68\\\hline \emph{Bayesian} & & & \\\hline GaussianNB & 63.85 & 84.44 & 65.25\\\hline MultinomialNB & 75.35 & 87.02 & 67.64\\\hline \emph{Neural} & & & \\\hline MLP & {\bf 82.74} & 91.58 & {\bf 70.87} \\\hline \end{tabular} \end{center} \caption{Results comparison on grounded and simulated data (sample-based f-score, $n=17$, $k=5$, $p=0.5$, $|W|=100$).} \label{t:experiment-1} \end{table} Table \ref{t:experiment-1} shows the performance of the various classifiers on grounded and simulated data. Many learners perform well on the task relative to its complexity. In particular, simple algorithms such as GaussianNB or linear models perform well. More complex methods, such as ensemble methods, are generally top performers. The best performing method on \emph{SIM} and \emph{GRO2} is the multi-layer perceptron MLP. On \emph{GRO1}, GradientBoosting is the front runner, although the ensemble methods generally perform quite similarly. None of the methods fail catastrophically, which is mostly due to hyper-parameter optimization. Interestingly, all methods improve on grounded data (\emph{GRO1}), some by as much as 20 points (e.g. PassiveAggressive). This suggests that the methods are able to take advantage of the structure available in grounded data. However, all methods perform worse on \emph{GRO2} than on \emph{GRO1}.
In some cases, performance differs by almost 30 points between \emph{GRO2} and \emph{GRO1}. A word on how to understand these results: this study focuses on understanding the baseline for word learning in high-dimensional meaning spaces. The results are existence proofs (lower bounds) showing that learners can solve this problem with relatively high f-score. They do not show the limits of individual learning algorithms; rather, they show general trends that hint at the scope of the learning problem. \subsection{Scaling Object Feature Dimensions} We are interested in understanding referential uncertainty in high-dimensional meaning spaces. Consequently, we manipulated the number of object feature dimensions $n$. The number of dimensions in the grounded data sets is fixed by the vision system, so we increased the number of dimensions for simulated data. All other parameters are kept the same. Figure \ref{f:scaling} shows how the classifiers fare for $n \in \{10, 100, 1000, 10000\}$. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{figs/scaling} \end{center} \caption{Performance of classifiers for increasing $n$ (number of object feature dimensions) with $p=0.5$, $|W|=100$, $k=5$, data set size 4532, 4-fold.} \label{f:scaling} \end{figure} For the best performing classifiers (MLP, AdaBoost), performance degrades roughly linearly with each order of magnitude increase in $n$. These results suggest that classifiers optimized for the various $n$, and/or an increased number of training samples, could actually deal with even higher $n$-dimensional data (recall that all classifiers were optimized for $n=17$). One classifier (MultinomialNB) performs poorly throughout, while others (e.g. ExtraTrees, RandomForest) degrade much more rapidly with the number of dimensions than the best performing ones. Others perform best for certain $n$ (e.g. PassiveAggressive).
All of these classifiers perform reasonably well or very well for the hyper-parameter optimized $n=17$. Consequently, these classifiers are sensitive to hyper-parameters with respect to the number of dimensions. \subsection{Scaling Word Sensitivity} Another dimension of scaling is the sensitivity of words. Prototype weights for all experiments reported so far were drawn from a binomial distribution $\mathrm{B}(1,p=0.5)$. This means that words are, on average, sensitive to half of the dimensions. In our experiments, referential uncertainty is tied to the fact that words can refer to aspects of objects. To test the learners, we ran experiments for various $p\in \{0.1, 0.25, 0.5, 0.75, 1.0\}$ and $n=100$. All other parameters stay the same. Figure \ref{f:scaling-p} shows that quite a few learners (AdaBoost, ExtraTrees, MLP etc) have no problem dealing with the various $p$. We can conclude that these learners will perform well on mixed languages where some words encompass all features of an object and others are more specific and refer only to certain features. Other learners, such as KNeighbors, become better with larger $p$. This is no surprise, since KNeighbors stores full examples. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{figs/scaling-p} \end{center} \caption{Performance of learners for varying values of $p \in\{0.1,0.25,0.5,0.75,1.0\}$ with $n=100$, $|W|=100$ and $k=5$.} \label{f:scaling-p} \end{figure} \subsection{Online Learning} One important aspect of language learning from a developmental point of view is online learning. We tested how well the algorithms perform over time. For this, we incrementally train classifiers on the training set. For instance, we train on the first $m$ object observations and evaluate the f-score on the test data set. Figure \ref{f:online} shows the performance of the classifiers over time. Practically all classifiers learn very fast, with most of the gains happening in the first 500 training samples.
In other words, the learner becomes quite proficient after 500 interactions with the tutor. After 1000 training samples, all classifiers are within 5 points of their final f-score. These online learning results are quite remarkable given that there are 100 words to be learned. What certainly helps here is that certain words are used frequently and others less frequently. We analyzed the results with respect to word frequency, and it is evident that this is indeed one big driver of the speed of learning. \begin{figure} \begin{center} \includegraphics[width=0.8\columnwidth]{figs/online} \end{center} \caption{Online learning performance of various classifiers on simulated data.} \label{f:online} \end{figure} \subsection{Discussion of Results} Our results show that many off-the-shelf machine learning algorithms can deal with high-dimensional meaning spaces. We only optimized the classifiers for $n=17$, yet we observe a linear drop in performance for an exponential increase in the number of dimensions. This suggests that we might obtain on-par performance even for very high $n$, if we optimize the classifiers for each $n$ and logarithmically increase the training set size. Our results also show that many classifiers have no problem dealing with the feature sensitivity related to referential uncertainty. Our experiments suggest that referential uncertainty in high-dimensional meaning spaces is \emph{not} an exponentially growing problem. In other words, adding dimensions has at most a linear effect on learners in the paradigm discussed here. This is an interesting result because it suggests that referential uncertainty, although often thought of as growing exponentially with the number of dimensions or the degree to which words can associate with aspects of objects, is much less of an issue than one might expect. \paragraph{Meaning Space Structure} There is a difference in performance between the \emph{SIM} and \emph{GRO1} data sets.
The reason is that there is much more structure in the real world than in the simulated world. The real-world data sets consist of limited sets of objects that occupy well-defined regions of the perceptual data space. For instance, there are certain clearly separable color regions based on the objects in \emph{GRO1}. This structure in the environment helps all learning algorithms become more successful. \paragraph{Tutoring} Tutoring strategies are often thought to be about social feedback (e.g. pointing or agreement). But tutoring can also mean that the tutor structures the environment (and possibly also the language) for the learner. This can include taking perspective or conceptualizing the world from the viewpoint of the learner. Generally speaking, it has been found that tutoring strategies help learners \cite{spranger2013acquisition,spranger2015incremental}. The delta in performance between \emph{GRO1} and \emph{GRO2} confirms these ideas. In \emph{GRO1} the tutor utters words based on what the learner sees. In \emph{GRO2} that is not the case. All classifiers perform less well on \emph{GRO2}. \paragraph{Unbalanced data} Another aspect that affects performance is the fact that the training data for words is unbalanced. Some words occur often and others rarely (in fact, some words do not appear in the training set at all, only in the test set). The classifiers have difficulties with sparsely used words. This becomes apparent when examining macro-averaged f-scores (not reported here), which are often much lower than micro-averaged f-scores. This split suggests that, generally speaking, learners are good at learning frequent words but less good at learning infrequent words. \paragraph{Representation} It is interesting to analyze the various learning algorithms with respect to whether they actually build representations similar to that of the tutor.
We deliberately chose various algorithms, none of which directly tries to replicate the tutor's behavior in the learner by learning the same representation. The tutor operates using weighted feature distances to prototypes. Words can be sensitive to only a particular feature channel (e.g. brightness). Algorithms such as KNeighbors do not explicitly represent information like that; they just collect samples. Others, such as RandomForest, do actually learn how to distinguish different words by explicitly learning which features matter for each word. An interesting result of this study is that both of these algorithms perform comparably well in terms of replicating the tutor's behavior. But if we look at the experiments with different $p$ values, we can see that discriminative feature learners such as AdaBoost outperform KNeighbors. \section{Related Work} \paragraph{Referential uncertainty} There is an important difference between the setup here and other studies. Many studies are concerned with enumerable objects in context and how this leads to referential uncertainty (see \cite{frank2008bayesian,belpaeme2012acs,deBeule2006cross-situational} among others). In this paper, we use referential uncertainty in a sense closer to Quine's formulation and to early studies by Siskind \cite{siskind1996computational}. Quine focuses on aspects of a situation, not on enumerable objects, as the source of referential uncertainty. In that sense, the problem in Quine is larger. Even if the learner knows the referent, it still knows nothing about the aspect of the referent the word refers to (color, shape etc). The question remains which referential uncertainty problem is solved by children (possibly both). \paragraph{Description vs Discrimination} An important distinction between various models is the tutoring strategy, that is, the representation and algorithm for word production in the tutor.
In description games, the speaker minimizes distances between the topic object and words (here, weighted Euclidean distances). In other types of interaction (called \emph{guessing games}), the function being maximized for each word is the difference between the topic object $t$ and all the other objects (or features) in the context. An interesting question is whether different production strategies require different learning algorithms or not. Often the learner is modeled after the tutor, and both use the same production and interpretation algorithms (see \cite{spranger2013acquisition,wellens2008coping} for some recent examples). This obviously biases the system, and the question is whether this is necessary. What we can say for the models described in this paper is that nowhere did we bias the learners explicitly towards a particular production algorithm. Rather, all that happens is that the learner tries to replicate the tutor's behavior. \paragraph{Child Learning Strategies} Researchers in child language acquisition have provided many ideas about the strategies that children use to learn the meaning of words. Some of them, such as \emph{perceptual biases} \cite{pruden2006birth}, could potentially be exploited by learning algorithms, if the language affords it. That is, if the language to be learned is based on salient perceptual distinctions, then algorithms that learn discriminative features (e.g. decision trees, ensemble methods) can take advantage of that. Other child language learning strategies based on \emph{linguistic constraints} \cite{gleitman1990structural} are not, by definition, part of the learning paradigm discussed in this paper. We only focus on unordered sets of words uttered by the tutor. The impact of strategies such as \emph{mutual exclusivity} \cite{markman1988children} and \emph{contrast} \cite{clark1987principle} on the learning problem defined in this paper is the subject of future work.
\section{Conclusion} Abstract models can be used to answer questions about which algorithms children use to learn language. This is often done by trying to replicate empirical data. Another goal of such models is to grasp the essence of the learning problem and to characterize how hard it is and how the best known algorithms perform. This paper addresses aspects of the second goal. We defined an abstract version of the word learning problem, translated it into machine learning terms, and compared state-of-the-art methods on the problem. This establishes a baseline that can be used to understand word learning from the viewpoint of complexity. Source code as well as data sets are published online at \url{https://github.com/mspranger/icdl2016language}. \bibliographystyle{IEEEtran}
\section{Introduction} \setcounter{equation}{0} \setcounter{theorem}{0} Geometric attempts to generalize the Yang-Mills construction to $p$-form gauge fields with $p>1$ have led to no-go results that indicate that this goal cannot be achieved while maintaining spacetime locality \cite{Nepomechie,Teitelboim1,DaDeCa1}. In fact, self-interactions of $p$-form gauge fields are so constrained that one can completely list them, even if one drops any a priori geometric interpretation of the $p$-forms as connections for extended objects. This task was explicitly performed in \cite{HK1}, where the following question was analyzed. Consider the free action, \begin{equation} I =\int d^n x\sum_a \big({-1 \over 2(p_{a} +1)!}H^a_{\mu_1 \ldots \mu_{p_{a}+1}} H^{a \mu_1 \ldots \mu_{p_{a}+1}}\big), \label{Lagrangian} \end{equation} for a system of (non-chiral) exterior form gauge fields $B^a_{\mu_1 \ldots \mu_{p_{a}}}$ of degree $\geq 2$. Here, the $H^a$'s are the ``field strengths" or ``curvatures", \begin{eqnarray} H^a&=&{1\over (p_{a}+1)!} H^a_{\mu_1 \ldots \mu_{p_{a}+1}}dx^{\mu_1} \ldots dx^{\mu_{p_{a}+1}} =dB^a, \label{FieldStrength}\\ B^a&=&{1\over p_{a}!} B^a_{\mu_1 \ldots \mu_{p_{a}}} dx^{\mu_1} \ldots dx^{\mu_{p_a}}. \end{eqnarray} We assume throughout that the spacetime dimension satisfies the condition $n>p_a+1$ for each $a$ so that all the $p_a$-forms have local degrees of freedom. The action (\ref{Lagrangian}) is invariant under the abelian gauge transformations, \begin{equation} B^a \rightarrow B^a + d \Lambda^a, \label{origaugetr} \end{equation} where $\Lambda^a$ are arbitrary $p_a-1$ forms. The equations of motion, obtained by varying the fields $B^a_{\mu_1 \ldots \mu_{p_a}}$, are given by, \begin{equation} \partial_\rho H^{a \rho \mu_1 \ldots \mu_{p_a}}=0 \Leftrightarrow d \overline{H}^a=0, \end{equation} where $\overline{H}^a$ is the dual of $H^{a \rho \mu_1 \ldots \mu_{p_a}}$. 
The question addressed in \cite{HK1} was: what are the consistent (local) interactions that can be added to the free action (\ref{Lagrangian})? Interaction terms are said to be consistent if they preserve the number (but not necessarily the form) of the independent gauge symmetries. Of course, one can always add to (\ref{Lagrangian}) gauge-invariant interaction terms constructed out of the curvature components and their derivatives, \begin{equation} \int f(H^{(k)}_{\mu_1 \dots \mu_{p_k +1}}, \partial_\nu H^{(k)}_{\mu_1 \dots \mu_{p_k +1}}, \cdots, \partial_{\nu_{1} \dots \nu_q} H^{(k)}_{\mu_1 \dots \mu_{p_k +1}}) d^nx. \end{equation} Being strictly gauge-invariant, these terms actually do not deform the gauge symmetries. One may, however, also search for interaction terms that deform not only the action, but also the gauge transformations. These turn out to be extremely scarce, as the following theorem indicates: \begin{theorem} Besides the obvious gauge-invariant interactions, the only consistent interaction vertices that can be added to (\ref{Lagrangian}) have the Noether form, \begin{equation} V = \sum_{(A)} g_{(A)} V_{(A)} \label{vertex0} \end{equation} where the $g_{(A)}$ are the coupling constants and the $V_{(A)}$ read \begin{equation} V_{(A)} = \int j^{(t)} \wedge B^{(t)}. \label{vertex} \end{equation} Here, $j^{(t)}$ are gauge-invariant conserved $(n-p_t)$-forms, $dj^{(t)} \approx 0$, and are therefore exhausted by the exterior polynomials in the curvature forms $H^{(k)}$ and their duals $\bar{H}^{(k)}$ {\em \cite{HKS1}}. \label{central} \end{theorem} Because $j^{(t)}$ must have exactly form-degree $n-p_t$, so that the form degree of the integrand of (\ref{vertex}) matches the spacetime dimension $n$, there may be no vertex of the type (\ref{vertex0}) for given spacetime dimension and form-degrees of the exterior form gauge fields.
For example, a set of $2$-form gauge fields admits gauge symmetry-deforming non-trivial interactions only in $n=4$ dimensions \cite{HenneauxPL2} and these are of the Freedman-Townsend type \cite{FT1}. Other examples of vertices of the form (\ref{vertex}) involving $p$-form gauge fields of different form degrees are provided by the Chapline-Manton interactions \cite{ChaplineManton,Nito,Cham1,Cham2,BergRooWitNieu,Baulieu1}. The analysis of \cite{HK1} also enabled one to exhibit new symmetry-deforming interactions, but again only in special dimensions (see also \cite{BrandtD1}; these interactions have been further analysed in \cite{BrandtT1,BrandtT2}). In (\ref{vertex}), the $j^{(t)}$ are exterior polynomials in $H^{(k)}$ and $\bar{H}^{(k)}$ with coefficients that can involve $dx^\mu$. If one imposes Lorentz invariance, bare $dx^\mu$'s cannot appear. Note also that if $(n-1)$-forms are included, an infinite number of couplings (\ref{vertex}) may in general be constructed since arbitrary powers of the duals (which are zero forms) can appear. The vertices (\ref{vertex0}) have a number of remarkable properties: \begin{enumerate} \item First, while the strictly gauge-invariant vertices may involve derivatives of the individual components $H^{(k)}_{\mu_1 \dots \mu_{p_k +1}}$ of the curvatures, the vertices (\ref{vertex}) are very special: they can be expressed as polynomials in the exterior product (``exterior polynomials") in the (undifferentiated) forms $B^{(k)}$, $H^{(k)}$ and $\bar{H}^{(k)}$. This is not an extra requirement. Rather, this property follows directly from the demand that (\ref{vertex0}) defines a consistent interaction. \item If the vertices (\ref{vertex0}) do not involve the duals $\bar{H}^{(k)}$, one recovers the familiar Chern-Simons terms \cite{DJT1}. These are off-shell gauge-invariant up to a total derivative and so do not deform the gauge transformations.
Vertices (\ref{vertex0}) involving the duals are only on-shell gauge-invariant up to a total derivative. These vertices do deform the gauge transformations. \item Although the vertices (\ref{vertex0}) deform the gauge symmetries when they involve the duals $\bar{H}^{(k)}$, they do not modify the algebra of the gauge transformations (to the first order in the coupling constants considered here) because they are linear in the $p$-form potentials. This is in sharp contrast with the Yang-Mills construction, which yields a vertex of the form $\bar{H}^a \wedge B^b \wedge B^c$. There is thus no room for an analog of the Yang-Mills vertex for exterior forms of degree $\geq 2$. How the result is amended in the presence of $1$-forms will be discussed at the end. \item The fact that the gauge transformations remain abelian to first-order in the coupling constant is not in contradiction with \cite{LavLuPoSt1}. Indeed, we focus here only on symmetries of the equations of motion that are also symmetries of the action. Furthermore, the non-abelian structure uncovered in \cite{LavLuPoSt1} concerns symmetries associated with non-trivial global features of the spacetime manifold, which are rigid symmetries \cite{CrJuLuPo1}. \end{enumerate} The above theorem was stated and discussed in \cite{HK1} but a complete demonstration of it was not given. The purpose of this paper is to fill this gap. As we shall see, the proof is of interest in itself since it illustrates various cohomologies arising in local field theory. We conclude this introduction by observing that the interaction vertices are in general not duality-invariant, in the sense that an interaction vertex that is available in one version of the theory may not be so in the dual version where some of the $p$-form potentials are traded for ``dual" $(n-p-2)$-form potentials.
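Before proceeding, let us illustrate the degree counting behind (\ref{vertex}) in the simplest case mentioned above: a set of $2$-form gauge fields in $n=4$ dimensions. The curvatures $H^a$ are then $3$-forms, so their duals $\bar{H}^a$ are $1$-forms, and a gauge-invariant conserved $(n-p_t)=2$-form current can be built as $j^{(c)} \sim \bar{H}^a \wedge \bar{H}^b$. The corresponding vertex,
\begin{equation}
V = g\, f_{abc} \int \bar{H}^a \wedge \bar{H}^b \wedge B^c,
\end{equation}
has total form degree $1+1+2=4=n$, as required, and is of the Freedman-Townsend type. The constants $f_{abc}$ are antisymmetric in $a$, $b$ (only that part survives the wedge product); in the Freedman-Townsend model \cite{FT1} they are the structure constants of a Lie algebra.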
\section{Consistent interactions and Local BRST Cohomology} \setcounter{equation}{0} \setcounter{theorem}{0} Our approach to the problem of constructing consistent interaction vertices for a gauge theory is based on the BRST symmetry. As shown in \cite{BH1,HenneauxCont1}, the question boils down to computing the local BRST cohomological group at ghost number zero in the algebra of local $n$-forms depending on the fields, the ghosts, the antifields and their derivatives. These groups are denoted by $H^0(s\vert d)$. The cocycle condition reads, \begin{equation} sa + db =0, \label{cocycle} \end{equation} where $a$ (respectively $b$) is a local $n$-form (respectively $(n-1)$-form) of ghost number zero (respectively one). Trivial solutions of (\ref{cocycle}) are of the form, \begin{equation} a = s m + dn \label{coboundary} \end{equation} where $m$ (respectively $n$) is a local $n$-form (respectively, $(n-1)$-form) of ghost number $-1$ (respectively $0$). One often refers to (\ref{cocycle}) as the ``Wess-Zumino consistency condition" \cite{WZ}. If $a$ is a solution of (\ref{cocycle}), its antifield-independent part defines a consistent interaction; and conversely, given a consistent interaction, one can complete it by antifield-dependent terms to get a BRST cocycle (\ref{cocycle}). As explained in \cite{BH1,HenneauxCont1}, it is necessary to include the antifields in the analysis of the cohomology in order to cover symmetry-deforming interactions. In the case at hand, the gauge symmetries are reducible and the following set of antifields is required \cite{BV2,HenneauxTeitelboim}, \begin{equation} B^{*a \mu_1 \ldots \mu_{p_a}}, B^{*a\mu_1 \ldots \mu_{p_a-1}},\ldots, B^{*a\mu_1},B^{*a}. \label{antifieldlist} \end{equation} The Grassmann parity and the {\it antighost} number of the antifields $B^{*a \mu_1 \ldots \mu_{p_a}}$ associated with the fields $B^a_{\mu_1 \ldots \mu_{p_a}}$ are equal to $1$.
The Grassmann parity and the {\it antighost} number of the other antifields are determined according to the following rule. As one moves from one term to the next one to its right in (\ref{antifieldlist}), the Grassmann parity changes and the antighost number increases by one unit. Therefore the parity and the antighost number of a given antifield $B^{*a \mu_1 \ldots \mu_{p_a-j}}$ are respectively $j+1$ modulo $2$ and $j+1$. Reducibility also imposes the following set of ghosts, \begin{equation} C^a_{\mu_1 \ldots \mu_{p_a-1}},\ldots,C^a_{\mu_1 \ldots \mu_{p_a-j}},\ldots, C^a. \label{ghosts} \end{equation} These ghosts carry a degree called the pure ghost number. The pure ghost number of $C^a_{\mu_1 \ldots\mu_{p_a-1}}$ and its Grassmann parity are equal to 1. As one moves from one term to the next one to its right in (\ref{ghosts}), the Grassmann parity changes and the pure ghost number increases by one unit, up to $p_{a}$. We denote by ${\cal P}$ the algebra of spacetime forms with coefficients that are polynomials in the fields, antifields, ghosts and their derivatives. The action of $s$ in ${\cal P}$ is the sum of two parts, namely, the ``Koszul-Tate differential $\delta$" and the ``longitudinal exterior derivative $\gamma$": \begin{equation} s=\delta +\gamma, \end{equation} where we have, \begin{eqnarray} \delta B^a_{\mu_1 \ldots \mu_{p_a}}&=&0, \\ \delta C^a_{\mu_1 \ldots \mu_{p_a-j}}&=&0, \\ \delta {\overline B}^{*a}_1 +d{\overline H}^a &=&0, \nonumber \\ \delta {\overline B}^{*a}_2 +d{\overline B}^{*a}_1 &=&0,\nonumber \\ &\vdots& \label{defduaux} \\ \delta {\overline B}^{*a}_{p_a+1}+d{\overline B}^{*a}_{p_a} &=& 0, \nonumber \end{eqnarray} and, \begin{eqnarray} \gamma {{B}^{*a\mu_1 \dots \mu_{p_a+1-j}}}&=&0,\\ \gamma B^a + dC^a_1 &=&0,\\ \gamma C^a_1 + dC^a_2 &=&0,\\ &\vdots& \nonumber \\ \gamma C^a_{p_a-1} + dC^a_{p_a} &=&0,\\ \gamma C^a_{p_a} & = &0.
\end{eqnarray} In the above equations, $C^a_{j}$ is the $(p_a-j)$-form whose components are $C^a_{\mu_1 \ldots \mu_{p_a-j}}$. Furthermore, we have systematically denoted (as above) the duals by an overline to avoid confusion with the *-notation of the antifields. The actions of $\delta$ and $\gamma$ on the individual components of the antifields (\ref{antifieldlist}), ghosts (\ref{ghosts}) and their derivatives are easily read off from the above formulas (recalling that $\delta (dx^\mu)=\gamma (dx^\mu)=0$, $[\partial_\mu,\delta]=0,[\partial_\mu,\gamma]=0$). \section{General procedure for working out BRST cohomology} \label{ggen} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{lemma}{0} In order to prove the theorem, we shall solve the BRST cocycle condition by proceeding as in the Yang-Mills case \cite{BH2,BBH2}. To that end, one expands the cocycles and the cocycle condition according to the antighost number. Thus, if $a$ is a BRST cocycle (modulo $d$), then its various components in the expansion, \begin{equation} a = a_0 + a_1 + a_2 + \cdots + a_k, \; \; antigh(a_i) = i, \label{expansion} \end{equation} must fulfill the chain of equations, \begin{eqnarray} \label{chain} \gamma a_0 + \delta a_1 + db_0 &=& 0, \\ &\vdots& \nonumber \\ \gamma a_{k-1} + \delta a_k + db_{k-1} &=& 0, \\ \gamma a_k + db_k &=& 0. \end{eqnarray} The last equation in this chain no longer involves the differential $\delta$ and can be easily solved. The idea, then, is to start the resolution of the cocycle condition from $a_k$ and to work one's way up until one reaches $a_0$, which is the quantity of physical interest. [Recall that $a_0$ defines a consistent deformation of the Lagrangian. And conversely, if $a_0$ is a consistent deformation of the Lagrangian, then one may complete it by terms of positive antighost number, as in (\ref{expansion}), so as to construct a BRST cocycle $a$. 
Furthermore, trivial BRST cocycles (in the cohomological sense) correspond to trivial deformations (i.e., deformations that can be absorbed through redefinitions of the field variables) \cite{BH1,HenneauxCont1}.] The reconstruction of the cocycle $a$ from $a_0$ stops at some antighost number $k$ because $a_0$ is polynomial in the derivatives (see the argument in Section 3 of \cite{BBH2}). Before doing this, we shall introduce some useful notations and give a few solutions. In the analysis of the BRST cohomology, it turns out that two combinations of the fields and antifields play a central r\^ole. The first one combines the field strengths and the duals of the antifields and is denoted $\tilde H^a $, \begin{equation} \tilde H^a = {\overline H}^a + \sum_{j=1}^{p_a+1} {\overline B}^{*a}_j. \end{equation} The second one combines the $p_a$-forms and their associated ghosts and is denoted $\tilde B^a$, \begin{equation} \tilde B ^a = B^a + C^a_1 + \ldots + C^a_{p_a}. \end{equation} It is easy to see that both $\tilde H^a$ and $\tilde B ^a$ have a definite Grassmann parity, respectively given by $n-p_a+1$ and $p_a$ modulo $2$. On the other hand, exterior products of $\tilde H^a$ or $\tilde B^a$ (including the $\tilde H^a$ and $\tilde B^a$ themselves) are not homogeneous in form degree and ghost number. To isolate a component of a given form degree $k$ and ghost number $g$, we enclose the product in brackets $[\ldots]^{k,g}$. The component of $[A]^{k,g}$ which has definite antighost number $l$ is denoted $[A]^{k,g}_l$. Since products of $\tilde B ^a$ appear very frequently in the rest of the analysis, we introduce the following notations, \begin{equation}\label{conve} {\cal Q}^{a_1 \ldots a_m}=\tilde B ^{a_1}\ldots \tilde B ^{a_m} \quad \hbox{and} \quad {\cal Q}^{a_1\ldots a_m}_{k,g}=[\tilde B ^{a_1}\ldots \tilde B ^{a_m}]^{k,g}. \end{equation} We shall not write the wedge product explicitly from now on ($dx^0 dx^1$ can clearly only mean $dx^0\wedge dx^1$).
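For concreteness (this illustration is ours, not part of the original argument), consider a single $2$-form, $p_a=2$, dropping the index $a$. The two combinations then collect one term in each antighost, respectively pure ghost, number,
\begin{equation}
\tilde H = {\overline H} + {\overline B}^{*}_1 + {\overline B}^{*}_2 + {\overline B}^{*}_3,
\qquad
\tilde B = B + C_1 + C_2.
\end{equation}
The successive terms of $\tilde H$ have form degrees $n-3,\,n-2,\,n-1,\,n$ and antighost numbers $0,1,2,3$, while those of $\tilde B$ have form degrees $2,1,0$ and pure ghost numbers $0,1,2$. In each sum the total degree (form degree plus ghost number) is constant, equal to $n-3$ for $\tilde H$ and to $2$ for $\tilde B$, which is why the brackets $[\ldots]^{k,g}$ are needed to extract components of products that are homogeneous in form degree and ghost number.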
We also define the three ``mixed operators": $\Delta = \delta + d$, $\tilde \gamma =\gamma + d$ and $\tilde s=s+d$. Using those definitions we have the following relations: \begin{eqnarray} \Delta {\tilde H}^a &=&0, \quad \Delta {\tilde B}^a =0,\quad \Delta {H}^a=0,\\ \tilde \gamma {\tilde H}^a &=&0, \quad\tilde \gamma {\tilde B}^a =H^a, \quad\tilde \gamma {H}^a =0, \label{formulerusse} \\ \tilde s {\tilde H}^a &=&0, \quad \tilde s {\tilde B}^a =H^a, \quad \tilde s { H}^a =0. \end{eqnarray} The equation $\tilde \gamma {\tilde B}^a=H^a$ is known in the literature as the ``horizontality condition" \cite{Baulieu1}. It is easy to construct solutions of the Wess-Zumino consistency condition out of the variables $H^a, \tilde H^a, \tilde B^a$. For example, in ghost number zero, \begin{equation} a^{n,0}=[P_b(H^a,\tilde H^a) \tilde B^b]^{n,0},\label{dddd} \end{equation} is a solution of (\ref{cocycle}). This can be seen by applying $\tilde s$ to $P_b(H^a,\tilde H^a) \tilde B^b$. One gets $\tilde s(P_b \tilde B^b)=(-)^{\epsilon_P} P_b (\tilde s \tilde B^b)=(-)^{\epsilon_P} P_b H^b$ and thus, $s[P_b \tilde B^b]^{n,0} + d[P_b \tilde B^b]^{n-1,1}=[\tilde s (P_b \tilde B^b)]^{n,1}=[P_b H^b]^{n,1}=0$ (no ghost occurs in $P_b H^b$). We shall prove in this article the remarkable property that all antifield-dependent solutions of the Wess-Zumino consistency condition in ghost number $0$ are in fact of the form (\ref{dddd}) (modulo antifield-independent terms). According to the discussion at the beginning of Section {\bf 2}, this is equivalent to proving Theorem {\bf \ref{central}} since $a^{n,0}_0=[P_b(H^a,\tilde H^a)\tilde B^b]^{n,0}_0= P_b(H^a,{\overline H^a})B^b$ is of the required form. \section{Some useful lemmas} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{lemma}{0} In order to construct the general solution of the (mod $d$) BRST cocycle condition along the lines indicated in the previous section, we shall need a few lemmas.
\begin{lemma} \label{endtrivial} Let $a_k$ be a solution of $\gamma a_k + db_k =0$, with non-vanishing antighost number $k$. Then one has $a_k = a'_k + \gamma m_k + dn_k$ where $a'_k$ is annihilated by $\gamma$, $\gamma a'_k = 0$. \end{lemma} \proof{The proof proceeds as in the Yang-Mills case: one analyses the descent equation associated with $\gamma a_k + db_k =0$. In \cite{HK2} we have listed all the non-trivial descents without taking into account the antifields. However, the results are unchanged even if one includes the antifields since their contributions to non-trivial descents can always be absorbed by trivial terms (the proof of this statement is identical to the one in the Yang-Mills case \cite{BBH2}). Therefore, if $a_k$ involves the antifields, the descent associated with it is necessarily trivial so that one can find a different representative $a'_k$ in the same class of $H(\gamma \vert d)$ as $a_k$ which is annihilated by $\gamma$.} \begin{lemma} \label{gammacohomo} The general solution of $\gamma a_k = 0$ is given by, \begin{equation} a_k = \sum_I P^I_k \omega^I + \gamma c_k, \label{cocyhg} \end{equation} where the $\omega^I$ are polynomials in the undifferentiated ``last" ghosts of ghosts $C^a_{p_a}$ and the $P^I_k$ are spacetime $n$-forms with coefficients that are polynomials in the field strengths, their derivatives, the antifields and their derivatives (these variables will be denoted $\chi$ in the sequel). \end{lemma} \proof{The proof of this lemma is quite standard. One redefines the variables into three sets obeying respectively $\gamma x^i=0,\ \gamma y^\alpha=z^\alpha$, $\gamma z^\alpha=0$. The variables $y^\alpha$ and $z^\alpha$ form ``contractible pairs" and the cohomology is then generated by the (independent) variables $x^i$. In our case, the $x^i$ are given by $dx^\mu$, the field strength components, the antifields and their derivatives as well as the last (undifferentiated) ghosts of ghosts.
A complete proof of the lemma in the absence of antifields can be found in \cite{HK2}. Here we simply note that the antifields are automatically part of the $x^i$ variables since they are all $\gamma$-closed and do not appear in the $\gamma$ variations.} Using the conventions (\ref{conve}) and dropping the trivial term, we can write the cocycle (\ref{cocyhg}) as, $ a_k=\sum_m P_k^{a_1\ldots a_m} [\tilde B ^{a_1} \ldots \tilde B ^{a_m}]^{0,l} = \sum_m P_k^{a_1\ldots a_m} {\cal Q}^{a_1 \ldots a_m}_{0,l}, $ with $l=\sum_{i=1}^m p_{a_i}$. \begin{lemma} \label{smallerdegrees} Let $\alpha$ be an antifield-independent $\gamma$-cocycle that takes the form \begin{equation} \alpha = R_1(H^{a_r} , C^{a_r}_{p_{a_r}}) R_2(H^{b_s} , C^{b_s}_{p_{b_s}}), \; p_{b_s}>p_{a_r}, \end{equation} where $R_1$ (respectively $R_2$) is an exterior polynomial in the curvature form $H^{a_r}$ (respectively $H^{b_s}$) and the last ghost of ghosts $C^{a_r}_{p_{a_r}}$ (respectively $C^{b_s}_{p_{b_s}}$). Assume that $R_1$ contains no constant term and is trivial in $H(\gamma\vert d)$, \begin{equation} R_1=\gamma U_1 + dV_1. \end{equation} Then, $\alpha$ is also trivial in $H(\gamma \vert d)$. \end{lemma} \proof{This result was proved in \cite{HK2}. Since $R_1$ is trivial, it is the obstruction to the lift of a $\gamma$-cocycle $\beta_1$ through the descent equations of $H(\gamma\vert d)$. Because of the condition $p_{b_s}>p_{a_r}$, $\alpha$ then also appears as the obstruction to the lift of the $\gamma$-cocycle $\beta_1 R_2$, indicating that $\alpha$ is trivial in $H(\gamma\vert d)$.} The lemma applies in particular when $R_1$ is an arbitrary polynomial of degree $>0$ in the curvatures $H^{a_r}$. \begin{lemma} \label{triviality} Let $a$ be a cochain with form-degree $p$ and ghost number $g$, $a\equiv [a]^{p,g}$, and let $a=a_0+\ldots +a_k$ be its expansion according to the antighost number, $a_i=[a]^{p,g}_{i}$.
Assume that the last term $a_k$ takes the form $a_k=[P]^{q,-k}_k \chi$ where $P$ is an exterior polynomial in $\tilde H$ and $H$ and where $\chi \equiv \chi^{p-q,k+g}$ is an exterior polynomial in $H$ and $C^a_{p_a}$ which is trivial in $H(\gamma\vert d)$, $\chi(H,C)=\gamma m +dn$. Then one can redefine $a_k$ away by adding $s$-exact terms modulo $d$ to $a$, \begin{equation} a=su+dv + \hbox{\ terms of antighost number $< k$}. \end{equation} \end{lemma} \proof{One has $P(\tilde H, H)=[P]^{q-k,0}_0 + \ldots + [P]^{q,-k}_k +\ldots + [P]^{n,-n+q-k}_{n-q+k}$ and $\tilde s P=0$. One also has, by assumption, $\chi\equiv \chi^{p-q,k+g}=\gamma m^{p-q,k+g-1} +dm^{p-q-1,k+g}$ with $m^{p-q,k+g-1}\equiv m$ and $m^{p-q-1,k+g}\equiv n$. If we define $m^{i,j}\ (i<p-q-1)$ through the descent equation $\gamma m^{p-q-1,k+g}+ dm^{p-q-2,k+g+1}=0,\ldots$ and $\tilde m = m^{p-q,k+g-1}+m^{p-q-1,k+g}+m^{p-q-2,k+g+1}+\ldots + m^{0,k+g+p-q-1}$, one gets, $\chi^{p-q,k+g}=\tilde \gamma \tilde m - dm^{p-q,k+g-1}=\tilde s \tilde m -dm^{p-q,k+g-1}$. Thus, $\tilde s ((-1)^{\epsilon_P} P\tilde m)=a_k -P dm^{p-q,k+g-1}$. If we project this equation on the form degree $p$ of $a_k$, one finds the equation, \begin{equation} su^{p,g-1}+du^{p-1,g}=a_k-[P]^{q-1,-k+1}_{k-1}dm^{p-q,k+g-1}, \end{equation} where we have set $u^{p,g-1}\equiv [(-1)^{\epsilon_P}P\tilde m]^{p,g-1}$ and $u^{p-1,g}\equiv [(-1)^{\epsilon_P} P\tilde m]^{p-1,g}$. Thus, \begin{equation} a_k=su^{p,g-1}+du^{p-1,g}+ \hbox{\ terms of antighost number $<k$}, \end{equation} which is the desired result.} \section{Proof of the theorem} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{lemma}{0} We now have all the tools required to solve the Wess-Zumino consistency condition (\ref{cocycle}). Consider first the case where the expansion of $a$ (which has total ghost number $0$) reduces to $a_0$ (no antifields). Then, $a \equiv a_0$ fulfills $\gamma a_0 + db_0 =0$.
This equation was investigated in detail in \cite{HK2}, where it was shown that it has only two types of solutions: those for which one can assume that $b_0 = 0$, which are the strictly gauge-invariant terms; and those for which no redefinition yields $b_0 = 0$ (``semi-invariant terms"), which are exhausted by the Chern-Simons terms. Both types of solutions preserve the form of the gauge symmetries and are in agreement with the theorem; we can thus turn to the case where $a$ involves the antifields, $k \not=0$. By lemma {\bf \ref{endtrivial}}, one can assume that the last term $a_k$ in the expansion of $a$ is annihilated by $\gamma$. Indeed, the (allowed) redefinition $a \rightarrow a - sm_k - dn_k$ (see Lemma {\bf {\ref{endtrivial}}}) enables one to do so. Then, the next to last equation in the chain (\ref{chain}) implies $d \gamma b_{k-1} = 0$, i.e., by the algebraic Poincar\'e lemma, $\gamma b_{k-1} + d c_{k-1} = 0$ for some $c_{k-1}$ (the cohomology of $d$ is trivial in form-degree $n-1$). Now, two cases must be considered: either $k>1$, in which case lemma {\bf \ref{endtrivial}} implies again that one can assume $\gamma b_{k-1} = 0$ through redefinitions; or $k=1$, in which case $b_{k-1} \equiv b_0$ does not involve the antifields and may lead to a non-trivial descent. This second possibility arises only if $H(\gamma)$ does not vanish in pure ghost number one, since $a_k\equiv a_1$ must be a non-trivial element of $H^1(\gamma)$ or else can be eliminated through a redefinition. In the absence of $1$-forms, $H^1(\gamma)$ vanishes (lemma {\bf \ref{gammacohomo}}), so we can assume $k>1$. The case $k=1$ will be discussed in section {\bf \ref{1forms}} where we allow for the presence of $1$-forms. If $k>1$, one can expand the elements $a_k$ and $b_{k-1}$ according to lemma {\bf \ref{gammacohomo}}, \begin{equation}\label{b0} a_k = \sum P^I_k \omega^I, \; \; b_{k-1} = \sum Q^I_{k-1} \omega^I \end{equation} ($\gamma$-trivial terms can be eliminated).
The next to last equation in the chain (\ref{chain}) then implies \begin{equation}\label{charcoo} \delta P^I_k + d Q^I_{k-1} = 0, \end{equation} which indicates that $P^I_k$ is a cocycle of the cohomology $H(\delta \vert d)$. This cohomology, which is related to the so-called {\em invariant} characteristic cohomology, was completely worked out in \cite{HKS1}. It was shown that all its representatives can be written as the $[\ ]^{n,-k}$ component of an exterior polynomial in $H^a$ and $\tilde H^a$, \begin{equation} P_k^I=[P^I(H^a, \tilde H^a)]^{n,-k}, \quad\quad (k>1).\label{cfgtr} \end{equation} It is because of this property that antifield-dependent solutions of the Wess-Zumino consistency condition, which belong a priori to the algebra generated by all the variables and their individual, successive derivatives, turn out to be expressible in terms of the forms $H^a$, $\tilde H^a$ and $B^a$ only. Relation (\ref{cfgtr}) implies that the term $a_k$ of highest antighost number in the expansion of $a$ is, up to trivial terms, of the form, \begin{equation} a_k = [P^I(H^a, \tilde H^a)]^{n,-k} \omega^I, \end{equation} where the pure ghost number of the $\omega^I$ must be equal to $k$ in order to obtain a BRST cocycle in ghost number $0$. The question is now: can we construct from the known highest-order component $a_k$ the components $a_j$ of lower antighost numbers in order to obtain a solution of the Wess-Zumino consistency condition? As we have seen in Section {\bf \ref{ggen}}, this is always possible when the $\omega^I$ are linear in the ghosts of ghosts and the resulting BRST cocycle is then given by (\ref{dddd}). We are now going to show that when the $\omega^I$ in $a_k$ are at least quadratic in the ghosts of ghosts then one encounters an obstruction in the construction of the corresponding solution of the Wess-Zumino consistency condition.
To proceed we exhibit explicitly in $a_k$ the $\tilde B^a$ which correspond to the forms of lowest degree occurring in $a_k$ and denote them by $\tilde B^{a_i}_1$. The form degree in question is called $p$. The other $\tilde B^a$ are denoted $\tilde B^{b_j}_2$. Thus we write $a_k$ as, \begin{equation} a_k=[P_{a_1 \ldots a_r b_1 \ldots b_s}]^{n,-k}[{\tilde B}^{a_1}_1 \ldots {\tilde B}^{a_r}_1 {\tilde B}^{b_1}_2 \ldots {\tilde B}^{b_s}_2]^{0,k}. \end{equation} Of course, $k>p$ ($a_k$ is at least quadratic in the $\tilde B$). In fact, $k>p+1$ since there is no $1$-form in the problem. A direct calculation then shows that the equations $\gamma a_j +\delta a_{j+1}+db_j=0$ determining $a_{k-1},a_{k-2},\ldots$ have a solution up to $a_{k-p}$. These solutions are, \begin{eqnarray} a_{k-j}&=&[P_{a_1 \ldots a_r b_1 \ldots b_s}]^{n-j,-k+j}[{\tilde B}^{a_1}_1 \ldots {\tilde B}^{a_r}_1 {\tilde B}^{b_1}_2 \ldots {\tilde B}^{b_s}_2]^{j,k-j},\label{recu1} \\ && \hspace{7cm} \hbox{for\ $0\leq j \leq p$} \nonumber \label{recurP}. \end{eqnarray} Unless $a_k$ is trivial (i.e., can be removed by the addition of exact terms to $a$), there is however an obstruction in the construction of $a_{k-p-1}$. To discuss this obstruction, one needs to know the ambiguity in the $a_{k-j}$ ($0\leq j\leq p$). One easily verifies that it is given by $a_{k-j}\rightarrow a_{k-j} + m_0 + m_1 +\ldots +m_{j-1}$ where $m_0$ satisfies $\gamma m_0=0$, $m_1$ satisfies $\gamma m_1 +\delta n_1 + db_1=0,\ \gamma n_1=0$, $m_2$ satisfies $\gamma m_2+\delta n_2+db_2=0,\ \gamma n_2 +\delta l_2+dc_2=0,\ \gamma l_2=0$, etc. However, none of these ambiguities except $m_0$ in $a_{k-p}$ can play a role in the construction of a non-trivial solution. To see this, we note that $\delta$, $\gamma$ and $d$ conserve the polynomial degree of the variables of any given sector\footnote{By sector we mean the variables corresponding to a given $p$-form and its associated antifields and ghosts.}.
We can therefore work at fixed polynomial degree in the variables of all the different $p$-forms. Since $n_1$, $l_2$, etc. are $\gamma$-closed terms which can be lifted at least once, they have the generic form $R[H,\tilde H]{\cal Q}$ where ${\cal Q}$ has to contain a ghost of ghosts of degree $p_A<p$. Because we work at fixed polynomial degree, the presence of such terms implies that $P_{a_1\ldots a_r b_1\ldots b_s}$ has to depend on $H^A$ (a dependence on $\tilde H^A$ is not possible since by assumption $k>p$). However, $a_k$ is then of the form described in Lemma {\bf \ref{triviality}} and can be eliminated from $a$ by the addition of trivial terms and the redefinition of the terms of antighost numbers $<k$. Therefore we may now assume that $a_k$ does not contain $H^A$ and that the only ambiguity in the definitions of the $a_{k-j}$ is $m_0$ in $a_{k-p}$. Since $k>p$, we have to substitute $a_{k-p}$ in the equation $\gamma a_{k-p-1}+\delta a_{k-p}+ db_{k-p-1}=0$. We then get, \begin{eqnarray} \gamma a_{k-p-1}+\delta [P_{a_1 \ldots a_r b_1 \ldots b_s}]^{n-p,-k+p}[{\tilde B}^{a_1}_1 \ldots {\tilde B}^{a_r}_1 {\tilde B}^{b_1}_2 \ldots {\tilde B}^{b_s}_2]^{p,k-p}\\ +\delta m_0+db_{k-p-1} =0, \end{eqnarray} which can be written as, \begin{eqnarray} \gamma a^{'}_{k-p-1} + db^{'}_{k-p-1} +\delta m_0 \hspace{2cm} \nonumber \\ + (-)^{\epsilon_P}r [P_{a_1 \ldots a_r b_1 \ldots b_s}]^{n-p-1,-k+p+1}H^{a_1}_1{\cal Q}^{a_2 \ldots a_r b_1 \ldots b_s} _{0,k-p} =0.\label{eqdelobs} \end{eqnarray} By acting with $\gamma$ on the above equation we obtain $d\gamma b^{'}_{k-p-1}=0 \Rightarrow \gamma b^{'}_{k-p-1}+db^{''}_{k-p-1}=0$, which means that $b^{'}_{k-p-1}$ is a $\gamma$-cocycle modulo $d$. Because we have excluded $1$-forms from the discussion, $k-p-1>0$ so that we may assume that $b^{'}_{k-p-1}$ is strictly annihilated by $\gamma$. Accordingly, $db^{'}_{k-p-1}=[d\beta_{a_2\ldots a_r b_1\ldots b_s}(\chi)]{\cal Q}^{a_2 \ldots a_r b_1 \ldots b_s}_{0,k-p} +\gamma l^{n}_{0,k-p-1}$.
Equation (\ref{eqdelobs}) then reads, \begin{eqnarray} &\hspace{-4cm}(-)^{\epsilon_P} r [P_{a_1 \ldots a_r b_1 \ldots b_s}]^{n-p-1,-k+p+1}H^{a_1}_1 \nonumber \\ &\hspace{4cm} +\delta \alpha_{a_2\ldots a_r b_1\ldots b_s}(\chi) + d \beta_{a_2\ldots a_r b_1\ldots b_s}(\chi)=0,\label{xindeppourP} \end{eqnarray} where we have set $m_0=\alpha_{a_2\ldots a_r b_1 \ldots b_s}(\chi) {\cal Q}^{a_2 \ldots a_r b_1 \ldots b_s} _{0,k-p}$. Eq. (\ref{xindeppourP}) implies, \begin{eqnarray} [P_{a_1 \ldots a_r b_1 \ldots b_s}]^{n-p-1,-k+p+1}H^{a_1}_1=0,\label{leqasa} \end{eqnarray} since $\delta$ and $d$ both increase the number of derivatives of the $\chi$. Let us first note that $P_{a_1 \ldots a_r b_1 \ldots b_s}$ cannot depend on $\tilde H^{c}_1$ because in that case we would have $k-p-1\leq 0$, which contradicts our assumption that there is no $1$-form (indeed, the component of form-degree $n$ of a polynomial in $H^a$ and $\tilde H^a$ which depends on $\tilde H^c_1$ has maximum antighost number $p+1$). Therefore, $P_{a_1 \ldots a_r b_1 \ldots b_s}$ will satisfy (\ref{leqasa}) only if it is of the form, $P_{a_1 \ldots a_r b_1 \ldots b_s}=R_{c a_1\ldots a_r b_1\ldots b_s}H_1^{c}$ with $R_{c a_1\ldots a_r b_1\ldots b_s}$ symmetric in $c \leftrightarrow a_1$ (resp. antisymmetric) if $H_1$ is anticommuting (resp. commuting). However, using Lemma {\bf \ref{triviality}} we conclude once more that in that case $a_k$ can be absorbed by the addition of trivial terms and a redefinition of the components of lower antighost number of $a$. This ends our proof of the statement that for a system of $p$-forms with $p \geq 2$ all the antifield-dependent solutions of the Wess-Zumino consistency condition in ghost number $0$ are of the form (\ref{dddd}). \section{Presence of $1$-forms} \label{1forms} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{lemma}{0} If $1$-forms are present in the system of $p$-forms considered, the solutions in Theorem {\bf \ref{central}} are still valid.
However, new solutions of the Wess-Zumino consistency condition appear, so the list is no longer exhaustive. The first set of new solutions, related to the Noether conserved currents of the theory, arises because $H^1(\gamma)$ no longer vanishes. Although the term $b_{k-1}\equiv b_0$ which appears in (\ref{b0}) may lead to a non-trivial descent, one can show that (\ref{charcoo}) still holds \cite{BBH2,theseBK}, so that $P^I\equiv P^a$ has to be an element of $H^n_1(\delta\vert d)$. This cohomology is isomorphic to the set of non-trivial global symmetries $a^\Delta$ of the theory. The corresponding solutions of the Wess-Zumino consistency condition can then be written as, \begin{equation} \label{WZcurrents} a=k^a_\Delta (j^\Delta B_1^a + a^\Delta C^a_1), \end{equation} where the $j^\Delta$ are the Noether currents corresponding to the $a^\Delta$ and satisfy $\delta a^\Delta + dj^\Delta=0$. The dimension of this set of solutions is infinite since one can construct infinitely many conserved currents $j^\Delta$ \cite{HKS1}. This feature is characteristic of free Lagrangians. Although these solutions define consistent interactions to first order in the deformation parameter, it is expected that most of them are obstructed at second order. Furthermore, they are severely constrained by Lorentz invariance. The second set of new solutions of the Wess-Zumino consistency condition arises because the condition $k-p-1>0$ below (\ref{eqdelobs}) may no longer hold. Indeed, if $p=1$ and $k=2$ then we have $k-p-1=0$. As above, the term $b^{'}_{k-p-1}\equiv b^{'}_0$ appearing in (\ref{eqdelobs}) may now lead to a non-trivial descent in $H(\gamma\vert d)$. According to the analysis of \cite{HK2}, equation (\ref{leqasa}) is then replaced by, \begin{eqnarray} (-)^{\epsilon_P} r [P_{a_1 \ldots a_r b_1 \ldots b_s}(H^a,\tilde H^a)]^{n-2}_{0}H^{a_1}_1 + V_{a_2\ldots a_r b_1\ldots b_s}(H^a)=0.
\label{leqasa2} \end{eqnarray} The only solution of the above equation for $P^I$ is $P^I\equiv k_{abc}{\tilde H}^a_1$ with $k_{abc}$ completely antisymmetric \cite{BBH2,theseBK}. The corresponding BRST cocycles are given by, \begin{equation} a=k_{abc}[\tilde H^a_1 \tilde B^b_1 \tilde B^c_1]^{n,0}. \end{equation} They give rise to the famous Yang-Mills vertex since $a_0=k_{abc}\overline H^a_1 B^b_1 B^c_1$. In particular, the above discussion confirms that it is not possible to construct a Lagrangian with coloured $p$-forms ($p>1$) since vertices of the form $a_0 \sim {\overline H} B A$ (where $A$ is a $1$-form potential) do not exist. This fact is well appreciated in the literature. \section{Comments and conclusions} In this paper we have provided the complete proof of the Theorem given in \cite{HK1} on the consistent deformations of non-chiral free $p$-forms. The same techniques can be used to study solutions of the Wess-Zumino consistency condition at other ghost numbers (e.g., candidate anomalies) \cite{theseBK}. For instance, one can show that if all the exterior gauge fields have form degree $\geq 3$, Theorem {\bf\ref{central}} is also valid for candidate anomalies (the gauge potential being replaced by the corresponding ghosts of pure ghost number $1$). The same methods have also been extended recently to cover chiral $p$-forms \cite{BeHeSe1}. \section{Acknowledgements} This work is supported in part by the ``Actions de Recherche Concert{\'e}es" of the ``Direction de la Recherche Scientifique - Communaut{\'e} Fran{\c c}aise de Belgique", by IISN - Belgium (convention 4.4505.86) and by Proyectos FONDECYT 1970151 and 7960001 (Chile). Bernard Knaepen is supported by a post-doc grant from the ``Wiener-Anspach" foundation.
\section{Introduction} Several families of systems in nonequilibrium steady states, including plasmas~\cite{Lima2000, Ourabah2015, Livadiotis2017}, cannot be described by the usual canonical ensembles of statistical mechanics, but instead follow the so-called $q$-canonical distributions $P(\bm x|\beta, q)$, which for a system with Hamiltonian $H(\bm x)$ are given by \begin{equation} P(\bm{x}|\beta, q)=\rho(H(\bm x))=\frac{1}{\zeta}\left[1-(1-q)\beta H(\bm{x})\right]_{+}^{\frac{1}{1-q}}, \label{2.2} \end{equation} where $q$ is regarded as an additional, free parameter. Tsallis statistics~\cite{Tsallis1988, Tsallis2009}, proposed originally in 1988, is widely regarded as an explanation for these $q$-canonical systems; there are, however, other alternative frameworks such as superstatistics~\cite{Beck2003,Beck2004,Sattin2006,Hanel2011}. Since their introduction, there has been interest in the properties of these $q$-canonical systems, particularly in the interpretation and possible values of the nonextensive index $q$~\cite{Plastino2013,Plastino2017,Plastino2018}. Notably, in the superstatistical framework, a connection has been made between the value of $q$ and the uncertainty of the superstatistical temperature~\cite{Beck2004}. \nocite{Briggs2007} \nocite{Sivia2006} In this work we introduce some recent fluctuation identities of statistical mechanics, namely the conjugate variables theorem (CVT)~\cite{Davis2012,Davis2016}, into this problem. Fluctuation identities can be applied successfully to standard statistical mechanics, recovering a large number of properties related to the expectation values and the fluctuations of the Hamiltonian and other observables. We show that the use of these theorems can vastly simplify the computations and reveal useful information about systems in $q$-canonical ensembles.
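As a quick numerical illustration (ours, not part of the original argument), the $q$-exponential weight in Eq.~\ref{2.2} reduces to the Boltzmann factor $e^{-\beta H}$ as $q\to 1$, and for $q<1$ it vanishes outside a bounded support:

```python
import math

def q_exp(x, q):
    """q-exponential [1 + (1 - q) x]_+^{1/(1-q)}; reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

# illustrative values only
beta, E = 0.7, 2.0
# the q-canonical weight [1 - (1-q) beta E]^{1/(1-q)} approaches exp(-beta E)
for q in (0.9, 0.99, 0.999):
    print(q, q_exp(-beta * E, q))
print("q -> 1 limit:", math.exp(-beta * E))
# for q < 1 the weight has compact support: it vanishes once (1-q) beta E > 1
print(q_exp(-3.0, 0.5))
```

Here $\beta$, $E$ and the chosen $q$ values are arbitrary illustrative numbers.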
\section{Results} For a system with microstates $\bm x \in V$ we consider the definition for the expectation value of a function $f(\bm{x})$ in the state of knowledge $I$ as \begin{equation} \langle f \rangle_{I}=\int_V d\bm{x} f(\bm{x})P(\bm{x}|I). \label{2.1} \end{equation} Now we will consider the case $f(\bm x)=H(\bm x)$, the Hamiltonian of the system, which we will assume is bounded from below, that is, $H(\bm x) \geq E_0$. For this Hamiltonian we will denote the density of states by $\Omega(E)$, given by \begin{equation} \Omega (E):= \int_V d\bm{x}\delta(H(\bm{x})-E). \label{2.4} \end{equation} \noindent Motivated by the large class of systems with constant specific heat, let us assume the form \begin{equation} \Omega (E)= \Omega_{0} E^{\alpha} \label{2.5} \end{equation} where $\alpha$ is a system-dependent exponent, and we have set (without loss of generality) $E_0=0$. This form not only includes the ideal gas and systems of classical harmonic oscillators, but has sometimes been used to describe more complex systems~\cite{Sanchez2018}. \noindent We can now determine the probability density for the energy as \begin{align} P(E|\beta, q) & = \Big<\delta(H-E)\Big>_{\beta, q} \nonumber \\ & = \int_V d\bm{x}\rho(H(\bm{x}))\delta(H(\bm x)-E) \nonumber \\ & = \rho(E)\Omega(E), \label{eq_pdf_E} \end{align} and in terms of this, define the expectation value for an arbitrary function of energy $g(E)$ as \begin{equation} \left\langle g \right\rangle =\int_{0}^\infty dE \Big(\Omega_{0} E^{\alpha}\Big)\left[\frac{1}{\zeta}(1-(1-q)\beta E)_+^{\frac{1}{1-q}}\right] g(E). \label{2.6} \end{equation} Given that the probability density for the energy, $P(E|\beta, q)$ in Eq. \ref{eq_pdf_E}, is non-negative and $\Omega(E)$ is always positive, it is clear that $\rho(E) \geq 0$ and therefore \begin{equation} 1-(1-q)\beta E\geq0, \end{equation} hence, for $q<1$ the energy must also be bounded from above, and we have \begin{equation} 0 \leq E\leq\frac{1}{(1-q)\beta}.
\label{2.7} \end{equation} \noindent This allows us to define \begin{equation} E_{1} := \frac{1}{\beta(1-q)} \label{eq_E1} \end{equation} as the maximum allowed value of energy under given $\beta$ and $q$. Now the expected energy is \begin{equation} \left\langle E \right\rangle_{\beta, q, \alpha} = \frac{\Omega_0}{\eta}\int_{0}^{E_1} dE \;E^{\alpha} (1-(1-q)\beta E)^{\frac{1}{1-q}}E, \label{2.8} \end{equation} with the normalization constant $\eta$ given by \begin{equation} \eta = \Omega_0 \int_{0}^{E_1}dE E^{\alpha}(1-(1-q)\beta E)^{\frac{1}{1-q}}. \end{equation} \noindent Taking into consideration the upper limit $E_1$ defined in Eq. \ref{eq_E1}, we finally arrive at \begin{equation} \left\langle E \right\rangle_{\beta, q, \alpha} =\frac{\alpha+1}{\beta ((\alpha+1)(1-q)+2-q)}. \label{2.9} \end{equation} Another quantity of interest is the microcanonical inverse temperature, defined by \begin{equation} \beta_\Omega(E) := \frac{d}{dE}\ln \Omega(E) = \frac{\alpha}{E}, \label{eq_beta_omega} \end{equation} whose expected value we can compute in the same manner as before, obtaining \begin{equation} \left\langle \beta_{\Omega} \right\rangle_{\beta, q, \alpha} = \beta((1-q)(\alpha+1)+1). \label{2.11} \end{equation} We could, in principle, compute the variances $\langle(\delta E)^{2}\rangle$ and $\langle(\delta \beta_{\Omega})^{2}\rangle$ using the same explicit approach. However, we will take a look at a more convenient method of calculating the expectation value and variance of a function. 
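Both closed forms can be checked by carrying out the integrals explicitly. The following sympy sketch (ours; the illustrative values $\alpha=2$, $q=1/2$, $\beta=1$ make the weight polynomial, so the integrals are exact) reproduces Eqs.~\ref{2.9} and \ref{2.11}:

```python
import sympy as sp

E = sp.Symbol('E', positive=True)
alpha, beta = 2, 1
q = sp.Rational(1, 2)
E1 = 1 / ((1 - q) * beta)                               # upper limit, Eq. (eq_E1)
w = E**alpha * (1 - (1 - q) * beta * E)**(1 / (1 - q))  # Omega(E) rho(E); Omega_0 and zeta cancel

norm = sp.integrate(w, (E, 0, E1))
mean_E = sp.integrate(E * w, (E, 0, E1)) / norm
mean_bOmega = sp.integrate((alpha / E) * w, (E, 0, E1)) / norm

# closed forms quoted in the text, Eqs. (2.9) and (2.11)
pred_E = (alpha + 1) / (beta * ((alpha + 1) * (1 - q) + 2 - q))
pred_bOmega = beta * ((1 - q) * (alpha + 1) + 1)
assert sp.simplify(mean_E - pred_E) == 0
assert sp.simplify(mean_bOmega - pred_bOmega) == 0
print(mean_E, mean_bOmega)
```

The same direct-integration check works for any $\alpha>0$ and $q<1$; the particular values above merely keep the computation exact and fast.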
\subsection{The conjugate variables theorem} \vspace{5mm} Now, instead of direct integration we will make use of the conjugate variables theorem (CVT)~\cite{Davis2012}, which for a single continuous random variable $x \in [a, b]$ takes the form \begin{equation} \Big<\frac{\partial \omega}{\partial x}\Big>_I + \Big<\omega\frac{\partial}{\partial x}\ln P(x|I)\Big>_I = 0, \label{eq_cvt} \end{equation} where $\omega(x)$ is an arbitrary, differentiable function, and we have assumed that $P(x|I)$ vanishes at the boundaries $x=a$ and $x=b$. The CVT then provides a family of expectation identities where $\omega$ can, in principle, be chosen suitably. In our case, the energy $E$ is such that $E \in [0, E_1]$ with $$P(E=0|\beta, q) = P(E=E_1|\beta, q) = 0,$$ so Eq. \ref{eq_cvt} becomes \begin{align} \left\langle \frac{\partial \omega}{\partial E} \right\rangle_{\beta, q} & =-\left\langle\omega \frac{\partial}{\partial E}\ln((1-(1-q)\beta E)^{\frac{1}{1-q}}E^{\alpha})\right\rangle_{\beta,q} \nonumber \\ & =\left\langle\omega \left[\frac{\beta}{1-(1-q)\beta E}-\frac{\alpha}{E}\right]\right\rangle_{\beta,q}. \label{2.13} \end{align} The second term in the expectation value on the right-hand side corresponds to the microcanonical inverse temperature $\beta_\Omega$ (Eq. \ref{eq_beta_omega}), while the first term is the so-called fundamental inverse temperature~\cite{Davis2019} \begin{equation} \beta_{F}(E) := -\frac{d}{dE}\ln\rho(E). \label{2.14} \end{equation} Let us use the choice $\omega(E)=(1-(1-q)\beta E)g(E)$, where $g(E)=E^{m}$ with $m$ an integer. Under this choice, Eq. \ref{2.13} yields the recurrence relation \begin{equation} \left\langle E^{m} \right\rangle_{\beta,q} =\frac{m+\alpha}{\beta (1+(\alpha+1+m)(1-q))}\left\langle E^{m-1} \right\rangle_{\beta,q}, \label{2.15} \end{equation} which will let us easily calculate the expectation values of both the energy and the microcanonical inverse temperature. Now as a starting point of the recurrence, let us choose $m=1$.
We have then \begin{equation} \left\langle E \right\rangle_{\beta,q}=\frac{1+\alpha}{\beta(1+(\alpha+2)(1-q))}, \label{2.16} \end{equation} which is precisely the same result as Eq. \ref{2.9}, although somewhat rearranged. In this way, we see that for the energy, the use of the CVT is a simple and consistent alternative method to obtain expectation values. The same holds true for the expectation of $\beta_\Omega$ (Eq. \ref{eq_beta_omega}), for which we choose $m=0$ and obtain \begin{equation} \left\langle \beta_{\Omega} \right\rangle_{\beta,q}=\beta(1+(\alpha+1)(1-q)). \label{2.17} \end{equation} \noindent In order to calculate the variance of the energy, we take $m=2$ to obtain \begin{equation} \left\langle E^{2} \right\rangle_{\beta,q}=\frac{\alpha+2}{\beta (1+(\alpha+3)(1-q))}\left\langle E \right\rangle_{\beta,q}, \end{equation} which can be rearranged as \begin{equation} \left\langle E^{2} \right\rangle_{\beta,q}= \frac{(\alpha+2)((1-q)(\alpha+2)+1)}{(\alpha+1)((1-q)(\alpha+3)+1)}\left\langle E \right\rangle^{2}_{\beta,q}. \label{2.18} \end{equation} \noindent With this, the variance for the energy is \begin{equation} \left\langle (\delta E)^{2} \right\rangle_{\beta,q}=\left(\frac{(\alpha+2)((1-q)(\alpha+2)+1)}{(\alpha+1)((1-q)(\alpha+3)+1)}-1\right)\left\langle E \right\rangle^{2}_{\beta,q}. \label{2.19} \end{equation} \noindent In the same manner, for the microcanonical inverse temperature $\beta_\Omega$ we replace $m=-1$ in Eq. \ref{2.15} and obtain \begin{equation} \left\langle \beta_{\Omega}^{2} \right\rangle_{\beta,q}=\frac{\alpha}{(\alpha-1)}\beta^{2}((1-q)\alpha+1)((1-q)(\alpha+1)+1), \label{2.20} \end{equation} hence the variance will be given by \begin{equation} \left\langle (\delta\beta _{\Omega})^{2}\right\rangle_{\beta,q}=\left(\frac{\alpha}{(\alpha-1)}\frac{((1-q)\alpha+1)}{((1-q)(\alpha+1)+1)}-1\right)\left\langle \beta_{\Omega} \right\rangle^{2}_{\beta,q}. 
\label{2.21} \end{equation} \noindent Finally, let us calculate the expectation value and the variance of the fundamental inverse temperature $\beta_F$. For this, let us take $\omega(E)=1$ so that, together with the definition of both inverse temperature estimators, Eq. \ref{2.13} leads to $$\left\langle \beta_{\Omega} \right\rangle_{\beta,q} = \left\langle \beta_{F} \right\rangle_{\beta,q}.$$ \noindent For $\langle \beta_F^2 \rangle_{\beta,q}$ let us use the choice $$\omega(E)=\beta_{F}(E)=\frac{\beta}{1-(1-q)\beta E},$$ from which it follows that \begin{equation} (1-q)\left\langle \beta_{F}^{2} \right\rangle_{\beta,q} = \left\langle \beta_{F}^{2} \right\rangle_{\beta,q} - \left\langle \beta_{\Omega}\beta_{F} \right\rangle_{\beta,q}. \label{2.22} \end{equation} Here we see that, in order to determine the variance of $\beta_F$ from Eq. \ref{2.22}, we need the expectation of $\beta_{\Omega}\cdot \beta_{F}$. For that, let us return to the CVT and choose $\omega(E)=\beta_{\Omega}(E)$. Then, we have \begin{equation} \frac{-1}{\alpha} \left\langle \beta_{\Omega}^{2} \right\rangle_{\beta,q} = \left\langle \beta_{\Omega}\beta_{F} \right\rangle_{\beta,q} - \left\langle \beta_{\Omega}^{2} \right\rangle_{\beta,q}, \end{equation} from which it follows that \begin{equation} \left\langle \beta_{\Omega}\beta_{F} \right\rangle_{\beta,q} = \frac{(1-q)\alpha+1}{(1-q)(\alpha+1)+1}\left\langle \beta_{F} \right\rangle^{2}_{\beta,q}, \end{equation} and finally \begin{equation} \left\langle \beta_{F}^{2} \right\rangle_{\beta,q}=\frac{1}{q}\frac{(1-q)\alpha+1}{(1-q)(\alpha+1)+1}\left\langle \beta_{F} \right\rangle^{2}_{\beta,q}. \label{eq_betaF_2} \end{equation} \noindent The variance of $\beta_F$ can then be obtained by rewriting Eq. \ref{eq_betaF_2} as \begin{equation} \left\langle (\delta\beta_{F})^{2} \right\rangle_{\beta,q}=\left(\frac{1}{q}\frac{(1-q)\alpha+1}{((1-q)(\alpha+1)+1)}-1\right)\left\langle \beta_{F} \right\rangle^{2}_{\beta,q}.
\label{2.23} \end{equation} \section{Discussion} \vspace{5mm} In order to gain some intuition on the obtained results, let us consider the case of $\bm{x}$ consisting of $n$ quadratic degrees of freedom, for which the density of states is~\cite{Greiner2012} \begin{equation} \Omega(E; V, n) = \Omega_0(V, n)E^{\frac{n}{2}-1}, \end{equation} hence $\alpha = \frac{n}{2}-1$. For instance, this is the case of an ideal gas of $N$ particles in $D$ dimensions, where $n=N\cdot D$. First, it is straightforward to see that the canonical ensemble expressions are recovered when $q \rightarrow 1$. That is, Eq. \ref{2.16} becomes \begin{equation} \langle E \rangle_{\beta, n} = \frac{n k_{B}T}{2}, \end{equation} which is the canonical equipartition theorem~\cite{Greiner2012}. Together with this, the expectation of the microcanonical inverse temperature given by Eq. \ref{2.17} becomes simply \begin{equation} \langle \beta_{\Omega} \rangle_{\beta, n} = \beta, \end{equation} as it should in the case of a canonical ensemble. This also gives the expectation of the fundamental inverse temperature as $\beta$. The variance of the energy becomes \begin{equation} \left\langle(\delta E)^2\right\rangle_{\beta, n} = \frac{n (k_{B} T)^{2}}{2}, \end{equation} corresponding to the well-known formula connecting energy fluctuations with the heat capacity at constant volume in the canonical ensemble. From all this it is clear that all energy-dependent thermodynamic properties are correctly recovered. An interesting fact arises when we study the variance of the inverse temperature estimators, since in the canonical ensemble the concept of fluctuations of temperature is unclear, and actually has not been devoid of controversy~\cite{Kittel1988, Mandelbrot1989}. As $\beta$ is strictly fixed, it cannot fluctuate; however, the variances of both estimators $\beta_{\Omega}(E)$ and $\beta_{F}(E)$ exist in this model (because $E$ itself is allowed to fluctuate) and have a well-defined expression for $q=1$.
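These $q \rightarrow 1$ statements can be confirmed symbolically; since all the closed forms above are regular at $q=1$, direct substitution suffices. The following sympy sketch checks the equipartition result, the mean of $\beta_\Omega$, the canonical energy variance, and the fact that the variance of $\beta_\Omega$ remains nonzero at $q=1$ (with $k_B T$ represented through $\beta$ alone).

```python
# Symbolic check (sympy) that q -> 1 recovers the canonical results quoted
# above, with alpha = n/2 - 1 and kB*T = 1/beta.  The expressions are regular
# at q = 1, so direct substitution suffices.
import sympy as sp

q, beta, n, alpha = sp.symbols('q beta n alpha', positive=True)

E_mean = (1 + alpha) / (beta * (1 + (alpha + 2) * (1 - q)))            # Eq. (2.16)
ratio = ((alpha + 2) * ((1 - q) * (alpha + 2) + 1)) / \
        ((alpha + 1) * ((1 - q) * (alpha + 3) + 1))
E_var = (ratio - 1) * E_mean**2                                        # Eq. (2.19)
bOm_mean = beta * (1 + (alpha + 1) * (1 - q))                          # Eq. (2.17)
bOm_var = (alpha / (alpha - 1) * ((1 - q) * alpha + 1)
           / ((1 - q) * (alpha + 1) + 1) - 1) * bOm_mean**2            # Eq. (2.21)

subs = {q: 1, alpha: n / 2 - 1}
assert sp.simplify(E_mean.subs(subs) - n / (2 * beta)) == 0            # equipartition
assert sp.simplify(bOm_mean.subs(subs) - beta) == 0                    # <beta_Omega> = beta
assert sp.simplify(E_var.subs(subs) - n / (2 * beta**2)) == 0          # <(dE)^2> = n (kB T)^2 / 2
assert sp.simplify(bOm_var.subs({q: 1}) - beta**2 / (alpha - 1)) == 0  # nonzero variance at q = 1
print("canonical (q -> 1) limits verified")
```

The last assertion makes explicit that the variance of $\beta_\Omega$ does not vanish even when $\beta$ itself is strictly fixed.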
This may seem paradoxical at first; however, there are two ways to approach this apparent contradiction. The first is the traditional interpretation in superstatistics of the variance as a result of spatio-temporal variations on the temperature, motivated by the fact that it is possible to recover $q$-canonical ensembles from particular assumptions about the microscopic dynamics of a system~\cite{Sattin2004}. The second approach to this point regards the variance as a product of the uncertainty as to the actual value of the (unique) temperature of the system; this is compelling when deriving the $q$-canonical ensembles through marginalization and Bayesian probability rules~\cite{Sattin2006,Davis2018}. In this interpretation, the nonextensive index $q$ is directly related to the lack of information we have about the system. Examining the variance of the microcanonical inverse temperature $\beta_\Omega$, given in Eq. \ref{2.21}, we can notice a pole at $\alpha = 1$, which corresponds to $n=4$. This behaviour does not appear in the variance of the fundamental inverse temperature (Eq. \ref{2.23}). Moreover, when taking $q \rightarrow 1$, Eq. \ref{2.21} becomes \begin{equation} \left\langle (\delta \beta_{\Omega})^{2} \right\rangle_{\beta}=\frac{\beta^{2}}{\alpha-1}, \label{2.24} \end{equation} which again has a pole at $\alpha=1$ and takes negative values below it, which of course cannot occur for a variance. This is an indicator that the microcanonical inverse temperature fails to be a reliable estimator for temperature. On the other hand, when taking $q \rightarrow 1$ in Eq. \ref{2.23}, the variance of $\beta_F$ becomes zero as expected, since in the canonical ensemble the fundamental inverse temperature is precisely the constant $\beta$. When taking the thermodynamic limit, i.e. when $\alpha \rightarrow \infty$, the variances of both the energy and the microcanonical inverse temperature go to zero.
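A quick numerical scan of the closed forms makes this vanishing explicit. The sketch below evaluates the energy variance of Eq. \ref{2.19} and the relative variance of $\beta_\Omega$ from Eq. \ref{2.21} for increasing $\alpha$ at an illustrative fixed $q<1$; the relative variance is tracked for $\beta_\Omega$ because, by Eq. \ref{2.17}, its mean itself grows with $\alpha$ when $q<1$.

```python
# Numerical illustration of the alpha -> infinity behaviour at fixed q < 1:
# the energy variance, Eq. (2.19), tends to zero, and so does the *relative*
# variance of beta_Omega from Eq. (2.21) (its mean grows with alpha for q < 1,
# so the relative quantity is the meaningful one).  q is an illustrative value.
beta, q = 1.0, 0.7
u = 1.0 - q

def E_var(alpha):
    """Energy variance, Eq. (2.19)."""
    mean = (1.0 + alpha) / (beta * (1.0 + (alpha + 2.0) * u))
    ratio = ((alpha + 2.0) * (u * (alpha + 2.0) + 1.0)) / \
            ((alpha + 1.0) * (u * (alpha + 3.0) + 1.0))
    return (ratio - 1.0) * mean**2

def bOm_relvar(alpha):
    """Relative variance of beta_Omega, from Eq. (2.21)."""
    return alpha / (alpha - 1.0) * (u * alpha + 1.0) / (u * (alpha + 1.0) + 1.0) - 1.0

for a in (10.0, 100.0, 1000.0):
    print(a, E_var(a), bOm_relvar(a))   # both columns shrink as alpha grows
```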
This is to be expected, as $\beta_\Omega$ becomes a constant when the fluctuations of $E$ vanish. However, the variance of the fundamental inverse temperature in this limit is \begin{equation} \left\langle (\delta\beta_{F})^2 \right\rangle_{\beta,q}=\left(\frac{1-q}{q}\right)\left\langle\beta_{F}\right\rangle^{2}_{\beta,q}, \label{2.25} \end{equation} which is not manifestly zero. This is counterintuitive, since not only is $\beta_F$ a function of the energy (and so should have zero variance, as is the case with $\beta_\Omega$), but also because in the thermodynamic limit the distribution of any parameter will collapse to its observed value. All of this seems to suggest that the only consistent case in the thermodynamic limit is $q=1$. In order to better understand the behavior of this quantity we will expand it, for large $\alpha$, as \begin{equation} \frac{\left\langle (\delta \beta_{F})^2 \right\rangle_{\beta,q}}{\beta^{2}}=\frac{\alpha^{2}(1-q)^{3}}{q}+\frac{2\alpha(1-q)^{2}}{q}. \label{2.26} \end{equation} The first thing to note is that Eq. \ref{2.26} has been written with the $\beta$ parameter on the left side so that it can be immediately seen as a dimensionless function of $q$ and $\alpha$. Now, in the simultaneous limit $\alpha \rightarrow \infty$ and $q \rightarrow 1$ there is a tradeoff between the growth rate of $\alpha$ and the vanishing of $(1-q)$, and the whole expression vanishes because of the higher powers of $1-q$ in both terms. Secondly, the fact that this happens in the variance of the fundamental inverse temperature $\beta_F$ suggests that this inverse temperature has information about the ensemble that the microcanonical inverse temperature $\beta_\Omega$ does not possess.
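The two limits can be contrasted numerically. In the sketch below, the relative variance of $\beta_F$ from Eq. \ref{2.23} approaches the finite value $(1-q)/q$ of Eq. \ref{2.25} as $\alpha$ grows at fixed $q<1$, but tends to zero along a coupled limit such as $q = 1 - 1/\alpha$; the specific coupling is an illustrative choice.

```python
# Numerical illustration of Eq. (2.25): at fixed q < 1 the relative variance
# of beta_F tends to (1-q)/q as alpha -> infinity, while along a coupled
# limit such as q = 1 - 1/alpha it tends to zero.  The specific coupling is
# an illustrative choice, not prescribed by the text.
def relvar_F(alpha, q):
    """Relative variance <(d beta_F)^2>/<beta_F>^2, from Eq. (2.23)."""
    u = 1.0 - q
    return (1.0 / q) * (u * alpha + 1.0) / (u * (alpha + 1.0) + 1.0) - 1.0

q0 = 0.9
fixed_q = [relvar_F(a, q0) for a in (10.0, 100.0, 1000.0)]
coupled = [relvar_F(a, 1.0 - 1.0 / a) for a in (10.0, 100.0, 1000.0)]

print(fixed_q, (1.0 - q0) / q0)   # approaches (1-q)/q, not zero
print(coupled)                    # shrinks toward zero
```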
Moreover, the variance of $\beta_F$ is always higher than the variance of $\beta_\Omega$; together with the idea that the value of $q$ is a measure of the uncertainty we have about any given thermodynamic quantity, this reinforces the previous points and again suggests that the fundamental inverse temperature contains a more accurate depiction of the ensemble. Another, more fundamental aspect of Eq. \ref{2.23} is that, being a variance, it has to be non-negative, and moreover the square of the expectation value of $\beta_{F}$ is positive. From this it follows that \begin{equation} \frac{(1-q)\alpha+1}{q((1-q)(\alpha+1)+1)} \geq 1. \label{2.28} \end{equation} For the range $0<q<1$, Eq. \ref{2.28} does not deliver any new information. However, for $q > 1$ some considerations must be taken into account. The first case is \begin{equation} (1-q)\alpha+1<0\qquad \text{and} \qquad q((1-q)(\alpha+1)+1)<0, \end{equation} which corresponds to \begin{equation} q>1+\frac{1}{\alpha+1}. \label{2.29} \end{equation} \noindent Under these conditions, Eq. \ref{2.28} gives \begin{equation} \alpha \leq -1 \qquad \text{and} \qquad (1-q)^{2} \geq 0, \label{2.30} \end{equation} which, together with the first condition (Eq. \ref{2.29}), is ruled out because we know that the minimum value of $\alpha$ is $-\frac{1}{2}$. Therefore, only the following case must hold, \begin{equation} (1-q)\alpha+1>0 \qquad \text{and} \qquad q((1-q)(\alpha+1)+1)>0 \end{equation} from which it follows that \begin{equation} q < 1+\frac{1}{\alpha+1}, \label{eq_lutsko} \end{equation} precisely the upper bound shown recently by Lutsko and Boon~\cite{Lutsko2011}, depending on the value of $\alpha$. With this inequality, Eq. \ref{2.28} becomes \begin{equation} \alpha \geq -1 \qquad \text{and} \qquad (1-q)^{2} \geq 0. \label{2.32} \end{equation} The characteristic point $q_{LB} = 1 + 1/(\alpha + 1)$ also appears in the plot for the variance of $\beta_{F}$, shown in Fig.
\ref{Bfvar}, where from $q_{LB}$ onwards the variance takes negative values (which is of course not admissible) for any positive $\alpha$. \begin{figure}[h!] \centering \includegraphics{Bf.png} \caption{Variance of the fundamental inverse temperature $\beta_{F}$ with $\beta=0.3$ as a function of $q$, for different values of $\alpha$.} \label{Bfvar} \end{figure} Having taken a look at the temperature estimators, let us now explore the behavior of the energy. Specifically, we know that \begin{equation} 0 \leq \frac{\left\langle E \right\rangle_{\beta,q}}{E_{max}}\leq1. \label{3.1} \end{equation} Now, as we know, $E_{max}=\frac{1}{(1-q)\beta}$ (this is the quantity $E_{1}$ of Eq. \ref{eq_E1}). Therefore, using this and Eq. \ref{2.16}, the inequality given in Eq. \ref{3.1} becomes \begin{equation} 0 \leq \frac{(1-q)(1+\alpha)}{(1-q)(2+\alpha)+1}\leq1. \label{3.2} \end{equation} In the range $0<q<1$, the bounded quantity in Eq. \ref{3.2} is always positive for all $\alpha$, so the lower bound of zero holds true. Moreover, imposing the upper limit in this case gives $q \leq 2$, which is also true. For $q > 1$, given that the bounded ratio in Eq. \ref{3.1} is always positive, we have that \begin{equation} (1-q)(2+\alpha)+1 < 0, \label{3.3} \end{equation} in other words, \begin{equation} q>1+\frac{1}{2+\alpha}. \label{3.4} \end{equation} With these considerations, Eq. \ref{3.2} imposes that \begin{equation} q \geq 2. \label{3.5} \end{equation} This condition is always stronger than the one in Eq. \ref{3.4}, and so all values of $q$ between $1$ and $2$ are excluded. However, the condition in Eq. \ref{eq_lutsko} tells us that $q$ cannot go over $q_{LB}$, and since we have to take them into account simultaneously, $q$ is then allowed to exist in the ranges $0<q\leq 1$ and $2 \leq q < 1+\frac{1}{\alpha+1}$, the latter only for $\alpha \leq 0$. Because $\alpha \geq -\frac{1}{2}$, the maximum value allowed for $q$ is $3$ in this case, and for sufficiently large values of $\alpha$, $q$ has no admissible values over $1$.
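The combined restrictions above amount to simple arithmetic, which the following sketch packages into a small helper (the function names are ours, introduced only for illustration): for $q>1$ one needs both $q\geq 2$ (Eq. \ref{3.5}) and $q<q_{LB}=1+\frac{1}{\alpha+1}$ (Eq. \ref{eq_lutsko}), so the window above $1$ is nonempty only for negative $\alpha$.

```python
# Arithmetic summary of the admissibility analysis above: for q > 1 one needs
# both q >= 2 (from the energy bound, Eq. (3.5)) and q < q_LB = 1 + 1/(alpha+1)
# (the Lutsko-Boon bound), so the window [2, q_LB) is nonempty only for alpha < 0.
def q_LB(alpha):
    """Lutsko-Boon upper bound on q."""
    return 1.0 + 1.0 / (alpha + 1.0)

def window_above_one(alpha):
    """Admissible q-interval above 1, i.e. [2, q_LB), or None if empty."""
    return (2.0, q_LB(alpha)) if q_LB(alpha) > 2.0 else None

print(window_above_one(-0.4))   # nonempty for -1/2 <= alpha < 0
print(window_above_one(0.5))    # empty once alpha is non-negative
print(q_LB(-0.5))               # the overall maximum: q_LB = 3 at alpha = -1/2
```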
\section{Concluding remarks} We have obtained several identities valid for a family of systems in the $q$-canonical ensemble, namely the systems described by densities of states of the form $\Omega(E) \propto E^{\alpha}$, which is shared by the ideal gas, harmonic oscillators and, in general, systems with constant heat capacities. Although the systems in which $q$-canonical ensembles are observed may be more complex than this particular form, our aim is not the depiction of particular systems but rather to explore the use of fluctuation identities such as the conjugate variables theorem as a tool capable of describing a variety of systems given the form of their density of states. In this sense, the usefulness of this tool to calculate properties of a system is clearly shown in the analysis done for the different restrictions that arise naturally on the admissible values of $q$. As a quick example, the inequality $q<1+\frac{1}{1+\alpha}$ was obtained by Lutsko and Boon through an elaborate analysis of the Hamiltonian and the distribution. Here, the same result was obtained in a straightforward manner by examining the variance of the fundamental inverse temperature $\beta_F$. Furthermore, a second inequality ($q \geq 2$ for $-\frac{1}{2} \leq \alpha \leq 0$) was also unveiled from a condition over the expected energy, which again shows the advantages of working with expectation values of the observables of the system. Overall, the use of the conjugate variables theorem alongside a model for the density of states proves to be a very efficient way to study systems described by $q$-canonical ensembles. \section{Acknowledgments} SD gratefully acknowledges funding from the CONICYT Anillo ACT-172101 grant.
\section{Introduction} Twistor correspondences, as pioneered by Roger Penrose \cite{pnlg}, provide a way of understanding certain differential geometries as fundamentally arising from moduli spaces of compact complex curves in a complex manifold. It has emerged only recently, however, that an analogous pattern of phenomena can also be expected to arise from moduli spaces of compact complex curves-with-boundary in a complex manifold, where the boundaries of the curves are constrained to lie in a maximal totally real submanifold. Our previous work in this direction \cite{lmzoll} focused on spaces of holomorphic disks in $\mathbb C\mathbb P_2$, with boundaries on a totally real embedding of $\mathbb R\mathbb P^2$. In the present article, we will see that a similarly rich geometric story arises from the moduli space of holomorphic disks in $\mathbb C\mathbb P_3$ with boundaries on a totally real $\mathbb R\mathbb P^3$. Penrose's original twistor correspondence, which he called the {\sl nonlinear graviton}, hinged on the idea that {self-dual} conformal metrics on $4$-manifolds tend to arise from suitable holomorphic families of $\mathbb C\mathbb P_1$'s in complex $3$-manifolds. Penrose's formulation of these ideas involved local analytic continuations of real-analytic geometries into the complex domain, thereby making the metric signature essentially irrelevant. Nonetheless, it was specifically the positive-definite realm of Riemannian geometry that witnessed the most intensive subsequent cultivation of these ideas, a development largely attributable to the elegant and definitive global Riemannian reformulation of the Penrose correspondence discovered by Atiyah, Hitchin, and Singer \cite{AHS}. 
By contrast, however, the present article will focus entirely on $4$-manifolds with {\em split-signature} metrics, meaning pseudo-Riemannian metrics of signature $(++--)$; these have elsewhere been called {\em neutral metrics} \cite{law}, and are characterized by the fact that they have components $$ \left[\begin{array}{cccc}+1 & & & \\ & +1 & & \\ & & -1 & \\ & & & -1\end{array}\right]$$ in a suitably chosen basis for any given tangent space. What we will develop here is a global twistor correspondence for self-dual split-signature $4$-manifolds in which {\em every null geodesic is a simple closed curve}. Such metrics will turn out to naturally arise as moduli spaces for holomorphic disks in $\mathbb C\mathbb P_3$ with boundary on a fixed totally real submanifold. As in the Riemannian case, a split-signature metric $g$ on an oriented $4$-manifold $M$ is said to be {\em self-dual} if its Weyl (or conformal) curvature tensor, considered as a bundle-valued $2$-form, is its own Hodge star; cf. \S \ref{selfdual} below. This is a conformally invariant condition, and should therefore primarily be thought of as a constraint on the conformal class $[g]=\{ {\zap f} g~|~{\zap f}\neq 0\}$ of the metric. Notice that any locally conformally flat split-signature metric on an oriented $4$-manifold is automatically self-dual. For us, the prototypical example is the indefinite product metric $$g_0=\pi_1^*h-\pi_2^*h$$ on $S^2\times S^2$, where $\pi_1 , \pi_2 : S^2\times S^2\to S^2$ are the two factor projections and $h$ is the standard homogeneous metric on $S^2$.
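As a concrete check on this example (not part of the argument below), the following numerical sketch verifies that simultaneously traversing a unit-speed great circle in each $S^2$ factor yields a curve that is null for $g_0$ and closes up after parameter length $2\pi$. Constant-speed great circles are geodesics of each round factor, so such curves are in fact geodesics of the product metric; nullity and closure are what the code verifies.

```python
# Numerical check, for the product metric g0 = pi_1*h - pi_2*h on S^2 x S^2,
# that a pair of simultaneously traversed unit-speed great circles is a null
# curve closing up after parameter length 2*pi.  (Constant-speed great circles
# are geodesics of each round factor, so these curves are geodesics of g0;
# what we verify here is nullity and closure.)
import numpy as np

def gamma(t):
    """One great circle in each S^2 factor, both traversed at unit speed."""
    c1 = np.array([np.cos(t), np.sin(t), 0.0])
    c2 = np.array([np.cos(t), 0.0, np.sin(t)])
    return c1, c2

def dgamma(t):
    """Tangent vector of gamma, computed analytically."""
    v1 = np.array([-np.sin(t), np.cos(t), 0.0])
    v2 = np.array([-np.sin(t), 0.0, np.cos(t)])
    return v1, v2

# g0(v, v) = |v1|^2 - |v2|^2 vanishes along the whole curve:
for t in np.linspace(0.0, 2.0 * np.pi, 100):
    v1, v2 = dgamma(t)
    assert abs(np.dot(v1, v1) - np.dot(v2, v2)) < 1e-12

# ... and the curve is a closed circle of parameter length 2*pi:
p0, p1 = gamma(0.0), gamma(2.0 * np.pi)
assert np.allclose(p0[0], p1[0]) and np.allclose(p0[1], p1[1])
print("null and closed: an embedded circle, as the Zollfrei property requires")
```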
This metric is actually conformally flat, since, thinking of $S^2\times S^2$ as the locus $$x_1^2+x_2^2+x_3^2=1, ~~ y_1^2+y_2^2+y_3^2=1,$$ in $\mathbb R^{3,3}= \mathbb R^3\times \mathbb R^3$, and introducing `stereographic' coordinates by \begin{equation} \label{stereo} \begin{array}{ccccccc}{\mathfrak x}_1 & = &\displaystyle \frac{x_1}{2(x_3-y_3)} & ~~~~ & {\mathfrak x}_2 & = & \displaystyle \frac{x_2}{2(x_3-y_3)} \\ &&&&&&\\ {\mathfrak y}_1 & = &\displaystyle \frac{y_1}{2(x_3-y_3)} & ~~~~ & {\mathfrak y}_2 & = &\displaystyle \frac{y_2}{2(x_3-y_3)} ~,\end{array} \end{equation} the metric can be re-expressed in the form $$g_0 = \frac{d{\mathfrak x}_1^2+ d{\mathfrak x}_2^2-d{\mathfrak y}_1^2-d{\mathfrak y}_2^2}{{\mathfrak x}_1^2+ {\mathfrak x}_2^2+ [{\mathfrak y}_1^2+{\mathfrak y}_2^2-{\mathfrak x}_1^2-{\mathfrak x}_2^2+ \frac{1}{4}]^2} ~.$$ However, this example has a second fundamental property that will play a crucial r\^ole in this paper. Indeed, the null geodesics of $(S^2\times S^2, g_0)$ are all {\em embedded circles}, since each is obtained by simultaneously traversing a great circle in each $S^2$ with equal speed. Following Guillemin \cite{mogul}, we will use the word {\em Zollfrei} to describe pseudo-Riemannian metrics with this property; for a detailed discussion, see \S \ref{zollfrei} below. The Zollfrei condition is also conformally invariant, so that we may consider it as yet another property of the conformal class $[g]$. Among all split-signature metrics on a given manifold, the Zollfrei condition is highly non-generic. It may therefore seem surprising that it becomes an {\em open} condition when restricted to the subspace of self-dual metrics: \begin{main}\label{auto} Let $(M,g)$ be a self-dual Zollfrei $4$-manifold. Then, with respect to the $C^2$ topology, there is an open neighborhood of $g$ in the space of pseudo-Riemannian metrics on $M$ such that every self-dual metric contained in this neighborhood is also Zollfrei. 
\end{main} For the purpose of studying the moduli of self-dual conformal structures, it thus seems reasonable to focus for the present on understanding those self-dual metrics which are also Zollfrei. But this point of view immediately prompts us to ask, ``Which $4$-manifolds admit self-dual Zollfrei metrics?'' We have just seen that $S^2\times S^2$ is one such manifold. Another example is given by the projective quadric $${\mathbb M}^{2,2}= \Big\{ [x_1: x_2 : x_3 : y_1: y_2: y_3 ] \in \mathbb R\mathbb P^5~\Big|~ |\vec{x}|^2 - |\vec{y}|^2 = 0\Big\}~,$$ which may be viewed as the quotient of $(S^2\times S^2, g_0)$ by the isometric $\mathbb{Z}_2$-action generated by the double antipodal map $$(\vec{x},\vec{y}) \mapsto (-\vec{x},-\vec{y}).$$ However, we will show in \S \ref{prelim} that these are the {only} topological possibilities: \begin{main}\label{topprop} Let $(M,g)$ be a connected oriented split-signature $4$-manifold which is both Zollfrei and self-dual. Then $M$ is homeomorphic to either $S^2\times S^2$ or ${\mathbb M}^{2,2}$. \end{main} This topological rigidity, however, is by no means symptomatic of any kind of underlying {\sl geometric} rigidity. To the contrary, our central purpose here is to prove the following {\em flexibility} result: \begin{main} \label{zfsd} There is a natural one-to-one correspondence between \begin{itemize} \item equivalence classes of smooth self-dual split-signature conformal structures on $S^2\times S^2$; and \item equivalence classes of totally real embeddings $\mathbb R\mathbb P^3\hookrightarrow \mathbb C\mathbb P_3$, \end{itemize} at least in a neighborhood of the standard conformal metric $[g_0]$ and the standard embedding of $\mathbb R\mathbb P^3$. 
\end{main} Here, two conformal structures are considered to be equivalent iff one is the pull-back of the other via some orientation-preserving self-diffeomorphism of $S^2\times S^2$; two embeddings $\mathbb R\mathbb P^3\hookrightarrow \mathbb C\mathbb P_3$ are considered to be equivalent iff they are interrelated by a reparameterization of $\mathbb R\mathbb P^3$ and/or the action of $PSL(4,\mathbb{C})$ on $\mathbb C\mathbb P_3$. In particular, the moduli space of self-dual Zollfrei conformal structures on $S^2\times S^2$ is infinite-dimensional; and, roughly speaking, the general such conformal structure depends on $3$ free functions of $3$ variables. The correspondence between the two kinds of structures depends on the existence of an $(S^2\times S^2)$-family of holomorphic disks with boundary on a given totally real $\mathbb R\mathbb P^3\subset \mathbb C\mathbb P_3$. By contrast, the same arguments also show that $({\mathbb M}^{2,2}, [g_0])$ has no non-trivial self-dual deformations. Indeed, by analogy with the Blaschke conjecture \cite{beszoll,lmzoll}, we are tempted to speculate that, up to conformal isometry, $({\mathbb M}^{2,2}, [g_0])$ might well be the only non-simply-connected self-dual Zollfrei $4$-manifold. Finally, it is interesting to observe that $g_0$ may be viewed as an indefinite scalar-flat K\"ahler metric on $\mathbb C\mathbb P_1\times \mathbb C\mathbb P_1$, and that, conversely, any indefinite scalar-flat K\"ahler metric on a complex surface is automatically self-dual. In this regard, our techniques lead to the following: \begin{main}\label{D} The only complex surface $(M,J)$ admitting Zollfrei scalar-flat indefinite K\"ahler metrics is $\mathbb C\mathbb P_1\times \mathbb C\mathbb P_1$. Every such metric arises from a family of analytic disks in $\mathbb C\mathbb P_3$ with boundary on a totally real $\mathbb R\mathbb P^3$.
Near the standard metric $g_0$, moreover, indefinite scalar-flat K\"ahler metrics of fixed total volume are in one-to-one correspondence with those totally real embeddings $\mathbb R\mathbb P^3\hookrightarrow \mathbb C\mathbb P_3-Q$ on which the pull-back of the $3$-form $$\phi = \Im m \frac{z_1 dz_2 \wedge dz_3\wedge dz_4 - \cdots - z_4 dz_1 \wedge dz_2 \wedge dz_3 }{(z_1^2+z_2^2+z_3^2+z_4^2)^2}$$ vanishes. Here $Q$ denotes the quadric surface $z_1^2+z_2^2+z_3^2+z_4^2=0$. \end{main} In particular, the moduli space of such metrics is once again infinite-dimensional. In the special setting of metrics with circular symmetry, indefinite scalar-flat K\"ahler metrics on $\mathbb C\mathbb P_1\times \mathbb C\mathbb P_1$ were previously investigated by Tod \cite{tod} and, independently, by Kamada \cite{kamada}, both of whom discovered that infinite-dimensional families of such metrics can be written down in closed form by means of the Lorentzian analogue of the first author's hyperbolic ansatz \cite{mcp2}. We thus believe that the chief interest of the present article must be found, not in the mere infinite-dimensionality of the relevant moduli space, but, rather, in the manner in which our holomorphic disk picture allows one to explore this interesting, geometric, non-linear ultra-hyperbolic second-order equation in terms of a first-order elliptic boundary-value problem. \section{Zollfrei Metrics} \label{zollfrei} If $(M,g)$ is an indefinite pseudo-Riemannian manifold, a geodesic $\gamma \subset M$ is said to be a {\em null geodesic} if $g(v,v)=0$ for any vector $v$ tangent to $\gamma$. We will primarily consider these null geodesics as {\em unparameterized} curves, even though $g$ endows them with a preferred class of so-called affine parameters.
The reason behind this point of view is that the null geodesics of a pseudo-Riemannian manifold $(M,g)$ are {\em conformally invariant} as unparameterized curves; that is, ${\zap f} g$ has the same null geodesics as $g$, for any non-zero function $\zap f$ on $M$. Indeed, let ${\mathscr H}\subset T^*M$ be the hypersurface of non-zero null co-vectors, and notice that ${\mathscr H}$ is foliated by a unique system of curves tangent to $\ker (\omega|_{\mathscr H})$, where $\omega$ is the usual symplectic form on $T^*M$. The Hamiltonian formalism then tells us that the projections into $M$ of these integral curves are precisely the null geodesics of $g$. The conformal invariance of null geodesics is thus an immediate consequence of the conformal invariance of ${\mathscr H}$. Manifolds for which every null geodesic is a simple closed curve will play a central r\^ole in this paper, and, following Guillemin \cite{mogul}, we will therefore introduce some convenient terminology to describe such spaces: \begin{defn}\label{simple} An indefinite pseudo-Riemannian manifold $(M,g)$ will be called {\em Zollfrei} if the image of each of its maximally extended null geodesics is an embedded circle $S^1\subset M$. \end{defn} Notice that this condition is conformally invariant, so it makes perfectly good sense to say that $(M,[g])$ is Zollfrei, where $$[g] = \{ {\mathzap f} g ~|~ {\mathzap f} : M \to \mathbb R^\times\}$$ denotes the conformal class determined by the metric $g$. Guillemin's definition \cite{mogul} is actually a good deal more stringent than Definition \ref{simple}. Let ${\mathzap Q}\subset \mathbb P T^*M$ denote the quotient ${\mathscr H}/\mathbb R^\times$, where $\mathbb R^\times = \mathbb R - \{ 0\}$ acts by scalar multiplication on $T^*M$. The lifts of null geodesics then define a foliation of ${\mathzap Q}$ by curves. Guillemin's definition then amounts to the following: \begin{defn}\label{guil} Let $(M,g)$ be a Zollfrei manifold.
We will say that $M$ is {\em strongly Zollfrei} if the foliation of ${\mathzap Q}$ by lifted null geodesics is a (locally trivial) circle fibration. \end{defn} Since this condition is obviously also conformally invariant, the strongly Zollfrei condition will also be considered as primarily pertaining to the conformal class $[g]$ rather than to the particular metric $g$ representing it. If $(M,[g])$ is a strongly Zollfrei $n$-manifold, we may define its {\em space of null geodesics} $N$ to be the leaf-space of the null-geodesic foliation of ${\mathzap Q}$. Because the foliation is assumed to be a locally trivial circle fibration, $N$ is then automatically a smooth manifold of dimension $2n-3$. The symplectic description of the foliation endows $N$ with a {\em contact structure}, meaning a maximally non-integrable codimension-$1$ sub-bundle $C\subset TN$ of the tangent bundle. Concretely, the tangent space $T_\gamma N$ of $N$ at a null geodesic $\gamma$ is locally represented on $M$ as equivalence classes of solutions $w$ of Jacobi's equation $$\nabla_v\nabla_v w = R_{vw}(v)$$ subject to the constraint $$g(v, w) = \mbox{constant}$$ and the equivalence relation $$w \sim w + (a + bt) v,$$ where $R$ is the curvature tensor of $g$, $t$ is a local affine parameter for $\gamma$, $v= d/dt$, and $a$ and $b$ are constants; in these terms, the contact sub-bundle $C\subset TN$ then corresponds to those Jacobi fields $w$ which satisfy the constraint $$g(v,w)=0.$$ If ${\mathzap L}\to N$ is the line sub-bundle of $T^*N$ consisting of the $1$-forms which annihilate $C$, then ${\mathzap L}^\times := {\mathzap L}-0_N$ is a symplectic submanifold of $T^*N$, called \cite{arnold} the {\em symplectification} of the contact manifold $(N,C)$. However, the pull-back of ${\mathzap L}^\times$ to ${\mathzap Q}$ can be canonically identified with ${\mathscr H}$, and the Marsden-Weinstein reduction of ${\mathscr H}\subset T^*M$ is therefore globally well defined.
This shows that the null geodesic foliation must actually be periodic up on ${\mathscr H}$, and not just down on ${\mathzap Q}$. Thus, no matter which metric $g$ we choose in the conformal class $[g]$, the null geodesics of a strongly Zollfrei manifold are all automatically {\em periodic} with respect to their affine parameters. (This conclusion should be contrasted with the closed but non-periodic null geodesics \cite{hawkell} of the Taub-NUT metric and related examples.) For this reason, Definition \ref{guil} is logically equivalent to the definition used by Guillemin in \cite{mogul}. \section{Self-Duality} \label{selfdual} Suppose that $M$ is an oriented $4$-manifold, and that $g$ is a split-signature pseudo-Riemannian metric. Then, as in the Riemannian case, the Hodge star operator $\star : \Lambda^2 \to \Lambda^2$ satisfies $\star^2=+{\mathbf 1}$, so there is an invariant splitting $$\Lambda^2 = \Lambda^+\oplus \Lambda^-$$ of the $2$-forms into the $(\pm 1)$-eigenspaces of $\star$. The inner product induced by $g$ is of Lorentz signature on both $\Lambda^\pm$, reflecting the fact that $SO_+(2,2)$ is a double cover of $SO_+(1,2)\times SO_+(1,2)$. Sections of $\Lambda^+$ (respectively, $\Lambda^-$) are called {\em self-dual} (respectively, {\em anti-self-dual}) forms. Thinking of the curvature tensor ${\mathcal R}$ of $g$ as a linear map ${\mathcal R}: \Lambda^2 \to \Lambda^2$, we thus obtain a decomposition $$ {\mathcal R}= \left( \mbox{ \begin{tabular}{c|c} &\\ $W_++\frac{s}{12}$&$\mathring{r}$\\ &\\ \cline{1-2}&\\ $\mathring{r}$ & $W_-+\frac{s}{12}$\\&\\ \end{tabular} } \right) . $$ of the Riemann tensor into simpler pieces. Here $W_+$ and $W_-$ are the trace-free pieces of the appropriate blocks, and are called the {\em self-dual} and {\em anti-self-dual Weyl curvatures}, respectively. The scalar curvature $s$ is understood to act by scalar multiplication, whereas $\mathring{r}$ is a disguised form of the trace-free part of the Ricci curvature tensor.
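The algebraic facts just quoted, namely that $\star^2=+{\mathbf 1}$ on $\Lambda^2$, that the eigenspaces $\Lambda^\pm$ are each $3$-dimensional, and that the induced inner product is of Lorentz signature on each, can be verified at a single tangent space by direct computation with the flat metric $\mathrm{diag}(+1,+1,-1,-1)$. The following sketch (a numerical illustration, not part of the argument) does exactly that.

```python
# Verification, for the flat split-signature metric eta = diag(+1,+1,-1,-1),
# of the statements above: the Hodge star on 2-forms squares to +1, its
# (+1)- and (-1)-eigenspaces Lambda^+ and Lambda^- are 3-dimensional, and the
# induced inner product has Lorentz signature (one +, two -) on each.
import numpy as np
from itertools import permutations

eta = np.diag([1.0, 1.0, -1.0, -1.0])

# Levi-Civita symbol with eps[0,1,2,3] = +1
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    sign, lst = 1, list(p)
    for i in range(4):
        for j in range(i + 1, 4):
            if lst[i] > lst[j]:
                sign = -sign
    eps[p] = sign

def star(w):
    """Hodge star of a 2-form w (antisymmetric 4x4 array): 1/2 eps_klmn w^mn."""
    return 0.5 * np.einsum('klmn,mn->kl', eps, eta @ w @ eta)

pairs = [(a, b) for a in range(4) for b in range(a + 1, 4)]
basis = []
for a, b in pairs:
    w = np.zeros((4, 4))
    w[a, b], w[b, a] = 1.0, -1.0
    basis.append(w)

# matrix of star acting on the 6-dimensional space of 2-forms
S = np.array([[star(basis[j])[pairs[i]] for j in range(6)] for i in range(6)])
assert np.allclose(S @ S, np.eye(6))                       # star^2 = +1

# Gram matrix of the induced inner product <w, v> = 1/2 w_ab v^ab
G = np.array([[0.5 * np.sum(basis[i] * (eta @ basis[j] @ eta))
               for j in range(6)] for i in range(6)])

evals, evecs = np.linalg.eigh(S)                           # S is symmetric here
assert sorted(int(round(e)) for e in evals) == [-1, -1, -1, 1, 1, 1]

for s in (1.0, -1.0):                                      # Lambda^+ and Lambda^-
    V = evecs[:, np.isclose(evals, s)]
    sig = np.linalg.eigvalsh(V.T @ G @ V)
    assert (sig > 0).sum() == 1 and (sig < 0).sum() == 2   # Lorentz signature
print("star^2 = +1; Lambda^+/- are 3-dimensional and Lorentzian")
```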
\begin{defn} An oriented split-signature pseudo-Riemannian $4$-manifold $(M,g)$ is called {\em self-dual} if it satisfies $W_-\equiv 0$. \end{defn} This condition is conformally invariant, in the sense that if $g$ is self-dual, so is the metric ${\mathzap f} g$, where ${\mathzap f}: M\to \mathbb R^\times$ is any non-zero function. Thus the self-duality condition should fundamentally be understood as pertaining to a conformal class $$[g] = \{ {\mathzap f} g ~|~ {\mathzap f} : M \to \mathbb R^\times\}$$ rather than to a particular metric $g$ representing it. If $(M,g)$ is a pseudo-Riemannian manifold, we will say that a real linear subspace $\Pi $ of a tangent space $T_xM$ is {\em isotropic} if it consists entirely of null vectors. Notice that this is a conformally invariant condition. If $(M,g)$ is an oriented split-signature $4$-manifold, then the space of isotropic $2$-planes in $TM$ has two connected components, each of which is a circle bundle over $M$. Indeed, if $\Pi \subset T_xM$ is an isotropic $2$-plane, then $\wedge^2\Pi $ corresponds, by index-lowering, to a null $1$-dimensional subspace of either $\Lambda^+$ or $\Lambda^-$. In the first case, one says that $\Pi $ is an {\em $\alpha$-plane}, whereas in the second case one says that $\Pi $ is a {\em $\beta$-plane}. We will henceforth use ${\zap p} : F\to M$ to denote the circle-bundle of $\beta$-planes over an oriented split-signature $4$-manifold $(M,g)$. \begin{defn} An immersed connected surface $S\looparrowright M$ will be called a {\em proto-$\beta$-surface} if its tangent space $T_{x}S$ is a $\beta$-plane for all $x\in S$. If, in addition, the proto-$\beta$-surface $S$ is maximal, in the sense that it is not a proper subset of a larger proto-$\beta$-surface, we will say that $S$ is a {\em $\beta$-surface}. \end{defn} \begin{lem}\label{noii} Let $(M,[g])$ be an oriented $4$-manifold with split-signature conformal metric, and let $S\looparrowright M$ be any proto-$\beta$-surface.
Then the second fundamental form of $S$ vanishes. Consequently, $S$ is totally geodesic. \end{lem} \begin{proof} The tangent bundle of any proto-$\beta$-surface $S$ is locally spanned by vector fields $v$ and $w$ with $[v,w]=0$ and $$g(v,v)=g(w,w)=g(v,w)=0.$$ Since $\nabla$ is torsion-free and $[v,w]=0$, we have $\nabla_vw=\nabla_wv$, and hence $$g(v,\nabla_vw)=g(v,\nabla_wv)= {\textstyle \frac{1}{2}} wg(v,v)=0,$$ whereas we also have $$g(w, \nabla_vw) = {\textstyle \frac{1}{2}} vg(w,w)=0.$$ However, $TS= TS^\perp$ with respect to $g$, so we conclude that $$\nabla_vw\in TS.$$ Similarly, $$g(w, \nabla_ww)= {\textstyle \frac{1}{2}} w g(w,w)=0,$$ and $$g(v,\nabla_ww) = wg(v,w)- g(\nabla_vw, w) = 0,$$ so we must have $$\nabla_ww\in TS,$$ too. Thus $$\nabla : \Gamma (TS) \times \Gamma (TS) \to \Gamma (TS).$$ In other words, the second fundamental form \begin{eqnarray*} \gemini : TS\times TS & \to & TM/TS \\ (v,w) & \mapsto & \nabla_vw \bmod TS \end{eqnarray*} vanishes. Equivalently, $S$ is totally geodesic, in the sense that any geodesic tangent to $S$ at some point necessarily remains within $S$. \end{proof} This observation then allows one to prove the following: \begin{lem}\label{proteus} Let $(M,[g])$ be an oriented $4$-manifold with split-signature conformal structure. Then the following are equivalent: \begin{romenumerate} \item $[g]$ is self-dual; \item every $\beta$-plane $\Pi \subset TM$ is tangent to some proto-$\beta$-surface; \item if $\Pi \subset TM$ is any $\beta$-plane, and if $v,w\in \Pi $, then ${\mathcal R}_{vw}v\in \Pi $, too. \end{romenumerate} \end{lem} \begin{proof} Suppose that $\Pi \subset TM$ is a $\beta$-plane, and let $v$ and $w$ span $\Pi $. Then, using $g$ to freely identify vectors and $1$-forms via index lowering, $v\wedge w\in \Lambda^-$ and $\langle v\wedge w , v\wedge w\rangle =0$.
Hence \begin{eqnarray*} g (w, {\mathcal R}_{vw}v) & = & \left\langle v\wedge w , {\mathcal R}(v \wedge w) \right\rangle \\ & = & \left\langle v\wedge w , \left( W_-+\frac{s}{12}\right)(v \wedge w) \right\rangle\\ & = & \left\langle v\wedge w , W_-(v \wedge w) \right\rangle . \end{eqnarray*} On the other hand, $$g (v, {\mathcal R}_{vw}v)=0$$ by the Bianchi identities. Since $\Pi =\Pi ^\perp$ with respect to $g$, and because $W_-$ is a trace-free quadratic form on $\Lambda^-$, it therefore follows that $(iii)$ is equivalent to requiring that $W_-\equiv 0$. Hence $(i) \Longleftrightarrow (iii)$. Now if $S$ is any proto-$\beta$-surface in $M$, and if $v$ and $w$ are any vector fields on $S$, then $\nabla_vw\in TS$ by Lemma \ref{noii}. The Riemann curvature tensor ${\mathcal R}$ of $g$ thus satisfies $${\mathcal R}_{vw}v = \nabla_v\nabla_wv- \nabla_w\nabla_vv - \nabla_{[v,w]}v\in TS$$ whenever $S$ is a proto-$\beta$-surface and $v,w\in TS$. When every $\beta$-plane $\Pi $ can be expressed as $T_xS$ for some proto-$\beta$-surface, it thus follows that condition $(iii)$ holds. Hence $(ii) \Longrightarrow (iii)$. Conversely, suppose that $(iii)$ holds. Let $\Pi \subset T_xM$ be a $\beta$-plane, and let $\gamma$ be any null geodesic through $x$ tangent to $\Pi $. Let ${\mbox{\cyr \em P} }\to \gamma$ be the rank-$2$ sub-bundle of $TM|_\gamma$ obtained from $\Pi $ by parallel transport along $\gamma$, and notice that each fiber of $\mbox{\cyr \em P} $ is the unique $\beta$-plane containing $v=\gamma^\prime$. Hypothesis $(iii)$ therefore guarantees that $$w\in {\mbox{\cyr \em P} } \Longrightarrow {\mathcal R}_{vw}v\in{\mbox{\cyr \em P} },$$ and a solution of Jacobi's equation \begin{equation} \label{jac} \nabla_v\nabla_v w = {\mathcal R}_{vw}v \end{equation} along $\gamma$ is therefore a section of ${\mbox{\cyr \em P} }$ iff $w|_x , (\nabla_vw)|_x\in \Pi $.
Now let $U\subset \Pi $ be an open disk about $0\in \Pi $ which is sufficiently small so as to be mapped diffeomorphically to a surface $S=\exp (U)$ with $T_xS=\Pi $ by the exponential map of $g$. Then $S$ is a union of null geodesics $\gamma$ through $x$, and along each such $\gamma$ we have $$T_{\tilde{x}}S = \left\{ w|_{\tilde{x}}~\Big|~ w \mbox{ solves (\ref{jac}) along } \gamma, w|_x=0, (\nabla_v w)|_x\in \Pi \right\}$$ for each $\tilde{x}\neq x$. The above argument thus shows that the tangent spaces of $S$ are precisely the $\beta$-planes obtained from $\Pi $ by parallel transport along radial null geodesics. In particular, $S$ is a proto-$\beta$-surface tangent to the given $\beta$-plane $\Pi $. Thus $(iii)\Longrightarrow (ii)$, and our proof is complete. \end{proof} Now given an oriented split-signature $4$-manifold $(M,g)$, let us consider the bundle ${\zap p} : F\to M$ of $\beta$-planes. We may define a $2$-dimensional real distribution $E\subset TF$ \label{dodger} by declaring that its value at $\Pi $ is the horizontal lift of $\Pi \subset TM$ to $T_\Pi F$. Then every proto-$\beta$-surface in $M$ has a canonical lift as an integral surface of $E$, and conversely every integral surface of $E$ projects to a proto-$\beta$-surface in $M$. In this way, we see that $E$ is integrable iff $(M,g)$ is self-dual. In particular, when $(M,g)$ is self-dual, there is a foliation $\mathscr F$ of $F$ tangent to $E$, and we can then obtain (maximal) $\beta$-surfaces in $(M,g)$ by projecting the leaves of $\mathscr F$ into $M$ via ${\zap p} : F\to M$. Lemma \ref{proteus} thus implies \begin{prop}\label{roger} The following assertions regarding an oriented split-signature \linebreak $4$-manifold $(M,g)$ are logically equivalent: \begin{itemize} \item $g$ is self-dual; \item each $\beta$-plane $\Pi\subset TM$ is tangent to a unique $\beta$-surface $S\looparrowright M$; \item the distribution of $2$-planes $E\to F$ is Frobenius integrable.
\end{itemize} \end{prop} Since Lemma \ref{noii} tells us that each $\beta$-surface $S$ is totally geodesic, the Levi-Civita connection $\nabla$ of the ambient metric $g$ induces a torsion-free connection $\triangledown$ on $S$ whose geodesics are precisely those null geodesics of $(M,g)$ which are contained in $S$. But, as we saw in \S \ref{zollfrei}, null geodesics are conformally invariant as unparameterized curves, so the {\em projective class} $[\triangledown ]$ of this induced connection \cite{lmzoll,schouten} therefore depends only on the ambient conformal class $[g]$. \begin{prop}\label{projflat} Let $(M,g)$ be a self-dual split-signature $4$-manifold, let $S\looparrowright M$ be any $\beta$-surface, and let $\triangledown$ be the connection induced on $S$ by restriction of the Levi-Civita connection $\nabla$ to $S$. Then $\triangledown$ is {\em projectively flat}. Indeed, there is a local diffeomorphism $\mbox{\cyr f} : \tilde{S}\to \mathbb R\mathbb P^2$, where $\tilde{S}$ is the universal cover of $S$, which maps each geodesic to a portion of some projective line. \end{prop} \begin{proof} Locally, we have a $3$-parameter family of $\beta$-surfaces in $M$, and, by taking the derivatives at $S$, these define a $3$-dimensional space of sections of the normal bundle $TM/TS$ of $S$. These are the covariantly constant local sections for the natural flat connection on the normal bundle $TF/TS$ of the lift of $S$ to $F$ induced by the integrable distribution $E$; and we obtain a natural $3$-dimensional space of local sections of $TM/TS$ by pushing forward parallel local sections of $TF/TS$ via the derivative of $\zap p$. Moreover, these sections can be taken to be {\em global} sections on the universal cover $\tilde{S}$ of $S$, since the pull-back of $TF/TS$ to $\tilde{S}$ is not only flat, but actually has {\em trivial} holonomy.
We can describe these local sections more concretely by exploiting our fixed metric $g$ in the self-dual conformal class $[g]$. Indeed, since $TS$ is maximally isotropic with respect to $g$, our metric induces a non-degenerate pairing $$TS\times (TM/TS)\to \mathbb R,$$ thus giving us an isomorphism between the cotangent bundle $T^*S$ and the normal bundle $TM/TS$ of $S$. Thus a section of the normal bundle precisely corresponds to a $1$-form $\varphi$ on $S$. We claim that a $1$-form arises from a $1$-parameter family of $\beta$-surfaces iff it satisfies the {\em generalized Killing equation} \begin{equation} \label{gke} \triangledown \varphi = {\textstyle \frac{1}{2}} d\varphi~, \end{equation} which, according to one's taste, can be rewritten either as $$\triangledown_j \varphi_k + \triangledown_k \varphi_j=0$$ or as $$ (\triangledown \varphi ) (v,v) = 0~~~\forall v. $$ Let us first demonstrate the `only if' direction of this assertion. Suppose we have a $1$-parameter family of proto-$\beta$-surfaces obtained by moving some open subset $U\subset S$. Then we can foliate these surfaces by null geodesics in a smooth manner, say with tangent vector field $v$. The vector field $u$ representing the variation then satisfies $[u,v]=0$, and projects to the section of $TM/TS$ along $U$ which represents the first variation of the family. The $1$-form $\varphi$ on $U$ representing the first variation is then given by $$\varphi (w) = g (u, w) ~~~\forall w\in TS.$$ But now \begin{eqnarray*} (\triangledown \varphi ) (v,v) & = & g (\nabla_v u , v) \\ & = & g (\nabla_u v , v) \\ &=& {\textstyle \frac{1}{2}} u g (v,v) \\ &=& 0. \end{eqnarray*} Since $v$ can be chosen to point in any direction at any point of $U$, it follows that $\triangledown \varphi$ must be skew-symmetric, and $\varphi$ is therefore a solution of (\ref{gke}). Now (\ref{gke}) is an over-determined equation, and a solution $\varphi$ is completely determined by its $1$-jet at a point of $S$.
To see this, observe that we certainly have $$\mbox{Alt }( \triangledown\triangledown \varphi ) =0, $$ since $S$ does not carry any non-zero $3$-forms. Using (\ref{gke}), however, this six-term identity can be rewritten as the three-term identity $$ \triangledown_j\triangledown_k \varphi_\ell = \triangledown_\ell\triangledown_k \varphi_j -\triangledown_k\triangledown_\ell \varphi_j , $$ and we may then notice that the right-hand side is just a curvature term. Along a null geodesic $\gamma \subset S$ with tangent field $v$, $\varphi$ therefore satisfies the ordinary differential equation \begin{equation} \label{gje} \triangledown_v \triangledown_v \varphi = \varphi( {\mathcal K}_{v\bullet}v) , \end{equation} where ${\mathcal K}$ is the curvature tensor of $\triangledown$, and where the right-hand-side denotes the $1$-form $$w\mapsto \varphi ( {\mathcal K}_{vw}v) .$$ Since a solution of (\ref{gje}) is completely determined by the value of $\varphi$ and $\triangledown_v \varphi$ at one point, it follows that solutions of (\ref{gke}) are completely determined by the value of $\varphi$ and $\triangledown \varphi$ at one point of a convex subset $U\subset S$. But (\ref{gke}) then tells us that a solution is consequently determined by the value of the $1$-form $\varphi$ and the $2$-form $d\varphi$ at one point. This shows that the space of solutions is at most $3$-dimensional. But since the codimension of the leaves in $F$ is exactly $3$, we conclude that the space of solutions of (\ref{gke}) must be exactly $3$-dimensional up on the universal cover $\tilde{S}$ of $S$. Thus, let $\mathbb V\cong \mathbb R^3$ be the space of solutions of (\ref{gke}) on $\tilde{S}$, and let $\P ({\mathbb V})\cong \mathbb R\mathbb P^2$ be the corresponding real projective space. 
For each $x\in \tilde{S}$, set $${\mathbb L}_x= \{ \mbox{solutions $\varphi$ of (\ref{gke}) on $\tilde{S}$ for which } \varphi|_x=0\},$$ and notice that this is a $1$-dimensional subspace of $\mathbb V$, since the freedom in choosing such a solution amounts to specifying the value of the $2$-form $d\varphi$ at $x$. We may thus define a map \begin{eqnarray*} \mbox{\cyr f} : \tilde{S} & \longrightarrow & \P ({\mathbb V}) \\ x & \mapsto & {\mathbb L}_x. \end{eqnarray*} Let us then first notice that any geodesic $\gamma$ is sent to a projective line by this map, because equation (\ref{gke}) implies that $\varphi (v) = \mbox{constant}$, where $v$ is an autoparallel tangent field for $\gamma$; thus $\mbox{\cyr f} (\gamma ) \subset \P ({\mathbb V}_0 )$, where ${\mathbb V}_0\subset {\mathbb V}$ is the plane defined by $(\varphi|_x)(v)=0$ for some arbitrary $x\in \gamma$. Now let $t$ be an affine parameter along $\gamma$, with $v=d/dt$, and let $w$ be a parallel vector field along $\gamma$ which is linearly independent from $v$. Then the restriction of an element of ${\mathbb V}_0$ to $\gamma$ satisfies $\varphi (v) \equiv 0$ and $\varphi (w) = f(t)$, where $f$ is a solution of the second order linear ordinary differential equation \begin{equation} \label{ode} \frac{d^2f}{dt^2}+ \kappa f=0, \end{equation} where $\kappa (t)= {{\mathcal K}^2}_{121}$ with respect to the frame $e_1=v$, $e_2=w$. If $\{f_1, f_2\}$ is a basis for the solution space of (\ref{ode}), then $\mbox{\cyr f}$ sends $\gamma$ to $\P ({\mathbb V}_0)\cong \mathbb R\mathbb P^1$ by $t\mapsto [f_2 (t) : - f_1(t)]$. However, equation (\ref{ode}) implies that the Wronskian $W= f_1f_2^\prime - f_1^\prime f_2$ is a non-zero constant along $\gamma$. 
Thus at least one of the expressions $$\frac{d}{dt}\left( \frac{-f_1}{f_2}\right) = \frac{W}{f_2^2}~~~\mbox{ and }~~~ \frac{d}{dt}\left( \frac{f_2}{-f_1}\right) = -\frac{W}{f_1^2}$$ is defined and non-zero at each point of $\gamma$, and $\mbox{\cyr f}$ thus sends $\gamma$ to the projective line $\P ({\mathbb V}_0) \subset \P ({\mathbb V})$ via a smooth immersion. Since the geodesic $\gamma$ is arbitrary, it follows that $\mbox{\cyr f} : \tilde{S}\to \P ({\mathbb V})$ is an equidimensional smooth immersion sending each geodesic to a portion of a projective line. In particular, the connection $\triangledown$ induced on the $\beta$-surface $S$ is projectively flat. \end{proof} The above proof is loosely based on a spinor argument given in \cite{thick}. The careful reader may notice that, in its present form, the proof is not manifestly conformally invariant. However, it is not terribly difficult to embellish the argument so as to achieve this end. The main point is that the $g$-induced map $TM/TS\to T^*S$ actually carries a conformal weight, so that the $1$-form fields $\varphi$ under discussion may better be described as $1$-forms with values in a line bundle. It is also worth pointing out that the above result depends quite strongly on the assumption that $M$ is self-dual. Indeed, it is not difficult to construct non-self-dual $4$-manifolds with isolated $\beta$-surfaces on which the induced connection is {\em not} projectively flat. For example, let $(\Sigma , h)$ be an oriented Riemannian $2$-manifold of {\em non-constant} Gauss curvature. Since \cite{schouten} a torsion-free connection $\triangledown$ on a surface is projectively flat iff its Ricci curvature $\rho$ satisfies $$ 2\triangledown_j \rho_{k\ell } - 2 \triangledown_k \rho_{j\ell} + \triangledown_j \rho_{\ell k} - \triangledown_k \rho_{\ell j} =0, $$ it follows that the Riemannian connection of such a metric $h$ is not projectively flat.
Now give $\Sigma\times \Sigma$ the indefinite product metric $\pi_1^*h-\pi_2^*h$, and observe that the diagonal $\Sigma \hookrightarrow \Sigma \times {\Sigma}$ becomes a $\beta$-surface if we endow $\Sigma\times \Sigma$ with the non-product orientation. However, the induced connection on this $\beta$-surface is just the Riemannian connection of $h$, so this $\beta$-surface is {\em not} projectively flat. Of course, this example in no way contradicts Proposition \ref{projflat}, since the $4$-manifold in question is non-self-dual. \section{Projectively Flat Surfaces} Proposition \ref{projflat} reveals that surfaces with flat projective connections play an important r\^ole in the theory of split-signature self-dual manifolds. The systematic study of surfaces with flat projective structures, also known as $\mathbb R\mathbb P^2$-structures, was begun by Kuiper \cite{nico}, who in particular observed that if $(S,[\triangledown ])$ is any projectively flat surface, the locally trivial nature of the geometry always gives rise to a developing map $\mbox{\cyr f} : \tilde{S}\to \mathbb R\mathbb P^2$, defined on the universal cover $\tilde{S}$ of $S$, as well as a representation $\phi : \pi_1 (S) \to PGL (3, \mathbb R )$, both of which are unique up to an overall $PGL(3, \mathbb R)$ transformation. Crucial explorations of this idea by Sullivan and Thurston \cite{sulthurs} eventually allowed Choi and Goldman \cite{chogo} to develop a substantially complete theory of flat projective structures on {\em compact} surfaces. In this article, we will be specifically interested in the case when the relevant projective structure is {\em Zoll}, meaning \cite{lmzoll} that every geodesic is a simple closed curve. It seems quite plausible that a connected surface which admits a Zoll projective connection must necessarily be compact, but, to our knowledge, this remains an open problem.
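The model examples are, of course, the standard ones: on $\mathbb R\mathbb P^2$, the geodesics of the standard flat projective structure are exactly the projective lines $\mathbb R\mathbb P^1\subset \mathbb R\mathbb P^2$, each of which is an embedded circle; and on its double cover $S^2$, with the pulled-back projective structure, the geodesics are exactly the great circles. Both of these projectively flat surfaces are therefore Zoll, and both are compact.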
Fortunately, however, the {\em projectively flat} case of the problem is a bit more manageable. \begin{lem}\label{closed} Let $(S, [\triangledown])$ be a connected surface with flat projective structure, and suppose that every maximal geodesic of $[\triangledown ]$ is a simple closed curve in $S$. Then $S$ is compact. \end{lem} \begin{proof} It suffices to consider the case when $S$ is {\em orientable}, since otherwise we may pass to an oriented double cover without sacrificing the assumption that every geodesic is a simple closed curve. Since $S$ is assumed to be Zoll, every (maximal) geodesic $\gamma\subset S$ is an embedded circle; and because we may now assume that $S$ is orientable, any such $\gamma$ has an open neighborhood $U\subset S$ diffeomorphic to an annulus $S^1 \times (-\epsilon , \epsilon )$. Now develop the universal cover $\tilde{U}$ of $U$ onto the $2$-sphere in such a manner that $\gamma$ is sent to some portion of the equator $x_3=0$, sending a chosen base-point to $(1,0,0)$. We orient the equator in the usual west-to-east manner, and give $\gamma$ the corresponding orientation. Let $I\subset \tilde{U}$ be an arc (that is, an embedded closed interval) such that $\gamma \subset U\subset S$ is obtained from $I$ by identifying its two endpoints via the covering map $\tilde{U}\to U$, and such that the initial end-point is a pre-image of the chosen base-point for $S$. Then the restriction of the developing map to some open neighborhood of $I\subset \tilde{U}$ lifts to the universal cover $V$ of $S^2-\{ (\pm 1, 0, 0)\}$, where $V$ may be explicitly identified with $\mathbb R\times (-\pi/2, \pi/2)$ through the use of spherical coordinates $$ (x_1, x_2, x_3) = (\cos \theta \cos \varphi , \sin \theta \cos \varphi , \sin \varphi ), ~~~~ (\theta , \varphi )\in \mathbb R\times (-\pi/2, \pi/2).$$ This lift of the development then takes $I$ diffeomorphically onto a closed interval, say, $\tilde{I}= [0,L]\times \{0\}$ in $V$.
Thus, a perhaps smaller neighborhood $U^\prime$ of $\gamma\subset S$ can be obtained from a neighborhood $V^\prime$ of $\tilde{I}\subset V$ by identifying some neighborhood $V_1$ of $(0,0)$ with a neighborhood $V_2$ of $(L,0)$; moreover, this identification is carried out via a lift of the action on $S^2= (\mathbb R^3-0)/\mathbb R^+$ of some $A\in SL(3, \mathbb R)$. Notice that this transformation $A$ must send the equator to itself, in an orientation-preserving manner. Hence $(0,0,1)$ must be an eigenvector of $A^t$, with eigenvalue $\lambda > 0$. Let us now examine the action of $A^t$ on the space $\mathbb R\mathbb P^{2*} = \P (\mathbb R^{3*})$ of great circles in $S^2$. We have just observed that $[0,0,1]$ must be a fixed point of this action. But our hypotheses also preclude the existence of a point $p\in \mathbb R\mathbb P^{2*}$, $p\neq [0,0,1]$, such that $\lim_{n\to \infty} (A^t)^n(p)= [0,0,1]$. Indeed, if there were such $p$, the great circles corresponding to the iterates $(A^t)^n(p)$ would, for $n\gg 0$, link up end-to-end via $A$ to form part of a geodesic $\gamma^\prime\neq \gamma$ in the annulus $U \subset S$ which spirals into $\gamma$; every point of $\gamma$ would then be an accumulation point of $\gamma^\prime$, and as the Zoll hypothesis implies that $\gamma^\prime$ must be a closed subset of $S$, this would imply that $\gamma\subset \gamma^\prime$, contradicting the fact that $\gamma$ is a maximal geodesic. We can also run this argument backwards in parameter time by replacing $A$ with $A^{-1}$, and thereby deduce that there cannot be any point $p\in \mathbb R\mathbb P^{2*}$, $p\neq [0,0,1]$, such that $\lim_{n\to \infty} (A^t)^{-n}(p)= [0,0,1]$. The complex eigenvalues of $A$ must therefore all have the same modulus. Moreover, there cannot be a vector $v\in \mathbb R^{3*}$ such that $A^t(v) = \lambda v + (0,0,1)$.
Hence $A\in SL(3,\mathbb R)$ can be put in one of the normal forms $$(a) ~~ \left[\begin{array}{ccc}\cos \psi & -\sin \psi & 0 \\ \sin \psi & \hphantom{-}\cos \psi & 0 \\0 & 0 & 1\end{array}\right] ~~~\mbox{or}~~~(b)~~ \left[\begin{array}{ccc}\pm 1 & \hphantom{\pm}1 & \hphantom{\pm}0 \\\hphantom{\pm}0 & \pm1 & \hphantom{\pm}0 \\\hphantom{\pm}0 & \hphantom{\pm}0 & \hphantom{\pm}1\end{array}\right] $$ by an appropriate change of basis of the $x_1x_2$-plane. Now suppose that $A$ has normal form $(a)$. Then any geodesic with initial point and tangent direction close to that of $\gamma$ will remain in our annular neighborhood $U$; indeed, every such geodesic is explicitly represented in our $(\theta, \varphi)$ coordinates as the union of the graphs $$ \varphi = \tan^{-1} (t\sin (\theta -\theta_0 + k\psi )), ~~\theta \in [0,\psi ], ~~k\in \mathbb{Z},$$ for given constants $t$ and $\theta_0$, with $t$ sufficiently small, and where the ostensible $\bmod$-$2\pi$ ambiguity of $\psi$ has been remedied by setting $\psi=L$. But the Zoll condition stipulates that every geodesic is a simple closed curve, and a simple closed curve in an annulus necessarily has winding number one. Thus the Zoll assumption guarantees that $\psi$ is a multiple of $2\pi$, and the developing map will therefore be well defined on a neighborhood of any such geodesic $\gamma\subset S$, even though general principles had led us to expect that it would merely be defined up on the universal cover $\tilde{S}$. Moreover, the subset of $\P(TS)$ consisting of directions tangent to geodesics $\gamma$ with this normal form is {\em open}. On the other hand, if $A$ has normal form $(b)$, then the $\pm$ sign must be $+$; if not, an annular neighborhood of the given geodesic $\gamma$ would contain closed geodesics with self-crossings, obtained by gluing together great circles near the equator with their reflections through the $x_3$-axis.
Thus, the transformation $A$ must take the normal form $$(a) ~~ \left[\begin{array}{ccc}1 & 0 & 0 \\ 0& 1 & 0 \\0 & 0 & 1\end{array}\right] ~~~\mbox{or}~~~(b)~~ \left[\begin{array}{ccc} 1 & 1 & 0 \\0 & 1 & 0 \\ 0 & 0 & 1\end{array}\right]~, $$ and we will henceforth say that a given geodesic $\gamma$ is type $(a)$ or type $(b)$ depending on which one of these normal forms occurs. Now recall that the developing-map construction gives us an immersion $\tilde{\mbox{\cyr f}} : \tilde{S}\to S^2$ and a group homomorphism $\phi : \pi_1 (S) \to SL(3,\mathbb R )$ such that the deck-transformation action of $\pi_1 (S)$ on the universal cover $\tilde{S}$ is compatible with the action of $SL(3,\mathbb R )$ on $S^2$. In particular, we may define a natural intermediate cover $\hat{S}$ of $S$ by setting $\hat{S}= \tilde{S}/\ker \phi$, so that $S$ is then obtained from $\hat{S}$ by dividing by the action of a matrix group $G= \phi [\pi_1 (S)]\subset SL(3,\mathbb R )$, and such that we still have a developing map $\hat{\mbox{\cyr f}}: \hat{S}\to S^2$ which correctly intertwines the effective actions of $G$ on $\hat{S}$ and $S^2$. Let $\varpi : \hat{S}\to S$ be the covering map. If $\gamma\subset S$ is a geodesic of type $(a)$, then $\varpi^{-1}(\gamma)=\coprod_j \hat{\gamma}_j$, where each $\hat{\gamma}_j\subset \hat{S}$ is a closed geodesic of type $(a)$, and where $\varpi|_{\hat{\gamma}_j} : \hat{\gamma}_j\to \gamma$ is a diffeomorphism for each $j$; this follows immediately from the fact that every non-trivial deck transformation of $\hat{S}$ must act non-trivially on $S^2$, whereas non-trivial coverings of a closed geodesic of type $(a)$ are invisible to the developing map.
On the other hand, if $\gamma^\prime\subset S$ is a geodesic of type $(b)$, then $\varpi^{-1}(\gamma^\prime)=\coprod_j \hat{\gamma}_j^\prime$, where each $\hat{\gamma}_j^\prime\approx \mathbb R$ is a non-closed geodesic in $\hat{S}$; moreover, the conjugates of the image of $[\gamma^\prime]\in \pi_1(S)$ in $G$ give us deck transformations of $\hat{S}$ which roll up the various $\hat{\gamma}_j^\prime$ into copies of $\gamma^\prime$, while simultaneously acting on $S^2$ via linear transformations of normal form $(b)$. Such a matrix acts on $S^2$ in a manner fixing a great circle, and on this great circle there is a preferred antipodal pair of points, given by $(\pm 1 , 0, 0)$ for the standard model, which are the accumulation points of the non-closed orbits, and which we will refer to as the {\em goals} of $\hat{\gamma}_j^\prime$. Since $S$ is paracompact, $G= \pi_1(S)/\ker \phi$ is countable, so it follows that only countably many points of $S^2$ occur as goals of geodesics of type $(b)$. Now let $\hat{x}\in \hat{S}$ be any point that is not sent to a goal, and let $x=\varpi (\hat{x})$ be its projection to $S$. Then only countably many geodesics through $x$ can be of type $(b)$, since any such geodesic would develop onto a great circle joining $\hat{\mbox{\cyr f}} (\hat{x})$ to a goal. Moreover, the set of directions in $\mathbb P (T_xS)\approx S^1$ which are tangent to geodesics of type $(a)$ is open. Thus the set $B\subset \mathbb P (T_xS)$ of directions tangent to geodesics of type $(b)$ is a countable closed subset of the circle. We claim that $B=\emptyset$. If not, choose a base-point for $\mathbb P (T_xS)$ which is not in $B$ and use the counter-clockwise angle from this direction to define a homeomorphism $\mathbb P (T_xS)\approx [0, \pi]/\{ 0,\pi\}$ which sends the base-point to the equivalence class $\{ 0, \pi\}$. Then $B$ becomes a non-empty countable closed subset $\mbox{\cyr B} \subset (0,\pi )$.
Let $\mbox{\cyr b} = (\sup\mbox{\cyr B}) \in\mbox{\cyr B}$, and let $b\in \mathbb P (T_xS)$ be the corresponding direction. Let ${\mathscr I}\subset \mathbb P (T_xS)$ be the open subset corresponding to the open interval $(\mbox{\cyr b} ,\pi )$. Every direction in $\mathscr I$ is tangent to a geodesic of type $(a)$, and since every such geodesic $\gamma$ has an annular neighborhood which looks like a finite covering of a band around the equator, all the geodesics $\gamma_t$ tangent to elements of $\mathscr I$ form a smooth family of maps of the circle, and in particular are all homotopic to one another. We can then uniquely lift this family of geodesics of type $(a)$ as a smooth family $\hat{\gamma}_t$, $t\in (\mbox{\cyr b} ,\pi )$, of closed geodesics through $\hat{x}\in \hat{S}$. Let $Y\subset \hat{S}$ be the union of these curves $\hat{\gamma}_t$, and notice that $\varpi|_Y$ is injective, since all the $\gamma_t$ are homotopic. If $a\in G-\{ 1\}$, it thus follows that $a(Y)\cap Y = \emptyset$. Now let $\gamma^\prime\subset S$ be the geodesic of type $(b)$ through $x$ with tangent $b$, and let $\hat{\gamma}^\prime$ be its lift through $\hat{x}$. By composing $\hat{\mbox{\cyr f}}$ with an element of $SL(3,\mathbb R)$ if necessary, we can henceforth assume that $\hat{\mbox{\cyr f}}[\hat{\gamma}^\prime]$ is a subset of the equator $z=0$, that $\hat{\mbox{\cyr f}} (\hat{x})= (0,1,0)$, and that there is an element $a\in G$ which sends $\hat{\gamma}^\prime$ to itself, while acting on $S^2$ by $(x,y,z)\mapsto (x+y,y, z)/\|(x+y,y,z)\|$. Let $\sigma\subset \hat{\gamma}^\prime$ consist of those points of $\hat{\gamma}^\prime$ for which every neighborhood meets every $\hat{\gamma}_t$ for $t\in (\mbox{\cyr b} ,\mbox{\cyr b} + \epsilon )$, where $\epsilon> 0$ is allowed to depend on the neighborhood.
Now the developing map $\hat{\mbox{\cyr f}}$ is a local diffeomorphism, and carries $Y$ onto $\{ (0 , \pm 1 , 0)\} \cup \{ y < z \cot \mbox{\cyr b}, z > 0\} \cup \{ y > z \cot \mbox{\cyr b}, z < 0\}$ by a finite covering map. It follows that the non-empty subset $\sigma \subset \hat{\gamma}^\prime$ is both open and closed. Hence $\sigma = \hat{\gamma}^\prime$. In particular, any point of $\hat{\gamma}^\prime$ which is sent to the semi-circle $\{ z=0, y > 0\}$ is contained in an open disk which intersects $Y$ in an open half-disk consisting of points south of $\hat{\gamma}^\prime$. Hence each of the iterates $a^n (\hat{x})$, $n > 0$, has an open neighborhood $U_n$ such that $a(U_n\cap Y) \cap (U_{n+1}\cap Y) \neq \emptyset$. But this means that $a(Y)\cap Y \neq \emptyset$, which is a contradiction. Hence $\mbox{\cyr B} = \emptyset$, and every geodesic through $x$ is of type $(a)$. The set of all geodesics through $x$ therefore forms a smooth family of embedded circles. If $\tilde{X}\subset \mathbb P (TS)$ denotes the union of all the lifts of geodesics through $x$, then $\tilde{X}$ is a smooth compact surface---in fact, a Klein bottle. Moreover, $\mathbb P (T_xS)$ is a subset of $\tilde{X}$, and this circle has non-orientable normal bundle. Let $X$ be the smooth compact surface---actually, a projective plane---obtained from $\tilde{X}$ by blowing this circle down to a point $x_0\in X$. The projection $\tilde{X}\to S$ then induces a smooth proper map $f: X\to S$ such that $f_* : T_{x_0}X\to T_xS$ is an isomorphism. But, by assumption, any geodesic passes through $x$ only once. Thus $f^{-1}(x)= \{ x_0\}$, and the mod-$2$ degree of $f$ is therefore $1\in \mathbb{Z}_2$. If $f$ were not onto, any point outside the image would be a regular value with empty pre-image, contradicting the fact that every regular value of a map of mod-$2$ degree $1$ must have an odd number of points in its pre-image. Hence $f$ is onto, and $S=f(X)$ is compact, as claimed.
\end{proof} We therefore obtain the following useful result: \begin{thm} \label{standard} Let $(S, [\triangledown])$ be a connected surface with flat projective structure, and suppose that every maximal geodesic of $[\triangledown ]$ is a simple closed curve in $S$. Then, up to diffeomorphism, $(S,[\triangledown])$ is either $S^2$ or $\mathbb R\mathbb P^2$, equipped with the standard projective connection. \end{thm} \begin{proof} By Lemma \ref{closed}, $S$ is a compact Zoll manifold. Hence \cite[Lemma 2.8]{lmzoll} tells us that $S$ is diffeomorphic to either $S^2$ or $\mathbb R\mathbb P^2$. In particular, the universal cover $\tilde{S}$ of $S$ is compact, so the developing map $\tilde{\mbox{\cyr f}}: \tilde{S}\to S^2$ must be a covering map. Hence $\tilde{\mbox{\cyr f}}$ is a diffeomorphism. If $S\approx S^2$, $\tilde{\mbox{\cyr f}}$ is now a diffeomorphism $S\to S^2$ which sends the given flat projective structure to the standard one, and we are done. If $S\approx \mathbb R\mathbb P^2$, the non-trivial deck transformation of $\tilde{S}\approx S^2$ defines a linear involution for which $+1$ is not an eigenvalue. But the only such involution is $-1$. Thus the developing map actually induces a diffeomorphism $S\to \mathbb R\mathbb P^2$, and this gives us the promised diffeomorphism sending the given projective structure to the standard one. \end{proof} In the next section, we will see that this has some interesting ramifications for the theory of Zollfrei self-dual $4$-manifolds. \section{Topological Implications} \label{prelim} In this section, we will show that, up to homeomorphism, the only oriented $4$-manifolds which admit self-dual Zollfrei metrics are $S^2 \times S^2$ and the real projective quadric ${\mathbb M}^{2,2} = [S^2 \times S^2]/\mathbb{Z}_2$. We begin our proof with the following observation: \begin{lem} \label{beta} Let $(M,[g])$ be a Zollfrei self-dual $4$-manifold.
Then every $\beta$-surface $S\looparrowright M$ is an embedded $S^2$ or $\mathbb R\mathbb P^2$ in $M$. Moreover, any two points of such a $\beta$-surface $S$ are joined by a null geodesic $\gamma$. \end{lem} \begin{proof} Let $(M,[g])$ be a Zollfrei self-dual $4$-manifold, and let $S\looparrowright M$ be a $\beta$-surface. By Lemma \ref{noii}, $S$ is totally geodesic, so the immersion $S\looparrowright M$ is injective outside a discrete subset, where the various tangent spaces of $S$ must be transverse to each other. Moreover, $\nabla$ induces a connection $\triangledown$ on $S$. Proposition \ref{projflat} asserts that the associated projective structure $[\triangledown ]$ is {\em flat}. But the geodesics of $(S, [\triangledown ])$ are all null geodesics of $[g]$, so the assumption that $(M,[g])$ is Zollfrei implies that $(S, [\triangledown ])$ is a projectively flat surface in which every geodesic is a simple closed curve. Theorem \ref{standard} therefore tells us that $S$ is diffeomorphic to either $S^2$ or $\mathbb R\mathbb P^2$, in such a manner that $[\triangledown ]$ becomes the standard projective structure. In particular, every pair of points of $S$ can be joined by a geodesic of $[\triangledown ]$. Since the restriction of $S\looparrowright M$ to any geodesic yields an immersion which is one-to-one outside a finite number of double-points with distinct tangents, the assumption that every null geodesic of $[g]$ is a simple closed curve therefore implies that $S\looparrowright M$ is actually an embedding. \end{proof} When we say that $(M,[g])$ is self-dual, it is already implicit that $M$ is oriented. However, $O(2,2)$ has {\em four} components; indeed, the inclusion $O(2) \times O(2) \hookrightarrow O(2,2)$ is a homotopy equivalence. We will say that an oriented split-signature pseudo-Riemannian $4$-manifold is {\em space-time-orientable} if its structure group can be reduced to the identity component $SO_+(2,2)$ of $O(2,2)$.
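Concretely, the four components may be represented in the flat model $(\mathbb R^{2,2}, dx^2+dy^2-dz^2-dw^2)$ by the matrices $$ {\rm diag} (1,1,1,1), ~~ {\rm diag} (-1,1,1,1), ~~ {\rm diag} (1,1,-1,1), ~~ {\rm diag} (-1,1,-1,1), $$ which respectively preserve both orientations of the definite subspaces $\mathbb R^2\oplus 0$ and $0 \oplus \mathbb R^2$, reverse only the first, reverse only the second, and reverse both. Space-time-orientability thus amounts to the requirement that the transition functions of $TM$ can be chosen so as to avoid the last three components.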
Obviously this is automatically the case if $H^1(M,\mathbb{Z}_2)=0$, and so in particular holds whenever $M$ is simply connected. If $M$ is {\em not} space-time-orientable, there is always a double cover $\tilde{M}\to M$ which {\em is} space-time orientable with respect to the pull-back of the metric. Moreover, $\tilde{M}$ will then be Zollfrei if $M$ is, since all the null geodesics of $\tilde{M}$ are at worst double covers of those in $M$. Now suppose that $(M,g)$ is a space-time-orientable split-signature self-dual $4$-manifold. Then we may express $TM$ as a direct sum $T_+\oplus T_-$, where $T_+$ and $T_-$ are mutually orthogonal with respect to $g$, and where the restriction of $g$ to $T_+$ (respectively, to $T_-$) is positive (respectively, negative) definite; for example, this may be done by choosing some background Riemannian metric $h$ on $M$, and then diagonalizing $g$ with respect to $h$ at each point. A space-time orientation for $M$ then amounts to a choice of orientations for the bundles $T_\pm$. Notice that this then allows us to express $TM$ as the sum of two complex line bundles; indeed, a reduction of the structure group of $(M,g)$ to $SO(2) \times SO(2) = U(1)\times U(1)$ is equivalent \cite{matsushita} to the choice $( {\mathfrak J} , \tilde{{\mathfrak J}})$ of a pair of $g$-compatible almost-complex structures, where ${\mathfrak J}$ is compatible with the given orientation of $M$, and where $\tilde{{\mathfrak J}}$ is compatible with the opposite orientation. An isotropic $2$-plane $\Pi \subset T_xM$ then becomes the graph of an isometry from $(T_{-x}, -g)$ to $(T_{+x},g)$, and such an isotropic subspace $\Pi $ is then a $\beta$-plane iff this isometry is {\em orientation-reversing}. In particular, the orientation of $T_-$ induces an orientation on every $\beta$-plane, and hence on any $\beta$-surface; what is more, any $\beta$-surface is a $\tilde{{\mathfrak J}}$-holomorphic curve in $M$. 
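To make the graph description concrete, consider (as an illustrative model, with orientation conventions chosen to match the frame computations of the final section) flat $\mathbb R^{2,2}$, with $T_+=\mbox{span}\{ e_1, e_2\}$, $T_-=\mbox{span}\{ e_3, e_4\}$, and $g= (e^1)^2+(e^2)^2-(e^3)^2-(e^4)^2$. The isotropic $2$-planes $$\Pi_\alpha = \mbox{span}\{ e_1+e_3,~ e_2+e_4\}, \qquad \Pi_\beta = \mbox{span}\{ e_1+e_3,~ e_2-e_4\}$$ are then the graphs of the isometries $(T_-,-g)\to (T_+,g)$ determined by $e_3\mapsto e_1$, $e_4\mapsto \pm e_2$, the first of which preserves orientation, while the second reverses it. Index lowering sends $\wedge^2\Pi_\alpha$ to the null self-dual line spanned by $(e^1\wedge e^2 + e^3\wedge e^4)-(e^1\wedge e^4 - e^2\wedge e^3)$, and $\wedge^2\Pi_\beta$ to the null anti-self-dual line spanned by $(e^1\wedge e^2 - e^3\wedge e^4)+(e^1\wedge e^4 + e^2\wedge e^3)$; thus $\Pi_\alpha$ is an $\alpha$-plane, whereas $\Pi_\beta$ is a $\beta$-plane.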
We thus obtain the following: \begin{lem}\label{sphere} Let $(M,[g])$ be a space-time-orientable Zollfrei self-dual $4$-manifold. Then every $\beta$-surface $S$ in $M$ is an embedded $2$-sphere. \end{lem} \begin{proof} A space-time orientation induces an orientation of each $\beta$-surface. Lemma \ref{beta} therefore tells us that each $\beta$-surface must be an embedded $2$-sphere. \end{proof} The following observation is therefore pertinent: \begin{lem}\label{unisphere} Suppose that $(M,[g])$ is a split-signature self-dual $4$-manifold in which every $\beta$-surface is an immersed $2$-sphere. Let ${\zap p}: F\to M$ be the bundle of $\beta$-planes over $M$. Then the canonical foliation $\mathscr F$ of $F$ by lifted $\beta$-surfaces is locally trivial, in the sense that every leaf has a neighborhood which is diffeomorphic to $S^2 \times \mathbb R^3$ in such a manner that each first-factor sphere $S^2 \times \{ *\}$ is a leaf. Moreover, this diffeomorphism can be chosen in such a way that each great circle in each first-factor sphere projects to a null geodesic in $M$. \end{lem} \begin{proof} Since every leaf of $\mathscr F$ is compact and simply connected, the holonomy of $\mathscr F$ around any leaf is trivial, and $\mathscr F$ is therefore a fibration \cite{thurstfol}. In particular, we can choose a transversal $U$ through a given leaf which meets every nearby leaf exactly once. Assume, without loss of generality, that $U\approx \mathbb R^3$, and let $V\approx U \times S^2$ be the corresponding neighborhood of the leaf. Since $V$ is simply connected, the line bundle $\ker {\zap p}_*$ becomes trivial when restricted to $V$, and we can therefore choose a non-zero vector field $u$ on $V$ which spans $\ker {\zap p}_*$. The $U$-component of this vector field then defines a function $V\to (\mathbb R^3 - \{ 0\})$, and we thus get a map $V\to S^2$ by composing with the radial projection $(\mathbb R^3 - \{ 0\})\to S^2$.
Modulo the action of $SL(3, \mathbb R)$, however, the restriction of this map to any leaf $S$ is really just the developing map $\mbox{\cyr f} : S\to \mathbb R\mathbb P^2$ constructed in Proposition \ref{projflat}, lifted to the universal cover $S^2$ of $\mathbb R\mathbb P^2$. Taking the Cartesian product with the leaf projection $V\to U\approx \mathbb R^3$, we thus obtain a local trivialization $V\to S^2\times \mathbb R^3$ of $\mathscr F$ in which every lifted null geodesic becomes a great circle in a first-factor $S^2$. \end{proof} This gives us a more transparent understanding of the Zollfrei condition: \begin{thm}\label{tasty} Let $(M,[g])$ be a space-time-orientable self-dual $4$-manifold. Then the following conditions are equivalent: \begin{romenumerate} \item $(M,[g])$ is Zollfrei; \item $(M,[g])$ is strongly Zollfrei; \item every $\beta$-surface is an embedded $2$-sphere in $M$. \end{romenumerate} \end{thm} \begin{proof} Definition \ref{guil} tells us that $(ii) \Longrightarrow (i)$, while Lemma \ref{sphere} asserts that $(i) \Longrightarrow (iii)$. It therefore suffices to show that $(iii) \Longrightarrow (ii)$. Thus, suppose that every $\beta$-surface of $(M,[g])$ is an embedded $2$-sphere in $M$. Let ${\zap Q}$ be the bundle of null directions of $(M,[g])$, and notice that the bundle projection ${\zap Q}\to M$ factors through an $S^1$-fibration ${\zap Q}\to F$, since every non-zero null vector is an element of exactly one $\beta$-plane. But Lemma \ref{unisphere} tells us that the foliation of ${\zap Q}$ by lifted null geodesics simplifies when restricted to the null geodesics in a $\beta$-surface, where it just becomes the standard fibration $\P (TS^2) \to \mathbb R\mathbb P^2$; moreover, this picture applies uniformly in a neighborhood of each leaf of the foliation $\mathscr F$ of $F$. Hence the foliation of $\zap Q$ by lifted null geodesics is a locally trivial circle fibration.
Since each null geodesic lifts to a great circle in a leaf of $\mathscr F$, and each leaf embeds into $M$ via ${\zap p}: F\to M$, each null geodesic is also an embedded circle. It therefore follows that $(M,[g])$ is strongly Zollfrei, and we are done. \end{proof} We also obtain the following crucial fact: \begin{lem} Suppose that $(M,[g])$ is a space-time-orientable self-dual Zollfrei $4$-manifold, and let ${\zap p}: F\to M$ be the bundle of $\beta$-planes over $M$. Then there is a smooth $3$-manifold $P$ and a smooth proper submersion ${\zap q}: F\to P$ whose fibers are exactly the leaves of the foliation $\mathscr F$. \end{lem} \begin{proof} By Lemma \ref{sphere} and Lemma \ref{unisphere}, $\mathscr F$ must be a locally trivial fibration by $2$-spheres, and the leaf space $P$ is therefore a manifold. \end{proof} The situation is thus encapsulated by the diagram \setlength{\unitlength}{1ex} \begin{center}\begin{picture}(20,17)(0,3) \put(10,17){\makebox(0,0){$F$}} \put(2,5){\makebox(0,0){$M$}}\put(18,5){\makebox(0,0){$P$}} \put(15,12){\makebox(0,0){${\zap q}$}} \put(5,12){\makebox(0,0){${\zap p}$}} \put(11,15.5){\vector(2,-3){6}} \put(9,15.5){\vector(-2,-3){6}} \end{picture}\end{center} which we shall refer to as the (real) {\em double fibration} of $(M,[g])$. Now since $F$ is connected, so is $P={\zap q}(F)$, and we may therefore join any two distinct points of $P$ by a smoothly embedded arc. Trivializing the restriction of ${\zap q}$ to this arc then results in a free homotopy of the corresponding leaves of $\mathscr F$. Finally, pushing this homotopy down via ${\zap p}$ produces a free homotopy of any two given $\beta$-surfaces in $M$. In particular, any two $\beta$-surfaces are homologous: \begin{lem}\label{homotopic} Let $(M,[g])$ be a space-time-oriented Zollfrei self-dual $4$-manifold. Then any two $\beta$-surfaces $S, S^\prime \subset M$ are freely homotopic, and so, in particular, represent the same homology class in $H_2 (M, \mathbb{Z})$.
\end{lem} Now since $M^4$ is oriented, there is a well defined intersection form $$H_2(M,\mathbb{Z})\times H_2(M,\mathbb{Z})\to \mathbb{Z} $$ even if $M$ is non-compact; for example, this reflects the fact that we always have a Poincar\'e-duality isomorphism $H_2(M)\cong H^2_c(M)$ as well as a natural homomorphism $H^2_c(M)\to H^2(M)$. If $S_1$ and $S_2$ are compact embedded oriented surfaces in general position, one assigns a local intersection index of $\pm 1$ to each intersection point $x\in S_1\cap S_2$ so as to indicate whether the given orientations of $T_xS_1\oplus T_xS_2$ and $T_xM$ agree or disagree, and the homological intersection number $[S_1]\cdot [S_2]$ is then precisely the sum of these intersection indices. When $S_1$ and $S_2$ happen to be $\beta$-surfaces, we thus obtain the following: \begin{lem}\label{insect} If $S_1$ and $S_2$ are distinct compact embedded $\beta$-surfaces in a space-time-orientable self-dual $4$-manifold $(M,[g])$, then their homological intersection number $[S_1]\cdot [S_2]$ equals $-\# (S_1\cap S_2 )$. \end{lem} \begin{proof} Each $\beta$-surface is totally geodesic, so two distinct $\beta$-surfaces can never share the same tangent space. Since distinct $\beta$-planes in any tangent space are transverse, this shows that $S_1$ and $S_2$ are necessarily in general position. Now since any $\beta$-plane may be viewed as the graph of an orientation-reversing isometry $T_-\to T_+$, the intersection index assigned to each point of intersection is $-1$. Summing over intersection points thus yields $[S_1]\cdot [S_2]=-\# (S_1\cap S_2 )$. \end{proof} When $(M,[g])$ is Zollfrei, we thus obtain the following: \begin{lem}\label{any} Let $(M,[g])$ be a space-time-orientable Zollfrei self-dual manifold. Then any two $\beta$-surfaces in $M$ have non-empty intersection. 
Moreover, any two distinct $\beta$-surfaces meet in exactly $m$ points, where the homological self-intersection of any $\beta$-surface $S$ is given by $ [S] \cdot [S] = -m <0$. \end{lem} \begin{proof} Let $S$ be a reference $\beta$-surface, and suppose that we wish to understand the intersection of two given $\beta$-surfaces $S_1$ and $S_2$. If $S_1=S_2$, they certainly intersect, and there is nothing to prove. Otherwise, Lemma \ref{homotopic} tells us that $[S_1]=[S_2]=[S]$, and Lemma \ref{insect} then yields $\# (S_1\cap S_2 )= - [S_1]\cdot [S_2]= -[S]\cdot [S]$. In particular, the number $m$ of points of intersection is independent of {\em which} pair of distinct $\beta$-surfaces we choose to consider. But since every $\beta$-plane is tangent to a $\beta$-surface, and since we have a circle's worth of different $\beta$-planes in each tangent space $T_xM$, we can certainly find pairs $(S_1, S_2)$ with $S_1\neq S_2$ and $S_1\cap S_2\neq \emptyset$. Thus $m > 0$, and we are done. \end{proof} \begin{lem} \label{compact} If $(M,[g])$ is a Zollfrei self-dual $4$-manifold, then $M$ is compact. \end{lem} \begin{proof} By passing to a double cover if necessary, we may assume that $(M,g)$ is space-time-oriented. Fix a reference $\beta$-surface $S\subset M$. Then for any point $x\in M$, there is a $\beta$-surface through $x$ that meets $S$; indeed, there certainly {\em are} $\beta$-surfaces through $x$, and Lemma \ref{any} tells us that {\em any} of these must meet $S$. But this statement can be rewritten as the assertion that $$M= {\zap p}\left[{\zap q}^{-1}\left({\zap q}\left[{\zap p}^{-1}(S)\right]\right)\right].$$ Since ${\zap p}$ and ${\zap q}$ are both proper maps, and since $S$ is compact, it therefore follows that $M$ is compact, too.
\end{proof} If $(M,g)$ is a space-time-oriented Zollfrei $4$-manifold, each fiber ${\zap p}^{-1}(x)$ of $F\to M$ is an oriented circle, and its image ${\zap q}[{\zap p}^{-1}(x)]$ in $P$ may be thought of as a map $\gamma_x : S^1 \to P$, which we will call a {\em standard loop}. \begin{prop} \label{structure} Let $M$ be space-time-orientable self-dual Zollfrei, and let $P$ be its space of $\beta$-surfaces. Then $P$ is diffeomorphic to $\mathbb R\mathbb P^3$, and $\pi_1 (P) \cong \mathbb{Z}_2$ is generated by any standard loop $\gamma_x$, $x\in M$. Moreover, any two distinct $\beta$-surfaces in $M$ meet in exactly two points. \end{prop} \begin{proof} Let $y\in P$ be any base point, and let $S\subset M$ be the corresponding $\beta$-surface. Every other $\beta$-surface in $M$ meets $S$ in $m$ distinct points, where $[S]\cdot [S] = -m$. Moreover, through every point of $S$, there passes a circle's worth of $\beta$-surfaces, only one of which is $S$. Now recall that the structure group of ${\zap p}$ is $SO_+(1,2)\cong PSL(2,\mathbb R)$. Thus, by removing the one point representing $TS$ from each fiber of ${\zap p}^{-1} (S) \to S$, we obtain an affine $\mathbb R$-bundle $L$ over $S\approx S^2$ which, via ${\zap q}$, maps locally diffeomorphically onto $P-\{ y \}$ in an $m$-to-$1$ fashion. Since any affine $\mathbb R$-bundle over $S^2$ is trivial, it follows that the universal cover of $P-\{ y \}$ is $L\approx S^2\times \mathbb R= S^3 -\{ 2 \mbox{ points}\}$; and since the order of this covering is $m=-[S]\cdot [S]$, we also see that $|\pi_1(P-\{ y\})|=m$. But $\pi_1(P)= \pi_1(P-\{ y\})$, since removing a point from a $3$-manifold doesn't change its fundamental group. The universal cover of $P-\{ y \}$ is therefore gotten from the universal cover $\tilde{P}$ of $P$ by removing $|\pi_1(P)|=m$ points. Since the universal cover of $P-\{ y \}$ has $m$ ends, whereas $S^2\times \mathbb R$ has just two ends, it follows that $m=2$.
Thus $P= S^3 /\mathbb{Z}_2$ for some free $\mathbb{Z}_2$-action, and a theorem of Livesay \cite{live} tells us that $P\approx \mathbb R\mathbb P^3$. Finally, notice that the fiber of $L\to S$ defines a lift $\tilde{\gamma}_x$ of $\gamma_x$ to $\tilde{P}\approx S^3$. Since this lift is not a loop, but rather is a curve joining the two pre-images of $y$, it follows that $\gamma_x$ is non-trivial in $\pi_1 (P)$. Thus $[\gamma_x]$ generates $\pi_1 (P)\cong\pi_1 (\mathbb R\mathbb P^3)\cong \mathbb{Z}_2$, and we are done. \end{proof} Imitating the proof of Lemma \ref{unisphere} now gives us the following: \begin{lem}\label{folia} If $(M,[g])$ is space-time-orientable and self-dual Zollfrei, then $F$ is diffeomorphic to $\mathbb R\mathbb P^3\times S^2$ in such a manner that ${\zap q}$ becomes the first-factor projection $\mathbb R\mathbb P^3\times S^2\to \mathbb R\mathbb P^3$. \end{lem} \begin{proof} Let $\varpi: S(TP) \to P$ denote the sphere bundle defined by $$S(TP)= (TP-0_P)/\mathbb R^+,$$ where $0_P\subset TP$ denotes the zero section, and where the positive reals $\mathbb R^+$ act on $TP$ by scalar multiplication. That is, $S(TP)$ may be thought of as the unit tangent bundle of $P$ for any choice of Riemannian metric on $P$. Let $u$ be a non-zero vector field on $F$ which spans $\ker {\zap p}_*$ at each point; this is possible because the choice of a metric-compatible decomposition $TM= T_-\oplus T_+$ allows one to realize ${\zap p}: F\to M$ as the principal $SO(2)$-bundle of orientation-reversing isometries $T_-\to T_+$.
Since the fibers of ${\zap p}$ and ${\zap q}$ are nowhere tangent, we can therefore define a map \begin{eqnarray*} \Phi: F & \to & S (TP) \\ z & \mapsto & [{\zap q}_*u|_z] \end{eqnarray*} which makes the diagram \setlength{\unitlength}{1ex} \begin{center}\begin{picture}(40,17)(0,3) \put(10,17){\makebox(0,0){$F$}} \put(18,19){\makebox(0,0){$\Phi$}} \put(18,5){\makebox(0,0){$P$}} \put(12,11){\makebox(0,0){$\zap q$}} \put(24,11){\makebox(0,0){$\varpi$}} \put(26,17){\makebox(0,0){$S (TP)$}} \put(11,15.5){\vector(2,-3){6}} \put(25,15.5){\vector(-2,-3){6}} \put(12,17){\vector(1,0){10}} \end{picture}\end{center} commute. Over each point of $P$, the map $\Phi$ is just the lift of the developing map $\mbox{\cyr f} : S \to \mathbb R\mathbb P^2$ constructed in Proposition \ref{projflat} to the universal cover $S^2$ of $\mathbb R\mathbb P^2$, and so is a diffeomorphism. Hence $\Phi$ is a bijection. Moreover, since $\zap q$ and $\varpi$ are both submersions, it follows that $\Phi_*$ has maximal rank everywhere, and $\Phi$ is therefore a diffeomorphism. However, $P\approx \mathbb R\mathbb P^3$ is parallelizable, so $F\approx S (TP) \approx \mathbb R\mathbb P^3 \times S^2$, as claimed. \end{proof} \begin{thm}\label{s2xs2} If $(M,[g])$ is a space-time-orientable self-dual Zollfrei $4$-manifold, then $M$ is homeomorphic to $S^2 \times S^2$. \end{thm} \begin{proof} Since the standard loop $\gamma_x={\zap q}[{\zap p}^{-1}(x)]$ generates $\pi_1(P)$, the pull-back map $${\zap q}^*: H^1(P, \mathbb{Z}_2)\to H^1 (F, \mathbb{Z}_2)$$ sends the generator of $H^1(P, \mathbb{Z}_2)\cong \mathbb{Z}_2$ to an element of $H^1(F, \mathbb{Z}_2)$ which is non-trivial on the fiber class of ${\zap p}$. This shows that there is a double cover $\tilde{F}\to F$ which restricts to a double cover $S^1\to S^1$ of each fiber of ${\zap p}$. Now choose any $g$-adapted, orientation-compatible almost-complex structure ${\mathfrak J}$ on ${M}$.
The $S^1$-bundle ${\zap p}: F\to M$ can then be identified with the unit circle bundle of the canonical line bundle $K$ of $({M}, {\mathfrak J})$. The double cover $\tilde{F}\to F$ then becomes the unit circle bundle of a square-root $K^{1/2}$ of $K$. Hence $c_1({M}, {\mathfrak J})$ is divisible by $2$ in $H^2(M,\mathbb{Z})$. Because $w_2(M)$ is the mod-$2$ reduction \cite{milnorstaf} of $c_1({M}, {\mathfrak J})$, and because the sequence $$\cdots \to H^2(M,\mathbb{Z} ) \stackrel{2}{\longrightarrow} H^2(M,\mathbb{Z} )\to H^2(M,\mathbb{Z}_2 )\to \cdots $$ is exact, it follows that $w_2(M)=0$. Thus $M$ is a spin manifold. Since $F\approx \mathbb R\mathbb P^3 \times S^2$, its universal cover must be $\tilde{F}\approx S^3\times S^2$. Hence the long exact homotopy sequence \cite{span} $$\cdots \to \pi_2 (S^1) \to \pi_2(\tilde{F})\to \pi_2 (M) \to \pi_1 (S^1)\to \pi_1 (\tilde{F})\to \pi_1 (M)\to 0$$ of the fibration $S^1\to \tilde{F}\to M$ now tells us that $\pi_1(M)=0$ and $\pi_2(M) = \mathbb{Z}\oplus \mathbb{Z}$. Thus $M$ is a simply connected compact $4$-manifold with $b_2=2$ and even intersection form. The Freedman classification of simply connected $4$-manifolds \cite{freedman} therefore tells us that $M$ is homeomorphic to $S^2 \times S^2$. \end{proof} In fact, it seems reasonable to conjecture that any space-time-orientable self-dual Zollfrei $4$-manifold must actually be {\em diffeomorphic} to $S^2\times S^2$. While we have not managed to prove this stronger statement in general, we will eventually see, in Theorem \ref{bihol} below, that it at least turns out to be true if $[g]$ is represented by an indefinite K\"ahler metric. \bigskip We now turn to the {non}-space-time-orientable case. \begin{prop}\label{rp2} Let $(M,[g])$ be a Zollfrei self-dual $4$-manifold which is {\em not} space-time-orientable. Then every $\beta$-surface in $M$ is an embedded $\mathbb R\mathbb P^2$, and every pair of distinct $\beta$-surfaces intersects in exactly one point. 
\end{prop} \begin{proof} Notice that our definition of self-duality requires that $M$ be orientable. Thus the set $\tilde{M}$ of orientation-compatible local space-time orientations of $(M,[g])$ is a double cover of $M$. Notice that $\tilde{M}$ is space-time-orientable and self-dual Zollfrei with respect to the pulled-back metric. Let $a: \tilde{M}\to \tilde{M}$ be the non-trivial deck transformation. If $S\subset \tilde{M}$ is any $\beta$-surface, then we claim that $a [S] = S$. Indeed, suppose not. Then $a [S] = S^\prime$ would be a different $\beta$-surface, and hence $S\cap S^\prime$ would consist of exactly two points by Proposition \ref{structure}; and since $a[S\cap S^\prime ] = a [S] \cap a[a[S]] = S^\prime \cap S$, these two points would necessarily be interchanged by the fixed-point-free involution $a$. On the other hand, all the other points of $S$ would be moved to the complement of $S$ by $a$. Hence the image of $S$ in $M= \tilde{M}/\langle a \rangle$ would be an immersed sphere with a single self-intersection. But this contradicts Lemma \ref{beta}. Thus every $\beta$-surface in $\tilde{M}$ must be sent to itself by $a$. It follows that every $\beta$-surface in $\tilde{M}$ is the double cover of a $\beta$-surface in $M$. Since all the $\beta$-surfaces in $\tilde{M}$ are $2$-spheres by Lemma \ref{sphere}, and since every $\beta$-surface in $M$ must be the image of a $\beta$-surface in $\tilde{M}$, it follows that every $\beta$-surface in $M$ must be an $\mathbb R\mathbb P^2$. Moreover, since $\tilde{M}\to M$ is a double cover, and since $\beta$-surfaces in $\tilde{M}$ intersect in pairs of points, pairs of distinct $\beta$-surfaces in $M$ must always intersect in a unique point. \end{proof} \begin{thm}\label{tastier} Let $(M,[g])$ be a self-dual split-signature $4$-manifold.
Then the following are equivalent: \begin{romenumerate} \item $(M,[g])$ is Zollfrei; \item $(M,[g])$ is strongly Zollfrei; \item exactly one of the following holds: \begin{alphenumerate} \item every $\beta$-surface is an embedded $S^2\subset M$; or \item every $\beta$-surface is an embedded $\mathbb R\mathbb P^2\subset M$. \end{alphenumerate} \end{romenumerate} \end{thm} \begin{proof} Notice that $(iii)\Longrightarrow (i)$ by Proposition \ref{projflat} and the uniqueness \cite{nico} of the flat projective structures on $\mathbb R\mathbb P^2$ and $S^2$. Thus Lemma \ref{sphere} and Proposition \ref{rp2} tell us that $(iii)(a)$ can only occur if $(M,[g])$ is space-time orientable, whereas $(iii)(b)$ can only occur if $(M,[g])$ is {\em not} space-time orientable. If $(M,[g])$ is space-time orientable, the desired equivalence is therefore given by Theorem \ref{tasty}. If, on the other hand, $(M,[g])$ is not space-time orientable, then $(ii)\Longrightarrow (i)$ by Definition \ref{guil}, and $(i)\Longrightarrow (iii)$ by Proposition \ref{rp2}. On the other hand, $(iii)(b)\Longrightarrow (ii)$, too. Indeed, the space-time-orientable double cover $\tilde{M}$ of $M$ is Zollfrei, and hence strongly Zollfrei by Theorem \ref{tasty}. The non-trivial deck transformation $a$ of $\tilde{M}\to M$ must therefore send each null geodesic to itself by the uniqueness \cite{nico} of the flat projective structure on $\mathbb R\mathbb P^2$. \end{proof} Proposition \ref{rp2} also allows us to deduce the following: \begin{lem}\label{nonspin} Let $(M,[g])$ be a Zollfrei self-dual $4$-manifold which is {not} space-time-orientable. Then $M$ is non-spin. \end{lem} \begin{proof} Let ${\mathbf b}\in H^2(M,\mathbb{Z}_2 )$ denote the Poincar\'e dual of the $\mathbb{Z}_2$-homology class of any $\beta$-surface $S\subset M$. 
Since any two distinct $\beta$-surfaces are freely homotopic and intersect transversely in exactly one point, we have ${\mathbf b } \cdot {\mathbf b}= 1\in \mathbb{Z}_2$, where $$\cdot : H^2(M,\mathbb{Z}_2 ) \times H^2(M,\mathbb{Z}_2 ) \to \mathbb{Z}_2$$ is the intersection form of $M$ with $\mathbb{Z}_2$ coefficients. But since $M$ is orientable, Wu's formula \cite{hirzwu} asserts that $w_2(M)$ satisfies $$w_2 \cdot {\mathbf x}= {\mathbf x} \cdot {\mathbf x}$$ for any ${\mathbf x}\in H^2 (M, \mathbb{Z}_2),$ so we have $$w_2 \cdot {\mathbf b} = {\mathbf b } \cdot {\mathbf b}= 1.$$ Thus $w_2(M)\neq 0$, and $M$ is non-spin, as claimed. \end{proof} \begin{thm} \label{quadric} Let $(M,[g])$ be a Zollfrei self-dual $4$-manifold which is {not} space-time-orientable. Then $M$ is homeomorphic to the real projective quadric ${\mathbb M}^{2,2}$. \end{thm} \begin{proof} Freedman's topological classification of simply connected $4$-manifolds has been extended to compact oriented $4$-manifolds with finite cyclic fundamental group by Hambleton and Kreck \cite[Theorem C]{hmbkrck}. They show that such manifolds are classified up to homeomorphism by their fundamental groups, their intersection forms on $H^2(\bullet , \mathbb{Z})/\mbox{torsion}$, their $w_2$-types, and their Kirby-Siebenmann invariants. The Kirby-Siebenmann invariant vanishes if a manifold admits a smooth structure. The $w_2$-type of a $4$-manifold says whether the manifold and its universal cover are spin; namely, an oriented manifold $M$ with universal cover $\tilde{M}$ is said to be of type (I) if $w_2(\tilde{M})\neq 0$, type (II) if $w_2(M)=0$, and type (III) if $w_2(\tilde{M})=0$, but $w_2(M)\neq 0$. Now assume that $(M,[g])$ is a non-space-time-orientable self-dual Zollfrei manifold. Then $M$ is smooth, and so has vanishing Kirby-Siebenmann invariant. Also, $M$ is oriented, as is required by our definition of self-duality. 
Now recall that the double cover $\tilde{M}$ of $M$ by its local space-time orientations is a space-time-orientable Zollfrei self-dual $4$-manifold, and so is homeomorphic to $S^2\times S^2$ by Theorem \ref{s2xs2}. Since $S^2\times S^2$ is simply connected, $\tilde{M}$ is actually the universal cover, and we therefore have $\pi_1(M)= \mathbb{Z}_2$. Moreover, the Euler characteristic of $M$ must be $\chi (M)= \chi(S^2\times S^2)/2=2$, so $H^2(M,\mathbb{Z})/\mbox{torsion}=0$, and the intersection form of $M$ must therefore be trivial. Finally, $w_2(M)\neq 0$ by Lemma \ref{nonspin}, whereas $w_2(\tilde{M})=w_2(S^2\times S^2)=0$, so $M$ is of type (III). Hambleton and Kreck therefore tell us that there is only one possible homeomorphism type for such an $M$. The quadric ${\mathbb M}^{2,2}\subset \mathbb R\mathbb P^5$ therefore represents the only topological possibility. \end{proof} Combining Theorems \ref{s2xs2} and \ref{quadric}, we have thus proved {\bf Theorem \ref{topprop}}. \section{Stability of the Zollfrei Condition} We now turn to the important assertion that the Zollfrei condition is open among self-dual metrics. This phenomenon is actually a manifestation of aspects of the theory of foliations arising from Thurston stability for compact leaves of foliations \cite{thurstfol}. The result we will need is originally due to Langevin and Rosenberg \cite{stability}, although the formulation given here is actually that of Epstein and Rosenberg \cite{epros}. \begin{thm}[Langevin-Rosenberg] \label{lep} Let $\mbox{\cyr p} : X\to Y$ be a $C^1$ fiber bundle with compact fibers and compact base, where the fibers of $\mbox{\cyr p}$ have first Betti number $b_1=0$ with coefficients in $\mathbb R$. Let $\mathfrak F$ be the foliation of $X$ by the fibers of $\mbox{\cyr p}$.
Then the foliation $\mathfrak F$ has a neighborhood $\mathscr V$ in the $C^1$ Epstein topology on the space of foliations of $X$ such that every foliation ${\mathfrak F}^\prime\in {\mathscr V}$ is of the form $\phi^* {\mathfrak F}$ for some $C^1$-diffeomorphism $\phi : X\to X$. \end{thm} Here two $C^1$ foliations of $X$ are close in the $C^1$ {\em Epstein topology} \cite{eptop} if there are finite atlases of trivializing charts for the two foliations which are close in the usual $C^1$ topology on the space of maps. The only thing that need concern us here is that two $C^1$ integrable distributions of $k$-planes which are $C^1$ close as sections of the Grassmann bundle $Gr_k(TX)\to X$ define foliations which are close in Epstein's sense. Combining Theorem \ref{lep} with our results from \S \ref{prelim}, we thus obtain \setcounter{main}{0} \begin{main} Let $(M,g)$ be a self-dual Zollfrei $4$-manifold. Then any other self-dual metric $g'$ on $M$ that is sufficiently close to $g$ in the $C^2$ topology is also Zollfrei. \end{main} \begin{proof} There is a $C^0$ neighborhood of $g$ in the space of pseudo-Riemannian metrics in which every metric $g^\prime$ can be written as $g^\prime= A^*g$ for a unique $g$-self-adjoint endomorphism $A: TM\to TM$ which is $C^0$ close to the identity. This endomorphism of $TM$ allows one to identify the pseudo-orthonormal frame bundles of $g$ and $g^\prime$. Moreover, if $g^\prime$ is $C^2$ close to $g$, the corresponding principal connections are then $C^1$-close after this correspondence has been made. Using $A$ to identify the bundle of $\beta$-planes for $g^\prime$ with the bundle ${\zap p}: F\to M$ of $\beta$-planes for $g$, we then obtain two distributions $E$ and $E^\prime$ on $F$ which represent the horizontal lifts of the $\beta$-planes of $g$ and $g^\prime$, respectively; and these two distributions will be $C^1$ close if we again assume that $g$ and $g^\prime$ are $C^2$ close.
Now if $g$ and $g^\prime$ are both self-dual, the distributions $E$ and $E^\prime$ will both be integrable, and will be tangent to foliations ${\mathscr F}$ and ${\mathscr F}^\prime$ that represent the canonical lifts of the $\beta$-surfaces of the two metrics. Moreover, ${\mathscr F}^\prime$ will be $C^1$ close to $\mathscr F$ if we assume that $g^\prime$ is $C^2$ close to $g$. But if, in addition, $g$ is Zollfrei, the leaves of the foliation $\mathscr F$ will exactly be the fibers of a fiber bundle ${\zap q}: F\to P$. Now $F$ is necessarily compact by Lemma \ref{compact}, while Theorem \ref{tastier} tells us that the fibers of ${\zap q}$ are spheres or projective planes. Since these are compact surfaces with $b_1=0$, we may therefore apply Theorem \ref{lep} to conclude that there is a $C^1$ diffeomorphism $\phi : F\to F$ which sends $\mathscr F$ to ${\mathscr F}^\prime$. Thus, if $g$ is self-dual and Zollfrei, and if $g^\prime$ is self-dual and $C^2$ close to $g$, then the $\beta$-surfaces of $g^\prime$ are either all spheres or all projective planes, and Theorem \ref{tastier} therefore tells us that $g^\prime$ is Zollfrei, too, as claimed. \end{proof} \section{Constructing the Twistor Space} At this point, we have already achieved a certain level of intimacy with the bundle ${\zap p} : F\to M$ of {real} $\beta$-planes over an oriented split-signature conformal $4$-manifold $(M,[g])$. It is now time to introduce the bundle $\wp : {\mathcal Z}\to M$ of {\em complex} $\beta$-planes. Just as in the real case, a $2$-dimensional complex subspace $\Pi$ of a complexified tangent space $T_\mathbb{C} M|_x= \mathbb{C}\otimes T_xM$ of $M$ is called {\em isotropic} if the complex-bilinear extension of $g$ vanishes when restricted to $\Pi$. Such isotropic planes come in two flavors.
The complex $\alpha$-planes are precisely those complex $2$-planes $\Pi$ such that $\wedge^2 \Pi$ corresponds by index lowering to a complex null line in $\Lambda^+_\mathbb{C}$; the complex $\beta$-planes instead correspond to null $1$-dimensional subspaces of $\Lambda^-_\mathbb{C}$. Thus, the bundle of complex $\beta$-planes on $M$ is exactly given by $${\mathcal Z}= \{ [\varphi ] \in \mathbb P (\Lambda^-_\mathbb{C})~|~ \langle \varphi , \varphi \rangle =0\},$$ where $\langle \varphi , \psi \rangle = \frac{1}{2}\varphi_{ab}\psi_{cd}g^{ac}g^{bd}$ is the complex-bilinear extension to $\Lambda^2_\mathbb{C}$ of the inner-product on $2$-forms induced by $g$. Since $\mathbb P (\Lambda^-_\mathbb{C})$ is a $\mathbb C\mathbb P_2$-bundle over $M$, each fiber of $\mathcal Z$ is a non-degenerate conic in $\mathbb C\mathbb P_2$, and so is intrinsically a $\mathbb C\mathbb P_1$. Indeed, $\mathcal Z$ is precisely the $\mathbb C\mathbb P_1$-bundle obtained from $F\to M$ by remembering that $F$ has structure group $PSL(2, \mathbb R)$, and that one can therefore construct an associated $\mathbb C\mathbb P_1$-bundle over $M$ by including $PSL(2,\mathbb R)$ in $PSL(2,\mathbb{C})$ and considering the standard action of $PSL(2,\mathbb{C})$ on $\mathbb C\mathbb P_1$. In particular, each fiber of $\wp : {\mathcal Z}\to M$ is a holomorphic curve. Let ${\mathzap V}^{0,1}\subset T_\mathbb{C} {\mathcal Z}$ be the $(0,1)$-tangent bundle of the fibers. Fix a metric $g$ in the conformal class, and notice that $g$ determines a connection on $\mathcal Z$, in the sense that $g$ determines a notion of parallel transport of elements of $\mathcal Z$ along smooth curves in $M$. Let ${\mathzap H} \subset T{\mathcal Z}$ be the horizontal subspace of this connection, so that the derivative of the projection gives us a canonical isomorphism $\wp_*: {\mathzap H}\to \wp^* TM$. Let ${\mathzap H}_\mathbb{C}= {\mathzap H}\otimes\mathbb{C}$.
Then $\mathcal Z$ carries a unique distribution ${\mathzap E}\subset {\mathzap H}_\mathbb{C}\subset T_\mathbb{C} {\mathcal Z}$ of horizontal complex $2$-planes such that $$\wp_* ({\mathzap E}|_\Pi)= \Pi \subset T_\mathbb{C} M.$$ Set $${\mathzap D}= {\mathzap E}+ {\mathzap V}^{0,1}.$$ Since ${\mathzap E}$ is horizontal and ${\mathzap V}^{0,1}$ is vertical, this sum is in fact a direct sum, and $\mathzap D$ is therefore a distribution of complex $3$-planes on $\mathcal Z$. Let us make this discussion more concrete by temporarily restricting our attention to an open subset ${\mathscr U}\subset M$ on which we can find an oriented pseudo-orthonormal frame $e_1, \ldots , e_4$ with $$g(e_j, e_k) = \left\{ \begin{array}{rl} 0& \mbox{ if } j\neq k,\\ 1 &\mbox{ if } j=k\in\{1, 2\}, \\ -1&\mbox{ if } j=k \in \{3, 4\}. \end{array} \right. $$ We remark that if $g$ is of differentiability class $C^{k}$, then such frames $e_1, \ldots , e_4$ of class $C^{k}$ can locally be constructed by means of the Gram-Schmidt procedure. This in turn determines a pseudo-orthonormal basis for $\Lambda^{-}|_{\mathscr U}$ by setting \begin{eqnarray*} \varphi_1 & = & \frac{1}{\sqrt{2}}(e^1\wedge e^2- e^3\wedge e^4) \\ \varphi_2& = & \frac{1}{\sqrt{2}}(e^1\wedge e^3 -e^2\wedge e^4) \\ \varphi_3& = & \frac{1}{\sqrt{2}}(e^1\wedge e^4 + e^2\wedge e^3) \end{eqnarray*} so that $$ \langle \varphi_{\mathzap j} , \varphi_{\mathzap k} \rangle = \left\{ \begin{array}{rl} 0& \mbox{ if } {\mathzap j}\neq {\mathzap k},\\ 1 &\mbox{ if } {\mathzap j}={\mathzap k}=1, \\ -1&\mbox{ if } {\mathzap j}={\mathzap k} \in \{2, 3\}. \end{array} \right. $$ We can then identify $\mathbb C\mathbb P_1\times {\mathscr U}$ with $\wp^{-1}({\mathscr U})\subset {\mathcal Z}$ by $$([\zeta_1: \zeta_2], x) \longmapsto \left.
\left[(\zeta_1^2+ \zeta_2^2)~\varphi_1 + (\zeta_1^2- \zeta_2^2)~ \varphi_2 - 2\zeta_1\zeta_2 ~\varphi_3 \right] \right|_x ,$$ and it is worth noting that in the process we have identified $\mathbb R\mathbb P^1\times {\mathscr U}$ with ${\zap p}^{-1}({\mathscr U})\subset F \subset {\mathcal Z}$. In particular, an open dense subset of $\wp^{-1}({\mathscr U})$ may be parameterized by $\mathbb{C} \times {\mathscr U}$, via the map $$(\zeta , x ) \longmapsto [(1+\zeta^2)~\varphi_1 + (1-\zeta^2)~ \varphi_2 - 2\zeta ~\varphi_3 ]\Big|_x,$$ and in the process we sweep out an open dense subset of ${\zap p}^{-1}({\mathscr U})$ with $\mathbb R \times {\mathscr U.}$ Notice that for each $(\zeta, x)$ with $\zeta\neq \pm i$, the corresponding $\beta$-plane is exactly $$\Pi = \mbox{span}\left. \left\{ (\zeta^2+1)e_1 -2\zeta e_3 + (\zeta^2-1)e_4 ~,~ (\zeta^2+1)e_2+ (\zeta^2-1)e_3 + 2\zeta e_4 \right\} \right|_x. $$ Now observe that we have $$\nabla \varphi_{{\mathzap j}} = \theta_{{\mathzap j}}^{{\mathzap k}} \otimes \varphi_{{\mathzap k}},$$ for an ${\mathfrak s \mathfrak o}(1,2)$-valued $1$-form $[\theta_{{\mathzap j}}^{\mathzap k}]$: $$ \theta^1_2= \theta^2_1, \hspace{.5cm} \theta^1_3= \theta^3_1, \hspace{.5cm} \theta^2_3= -\theta^3_2, \hspace{.5cm}\theta^1_1=\theta^2_2=\theta^3_3= 0. $$ When we then expand these $1$-forms as $\theta_{{\mathzap j}}^{\mathzap k}=\theta_{{\mathzap j}\ell}^{\mathzap k} e^\ell$ the resulting functions $\theta_{{\mathzap j}\ell}^{\mathzap k} $ are just linear combinations of the components of the usual connection symbols of the frame, and so are of class $C^{k-1}$ if our frame is of class $C^{k}$. 
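The identities just used are elementary but fiddly, and can be confirmed symbolically. The following \texttt{sympy} computation (ours; it assumes only the frame conventions above, and writes $v_1, v_2$ for the two displayed spanning vectors of $\Pi$) checks that $\Pi$ is totally null for every $\zeta$, and that index-lowering the bivector $v_1\wedge v_2$ recovers the $2$-form $(1+\zeta^2)\,\varphi_1 + (1-\zeta^2)\,\varphi_2 - 2\zeta\,\varphi_3$ up to the factor $\sqrt{2}(\zeta^2+1)$, which is non-zero precisely when $\zeta\neq \pm i$.

```python
import sympy as sp

z = sp.symbols('zeta')
G = sp.diag(1, 1, -1, -1)          # g(e_j, e_k) in the frame above

# The two displayed spanning vectors of the beta-plane Pi:
v1 = sp.Matrix([z**2 + 1, 0, -2*z, z**2 - 1])
v2 = sp.Matrix([0, z**2 + 1, z**2 - 1, 2*z])

# Pi is totally null: every g-inner product of v1, v2 vanishes identically.
assert all(sp.expand((a.T*G*b)[0]) == 0 for a in (v1, v2) for b in (v1, v2))

# The 2-forms phi_j as antisymmetric matrices (phi_j)_{ab}:
def two_form(i, j):
    F = sp.zeros(4, 4)
    F[i, j], F[j, i] = 1, -1
    return F

r2 = 1/sp.sqrt(2)
phi1 = r2*(two_form(0, 1) - two_form(2, 3))
phi2 = r2*(two_form(0, 2) - two_form(1, 3))
phi3 = r2*(two_form(0, 3) + two_form(1, 2))
phi = (1 + z**2)*phi1 + (1 - z**2)*phi2 - 2*z*phi3

# Index-lowering the bivector B^{ab} = v1^a v2^b - v1^b v2^a recovers phi,
# up to the factor sqrt(2)(zeta^2 + 1), non-zero for zeta != +/- i:
B_low = G*(v1*v2.T - v2*v1.T)*G
assert (B_low - sp.sqrt(2)*(z**2 + 1)*phi).applyfunc(sp.expand) == sp.zeros(4, 4)
```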
The distribution $\mathzap D$ now becomes $$ {\mathzap D} = \mbox{span} \left\{ {\mathfrak w}_1, {\mathfrak w}_2, \frac{\partial}{\partial\overline{\zeta}} \right\} $$ on $(\mathbb{C} -\{ \pm i\}) \times {\mathscr U}$, where the vector fields \begin{eqnarray*} {\mathfrak w}_1 & = & (\zeta^2+1)e_1 -2\zeta e_3 + (\zeta^2-1)e_4 + Q_1(x,\zeta) \frac{\partial}{\partial \zeta}\\ {\mathfrak w}_2 & = & (\zeta^2+1)e_2+ (\zeta^2-1)e_3 + 2\zeta e_4+ Q_2(x,\zeta) \frac{\partial}{\partial \zeta} \end{eqnarray*} are defined in terms of the functions \begin{eqnarray*} Q_1(x,\zeta) & = & \frac{1-\zeta^2}{2}\Big[ (\zeta^2+1)\theta^3_{11} - 2\zeta\theta^3_{13} + (\zeta^2-1)\theta^3_{14} \Big]\\ &&\hspace{1cm}+ \zeta \Big[ (\zeta^2+1)\theta^2_{11}- 2\zeta\theta^2_{13}+ (\zeta^2-1)\theta^2_{14} \Big]\\ &&\hspace{2cm} -\frac{1+\zeta^2}{2}\Big[(\zeta^2+1)\theta^2_{31} - 2\zeta\theta^2_{33} + (\zeta^2-1)\theta^2_{34} \Big] \\ Q_2(x,\zeta) & = & \frac{1-\zeta^2}{2}\Big[ (\zeta^2+1) \theta^3_{12} + (\zeta^2-1)\theta^3_{13} + 2\zeta \theta^3_{14} \Big]\\ &&\hspace{1cm}+ \zeta \Big[ (\zeta^2+1) \theta^2_{12}+ (\zeta^2-1)\theta^2_{13} + 2\zeta \theta^2_{14} \Big]\\ &&\hspace{2cm} -\frac{1+\zeta^2}{2}\Big[(\zeta^2+1)\theta^2_{32} + (\zeta^2-1) \theta^2_{33} + 2\zeta \theta^2_{34} \Big] \end{eqnarray*} The minuti{\ae} of these expressions are of little importance, but three facts are worthy of emphasis. First of all, the components of ${\mathfrak w}_1$ and ${\mathfrak w}_2$ in the basis $e_1, \ldots , e_4, \partial/\partial \zeta$ are polynomial in $\zeta$ for any fixed $x\in {\mathscr U}$, and so, in particular, $$ \left[ \frac{\partial}{\partial \overline{\zeta}}, {\mathfrak w}_1\right] = \left[ \frac{\partial}{\partial \overline{\zeta}}, {\mathfrak w}_2\right] = 0. 
$$ Secondly, we have chosen the vector fields ${\mathfrak w}_1$ and ${\mathfrak w}_2$ to be real and {\em horizontal} along the locus $F$ where $\zeta$ is real\footnote{The ${\mathfrak w}_j$ could only be forced to be horizontal {\em everywhere} at the price of adding multiples of $\partial/\partial \overline{\zeta}$ to them. We have avoided doing so here because the relevant coefficients would generally {\em not} be holomorphic in $\zeta$, and the Lie brackets of the ${\mathfrak w}_j$ with $\partial/\partial \overline{\zeta}$ would therefore no longer vanish.}. Finally, notice that $\mathzap D$ is spanned by $C^{k-1}$ vector fields if $g$ is of class $C^{k}$. \begin{prop}\label{critter} Let $(M,g)$ be an oriented split-signature $C^2$ pseudo-Riemannian $4$-manifold. Let $\wp : {\mathcal Z}\to M$ be the bundle of complex $\beta$-planes in $T_\mathbb{C} M$, and let ${\mathzap D}\subset T_\mathbb{C}{\mathcal Z}$ be the $C^1$ distribution of complex $3$-planes defined above. Then ${\mathzap D}$ is involutive, in the sense that $$[C^1({\mathzap D}), C^1({\mathzap D})]\subset C^0({\mathzap D}),$$ iff $(M,g)$ is self-dual. \end{prop} \begin{proof} Let us begin by noticing that $${\mathzap D}\cap T_\mathbb{C} F= {\mathzap E}|_F= E\otimes \mathbb{C} ,$$ where the real distribution of $2$-planes $E$ on $F$ is defined on page \pageref{dodger}. Also recall that Proposition \ref{roger} tells us that $E$ is Frobenius integrable iff $g$ is self-dual. Now, suppose that ${\mathzap D}$ is involutive. Then both $T_\mathbb{C} F$ and $\mathzap D$ are closed under Lie brackets. Hence ${\mathzap D}\cap T_\mathbb{C} F= E\otimes \mathbb{C}$ is closed under Lie brackets, too. Thus $E$ is Frobenius integrable, and Proposition \ref{roger} therefore tells us that $g$ is self-dual. Conversely, suppose that $g$ is self-dual. Then Proposition \ref{roger} tells us that $E\to F$ is involutive.
Let ${\mathscr U}\subset M$ be any open set on which there exists a pseudo-orthonormal frame $e_1, \ldots, e_4$, and consider the vector fields ${\mathfrak w}_1$ and ${\mathfrak w}_2$ constructed on an open dense subset of $\wp^{-1}({\mathscr U})$ above. Along $F$, the vector fields ${\mathfrak w}_1$ and ${\mathfrak w}_2$ are linearly independent sections of the involutive rank-$2$ bundle $E\subset TF$, so $$\left[ {\mathfrak w}_1, {\mathfrak w}_2\right] \wedge {\mathfrak w}_1\wedge {\mathfrak w}_2 = 0 ~~~\mbox{ when } \zeta = \overline{\zeta}.$$ However, relative to the frame $e_1, \ldots, e_4, \partial/\partial \zeta$, the components of ${\mathfrak w}_1$ and ${\mathfrak w}_2$ are polynomial in $\zeta$, so it follows that the components of the tensor field $\left[ {\mathfrak w}_1, {\mathfrak w}_2\right]\wedge {\mathfrak w}_1\wedge {\mathfrak w}_2$ are polynomial in $\zeta$, too. But we have already seen that $\left[ {\mathfrak w}_1, {\mathfrak w}_2\right]\wedge {\mathfrak w}_1\wedge {\mathfrak w}_2$ vanishes when $\zeta$ is real. Hence $\left[ {\mathfrak w}_1, {\mathfrak w}_2\right]\wedge {\mathfrak w}_1\wedge {\mathfrak w}_2$ vanishes identically, and we therefore have $$ \Big[ \frac{\partial}{\partial \overline{\zeta}} , {\mathfrak w}_1\Big], \Big[ \frac{\partial}{\partial \overline{\zeta}} , {\mathfrak w}_2\Big] , \Big[ {\mathfrak w}_1, {\mathfrak w}_2\Big] \in \mbox{span}\left\{ \frac{\partial}{\partial \overline{\zeta}} , {\mathfrak w}_1, {\mathfrak w}_2\right\} . $$ Thus $\mathzap D$ is involutive on the region of $\wp^{-1}({\mathscr U})$ parameterized by $(\mathbb{C} -\{ \pm i\}) \times {\mathscr U}$, and the O'Neill tensor \begin{eqnarray*} A_{\mathzap D}: {\mathzap D}\times {\mathzap D}& \longrightarrow & T_\mathbb{C} {\mathcal Z}/{\mathzap D} \\ (u,v) &\mapsto & [u,v] \bmod {\mathzap D} \end{eqnarray*} therefore vanishes on an open dense subset of $\wp^{-1} ({\mathscr U})$.
But $A_{\mathzap D}$ is continuous, so it therefore vanishes on all of $\wp^{-1} ({\mathscr U})$. Since such subsets ${\mathscr U}$ cover all of $M$, it therefore follows that $\mathzap D$ is involutive on all of $\mathcal Z$. \end{proof} Similar reasoning also shows the following: \begin{prop} \label{conformal} Let $(M,[g])$ be an oriented split-signature self-dual $4$-manifold. Then the involutive distribution ${\mathzap D}$ on ${\mathcal Z}$ is conformally invariant --- that is, it depends only on the conformal class $[g]$, rather than on the metric $g\in [g]$. \end{prop} \begin{proof} Since multiplying $g$ by $-1$ does not change the metric connection, and therefore does not change ${\mathzap D}= {\mathzap E}\oplus {\mathzap V}^{0,1}$, it suffices to henceforth consider only conformally related pairs of metrics $g$ and $\hat{g}={\zap f}g$ for which the factor ${\zap f}$ is positive. Now the distribution $E$ on $F$ only depends on $[g]$, since it is tangent to the foliation $\mathscr F$ of $F$ by lifted $\beta$-surfaces. Now consider two metrics $g$ and $\hat{g}={\zap f}g$ in $[g]$, where ${\zap f}> 0$. If $e_1 , \ldots , e_4$ is a pseudo-orthonormal frame for $g$ on an open subset ${\mathscr U}\subset M$, then ${\zap f}^{-1/2}e_1 , \ldots , {\zap f}^{-1/2}e_4$ is a pseudo-orthonormal frame for $\hat{g}$. Let ${\mathfrak w}_j$ and $\hat{\mathfrak w}_j$ be the vector fields on $(\mathbb{C} -\{ \pm i\}) \times {\mathscr U}$ constructed from these two frames and metrics. Then ${\mathfrak w}_j$ and ${\zap f}^{1/2}\hat{\mathfrak w}_j$ coincide along $F$, since they are sections of $E$ with the same projections. But the components of ${\mathfrak w}_j$ and ${\zap f}^{1/2}\hat{\mathfrak w}_j$ (expressed, say, as linear combinations of the $e_j$ and $\partial/\partial \zeta$) are polynomial in $\zeta$. Since they coincide when $\zeta$ is real, we must therefore have ${\mathfrak w}_j\equiv {\zap f}^{1/2}\hat{\mathfrak w}_j$. 
Hence the distribution $\mathzap D$ determined by $g$ coincides with the distribution $\hat{\mathzap D}$ determined by $\hat{g}$ on an open dense subset of $\wp^{-1}({\mathscr U})$, and we therefore have $\mathzap D\equiv \hat{\mathzap D}$ on $\wp^{-1}({\mathscr U})$ by continuity. Since $M$ can be covered with such open sets ${\mathscr U}$, it therefore follows that $\mathzap D= \hat{\mathzap D}$ on all of $\mathcal Z$, as claimed. \end{proof} Actually, the conformal invariance of $\mathzap D$ holds even in the absence of the self-duality hypothesis, but we will never need this fact. It is also worth remarking that Proposition \ref{critter} could instead, for example, have been proved by imitating the arguments of Atiyah-Hitchin-Singer \cite{AHS}. The route we have chosen is not arbitrary, however, but rather is specifically intended to prepare the reader for the proof of Proposition \ref{machine} below. What is the `real' geometrical meaning of a point of the bundle $\wp : {\mathcal Z}\to M$? Obviously, the points of $F\subset {\mathcal Z}$ are real totally null $2$-planes, and there is not much more to be said. By contrast, a point of ${\mathcal Z}-F$ is a subspace $\Pi\subset T_x M\otimes \mathbb{C}$ with the property that $\Pi \cap \overline{\Pi}=0$. Thus $\Pi\oplus \overline{\Pi}= T_x M\otimes \mathbb{C}$, and we can therefore define a unique almost-complex structure $\jmath : T_xM\to T_xM$ at $x$ by declaring that $\Pi$ is its $(+i)$-eigenspace. The requirement that $\Pi$ be isotropic is then equivalent to the condition that $\jmath$ be an orthogonal transformation--- i.e. that $\jmath^*g= g$. Finally, the requirement that $\Pi$ be a $\beta$-plane, rather than an $\alpha$-plane, is exactly that $\jmath$ determine the {\em given} orientation of $M$, rather than the opposite one. This last requirement concretely amounts to asking that there be an oriented pseudo-orthonormal basis $e_1, \ldots , e_4$ with $\jmath e_1= e_2$ and $\jmath e_3= e_4$.
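This dictionary between points of ${\mathcal Z}-F$ and almost-complex structures can be tested numerically. The following \texttt{sympy} sketch (ours; the sample value $\zeta = 2i$ is arbitrary, and any $\zeta$ off the real axis with $\zeta\neq\pm i$ would serve) builds $\jmath$ from the explicit $\beta$-plane $\Pi$ parameterized earlier, by declaring $\Pi$ to be its $(+i)$-eigenspace, and confirms that the resulting endomorphism is real, squares to minus the identity, and satisfies $\jmath^* g = g$.

```python
import sympy as sp

G = sp.diag(1, 1, -1, -1)
z = 2*sp.I          # sample value with Im(zeta) > 0; any such choice works

# The spanning vectors of the beta-plane Pi at this value of zeta:
v1 = sp.Matrix([z**2 + 1, 0, -2*z, z**2 - 1])
v2 = sp.Matrix([0, z**2 + 1, z**2 - 1, 2*z])

# Pi + conj(Pi) spans the complexified tangent space, so these four
# columns form a basis; j is +i on Pi and -i on conj(Pi):
B = sp.Matrix.hstack(v1, v2, v1.conjugate(), v2.conjugate())
assert B.det() != 0                  # Pi and conj(Pi) meet only in 0
j = B*sp.diag(sp.I, sp.I, -sp.I, -sp.I)*B.inv()

assert j == j.conjugate()            # j is a real endomorphism of T_xM
assert j**2 == -sp.eye(4)            # an almost-complex structure
assert j.T*G*j == G                  # orthogonal: j^* g = g
```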
Notice that this formulation is implicitly associated with a decomposition $T_xM= T_+\oplus T_-$, where $T_+= \mbox{span }\{e_1 , e_2\}$ and $T_-=\mbox{span} \{ e_3,e_4\}$, and that $\jmath$ gives us a specific orientation of the maximally positive and negative subspaces $T_+$ and $T_-$. Now suppose that $(M,[g])$ is space-time orientable. It then follows that ${\mathcal Z}-F$ has two connected components, depending on whether the associated orientation on $T_-$ is the given one, or its reverse. Let $U\subset ({\mathcal Z}-F)$ be the open subset corresponding to $\jmath$ for which the induced orientation on $T_-$ agrees with the previously chosen one. Then $\wp|_U:U\to M$ is an open disk bundle over $M$, and corresponds to the region $\Im m ~\zeta > 0$ in our explicit local description of $\mathcal Z$. Let ${\mathcal Z}_+= U \cup F$ be the closure of $U$ in $\mathcal Z$. Thus ${\mathcal Z}_+$ is a compact $6$-manifold-with-boundary, and $\wp|_{{\mathcal Z}_+}: {\mathcal Z}_+\to M$ is a bundle of closed oriented $2$-disks. Now $F$ carries a foliation $\mathscr F$ by lifted $\beta$-surfaces. If we assume that our space-time-oriented self-dual $4$-manifold $(M,[g])$ is also {\em Zollfrei}, then $\mathscr F$ becomes the system of fibers of the fibration ${\zap q} : F\to P$, and Lemma \ref{folia} tells us, moreover, that ${\zap q} : F\to P$ is a trivial $2$-sphere bundle over $P\approx \mathbb R\mathbb P^3$. We can thus give the disjoint union $$Z= U ~{\textstyle \coprod}~ P$$ the structure of a compact topological $6$-manifold by endowing it with the quotient topology induced by the map $$\Psi : {\mathcal Z}_+\to Z,$$ where the restriction of $\Psi$ to $\mbox{Int } {\mathcal Z}_+=U$ is the identity map $U\to U$, and where the restriction of $\Psi$ to $\partial {\mathcal Z}_+=F$ is the fibration ${\zap q}: F\to P$.
Indeed, we may do this by using the `polar coordinate' map \begin{eqnarray*} P\times S^2 \times [0,\infty ) & \longrightarrow & P\times \mathbb R^3 \\ (p, \vec{x},t) & \longmapsto & (p,t\vec{x}) \end{eqnarray*} as our model for $\Psi$ near $\partial {\mathcal Z}_+= F$. Now if $g$ is of class $C^{k}$, then ${\zap q}: F\to P$ is of class $C^{k-1}$, and the diffeomorphism $\Phi: F\to P \times S^2$ of Lemma \ref{folia} is of class $C^{k-2}$, so this picture actually endows $Z$ with the structure of a $C^{k-2}$ manifold, in such a way that $\Psi$ becomes a $C^{k-2}$ map. This said, we are now ready for one of the key constructions of this article: \begin{thm} \label{buzz} Let $(M,[g])$ be a space-time-oriented self-dual Zollfrei manifold, where $[g]$ can be represented by a $C^4$ split-signature metric $g$. Let $Z$ be the differentiable $6$-manifold obtained from ${\mathcal Z}_+$ by collapsing $\partial {\mathcal Z}_+= F$ down to $P$ along the foliation $\mathscr F$. Then $Z$ can be made into a compact complex $3$-manifold in a unique way such that the quotient map $\Psi : {\mathcal Z}_+\to Z$ satisfies $$\Psi_* {\mathzap D}\subset T^{0,1}Z.$$ Moreover, $\Psi$ is $C^\infty$ with respect to the associated complex-analytic atlas of $Z$ if $g$ is itself assumed to be $C^\infty$. \end{thm} \begin{proof} By construction, $\Psi$ is a diffeomorphism between ${\mathcal Z}_+-F$ and $Z-P$. Since ${\mathzap D}\oplus \overline{\mathzap D}=T_\mathbb{C}{\mathcal Z}_+$ on ${\mathcal Z}_+-F$, it follows that there is a unique complex structure $J$ on $Z-P$ with $T^{0,1}= \Psi_* {\mathzap D}$. Moreover, the assumption that $g$ is $C^4$ guarantees that $\mathzap D$ is $C^3$.
Since $\mathzap D$ is involutive by Proposition \ref{critter}, the Malgrange version \cite{malgrange} of the Newlander-Nirenberg theorem \cite{newnir} implies that this almost-complex structure is integrable, in the sense that $Z$ admits local complex coordinates in which $J$ becomes the standard complex structure on $\mathbb{C}^3$. Thus the crux of the theorem resides in understanding the behavior of $\Psi_* {\mathzap D}$ in the vicinity of $P$. Now let us recall that the proof of Lemma \ref{folia} hinges on the introduction of a non-zero vector field $u$ on $F$ which spans $\ker {\zap p}_*$ at each point. By rescaling $u$ by an appropriate function, we may now assume henceforth that ${\zap q}_* u$ is always a unit vector with respect to, say, the standard metric on $P\approx \mathbb R\mathbb P^3$. With this convention, $S(TP)$ may be identified with the concrete $S^2$-bundle of unit vectors on $\mathbb R\mathbb P^3$, and the $C^2$ diffeomorphism $\Phi : F\to S(TP)$ is just given by ${\zap q}_*u$. Now this vector field $u$ is tangent to the boundary circles of the disk fibers of $\wp : {\mathcal Z}_+\to M$, and the fiber-wise complex structure $\jmath$ of these disks then sends $u$ to some vector field $v=\jmath u$ along $\partial {\mathcal Z}_+=F$ which points inward at every boundary point of ${\mathcal Z}_+$. Extend this $v$ to a $C^2$ vector field on a collar neighborhood of ${\mathcal Z}_+$ so that we have $v\in \ker \wp_*$ at every point of the collar, and then use the flow of $v$ to identify a slightly smaller collar with $F\times [0, \epsilon )$. 
Using $\Phi$ and $\Psi$, we may thus construct a $C^2$ diffeomorphism between a tubular neighborhood of $P$ and the $\epsilon$-tube around the zero section of $TP$, in such a manner that the restriction of $\Psi$ to our collar $F\times [0,\epsilon )\approx S(TP)\times [0,\epsilon )$ becomes the map \begin{eqnarray*} S(TP)\times [0,\epsilon ) & \to & TP \\ (\vec{v} ,t) & \mapsto & t\vec{v} \end{eqnarray*} and so that our vector field $v$ becomes the radial field $\vec{v}/\|\vec{v}\|$. In particular, this picture gives us a specific isomorphism $$TZ|_P \cong TP\oplus TP,$$ where the first factor is tangent to $P$, and where the second factor is transverse to it. Moreover, this isomorphism has been constructed precisely so that $\Psi_*(\jmath u)= J\Psi_* (u)$ at each point of $F=\partial {\mathcal Z}_+$, provided that we take $J: TP\oplus TP\to TP\oplus TP$ to be the almost complex structure given by $$J= \left[ \begin{array}{cc} 0& -I\\ I& 0 \end{array}\right] , $$ where $I: TP\to TP$ denotes the identity map. Since the rank of $\Psi_*{\mathzap D}$ is just $1$ along $F$, this choice of $J$ therefore gives us $\Psi_*{\mathzap D}= T^{0,1}(Z,J)$ along $P$, as desired; moreover, this is the only choice of $J$ with this property, since every unit element of $TP\subset TZ$ is of the form $\Psi_{*z}u$ for some $z\in \partial {\mathcal Z}_+$. Thus, in conjunction with our previous discussion of $Z-P$, we see that there is a unique almost-complex structure $J$ on all of $Z$ such that $\Psi_*{\mathzap D}\subset T^{0,1}(Z,J)$. However, it is not yet clear that this $J$ is even continuous, much less integrable! We will remedy this by next showing that $J$ is actually {\em Lipschitz} continuous, relative to the $C^2$ structure with which we have provisionally endowed $Z$. Of course, this is only an issue near $P$, since $J$ has been constructed so as to be better than $C^1$ on $Z-P$.
It therefore suffices to show that $J$ is Lipschitz along each radial line segment $t\mapsto t\vec{v}$, $t\in [0,\epsilon)$, in our tubular neighborhood of $P$ modeled on the $\epsilon$-tube in $TP$, provided we can also show in the process that the Lipschitz constants are uniformly bounded. To this end, let us therefore recall that we have written down an explicit local basis $({\mathfrak w}_1 , {\mathfrak w}_2, \partial/\partial \overline{\zeta})$ for ${\mathzap D}$ such that $[{\mathfrak w}_j, \partial/\partial \overline{\zeta}]=0$. Moreover, the ${\mathfrak w}_j$ are real along $F=\partial {\mathcal Z}_+$, where they span the distribution of $2$-planes $E$ tangent to the foliation $\mathscr F$ of $F$. Now, through a given point of ${\zap q}^{-1}(y)\subset F$, there is a unique curve in the leaf ${\zap q}^{-1}(y)$ with parameter $t$ such that $d/dt={\mathfrak w}_1$. For any $C^2$ function $f$ on ${Z}$, we then have $$ \frac{d}{dt}\left[ \Psi_* (\frac{\partial}{\partial \overline{\zeta}}) f\right]= \frac{d}{dt}\frac{\partial}{\partial \overline{\zeta}} \Psi^* f= {\mathfrak w}_1\frac{\partial}{\partial \overline{\zeta}}\Psi^* f = \frac{\partial}{\partial \overline{\zeta}}{\mathfrak w}_1\Psi^* f = \frac{\partial}{\partial \overline{\zeta}}\left[ \Psi_*({\mathfrak w}_1)f\right] . $$ Thus, setting $\zeta = \xi + i\eta$, $$\frac{d}{dt}\left[ \Psi_* (\frac{\partial}{\partial \overline{\zeta}})\right] = \frac{\partial}{\partial \overline{\zeta}}\left[ \Psi_*({\mathfrak w}_1)\right] = \frac{i}{2} \frac{\partial}{\partial \eta} \left[ \Psi_*({\mathfrak w}_1)\right] $$ at any $y\in P$, since $\Psi_*({\mathfrak w}_1)\equiv 0$ along $F$, where $\eta=0$. Here the right-hand side should be interpreted as the invariant derivative {\em at a zero} of a section of a vector bundle on the disk $D_x := \Psi [{\wp}^{-1} (x)\cap {\mathcal Z}_+]\approx \mathrm{D}^2$.
On the other hand, $$\Psi_* \left(\frac{\partial}{\partial \overline{\zeta}}\right)\in T_y^{0,1}({Z},J)$$ for all $t$, by our previous discussion, so it follows that $$\left. \frac{\partial}{\partial \eta} \left[ \Psi_*({\mathfrak w}_1)\right]\right|_{\eta =0} \in T_y^{0,1}({Z},J).$$ The same argument, with ${\mathfrak w}_1$ replaced by ${\mathfrak w}_2$, tells us that $$\left. \frac{\partial}{\partial \eta} \left[ \Psi_*({\mathfrak w}_2)\right]\right|_{\eta =0} \in T_y^{0,1}({Z},J),$$ too. Along any $D_x$, we therefore have, near an arbitrary point $y\in P\cap D_x$, three continuous sections of $T^{0,1}$ given by $$ {\mathfrak v}_j = \left\{ \begin{array}{cc} \left[ \Psi_*({\mathfrak w}_j)\right]/\eta& \eta \neq 0\\ \frac{\partial}{\partial \eta} \left[ \Psi_*({\mathfrak w}_j)\right]&\eta =0 \end{array} \right. $$ for $j=1,2$, and ${\mathfrak v}_3=\Psi_*(\partial/\partial \overline{\zeta})$. These sections are linearly independent at every point, and so span $T^{0,1}_y$, because $\det (\Psi_*)$ only vanishes to second order along $P$. Moreover, since $\Psi$ is $C^2$ in our coordinates, these sections are all continuously differentiable along $D_x$, with coordinate derivatives expressible in terms of partial derivatives of $\Psi$ of order $\leq 2$. In particular, $J$ is Lipschitz along $D_x$, with Lipschitz constant controlled by the partial derivatives of $\Psi$ of order $\leq 2$. Since each radial line of our tube is contained in a disk $D_x$, and because a finite number of balls with compact closure within coordinate domains suffice to cover the compact manifold $P$, it therefore follows that the tensor field $J$ on ${Z}$ is Lipschitz near $P$, and hence on all of $Z$. Since $J$ is $C^{0,1}$ on $Z$, and better than $C^1$ on $Z-P$, the na\"{\i}ve coordinate partial derivatives of the components of $J$ on $Z-P$ extend to $Z$ as locally bounded measurable functions.
Integration by parts, however, shows that these $L^{\infty}_{{loc}}$ functions are exactly the {\em distributional} partial derivatives of the components of $J$. The Nijenhuis tensor $$N^\ell_{jk}= {J_k}^m\partial_m{J_j}^\ell-{J_j}^m\partial_m{J_k}^\ell+ {J_m}^\ell\partial_j{J_k}^m-{J_m}^\ell\partial_k{J_j}^m$$ of our almost-complex structure $J$ is therefore well-defined in the distributional sense, and has $L^\infty_{{loc}}$ components. Hence $N$ vanishes in the distributional sense, since by construction $N=0$ on a subset $Z-P$ of full measure. However, Hill and Taylor \cite{hiltay} have shown that the Newlander-Nirenberg theorem holds for Lipschitz almost-complex structures for which $N =0$ in the distributional sense. Thus every point of $Z$ has a neighborhood on which we can find a triple $(z^1, z^2,z^3)$ of differentiable complex-valued functions with $dz^k\in \Lambda^{1,0}(Z, J)$ and $dz^1\wedge dz^2 \wedge dz^3\neq 0$. Taking these to be the complex coordinate systems gives $Z$ the structure of a compact complex $3$-fold. In particular, this gives $Z$ a specific preferred $C^\infty$ structure compatible with the $C^1$ structure we built by hand, so $\Psi$ remains a differentiable map even with respect to this brand new atlas for $Z$. Now, if $g$ is actually $C^\infty$, we claim that $\Psi$ is actually a $C^\infty$ map with respect to the tautological smooth structure on ${\mathcal Z}_+$ and the complex atlas of $Z$. Away from $F\to P$, this is an immediate consequence of the classical Newlander-Nirenberg theorem \cite{newnir}, so we need merely verify this assertion near $P$. To do this, let $(x^1,x^2,x^3)$ be any smooth system of local coordinates on a region ${\mathcal V}\subset P$, and pull these functions back to $F$ as three smooth functions ${\zap q}^{*}x^j$ on ${\zap q}^{-1}({\mathcal V})\subset F=\partial {\mathcal Z}_+$ which are constant along the leaves of $\mathscr F$.
These can then be extended \cite{treves} into ${\mathcal Z}_+$ as smooth complex-valued functions ${\mathfrak z}^j$ near $\partial {\mathcal Z}_+$ such that $\partial {\mathfrak z}^j/\partial \overline{\zeta}$ vanishes to infinite order along $\eta =0$, and the ${\mathfrak w}_k {\mathfrak z}^j$ will then also vanish to infinite order along $\eta =0$, too. Now the real and imaginary parts of the ${\mathfrak z}^j$ give us a differentiable coordinate system on $Z$, and in these coordinates we have $$T^{0,1}Z = \mbox{span}\left\{ \frac{\partial}{\partial \overline{{\mathfrak z}}^j}+ a_j^k ({\mathfrak z})\frac{\partial}{\partial {\mathfrak z}^k} \right\}$$ where the smooth functions $a_j^k({\mathfrak z}^1,{\mathfrak z}^2,{\mathfrak z}^3)$ vanish to infinite order along the locus $P$ given by $\Im m~ {\mathfrak z}^j=0$. If $(z^1,z^2,z^3)$ is a system of holomorphic local coordinates on ${\mathcal U}\subset Z$, where ${\mathcal U}\cap {\mathcal V}\neq \emptyset$, then $z^j({\mathfrak z}^1,{\mathfrak z}^2,{\mathfrak z}^3)$ is $C^\infty$ by elliptic regularity. Since, by construction, each $\Psi^*{\mathfrak z}^k$ is a smooth function on ${\mathcal Z}_+$, it thus follows that the $\Psi^*z^j$ are smooth functions, too. Hence $\Psi$ is smooth with respect to the complex coordinate atlas of $Z$, and we are done. \end{proof} \begin{defn}\label{chubby} The {\em twistor space} of a space-time-oriented $C^4$ Zollfrei self-dual $4$-manifold $(M,[g])$ is the compact complex $3$-manifold $(Z,J)$ constructed from $(M,[g])$ via Theorem \ref{buzz}. \end{defn} \begin{defn}\label{checkers} The twistor space of a non-space-time-orientable $C^4$ Zollfrei self-dual $4$-manifold $(M,[g])$ is defined to be the twistor space $(Z,J)$ of the space-time-oriented double cover $(\tilde{M},[g])$ of $M$. \end{defn} \section{Unmasking the Twistor Space} Our construction of the twistor space of a self-dual Zollfrei $4$-manifold may seem rather technical.
However, the hidden motivation behind the entire construction is the observation that when $(M,[g])$ is one of our prototypical models, the associated twistor space $(Z,J)$ is simply the familiar complex projective $3$-space $\mathbb C\mathbb P_3$. Let us now make this explicit: \begin{lem} If $(M,[g])$ is either $(S^2\times S^2, [g_0])$ or $({\mathbb M}^{2,2},[g_0])$, then the twistor space $(Z,J)$ of $(M,[g])$, in the sense of Definitions \ref{chubby} and \ref{checkers}, is biholomorphic to $\mathbb C\mathbb P_3$ in such a manner that $P\subset Z$ becomes the standard $\mathbb R\mathbb P^3\subset \mathbb C\mathbb P_3$. \end{lem} \begin{proof} The relationship between Definitions \ref{chubby} and \ref{checkers} makes it sufficient to consider the case of ${\mathbb M}^{2,2}$. Now this may seem to be a strange choice, because Definition \ref{checkers} ostensibly instructs us to pass up to the double cover $S^2\times S^2\to {\mathbb M}^{2,2}$ and then blow down $\partial {\mathcal Z}_+ (S^2\times S^2)$ along the foliation $\mathscr F$. However, the quotient of ${\mathcal Z}_+(S^2\times S^2)$ by the covering map action of $\mathbb{Z}_2$ on $\partial {\mathcal Z}_+ (S^2\times S^2)$ is just ${\mathcal Z}({\mathbb M}^{2,2})$. Thus, Definition \ref{checkers} can be restated as saying that $Z$ is to be obtained from ${\mathcal Z}({\mathbb M}^{2,2})$ by blowing down the hypersurface $F\subset {\mathcal Z}$. In fact, there is a nice way of explicitly realizing this blowing-down map. Let ${\mathbb V}\cong \mathbb R^4$ be a real $4$-dimensional vector space, and let ${\mathbb V}_\mathbb{C}\cong \mathbb{C}^4$ be its complexification.
Then ${\mathbb M}^{2,2}$ can be identified with the real Klein quadric $$Q_\mathbb R = \{ [\psi ] \in \mathbb P (\wedge^2 {\mathbb V})~|~ \psi \wedge \psi = 0\}$$ in $ \mathbb P (\wedge^2 {\mathbb V})\cong \mathbb R\mathbb P^5$ by choosing a diagonalizing basis for the signature $(+++---)$ quadratic form $(\psi, \chi) = \psi\wedge \chi$ on $\wedge^2 {\mathbb V}$. For a suitable choice of orientation, the $\beta$-surfaces of $({\mathbb M}^{2,2}, [g_0])$ are exactly those projective planes $\mathbb R\mathbb P^2\subset Q_\mathbb R \subset \mathbb R\mathbb P^5$ which are of the form $$\{ [\psi ] \in Q_\mathbb R ~|~ v ~\lrcorner ~ \psi = 0\}$$ for some $[v ] \in \mathbb P ({\mathbb V}^*)\cong \mathbb R\mathbb P^3$. Thus $F({\mathbb M}^{2,2})$ may be concretely realized as the flag manifold $$F_{2,3,4}= \{ ([\psi ] , [v]) \in Q_\mathbb R \times \mathbb P ({\mathbb V}^*) ~|~ v ~\lrcorner ~ \psi = 0\}\subset Q_\mathbb R \times \mathbb P ({\mathbb V}^*)$$ in such a way that ${\zap p}$ and ${\zap q}$ become the tautological projections $F_{2,3,4}\to Q_\mathbb R=Gr_{2,4}$ and $\mathbb P ({\mathbb V}^*)=Gr_{3,4}$. However, $Q_\mathbb R$ is just a real slice of the complex $4$-quadric $$Q_\mathbb{C} = \{ [\psi ] \in \mathbb P (\wedge^2 {\mathbb V_\mathbb{C}})~|~ \psi \wedge \psi = 0\},$$ so we have a canonical isomorphism $T_\mathbb{C} Q_\mathbb R = TQ_\mathbb{C}|_{Q_\mathbb R}$. Any {\em complex} $\beta$-plane $\Pi \subset T_\mathbb{C} Q_\mathbb R$ is then tangent to a unique {\em complex} $\beta$-surface $\mathbb C\mathbb P_2\subset Q_\mathbb{C}\subset \mathbb C\mathbb P_5$ given by $$\{ [\psi ] \in Q_\mathbb{C} ~|~ v ~\lrcorner ~ \psi = 0\}$$ for some $[v]\in \mathbb P ({\mathbb V}_\mathbb{C}^*)\cong \mathbb C\mathbb P_3$.
Thus ${\mathcal Z}({\mathbb M}^{2,2})$ may naturally be identified with the locus $$\{ ([\psi ] , [v]) \in Q_\mathbb R \times \mathbb P ({\mathbb V}^*_\mathbb{C} )~|~ v ~\lrcorner ~ \psi = 0\} \subset F_{2,3,4} (\mathbb{C})$$ in such a way that $\Psi : {\mathcal Z}\to Z$ just becomes the tautological projection to $\mathbb P ({\mathbb V}^*_\mathbb{C} )\cong \mathbb C\mathbb P_3$. It remains to show that the constructed complex structure on $Z$ coincides with that of $\mathbb C\mathbb P_3$. To do this, we first recall that the distribution ${\mathzap D}$ is conformally invariant by Proposition \ref{conformal}. Passing to the stereographic coordinates of equation (\ref{stereo}), it thus suffices to do our computations for the flat metric $d{\mathfrak x}_1^2 + d{\mathfrak x}_2^2 - d{\mathfrak y}_1^2 - d{\mathfrak y}_2^2$ using the pseudo-orthonormal frame $$e_1= \frac{\partial}{\partial {\mathfrak x}_1},~ e_2= \frac{\partial}{\partial {\mathfrak x}_2},~ e_3= \frac{\partial}{\partial {\mathfrak y}_1},~ e_4= \frac{\partial}{\partial {\mathfrak y}_2}.$$ Since the connection forms ${\theta}_{\mathzap j}^{\mathzap k}$ vanish for this frame, the distribution $\mathzap D$ is thus spanned by \begin{eqnarray*} {\mathfrak w}_1 & = & (\zeta^2+1)\frac{\partial}{\partial {\mathfrak x}_1} -2\zeta \frac{\partial}{\partial {\mathfrak y}_1} + (\zeta^2-1)\frac{\partial}{\partial {\mathfrak y}_2} \\ {\mathfrak w}_2 & = & (\zeta^2+1)\frac{\partial}{\partial {\mathfrak x}_2}+ (\zeta^2-1)\frac{\partial}{\partial {\mathfrak y}_1} + 2\zeta \frac{\partial}{\partial {\mathfrak y}_2} \end{eqnarray*} and $\partial/\partial \overline{\zeta}$.
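Since the connection forms vanish, the components of ${\mathfrak w}_1$ and ${\mathfrak w}_2$ are polynomials in $\zeta$ alone, so the involutivity guaranteed by Proposition \ref{critter} (the flat metric being self-dual) can here be confirmed by brute force. The following \texttt{sympy} sketch (ours; $\zeta$ and $\overline{\zeta}$ are treated as independent variables) checks that all the relevant Lie brackets vanish identically:

```python
import sympy as sp

x1, x2, y1, y2, zeta, zbar = sp.symbols('x1 x2 y1 y2 zeta zbar')
coords = [x1, x2, y1, y2, zeta, zbar]   # zeta, zbar treated as independent

# Components in the frame (d/dx1, d/dx2, d/dy1, d/dy2, d/dzeta, d/dzbar):
w1 = [zeta**2 + 1, 0, -2*zeta, zeta**2 - 1, 0, 0]
w2 = [0, zeta**2 + 1, zeta**2 - 1, 2*zeta, 0, 0]
dzbar = [0, 0, 0, 0, 0, 1]

def bracket(X, Y):
    """Lie bracket: [X,Y]^k = sum_m ( X^m d_m Y^k - Y^m d_m X^k )."""
    return [sp.expand(sum(X[m]*sp.diff(Y[k], coords[m]) -
                          Y[m]*sp.diff(X[k], coords[m]) for m in range(6)))
            for k in range(6)]

# All brackets of the spanning fields vanish outright, so D is involutive
# in the flat model:
for X, Y in [(w1, w2), (dzbar, w1), (dzbar, w2)]:
    assert bracket(X, Y) == [0]*6
```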
But the projection $\Psi: {\mathcal Z}\to \mathbb C\mathbb P_3$ coming from the Klein quadric picture is just given by \begin{eqnarray*} z_1 & = & ({\mathfrak x}_1+ {\mathfrak y}_2)+ ({\mathfrak y}_1-{\mathfrak x}_2)\zeta \\ z_2 & = & ( {\mathfrak y}_1+{\mathfrak x}_2)+ ({\mathfrak x}_1-{\mathfrak y}_2)\zeta\\ z_3 & = & \zeta \end{eqnarray*} in suitable affine coordinates $(z_1,z_2,z_3)$ for $\mathbb C\mathbb P_3$. Since ${\mathfrak w}_1$, ${\mathfrak w}_2$, and $\partial/\partial \overline{\zeta}$ all annihilate $z_1$, $z_2$ and $z_3$, it follows that the complex structure $J$ we have constructed on $Z=\mathbb C\mathbb P_3$ coincides with the usual one on an open dense set, and hence everywhere. Thus, for both $({\mathbb M}^{2,2}, [g_0])$ and $(S^2\times S^2, [g_0])$, the twistor space is just $\mathbb C\mathbb P_3$, with its standard complex structure. \end{proof} Now recall that the complex structure of $\mathbb C\mathbb P_3$ is rigid, in the sense of Kodaira and Spencer \cite{KS}. In other words, because $H^1 (\mathbb C\mathbb P_3 , {\mathcal O} (T^{1,0} \mathbb C\mathbb P_3 ))=0$, any complex-analytic family of deformations of the complex structure is trivial for small values of the perturbation parameter. It might therefore seem reasonable to expect that the twistor space of any Zollfrei self-dual $4$-manifold, in the sense of Definitions \ref{chubby} and \ref{checkers}, will {\em always} turn out simply to be $\mathbb C\mathbb P_3$, with its usual complex structure. Our goal in this section will be to show that this is indeed the case provided that suitable extra hypotheses are imposed. To this end, we will use a beautiful circle of characterizations of the standard complex structure on $\mathbb C\mathbb P_3$ due to Nakamura \cite{nakamura}. One such result is the following: \begin{thm}[Nakamura] Let $(Z,J)$ be a compact complex $3$-manifold homeomorphic to $\mathbb C\mathbb P_3$. 
If $H^q(Z, {\mathcal O})= 0$ for all $q > 0$, and if $h^0(Z, {\mathcal O}(K^{-m}))\geq 2$ for some $m> 0$, then $(Z,J)$ is biholomorphic to $\mathbb C\mathbb P_3$. \end{thm} Nakamura then used this to show that any Moishezon $3$-fold homeomorphic to $\mathbb C\mathbb P_3$ must be biholomorphic to $\mathbb C\mathbb P_3$ unless it is of general type. Recall that a compact complex $n$-fold $Z$ is said to be {\em Moishezon} if there exist $n$ meromorphic functions $f_1, \ldots, f_n : Z\dashrightarrow \mathbb{C}$ which give local complex coordinates near some point $z\in Z$; this holds, in particular, if \cite{ueno} there is some holomorphic line bundle $L\to Z$ with $h^0 (Z, {\mathcal O} (L^m )) > c m^n$ for some $c> 0$ and all $m \gg 0$. Koll\'ar \cite{kollar} eventually improved Nakamura's result by excluding the possibility that $Z$ might be of general type. Thus: \begin{thm}[Nakamura/Koll\'ar] \label{nakol} A Moishezon manifold is homeomorphic to $\mathbb C\mathbb P_3$ iff it is biholomorphic to $\mathbb C\mathbb P_3$. \end{thm} The following standard piece of folklore is a minor variation on one of Nakamura's results \cite{nakamura}. We include a proof here only because one does not seem to appear elsewhere in the literature. \begin{cor}\label{bigdef} Let $J_t$ be a family of smooth, integrable almost-complex structures on a smooth compact $6$-manifold $Z$ which, in the $C^\infty$ topology, depends continuously on an auxiliary real variable $t\in [0,1]$. If $(Z, J_0)$ is biholomorphic to the standard $\mathbb C\mathbb P_3$, so is $(Z, J_1)$. \end{cor} \begin{proof} Kuranishi \cite{kuranishi} has shown that whenever two smooth complex structures are close enough in a sufficiently high Sobolev norm, they can be joined by a complex-analytic family in the sense of Kodaira-Spencer.
Hence there is a finite subset $\{ t_0=0, t_1, \ldots , t_\ell = 1\}$ of $[0,1]$ such that, for each $j=1, \ldots , \ell$, $(Z, J_{t_{j-1}})$ and $(Z, J_{t_{j}})$ both occur as fibers of a single holomorphic family of complex manifolds over the unit disk $\subset \mathbb{C}$. Now Kodaira-Spencer theory \cite{KS} tells us that if $(Z, J_{t_{j-1}})$ is biholomorphic to $\mathbb C\mathbb P_3$, every nearby fiber is, too. Hence there is a non-empty open set in the disk for which every corresponding fiber satisfies $h^0 ({\mathcal O}(K^{-m})) > m^3$ for all $m> 0$. But, by the semi-continuity principle \cite{bast}, the set of parameter values for which $h^0 ({\mathcal O}(K^{-m})) > m^3$ for a particular $m$ must be closed in the analytic Zariski topology --- i.e. either discrete, or the whole disk. Hence every fiber must have $h^0 ({\mathcal O}(K^{-m}))> m^3$ for all $m >0$, and this conclusion applies, in particular, to $(Z, J_{t_{j}})$. Hence $(Z, J_{t_{j}})$ is Moishezon. Theorem \ref{nakol} therefore shows that $$(Z, J_{t_{j-1}})\cong \mathbb C\mathbb P_3 ~~ \Longrightarrow ~~(Z, J_{t_{j}})\cong \mathbb C\mathbb P_3.$$ Since $(Z,J_0)$ is biholomorphic to $\mathbb C\mathbb P_3$ by hypothesis, it therefore follows by induction on $j$ that $(Z, J_1)$ is also biholomorphic to $\mathbb C\mathbb P_3$, as claimed. \end{proof} Note that an analogous rigidity assertion also holds for any $\mathbb C\mathbb P_n$, even if $n$ is large, as a consequence of an entirely different circle of ideas due to Siu \cite{siudef}. Now the proof of Theorem \ref{critter} shows that two self-dual Zollfrei metrics which are close in the $C^\infty$ topology will give rise to two complex structures on $Z$ which are close in the $C^\infty$ topology.
If $g_t$ is a continuous curve in the space of $C^\infty$ self-dual Zollfrei metrics, with the $C^\infty$ topology, Corollary \ref{bigdef} then immediately implies that if one of the relevant twistor spaces is biholomorphic to $\mathbb C\mathbb P_3$, so are all the others. When this happens, the smooth submanifold $P=\Psi (F)$ thus becomes a smoothly embedded totally real submanifold of $\mathbb C\mathbb P_3$, and every fiber of ${\mathcal Z}_+\to M$ is then sent by $\Psi$ to an embedded holomorphic disk in $\mathbb C\mathbb P_3$ with boundary on $P$. Thus: \begin{thm}\label{zorro} Let ${\mathcal C}$ be the space of conformal classes of $C^\infty$ self-dual Zollfrei metrics on $S^2\times S^2$, endowed with the smooth topology. Let ${\mathcal C}_0\subset {\mathcal C}$ be the path component containing our prototypical example $[g_0]$. Then, for each conformal class $[g]\in {\mathcal C}_0$, the corresponding twistor space $(Z,J)$ is biholomorphically equivalent to $\mathbb C\mathbb P_3$, equipped with its standard complex structure. In particular, every conformal class in ${\mathcal C}_0$ gives rise to a smooth totally real submanifold $P\approx \mathbb R\mathbb P^3$ of $\mathbb C\mathbb P_3$ and a $4$-parameter family of embedded holomorphic disks $(D^2, \partial D^2)\hookrightarrow (\mathbb C\mathbb P_3, P)$. \end{thm} Unfortunately, however, we cannot {\em a priori} expect an indefinite self-dual metric to be highly differentiable, as the relevant partial differential equation is ultra-hyperbolic rather than elliptic. It thus behooves us to see what we can say about solutions with comparatively little regularity. However, even trying to understand $C^4$ self-dual metrics will lead us to consider families of twistor spaces with so little regularity that the results of Kodaira-Spencer and Kuranishi cannot be invoked with confidence.
Fortunately, however, Nakamura's results are more than enough to deal with the matter at hand: \begin{thm} Let $g_0$ be the standard indefinite product metric on $S^2 \times S^2$. Then $g_0$ has a neighborhood $\mathscr U$ in the space of $C^4$ pseudo-Riemannian metrics such that any self-dual metric $g\in {\mathscr U}$ is Zollfrei and has twistor space $(Z,J)$ biholomorphic to $\mathbb C\mathbb P_3$. \end{thm} \begin{proof} By Theorem A, there is a $C^2$ neighborhood of $g_0$ in which every self-dual $g$ is Zollfrei, and if $g$ is also assumed to be $C^4$ close to $g_0$, then the proof of Theorem \ref{critter} shows that there is a diffeomorphism between the twistor spaces of $g$ and $g_0$ such that the almost-complex structure $J$ associated with $g$ is close to the almost-complex structure $J_0$ associated with $g_0$ in the $C^{0,1}$ topology on tensor fields on $Z$. Choose a biholomorphism, once and for all, between $(Z,J_0)$ and $\mathbb C\mathbb P_3$. Then, by shrinking our neighborhood $\mathscr U$ if necessary, we may identify the $(p,q)$-forms for $J$ with those of $J_0$ via the tautological projections, and it therefore makes sense to think of the operators $D$ and $D_0$ given by $\overline{\partial}+\overline{\partial}^*$ associated to these two complex structures as being defined on the same spaces, even after twisting with any power of the canonical line bundle. Thus, for example, if we consider $D$ and $D_0$ applied to $(0,1)$-forms, then for every $\varepsilon > 0$ there exists a $\mathscr U$ such that for every $g\in {\mathscr U}$ we have $\|(D-D_0)f\|^2 \leq \varepsilon (\|\nabla f\|^2+ \|f\|^2)$ for every smooth $(0,1)$-form $f$, where $\|~\|$ denotes the $L^2$ norm on $Z=\mathbb C\mathbb P_3$ with respect to, say, the Fubini-Study metric. Now assume that such an elliptic operator $D_0$ has trivial kernel.
By G{\aa}rding's inequality for $D_0$ we therefore have $$\| (D-D_0)f\|^2_{L^2}\leq \varepsilon \|f\|^2_{L^2_1}\leq C\varepsilon \| D_0f\|^2_{L^2},$$ so that $$\| Df\|_{L^2}\geq (1-\sqrt{C\varepsilon }) \|D_0f\|_{L^2},$$ and therefore $D$ has trivial kernel, too, provided that we take $\varepsilon < 1/C$. Thus, by shrinking our neighborhood $\mathscr U$ if necessary, we may arrange that every associated twistor space has $H^1(Z,{\mathcal O})=0$, just like $\mathbb C\mathbb P_3$. Similarly, we may arrange that $H^q(Z,{\mathcal O})=0$ and $H^q(Z, {\mathcal O}(K^{-1}))=0$ for $q =1,2,3$ by further shrinking $\mathscr U$. Since $Z$ also has the same Chern classes as $\mathbb C\mathbb P_3$, the index theorem then gives us $h^0(Z, {\mathcal O}(K^{-1}))={7\choose 4}$, so Nakamura's result certainly guarantees that there is a biholomorphism between $Z$ and $\mathbb C\mathbb P_3$. \end{proof} The holomorphic rigidity of the twistor space implies the following geometric rigidity result: \begin{thm} Let $g_0$ be the standard conformally flat split-signature metric on ${\mathbb M}^{2,2}= (S^2\times S^2)/\mathbb{Z}_2$. Then, in the $C^4$ topology on the space of pseudo-Riemannian metrics, $g_0$ has a neighborhood $\mathscr U$ such that any other self-dual metric $g\in {\mathscr U}$ is of the form ${\zap f}\phi^*g_0$ for some diffeomorphism $\phi: {\mathbb M}^{2,2}\to {\mathbb M}^{2,2}$ and some function ${\zap f}\neq 0$. \end{thm} \begin{proof} If $\mathscr U$ is small enough, every self-dual $g\in \mathscr U$ is Zollfrei and has a twistor space $(Z,J)$ which is biholomorphic to $\mathbb C\mathbb P_3$ by the previous result. This twistor space can be obtained by blowing $\mathcal Z$ down along $F$. Complex conjugation in ${\mathcal Z}$ therefore induces an anti-holomorphic involution $\varrho : Z\to Z$ with fixed point set $P\approx \mathbb R\mathbb P^3$.
By a change of homogeneous coordinates, any such $\varrho$ can be put into the standard form $$[z_1: z_2: z_3 : z_4 ]\mapsto [\overline{z} _1: \overline{z} _2: \overline{z} _3 : \overline{z} _4 ],$$ as may be seen by considering the induced action on the sections of the hyperplane line bundle, thought of as meromorphic functions with simple poles along an invariant hyperplane. Thus $P$ becomes the standard $\mathbb R\mathbb P^3\subset \mathbb C\mathbb P_3$ in these coordinates. Let $Q$ denote the quadric given in these coordinates by $z_1^2+z_2^2+z_3^2+z_4^2=0$, and observe that $[Q]$ now generates $H^2(\mathbb C\mathbb P_3-P, \mathbb{Z})$. However, any fiber disk of ${\mathcal Z}_+\to \tilde{M}$ generates $H_2({\mathcal Z}_+, \partial {\mathcal Z}_+; \mathbb{Z})$, where $\tilde{M}=S^2\times S^2$ is the space-time-oriented double cover of $M={\mathbb M}^{2,2}$. Since $\Psi$ induces a homotopy equivalence between $\mathbb C\mathbb P_3-P$ and ${\mathcal Z}_+$, Poincar\'e duality now tells us that each of these holomorphic disks must meet $Q$ in exactly one point. Thus $\Psi^{-1}(Q)$ is a section of $(\mbox{Int }{\mathcal Z}_+)\to \tilde{M}$. Moreover, the non-trivial deck transformation $\tilde{M}\to \tilde{M}$ acts on $Q$ via the complex conjugation map $\varrho$, so we have constructed a diffeomorphism $\phi : (Q/\varrho )\to M$, and since $Q$ is a complex submanifold of $\mbox{Int }{\mathcal Z}_+$, our construction of ${\zap D}=T^{0,1}(\mbox{Int }{\mathcal Z}_+)$ also shows that $\phi$ is of class $C^{k,\alpha}$ if $g$ is of class $C^{k,\alpha}$. But the two holomorphic disks that make up $C_x=\Psi [\wp^{-1}(x)]\subset \mathbb C\mathbb P_3$ have the same boundary along $P= \mathbb R\mathbb P^3$, and their union is therefore a rational curve in $\mathbb C\mathbb P_3$, for any $x\in M$. Each such curve meets $Q$ in a conjugate pair of points; and since $Q\subset \mathbb C\mathbb P_3$ has degree $2$, this means that $C_x$ has degree $1$.
Hence each $C_x$ is a projective line $\mathbb C\mathbb P_1\subset \mathbb C\mathbb P_3$. However, $P=\mathbb R\mathbb P^3$ is the space of $\beta$-surfaces of $(M,[g])$, and, for any $x\in M$, ${\zap q}[{\zap p}^{-1}(x)]= C_x\cap \mathbb R\mathbb P^3$. Thus any $\beta$-surface in $M=Q/{\varrho}$ is obtained by choosing some point $y\in \mathbb R\mathbb P^3$, looking at the $\mathbb R\mathbb P^2$-family of all $\varrho$-invariant projective lines in $\mathbb C\mathbb P_3$ that pass through $y$, and tracing out the intersections of these lines with $Q$. But this same picture also, in particular, describes the $\beta$-surfaces of $g_0$. We have thus found a diffeomorphism $\phi$ between $M$ and $Q/\varrho = (S^2\times S^2)/\mathbb{Z}_2= {\mathbb M}^{2,2}$ which sends $\beta$-surfaces to $\beta$-surfaces. Since this last statement means that $\phi$ takes null vectors to null vectors, we have $\phi^*[g_0]= [g]$, and hence $g= {\zap f}\phi^*g_0$, as promised. \end{proof} It will turn out that the situation on $S^2\times S^2$ is very different. Nonetheless, we do get some interesting immediate geometric pay-off from the present discussion: \begin{thm} Let $g_0$ be the standard indefinite product metric on $S^2\times S^2=\mathbb C\mathbb P_1\times \mathbb C\mathbb P_1$. Then $g_0$ has a neighborhood $\mathscr U$ in the space of $C^4$ pseudo-Riemannian metrics such that any self-dual metric $g\in {\mathscr U}$ is of the form $g=\psi^*h$, where $h$ is an indefinite {\em Hermitian} metric on $\mathbb C\mathbb P_1\times \mathbb C\mathbb P_1$, and where $\psi$ is a self-diffeomorphism of $S^2\times S^2$. \end{thm} \begin{proof} The quadric $Q\subset \mathbb C\mathbb P_3$ given by $z_1^2+z_2^2+z_3^2+z_4^2=0$ does not meet the standard $\mathbb R\mathbb P^3$. For every self-dual metric $g$ close to $g_0$ in the $C^4$ topology, $P$ will be $C^1$ close to the standard $\mathbb R\mathbb P^3$, and so will also not meet $Q$ if our neighborhood $\mathscr U$ is small enough.
The inverse image of $Q$ under $\Psi: {\mathcal Z}_+\to \mathbb C\mathbb P_3$ is therefore a complex submanifold of $\mbox{Int }{\mathcal Z}_+$. Moreover, the fibers of ${\mathcal Z}_+$ have intersection number $1$ with $Q$, and as both $Q$ and these disks are complex submanifolds, it follows that each fiber meets $Q$ transversely in one point. Thus $Q$ is the image of a smooth section $\mathfrak J$ of $(\mbox{Int }{\mathcal Z}_+)\to M$. But this section is a biholomorphism between $(M, {\mathfrak J})$ and $Q\cong \mathbb C\mathbb P_1\times \mathbb C\mathbb P_1$; in particular, $\mathfrak J$ is integrable. On the other hand, $\mathfrak J$ is, by construction, a $g$-compatible almost-complex structure. Thus what we have constructed is a diffeomorphism $\psi: \mathbb C\mathbb P_1\times \mathbb C\mathbb P_1 \to M$ such that $\psi^*g$ is an indefinite Hermitian metric. \end{proof} Finally, we observe that the smooth topology of the twistor space is always standard, even without restrictions on our Zollfrei self-dual $4$-manifold. This will turn out to be quite useful in \S \ref{kahler} below. \begin{thm} \label{difftwist} Let $(M,[g])$ be a self-dual Zollfrei $4$-manifold, and let $Z$ be the twistor space of $(M,[g])$, as defined in Definitions \ref{chubby} and \ref{checkers}. Then $Z$ is diffeomorphic to $\mathbb C\mathbb P_3$ in such a manner that the Chern classes $c_j(Z,J)$ are sent to the usual Chern classes of $\mathbb C\mathbb P_3$. \end{thm} \begin{proof} By passing to a double cover if necessary, we may assume that $M$ is space-time orientable. Thus $M$ is homeomorphic to $S^2\times S^2$, by Theorem \ref{s2xs2}. Let $Y\subset Z$ be the closure of a small tubular neighborhood of $P\approx \mathbb R\mathbb P^3$, and let $X=Z-(\mbox{Int } Y)$. Thus $Y\approx \mathbb R\mathbb P^3 \times D^3$, $X\cap Y\approx \mathbb R\mathbb P^3 \times S^2$, and $X\approx {\mathcal Z}_+$.
Next, choose an almost-complex structure ${\mathfrak J}$ on $M$ which is compatible with $g$ and the space-time orientation. Then ${\mathcal Z}_+$ is diffeomorphic to the unit disk bundle in the anti-canonical line bundle $\Lambda^{0,2} (M ,{\mathfrak J} )$. In particular, ${\mathcal Z}_+$ deform retracts to a copy of $M$. Moreover, $T{\mathcal Z}_+|_M= TM\oplus \nu$, where the normal bundle $\nu$ of $M$ is exactly the anti-canonical line bundle. Since $c_1(\nu)= c_1(M, {\mathfrak J})$, we therefore have $w_2 (T{\mathcal Z}_+)|_M= 2w_2(TM)= 0$. It follows that $X$ is spin. Now $X$ is simply connected, and since the inclusion $X\cap Y\hookrightarrow Y$ induces an isomorphism of fundamental groups, the Seifert-van Kampen theorem tells us that $Z$ is simply connected, too. Since the inclusion $\partial X \hookrightarrow X$ is homotopic to an $S^1$-bundle projection $\mathbb R\mathbb P^3 \times S^2 \to M$, the Mayer-Vietoris sequence of $X\cup Y$ now becomes $$\begin{array}{rrr} & \cdots ~\to & H^1(\mathbb R\mathbb P^3 \times S^2) ~\to \\ H^2 (Z) ~\to &H^2 (\mathbb R\mathbb P^3) \oplus H^2 (S^2 \times S^2) ~\to & H^2 (\mathbb R\mathbb P^3\times S^2) ~\to \\ H^3 (Z) ~\to & H^3 (\mathbb R\mathbb P^3) \oplus H^3 (S^2 \times S^2) ~\to &H^3 (\mathbb R\mathbb P^3\times S^2) ~\cdots \end{array}$$ and so tells us that $H^2(Z, \mathbb{Z}) = \mathbb{Z}$ and $H^3 (Z, \mathbb{Z} ) =0$. In the same way, we also see that the inclusions $X\hookrightarrow Z$ and $Y\hookrightarrow Z$ induce an injection $$H^2(Z, \mathbb{Z}_2) \hookrightarrow H^2 (X, \mathbb{Z}_2) \oplus H^2 (Y, \mathbb{Z}_2),$$ so the fact that $X$ and $Y$ are both spin implies that $Z$ is spin, too. Now a theorem of C.T.C.
Wall \cite{wall6} asserts that the diffeotype of a simply connected compact spin $6$-manifold with torsion-free $H^2$ and $H^3$ is completely determined by the ranks of these groups, the Pontrjagin class $p_1(TX)$, and the trilinear form $$\smile : H^2(X, \mathbb{Z} ) \times H^2(X, \mathbb{Z} ) \times H^2(X, \mathbb{Z} ) \to \mathbb{Z}.$$ To finish the proof, it thus just remains to check that $Z$ and $\mathbb C\mathbb P_3$ have the same Pontrjagin class and trilinear form. To this end, notice that, since $M$ is homeomorphic to $S^2 \times S^2$, our almost-complex structure ${\mathfrak J}$ must have \begin{eqnarray*} c_1&\equiv& w_2=0 \bmod 2 ,\\ c_1^2 & = & 2\chi+3\tau = 8 , \end{eqnarray*} and we must therefore have $c_1(M,{\mathfrak J})=(2,2)\in \mathbb{Z}\oplus \mathbb{Z}= H^2(M,\mathbb{Z})$ after correctly orienting each factor $S^2$ of $S^2 \times S^2$. Since $c_1(\nu)= c_1(M,{\mathfrak J})$, the Poincar\'e dual of $M\subset Z$ has evaluation $2$ on a factor $S^2$, and since the above Mayer-Vietoris sequence shows that this evaluation map $H^2(Z,\mathbb{Z})\to H^2(S^2,\mathbb{Z} )$ is an isomorphism, it follows that $[M]= 2\alpha$ for a generator $\alpha\in H^2 (Z, \mathbb{Z})\cong \mathbb{Z}$. But since $c_1(\nu) = c_1 (M,{\mathfrak J})= (2,2)$, it follows that $(2\alpha)^3= [M]^3= (2,2)\cdot (2,2)=8$, so that $\alpha^3=1$. This shows that $Z$ has the same trilinear form as $\mathbb C\mathbb P_3$. Now notice that $p_1(TZ|_M) = p_1 (TM)+ p_1(\nu )$. However, since $M$ has an orientation-reversing homeomorphism, it has vanishing signature, and we therefore have $p_1 (TM)=0$ by the Hirzebruch signature theorem \cite{milnorstaf}. Thus $ p_1 (TZ)\cdot (2\alpha )= \langle p_1 (TZ), [M]\rangle = [c_1 (\nu) ]^2= 8$, and hence $p_1(TZ)=4\alpha^2$. Since this is the same answer one obtains for $\mathbb C\mathbb P_3$, Wall's theorem now allows us to conclude that $Z\approx \mathbb C\mathbb P_3$.
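For comparison, the corresponding standard computation for $\mathbb C\mathbb P_3$ itself (included here only for the reader's convenience) reads as follows: writing $h$ for the hyperplane class, the total Chern class $c(T\mathbb C\mathbb P_3)=(1+h)^4$ gives $$p_1(T\mathbb C\mathbb P_3) = c_1^2 - 2 c_2 = (4h)^2 - 2 (6h^2) = 4h^2 ,$$ while $h^3=1$ generates $H^6(\mathbb C\mathbb P_3 , \mathbb{Z})$.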
Moreover, this diffeomorphism can be chosen so that the pull-back of the hyperplane class in $H^2(\mathbb C\mathbb P_3, \mathbb{Z})$ is $\alpha \in H^2(Z, \mathbb{Z})$. Since we have also shown that $c_1(Z,J)= 4\alpha$, this diffeomorphism also takes the Chern classes of $(Z,J)$ to those of the usual complex structure on $\mathbb C\mathbb P_3$, as promised. \end{proof} \section{Families of Holomorphic Disks} In this section, we will show that every small perturbation of the standard embedding $\mathbb R\mathbb P^3\hookrightarrow \mathbb C\mathbb P_3$ gives rise to a self-dual Zollfrei conformal structure on $S^2\times S^2$. First let us recall that there is a standard $(S^2\times S^2)$-family of holomorphic disks in $\mathbb C\mathbb P_3$ with boundaries on the standard $\mathbb R\mathbb P^3\subset \mathbb C\mathbb P_3$. Indeed, the boundary circles of these disks are exactly the real projective lines $\mathbb R\mathbb P^1\subset \mathbb R\mathbb P^3$. Each such real projective line is contained in a unique complex projective line $\mathbb C\mathbb P_1\subset \mathbb C\mathbb P_3$, and divides it into two hemispheres. A choice of orientation for such an $\mathbb R\mathbb P^1$ then uniquely determines a hemisphere for which it is the oriented boundary. These hemispheres are the promised holomorphic disks.
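For instance (a concrete case, spelled out here to fix conventions), consider the real projective line $\{ [z_1:z_2:0:0]~|~z_1, z_2\in \mathbb R\}$, which lies on the complex projective line $\{ z_3=z_4=0\}\cong \mathbb C\mathbb P_1$. In the affine coordinate $w=z_2/z_1$ on this line, the real projective line becomes $\mathbb R\cup \{ \infty \}$, and the two hemispheres it bounds are the closed upper and lower half-planes $\pm \mbox{Im}~w\geq 0$; orienting the circle as the boundary of the upper half-plane then singles out that hemisphere as the corresponding holomorphic disk.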
A complex projective line $\mathbb C\mathbb P_1\subset \mathbb C\mathbb P_3$ is the complexification of a real projective line $\mathbb R\mathbb P^1\subset \mathbb R\mathbb P^3$ iff it is $\varrho$-invariant, where $\varrho: \mathbb C\mathbb P_3\to \mathbb C\mathbb P_3$ denotes the complex-conjugation map $$\varrho ([z_1 : z_2 : z_3 : z_4]) = [\bar{z}_1 : \bar{z}_2 : \bar{z}_3 : \bar{z}_4].$$ Now, for reasons of degree, every $\varrho$-invariant $\mathbb C\mathbb P_1\subset \mathbb C\mathbb P_3$ must meet the standard quadric $${\mathcal Q} = \Big\{ [z_1 : z_2 : z_3 : z_4] \in \mathbb C\mathbb P_3~\Big|~ z_1^2+z_2^2+z_3^2+z_4^2=0\Big\}$$ in a conjugate pair of points; and exactly one of these points will lie in each of the hemispheres into which the $\mathbb C\mathbb P_1$ is divided by the fixed-point set $\mathbb R\mathbb P^3$ of $\varrho$. Conversely, each point $z\in{\mathcal Q}$ is joined to its conjugate point $\varrho (z)$ by a unique $\varrho$-invariant $\mathbb C\mathbb P_1$, and so is contained in exactly one such hemisphere. Thus, the parameter space of our family may conveniently be identified with ${\mathcal Q}\approx S^2\times S^2$. Moreover, the standard conformal structure on $S^2\times S^2$ is completely encoded by this picture, in the sense that each $\beta$-surface is precisely the family of disks whose boundaries pass through some given point $y\in \mathbb R\mathbb P^3$. Although this entire story takes place in projective space, each of the individual disks in question actually lies in an affine subset.
To see this, we once again let $[z_1 : z_2 : z_3:z_4 ]$ be the standard homogeneous coordinates on $\mathbb C\mathbb P_3$, so that the standard $\mathbb R\mathbb P^3\subset \mathbb C\mathbb P_3$ is represented by $z_1,\ldots , z_4$ real, and consider the affine chart $({\mathfrak z}_1 , {\mathfrak z}_2 , {\mathfrak z}_3)$ on $\mathbb C\mathbb P_3$ defined by $$ {\mathfrak z}_1 = \frac{z_1-iz_2}{z_1+iz_2} ~, \hspace{0.5in} {\mathfrak z}_2 = \frac{z_3}{z_1+iz_2}~, \hspace{0.5in} {\mathfrak z}_3 = \frac{z_4}{z_1+iz_2} ~. $$ This chart realizes the complement of the line $z_1=z_2=0$ in $\mathbb R\mathbb P^3$ as the totally real submanifold $B$ of $\mathbb{C}^3$ given by \begin{equation} \label{affine} {\mathfrak z}_1\overline{{\mathfrak z}}_1 = 1 ~, \hspace{0.5in} {\mathfrak z}_1\overline{{\mathfrak z}}_2 = {\mathfrak z}_2 ~, \hspace{0.5in} {\mathfrak z}_1\overline{{\mathfrak z}}_3 = {\mathfrak z}_3 ~. \end{equation} For each $a, b\in \mathbb{C}$, the disk $$ |{\mathfrak z}_1| \leq 1~, \hspace{0.5in} {\mathfrak z}_2 = a + \bar{a} {\mathfrak z}_1 ~,\hspace{0.5in} {\mathfrak z}_3 = b+\bar{b}{\mathfrak z}_1 $$ has boundary on $B$, and belongs to the family under discussion. Notice that, as promised, these unparameterized disks depend on $4$ real parameters. Of course, each of these may in turn be realized as a {\em parameterized} holomorphic disk in a $3$-parameter family of ways by also setting $$ {\mathfrak z}_1= \frac{c\zeta + d}{\overline{c}+ \overline{d}\zeta} ~, ~~~~|\zeta|\leq 1~, ~~~|c|^2-|d|^2 =1. $$ In this manner, we actually obtain a $7$-parameter family of {\em parameterized} disks. In any case, it will suffice for our purposes to primarily focus on the particular parameterized disk $${\mathfrak z}_1=\zeta ~, \hspace{0.5in} |\zeta |\leq 1~, \hspace{0.5in} {\mathfrak z}_2={\mathfrak z}_3=0 ~, $$ since all the other disks in the family can be obtained from this one via the action of $PSL(4,\mathbb R)$ on $\mathbb C\mathbb P_3$.
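For the reader's convenience, let us verify directly that these disks really do have boundary on $B$: when ${\mathfrak z}_1\overline{{\mathfrak z}}_1=1$, we have $${\mathfrak z}_1\overline{{\mathfrak z}}_2 = {\mathfrak z}_1 \left( \bar{a} + a\, \overline{{\mathfrak z}}_1\right) = \bar{a}\, {\mathfrak z}_1 + a \left( {\mathfrak z}_1\overline{{\mathfrak z}}_1\right) = a+\bar{a}\, {\mathfrak z}_1 = {\mathfrak z}_2 ~,$$ and similarly ${\mathfrak z}_1\overline{{\mathfrak z}}_3 = {\mathfrak z}_3$, so all three equations of (\ref{affine}) hold along $|{\mathfrak z}_1|=1$.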
We will now appeal to some general results concerning holomorphic disks in $\mathbb{C}^n$ with boundary on a totally real submanifold. Suppose that $X^n\subset \mathbb{C}^n$ is a maximal totally real differentiable submanifold, in the sense that $T\mathbb{C}^n|_X=TX\oplus J(TX)$. The first result we will need is a regularity result \cite{chirka}: \begin{lem}[Chirka] \label{regularity} Suppose that $X^n \subset \mathbb{C}^n$ is a totally real submanifold of class $C^{\ell+1}$, $\ell\geq 2$, and that $\digamma: (D, \partial D)\to (\mathbb{C}^n , X)$ is a $C^1$-map which is holomorphic in the interior of the disk. Then $\digamma$ is actually a $C^{\ell}$ map. \end{lem} Now suppose that $X$ is a maximal totally real submanifold of $\mathbb{C}^n$, and that $\digamma: (D, \partial D)\to (\mathbb{C}^n , X)$ is a holomorphic disk with boundary on $X$. Then $\digamma$ is said to have {\em partial indices} $\kappa_1, \ldots , \kappa_n$ if there is a map $A: D \to GL (n , \mathbb{C} )$ which is holomorphic on the interior of $D$ and continuous up to the boundary such that $TX|_{\digamma(\zeta)}\subset \mathbb{C}^n$ is the real span of the columns of the matrix $$ A(\zeta ) \left[\begin{array}{ccc}\zeta^{\kappa_1/2} & 0 & 0 \\0 & \ddots & 0 \\0 & 0 & \zeta^{\kappa_n/2}\end{array}\right] $$ for all $\zeta\in \partial D$. These partial indices turn out to be well defined up to permutation. Their sum $$\kappa = \kappa_1+ \cdots + \kappa_n$$ is called the {\em Maslov index} of the holomorphic disk $\digamma$. An application of the Banach-space implicit function theorem to the Hilbert transform on the circle leads to the following result \cite{glob,ohrk}: \begin{prop}[Globevnik/Oh] \label{globo} Suppose that $\digamma: D\to \mathbb{C}^n$ is a holomorphic map of the unit disk whose boundary is contained in a totally real submanifold $X$ of class $C^{2\ell+1}$. Suppose, moreover, that all the partial indices $\kappa_1, \ldots , \kappa_n$ of $\digamma$ satisfy $\kappa_j\geq -1$.
Then, for any totally real submanifold $X^\prime$ of $\mathbb{C}^n$ which is sufficiently close to $X$ in the $C^{2\ell+1}$-topology, there is a $(\kappa+n)$-real-parameter family of holomorphic embeddings $(D,\partial D) \hookrightarrow (\mathbb{C}^n , X^\prime)$, where $\kappa = \kappa_1+ \cdots + \kappa_n$ is the Maslov index of $\digamma$. This family is of class $C^\ell$, depends in a $C^\ell$ manner on the choice of $X^\prime$, and sweeps out all holomorphic maps of the disk which satisfy the relevant boundary conditions and which are $C^\ell$ close to $\digamma$. \end{prop} Let us now apply these ideas to the case at hand. If we take $X$ to be the submanifold $B=\mathbb R\mathbb P^3-\mathbb R\mathbb P^1$ of $\mathbb{C}^3$ defined by (\ref{affine}), and consider the holomorphic disk $\digamma: D\to \mathbb{C}^3$ given by $\zeta \mapsto (\zeta, 0 , 0)$ for $|\zeta | \leq 1$, then $TB$ is spanned over $\mathbb R$ by the columns of the matrix $$\left[\begin{array}{ccc}i\zeta & 0 & 0 \\0 & \zeta^{1/2} & 0 \\0 & 0 & \zeta^{1/2}\end{array}\right]$$ for all $\zeta \in \partial D$. The partial indices of this disk are thus $\kappa_1=2$, $\kappa_2=1$, and $\kappa_3=1$, and its Maslov index is consequently $\kappa = 4$. Proposition \ref{globo} thus asserts that the $7$-parameter family of perturbations of $\digamma$ we previously found by hand is actually stable under deformations of $B$. That is, for any $B^\prime$ represented by a section of the normal bundle of $B\subset \mathbb{C}^3$ of small $C^3$ norm on a neighborhood of $\digamma(S^1)\subset B$, we can find a $C^1$ family of parameterized holomorphic disks near $\digamma$, with boundary values in $B^\prime$, which is $C^1$ close to a neighborhood of $\digamma$ in our original $7$-parameter family.
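To verify the claim regarding $TB$, one may simply linearize the defining equations (\ref{affine}) at a boundary point $(\zeta , 0, 0)$, $|\zeta |=1$: a variation $(\delta {\mathfrak z}_1 , \delta {\mathfrak z}_2 , \delta {\mathfrak z}_3)$ is tangent to $B$ iff $$\overline{\zeta}\, \delta {\mathfrak z}_1 \in i\mathbb R \hspace{0.3in}\mbox{ and }\hspace{0.3in} \delta {\mathfrak z}_j = \zeta\, \overline{\delta {\mathfrak z}_j}~, ~~j=2,3,$$ and these conditions say precisely that $\delta {\mathfrak z}_1$ is a real multiple of $i\zeta$, while $\zeta^{-1/2}\, \delta {\mathfrak z}_j$ is real for $j=2,3$.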
Provided the norm of this section is small, each of the new disks will remain embedded, and will meet the hypersurface $${\mathfrak z}_1+{\mathfrak z}_2^2+{\mathfrak z}_3^2=0$$ that represents the quadric $\mathcal Q$ in our affine chart. Let us now give this assertion a more concrete geometrical interpretation. Let $P\subset \mathbb C\mathbb P_3$ be the image of a general $C^\infty$ embedding of $\mathbb R\mathbb P^3$ into $\mathbb C\mathbb P_3$ which satisfies the sole constraint that, with respect to the $C^3$ topology on the space of maps, it lies in a sufficiently small neighborhood ${\mathcal X}$ of the standard embedding. By shrinking ${\mathcal X}$ if necessary, we may assume that every such $P$ is totally real and does not meet the quadric ${\mathcal Q}\subset \mathbb C\mathbb P_3$. Since the complement of ${\mathcal Q}$ is a tubular neighborhood of $\mathbb R\mathbb P^3\subset \mathbb C\mathbb P_3$, any such $P$ may be represented by a smooth section of the normal bundle of $\mathbb R\mathbb P^3$; and since the complex structure $J$ provides an isomorphism between the tangent and normal bundles of $\mathbb R\mathbb P^3$, the freedom in choosing $P$ amounts to that of choosing a vector field on $\mathbb R\mathbb P^3$ of small $C^3$ norm. Proposition \ref{globo}, in conjunction with Lemma \ref{regularity}, now tells us the following: \begin{prop}\label{cadre} Suppose that $P\subset \mathbb C\mathbb P_3$ is the image of a smooth embedding $\mathbb R\mathbb P^3\hookrightarrow \mathbb C\mathbb P_3$ which is sufficiently close to the standard one in the $C^{3}$ topology. Then $P$ contains a uniquely determined smooth family of embedded oriented circles $\ell_x \subset P$, $x\in S^2\times S^2$, each of which bounds an embedded holomorphic disk $D^2\subset \mathbb C\mathbb P_3$ whose relative homology class generates $H_2(\mathbb C\mathbb P_3, P; \mathbb{Z}) \cong \mathbb{Z}$.
The corresponding family of holomorphic disks is smooth, and the interiors of these disks smoothly foliate $\mathbb C\mathbb P_3-P$. \end{prop} In fact, the existence of a $C^1$ family of such holomorphic disks simply follows from elementary Fourier analysis and the inverse function theorem, and so may be rederived by essentially repeating the self-contained arguments given in \cite{lmzoll}. Once this is known, one can then use Lemma \ref{regularity} to conclude that each of the constructed disks is actually smooth, and the smoothness of the constructed family then follows from Proposition \ref{globo} by showing that it locally coincides with the families of disks obtained by perturbing any given smooth disk through disks of increasing regularity. A less elementary, but distinctly compelling, road to the same conclusion would be to appeal to the non-linear elliptic methods that are now standard in the theory of $J$-holomorphic curves \cite{mcsalt}. \section{Constructing Self-Dual Metrics} So far, we have associated a $4$-dimensional space of embedded holomorphic disks with each small perturbation of $\mathbb R\mathbb P^3\subset \mathbb C\mathbb P_3$. To finish our construction, we need to show that this $4$-dimensional parameter space carries a natural self-dual split-signature conformal structure. This will be obtained via the following general mechanism: \begin{prop}\label{machine} Let $M$ be a smooth connected $4$-manifold, and let $\varpi: {\mathcal X}\to M$ be a smooth $\mathbb C\mathbb P_1$-bundle. Let $\varrho : {\mathcal X}\to {\mathcal X}$ be an involution which commutes with the projection $\varpi$, and has as fixed-point set ${\mathcal X}_\varrho$ an $S^1$-bundle over $M$ which disconnects ${\mathcal X}$ into two closed $2$-disk bundles ${\mathcal X}_\pm$ with common boundary ${\mathcal X}_\varrho$. 
Suppose that $\mbox{\cyr D} \subset T_\mathbb{C} {\mathcal X}$ is a distribution of complex $3$-planes on ${\mathcal X}$ such that \begin{itemize} \item $\varrho _* \mbox{\cyr D} = \overline{\mbox{\cyr D}}$; \item the restriction of $\mbox{\cyr D}$ to ${\mathcal X}_+$ is smooth and involutive; \item $\mbox{\cyr D} \cap \overline{\mbox{\cyr D}} = 0$ on ${\mathcal X}-{\mathcal X}_\varrho$; \item $\mbox{\cyr D} \cap \ker \varpi_*$ is the $(0,1)$ tangent space of the $\mathbb C\mathbb P_1$ fibers of $\varpi$; and \item the restriction of $\mbox{\cyr D}$ to a fiber of ${\mathcal X}$ has $c_1= -4$ with respect to the complex orientation. \end{itemize} Then $E=\mbox{\cyr D}\cap T{\mathcal X}_\varrho$ is an integrable distribution of real $2$-planes on ${\mathcal X}_\varrho$, and $M$ admits a unique smooth split-signature self-dual conformal structure $[g]$ for which the $\beta$-surfaces are the projections via $\varpi$ of the integral manifolds of $E$. \end{prop} \begin{proof} Let us begin by noticing that, since $\mbox{\cyr D}= \varrho^*\overline{\mbox{\cyr D}}$ is continuous on the closed sets ${\mathcal X}_+$ and ${\mathcal X}_-$, it is continuous on all of $\mathcal X$. Now let $V^{0,1}$ be the $(0,1)$ tangent space of the fibers. By hypothesis, $V^{0,1}\subset \mbox{\cyr D}$, so that ${{\mho}}= \mbox{\cyr D} /V^{0,1}$ is a well defined rank-$2$ complex vector bundle. Also notice that, since $\mbox{\cyr D} \cap \ker \varpi_* = V^{0,1}$, the fibers of ${{\mho}}$ are carried injectively into $T_\mathbb{C} M$ by $\varpi_*$. 
We may therefore define a continuous map \begin{eqnarray*} \psi : {\mathcal X} & \to& Gr_2(T_\mathbb{C} M) \\ z&\mapsto & \varpi_* ({{\mho}}|_z)= \varpi_* (\mbox{\cyr D} |_z) \end{eqnarray*} which makes the diagrams \setlength{\unitlength}{1ex} \begin{center}\begin{picture}(80,17)(0,3) \put(10,17){\makebox(0,0){${\mathcal X}$}} \put(18,19){\makebox(0,0){$\psi$}} \put(18,5){\makebox(0,0){$M$}} \put(28,17){\makebox(0,0){$Gr_2(T_\mathbb{C} M)$}} \put(11,15.5){\vector(2,-3){6}} \put(25,15.5){\vector(-2,-3){6}} \put(12,17){\vector(1,0){10}} \put(36,10){{and}} \put(52,17){\makebox(0,0){$\mathcal X$}} \put(71,17){\makebox(0,0){$Gr_2(T_\mathbb{C} M)$}} \put(52,5){\makebox(0,0){$\mathcal X$}} \put(71,5){\makebox(0,0){$Gr_2(T_\mathbb{C} M)$}} \put(50,11){\makebox(0,0){$\varrho$}} \put(73,11){\makebox(0,0){$c$}} \put(60,6.5){\makebox(0,0){$\psi$}} \put(60,19){\makebox(0,0){${\psi}$}} \put(71,15.5){\vector(0,-1){9}} \put(52,15.5){\vector(0,-1){9}} \put(53.5,17){\vector(1,0){11}} \put(53.5,5){\vector(1,0){11}} \end{picture}\end{center} commute, where $c$ denotes the map induced by complex conjugation $T_\mathbb{C} M\to T_\mathbb{C} M$. Now let $\zeta$ be a smooth, fiber-wise holomorphic coordinate on $\mathcal X$, and notice that the corresponding vertical vector field $\partial / \partial \overline{\zeta}$ is both smooth and a section of $\mbox{\cyr D}$. Next, near any point of the interior of ${\mathcal X}_+$, let ${\mathfrak w}_1$ and ${\mathfrak w}_2$ be any two local sections of $\mbox{\cyr D}$ which are linearly independent from $\partial / \partial \overline{\zeta}$ and from each other. 
Then the involutivity hypothesis $[C^\infty (\mbox{\cyr D}) , C^\infty (\mbox{\cyr D}) ]\subset C^\infty (\mbox{\cyr D})$ tells us that $$\frac{\partial}{\partial \overline{\zeta}} \left( \varpi_* ({\mathfrak w}_j) \right) = \varpi_* ({\mathcal L}_{\frac{\partial}{\partial \overline{\zeta}}}{\mathfrak w}_j) = \varpi_* \left(\left[ \frac{\partial}{\partial \overline{\zeta}} ,{\mathfrak w}_j \right]\right) \equiv 0 \bmod \Big\langle \varpi_* ({\mathfrak w}_1), \varpi_* ({\mathfrak w}_2)\Big\rangle ,$$ and it follows that $\psi$ is fiber-wise holomorphic on the interior of ${{\mathcal X}}_+$. Since $\psi = c\circ \psi \circ \varrho$, it thus follows that $\psi$ is also fiber-wise holomorphic on the interior of ${{\mathcal X}}_-$. However, $\psi$ is also continuous across ${\mathcal X}_\varrho = {\mathcal X}_+ \cap {\mathcal X}_-$, so this implies that $\psi$ is actually fiber-wise holomorphic on all of ${\mathcal X}$. By construction, the restriction of ${{\mho}}$ to $\varpi^{-1}(x)$ is the pull-back, via $\psi$, of the universal bundle ${\mathbb U}$ over $Gr_2 (T_{\mathbb{C}} M|_x)\cong Gr_2 (\mathbb{C}^4)$. Now consider the Pl\"ucker embedding \begin{eqnarray*} {\mathfrak P} : Gr_2 (T_{\mathbb{C}} M) & \hookrightarrow & \mathbb P (\wedge^2 T_{\mathbb{C}} M)\\ \mbox{span} (w_1, w_2) & \mapsto & [ w_1\wedge w_2] \end{eqnarray*} and the induced map $\hat{\psi} = {\mathfrak P} \circ \psi : {\mathcal X} \to \mathbb P (\wedge^2 T_{\mathbb{C}} M)$. Since ${\mathfrak P}^*\O (-1)=\wedge^2 {\mathbb U}$, we must have $\hat{\psi}^*\O (-1)= \wedge^2 {{\mho}}$. But $V^{0,1}$ is the $(0,1)$ tangent space of $\varpi^{-1}(x)$, and hence $c_1(V^{0,1})=-2$ on any fiber of $\varpi$. On the other hand, $c_1(\mbox{\cyr D})=-4$ on $\varpi^{-1}(x)$, by hypothesis. Adjunction therefore tells us that $c_1({{\mho}}) = -2$ on any fiber. 
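As an illustrative aside (not part of the argument), the fact that the Pl\"ucker embedding lands in a quadric hypersurface of $\mathbb P (\wedge^2 \mathbb{C}^4)$ comes down to the classical relation $p_{12}p_{34}-p_{13}p_{24}+p_{14}p_{23}=0$ among the Pl\"ucker coordinates of $w_1\wedge w_2$, which can be spot-checked numerically; the helper names below are ours:

```python
# Sanity check (illustrative): the Plucker coordinates of span(w1, w2) in C^4
# always satisfy the Klein-quadric relation
#   p12*p34 - p13*p24 + p14*p23 = 0,
# so the image of the Plucker embedding lies on a quadric hypersurface.
import random

def plucker(w1, w2):
    """Return the six Plucker coordinates p_ij = w1_i w2_j - w1_j w2_i."""
    return {(i, j): w1[i] * w2[j] - w1[j] * w2[i]
            for i in range(4) for j in range(i + 1, 4)}

def quadric_relation(p):
    """The defining equation of the Klein quadric in P(Lambda^2 C^4)."""
    return (p[(0, 1)] * p[(2, 3)]
            - p[(0, 2)] * p[(1, 3)]
            + p[(0, 3)] * p[(1, 2)])

rng = random.Random(0)
for _ in range(100):
    # Gaussian-integer entries keep the arithmetic exact
    w1 = [complex(rng.randint(-9, 9), rng.randint(-9, 9)) for _ in range(4)]
    w2 = [complex(rng.randint(-9, 9), rng.randint(-9, 9)) for _ in range(4)]
    assert quadric_relation(plucker(w1, w2)) == 0
print("Plucker relation holds for all sampled 2-planes")
```

The relation is just the coordinate form of $(w_1\wedge w_2)\wedge (w_1\wedge w_2)=0$, i.e. of the decomposability of $w_1\wedge w_2$.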
Thus the restriction of $\hat{\psi}$ to any fiber is a holomorphic map of degree $2$ from $\mathbb C\mathbb P_1$ to the $4$-quadric $Q_4\subset \mathbb C\mathbb P_5$. There are only two possibilities for this map: either it is the inclusion of a non-degenerate plane conic $Q_1$ into $Q_4$, or else it is a ramified double cover of a projective line $\mathbb C\mathbb P_1\subset Q_4$ branched at two points. The latter possibility, however, is excluded by our hypotheses. Indeed, any line $\mathbb C\mathbb P_1\subset Q_4\subset \mathbb C\mathbb P_5$ corresponds to the curve in $Gr_2 (\mathbb{C}^4)$ given by the pencil of all $2$-planes contained in a $3$-dimensional subspace of $\mathbb{C}^4$ and containing some fixed line. If the image of $\varpi^{-1}(x)$ under $\hat{\psi}$ were a line, we would thus have $$ \varpi_* (\mbox{\cyr D}_z)+ \varpi_*(\mbox{\cyr D}_{z^\prime})= \varpi_* ({{\mho}}|_z)+ \varpi_*({{\mho}}|_{z^\prime})\subsetneq T_{\mathbb{C}}M|_x $$ for all $z, z^\prime \in \varpi^{-1}(x)$. However, since $\mbox{\cyr D} + \overline{\mbox{\cyr D}}=T_\mathbb{C} {\mathcal X}$ away from ${\mathcal X}_\varrho$, and because $\varrho^*\mbox{\cyr D} = \overline{\mbox{\cyr D}}$, we actually have $$\varpi_* (\mbox{\cyr D}|_z)+ \varpi_*(\mbox{\cyr D}|_{\varrho (z)})= \varpi_* (\mbox{\cyr D}|_z+ \overline{\mbox{\cyr D}}|_z)= T_{\mathbb{C}}M|_x$$ for all $z\in \varpi^{-1}(x)$ with $\varrho (z)\neq z$. This contradiction shows that $\hat{\psi}[\varpi^{-1}(x)]$ cannot be a line. Thus $\hat{\psi}$ holomorphically includes each fiber of $\varpi$ into $\mathbb P (\wedge^2T_\mathbb{C} M)$ as a non-degenerate conic curve. For each $x$, this conic is cut out by a unique $3$-plane $\Lambda_{-}^{\mathbb{C}}|_x\subset \wedge^2T_{\mathbb{C}} M|_x$. 
The restriction of the wedge product \begin{eqnarray*} \wedge^2T_{\mathbb{C}} M \times \wedge^2T_{\mathbb{C}} M& \to & \wedge^4T_{\mathbb{C}} M \\ (\varphi , \omega ) & \mapsto & \varphi \wedge \omega \end{eqnarray*} to $\Lambda_{-}^{\mathbb{C}}|_x$ is, moreover, always a non-degenerate bilinear form, since, by construction, $\mathbb P (\Lambda_{-}^{\mathbb{C}})$ always meets the quadric $\omega \wedge \omega =0$ in the non-degenerate conic $\hat{\psi}[\varpi^{-1}(x)]$. Now $\hat{\psi}$ is at least smooth on the interior of ${\mathcal X}_+$. By taking the images under $\hat{\psi}$ of three generic smooth local sections of $\varpi$ which avoid ${\mathcal X}_\varrho$, we can thus locally span $\Lambda_-^\mathbb{C}$ by three smooth local sections of $\wedge^2 T_\mathbb{C} M$. Thus $\Lambda_{-}^{\mathbb{C}}\subset \wedge^2T_{\mathbb{C}} M$ is a smooth sub-bundle. Moreover, essentially the same argument shows that $\hat{\psi}$ is smooth on all of $\mathcal X$. Since $\psi \circ \varrho = c\circ \psi$, we must therefore have $\Lambda_{-}^{\mathbb{C}}= \mathbb{C}\otimes \Lambda_-$ for a unique, smooth real vector sub-bundle $\Lambda_-\subset \wedge^2TM$ on which the wedge product is non-degenerate. However, the wedge product must be {\em indefinite} on every fiber of $\Lambda_{-}$, since ${\mathcal X}_\varrho$ meets every fiber of ${\mathcal X}$. Since $$O(3,3)/[O(2,1)\times O(1,2)]= SL(4,\mathbb R) /SO(2,2)$$ it follows that there is a unique smooth split-signature conformal metric $[g]$ on $M$ for which $\Lambda_-$ is the bundle of anti-self-dual bi-vectors for an appropriate orientation of $M$. For each metric $g$ in this conformal class, $\Lambda_-$ then corresponds via index-lowering to the bundle $\Lambda^-\subset \wedge^2$ of real anti-self-dual $2$-forms. 
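As a purely numerical consistency check on the coset identification above (an illustrative aside, with our own helper names), both homogeneous spaces have the same dimension, namely that of the space of conformal classes of signature-$(2,2)$ inner products on $\mathbb{R}^4$:

```python
# Dimension count for O(3,3)/[O(2,1) x O(1,2)] = SL(4,R)/SO(2,2):
# both quotients have dimension 9, the dimension of the space of
# conformal classes of signature-(2,2) inner products on R^4
# (10 symmetric matrix entries minus 1 overall scale).
def dim_O(p, q):
    """dim O(p,q) = n(n-1)/2, where n = p + q."""
    n = p + q
    return n * (n - 1) // 2

def dim_SL(n):
    """dim SL(n,R) = n^2 - 1."""
    return n * n - 1

lhs = dim_O(3, 3) - (dim_O(2, 1) + dim_O(1, 2))   # 15 - (3 + 3)
rhs = dim_SL(4) - dim_O(2, 2)                     # 15 - 6
assert lhs == rhs == 9
print(lhs, rhs)
```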
Now consider the subset of the complex tangent bundle of ${\mathcal X}_\varrho$ defined by $$E_\mathbb{C}= \mbox{\cyr D} \cap T_\mathbb{C} {\mathcal X}_\varrho.$$ Since each fiber of $T_\mathbb{C} {\mathcal X}_\varrho$ has codimension $1$ in $T_\mathbb{C}{\mathcal X}$, and since $T_\mathbb{C} {\mathcal X}_\varrho$ does not contain the $1$-dimensional subspace $V^{0,1}\subset \mbox{\cyr D}$, the subspace $\mbox{\cyr D}$ is always in general position relative to $T_\mathbb{C} {\mathcal X}_\varrho$. Hence $E_\mathbb{C}$ is a smooth distribution of complex $2$-planes on ${\mathcal X}_\varrho$. However, $\varrho$ acts trivially on ${\mathcal X}_\varrho$, and hence $\varrho_*$ acts on $T_\mathbb{C} {\mathcal X}_\varrho$ via the identity. The assumption that $\varrho_*\mbox{\cyr D} = \overline{\mbox{\cyr D}}$ therefore implies that $\overline{E_\mathbb{C}}=E_\mathbb{C}$. Hence $E_\mathbb{C}$ is the complexification of a smooth distribution of real $2$-planes $$E= \mbox{\cyr D} \cap T{\mathcal X}_\varrho$$ on ${\mathcal X}_\varrho$. Since $T{\mathcal X}_\varrho$ and $\mbox{\cyr D}$ are both closed under Lie brackets, it follows that $E$ is Frobenius integrable. Thus ${\mathcal X}_\varrho$ is foliated by $2$-manifolds tangent to $E$. But the inclusion $E_\mathbb{C} \hookrightarrow \mbox{\cyr D}$ induces a canonical isomorphism $E\otimes \mathbb{C}\to {{\mho}}|_{{\mathcal X}_\varrho}$, whereas $\psi$ identifies ${\mathcal X}_\varrho$ with the bundle of real $\beta$-planes for $(M,[g])$. Thus each integral manifold of $E$ is sent via $\varpi$ to a $\beta$-surface of $(M,[g])$, and $[g]$ is therefore self-dual by Proposition \ref{roger}. Moreover, $[g]$ is uniquely determined by this last prescription, since, at each point of $M$, the union of the tangent spaces of these $\beta$-surfaces is precisely the null cone of $[g]$, and the conformal class of any indefinite metric is completely determined by its null cone. 
\end{proof} \begin{thm}\label{bild} Let $P\subset \mathbb C\mathbb P_3$ be a smooth, totally real submanifold which, in the $C^3$ topology, is close to the standard `linear' $\mathbb R\mathbb P^3\subset \mathbb C\mathbb P_3$; and, for clarity, fix a quadric $Q\subset \mathbb C\mathbb P_3$ which is disjoint from $P$. For each $x\in Q \approx S^2 \times S^2$, let $D_x\subset \mathbb C\mathbb P_3$ be the unique holomorphic disk of the family constructed in Proposition \ref{cadre} which passes through $x$. For each $y\in P$, set $$S_y= \{ x \in S^2 \times S^2~|~ y \in D_x\}.$$ Then there is a unique, smooth Zollfrei self-dual split-signature conformal structure $[g]$ on $Q\approx S^2\times S^2$ whose $\beta$-surfaces are exactly the $S_y$, $y\in P$. \end{thm} \begin{proof} Let $M=Q\approx S^2 \times S^2$, and let ${\mathcal X}_+\to M$ be the $2$-disk bundle whose fiber over $x\in M$ is the holomorphic disk $D^2\subset \mathbb C\mathbb P_3$ of the family passing through $x$. Thus there is a tautological smooth map ${\mathzap F}: {\mathcal X}_+\to \mathbb C\mathbb P_3$ which sends the interior of ${\mathcal X}_+$ diffeomorphically onto $\mathbb C\mathbb P_3-P$, and which sends $\partial {\mathcal X}_+\to P$. Recalling that ${\mathzap F}_* : T_\mathbb{C}{\mathcal X}_+\to T_\mathbb{C}\mathbb C\mathbb P_3$ denotes the derivative of this map, let ${\mathzap F}_*^{1,0}: T_\mathbb{C} {\mathcal X}_+\to T^{1,0}\mathbb C\mathbb P_3$ denote the $(1,0)$-component of this derivative, and let $$\mbox{\cyr D} = \ker {\mathzap F}_*^{1,0}\subset T_\mathbb{C}{\mathcal X}_+$$ denote the kernel of this component. Since ${\mathzap F}$ is $C^1$ close to the corresponding map for the flat model, we may assume that ${\mathzap F}_*^{1,0}$ is everywhere of maximal rank, as in the flat case. Thus $\mbox{\cyr D}$ is a smooth complex bundle of rank $3$ on all of ${\mathcal X}_+$. 
Now if $V^{0,1}$ is the $(0,1)$-tangent space of the fibers of the $D^2$-bundle ${\mathcal X}_+\to M$, then $V^{0,1}\subset \mbox{\cyr D}$ because ${\mathzap F}$ is fiber-wise holomorphic. But because the $5$-manifold $\partial {\mathcal X}_+$ is sent to the $3$-manifold $P$ by ${\mathzap F}$, each fiber $$E = \ker {\mathzap F}_*|_{\partial {\mathcal X}_+}$$ has dimension $\geq 2$, and since $(E\otimes \mathbb{C}) \oplus V^{0,1}\subset \mbox{\cyr D}$, we conclude that $E$ is in fact a smooth distribution of real $2$-planes on $\partial {\mathcal X}_+$. Now let ${\mathcal X}_-$ be a second copy of ${\mathcal X}_+$, and define $\mbox{\cyr D}\to {\mathcal X}_-$ to be the push-forward of the distribution of complex $3$-planes $\overline{\mbox{\cyr D}}\to {\mathcal X}_+$ via the tautological diffeomorphism ${\mathcal X}_+\to {\mathcal X}_-$. Similarly, let $V^{0,1}\to {\mathcal X}_-$ be the distribution of complex lines obtained from $\overline{V^{0,1}}\to {\mathcal X}_+$. Let $${\mathcal X} = {\mathcal X}_+\cup_{\partial {\mathcal X}_+} {\mathcal X}_-$$ be the double of ${\mathcal X}_+$. Then we have a canonical projection $\varpi : {\mathcal X}\to M$ which makes ${\mathcal X}$ into a $\mathbb C\mathbb P_1$-bundle with vertical $(0,1)$ tangent space $V^{0,1}$. Moreover, our two definitions of $\mbox{\cyr D}$ agree along the hypersurface $\partial {\mathcal X}_+ = \partial {\mathcal X}_-$, because both coincide with $V^{0,1}\oplus (E\otimes \mathbb{C} )$ along this locus. Moreover, $V^{0,1}= \mbox{\cyr D} \cap \ker \varpi_*$ on all of $\mathcal X$. Let $\varrho : {\mathcal X}\to {\mathcal X}$ be the map which interchanges ${\mathcal X}_\pm$ via the tautological diffeomorphism. This is an involution of ${\mathcal X}$ which commutes with $\varpi$, and its fixed-point set ${\mathcal X}_\varrho= \partial {\mathcal X}_+$ divides ${\mathcal X}$ into two disk bundles over $M$. By construction, we have $\varrho_*\mbox{\cyr D} = \overline{\mbox{\cyr D}}$. 
Moreover, $\mbox{\cyr D}$ is smooth, involutive, and satisfies $\mbox{\cyr D} \cap \overline{\mbox{\cyr D}}=0$ on $\mbox{Int } {\mathcal X}_+$, since the diffeomorphism ${\mathzap F}$ from $\mbox{Int } {\mathcal X}_+$ to $\mathbb C\mathbb P_3-P$ sends $\mbox{\cyr D}$ to $T^{0,1}\mathbb C\mathbb P_3$. Finally, observe that any holomorphic disk $(D^2, S^1)\to (\mathbb C\mathbb P_3, P)$ obtained by restricting ${\mathzap F}: {\mathcal X}_+\to \mathbb C\mathbb P_3$ to a fiber of ${\mathcal X}_+\to M$ must have Maslov index $\kappa= 4$, since each such disk is obtained by deforming a disk with $\kappa = 4$ from our flat model, and the Maslov index is invariant under deformations \cite{mcsalt}. This index is by definition the winding number of $\wedge^3TP$ in $\wedge^3 T^{1,0}\mathbb C\mathbb P_3$ along $S^1= \partial D^2$, relative to any trivialization of $T^{1,0}\mathbb C\mathbb P_3$ over $D^2$, remembering that the space of real lines in $\mathbb{C}$ is exactly $\mathbb R\mathbb P^1\approx S^1$. But recall that ${\mathzap F}_*^{1,0}$ is surjective, so that we can identify ${\mathzap F}^*T^{1,0}\mathbb C\mathbb P_3$ with the quotient $T_\mathbb{C}{\mathcal X}_+/\mbox{\cyr D}$. Since $\wedge^6 T_\mathbb{C} {\mathcal X}_+$ is the complexification of the trivial real bundle $\wedge^6 T{\mathcal X}_+$, it follows by adjunction that this Maslov index must be {\em minus} the Maslov index of $T{\mathcal X}_\varrho/E \subset \mbox{\cyr D}$. However, the latter winding number is also exactly the degree of $\wedge^3 \mbox{\cyr D}$ on a $\mathbb C\mathbb P_1$ fiber, since, by construction, $\mbox{\cyr D}$ is defined on the double of the disk precisely by gluing $\mbox{\cyr D}|_{D^2}$ to $\overline{\mbox{\cyr D}}|_{\overline{D^2}}$ so as to send $T{\mathcal X}_\varrho/E$ to itself. Thus the evaluation of $c_1(\mbox{\cyr D})$ on a fiber of $\varpi$ is exactly $-\kappa = -4$. The above arguments show that all the hypotheses of Proposition \ref{machine} are satisfied. 
Thus $M=S^2\times S^2$ admits a unique self-dual split-signature metric $[g]$ for which the $\beta$-surfaces are the projections to $M$ of the integral manifolds of $E\to {\mathcal X}_\varrho$. By construction, however, $E$ is precisely the vertical tangent bundle of the smooth submersion $${\mathzap F}|_{\partial {\mathcal X}_+}: {\partial {\mathcal X}_+}\to P,$$ so these $\beta$-surfaces are exactly of the form $$S_y = \varpi [{\mathzap F}^{-1}(y)]$$ for $y\in P\subset \mathbb C\mathbb P_3$, which is to say that $$S_y= \{ x\in M ~|~ y \in D_x\},$$ as promised. Now ${\mathcal X}_+$ is diffeomorphic to the Chern class $(2,2)$ disk bundle over $S^2\times S^2$. Thus ${\mathcal X}_\varrho= \partial {\mathcal X}_+$ is the Chern class $(2,2)$ circle bundle over $S^2\times S^2$, and so is diffeomorphic to $\mathbb R\mathbb P_3 \times S^2$. Moreover, since every disk of the family represents the generator of $H_2 (\mathbb C\mathbb P_3,P)$, the long exact homology sequence $$\cdots \to H_2 (\mathbb C\mathbb P_3, P;\mathbb{Z} ) \to H_1 (P; \mathbb{Z} ) \to H_1 (\mathbb C\mathbb P_3; \mathbb{Z})\to \cdots $$ of the pair $(\mathbb C\mathbb P_3 ,P)$ tells us that the boundary of each disk generates $\pi_1 (P) = H_1 (P)\cong \mathbb{Z}_2$. It follows that ${\mathzap F}|_{{\mathcal X}_\varrho}$ induces a surjection $\pi_1 ({\mathcal X}_\varrho)\to \pi_1 (P)$. But ${\mathzap F}|_{{\mathcal X}_\varrho}$ is a proper submersion, and therefore a smooth fibration, so we have the long exact homotopy sequence $$\cdots \to \pi_2 (P) \to \pi_1 (S_y)\to \pi_1 ({\mathcal X}_\varrho)\to \pi_1 (P)\to \pi_0 (S_y) \to \cdots $$ and it follows that the compact surface $S_y$ is connected and simply connected. Thus every $\beta$-surface of $(M,[g])$ is a $2$-sphere. Hence $[g]$ is Zollfrei by Theorem \ref{tasty}, and we are done. \end{proof} {\bf Theorem \ref{zfsd}} now follows from Theorems \ref{zorro} and \ref{bild}. 
\section{The K\"ahler Case} \label{kahler} The prototypical example which motivated our entire investigation of Zollfrei self-dual manifolds was the indefinite product metric $g_0= \pi_1^*h - \pi_2^*h$ on $S^2\times S^2$. Notice, however, that this metric may be considered as an indefinite K\"ahler metric on $\mathbb C\mathbb P_1\times \mathbb C\mathbb P_1$, with K\"ahler form $\omega =\pi_1^*\mu - \pi_2^*\mu$, where $\mu$ denotes the area form of $(S^2,h)$. Notice that, since $h$ has constant Gauss curvature $1$, the scalar curvature of $g_0$ is $s=s_{g_0}=\pi_1^*s_h-\pi_2^*s_h=2-2= 0$. A pseudo-Riemannian metric with this last property is said to be {\em scalar-flat}. Now, more generally, suppose that we have a scalar-flat indefinite K\"ahler metric $g$ on a compact complex surface $(M^4,{\mathfrak J})$. From the outset, we choose to give $M$ the usual {\em complex} orientation, but we will also need to systematically consider the reverse-oriented version $\overline{M}$ of our manifold. To see why, observe that the K\"ahler form $\omega$ of $(M,g,{\mathfrak J})$ is a closed non-degenerate $2$-form on $M$, and so may be considered as a {\em symplectic form}. However, such a form determines an orientation, and in the present case this orientation is the {\em opposite} of the complex-manifold orientation; thus, $(\overline{M}, \omega)$ becomes a symplectic $4$-manifold, oriented according to standard symplectic conventions. Notice that while $\omega$ is a self-dual $2$-form on $(\overline{M}, g)$, it is instead {\em anti-self-dual} on $(M,g)$. With this potential source of confusion kept clearly in focus, standard Riemannian folklore \cite{bes4} immediately tells us the following: \begin{lem} \label{gau} Let $(M^4,{\mathfrak J},g)$ be a complex surface with an {\em indefinite} K\"ahler metric. Then $(M,g)$ is self-dual iff $g$ is scalar-flat. 
\end{lem} \begin{proof} The curvature of any K\"ahler manifold, indefinite or not, is necessarily of type $(1,1)$, so that the corresponding curvature operator ${\mathcal R}$ kills $\Lambda^{2,0}\oplus \Lambda^{0,2}$, and amounts to a linear map ${\mathcal R} : \Lambda^{1,1}\to \Lambda^{1,1}$. Now observe that if $(M^4,{\mathfrak J},g)$ is an {\em indefinite} K\"ahler manifold, equipped with the complex orientation, we have \begin{eqnarray*} \Lambda^{1,1} & = & \mathbb{C} \omega \oplus \Lambda^+\\ \Lambda^-_\mathbb{C} & = & \mathbb{C} \omega\oplus \Lambda^{2,0}\oplus \Lambda^{0,2} \end{eqnarray*} so that the $$\left[W_-+\frac{s}{12}\right] : \Lambda^-\to \Lambda^-$$ block of the curvature operator kills $\Lambda^{2,0}\oplus \Lambda^{0,2}$, and sends $\mathbb{C} \omega$ to itself. Since $s/4$ is the trace of this block, and since $W_-$ is its trace-free part, we therefore have $$W_-= \left(\begin{array}{ccc}-\frac{s}{12} & & \\ & -\frac{s}{12} & \\ & & \frac{s}{6}\end{array}\right)$$ in an appropriate basis. Hence $W_-=0$ iff $s=0$, as claimed. \end{proof} Now the structure group of an indefinite K\"ahler surface is $U(1,1)$, which is a connected Lie group. Every indefinite K\"ahler surface therefore carries a canonical space-time orientation. As a consequence, Theorem \ref{s2xs2} tells us that any Zollfrei scalar-flat indefinite K\"ahler surface is homeomorphic to $S^2\times S^2$. However, a much stronger assertion is actually true: \begin{thm} \label{bihol} Let $(M^4,{\mathfrak J},g)$ be a complex surface with a scalar-flat {\em Zollfrei} indefinite K\"ahler metric. Then $(M,{\mathfrak J})$ is biholomorphic to $\mathbb C\mathbb P_1\times \mathbb C\mathbb P_1$. \end{thm} \begin{proof} Let $S$ be any $\beta$-surface in $(M,g)$. At each point of $S$ the image of $\Lambda^2TS$ in $\Lambda^2$ is then the span of a real, non-zero simple element of $\Lambda^-$. 
But, up to a positive constant, the general such simple $2$-form can be written uniquely as $\omega + \phi + \overline{\phi}$, where $\phi\in \Lambda^{2,0}$ is any element of unit norm. It follows that the restriction of $\omega$ to $S$ is non-zero at every point. Hence $S$ is a symplectic submanifold of $(\overline{M}, \omega )$. Moreover, since $S$ is orientable and Lemma \ref{beta} asserts that $S$ is either a $2$-sphere or a projective plane, we have $S\approx S^2$. Proposition \ref{structure} thus tells us that $[S]\cdot [S] =-2$ in $M$, and hence that $[S]\cdot [S] =+2$ in the reverse-oriented manifold $\overline{M}$. Hence $(\overline{M}, \omega )$ contains a symplectic $2$-sphere $S$ of positive self-intersection, and a fundamental result of McDuff \cite{mcrules} therefore tells us that $(\overline{M}, \omega )$ must be diffeomorphic to either $S^2 \times S^2$ or to $\mathbb C\mathbb P_2 \# k \overline{\mathbb C\mathbb P}_2$, $k\geq 0$. Since $M$ is spin by Proposition \ref{s2xs2}, it therefore follows that $M$ is diffeomorphic to $S^2\times S^2$. In particular, this shows that $M$ is a minimal complex surface which admits a Riemannian metric of positive scalar curvature, and Seiberg-Witten theory \cite{FM,spccs} therefore tells us $(M,{\mathfrak J})$ must have Kodaira dimension $-\infty$. By the Kodaira-Enriques classification \cite{bpv,GH}, our simply connected complex surface $(M,{\mathfrak J})$ is therefore {\em rational}, in the sense of being obtained from $\mathbb C\mathbb P_2$ by blowing up and down, and since $M$ is also spin, it follows that $(M,{\mathfrak J})$ is an even Hirzebruch surface $\mathbb P [\O \oplus \O (2m)]\to \mathbb C\mathbb P_1$. However, Kamada \cite{kamada} has shown that the existence of scalar-flat indefinite K\"ahler metrics is obstructed for $m\neq 0$ because a generalized form of the Futaki invariant \cite{fuma} is non-zero in all these cases. 
Hence $m=0$, and $(M,{\mathfrak J})$ must be biholomorphic to $\mathbb C\mathbb P_1\times\mathbb C\mathbb P_1$. \end{proof} \begin{remark} The above result would certainly become false if the Zollfrei hypothesis were dropped. For example, one can easily construct scalar-flat indefinite K\"ahler metrics on the product $\Sigma \times \Sigma$ of any Riemann surface $\Sigma$ with itself, just by setting $g= \pi_1^*h -\pi_2^*h$, where $h$ is a metric on $\Sigma$ of constant sectional curvature. \end{remark} \begin{thm}\label{twisthol} Let $(M,g,{\mathfrak J})$ be a scalar-flat Zollfrei indefinite K\"ahler surface. Then its twistor space $(Z,J)$, in the sense of Definition \ref{chubby}, is biholomorphic to $\mathbb C\mathbb P_3$. Moreover, this biholomorphism determines a preferred non-singular quadric $Q\subset \mathbb C\mathbb P_3$ obtained by thinking of ${\mathfrak J}$ as a section of ${\mathcal Z}_+\to M$. \end{thm} \begin{proof} The complex structure ${\mathfrak J}$ defines a section of $\mbox{Int }{\mathcal Z}_+$, and because ${\mathfrak J}$ is parallel, the image of this section is tangent to ${\zap E}\subset {\zap D}$. The composition of $\Psi$ with this section is therefore a holomorphic embedding of $(M, {\mathfrak J})$ into $(Z,J)$. Moreover, as we saw in the proof of Theorem \ref{difftwist}, the normal bundle $\nu$ of $(M, {\mathfrak J})$ has Chern class $c_1(\nu ) = c_1 (M, {\mathfrak J})$. Since Theorem \ref{bihol} tells us that $(M, {\mathfrak J})\cong \mathbb C\mathbb P_1\times \mathbb C\mathbb P_1$, this therefore gives us a hypersurface $Q\subset Z$ cut out by a section of the corresponding divisor line bundle $L\to Z$, such that $Q\cong \mathbb C\mathbb P_1\times \mathbb C\mathbb P_1$ and such that $L|_Q\cong {\mathcal O}(2,2)$. For each integer $m$, we therefore have an exact sequence $$0\to {\mathcal O}(L^{m-1}) \to {\mathcal O}(L^{m}) \to {\mathcal O}_Q(2m,2m)\to 0$$ of sheaves on $Z$. 
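The dimension counts that drive this argument are elementary, and can be spot-checked by counting monomials (an illustrative aside; the identifications of holomorphic sections with homogeneous polynomials are the standard ones): sections of $\O(d)$ on $\mathbb C\mathbb P_3$ correspond to degree-$d$ monomials in four variables, so that $h^0 = \binom{d+3}{3}$, while sections of $\O(a,b)$ on $\mathbb C\mathbb P_1\times \mathbb C\mathbb P_1$ give $h^0=(a+1)(b+1)$:

```python
# Spot-check of the dimension counts used here:
#  * h^0(CP_3, O(2m)) = C(2m+3, 3) = (2m+1)(2m+2)(2m+3)/6,
#    by counting degree-2m monomials in 4 homogeneous variables;
#  * h^0(CP_1 x CP_1, O(2m, 2m)) = (2m+1)^2, by counting bidegree-(2m,2m)
#    monomials, one degree in each factor.
from itertools import combinations_with_replacement
from math import comb

def monomials(num_vars, degree):
    """Number of monomials of the given total degree in num_vars variables."""
    return sum(1 for _ in combinations_with_replacement(range(num_vars), degree))

for m in range(6):
    d = 2 * m
    assert monomials(4, d) == comb(d + 3, 3) \
        == (2*m + 1) * (2*m + 2) * (2*m + 3) // 6
    assert monomials(2, d) ** 2 == (2*m + 1) ** 2
print("monomial counts match the Riemann-Roch polynomial")
```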
Since $$H^q (\mathbb C\mathbb P_1\times \mathbb C\mathbb P_1 , {\mathcal O}(2m, 2m )) = 0~~\forall q, m > 0, $$ it follows that, as $m\to \infty$, $h^1(Z, {\mathcal O}(L^{m}))$ is non-increasing, while $h^2(Z, {\mathcal O}(L^{m}))$ and $h^3(Z, {\mathcal O}(L^{m}))$ remain constant. Hence \begin{eqnarray*} \chi (Z, {\mathcal O}(L^{m})) & = & h^0 ({\mathcal O}(L^{m})) -h^1 ({\mathcal O}(L^{m})) + h^2 ({\mathcal O}(L^{m})) -h^3 ({\mathcal O}(L^{m})) \\ & = & h^0 (Z, {\mathcal O}(L^{m})) + \mbox{const } ~~~\forall m \gg 0. \end{eqnarray*} However, Theorem \ref{difftwist} tells us that $Z$ is diffeomorphic to $\mathbb C\mathbb P_3$ in a manner sending the Chern classes of $Z$ to the Chern classes of $\mathbb C\mathbb P_3$. Since $c_1(L)= \frac{1}{2}c_1 (Z)$, the Hirzebruch-Riemann-Roch theorem therefore tells us that $$\chi (Z, {\mathcal O}(L^{m}))= \chi (\mathbb C\mathbb P_3 , {\mathcal O}(2m))= \frac{(2m+1)(2m+2)(2m+3)}{6}. $$ Hence $h^0 (Z, {\mathcal O}(L^{m}))$ grows cubically in $m$. The complex $3$-fold $(Z,J)$ is therefore Moishezon. Since $Z$ is also diffeomorphic to $\mathbb C\mathbb P_3$, Theorem \ref{nakol} therefore tells us that $(Z,J)$ is biholomorphic to $\mathbb C\mathbb P_3$. Moreover, $Q\subset Z$ is carried by this biholomorphism to a non-singular hypersurface of degree $2$. \end{proof} Now, which totally real submanifolds of $\mathbb C\mathbb P_3$ correspond to scalar-flat self-dual metrics? The following result provides the key to the answer. \begin{prop} \label{constr} Let $(M, g, {\mathfrak J})$ be a Zollfrei indefinite scalar-flat K\"ahler manifold, let $Q\subset Z\cong \mathbb C\mathbb P_3$ be the quadric constructed in Theorem \ref{twisthol}, and let $P=\Psi (F)$ be the space of $\beta$-surfaces in $M$. Then there is a meromorphic $3$-form $\Omega$ on $Z$ which is holomorphic and non-zero on $Z-Q$ and which has the property that its pull-back to $P$ is a real $3$-form. 
\end{prop} \begin{proof} Consider a pseudo-orthonormal frame $e_1 , \ldots, e_4$ on some region of $M=\mathbb C\mathbb P_1\times \mathbb C\mathbb P_1$ in which $$\omega = \sqrt{2}\varphi_1 = e^1\wedge e^2 -e^3\wedge e^4.$$ Since $\nabla\omega=0$, we have $$\theta^2_1= \theta^1_2=\theta^3_1=\theta^1_3=0,$$ so the connection on $\Lambda^-$ is determined by a single $1$-form $$\theta = \theta_3^2=-\theta_2^3.$$ The distribution ${\zap D}$ is thus spanned by \begin{eqnarray*} {\mathfrak w}_1 & = & (\zeta^2+1)e_1 -2\zeta e_3 + (\zeta^2-1)e_4 \\ & & -\frac{1+\zeta^2}{2}\Big[(\zeta^2+1)\theta_1 - 2\zeta\theta_3 + (\zeta^2-1)\theta_4 \Big] \frac{\partial}{\partial \zeta}\\ {\mathfrak w}_2 & = & (\zeta^2+1)e_2+ (\zeta^2-1)e_3 + 2\zeta e_4 \\ & & -\frac{1+\zeta^2}{2}\Big[(\zeta^2+1)\theta_2 + (\zeta^2-1) \theta_3 + 2\zeta \theta_4 \Big] \frac{\partial}{\partial \zeta} \end{eqnarray*} and $\partial/\partial \overline{\zeta}$, where $\theta_j=\theta (e_j)$. However, $\varphi_2+ i \varphi_3$ is a unit section of the canonical line bundle of $(M,g, {\mathfrak J})$, and \begin{eqnarray} d \varphi_1 & = & 0 \nonumber \\ d \varphi_ 2 & = & -\theta \wedge \varphi_3 \label{oye} \\ d \varphi_3 & = & \theta \wedge \varphi_2 \nonumber \end{eqnarray} Hence $d\theta$ is just the Ricci form $\rho$ of $(M,g, {\mathfrak J})$. But the Ricci form of any K\"ahler manifold is of type $(1,1)$, and in our case $\rho \wedge \omega =0$, since $(M,g, {\mathfrak J})$ is assumed to be scalar-flat. We thus conclude that \begin{equation} \label{curva} d\theta \wedge \varphi_{\zap j}=0, ~~~ {\zap j} = 1,2, 3. \end{equation} We remark in passing that this is simply a special case of a more general fact: namely, $\Lambda^-$ has self-dual curvature on any scalar-flat self-dual $4$-manifold. 
Let us now set \begin{eqnarray*} \Omega&=& -\frac{[(1+\zeta^2) \varphi_1 + (1-\zeta^2) \varphi_2 -2\zeta \varphi_3]\wedge [2d\zeta + (1+\zeta^2)\theta]}{(1+\zeta^2)^2}\\ &=& (\varphi_1+ \cos t ~\varphi_2 + \sin t ~\varphi_3 ) \wedge (dt- \theta ) \end{eqnarray*} where $$t= -2\tan^{-1}\zeta = i \log (1+i\zeta ) - i \log (1-i\zeta ).$$ The restriction of this form to $F=\partial {\mathcal Z}_+$ is a real, geometrically meaningful, and globally defined $3$-form. Indeed, $\varphi_1+ \cos t ~\varphi_2 + \sin t ~\varphi_3$ is the tautological $2$-form on $F$, thought of as the space of those real null anti-self-dual $2$-forms for which the inner product with the K\"ahler form $\omega$ is $\sqrt{2}$; and $dt- \theta$ is the principal connection $1$-form of the unit canonical bundle of $(M,g, {\mathfrak J})$. Since $\Omega$ is the unique analytic continuation of $\Omega|_F$ up the fiber disks of ${\mathcal Z}_+\to M$, this shows that $\Omega$ is globally defined on ${\mathcal Z}_+-Q$, where $Q$ is the image of the section ${\mathfrak J}$, which is represented by $\zeta =i$. Next, notice that $\Omega$ annihilates ${\mathfrak w}_1$, ${\mathfrak w}_2$, and $\partial/\partial \overline{\zeta}$. Thus $\Omega$ is a $(3,0)$-form on $Z-(P\cup Q)$. 
Moreover, equations (\ref{oye}) and (\ref{curva}) tell us that \begin{eqnarray*} d\Omega & = & d(\varphi_1+ \cos t ~\varphi_2 + \sin t ~\varphi_3 ) \wedge (dt- \theta )\\ &&\hspace{1cm} + (\varphi_1+ \cos t ~\varphi_2 + \sin t ~\varphi_3 ) \wedge d(dt- \theta ) \\&=& ( -\sin t ~dt\wedge \varphi_2 + \cos t ~d \varphi_2 +\cos t ~ dt\wedge \varphi_3 +\sin t ~ d \varphi_3 )\wedge ( dt - \theta )\\ &&\hspace{1cm} - (\varphi_1+ \cos t ~\varphi_2 + \sin t ~\varphi_3 ) \wedge d \theta \\ &=& ( -\sin t ~dt\wedge \varphi_2 - \cos t ~\theta \wedge \varphi_3 +\cos t ~ dt\wedge \varphi_3 +\sin t ~ \theta \wedge \varphi_2 )\wedge ( dt - \theta ) \\ &=& \sin t ~dt\wedge \varphi_2 \wedge \theta - \cos t ~\theta \wedge \varphi_3 \wedge dt -\cos t ~ dt\wedge \varphi_3 \wedge \theta +\sin t ~ \theta \wedge \varphi_2 \wedge dt \\ &=& 0 \end{eqnarray*} so the $(3,0)$-form $\Omega$ is actually $\overline{\partial}$-closed on $Z-P$, where it is therefore a meromorphic $3$-form with only a pole of order $2$ along $Q$. Moreover, the restriction of $\Omega$ to $\partial {\mathcal Z}_+ =F$ is a real closed $3$-form which kills the tangent space of the foliation $\mathscr F$, since it annihilates ${\mathfrak w}_1$ and ${\mathfrak w}_2$; thus $\Omega|_F$ is actually the pull-back of a real $3$-form on $P$. This shows that $\Omega$ descends to a continuous $3$-form on $Z-Q$ which is holomorphic on the complement of $P$. It is therefore holomorphic even across $P$, by an iterated application of the Weierstrass removable singularities theorem. Identifying $Z$ with $\mathbb C\mathbb P_3$ as in Theorem \ref{twisthol}, $\Omega$ thus becomes a meromorphic $3$-form on $\mathbb C\mathbb P_3$ with a double pole at a quadric $Q$, and its pull-back to the totally real submanifold $P\subset \mathbb C\mathbb P_3-Q$ is real, as promised. 
\end{proof} Analogy with Pontecorvo's characterization \cite{mano} of the twistor spaces of positive-definite scalar-flat K\"ahler metrics would lead one to expect that the converse is also true. Fortunately, this is indeed the case: \begin{prop} \label{transf} Let $(M, [g])$ be a space-time-oriented Zollfrei self-dual $4$-manifold whose twistor space $(Z,J)$ is biholomorphic to $\mathbb C\mathbb P_3$. Suppose that there is a quadric $Q\subset Z\cong \mathbb C\mathbb P_3$ such that $P\cap Q= \emptyset$, and that there is a meromorphic $3$-form $\Omega$ on $Z$ which is holomorphic and non-zero on $Z-Q$ and which has the property that its pull-back to $P$ is a real $3$-form. Then $Q$ determines an integrable complex structure ${\mathfrak J}$ on $M$ such that $(M,{\mathfrak J})\cong \mathbb C\mathbb P_1\times \mathbb C\mathbb P_1$, and the conformal class $[g]$ contains a scalar-flat metric $g$ which is indefinite K\"ahler with respect to the complex structure $\mathfrak J$. Moreover, this metric is uniquely determined up to an overall multiplicative constant. \end{prop} \begin{proof} Since $Q$ represents double the generator of $H^2 (\mathbb C\mathbb P_3 , \mathbb{Z})$, it generates $H^2 (\mathbb C\mathbb P_3 - P, \mathbb{Z})= H^2 ({\mathcal Z}_+ , \mathbb{Z})$, and so has intersection number $1$ with any fiber disk in ${\mathcal Z}_+$. Thus $Q$ represents a section of $\mbox{Int }{\mathcal Z}_+$, and may be interpreted as an almost-complex structure $\mathfrak J$. Moreover, the induced projection $Q\to M$ is a diffeomorphism, and the pull-back of $\mathfrak J$ to $Q$ is exactly the given complex structure on $Q\cong \mathbb C\mathbb P_1\times \mathbb C\mathbb P_1$, so $\mathfrak J$ is, in particular, integrable. Near an arbitrary point of $M$, choose a local pseudo-orthonormal frame so that $e_2= {\mathfrak J} e_1$ and $e_4= {\mathfrak J}e_3$. Then $Q$ is represented in the corresponding local coordinates on ${\mathcal Z}_+$ by $\zeta =i$. 
Now pull $\Omega$ back to ${\cal Z}_+$, and observe that we must then have $$\Omega= - \frac{{\zap f}}{(1+\zeta^2)^2} [(1+\zeta^2) \varphi_1 + (1-\zeta^2) \varphi_2 -2\zeta \varphi_3]\wedge [2d\zeta - (1-\zeta^2) \theta_1^3 - 2\zeta \theta_1^2 + (1+\zeta^2) \theta_3^2]$$ for some function $\zap f$ on ${\mathcal Z}_+$, since $\Omega$ annihilates ${\mathfrak w}_1$, ${\mathfrak w}_2$, and $\partial /\partial \overline{\zeta}$. Moreover, this function ${\zap f}$ is holomorphic for $\zeta \neq i$, bounded on the entire half-plane, and real when $\zeta$ is real. Hence $\zap f$ is independent of $\zeta$, by Liouville's Theorem and the reflection principle. In particular, the $\zeta$-derivative of $\zap f$ vanishes at $\zeta = i$, so the residue $\omega$ of $\Omega$ at $\zeta = i$ is a multiple of $\varphi_1$. However, this residue is also a closed nowhere-zero $2$-form on $M$, as, up to an overall constant, it may instead be obtained by restricting $\Omega$ to $F= \partial {\mathcal Z}_+ = \Psi^{-1}(P)$ and integrating along the fibers of ${\zap p}: F\to M$. But this means that $\omega$ is the K\"ahler form with respect to ${\mathfrak J}$ of an indefinite K\"ahler metric $g$ in the self-dual conformal class $[g]$. Since such a metric must also be scalar-flat by Lemma \ref{gau}, the claim therefore follows. \end{proof} {\bf Theorem \ref{D}} now follows immediately from Propositions \ref{constr} and \ref{transf}, since a projective transformation is all that is needed to arrange for the quadric $Q$ to be given by $z_1^2 + z_2^2+z_3^2+z_4^2=0$, and for the associated $3$-form to be some real constant times $$ \Omega = \frac{\left(z_j\frac{\partial}{\partial z_j}\right) ~\lrcorner ~ (dz^1 \wedge dz^2 \wedge dz^3\wedge dz^4)}{[z_1^2 + z_2^2+z_3^2+z_4^2]^2}. $$ Of course, requiring that the pull-back of $\Omega$ to $P$ be real has been re-interpreted in the statement of {Theorem \ref{D}} as the condition that the pull-back of $\phi = \Im m ~\Omega$ should vanish.
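It is perhaps worth observing that, for the standard $\mathbb R\mathbb P^3\subset \mathbb C\mathbb P_3$, this reality condition can be checked by inspection: on the real locus we may take the homogeneous coordinates $z_j=x_j$ to be real, so that $$ \Omega |_{\mathbb R\mathbb P^3}= \frac{\left(x_j\frac{\partial}{\partial x_j}\right) ~\lrcorner ~ (dx^1 \wedge dx^2 \wedge dx^3\wedge dx^4)}{[x_1^2 + x_2^2+x_3^2+x_4^2]^2} $$ is manifestly real, and $\phi = \Im m ~\Omega$ therefore pulls back to zero on $\mathbb R\mathbb P^3$.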
It remains only to ask whether there are many submanifolds $P$ near $\mathbb R\mathbb P^3\subset \mathbb C\mathbb P_3$ on which $\phi= \Im m ~\Omega$ vanishes. In fact, the condition in question is a weakening of the {\sl special Lagrangian} condition studied by McLean \cite{mclean}, and similar arguments will now show that such submanifolds exist in considerable profusion: \begin{prop} For any integer $k\geq 1$ and any $\alpha \in (0,1)$, the space of compact $C^{k,\alpha}$ totally real submanifolds $P\subset \mathbb C\mathbb P_3-Q$ near $\mathbb R\mathbb P^3$ on which $\phi = \Im m ~ \Omega$ vanishes is a Banach manifold whose tangent space at $P$ consists of real $C^{k,\alpha}$ vector fields $v$ on $P$ for which $\mbox{div }v =0$ with respect to the standard volume form on $\mathbb R\mathbb P^3$. \end{prop} \begin{proof} The normal bundle of $\mathbb R\mathbb P^3\subset \mathbb C\mathbb P_3$ may be identified with $T\mathbb R\mathbb P^3$ via $J$, so some tubular neighborhood of $\mathbb R\mathbb P^3$ must be diffeomorphic to $T\mathbb R\mathbb P^3$. In fact, we can even take this tubular neighborhood to be all of $\mathbb C\mathbb P_3-Q$ by invoking the real-analytic diffeomorphism $$\gimel : T\mathbb R\mathbb P^3\longrightarrow \mathbb C\mathbb P_3-Q $$ given by $$\pm (\vec{x},\vec{y})\mapsto \Big[\vec{x}+ i\frac{\vec{y}}{\sqrt{1+|\vec{y}|^2}}\Big]$$ for $\vec{x}, \vec{y}\in \mathbb R^4$ with $|\vec{x}|^2=1$ and $\vec{x}\cdot \vec{y}=0$. Thus, for any integer $k\geq 1$ and any $\alpha \in (0,1)$, each real-valued $C^{k,\alpha}$ vector field $v$ on $\mathbb R\mathbb P^3$ defines a new embedding \begin{eqnarray*} \saturn_v: \mathbb R\mathbb P^3 &\longrightarrow & \mathbb C\mathbb P_3- Q \\ y & \longmapsto & \gimel (Jv_y) \end{eqnarray*} and every other compact submanifold of $\mathbb C\mathbb P_3$ which is $C^{k,\alpha}$ close to $\mathbb R\mathbb P^3$ can be so parameterized in a unique manner.
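Let us also note, in passing, that the image of $\gimel$ really does avoid the quadric, here taken in the normalized form $Q= \{ z_1^2+z_2^2+z_3^2+z_4^2=0\}$: for $|\vec{x}|=1$ and $\vec{x}\cdot \vec{y}=0$ one has $$ \sum_{j=1}^4 z_j^2 = |\vec{x}|^2 - \frac{|\vec{y}|^2}{1+|\vec{y}|^2} + \frac{2i\, \vec{x}\cdot \vec{y}}{\sqrt{1+|\vec{y}|^2}} = \frac{1}{1+|\vec{y}|^2}\neq 0\,, $$ so $\gimel$ does indeed take values in $\mathbb C\mathbb P_3-Q$.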
Now let ${\mathfrak B}^{k,\alpha}$ be the Banach space of $C^{k,\alpha}$ vector fields on $\mathbb R\mathbb P^3$, and let ${\mathfrak C}^{k,\alpha}$ be the Banach space of $C^{k,\alpha}$ real-valued $3$-forms on $\mathbb R\mathbb P^3$ with integral $0$ on $\mathbb R\mathbb P^3$. Let $\mu$ be the standard volume form on $\mathbb R\mathbb P^3$. We may then define a smooth map of Banach manifolds \begin{eqnarray*} \leo : {\mathfrak B}^{k,\alpha} \times {\mathfrak C}^{k,\alpha}&\longrightarrow & {\mathfrak C}^{k-1,\alpha}\times {\mathfrak B}^{k-1,\alpha}\\ (v, f \mu ) & \longmapsto & (\saturn_v^*\phi , \mbox{curl } v + \mbox{grad }f) \end{eqnarray*} whose derivative at $0$ is $$(v,f)\mapsto ( \mbox{div } v, \mbox{curl } v+ \mbox{grad }f).$$ Now this is essentially just the elliptic operator $d+d^*: \Lambda^{\mbox{\tiny even}}\to \Lambda^{\mbox{\tiny odd}},$ and so has trivial kernel and cokernel because $H^2(\mathbb R\mathbb P^3 , \mathbb R)=0$. The interior Schauder estimates for elliptic equations therefore imply that $\leo_{*0}$ is a Banach-space isomorphism. Hence the inverse function theorem for Banach spaces implies that $\leo$ becomes a diffeomorphism when restricted to some neighborhood ${\mathcal U}={\mathcal U}_1\times {\mathcal U}_2\subset {\mathfrak B}^{k,\alpha} \times {\mathfrak C}^{k,\alpha}$ of the origin. Thus $(\leo|_{\mathcal U})^{-1} ( \{0\}\times {\mathfrak B}^{k-1,\alpha})$ is a Banach manifold. By inspection, however, this set is of the form ${\mathcal M}\times {\mathcal U}_2$, where $\mathcal M$ is its projection to ${\mathfrak B}^{k,\alpha}$. Hence $\mathcal M$ is a Banach manifold, and represents the desired moduli space of solutions. Moreover, $$T_0{\mathcal M} = \{ v \in {\mathfrak B}^{k,\alpha}\subset \Gamma (T\mathbb R\mathbb P^3)~|~ \mbox{div } v=0\},$$ precisely as claimed, so we are done. 
\end{proof} Thus, while self-dual split-signature conformal structures on $S^2\times S^2$ essentially depend on a vector field on $\mathbb R\mathbb P^3$, scalar-flat K\"ahler metrics correspond to the case in which the vector field is {\sl divergence free}. So far, though, this is just an abstract existence statement. Nonetheless, one can do much better in the real-analytic case. Indeed, let $v$ be a divergence-free real-analytic vector field on $\mathbb R\mathbb P^3$; in other words, let $v= \mbox{curl }w$ for $w$ some real-analytic vector field on $\mathbb R\mathbb P^3$. Then $Jv$ corresponds to the section $Jv+iv$ of $(T^{1,0}\mathbb C\mathbb P_3)|_{\mathbb R\mathbb P^3}$. Because $v$ is locally represented by power series, $Jv+iv$ can then be uniquely extended to some neighborhood $U\subset \mathbb C\mathbb P_3-Q$ of $\mathbb R\mathbb P^3$ as a holomorphic vector field ${\mathfrak v}$, and we then have a real-analytic $1$-parameter family $\{ \psi_t~|~ t\in (-\varepsilon, \varepsilon )\}$ of biholomorphisms from neighborhoods $U_t\subset U$ of $\mathbb R\mathbb P^3$ to $U$ obtained by following the integral curves of $\Re e~ {\mathfrak v}$. Notice, however, that ${\mathcal L}_{\mathfrak v}\Omega=0$, since this expression is the analytic continuation of $\mbox{div } iv$ from $\mathbb R\mathbb P^3$ to $U$. The constructed biholomorphisms therefore satisfy $\psi_t^*\Omega =\Omega$. Hence $P= \psi_t (\mathbb R\mathbb P^3)$ is a submanifold on which $\phi = \Im m ~ \Omega$ vanishes. \vfill \begin{ack} The first author would like to express his gratitude to Franc Forstneri\v{c}, Bill Goldman, Denny Hill, Matthias Kreck, Blaine Lawson, Yair Minsky, Yum-Tong Siu, and Dennis Sullivan for their friendly help in drawing his attention to some key references. He would also like to thank Jeff Cheeger and the Courant Institute of Mathematical Sciences for their hospitality during the initial phase of the writing of this paper.
\end{ack} \hspace{1in} \noindent {\sc Department of Mathematics, SUNY, Stony Brook, NY 11794-3651 USA\\ The Mathematical Institute, 24-29 St Giles, Oxford OX1 3LB, England} \pagebreak
\section{Introduction} The creation of matter and entropy from vacuum has been studied via quantum field theory in curved spacetime (see for example \cite{hupark77,creation}). Most cosmological models exhibit a singularity which presents difficulties for interpreting quantum effects, because all macroscopic parameters of created particles are infinite there. This leads to the problem of the initial vacuum. A regular vacuum for a species of created particles can be defined in simple terms as a state where all mean values describing the particles, such as energy density, number density, entropy etc., are zero. But this simple condition is not achieved in many scenarios, so that either one has to postulate an initial state beyond the singularity, or to assume that there was a nonzero number of particles at the initial vacuum. One attempt to overcome these problems is via incorporating the effect of particle creation into Einstein's field equations. For example, in the papers of the Brussels group \cite{edgard1}, the quantum effect of particle creation is considered in the context of the thermodynamics of open systems, where it is interpreted as an additional negative pressure, which emerges from a re-interpretation of the energy-momentum tensor. This effect is irreversible in the sense that spacetime can produce matter, leading to growth of entropy, while the reverse process is thermodynamically forbidden. The main difference with our present paper is that in \cite{edgard1} the law of massive particle creation, i.e. the mechanism of energy flow from the gravitational field to matter, leads to a non-zero number of particles at the beginning of expansion, described as a fluctuation of the regular vacuum. These results were recently generalized in a covariant form in \cite{lima1}. Our approach differs from that of \cite{edgard1,lima1} in that we do not modify the field equations. 
Instead, we interpret the source of created particles as a decaying vacuum, described phenomenologically by a time-dependent cosmological `constant' $\Lambda (t)$. A number of decaying vacuum models have appeared in the literature (see \cite{lima2,vl} and references cited there). Inflationary models with fixed cosmological constant and cold dark matter have been successful in accounting for the microwave background and large-scale structure observations, while also solving the age problem. However, these models are challenged by the reduced upper limits on $\Lambda$ arising from the Supernova Cosmology Project, and also by the long-standing problem of reconciling the very large early-universe vacuum energy density with the very low late-universe limits \cite{vl}. One resolution of these problems is a decaying $\Lambda$. In common with \cite{lima2}, we attempt to provide some clear and consistent physical motivation for the particular form of vacuum decay, rather than an ad hoc prescription. In ad hoc prescriptions, the functional form of $\Lambda(t)$ or $\Lambda(a)$ (where $a$ is the scale factor) is effectively assumed a priori. Often power-law forms for $\Lambda$ are assumed (see, for example, \cite{new} and references cited there). Exponential decay laws have also been assumed \cite{spindel}. Typically, the solutions arising from ad hoc prescriptions for $\Lambda$ are rather complicated, and moreover, it is often difficult to provide a consistent simple interpretation of the features of particle creation, entropy and thermodynamics. In contrast to many other models, we propose a simple, exact and thermodynamically consistent cosmological history. The latter originates from {\em a regular initial vacuum with a maximal initial entropy production rate.} Together with a naturally defined creation rate, this leads to a simple expansion law and thermodynamic properties, and to a definite estimate for the total entropy in the universe.
The very existence of an initial maximal entropy production rate reflects a subtle interplay between the conservation equation, the second law of thermodynamics and Einstein's equations, which is at the heart of our model. Non-adiabatic inflationary models differ from the standard models (see for example \cite{kolb&turner90}), in that: (a) radiation is created continuously during inflation, rather than during reheating; (b) the continuous vacuum decay itself initiates a smooth exit from inflation to the radiation era; (c) entropy and heat production take place continuously, without the need for reheating. In the standard approach, the scalar field drives adiabatic (i.e., isentropic) inflation, followed by a non-equilibrium reheating era when the field decays into radiation and inflation is brought to an end. The potential of the field is then the key ingredient. In the alternative approach, the key ingredient is essentially the model of vacuum decay. In contrast to the ad hoc models that assume functional forms for $\Lambda$, and as an alternative to the thermodynamically motivated model of \cite{lima2}, we arrive at a phenomenological model of decay by imposing simple and thermodynamically consistent physical requirements. A related physically consistent model can be found in the `warm' inflationary scenario \cite{b}. The first law of thermodynamics for open systems (with particle creation) and Einstein's equations lead to a first-order equation for the expansion rate $H=\dot{a}/a$, whose source term is determined by the particle number $N$. A further equation in $H$ and $N$ arises from a simple model for the particle creation rate $\Gamma=\dot{N}/N$. We impose the thermodynamical non-equilibrium condition that $\Gamma$, and hence the entropy production rate, is maximal at the beginning of expansion. A further initial condition is that the initial vacuum for radiation is non-singular. 
We also require that $\Gamma>H$ initially, so that created particles are thermalized, while $\Gamma<H$ later on, as particle production becomes insignificant. Our simple model for $\Gamma$ is naturally defined by the gravitational dynamics and conforms to these thermodynamical requirements. We decouple the equations to get a second-order evolution for $H$, and we find a remarkably simple exact solution $H(a)$. Since the exit from inflation to the radiation era is smooth, we avoid the problem of matching at the transition. A similar smooth evolution has been used in \cite{roydavid,alexei_96,caldwell}, but in the context of adiabatic inflation, and without a consistent physical foundation. In effect, we show that the ad hoc form of $H(a)$ given in \cite{roydavid} follows from our simple physical conditions and thermodynamic arguments. In \cite{b1}, a kinematic analysis is given for various non-adiabatic inflationary evolutions with smooth exit, but these evolutions are outside the scope of our model. The choice of $a$ as dynamical variable and the very simple form of $H(a)$ that meets the physical conditions, lead to elegant expressions for all parameters describing the radiation and decaying vacuum, and also to a physically transparent interpretation of these results, including the estimate of entropy. In addition, the equation for super-horizon scalar perturbations can be solved exactly for this form of $H(a)$. We present in Sec. 2 the evolution equation for $H(a)$ and its simple solution, that follow from our simple physical constraints. In Sec. 3 we analyze the thermodynamics of the radiation produced in the course of vacuum decay, and estimate the entropy of the created radiation. Sec. 4 contains a summary and concluding remarks. In a subsequent paper, we will discuss generalizations of the present model. We use units with $8\pi G$, $c$ and $k_{\rm B}$ equal 1. 
\section{The simple model} Consider a spatially flat Friedmann-Lemaitre-Robertson-Walker universe \[ ds^2 = g_{\mu\nu}dx^\mu dx^\nu= dt^2 - a^2(t) \left[dx^2 + dy^2 + dz^2\right]\,, \] containing matter with equation of state \[ p = \gamma \rho, \] where $\gamma$ ($0\leq \gamma < 1$) is a constant parameter. Later we will specialize to the case of radiation, $\gamma={1\over3}$. The energy-momentum tensor of matter is \[ T^{\rm M}_{\mu\nu}= \rho(t)\left[ (\gamma +1) u_\mu u_\nu + \gamma g_{\mu\nu}\right]\,,~~u_\mu=\delta_\mu{}^0\,, \] while the energy-momentum tensor corresponding to the quantum vacuum energy is \[ T^{\rm Q}_{\mu\nu} \equiv \langle \widehat{T}^{\rm Q}_{\mu\nu}\rangle= \Lambda (t)g_{\mu\nu}\,. \] Then the conservation equations $\nabla^\nu(T^{\rm M}_{\mu\nu}+T^{\rm Q}_{\mu\nu})=0$ reduce to \begin{equation} \dot{\rho} + 3 (\gamma +1) H \rho = - \dot{\Lambda}\,, \label{conserv} \end{equation} showing how energy is transferred from the decaying vacuum to matter. Note that (\ref{conserv}) is equivalent to the energy balance of an imperfect fluid with scalar viscous pressure \[ \Pi={\dot{\Lambda}\over 3H} \,. \] This is an example of the known result that cosmological particle production may be interpreted as an effective bulk viscous pressure (see \cite{z} and references cited there). The field equations $R_{\mu\nu}-{1\over2}Rg_{\mu\nu} =T^{\rm M}_{\mu\nu}+T^{\rm Q}_{\mu\nu}$ are \begin{eqnarray} 3 H^2 &=& \rho + \Lambda \,, \label{einst0}\\ 2 \dot{H} + 3 H^2 &=& - \gamma \rho + \Lambda \,, \label{einst1} \end{eqnarray} and if both are satisfied then the energy equation (\ref{conserv}) follows identically. Following \cite{roydavid}, we use $a$ as a dynamical variable instead of $t$, and so we consider the Hubble rate as $H=H(a)$. 
Then equations (\ref{einst0}) and (\ref{einst1}) give \begin{eqnarray} \rho (a) &=& - \frac{2}{(\gamma +1)} \, a H(a) H'(a) \,, \label{rho1}\\ \Lambda (a) &=& 3 H^2(a) + \frac{2}{(\gamma + 1)} \, a H(a) H' (a) \,, \label{lambd1} \end{eqnarray} where primes denote $d/da$. Given $H(a)$ and $\gamma$, we can calculate $\rho(a)$ and $\Lambda(a)$. In order to determine further properties of $H(a)$, we will impose simple physical requirements. We assume that the evolution of $\rho$ is governed not only by expansion but also by the creation of particles, i.e. from the thermodynamic point of view, we have an open system. According to \cite{edgard1}, the first law of thermodynamics generalized for open systems is \begin{equation} d(\rho V) + p d V - \left({\rho+p\over n}\right) d(nV) = 0 \,, \label{firstlaw} \end{equation} where $p= \gamma \rho$ is the pressure, $n=N/V$ is the particle number density, $N$ is the number of particles in the observable universe, and $V\propto a^3$ is the comoving volume of the observable universe. The thermodynamic equation (\ref{firstlaw}) implies \[ {\left(\rho a^3\right)'\over \rho a^3}+\gamma{\left(a^3\right)' \over a^3}-(\gamma+1){N'\over N}=0 \,, \] which integrates to \begin{equation} \rho={A\over(\gamma+1)}\left({N\over a^3}\right)^{\gamma+1} \,, \label{numviahubble} \end{equation} where $A$ is a positive constant. From now on, we take $\gamma={1\over3}$, i.e. we assume that only radiation is created. This will also allow us to define entropy production via the photon number. By equation (\ref{rho1}), equation (\ref{numviahubble}) can be rewritten as an {\em evolution equation for $H$, with source term determined by the particle number:} \begin{equation} \frac{d}{d a } H^{2}(a) = - A\, \frac{N^{4/3}(a)} {a^5}\,. \label{eqhubble} \end{equation} This is the fundamental equation which follows from Einstein's equations and the thermodynamic equation (\ref{firstlaw}). 
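In more detail, the computation behind equation (\ref{eqhubble}) is elementary: with $\gamma={1\over3}$, equation (\ref{rho1}) reads $\rho = -{3\over2}aHH'=-{3\over4}a\left(H^2\right)'$, while equation (\ref{numviahubble}) gives $\rho = {3A\over4}N^{4/3}a^{-4}$. Equating the two expressions, \[ -\frac{3}{4}\, a\, \frac{d}{da}H^2 = \frac{3A}{4}\, \frac{N^{4/3}}{a^4} \qquad\Longrightarrow\qquad \frac{d}{da}H^2 = -A\, \frac{N^{4/3}}{a^5}\,, \] which is precisely equation (\ref{eqhubble}).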
The creation mechanism is phenomenologically encoded in the source term $N(a)$. The model must be completed via an equation that determines $N$. If the creation rate of radiation $\Gamma=\dot{N}/N=aHN'/N$ is given, then equation (\ref{eqhubble}) implies the second-order equation \begin{equation} 3aHH''+3aH'^2+\left(15H-4\Gamma\right)H'=0 \,. \label{-}\end{equation} We seek a model in which most of the particle creation effectively takes place in the very early universe, starting from a regular vacuum. More precisely, we impose the following thermodynamical requirements:\\ (a) Maximal entropy production rate (equivalently, maximal particle creation rate) at the beginning of expansion, so that the universe starts in a state furthest away from equilibrium and then tends toward equilibrium as the expansion proceeds. \\ (b) A true (regular) vacuum for radiation initially, so that $\rho\rightarrow0$ as $a\rightarrow0$. \\ (c) $\Gamma>H$ in the very early universe, so that we can treat the created radiation as forming a thermalized heat bath. Subsequently, the creation rate should fall behind the expansion rate as particle creation becomes dynamically insignificant. The fundamental physical quantities that are naturally defined by the gravitational dynamics in our model are the expansion rate $H$ and the total energy density $U=\rho+\Lambda$. Both of these quantities can define in a natural way a gravitational creation rate $\Gamma$. The simplest model $\Gamma\propto H$ fails to satisfy requirement (c) above. Furthermore, it leads to a solution $H(a)$ of the evolution equation (\ref{-}) which violates requirement (b). Therefore we propose $\Gamma\propto U$. This will satisfy requirement (a) if (b) holds, since the initial condition (b) implies that $U$ approaches its maximum value $U(0)=\Lambda(0)$ as $a\rightarrow0$. Below we show that the solution of equation (\ref{-}) does verify condition (b). 
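For completeness, we record how the second-order equation (\ref{-}) arises from equation (\ref{eqhubble}). Since $\Gamma=\dot{N}/N=aHN'/N$, we have $N'/N=\Gamma/(aH)$, and differentiating the logarithm of both sides of (\ref{eqhubble}) gives \[ \frac{(H^2)''}{(H^2)'} = \frac{4}{3}\frac{N'}{N} - \frac{5}{a} = \frac{4\Gamma}{3aH}-\frac{5}{a}\,. \] Writing $(H^2)'=2HH'$ and $(H^2)''=2H'^2+2HH''$, this becomes \[ 2H'^2+2HH'' = 2HH'\left(\frac{4\Gamma}{3aH}-\frac{5}{a}\right)\,, \] and multiplying through by ${3a\over2}$ and rearranging yields equation (\ref{-}).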
The increase of $\rho$ due to creation in the very early universe partially offsets the decrease in $\Lambda$, so that $U$ decreases slowly in the very early universe, and the entropy production rate remains high. Requirement (c) will be satisfied because in the radiation era, $\Lambda$ becomes negligible and $\rho$ decays like $H^2$. Thus $\Gamma$ decreases more rapidly than $H$, and will have fallen below $H$ at some epoch. Hence we propose the simple model that {\em the particle creation rate is proportional to the total energy density.} By the Friedmann equation (\ref{einst0}), it follows that \begin{equation} \Gamma=\alpha H_{\rm e}\left({H\over H_{\rm e}}\right)^2 \,, \label{cr}\end{equation} where $\alpha$ is a dimensionless free parameter, and $H_{\rm e}=H(a_{\rm e})$, where $a_{\rm e}$ is some fixed epoch. Equation (\ref{-}) becomes the {\em decoupled, second-order evolution equation for $H(a)$:} \begin{equation} 3aHH''+3aH'^2+15HH'-4{\alpha\over H_{\rm e}}H^2H'=0\,. \label{roy}\end{equation} This equation has the first integral \begin{equation} 3aHH'+6H^2-{4\alpha\over 3H_{\rm e}}H^3=0 \,, \label{cr2}\end{equation} where we have used the fact that $H$ and $aHH'$ tend to zero for large $a$ (i.e. in the radiation era), in order to remove a constant of integration. The solution of equation (\ref{cr2}) is \[ \beta\left({a\over a_{\rm e}}\right)=\left[ {9\over 2\alpha}\left({H_{\rm e}\over H}\right)-1\right]^{1/2}\,, \] where $\beta$ is a constant, and we have taken into account that $H$ is a decreasing function. Evaluating at $a=a_{\rm e}$, we see that \begin{equation} 1+\beta^2={9\over2\alpha}\,, \label{cr3}\end{equation} which implies the constraint $\alpha\leq{9\over2}$ on the creation parameter $\alpha$. Re-arranging the solution, we find the remarkably simple form for the expansion rate that follows from our thermodynamic model: \begin{equation} {H\over H_{\rm e}}=(1+\beta^2)\left[{a_{\rm e}^2\over a_{\rm e}^2+\beta^2a^2}\right]\,. 
\label{beta}\end{equation} This solution approaches de Sitter inflation as $a\rightarrow0$, i.e. $H\sim$ constant, and it becomes radiation-like for $a\rightarrow\infty$, i.e. $H\sim a^{-2}$. It follows that as $a\rightarrow0$, the cosmic proper time $t\rightarrow-\infty$. A naturally defined epoch is $a_{\rm ex}=a(t_{\rm ex})$ of exit from inflation, which is defined by $\ddot{a}(t_{\rm ex}) = 0$, or equivalently $H(a_{\rm ex}) = - a_{\rm ex} H'(a_{\rm ex})$. It follows from (\ref{beta}) that $a_{\rm e}=\beta a_{\rm ex}$. These results reflect a subtle interplay between Einstein's gravitational dynamics and thermodynamical constraints on particle production. The decay of the vacuum into radiation drives inflation, but the same decay, by reducing the vacuum energy density, leads to a deceleration of expansion and a smooth exit from inflation. Since the initial vacuum for radiation is regular, the particle number and hence entropy are initially zero. As we show below, the total entropy produced in the observable universe in an infinite time is finite. This is consistent with the existence of a smooth exit, since unending inflation would produce infinite entropy. The initial rate of entropy production is maximal, reflecting the feature that the universe starts furthest from equilibrium and approaches asymptotically a state of equilibrium. We note also that the initial entropy production rate is a finite maximum value, rather than being unbounded from above. The latter possibility is ruled out by the de Sitter-like nature of the inflationary expansion, which implies that the expansion rate $H$ is bounded from above (unlike power-law inflation for example). The freedom in $\beta$, or equivalently $\alpha$, by equation (\ref{cr3}), provides us with an extra adjustable parameter. However, for simplicity, we will not use this freedom, since the subsequent results are not modified in any essential way for general $\beta$. Henceforth we take $\beta=1$, i.e. 
$\alpha={9\over4}$, which means that $a_{\rm e}=a_{\rm ex}$. Thus, we arrive finally at the simple expansion rate \begin{equation} H(a) = 2H_{\rm e}\left({a_{\rm e}^2\over a_{\rm e}^2 + a^2}\right)\,. \label{hubble} \end{equation} This form of $H(a)$ was given in \cite{roydavid} as an ad hoc toy model to achieve smooth exit from inflation to radiation, but without a physical basis such as that given here. The expression for the cosmic proper time follows on integrating equation (\ref{hubble}): \[ t=t_{\rm e}+{1\over 4H_{\rm e}}\left[ \ln\left({a\over a_{\rm e}}\right)^2+ \left({a\over a_{\rm e}}\right)^2-1\right]\,. \] \section{Thermodynamics of radiation} On substituting equation (\ref{hubble}) into equations (\ref{rho1}) and (\ref{lambd1}), we find exact expressions for the energy density of radiation and the vacuum: \begin{equation} \rho (a) = 12H_{\rm e}^2\left({a\over a_{\rm e}}\right)^2\left({a_{\rm e}^2\over a_{\rm e}^2 +a^2}\right)^3\,,~~~~~ \Lambda (a) = 12H_{\rm e}^2\left({a_{\rm e}^2\over a_{\rm e}^2+a^2}\right)^3 \,. \label{rholam} \end{equation} It follows that $\Lambda (0) = 12 H_{\rm e}^{2}$. Note that (\ref{hubble}) implies $H(0)=2H_{\rm e}$. Note also that the effective bulk viscous pressure arising from particle production has the form \[ \Pi\equiv {\dot{\Lambda}\over 3H}=-\left({4\Gamma\over 9H}\right)\rho =-\left({2a_{\rm e}^2\over a_{\rm e}^2+a^2}\right)\rho \,. \] Now $\rho$ reaches a maximum at $a_{\rm m} = a_{\rm e}/\sqrt{2}$, with \[ \rho_{\rm m} \equiv \rho(a_{\rm m}) = {\textstyle{16\over9}}\, H_{\rm e}^{2}\,, ~~~~~ \Lambda (a_{\rm m}) = 2 \rho_{\rm m} \,. \] Note also that $\rho$ and $\Lambda$ are equal at exit: \[ \rho (a_{\rm e}) = \Lambda (a_{\rm e}) = {\textstyle{3\over2}} H_{\rm e}^{2} \,, \] while for $a \gg a_{\rm e}$, i.e. during radiation-domination, \[ \rho (a) \sim \frac{1}{a^4}\,,~~~~ \Lambda (a) \sim \frac{1}{a^6}\,, \] so that $\Lambda$ rapidly becomes negligible. 
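As a check on these asymptotics, note that for $a\gg a_{\rm e}$ the exact expressions give \[ \rho \simeq 12 H_{\rm e}^2 \left({a_{\rm e}\over a}\right)^4\,, \qquad \Lambda \simeq 12 H_{\rm e}^2\left({a_{\rm e}\over a}\right)^6\,, \qquad 3H^2 \simeq 12 H_{\rm e}^2\left({a_{\rm e}\over a}\right)^4\,, \] so that the Friedmann equation (\ref{einst0}) is asymptotically satisfied by the radiation alone, as it should be in the radiation era.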
The formulas (\ref{rholam}) reflect the creation of radiation due to vacuum decay. The initial value $\rho (0) = 0$ confirms that the field corresponding to radiation is initially in a regular vacuum state. This means that the formula (\ref{rholam}) gives an absolute measure of radiation produced in the universe, as a result of conversion of energy from the vacuum described by $\Lambda$. Substituting equation (\ref{hubble}) into equation (\ref{numviahubble}) we get the exact form for the particle number \begin{equation} N(a) = N_{\infty} \left(\frac{a^2}{a_{\rm e}^2 + a^2}\right)^{9/4}\,, \label{number} \end{equation} where $N_{\infty} $ is usually taken to be about $10^{88}$. It follows that about $2\times 10^{87}$ particles have been created at exit. Note that the initial conditions and the evolution equations in our model imply that {\em a finite number of particles is produced in the observable universe during the entire expansion.} Since $N(0)= n(0) = 0$, the initial state of the field has no particles, i.e. it is a regular vacuum. The number density is \begin{equation} n(a) = 2^{9/4}n_{\rm e}\left({a_{\rm e}\over a}\right)^3 \left(\frac{a^2}{a_{\rm e}^2 + a^2}\right)^{9/4}\,, \label{n}\end{equation} and reaches its maximum also at $a_{\rm m}$. The creation rate of radiation is given by equations (\ref{cr}) and (\ref{hubble}) as \begin{equation} \Gamma=9H_{\rm e}\left({a_{\rm e}^2\over a_{\rm e}^2+a^2}\right)^2={9\over4}H_{\rm e} \left({H\over H_{\rm e}}\right)^2\,. \label{crate}\end{equation} In order to define the radiation temperature (and then entropy), we would like to invoke the standard black-body relation. A justification of this is as follows. From equation (\ref{crate}), it follows that the creation rate $\Gamma (a)$ exceeds the expansion rate $H(a)$ for \[ a<\sqrt{{\textstyle{7\over2}}}\, a_{\rm e} \,. \] Thus it is reasonable to treat the created radiation as forming a thermalized heat bath in the initial stage of expansion. 
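The threshold quoted here follows directly from equations (\ref{crate}) and (\ref{hubble}): setting $\Gamma=H$, \[ 9H_{\rm e}\left({a_{\rm e}^2\over a_{\rm e}^2+a^2}\right)^2 = 2H_{\rm e}\left({a_{\rm e}^2\over a_{\rm e}^2+a^2}\right) \quad\Longleftrightarrow\quad {a_{\rm e}^2\over a_{\rm e}^2+a^2}={2\over9} \quad\Longleftrightarrow\quad a^2={\textstyle{7\over2}}\,a_{\rm e}^2\,, \] so that $\Gamma>H$ precisely when $a<\sqrt{{\textstyle{7\over2}}}\,a_{\rm e}$.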
For $a>\sqrt{7\over2}a_{\rm e}$, the creation rate falls behind the expansion rate, and created particles will be out of equilibrium. However, by this stage of the expansion, the energy density in newly created particles is too small to disturb the effective thermalization. Thus it seems reasonable to use the black-body relation for the radiation throughout the expansion, and to define the temperature by \begin{equation} T(a) = \frac{1}{3}\frac{\rho(a)}{n(a)} = {H_{\rm e}^2\over 2^{1/4}n_{\rm e}} \left({a_{\rm e}\over a}\right)\left({a^2 \over a_{\rm e}^2+a^2}\right)^{3/4}\,, \label{temperature} \end{equation} where we have used equations (\ref{rholam}) and (\ref{n}). At the initial radiation vacuum, it is clear that $T(0)=0$, and $T$ increases to its maximum value at $a_{\rm m}$: \[ T_{\rm m} \equiv T(a_{\rm m}) = \left({2\over27}\right)^{1/4} {H_{\rm e}^2 \over n_{\rm e}}\,. \] This temperature may be thought of as analogous to the reheating temperature in standard models. During the radiation era, i.e. for $a \gg a_{\rm e}$, \[ T \sim a^{-1}\,, \] in agreement with the standard result for free radiation in an expanding universe. The formulas for $\rho$ and $n$ can be presented in the thermodynamic form \begin{equation} \rho = 24\left(\frac{ n_{\rm e}^4} {H_{\rm e}^6}\right) T^{4}\,,~~~~~ n = 8\left(\frac{ n_{\rm e}^4}{H_{\rm e}^6}\right) T^{3}\,. \label{tdform} \end{equation} Combining now the Gibbs equation \[ T dS = d(\rho V) + p d V\,, \] with equation (\ref{firstlaw}), and using the definition (\ref{temperature}) of $T$, we obtain the entropy of radiation in the observable universe as \begin{equation} S (a) = 4 N (a)\,, \label{entropy} \end{equation} so that $S(0)=0$ as expected. This gives a reasonable value for the entropy produced during the overall evolution of the universe: \begin{equation} S_\infty = 4 N_{\infty} \approx 4\times 10^{88}\,. 
\label{entnumb} \end{equation} In the standard model (adiabatic inflation followed by reheating), one can estimate the entropy production due to reheating by matching exact de Sitter inflation to an exact radiation era, with an instantaneous transition at $a_{\rm e}$. This gives \cite{roydavid}: \[ S_{\rm e} = {\textstyle{4\over3}}g^{1/4} \, \rho_{\rm e}^{3/4}= S_\infty\,, \] where $g \sim 100$, and leads to a value of the same order of magnitude as our result. \section{Conclusion} We have considered a simple and thermodynamically consistent scenario encompassing the decay of the vacuum, the creation of radiation and entropy, and a natural smooth transition from inflationary to radiation-dominated expansion. In order to treat all matter in the universe as created from a regular vacuum, we impose the condition that $\rho\rightarrow0$ as $a\rightarrow0$. We impose the further initial condition that the entropy production rate is a maximum. A simple model for the particle creation rate $\Gamma$, i.e. that $\Gamma$ is proportional to the comoving total energy density, is shown to be consistent with these initial conditions and with the requirement that $\Gamma$ should start above and then fall below the expansion rate $H$. We showed that the field equations and the first law of thermodynamics (\ref{firstlaw}), generalized for open systems with creation of matter, then imply a second-order evolution equation (\ref{roy}) for $H(a)$. This equation has the remarkably simple solution (\ref{beta}), and we used this together with black-body thermodynamics and the Gibbs equation to define the temperature and entropy. Our postulate $\Gamma\propto H^2$ for the creation rate can be contrasted with other work. For example, in \cite{edgard1}, $\dot{N}\propto H^2$, while $\Gamma$ is constant; in \cite{spindel}, $\Lambda\propto \exp(-t/\tau)$, and there is no simple expression for $\Gamma$ in terms of $H$.
The postulates in these papers are ad hoc, whereas we have tried to give a thermodynamic justification for our postulate. It is also interesting to compare our decaying-vacuum/ radiation model with scalar-field/ radiation models \cite{b,or}. In these latter models, a phenomenological term is introduced into the Klein-Gordon equation for the scalar field $\phi$, describing the interaction between the decaying field and radiation. The resulting energy balance equation is \[ \dot{\rho}+4H\rho=\Gamma_\phi\dot{\phi}^2 \,, \] where $\Gamma_\phi$ is the phenomenological decay rate. Comparing this with our energy balance (\ref{conserv}), and using (\ref{numviahubble}), we see that $\Gamma_\phi\dot{\phi}^2$ corresponds to our ${4\over3}\Gamma \rho$. It is not obvious whether a scalar field interpretation of our model exists that would produce the rate $\Gamma_\phi\propto H^2$. This is a subject for further investigation. An important feature of the Hubble rate (\ref{hubble}) is that $\lim_{a\rightarrow 0} H(a) = $ constant and $\lim_{a\rightarrow 0} H' (a) = 0$. According to equations (\ref{rholam}) and (\ref{number}), this avoids any divergences in $\rho$ and $N$ at $a=0$. In addition, it follows from the equations (\ref{rholam}), (\ref{number}), (\ref{temperature}) and (\ref{entropy}) that all thermodynamic parameters describing the created radiation vanish initially, i.e. all forms of matter which we observe now in the universe were in a state of regular vacuum, and all constituents of the universe were created from this vacuum in the course of the decay of the vacuum energy density $\Lambda$ from its initial value $12H_{\rm e}^2$. We also showed that our requirements imply finite particle and entropy production during the entire expansion of the observable universe. Our simple model leads to exact expressions for $t(a)$ and for the thermodynamic variables. 
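As an independent numerical cross-check of these exact expressions (our illustration, not part of the original analysis), the maximum of the temperature law (\ref{temperature}) can be located by a simple grid search. Working in units $H_{\rm e}=n_{\rm e}=1$ with $x=a/a_{\rm e}$, this recovers $a_{\rm m}=a_{\rm e}/\sqrt{2}$ and $T_{\rm m}=(2/27)^{1/4}$:

```python
# Numerical check of the maximum of T(a), eq. (temperature),
# in units H_e = n_e = 1, with x = a / a_e.  Illustrative sketch only.
def T(x):
    return 2 ** -0.25 / x * (x ** 2 / (1 + x ** 2)) ** 0.75

# Brute-force grid search over 0 < x <= 10
xs = [1e-3 + 1e-4 * i for i in range(100_000)]
x_m = max(xs, key=T)

print(x_m)                  # ~ 0.7071, i.e. a_m = a_e / sqrt(2)
print(T(x_m))               # ~ 0.5217
print((2 / 27) ** 0.25)     # ~ 0.5217 = T_m n_e / H_e^2
```

The same grid also confirms that $T\sim x^{-1}$ for $x\gg1$, the free-radiation limit quoted above.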
In addition, the form (\ref{hubble}) of $H(a)$ leads to an exact solution for the large-scale modes of scalar perturbations, described gauge-invariantly by Bardeen's potential $\Phi$. The solution is \cite{roydaniel}: \[ \Phi = C_+ \left(\frac{a^2}{a_{\rm e}^2 + a^2} \right) + C_-\left({a_{\rm e}\over a}\right)\left({a_{\rm e}^2\over a_{\rm e}^2 + a^2}\right)\,, \] where $C_\pm'=0$. We will not discuss the important question of the source of these perturbations, except to mention that, as pointed out in \cite{b}, non-adiabatic inflationary scenarios allow for the possibility that the seeds of density perturbations are of predominantly thermal, rather than quantum, origin. Further discussion of the physical processes taking place during non-adiabatic inflation, and of their effects on baryogenesis, the microwave background radiation and structure formation, is given in \cite{b,b1,new,vl,cds}. \newpage \ack This work was partially supported by EEC grant numbers PSS* 0992 and CI1*-CT94-0004, and by OLAM - Fondation pour la Recherche Fondamentale, Brussels. \section*{References}
\section{Introduction}\label{sec:intro} Refractive index fluctuations in the interstellar plasma, produced by electron density fluctuations, cause compact radio sources to be angularly broadened (\cite{r90}). Interstellar angular broadening has been observed for pulsars (e.g., Gwinn, Bartel, \& Cordes~1993), masers (e.g., \cite{fdcv94}), and active galactic nuclei (e.g., Fey, Spangler, \& Cordes~1991). Though AGNs typically display intrinsic structure comparable to the level of angular broadening observed on many lines of sight ($\sim$ milliarcseconds), surveys of angular broadening tend to focus on AGNs because they are strong sources, uniformly distributed on the sky, and suffer from none of the distance ambiguities normally associated with Galactic sources. Nevertheless, such surveys (e.g., \cite{dtbbbc84}; Fey, Spangler, \& Mutel~1989; \cite{fsc91}) have been biased toward the inner Galaxy because of the general enhancement of interstellar scattering in that direction (\cite{dtbbbc84}; Cordes, Weisberg, \& Boriakoff~1985; \cite{cwfsr91}; Taylor \& Cordes~1993, hereinafter \cite{tc93}). At Galactic latitudes $|b| > 10\arcdeg$, the angular diameter at 1~GHz due to interstellar scattering of a point source is approximately $1\,\mathrm{mas}\,(\sin |b|)^{-1/2}$ (\cite{d-sr76}). At low latitudes the scattering becomes stochastic and its magnitude depends upon whether the line of sight intersects a clump or localized region of intense scattering (\cite{dtbbbc84}; Cordes et al.~1985; \cite{cwfsr91}; \cite{tc93}). At extremely low latitudes $|b| < 1\arcdeg$, sources toward the anticenter may show enhanced broadening due to an extended \ion{H}{2} disk of the Galaxy. The \ion{H}{1} disk of the Galaxy extends to a Galactocentric distance of 25--30~kpc (\cite{b92}).
There is a small body of evidence that hints that the radial extent of the \ion{H}{2} disk may equal or exceed that of the \ion{H}{1} disk: \begin{itemize} \item Savage, Sembach, \& Lu~(1995) find \ion{C}{4} absorption along the line of sight to H~1821$+$643 ($\ell = 94\arcdeg, b=27\arcdeg$). Among the velocity components contributing to this absorption is low-density ($n \sim 5.6 \times 10^{-3}$~cm${}^{-3}$), warm ($T \sim 10^4$~K) gas at a velocity of $-120$~km~s${}^{-1}$, corresponding to a kinematic Galactocentric distance of 25~kpc. \item The \ion{H}{1} disks of nearby galaxies are truncated at radii of order 25--50~kpc, at which the surface density drops to $N_{\mion{H}{1}} \lesssim 2 \times 10^{19}$~cm${}^{-2}$ (Corbelli, Schneider, \& Salpeter~1989; \cite{v91}). This truncation is observed even for galaxies without nearby companions and likely occurs where the disks become optically thin to the intergalactic ionizing flux (\cite{ecees93}). Charlton, Salpeter, \& Hogan~(1992) have proposed that at least some of the low-redshift Ly$\alpha$ clouds seen in quasar spectra may be due to residual \ion{H}{1} in extended, nearly fully ionized disks of normal spiral galaxies. Our Galaxy would then be a prototypical, $z = 0$ absorber. \item Material blown out of the Galactic disk by the action of clustered supernovae may account for a fraction of high-velocity clouds and later return to the disk, forming a Galactic fountain (\cite{sf76}; \cite{b80}; \cite{hb90}; \cite{s90}; \cite{k91}). Models of high-velocity clouds often require the material to be supported by gas pressure at large Galactocentric radii, $R \gtrsim 25$~kpc (e.g., \cite{b80}). \end{itemize} The line of nodes of the \ion{H}{1} disk is approximately constant with Galactocentric radius and centered near a Galactic longitude of 170\arcdeg.
Presuming that the \ion{H}{2} disk is oriented similarly to the \ion{H}{1} disk, low-latitude sources toward the anticenter would sample the longest path lengths through the \ion{H}{2} disk and would be expected to show the largest enhancement from this extended disk. Figure~\ref{fig:angle} illustrates the enhanced angular broadening expected as a function of the $e^{-1}$ radial scale length $A_1$. A recent effort to constrain $A_1$ (\cite{tc93}) found comparable fits for $A_1 \approx 20$--50~kpc. This large range is allowed because of the dearth of anticenter scattering measurements. In this paper we report multifrequency Very Long Baseline Array observations of twelve anticenter sources, seven of which have $|b| < 0\fdg5$. As Fig.~\ref{fig:angle} illustrates, the nominal resolutions of the VLBA are such that 18~cm observations are sensitive to scale lengths of $A_1 \gtrsim 100$~kpc and~90~cm observations should detect scattering even if $A_1 \lesssim 10$~kpc. In \S\ref{sec:ac.observe} we describe the observations and data reduction, in \S\ref{sec:sources} we present the results for individual sources, and in \S\ref{sec:conclude} we present a preliminary analysis of the angular diameters. A companion paper (Lazio \& Cordes~1997, hereinafter \cite{lc97}) combines these observations with the other scattering measurements in the literature and uses a likelihood analysis to constrain the distribution of scattering in the outer Galaxy. \section{Observational Program}\label{sec:ac.observe} The sources observed in this project were selected from the Northern Sky Catalogs (20~cm: \cite{wb92}, catalog acronym WB; 6~cm: Becker, White, \& Edwards~1991, catalog acronym BWE) and the 6~cm MIT-Green Bank survey (\cite{blbhm86}; \cite{lhclcb90}; \cite{glhclb90}).
Our selection criteria were that the source have a flat spectrum with a spectral index $|\alpha| \le 0.5$, that it be classified as point-like to the Green Bank 100~m telescope, and that it be unlikely to be a Galactic radio source (\cite{tg83}; \cite{gt86}). Sources were also selected to lie within a cross-shaped region in the Galactic anticenter described by $150\arcdeg < \ell < 210\arcdeg$ and $|b| < 0\fdg5$ or $\ell \approx 180\arcdeg$ and $|b| < 10\arcdeg$. Sources near or behind known \ion{H}{2} regions (\cite{s59}) or supernova remnants (\cite{g86}; \cite{g91}), objects which might enhance the scattering, were excluded. Because the above surveys were performed with single-dish telescopes, the source positions were not sufficiently accurate for \hbox{VLBI}. We therefore undertook a VLA survey to refine source positions. This survey also enabled us to exclude those sources which are not compact on arcsecond scales. \subsection{VLA survey}\label{sec:vlasurvey} In the surveys listed above, we identified 20 sources potentially suitable for a VLBI angular broadening study; these are tabulated in Table~\ref{tab:vlasources}. On 1993 April~23, we observed these sources with the VLA in the B configuration. A 12.5~MHz bandpass was used with the IFs centered on 1366 and 1446~MHz. Snapshot observations of duration 7~min.\ were obtained. Observations of 3C286 and 3C147 were used to set the flux density scale and frequent observations of 0552$+$398 were used to calibrate the visibility phases. The data were edited, calibrated, and imaged in the standard fashion within {\textsc{aips}}. Images of those sources judged unsuitable for the VLBI program are presented in Fig.~\ref{fig:vlasources}; we also show the VLA image of the one source not detected in the VLBI program. We defer until \S\ref{sec:sources} the images of the sources observed in the VLBI component of this program. The fluxes given in Table~\ref{tab:vlasources} are derived from our VLA observations. 
Compact sources were fit with a single gaussian. Doubles could typically be fit with two gaussians, in which case the coordinates and brightness are for the stronger component while the flux is the sum of the flux from both components. For sources which could not be fit by gaussians, the flux was derived by summing those image pixels for which the brightness exceeded 2$\sigma$. One source we observed, 87GB~0600$+$3011, has no flux data listed in Table~\ref{tab:vlasources}. We could identify no source brighter than 10~\mbox{mJy~beam${}^{-1}$}\ at this location. This source appears in both the 6~cm BWE and~20~cm WB catalogs with a flux $S \approx 300$~mJy. We conclude that it is likely to be largely resolved out to the VLA and is probably a Galactic \ion{H}{2} region. It is unlikely to be variable because the observations from which the WB and BWE catalogs were assembled were separated by 4~yr, 1983 to 1987. \subsection{VLBA observations}\label{sec:vlbaobs} Twelve sources were selected for further VLBI observations from the VLA survey. Of these, seven have Galactic latitudes $|b| < 0\fdg5$; Dennison et al.~(1984) had three sources with $|b| < 1\arcdeg$. These very low-latitude sources are important because the lines of sight to these sources traverse 50~kpc or more before exceeding the extended component's scale height in the inner Galaxy, 0.88~kpc. The remaining five sources are useful in assessing the presence of any flaring or warping of the outer ionized disk. As a control source we also observed 0611$+$131, a source in Dennison et al.'s~(1984) survey. The log of VLBI observations is given in Table~\ref{tab:vlbaobs}. \subsubsection{6 and 18~cm Observations}\label{sec:obs6_18} The first set of observations was conducted at 6 and 18~cm. 
The 6~cm observations allow determination of the intrinsic milliarcsecond structure of the sources, while the 18~cm observations might be capable of just detecting scattering if the Galactic disk extends to 100~kpc or more, viz.\ Fig.~\ref{fig:angle}. The observations were conducted on 1994 May~6 and~7 in two 11~h sessions. Observations at two hour angles, of duration 6~min., were obtained for each source at both wavelengths. Dual polarization was recorded in two 8-MHz IFs for a total bandwidth of 16~MHz. The array was composed of all of the VLBA antennas and the phased \hbox{VLA}. Editing of the data was performed using station-supplied logs; additional editing, mostly near scan boundaries, was also performed later. Amplitude calibration for the VLBA antennas was performed using station-supplied $T_{\mathrm{sys}}$ measurements; for baselines including the VLA, a source flux density is required. At 18~cm we estimated this quantity from our VLA observations. At 6~cm we used the fluxes from the BWE catalog (\cite{bwe91}), except for 0611$+$131, for which we estimated a flux of 0.23~Jy from its spectrum (\cite{dtbbbc84}; \cite{gr92}). Fringe fitting was performed in two steps. First the delays across individual IFs due to the electronics were determined using a short section of a scan on 0552$+$398. These delays were applied to all sources, then the rates, delay across the IFs, and residual delay within the IFs were determined. Corrections for the shape of the bandpass provided little improvement in the visibility phases or amplitudes and were not applied. \subsubsection{90~cm}\label{sec:90} The second set of observations was conducted at 90~cm. Scattering diameters should be detectable at this wavelength unless the scale length of the Galaxy's ionized disk is less than 10~kpc in size, viz.~Fig.~\ref{fig:angle}. The observations were conducted in a single 12~h session on 1995 November~27 using the full VLBA and the phased \hbox{VLA}. 
All of the sources from the earlier VLBI session, except 87GB~0526$+$2458, were observed. The structure of this source at 6 and~18~cm, viz.\ Fig.~\ref{fig:J0529+2500}, is suggestive of a compact symmetric object (\cite{cpruxm92}). We therefore judged it to be unsuitable for scattering measurements and excluded it from further observations. Each source was observed over two hour angles for a total time on source of 30~min. The sources 3C147, 3C286, and 3C454.3 were also observed to assist with fringe finding. Only left-circular polarization was recorded. The VLBA stations recorded eight 4-MHz IFs, with 16 channels per IF, for a total bandwidth of 32~MHz; the VLA hardware and radio frequency interference (RFI) environment allowed only four IFs to be recorded. Initial editing was performed using station-supplied logs. At 90~cm, RFI is problematic; extensive additional editing was performed to excise \hbox{RFI}. Amplitude calibration and fringe finding followed a procedure similar to that used for the 6 and~18~cm observations. The single-IF delays were found by utilizing a 2~min.\ section from one scan of 3C147. RFI made it difficult to isolate a section of a scan containing the full frequency response of the antennas, thus no correction for the shape of the bandpass was applied. \section{Individual Sources}\label{sec:sources} After the delay and rate solutions were applied, the data were averaged across the full bandpass (16~MHz at~6 and~18~cm and 32~MHz at~90~cm) and in time. At~6 and~18~cm the averaging time was typically 30~s and at~90~cm it was 10~s. Longer averaging times were used if a source was not detected within these initial averaging times. Of the twelve sources observed, we detect all but one, WB~0616$+$1522 (viz.\ Fig.~\ref{fig:vlasources}), at one or more wavelengths. Gain fluctuations remaining in the data were then removed via a few to several iterations of self-calibration (e.g., \cite{w95}).
For the weaker sources, only the gain phases were corrected; both gain amplitude and phase corrections were determined for the stronger sources. In the initial self-calibration iteration we used a point-source model to correct the phases only. Thereafter we used the images produced from the self-calibrated data as the input model. We ceased the self-calibration iterations when the off-source rms in an image was within a factor of 2--3 of the thermal limit, which is 0.6~\mbox{mJy~beam${}^{-1}$}\ at 6 and~18~cm and 2~\mbox{mJy~beam${}^{-1}$}\ at 90~cm. We did not attempt to reach the noise level given our limited hour angle coverage. A summary of the sources is given in Table~\ref{tab:sizes} and Figs.~\ref{fig:spectra} and \ref{fig:sizes}. Figure~\ref{fig:spectra} shows the spectrum of each source detected at two or three of the VLBI observation wavelengths. Because of our limited hour angle coverage, we were unable to test our amplitude calibration via such means as $u$-$v$ plane crossings. The \textit{a priori} amplitude calibration at the VLBA should be accurate to 10\% (\cite{w96}). Figure~\ref{fig:sizes} shows the angular diameter as a function of wavelength for each source appearing compact at two or three of the VLBI observing wavelengths. Our primary aim for undertaking this survey was to find extragalactic sources whose angular diameters show evidence of radio-wave scattering and could therefore be used to constrain the size of the \ion{H}{2} disk (\cite{lc97}). Not all of the sources we have observed are suitable for our intended analysis. In this section we present the individual sources and an assessment of their use for further analysis. For sources with complex structure, we show the image produced from the self-calibrated visibility data. For sources showing simple structure, i.e., one or two gaussian components, we show the visibility data. If the source can be modelled with a single gaussian, we superpose that model on the data. 
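For a source fit by a single circular gaussian, the superposed model is the standard visibility amplitude of a gaussian component: for FWHM $\theta$ (in radians) on a projected baseline of $r$ wavelengths, $V(r)=S\exp[-(\pi\theta r)^2/(4\ln 2)]$. A minimal sketch (our illustration; the function name and the example flux and diameter are assumptions, not values from the paper):

```python
import math

def vis_circular_gaussian(r_wavelengths, flux_jy, theta_rad):
    # Visibility amplitude of a circular gaussian of FWHM theta_rad
    # on a projected baseline of length r_wavelengths.
    arg = (math.pi * theta_rad * r_wavelengths) ** 2 / (4.0 * math.log(2.0))
    return flux_jy * math.exp(-arg)

# Example: a 0.5 Jy source with an assumed 50 mas FWHM
theta = 50e-3 / 206265.0                         # 50 mas in radians
print(vis_circular_gaussian(0.0, 0.5, theta))    # zero-spacing flux, 0.5 Jy
```

The visibility falls to half the zero-spacing flux at $r=2\ln 2/(\pi\theta)$, which is roughly how a measured amplitude rolloff translates into an angular diameter.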
Because of the limited hour angle coverage, the angular diameters in Table~\ref{tab:sizes} are found from fits of circular gaussians to the $u$-$v$ data. For a subset of the sources, we also performed fits of an elliptical gaussian. We focussed on the 90~cm observations as these observations have the longest duration and, therefore, the most complete $u$-$v$ coverage. The 90~cm observations are also the most sensitive to scattering-induced anisotropy, though other causes such as intrinsic structure or $u$-$v$ coverage can also produce anisotropic image shapes. We restricted fits of an elliptical gaussian to those sources with $S > 0.5$~Jy at~90~cm, so that the data would have a high signal-to-noise ratio. Four sources---87GB~0512$+$2627, 87GB~0547$+$3044, 87GB~0600$+$2957, and 0611$+$131---meet these criteria. We characterize the elliptical fits by the image elongation parameter $e_{\mathrm{s}}$ (Romani, Narayan, \& Blandford~1986) \begin{eqnarray} e_{\mathrm{s}} & = & \frac{a-b}{\sqrt{ab}} \nonumber \\ & = & \frac{1-b/a}{\sqrt{b/a}} \label{eqn:elongate} \end{eqnarray} where $a$ and~$b$ are the major and minor axes of the image. Romani et al.~(1986) pointed out that even though the expected image shape from an isotropic scattering medium is circular, a single realization of the image shape can produce a non-zero $e_{\mathrm{s}}$. They provide expressions for the rms value of $e_{\mathrm{s}}$, $\sigma_e$, in terms of the relevant observational and scattering parameters (their Table~1). For our 90~cm observational program and using the \cite{tc93} model to describe scattering in the anticenter, we estimate $\sigma_e \approx 0.03$ for anticenter sources. 
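As a quick numerical illustration of equation (\ref{eqn:elongate}) (our addition, not from the original analysis), $e_{\mathrm{s}}$ depends only on the axial ratio $b/a$:

```python
def elongation(axial_ratio):
    # e_s = (1 - b/a) / sqrt(b/a), following eq. (eqn:elongate)
    return (1.0 - axial_ratio) / axial_ratio ** 0.5

# Axial ratios quoted in this section for the four elliptical-gaussian fits
for r in (0.899, 0.842, 0.649, 0.521):
    print(f"b/a = {r:.3f} -> e_s = {elongation(r):.2f}")
```

These four ratios reproduce the quoted elongations $e_{\mathrm{s}}=0.11$, 0.17, 0.44, and~0.66, all well above the isotropic-scattering rms $\sigma_e \approx 0.03$.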
Values of $e_{\mathrm{s}}$ significantly larger than this indicate (1)~scattering toward the anticenter is stronger than that described by the \cite{tc93} model (i.e., larger $A_1$); (2)~the density fluctuations, which are responsible for interstellar scattering, are described by a so-called ``steep'' power-law spatial spectrum in the anticenter (\cite{rnb86}; \cite{r90}); (3)~the scattering medium is anisotropic; or (4)~effects not related to scattering, such as those cited above, are important. We discuss the fits individually, but we shall conclude that $u$-$v$ coverage or intrinsic structure are more likely than scattering to be responsible for the anisotropic image shapes that we find. \subsection{87GB~0433$+$4706}\label{sec:j0436+4712} Although appearing compact to the VLA at 6 and~20~cm (Fig.~\ref{fig:J0436+4712}; \cite{f86}), we detect this source only at 90~cm in the VLBI observations. The lack of a VLBI detection at 6 and~18~cm may be due to scheduling and correlator constraints which resulted in this source not being observed by the VLA, the most sensitive element in our VLBI array. Because its 90~cm structure can be fit by a single gaussian, we use this source in our analysis of scattering. The diameter of this source is a factor of ten larger than those of two nearby sources, 87GB~0451$+$4309 (see below) and 0503$+$467 (\cite{smbc86}), even though these sources are closer to the supernova remnant HB9. This large diameter may be an indication of anomalously strong scattering or of unresolved structure (like 0611$+$131, \S\ref{sec:0611+131}). \subsection{87GB~0451$+$4309}\label{sec:j0455+4313} Like 87GB~0433$+$4706 this source appears compact to the VLA at 20~cm, but is detected only at 90~cm, Fig.~\ref{fig:J0455+4313}. The same cause is likely responsible for this lack of detection, as the VLA was unable to observe this source. Because its 90~cm structure can be fit by a single gaussian, we will use this source in our analysis.
The diameter of this source, 5~mas scaled to 1~GHz assuming $\theta \propto \lambda^{2.2}$, compares favorably with the 4~mas diameter Spangler et al.~(1986) find for 0503$+$467. \subsection{87GB~0512$+$2627}\label{sec:j0516+2630} We detect this source at 18 and~90~cm, Fig.~\ref{fig:J0516+2630}. It can be fit by a single gaussian at both wavelengths and we shall use this source in our analysis. This was one of the sources to which we also fit an elliptical gaussian. The image elongation parameter is $e_{\mathrm{s}} = 0.11$ (axial ratio~$b/a = 0.899$). This value is significantly in excess of the rms value for isotropic scattering. The orientation of the source's major axis is aligned approximately with the orientation of the longest baselines in the $u$-$v$ plane. We conclude that the image shape is likely to be dominated by the incomplete $u$-$v$ coverage, and that a circular gaussian is an equally acceptable description of the source shape. \subsection{87GB~0526$+$2458} At 20~cm, this source appears compact to the VLA, Fig.~\ref{fig:J0529+2500}. At 18~cm our VLBI observations resolve the source into a compact symmetric object (\cite{cpruxm92}) having two components separated by approximately 60~mas. The weaker of these is not visible in our 6~cm image. Because of its structure, we judged it to be unsuitable for an angular broadening analysis and did not observe it at 90~cm. \subsection{87GB~0537$+$3059} To the VLA, this source appears compact, Fig.~\ref{fig:J0541+3046}. At 90~cm the source shows a simple structure. At 18~cm two components are visible, a compact core-like component and a larger halo. At 6~cm the source is detected on only the shortest baseline, PT-\hbox{VLA}. From the two shortest baselines, PT-VLA and LA-VLA, we constrain its 6~cm diameter to $57\,\mathrm{mas} < \theta < 284\,\mathrm{mas}$. We shall use the diameters derived from the 90~cm data and the 18~cm core component in our analysis. 
\subsection{87GB~0547$+$3044} At 20~cm, the source has a compact core with a jet extending approximately 30\arcsec\ to the west, Fig.~\ref{fig:J0550+3045}. In our VLBI observations we detect the core component at all three wavelengths. However, at 6~cm, it is detected on only the two shortest baselines, PT-VLA and LA-VLA, so that we can constrain its diameter to be only $\theta < 57$~mas. We use only the diameters derived from the 18 and~90~cm data in our scattering analysis. This was one of the sources to which we also fit an elliptical gaussian. The image elongation is $e_{\mathrm{s}} = 0.17$ ($b/a = 0.842$). Like for 87GB~0512$+$2627, \S\ref{sec:j0516+2630}, however, the image shape is likely to be dominated by the incomplete $u$-$v$ coverage as the orientation of the source's major axis is aligned approximately with the orientation of the longest baselines in the $u$-$v$ plane. \subsection{87GB~0558$+$2325} We detect this low-latitude source at all three wavelengths, Fig.~\ref{fig:J0601+2324}. It is compact and we use it in our analysis. \subsection{87GB~0600$+$2957}\label{sec:J0603+2957} This source is detected at all three wavelengths, Fig.~\ref{fig:J0603+2957}. At 6 and~18~cm, two components can be identified---a compact one with a flux density of 150--250~mJy and a secondary component approximately 2~mas to the west with a flux of 100--150~mJy. At 90~cm, only one component is present. For our scattering analysis, we shall use only the more compact component. This was one of the sources to which we also fit an elliptical gaussian. The image elongation is $e_{\mathrm{s}} = 0.44$ ($b/a = 0.649$). This elongation is likely to reflect intrinsic structure. First, the wavelength dependence of the angular diameter is $\theta \propto \lambda^{0.3}$, as opposed to the $\lambda^2$ dependence expected for angular broadening. Second, an image of the source at~18~cm shows a comparable elongation, though with a misalignment of the position angles. 
The 18~cm image has a position angle of approximately $-10\arcdeg$ while the fit to the $u$-$v$ data gives a position angle of approximately 30\arcdeg. In \cite{lc97} we show that the scattering diameter for this source is small enough to alter the estimates of the scale height of the ionized disk in the outer Galaxy by 50\%. Possible explanations for a small scattering diameter are that the distribution of scattering material is patchy or that the source is Galactic. However, we have found no characteristic of the source which would cause us to favor a Galactic classification over an extragalactic one: Its brightness temperature is about $5 \times 10^9$~K; its morphology is consistent with that of extragalactic sources, a central core with a secondary component; there are no X-ray sources within 1\arcdeg; and the nearest pulsar is PSR~B0609$+$37. We discuss these possibilities in more detail in \cite{lc97}. Condon et al.~(1983) were unable to find an optical counterpart for this source, though the field does show signs of obscuration. \subsection{87GB~0621$+$1219} This source is detected at all three wavelengths and can be modelled with a single gaussian component, Fig.~\ref{fig:J0623+1218}. We shall use this source in our analysis. \subsection{87GB~0621$+$3206} This source is detected at 18 and~90~cm, Fig.~\ref{fig:J0624+3205}. At 18~cm the source is resolved into two components, with a core-jet morphology. The visibility data show significant deviations from a gaussian, particularly at short baselines, and there is possibly an extended, nearly resolved-out component approximately 400~mas from the core. At 90~cm the source has two components, separated by 400~mas. Because of its complex structure, we shall not use this source in our analysis. \subsection{87GB~0622$+$1153} This source is detected at 18 and~90~cm, Fig.~\ref{fig:J0625+1150}. At 90~cm the source can be modelled with a single component. At 18~cm, we detect the source only on the PT-LA-VLA triangle.
The visibility amplitude on the PT-VLA baseline, the shortest baseline, is lower than that on the LA-VLA and PT-LA baselines, possibly indicating that the source structure at 18~cm may be more complex than at 90~cm. Given the paucity of data at 18~cm, however, we shall use only the 90~cm diameter in our scattering analysis. \subsection{0611$+$131}\label{sec:0611+131} This source was observed by Dennison et al.~(1984) and was included in our observations as a control source. Our spectrum is in good agreement with that obtained by Dennison et al.~(1984), though the wavelengths from which they derive their spectrum differ somewhat from ours. Also, we find a flux 10--15\% lower than Dennison et al.~(1984). Our 90~cm source diameter is 41.4~mas, in good agreement with their upper limit of 40~mas at 75~cm. However, as our 6 and~18~cm images show, Fig.~\ref{fig:0611+131}, these low-frequency source diameters are unlikely to be that of a single component. At higher frequencies, a second component is seen to the north, approximately 35~mas away from the compact component we identify as the core. Extended emission is also seen either partially or entirely linking these two components. We shall therefore exclude this source from our analysis (\cite{lc97}). This was one of the sources to which we also fit an elliptical gaussian. The image elongation is $e_{\mathrm{s}} = 0.66$ ($b/a = 0.521$). This is the most highly anisotropic of the four sources for which we fit elliptical gaussians. The anisotropy of this source is likely to be dominated by the intrinsic structure. Our higher resolution observations at~6 and~18~cm reveal an elongated source structure with a comparable elongation. There is some misalignment between the image shapes, however. The 6 and~18~cm images show the source to be elongated in the north-south direction, a position angle of 0\arcdeg; the fit to the $u$-$v$ data at~90~cm finds a position angle of 24\arcdeg. 
\section{Preliminary Constraints on the Ionized Disk}\label{sec:conclude} The measured angular diameters are summarized in Table~\ref{tab:sizes}. They are in the range 50--600~mas at 90~cm, 1--150~mas at 18~cm, and 0.4--5~mas at 6~cm. The latitude range of the sources is $|b| < 10\arcdeg$. At high latitudes the expected scattering diameter of extragalactic sources is $12\,\mathrm{mas}\,\lambda^2(\sin |b|)^{-1/2}$ for $\lambda$ in meters (\cite{d-sr76}). Of the three wavelengths at which we observed, the 90~cm angular diameters will contain the largest contribution from scattering. Scaling the high-latitude expression to 327~MHz, the expected scattering diameters are approximately 20--75~mas for sources with $|b| = 1\arcdeg$ to 10\arcdeg. Of the ten sources for which we have measured an angular diameter at 90~cm, seven of them have diameters smaller than 90~mas, viz.\ Table~\ref{tab:sizes}, comparable to that expected if the Galaxy's scattering material is confined to a flat disk. Of the remaining three sources, we cannot assess the degree to which intrinsic structure contributes to the observed diameters of 87GB~0433$+$4706 and 87GB~0622$+$1153 because they were detected at only one frequency, and scattering probably does not influence the size of 87GB~0537$+$3059h because it is the halo of the compact component. The close correspondence between the observed angular diameters and that extrapolated from high latitudes suggests that the ionized disk is not strongly warped like the \ion{H}{1} disk. It may also indicate that the scattering material does not extend to large Galactocentric distances ($R \gtrsim 100$~kpc). We defer a more comprehensive analysis to \cite{lc97}, where we combine these observations with those in the literature and use a likelihood analysis to constrain the distribution of scattering in the outer Galaxy. \acknowledgements This research has made use of the Simbad database, operated at the CDS, Strasbourg, France.
The Very Large Array (VLA) and the Very Long Baseline Array (VLBA) are facilities of the National Science Foundation operated by the National Radio Astronomy Observatory under cooperative agreement by Associated Universities, Inc. Many people at the NRAO were instrumental in assisting with these observations. We thank A.~Beasley for assistance with the L- and C-band observations; C.~Janes, R.~Simon, and J.~Wrobel spent considerable time determining the RFI environment, particularly for the VLA, and assisting with the production of SCHED files. We thank B.~Clark for scheduling us on a Thanksgiving weekend for the 90~cm observations. This research was supported by NASA GRO grants NAG~5-2436 and NAG~5-3515 and NSF grant AST-9528394. \clearpage
\section{Introduction} \label{sec:intro} \vspace{-0.5em} State-of-the-art Automatic Speech Recognition (ASR) systems use Deep Neural Networks (DNN) of various architectures for acoustic modeling (AM). Early success using DNNs for ASR came from hybrid DNN-hidden Markov models (DNN-HMM) \cite{hmm_dnn}. These were typically trained using the frame-level cross entropy (CE) criterion to predict senones \cite{hmm_dnn} obtained from a previous Gaussian Mixture Model (GMM)-HMM system. Sequence-level training criteria like Maximum Mutual Information (MMI) \cite{bahl_mmie} have been shown to improve the performance of these frame-level trained DNN-HMM-based ASR systems \cite{Vesely_13, saon2012discriminative}. Since then, various approaches have been shown to be able to train neural networks purely through sequence training, without initial pre-training using a frame-level criterion -- lattice-free MMI (LF-MMI) \cite{lfmmi, e2e_lfmmi}, connectionist temporal classification (CTC) \cite{ctc}, the recurrent neural network transducer (RNN-T) \cite{graves2012rnnt}, and attention-based sequence-to-sequence (seq2seq) models \cite{bahdanau2016attention, las}. RNN-T and seq2seq models consist of an acoustic encoder that is jointly trained with a neural decoder, which can be considered a neural language model (LM). These models can be used to decode audio without an external LM, and thus can be termed ``end-to-end''. In contrast, CTC-based models and hybrid DNN-HMMs are ``encoder-only'' models in the sense that they do not have an explicit jointly trained neural decoder. Having a single ``end-to-end'' model might be simpler, but in general these models are known to be data-hungry \cite{chiu2018seq2seq, xiaohui2021benchmarking} and require thousands of hours of data to achieve competitive performance.
RNN-T models are also known to benefit from pre-training encoders or alignments from CTC \cite{graves2013rnn} or hybrid DNN-HMM \cite{chunxi2021aux} models for accuracy or efficiency improvements \cite{graves2013rnn,chunxi2021aux,ar_rnnt, Zeyer2020transducer}. On the other hand, hybrid models use an external LM for decoding and are often explicitly trained to work with an LM \cite{lfmmi,semisup_lfmmi,ctc_crf}. They are appealing for their modularity, which allows one to easily replace or extend the lexicon or LM for different applications, whereas this remains a challenge for end-to-end systems \cite{duc2021rnnt_context}. Hybrid models also explicitly model silence, which makes them ideal candidates for pre-processing and segmenting audio, as well as for applications that require highly accurate decoding token time-stamps. While hybrid DNN-HMM and CTC models are very similar, they have vastly different legacies and are usually implemented in very different frameworks. For example, though LF-MMI was proposed in a DNN-HMM framework with senone/chenone \cite{le2019senones} modeling units, this combination of topology and modeling units is not mandatory. On the other hand, a CTC model conventionally refers to a model whose modeling units follow the CTC topology and which is trained with the Maximum-Likelihood (ML) criterion, which is just the numerator part of the MMI criterion \cite{e2e_lfmmi}. However, CTC models can also be trained discriminatively with the sMBR \cite{ctc_smbr} or MMI~\footnote{The CTC-CRF criterion in \cite{ctc_crf} is equivalent to LF-MMI as in \cite{e2e_lfmmi}, as both use uniform transition scores constant over the linear chain.} criteria. Intrinsically, HMM and CTC are just different label topologies (Sec.~\ref{sec:topo}). By decoupling the concepts of modeling units (character/wordpiece/chenone, etc.) and label topologies, we introduce a single generalized framework for training hybrid models. This major contribution of our paper allows systematic comparisons (Sec.
\ref{libri_setup}) of different modeling units and label topologies to gain a deeper understanding of their properties, and makes it easier to develop training schemes with novel combinations of them. From this framework, together with the boost factor \cite{povey2008bmmi,chen2018sequence} for LF-MMI, we propose three new training schemes: (1, 2) wp-CTC-bMMI and ch-CTC-bMMI (CTC-bMMI with wordpiece/chenone units), with overall better WERs than HMM-bMMI, whose effectiveness is also confirmed on two real-world server-side/on-device ASR applications; and (3) wp-HMM-bMMI, which enables both large-stride (8) inference and accurate token time-stamps, thanks to silence modeling. On the HMM side, we also show that HMM-MMI models with bi-character units (bc-HMM-MMI) can serve as better flat-start trained alignment models than Gaussian Mixture Models (GMM), especially on noisy data. \vspace{-0.5em} \section{LF-bMMI training} \vspace{-0.5em} The LF-MMI criterion \cite{lfmmi} was extended to include boosting \cite{povey2008bmmi} in \cite{chen2018sequence, weng2019lfbmmi}. Here, we present it again in the generalized hybrid model framework for different modeling units and label topologies. The MMI criterion \cite{bahl_mmie} for training acoustic models can be viewed as maximizing the conditional likelihood of the reference $\wWr{r}$ given the acoustic observation sequence $\oOr{r}$. This maximizes the joint likelihood of the reference and the acoustic observation sequence, i.e.\ the numerator likelihood, and minimizes the marginal likelihood of $\oOr{r}$, i.e.\ the denominator likelihood. As in \cite{lfmmi}, the denominator is approximated by marginalizing over all state sequences in a denominator graph $\mathcal{G}_\text{Den}$ (hence ``lattice-free'') constructed using an n-gram token LM, which in our case can be a phone/character/wordpiece LM.
The numerator likelihood is computed by marginalizing over all sequences in a numerator graph $\mathcal{G}_\text{Num}(\wWr{r})$ that is similar but constrained to the reference word sequence. In this paper, we assume MMI/bMMI training is always lattice-free, hence we omit ``LF'' most of the time. The boosted MMI \cite{povey2008bmmi,Vesely_13} criterion was introduced to improve training performance by encouraging the criterion to give higher likelihoods to more ``accurate'' paths. This is achieved by boosting the likelihoods of paths in the denominator graph in proportion to the number of errors each contains. The LF-bMMI criterion can be written as: \begin{equation} \begin{aligned} \mathcal{F}_\text{LF-bMMI} =& \sum_{r} \log \frac{\sum_{\pi\in\mathcal{G}_\text{Num}(\wWr{r})} \mathbb{P}{\left(\oOr{r}\mid \pi\right)}^\kappa \mathbb{P}(\pi)} {\sum_{\pi' \in \mathcal{G}_\text{Den}} \mathbb{P}{\left(\oOr{r}\mid \pi'\right)}^\kappa \mathbb{P}(\pi') e^{-b\mathbb{A}(\wWr{r},\pi')}}, \label{eq:Flfbmmi} \end{aligned} \end{equation} where $\kappa$ is the acoustic weight and $\mathbb{A}(\wWr{r},\pi')$ is the accuracy function for the path $\pi'$ measured against the reference $\wWr{r}$. The accuracy function can be defined in several ways, such as using the phone edit distance to the reference \cite{povey2008bmmi}. Implementation-wise, however, in the lattice-free training framework it is easiest to define it as a sum of per-frame accuracy values. Therefore, as in \cite{weng2019lfbmmi}, we use the numerator posterior derived from the numerator graph as a proxy for the per-frame state-level accuracy values. The boosted MMI criterion can also be interpreted from a max-margin learning perspective \cite{baskar2019promising, smithsoftmax}. \vspace{-0.5em} \subsection{Full-sequence training} \vspace{-0.5em} The LF-(b)MMI criterion was originally designed at the sequence level.
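To make the sequence-level objective concrete, the following toy sketch evaluates Eq.~(\ref{eq:Flfbmmi}) for one utterance by explicit path enumeration. In practice the sums over $\mathcal{G}_\text{Num}$ and $\mathcal{G}_\text{Den}$ are computed with the forward algorithm over the FSTs; the numbers and the per-path accuracy proxy here are purely illustrative.

```python
import math

def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def lf_bmmi_utt(num_paths, den_paths, kappa=1.0, b=0.5):
    # Toy per-utterance LF-bMMI objective.  Each path is a tuple
    # (acoustic_logprob, lm_logprob, accuracy), where `accuracy` stands in
    # for A(W_r, pi') summed over frames.  Real systems sum over all paths
    # of the numerator/denominator FSTs with the forward algorithm instead
    # of enumerating them explicitly.
    num = logsumexp([kappa * ac + lm for ac, lm, _ in num_paths])
    den = logsumexp([kappa * ac + lm - b * acc for ac, lm, acc in den_paths])
    return num - den
```

With $b=0$ this reduces to plain LF-MMI; increasing $b$ down-weights high-accuracy competitor paths in the denominator, enlarging the margin against erroneous paths.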
For efficiency on GPUs, the original Kaldi implementation \cite{lfmmi} applies the criterion to equally-sized chunks of around 1.5s each. However, in our applications we need to apply the LF-bMMI criterion to full-context models like BLSTMs and Transformers, with sequence lengths of up to 2 minutes. We leveraged PyChain's LF-MMI implementation for sequence training with variable-length sequences, and added boosting \cite{povey2008bmmi} for training with boosted MMI. \vspace{-0.5em} \section{Label topologies and modeling units} \vspace{-0.5em} \label{sec:topo} In this section, we describe the label topologies and modeling units used in our models. A label topology defines the mapping between a label sequence and neural network output units (i.e.\ modeling units). For DNN-HMM systems, in this paper, we consider only the 1-state and 2-state-with-skip (which we call {\em chain}) HMM topologies \cite{e2e_lfmmi}. For CTC systems, a CTC topology \cite{ctc, ctc_crf} is used, which adds a special blank ($\phi$) output unit. The CTC topology defines a mapping $\mathcal{B}^{-1}$ that maps a label sequence $\boldsymbol{l} = l_1,\dots, l_L$ to all output unit sequences $\pi$ such that $\boldsymbol{l}$ is obtained by de-duplicating $\pi$ and removing blank symbols. An intuitive understanding of the difference between the CTC and 1-state HMM topologies\footnote{The {\em chain} topology can be obtained from Fig.~\ref{fig:1_state_num_fst} by replacing input tokens on all self-loops by a `2nd version' of each token (e.g. `I'$\rightarrow$`I$_2$').} can be obtained by looking at the examples in Fig.~\ref{fig:topo}. We see that the CTC topology allows blank ($\phi$) units {\em between any tokens}, e.g. \texttt{\_a} and \texttt{m}. The silence label (\texttt{<sil>}) that we see in the 1-state HMM topology differs from blank in that it is a real label, similar to any other wordpiece.
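The collapsing rule behind the CTC topology (merge consecutive repeats, then remove blanks) can be sketched as follows; the token strings are illustrative:

```python
def ctc_collapse(pi, blank="<b>"):
    # The CTC mapping B: merge consecutive repeated units, then drop blanks,
    # e.g. [<b>, _a, _a, <b>, m, m] -> [_a, m].
    out, prev = [], None
    for u in pi:
        if u != prev and u != blank:
            out.append(u)
        prev = u
    return out
```

Note that a genuinely repeated label must be separated by a blank to survive collapsing, e.g. \texttt{[m, <b>, m]} maps back to \texttt{[m, m]}.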
In our systems, we make the modeling choice to optionally allow silence {\em between words} in order to model the real acoustics of silence \cite{chen2015pronunciation}. We also point out that explicit silence modeling can help achieve more accurate token time-stamps during decoding, which is an advantage of HMM-based models, especially when time-constraints are used in the training targets. Notably, we can use both blank and silence in the same model, which is the case for chenone-CTC models, as pointed out in Table \ref{tab:combination}. On the other hand, we hypothesize that CTC-based models can achieve better training performance, as they benefit more from \textit{SpecAugment} due to the blank tokens, which we verify in our experiments. The blank tokens, which signify ``no output'', are ideal to represent the perturbations due to feature masking, while HMM-based models are forced to model masked features using non-silence units (except between words, where silence can be predicted). However, the cost is less accurate decoding time-stamps due to the peaky behavior \cite{zeyer2021does} caused by the dominance of blank tokens at the output during decoding. \begin{figure} \begin{subfigure}[h] {0.42\textwidth} \centering { \begin{FitToWidth}[0.88\columnwidth] \includegraphics[]{ctc.jpg} \end{FitToWidth} } \caption{CTC topology\label{fig:ctc_num_fst}} \end{subfigure} \begin{subfigure}[h] {0.47\textwidth} \centering { \begin{FitToWidth}[1\columnwidth] \includegraphics[]{1_state.jpg} \end{FitToWidth} } \caption{\label{fig:1_state_num_fst}1-state HMM topology} \end{subfigure} \caption{Numerator FSTs (mapping output units to modeling units) of `I', `$\_$a', `m' in CTC and 1-state HMM topology; $\phi$ means blank.} \label{fig:topo} \end{figure} We consider the following 4 types of labels: \begin{itemize} \vspace{-0.5em} \item Mono-character (mono-char): This is the simplest case, where the labels are context-independent characters.
\vspace{-0.5em} \item Bi-character (bi-char): In this case, the characters are modeled separately for each left context. We do a basic text-based clustering based on the raw counts of the character n-grams seen in the training transcripts, letting infrequent bi-characters share a modeling unit within each cluster. \vspace{-0.5em} \item Tri-character (tri-char): In this case, the characters are modeled separately for each left and right context. We use standard decision-tree based clustering of states \cite{young_tied_states} and share modeling units across states within each cluster. We refer to tri-char based modeling units as chenones \cite{le2019senones}. \vspace{-0.5em} \item Wordpiece (wp): In this case, wordpieces are constructed from the training transcripts using SentencePiece \cite{spm} modeling. \end{itemize} \vspace{-0.5em} \section{Numerator and Denominator preparation} \vspace{-0.5em} \subsection{HMM topology} \vspace{-0.5em} \textbf{\textit{Chenone units:}} We use the approach for denominator graph preparation from \cite{lfmmi}, except that we replace phonemes with characters, i.e.\ we compose a \{3,4\}-gram character LM with a tri-character context-dependency transducer and an HMM transducer. The n-gram LM is estimated using alignments from a previous flat-start trained hybrid LF-MMI bi-char system \cite{e2e_lfmmi}. Numerator graph preparation also follows the same approach from \cite{lfmmi}, and we apply time-constraints using alignments from the same flat-start trained hybrid system. \noindent \textbf{\textit {Bi/mono-char units:}} The denominator graph preparation follows a similar approach to that described for chenone units above, but using a bi-character context-dependency for the bi-char systems and no context-dependency for the mono-char systems. Also, the character LM has to be estimated from transcripts rather than alignments, with randomly inserted silence phones \cite{e2e_lfmmi}.
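A toy sketch of this last step, i.e.\ counting character n-grams directly from transcripts with randomly inserted silence tokens; the bigram order, insertion probability and token names are illustrative (the actual systems use \{3,4\}-gram LMs):

```python
import random
from collections import Counter

def char_lm_counts(transcripts, sil="<sil>", sil_prob=0.2, seed=0):
    # Toy bigram counts for a character LM estimated directly from
    # transcripts, with silence tokens randomly inserted at word
    # boundaries (sil_prob and token names are illustrative).
    rng = random.Random(seed)
    counts = Counter()
    for line in transcripts:
        toks = []
        for i, word in enumerate(line.split()):
            if i > 0 and rng.random() < sil_prob:
                toks.append(sil)
            toks.extend(word)  # one token per character
        for a, b in zip(["<s>"] + toks, toks + ["</s>"]):
            counts[(a, b)] += 1
    return counts
```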
The numerator graph for bi/mono-char units is a full HMM with self-loops, following \cite{e2e_lfmmi}. \noindent \textbf{\textit {Wordpiece units:}} The denominator and numerator graph preparation mimics the approach for mono-char units described above. The word sequences are converted to wordpieces using a ``wordpiece lexicon'' constructed using mappings from a SentencePiece model \cite{kudo2018sentencepiece} trained on the text. Since the number of wordpieces is usually much larger than the number of characters, we use a \{2,3\}-gram LM on wordpieces for the denominator to decrease the denominator graph size. An example numerator FST for wordpiece units with the HMM topology is shown in Fig.~\ref{fig:1_state_num_fst}. \vspace{-0.5em} \subsection{CTC topology} \vspace{-0.5em} \textbf{\textit{Chenone units:}} For chenone units with the CTC topology, we first obtain the chenone sequence from a previous flat-start trained hybrid LF-MMI bi-char system, as in the chenone-HMM case, and remove repetitions to obtain a label sequence with chenones as the labels. We treat chenones like regular characters and compose the sequence with the CTC topology transducer \cite{ctc_crf}. Note that, unlike the chenone-HMM case, there are no time-constraints on the numerator FSTs here. For the denominator graph, we first obtain the denominator graph for a 1-state HMM topology as in the HMM case, and then convert it to a CTC-compatible topology by splitting each state into two states and adding two arcs for consuming blank tokens, in the same way as done in \cite{zhang2020wp_hybrid} for constructing the decoding graph for chenone-based CTC models. \noindent \textbf{\textit {Wordpiece units:}} For wordpiece units with the CTC topology, the numerator graph (e.g.\ Fig.~\ref{fig:1_state_num_fst}) is created in the same way as in the chenone case.
The wordpiece sequence is generated on-the-fly by tokenizing the reference sequence into wordpieces using a SentencePiece model \cite{kudo2018sentencepiece}. The denominator graph is created by composing an n-gram wordpiece LM with the CTC topology transducer. The n-gram wordpiece LM is estimated from training transcripts tokenized into wordpieces using a SentencePiece model. We summarize in Table \ref{tab:combination} the main properties of the combinations of HMM/CTC topologies and modeling units that we study. Among them, wp-HMM, wp-CTC and ch-CTC are novel schemes in terms of MMI training. \begin{table*}[t] \renewcommand\arraystretch{1.2} \centering \caption{Properties of combinations of different modeling units and label topologies (`mc' = `mono-char', `bc' = `bi-char', `ch' = `chenone')} \vspace{-0.2cm} \begin{tabular}{ || c || c | c | c | c | c ||} \hline \hline Model & wp-HMM & mc/bc-HMM & ch-HMM & ch-CTC & wp-CTC \\ \hline \hline Label topology & \multicolumn{3}{c|}{HMM} & \multicolumn{2}{c||}{CTC} \\ \hline Acoustic-based clustering & \multicolumn{2}{c|}{N} &\multicolumn{2}{c|}{Y} & N \\ \hline Time-constrained Num. FST & \multicolumn{2}{c|}{N} & Y & \multicolumn{2}{c||}{N} \\ \hline Explicit silence modeling & \multicolumn{4}{c|}{Y} & N \\ \hline Training criterion & \multicolumn{2}{c|}{ML / MMI} & CE / MMI & \multicolumn{2}{c||}{ML / MMI} \\ \hline \hline \end{tabular} \label{combination} \label{tab:combination} \end{table*} \vspace{-0.5em} \section{Pre-training with CE/ML models} \vspace{-0.5em} To improve LF-bMMI training performance, we can pre-train the model with either the frame-level CE criterion or the sequence-level ML criterion \cite{e2e_lfmmi}\footnote{Strictly speaking, CE is frame-level ML. We make CE comparable to ML since, for simplicity, ``ML'' always refers to sequence-level ML in this paper.}. ch-HMM models (i.e. HMM topology with chenone units) are the only ones for which we use frame-level alignments.
For ch-HMM models, we use CE pre-training with labels obtained from frame-level alignments. For the other models, we use sequence-level pre-training with the ML criterion. Note that in the case of the CTC topology, this is equivalent to the CTC training criterion. In all these cases, the neural network outputs are locally normalized by a softmax layer. When fine-tuning a neural network pre-trained with the CE or ML criterion, we empirically found that removing the softmax and using the logits directly helped performance. However, we subtract the log of the model priors from the logits, just as we would when using the model for decoding \cite{hmm_dnn}. We estimate the model priors \cite{manohar2015semi} on a small subset of the training data, as opposed to the conventional approach of obtaining them from frame-level alignments \cite{hmm_dnn}. This approach is more general, as it allows estimating model priors even for CTC-based systems with blank tokens and for wordpiece-based systems. We additionally apply an acoustic scale $\kappa$ to the neural network outputs before they are combined with the graph scores from the numerator or denominator graphs. In theory, the LF-bMMI objective is normalized at the sequence level and hence is capable of learning the linear offset corresponding to the log-priors as well as the acoustic scale. We indeed find that when the model is trained from scratch, we do not need to explicitly supply the log-priors, and an acoustic scale of 1.0 suffices. But when fine-tuning a pre-trained network, we found that we need to match the priors and acoustic scale to the optimal decoding values. Using a mis-matched prior or acoustic scale leads to slower convergence. \setlength{\tabcolsep}{0.14cm} \begin{table*}[t] \caption{\textit{dev-clean/other} ML/CE vs.
MMI WER and the effect of ML/CE pre-training for MMI (\#ep denotes the number of epochs to reach the best WER).} \begin{subtable}[h] {0.65\textwidth} \centering \small \vspace{-0.2cm} \begin{tabular}{ c | c c | c c | c c} \multirow{2}{*}{Loss} & \multicolumn{2}{c}{wp-HMM} & \multicolumn{2}{|c}{wp-CTC} & \multicolumn{2}{|c}{ch-CTC} \\ \cline{2-7} & WER & \#ep & WER & \#ep & WER & \#ep \\ \hline ML & 7.2 / 17.3 & 69 & 4.6 / 11.5 & 58 & 4.1 / 10.7 & 55 \\ MMI & 4.3 / 11.0 & 60 & 4.4 / 10.6 & 121 & 3.8 / 9.1 & 153 \\ ML $\rightarrow$ MMI & 4.4 / 11.0 & 66 & 4.1 / 10.2 & 89 & 3.7 / 9.0 & 143 \\ \hline \end{tabular} \end{subtable} \begin{subtable}[h] {0.1 \textwidth} \centering \small \vspace{-0.2cm} \begin{tabular}{ c | c c} \multirow{2}{*}{Loss} & \multicolumn{2}{c}{ch-HMM} \\ \cline{2-3} & WER & \#ep \\ \hline CE & 4.2 / 10.6 & 60 \\ MMI & 4.0 / 9.5 & 54 \\ CE $\rightarrow$ MMI & 3.8 / 9.1 & 48 \\ \hline \end{tabular} \end{subtable} \label{ml} \end{table*} \vspace{-0.5em} \section{Experiments} \vspace{-0.5em} \subsection{Comprehensive Analysis on Librispeech} \vspace{-0.5em} \label{libri_setup} Here we perform a series of analyses of LF-bMMI training with different modeling units, label topologies and various configurations on Librispeech \cite{librispeech}. We use the standard 960h set for training and the \textit{dev-clean} and \textit{dev-other} sets for evaluation. We use the official 4-gram LM (pruned to 3-gram with a threshold of $1e^{-9}$), built into HLG/HCLG graphs, for decoding. For the AM, we use a 25M-parameter TDNN-BLSTM network with 2 BLSTM \cite{lstm} layers (640 hidden units) in each recurrence direction and 3 TDNN layers \cite{tdnn,vijay_tdnn} (640 hidden units) interleaved between the input and the first BLSTM layer, and between the 2 BLSTM layers. Unless specified, we use stride (i.e.
input frame rate / output frame rate) 8 for wp-CTC/HMM and stride 4 for ch-CTC/HMM models, since previous studies \cite{zhang2020wp_hybrid} have shown that wordpiece units can work reasonably well with stride 8, while chenone units cannot because of their short duration. Regarding modeling units, for mc-HMM, we use 29 characters. For bc-HMM, we use 870 bi-char units from text-based clustering. For ch-HMM/ch-CTC systems, we use a set of 1632 chenones corresponding to a tree built from alignments from a bc-HMM model. For wp-HMM/wp-CTC systems, we use a set of 511 wordpieces built from a SentencePiece model, balancing performance between strides 4 and 8. Unless specified, we always conduct MMI training without pre-training, with $0$ as the boost factor, \texttt{LD} as the \textit{SpecAugment} policy, and the 1-state topology for HMM-based systems. \vspace{-0.5em} \subsubsection{Basic results and the effect of ML/CE pre-training} \vspace{-0.5em} We first compare the WERs of LF-MMI training for wp-HMM/CTC and ch-HMM/CTC with their corresponding non-discriminatively trained ML/CE baselines, and then investigate the effect of pre-training with ML/CE for LF-MMI training. Regarding the choice between ML and CE, since ch-HMM is the only system with frame-level targets, it is natural to go with CE for ch-HMM and ML for the others. From the results in Table \ref{ml}, comparing with the ML baselines, we can see that wp/ch-CTC-MMI both have around $8-15\%$ relative improvement on \textit{dev-other} and $4-7\%$ relative improvement on \textit{dev-clean}, and that pre-training MMI with ML provides a better initialization, resulting in both faster convergence and a better final WER. For wp-HMM, the ML WER is significantly worse and does not help for pre-training MMI, which is similar to the finding on mc-HMM in \cite{e2e_lfmmi}. For ch-HMM, MMI achieves a $5-10\%$ improvement compared with CE, and pre-training with CE brings a further $4\%$ improvement.
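As noted in the pre-training section, when fine-tuning a CE/ML pre-trained network we remove the softmax, subtract the log-priors from the logits, and apply the acoustic scale $\kappa$ before combining with the graph scores. A minimal pure-Python sketch of that adjustment (the exact order in which $\kappa$ and the priors are applied is our assumption; shapes are illustrative):

```python
def adjusted_scores(logits, log_priors, kappa=1.0):
    # logits: T x V raw network outputs with the softmax removed;
    # log_priors: V label log-priors estimated on a small data subset.
    # Returns the scaled, prior-normalized scores to be combined with
    # the numerator/denominator graph scores.
    return [[kappa * (z - p) for z, p in zip(frame, log_priors)]
            for frame in logits]
```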
\vspace{-0.5em} \subsubsection{The effect of boost} \vspace{-0.5em} Here we study the contribution of the boost factor in bMMI training. From Table \ref{boost} we can see that boosting improves the WERs of all four systems. For wp-HMM, wp-CTC and ch-CTC, the relative WER gain is around $2-7\%$ on \textit{dev-clean} and $2-4\%$ on \textit{dev-other}. For ch-HMM, the gain is larger: $10\%$ on \textit{dev-clean} and $6\%$ on \textit{dev-other}. We suspect the reason is that ch-HMM is the only system with time-constraints on the numerator FSTs, so its frame posteriors, which the boosting mechanism relies on, are more accurate. \begin{table}[h] \centering \small \caption{\textit{dev-clean/other} bMMI WER with different boost values} \vspace{-0.2cm} \begin{tabular}{ c || c | c | c | c } boost & wp-HMM & wp-CTC & ch-HMM & ch-CTC \\ \cline{1-5} \texttt{0} & 4.3 / 11.0 & 4.4 / 10.6 & 4.0 / 9.5 & 3.8 / 9.1 \\ \texttt{0.3} & 4.2 / 11.0 & \textbf{4.2 / 10.4} & 3.7 / 9.2 & 3.7 / 9.1 \\ \texttt{0.5} & \textbf{4.2 / 10.7} & 4.3 / 10.3 & \textbf{3.6 / 8.9} &\textbf{ 3.6 / 8.7 }\\ \texttt{1.0} & 4.2 / 10.9 & 4.4 / 10.9 & 3.6 / 9.3 & 3.6 / 8.9 \\ \hline \end{tabular} \label{boost} \end{table} \vspace{-0.5em} \subsubsection{The effect of \textit{SpecAugment}} \vspace{-0.5em} Here we study the effect of \textit{SpecAugment} on the different systems. We study two \textit{SpecAugment} policies -- \texttt{LD} and \texttt{Large}. \texttt{LD} is the same as in \cite{spec_augment} but with a maximum time-mask width of $p=0.2$. \texttt{Large} ($T=30, mT=10$) is a more aggressive policy which was shown in \cite{zhang2020wp_hybrid} to help performance on Librispeech. From Table \ref{tab:specaug}, we see that without \textit{SpecAugment}, for both wordpiece and chenone units, HMM and CTC models have similar WERs.
However, we see that the CTC models benefit more from \textit{SpecAugment} than the corresponding HMM models, verifying our hypothesis that CTC, with its blank tokens, better models feature masking. \begin{table}[!h] \centering \small \caption{\textit{dev-clean/other} MMI WERs with different \textit{SpecAugment} policies} \vspace{-0.2cm} \begin{tabular}{ p{1cm} || c | c | c | c } Policy & wp-HMM & wp-CTC & ch-HMM & ch-CTC \\ \cline{1-5} \texttt{None} & 4.8 / 13.0 & 4.8 / 13.1 & 4.5 / 11.6 & 4.5 / 11.7 \\ \texttt{LD} & 4.3 / 11.0 & 4.4 / 10.6 & 4.0 / 9.5 & 3.8 / 9.1 \\ \texttt{Large} & 4.4 / 10.8 & 4.3 / 10.3 & 3.9 / 9.2 & 3.8 / 8.9\\ \hline \end{tabular} \label{saug} \label{tab:specaug} \end{table} \vspace{-0.5em} \subsubsection{Comparing different modeling units} \vspace{-0.5em} Here we fix the label topology to be 1-state HMM, and compare the WER and RTF\footnote{When we measure RTF, we optimize the decoding beam so that the WER is $1\%$ worse than the optimal WER. Otherwise we always use a beam of 30.} performance of different modeling units, both wordpiece and character-based. For wordpieces, we train models with strides 8 and 4. For character-based units we could not get reasonable convergence with stride 8 and hence stick with stride 4. From Table \ref{units_perf}, we can see that the WER of bi-char is better than mono-char by a large gap ($13-15\%$ relative), while the relative improvement of tri-char on top of bi-char is smaller ($2-7\%$). This implies that even simple text-based clustering can provide quite useful context-dependency information. Looking at the wordpiece units, we can see that with stride 4, their performance is better than bi-char and close to tri-char, showing that wordpieces can also be powerful modeling units without relying on decision-tree building. Furthermore, at stride 8, their performance is still $6-10\%$ better than mono-char at stride 4.
Unfortunately, the RTFs we report here for wordpiece-based models are much worse than in the mono-char case. This is due to the increased number of modeling units (29 chars $\rightarrow$ 511 wordpieces), and hence more confusable paths during graph search. However, in real applications where we use much larger AMs, so that AM inference dominates the computation, the RTF advantage of stride-8 wordpiece systems would be amplified, as verified in a previous study \cite{xiaohui2021benchmarking}, where a stride-8 wp-CTC model had better RTF than a stride-3 ch-HMM model using the same encoder. \begin{table}[!h] \centering \small \caption{\textit{dev-clean/other} MMI WER, RTF and TSE of different units with the same (1-state) HMM topology} \vspace{-0.2cm} \begin{tabular}{ p{0.12\columnwidth} | c | c | c | c | c } Unit & \multicolumn{2}{c|}{wordpiece} & mono-char & bi-char & chenone \\ \cline{1-6} \hline Stride & 8 & \multicolumn{4}{c}{4} \\ \hline \hline WER & 4.4 / 11.1 & 3.9 / 10.1 & 4.9 / 11.8 & 4.2 / 10.3 & 4.1 / 9.6 \\ \hline RTF & 0.020 & 0.046 & 0.006 & 0.005 & 0.011 \\ \hline TSE & 86 & 66 & 74 & 47 & 28 \\ \hline \end{tabular} \label{units_perf} \label{tab:units_perf} \end{table} \vspace{-0.5em} We also measure the decoding time-stamp accuracy of different models. The metric is the mean absolute error (MAE) between the start/end time-stamps of decoded hypothesized words and reference words, with incorrect words ignored. The reference time-stamps were obtained by aligning the audio with the reference using a bc-HMM system. In Table \ref{tab:units_perf}, we report this metric as the time-stamp error (TSE, in ms) on \textit{dev-other}. We see that the ch-HMM model has the smallest TSE, confirming that time-constraints in the training targets help the model learn more accurate alignments. \subsubsection{The effect of HMM topology} \vspace{-0.5em} Here we compare the impact of the 1-state vs {\em chain} HMM topology for wp-HMM and ch-HMM models.
For wp-HMM, in the {\em chain} case, the set of modeling units is doubled from the 511 wordpieces used in the 1-state case. For ch-HMM, we choose a 3008-leaf tree for the {\em chain} case, around twice the size of the 1632-leaf tree used in the 1-state case. From Table \ref{chain}, we can see that the impact on ch-HMM models is minor. However, the impact on wp-HMM is evident on \textit{dev-other}, where the {\em chain} topology brings a $5\%$ WER gain, which agrees with the finding in \cite{e2e_lfmmi}. We believe the reason is that the richer representation provided by the {\em chain} topology, which better models intra-class variations, contributes more to wordpiece units, which are longer than chenones. \begin{table}[!h] \centering \small \caption{\textit{dev-clean/other} MMI WER of wp-HMM and ch-HMM with 1-state and {\em chain} HMM topology} \vspace{-0.2cm} \begin{tabular}{ p{1cm} | p{1.2cm} c | p{1.2cm} c} & \multicolumn{2}{c}{wp-HMM} & \multicolumn{2}{|c}{ch-HMM} \\ \cline{1-5} Topo. & \small{1-state} & \small{{\em chain}} & \small{1-state} & \small{{\em chain}} \\ \hline \hline WER & 4.3 / 11.0 & 4.3 / 10.5 & 4.0 / 9.5 & 4.0 / 9.4 \\ \hline \end{tabular} \label{chain} \end{table} \vspace{-0.5em} \subsubsection{The effect of denominator LM order} \vspace{-0.5em} Here we investigate the impact of the denominator LM order on denominator FST size and training speed for wordpiece/chenone systems (wp-CTC/ch-HMM). From Table \ref{order} we can see that, due to the large set of units upon which the den. LM is built and the large CTC topology transducer, the den. FST in the wordpiece case is much larger than in the chenone case (for reference, the den. FST with a 3-gram den. LM for wp-HMM is 5.2MB), so that when increasing the order from 2 to 3, per-epoch training time increases by $110\%$, while it increases by only $12\%$ when changing the order from 3 to 4 for ch-HMM. In terms of total training time, when increasing den.
LM orders, wp-CTC training becomes much more expensive, while ch-HMM training even becomes cheaper. Since the WER improvement for wp-CTC still looks worthwhile, we stick with order 3 for wordpiece systems and order 4 for chenone systems in the other experiments. \begin{table}[] \centering \small \caption{\textit{dev-clean/other} MMI WER, denominator LM order/FST size, and training speed for wp-CTC and ch-HMM} \vspace{-0.2cm} \begin{tabular}{ p{2cm} | c | c | c | c} & \multicolumn{2}{c|}{wp-CTC} & \multicolumn{2}{c}{ch-HMM} \\ \hline den. LM order & 2 & 3 & 3 & 4 \\ den. FST size & 4.2MB & 10.2MB & 3.8MB & 4.6MB \\ \hline \hline WER & 4.8 / 11.4 & 4.4 / 10.6 & 4.3 / 9.9 & 4.0 / 9.5 \\ \hline \# epochs & 112 & 121 & 84 & 54 \\ per-epoch hrs & 0.38 & 0.8 & 1 & 1.12 \\ \hline \end{tabular} \label{order} \end{table} \vspace{-0.5em} \subsubsection{Benchmarking the 4 main systems with their optimal setup} \vspace{-0.5em} Here we conduct a comprehensive WER/RTF/TSE benchmark of the 4 main systems we have studied -- wp-HMM, wp-CTC, ch-HMM and ch-CTC -- with their optimal training setups: the optimal boost value for each, the \textit{SpecAugment} \texttt{Large} policy for all, pre-training for all except wp-HMM, and the {\em chain} topology for wp/ch-HMM. From Table \ref{optimal}, we can see that, as expected, ch-HMM achieves the best TSE thanks to silence modeling and the time-constraints used in training, while ch-CTC achieves the best WER (thanks to blank + \textit{SpecAugment}) and also the best RTF. wp-HMM and wp-CTC perform similarly well on RTF/WER (with wp-CTC's WER at stride 4 being a bit better), while wp-HMM's TSE is much better, again thanks to silence modeling. This shows that wp-HMM, which does not rely on alignments, is an appealing choice when we need a large-stride, flat-start trained model providing accurate time-stamps.
Moreover, although ch-CTC has worse TSE than ch-HMM (due to the lack of time constraints in training and CTC's peaky behavior), the gap is much smaller than that between wp-CTC and wp-HMM, showing that silence modeling (which ch-CTC has but wp-CTC does not) can effectively improve time-stamp accuracy even for CTC-based models. \begin{table}[!h] \centering \small \caption{\textit{dev-clean/other} bMMI WER/RTF/TSE of optimal systems} \vspace{-0.2cm} \begin{tabular}{ p{0.08\columnwidth} | c | c | c | c | c | c} & \multicolumn{2}{c|}{wp-HMM} & \multicolumn{2}{c|}{wp-CTC} & ch-HMM & ch-CTC \\ \cline{1-7} \hline Stride & 8 & 4 & 8 & \multicolumn{3}{c}{4} \\ \hline \hline WER & 4.0/10.1 & 3.9/9.7 & 4.0/10.1 & 3.7/9.4 & 3.5/8.5 & 3.3/8.3 \\ \hline RTF & 0.023 & 0.053 & 0.027 & 0.052 & 0.015 & 0.011 \\ \hline TSE & 59 & 45 & 162 & 112 & 25 & 51 \\ \hline \end{tabular} \label{optimal} \end{table} \label{sec:pagestyle} \vspace{-0.5em} \subsection{CTC-bMMI training for real-world large-scale ASR tasks} \vspace{-0.5em} Here we apply the proposed CTC-bMMI training scheme with wordpiece/chenone units (i.e. wp-CTC-bMMI and ch-CTC-bMMI) to two real-world large-scale ASR tasks and compare with the corresponding ML baselines to confirm its effectiveness. In the first application, we adopt wp-CTC-bMMI for training a large full-context Transformer model for server-side ASR. In the second application, we adopt ch-CTC-bMMI for training a small limited-context streamable\footnote{Though the emphasis of our paper is bMMI for full-context ASR model training, we intentionally choose a limited-context scenario to show our method can work for streamable models as well.} Emformer \cite{emformer} using convolution operations similar to Conformer \cite{conformer}, for on-device ASR. We focus on CTC-bMMI rather than HMM-bMMI because the emphasis in these applications is on WER rather than token time-stamp accuracy.
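For reference (the criterion is not restated in this section), the boosted-MMI objective maximized by all bMMI systems has the standard form from the discriminative-training literature, with boosting factor $b$ and accuracy $A(s,s_r)$ between a hypothesis $s$ and the reference $s_r$ (acoustic-scale factors omitted here for brevity):

```latex
\mathcal{F}_{\mathrm{bMMI}}
  = \sum_{r} \log
    \frac{p_{\theta}(\mathbf{X}_r \mid s_r)\, P(s_r)}
         {\sum_{s} p_{\theta}(\mathbf{X}_r \mid s)\, P(s)\, e^{-b\, A(s,\,s_r)}}
```

Setting $b=0$ recovers plain MMI; larger $b$ boosts the likelihood of competing hypotheses in proportion to their error.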
\vspace{-0.5em} \subsubsection{wp-CTC-bMMI for training large Transformer models} \vspace{-0.5em} \label{video_data} Here, we compare CTC-bMMI with the standard CTC (i.e. CTC-ML) and RNN-T criteria on a real-world large-scale English video ASR task. The training data consist of de-identified public videos with no personally identifiable information (PII), of which only the audio part is used. Besides a development set, there are 3 test sets under different audio conditions: \textit{clean}, \textit{noisy} and \textit{extreme}. These test sets are further segmented into audio chunks no longer than 45 seconds. Decoding is performed on these chunks unless otherwise specified. Training data are segmented into chunks with a maximum duration of 10s. Besides 39.4K hours of supervised training data (including two speed-perturbed copies), we prepared 2.2M hours of unsupervised training data, with transcriptions obtained by decoding de-identified public videos with our internal ASR models. Several data filters are applied to keep the most useful data, e.g. a confidence filter, a words-per-second filter, and a country filter. No human effort is involved in transcribing these unsupervised data. After filtering, we have 1.5M hours of semi-supervised training data in total. We use the same Transformer encoder architecture for each model, consisting of 24 layers, each with 12 attention heads, 768 embedding dimensions, and 3072 feed-forward dimensions. The encoder has roughly 170M parameters. The input is the same as in all other experiments: 80-dimensional log-Mel filter bank features at a 10ms frame rate. A stride of 8 is applied at the input layer by concatenating every 8 feature frames and then projecting them to a dimension of 768, the same as the Transformer embedding dimension. For the RNN-T model, the predictor network consists of a 512-dimensional embedding for each token, followed by two LSTM layers with 512 hidden units and a linear projection to 1024-dimensional features before the joiner.
For the joiner, the combined embeddings from the encoder and the predictor first go through a $\tanh$ activation and then another linear projection to the target number of wordpieces. We use the same set of 511 wordpieces as modeling units for all models, and the same $4$-gram LM for decoding the CTC and CTC-bMMI models. To help convergence, for CTC(-ML) training we used the CTC loss at intermediate layers, and for RNN-T training we used a CE loss at intermediate layers. The CTC-bMMI model is pre-trained from the CTC model. The boost value used is 2. Experimental results can be found in Table \ref{video}. We see that the CTC-bMMI model has large ($4$-$7\%$ relative) WER improvements over CTC, especially on the \textit{noisy} and \textit{extreme} sets, and is almost on par with the RNN-T model even without neural LM rescoring. RTF-wise, all models are similar. \begin{table}[h] \vspace{-0.5em} \centering \small \caption{Comparing training criteria for Transformer-based ASR} \vspace{-0.2cm} \begin{tabular}{ c || c c c | c } Loss & \textit{clean} & \textit{noisy} & \textit{extreme} & RTF \\ \cline{1-5} CTC & 8.53 & 12.10 & 18.46 & 0.089 \\ CTC-bMMI & 8.24 & 11.61 & 17.19 & 0.090 \\ RNN-T & 8.01 & 11.49 & 17.04 & 0.094 \\ \hline \end{tabular} \vspace{-1em} \label{video} \end{table} \vspace{-1em} \subsubsection{ch-CTC-bMMI for training small Emformer models} \vspace{-0.5em} Here we study the effectiveness of CTC-bMMI with chenone modeling units in an on-device English ASR scenario, with CTC(-ML) as the baseline. Training data are two subsets of the data used in Sec. \ref{video_data}, containing $7000$ and $1000$ hours of videos, respectively. We use the same test data as in Sec. \ref{video_data}. The model is an Emformer \cite{emformer} model supporting streaming speech recognition using block processing. In training, an attention mask and ``right context hard copy'' are used to constrain the look-ahead context for self-attention.
In this experiment, each block consists of 1.4 s of left context, a 600 ms center chunk, and 40 ms of look-ahead context. The algorithmic latency~\cite{emformer} of the acoustic model is 340 ms. A stride of 4 is applied at the input layer by concatenating every 4 feature frames and then projecting them to a dimension of 256, used as input to the stack of 12 Emformer layers. Each Emformer layer has a multi-head self-attention layer with four heads and input size 256, a feed-forward layer with hidden dimension 1024, and a depthwise-separable convolution layer with kernel size 15. The model has roughly 18M parameters. From the results in Table \ref{streaming}, we can see that in the 1000h condition, CTC-bMMI has a $20$-$30\%$ relative WER improvement over CTC, which is much larger than the gain ($11$-$16\%$) in the 7000h condition, showing that discriminative training helps more when less data is available and when the models are smaller (compare with Table \ref{video}). \begin{table}[h] \centering \vspace{-0.5em} \small \caption{Comparing training criteria for Emformer-based ASR} \vspace{-0.2cm} \begin{tabular}{ c || c c c | c } Criterion & \textit{clean} & \textit{noisy} & \textit{extreme} & training hours \\ \cline{1-5} CTC & 25.38 & 32.18 & 39.65 & \multirow{2}{*}{1000h} \\ CTC-bMMI & 17.63 & 23.36 & 31.02 & \\ \hline CTC & 18.44 & 23.97 & 31.38 & \multirow{2}{*}{7000h} \\ CTC-bMMI & 15.45 & 20.59 & 27.71 & \\ \hline \end{tabular} \label{streaming} \end{table} \vspace{-1em} \subsection{bc-HMM-MMI for alignment model training} \vspace{-0.5em} Here, we study an important application of HMM-MMI models with bi-char units (bc-HMM-MMI): alignment generation. Accurate alignments are important for ASR, both for audio segmentation and for providing training targets for main/auxiliary ASR training tasks, even for RNN-T \cite{chunxi2021aux,ar_rnnt}. To train an alignment model from scratch, people have mainly relied on GMM-HMMs, e.g. from Kaldi \cite{kaldi}.
However, single-stage trained, HMM-based neural models, e.g. bc-HMM-MMI models, can be more appealing candidates (already used in Kaldi OCR recipes \cite{arora2019using}): they may provide more accurate alignments, especially on noisy data, and moreover enable an all-neural acoustic modeling pipeline. To the best of our knowledge, there is no prior literature confirming this by benchmarking bc-HMM-MMI models against GMM-HMMs. Here we conduct this benchmark by training a bc-HMM-MMI neural model and a GMM model (following the Kaldi recipe) with the same data and graphemic lexicon, evaluating their WERs, and then generating alignments on the same training data, on top of which we train two CE neural models whose WERs measure the alignment quality. The two CE models and the bc-HMM-MMI alignment model all have the same architecture as the one used in Sec.~\ref{libri_setup}, except that the stride is 3 here. Using the same architecture lets us show another advantage of bc-HMM-MMI alignment models: besides generating alignments, such a model can also serve as a pre-trained seed model for the following modeling stage to improve training performance, which cannot be done with GMMs. We conduct the experiments on Librispeech, where we train models on the full 960h data and evaluate WERs on \textit{dev-other}, and on a Tagalog video ASR task (whose setup is the same as in Sec.~\ref{video_data}), where we train models on 1000h of Tagalog videos and evaluate WERs on the \textit{noisy} test set. From the results shown in Table \ref{bootstrap}, we can see that the bc-HMM-MMI neural alignment model achieves alignment quality on par with the GMM on Librispeech, as evaluated by CE WER. On Tagalog video ASR, which is much noisier than Librispeech, the bc-HMM-MMI model generates much better alignments, reducing CE WER by $14\%$ relative. Moreover, pre-training the CE models with bc-HMM-MMI seed models brings CE WERs down further, by $2\%$ (Tagalog) and $5\%$ (Librispeech) relative.
This shows that, besides serving as a strong alignment model, a bc-HMM-MMI model can also serve as a seed model for downstream modeling tasks. \begin{table}[!h] \centering \small \caption{Alignment model and CE model WERs, on Tagalog video (\textit{noisy}) and Librispeech (\textit{dev-other})} \begin{tabular}{ c | c | c | c | c} & \multicolumn{2}{c|}{Alignment Model} & CE & CE w/ seed \\ \hline \hline \multirow{2}{*}{Tagalog Video} & GMM & 61.7 & 38.0 & - \\ & bc-HMM-MMI & 27.5 & 32.6 & 31.9 \\ \hline \multirow{2}{*}{Librispeech} & GMM & 30.1 & 11.3 & - \\ & bc-HMM-MMI & 10.0 & 11.2 & 10.6 \\ \hline \end{tabular} \label{bootstrap} \end{table} \vspace{-0.5em} \section{Conclusion} \vspace{-0.5em} In this paper, we generalized the original chunk-wise HMM-based LF-bMMI training framework to a new framework in which full-context neural network training is enabled by full-sequence LF-bMMI training, supporting both HMM and CTC as the label topology, and mono-char/bi-char/chenone/wordpiece modeling units. Comprehensive studies were conducted on Librispeech to understand the impact of the boost factor, CE/ML pre-training, \textit{SpecAugment}, and the denominator LM order on the different training schemes. Within this framework, we proposed the wp-CTC-bMMI and ch-CTC-bMMI training schemes, which have WER advantages and were also studied in two large-scale real-world ASR tasks, and the wp-HMM-bMMI training scheme, which has advantages in large-stride inference, time-stamp accuracy, and alignment-free training. In the future we would like to further generalize LF-bMMI training to RNN-T-type topologies. \bibliographystyle{IEEEbib}
\section{Abstract} The Exoplanet Modeling and Analysis Center (EMAC) at NASA Goddard Space Flight Center is a web-based catalog, repository, and integration platform for modeling and analysis resources focused on the study of exoplanet characteristics and environments. EMAC hosts user-submitted resources ranging in category from planetary interior models to data visualization tools. Other features of EMAC include integrated web tools developed by the EMAC team in collaboration with the tools' original author(s) and video demonstrations of a growing number of hosted tools. EMAC aims to be a comprehensive repository for researchers to access a variety of exoplanet resources that can assist them in their work, and it currently hosts a growing number of code bases, models, and tools. EMAC is a key project of the NASA GSFC Sellers Exoplanet Environments Collaboration (SEEC) and can be accessed at \url{https://emac.gsfc.nasa.gov}. \section{Introduction} It has been three decades since the discovery of the first extrasolar planets. In that time, the research output and publications associated with exoplanet observations, data analysis, and modeling have risen exponentially; at the same time, transformations in the technical and cultural aspects of information dissemination have made the sharing of resources, software, and model inputs much simpler and more ubiquitous. However, there has not been a comprehensive platform for cataloging, sharing, and comparing these resources in an exoplanet research context. \par The \href{https://seec.gsfc.nasa.gov/}{Sellers Exoplanet Environments Collaboration} (SEEC) at NASA Goddard Space Flight Center has initiated the \href{https://emac.gsfc.nasa.gov}{Exoplanet Modeling and Analysis Center} (EMAC) to serve this purpose. EMAC is a web- and mobile-accessible system that serves as a catalog, repository, and integration platform for modeling and analysis resources focused on the study of exoplanet characteristics and environments.
At the time of writing, EMAC has cataloged over 170 tools and resources split into 11 scientific categories and numerous subcategories. In this report, we describe the design of and future plans for this project. We encourage developers in the exoplanet community to submit their resources to the platform and utilize EMAC's search tools and subscription services when starting their next project. \par \section{Exoplanet Resource Database} An EMAC-listed resource is any software, data visualization tool, or collection of model inputs or outputs related to exoplanet science. Users can \href{https://emac.gsfc.nasa.gov/submissions/}{submit} a resource to EMAC, where it will be reviewed by our team to ensure its applicability. Once approved, it will be featured on the EMAC homepage (see Figure \ref{fig1}) and will be searchable by category or keyword. Each resource is displayed to users in discrete ``resource blocks.'' These blocks contain metadata such as authors and summaries, as well as links to third-party code repositories, tutorials, notebooks, and documentation. EMAC automatically checks all resources for broken links every month and works with the resource authors to update or fix any issues. \par \begin{figure} \centering \includegraphics[width=0.95\textwidth]{figure1.png} \caption{\label{fig1} The EMAC homepage features all user-submitted resources that have been approved by the EMAC team. Major scientific themes can be seen to the left of the image. These categories can be clicked by a user to filter the resource list based on one or more topics and capabilities. A search bar is available for users to quickly find a specific tool. Each resource block shows a summary of the resource and links to applicable code repositories, documentation, and tutorials.} \end{figure} Every resource is assigned a unique ID that follows a format similar to that of the arXiv manuscript repository: ``[2-Digit Year][2-Digit Month]-[3-Digit Sequential Number]''.
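As an illustration, the resource-ID format above can be checked with a short regular expression. This is our own sketch (the function names are ours, and restricting the month field to 01-12 is our assumption; EMAC's actual validation logic is not described here):

```python
import re

# "[2-Digit Year][2-Digit Month]-[3-Digit Sequential Number]", e.g. "2207-013".
# The month-range restriction (01-12) is our own assumption about the scheme.
EMAC_ID_RE = re.compile(r"^\d{2}(0[1-9]|1[0-2])-\d{3}$")

def is_valid_emac_id(cid: str) -> bool:
    """Return True if `cid` matches the YYMM-NNN resource-ID format."""
    return EMAC_ID_RE.fullmatch(cid) is not None

def emac_resource_url(cid: str) -> str:
    """Build the permanent link for a resource ID, using the URL scheme
    described in the text."""
    if not is_valid_emac_id(cid):
        raise ValueError(f"not a valid EMAC resource ID: {cid!r}")
    return f"https://emac.gsfc.nasa.gov?cid={cid}"
```

A single permanent link of this form can then point to code, data, and documentation hosted across multiple servers.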
A resource can be accessed on the EMAC website via its ID using the URL scheme ``https://emac.gsfc.nasa.gov?cid=UniqueID''. The URL provides a means to quickly and easily share multiple types of information (code, databases, tutorials, documentation, etc.), all of which may be hosted across multiple sites and servers, with a single, permanent link. \par The primary users of EMAC are exoplanet scientists who are looking to solve a problem but may not know the best software or dataset to use. To assist these users, EMAC offers a multi-category search function so that visitors can create filters to list only those resources that apply to their questions. Categories include ``Planetary Atmosphere Models'', ``Planetary Interior Models'', ``Planet Formation and Dynamics Tools'', ``Observatory/Instrument Models'', and many more. Each of these categories can have several sub-categories to allow for more refined searches. EMAC is always looking to expand into new areas of research, and we encourage authors whose resource may not fit a current category to either submit it along with a suggested new category or contact the team with category suggestions. In addition to filtering, resources with significant heritage in common can be linked to one another using the ``Related'' button. This allows users to quickly find tools with a common code base or compare and contrast various exoplanet models with one another. \par \section{Promoting and Supporting Resources \& Developers} Resource development is time-intensive, and scientists who publish their tools online often do not have the time or expertise to promote them beyond brief mentions in talks and papers. EMAC provides services to promote resources with little-to-no extra work on the author's end.
We offer a \href{https://emac.gsfc.nasa.gov/subscriptions/}{subscription service} and an \href{https://emac.gsfc.nasa.gov/news/rss/}{RSS feed} that interested users can sign up for, based on their research interests, to learn when a new resource is posted or an existing one is updated. Our \href{https://twitter.com/ExoplanetModels}{Twitter} account shares information about resources and connects users with developers should questions arise. The EMAC team has also created video tutorials for various exoplanet software packages, which are hosted on our \href{https://www.youtube.com/channel/UCLRJnT1l6CGC8aU2o2MofXw}{YouTube} channel and have received thousands of views. \par Based on user feedback, we have found that the most accessible tools are those that have web-based applications or graphical user interfaces. However, these can be time-consuming for authors to develop when their focus is on conducting scientific research. For that reason, EMAC offers users assistance in building out these resources, particularly for applications that utilize \href{https://jupyter.org/}{Jupyter notebooks}, which can be easily opened on the web-based \href{https://mybinder.org/}{MyBinder} platform. Our team can assist researchers in putting together web-based tutorials that showcase their software and its use cases. \section{Future Plans} The primary missions of EMAC are to improve the accessibility of exoplanet resources, foster inter-code comparison and validation, and provide a venue to support developers of publicly available exoplanet software resources. To further these goals, we will offer the first \href{https://emac.gsfc.nasa.gov/workshop/}{EMAC workshop} in early 2023, which will bring our community of developers and users together to present and share codes and expertise with the wider exoplanet community and highlight the exciting work being done with these software packages.
We hope that these regular events, followed up by an online discussion platform, will lead to long-term networking across exoplanet disciplines and will encourage future model comparisons and new code development. \par Going forward, we plan to host more web-based tools and visualization resources to showcase the capabilities of EMAC-listed resources and to indicate when one resource may be more applicable to a specific problem than another. While EMAC's primary audience is exoplanet scientists, we also plan to provide outreach to non-scientists and students by working with organizations to utilize a subset of EMAC-listed tools in high school and college classrooms. This learning-by-doing approach will both prepare students for a research career and educate them about the fundamentals of exoplanet science and scientific computing in general. \par We invite the exoplanet community to \href{https://emac.gsfc.nasa.gov/submissions/}{submit} their resources, join EMAC's \href{https://emac.gsfc.nasa.gov/subscriptions/}{subscription service}, and \href{mailto:gsfc-emac@mail.nasa.gov}{connect} with our team to explore future ideas and collaborations. \section{Acknowledgments} EMAC is a key project of the NASA Goddard Space Flight Center's \href{https://seec.gsfc.nasa.gov/}{Sellers Exoplanet Environments Collaboration (SEEC)}, which is funded through the Internal Scientist Funding Model (ISFM) by NASA's Planetary Science Division (PSD), the Astrophysics Division (APD), and the Heliophysics Division (HPD). J.P.R., C.A.C., C.K., and N.S. are supported by NASA under award number 80GSFC21M0002. \end{document}
\section{Introduction} Interactions of the fermion field with solitons have been subject to intense research since the pioneering work of Jackiw and Rebbi \cite{jackiw1976solitons}. Interestingly, they found that solitons may have an associated half-integer fermion number whenever there exists a bound zero-energy solution for the fermion field. Later, a series of works showed that a soliton can have any fractional fermion number \cite{goldstone1981fractional,niemi1986fermion,alonso2019soliton}. In particular, Jackiw and Rebbi investigated a model where the fermion is coupled to a $\phi^4$ kink via a Yukawa coupling, ignoring the back-reaction. This model can be solved analytically and has a well-known set of bound and scattering states, as shown for instance in \cite{chu2008fermions,charmchi2014complete} for a massless fermion field and in \cite{charmchi2014massive} for a massive one. Since then, other kink-fermion systems have been studied. For instance, it is possible to compute the Casimir energy of the fermion field when the fermion is chirally coupled to a prescribed scalar field \cite{shahkarami2011casimir,gousheh2013casimir, gousheh2014investigation}. Another example is the computation of the energies and eigenfunctions of fermion bound states where the background scalar field is a modified sine-Gordon or a modified $\phi^4$ kink \cite{bazeia2017fermionic,bazeia2019fermion}. Moreover, kink-fermion interactions arise naturally in supersymmetric systems \cite{charmchi2014one}. In higher dimensions, fermions have been studied in the background of vortices \cite{jackiw1981zero}, chiral fields \cite{kahana1984soliton} and skyrmions \cite{hiller1986solutions}, for example. This problem becomes more interesting when one considers a kink-antikink pair, instead of just a kink, as the background. This was done in \cite{chu2008fermions} for the $\phi^4$ model, where the authors computed the energy spectrum and eigenstates of a fermion in such a background.
They showed that, as the kink-antikink distance increases, the fermion bound states and energies approach those of a single kink, as expected. This analysis was repeated in \cite{brihaye2008remarks}, now considering a sine-Gordon kink-antikink pair instead of the $\phi^4$ one. There, snapshots of the exact solution of a kink-antikink collision were considered as the background field, however without any reference to the problem of bound states after the collision. The issue here is that there are no bound states after the collision. This problem arises in many cases where the kink is not centered around the origin. In \cite{brihaye2008remarks} this problem was circumvented by shifting the sine-Gordon kink to center it around the origin; after the collision, however, this is no longer the case and there is no bound state. Therefore, it is hard to find models where we can discuss fermion bound states in kink-collision backgrounds beyond the $\phi^4$ model. There is a problem more intriguing than computing fermion bound states for a kink-antikink background: the exchange of fermions between the kinks, or the transfer of fermions between fermion bound states, during a kink-antikink collision, as done in \cite{gibbons2007fermions,saffin2007particle}. There, the background scalar field is not fixed anymore and evolves dynamically. During the collision, the fermion is affected by the scalar field and can stay on the kink, be transferred from the kink to the antikink, or radiate. This is the type of analysis that we focus on in the present paper. In \cite{gibbons2007fermions,saffin2007particle}, this analysis was motivated by previous works investigating the possibility that higher-dimensional universes can behave like a four-dimensional one if particles are bound to a brane that localizes them in the extra dimensions \cite{rubakov1983we,koley2005scalar, randjbar2000fermion,melfo2006fermion}.
The authors in \cite{gibbons2007fermions,saffin2007particle} tried to understand the fate of fermions when such branes collide, and it is interesting to keep this interpretation in mind. It is worth pointing out that it is possible to add another ingredient to the problem: the back-reaction of the fermion on the soliton. It has been shown that a prescribed soliton is a good approximation for small coupling constants, and that including the back-reaction can create bound soliton-antisoliton pairs and mediate interactions between the solitons \cite{shahkarami2011exact,amado2017coupled,klimashonok2019fermions,perapechka2018soliton, perapechka2020kinks,perapechka2019fermion}. Here, we study a fermion field coupled to a scalar field via the Yukawa interaction in (1+1) dimensions, as in \cite{amado2017coupled}. However, as in most works cited above, we consider the scalar field as a background, even for larger values of the coupling constant, because it greatly simplifies the problem, allows some analytical treatment, and permits a more direct comparison with the works mentioned before. As the scalar field evolves dynamically in our study, it is important to highlight some relevant works involving kink-antikink collisions. One of the pioneering ones was by Sugiyama \cite{sugiyama1979kink}, who estimated the critical velocities in kink-antikink collisions using a collective coordinate approach. A few years later, Campbell et al. \cite{campbell1983resonance} performed a precise numerical computation of the pattern of resonance windows. Remarkably, the authors showed that while a kink and an antikink annihilate for small relative velocities and reflect for high relative velocities, there are intermediate velocities at which they collide multiple times before separating. Furthermore, they gave an approximate explanation of the resonance-window phenomenon as an energy-exchange mechanism between the kink's translational and vibrational energy.
More recent works on kink-antikink interactions include the $\phi^4$ model and its modifications \cite{takyi2016collective,bazeia2018scattering, dorey2018resonant}; interactions of kinks in higher-order models such as $\phi^6$ and $\phi^8$ \cite{takyi2016collective,dorey2011kink,gani2014kink,gani2015kink}; coupled two-component kinks \cite{alonso2018reflection, halavanau2012resonance}; models with power-law asymptotics \cite{gomes2012highly,belendryasova2019scattering,christov2019kink}; and others \cite{simas2017degenerate,marjaneh2017multi}. It is a rich field of research with many interesting works and novel results. More recently, some attention has been directed towards the $\phi^4$ model with a half-BPS preserving impurity \cite{adam2019phi,adam2019spectral}. In this model, the impurity is a term in the Lagrangian that breaks translational invariance in such a way that the model still admits one BPS solution. The model admits topological and nontopological defects consisting of kinks, antikinks and lumps. During collisions, some of the interactions between the defects are BPS-preserving, while others are not. Similar half-BPS preserving models with an exactly solvable BPS sector were also considered in \cite{adam2019solvable}. Supersymmetric extensions of these models, where the scalar field naturally couples to a fermion field, were studied in \cite{adam2019bps}. Here, we take the solutions of the $\phi^4$ model with the half-BPS preserving impurity as a background, coupled to a fermion field similarly to \cite{adam2019bps}, although non-supersymmetrically. For a specific range of parameters, the model gives rise to bound states of the fermion interacting with the scalar field configurations. We study fermion transfer where the background is a collision between the defects of this model, including both BPS and non-BPS interactions.
In section \ref{model}, we present the $\phi^4$ model with a half-BPS preserving impurity, interacting with a fermion field via a Yukawa coupling. In section \ref{Results}, we study the time evolution and transfer of the fermion field during collisions between different components of the scalar field. Finally, in section \ref{conclusion}, we discuss and summarize our conclusions. \section{Model} \label{model} \subsection{Lagrangian and Euler-Lagrange equations} We study a model given by the following Lagrangian in $1+1$ dimensions, which can be organized into three types of terms \begin{equation} \mathcal{L}=\mathcal{L}_{scalar}+\mathcal{L}_{fermion}+\mathcal{L}_{int}. \end{equation} The scalar Lagrangian is the soliton-impurity model studied in \cite{adam2019phi} \begin{equation} \mathcal{L}_{scalar}=\frac{1}{2}\phi_t^2-\frac{1}{2}\phi_x^2-U(\phi)-2\sigma\sqrt{U(\phi)}-\sqrt{2}\sigma\phi_x-\sigma^2, \end{equation} which differs from typical scalar field theories by the $\sigma$ terms, which describe a half-BPS preserving impurity. The function $U(\phi)$ is the scalar potential, which depends on the scalar field $\phi(x,t)$. The fermion Lagrangian is given by \begin{equation} \mathcal{L}_{fermion}=i\bar{\psi}\gamma^\mu\partial_\mu\psi, \end{equation} and we consider a Yukawa interaction \begin{equation} \mathcal{L}_{int}=-g\phi\bar{\psi}\psi. \end{equation} The scalar Lagrangian demands some deeper discussion. Following \cite{adam2019phi}, we choose the potential as in the $\phi^4$ theory \begin{equation} U(\phi)=\frac{1}{2}(1-\phi^2)^2. \end{equation} The $\sigma$ terms are added to the Lagrangian such that the system still has one BPS solution, resulting from \begin{equation} \label{BPS} \phi_x+\sqrt{2}\sigma+(1-\phi^2)=0. \end{equation} This should be compared with the $\phi^4$ model, where instead two BPS solutions exist, one associated with each of the two topological sectors (kink and antikink).
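To see why Eq.~\ref{BPS} selects the antikink sector, note that far from the impurity ($\sigma\to0$) it reduces to a standard $\phi^4$ BPS equation (a short check we add here for clarity):

```latex
\phi_x = -(1-\phi^2)
\quad\Longrightarrow\quad
\phi(x) = -\tanh(x-x_0),
```

which is the $\phi^4$ antikink; the kink $\tanh(x-x_0)$ instead gives $\phi_x=1-\phi^2>0$ and cannot satisfy Eq.~\ref{BPS} with $\sigma=0$.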
The BPS solution in Eq.~\ref{BPS} corresponds to the antikink solution of the $\phi^4$ model. Hence, to find the kink solution we solve the full second-order field equation. To simplify the kink solution, $\sigma$ is chosen such that the $\phi^4$ kink at the origin is still a solution. This leads to \cite{adam2019phi} \begin{equation} \label{sigma} \sigma=\frac{\lambda}{\cosh^2(x)}, \end{equation} where $\lambda$ is a constant in the range $\lambda>-\sqrt{2}$. In other words, $\phi_{K_0}(x)=\tanh(x)$ solves the field equations when $\sigma$ is given by Eq.~\ref{sigma}. However, $\phi_K(x;x_0)=\tanh(x-x_0)$ does not solve the field equations for this choice of $\sigma$, if $x_0\neq0$. For more details regarding the $\phi^4$ scalar field with this half-BPS preserving impurity see reference \cite{adam2019phi}. The Euler-Lagrange equations for this model are \begin{align} \label{phiEL} \phi_{tt}-\phi_{xx}+2(\phi^2-1)\phi&=\frac{2\sqrt{2}\lambda}{\cosh^2(x)}(\phi-\tanh(x))\\ \label{psiEL} i\gamma^\mu\partial_\mu\psi-g\phi\psi&=0. \end{align} In Eq.~\ref{phiEL} we ignored the term proportional to $g\bar{\psi}\psi$, meaning that we disregarded the back-reaction of the fermion on the scalar field. To solve Eq.~\ref{psiEL}, let us choose a representation for the gamma matrices. We choose the complex representation $\gamma^0=-\sigma^2$, $\gamma^1=i\sigma^3$. In this representation, the fermion field can be split into two decoupled Majorana fields \begin{equation} \psi=\psi^M_1+i\psi^M_2. \end{equation} Each of these fields has two real components. We ignore the second Majorana field $\psi^M_2$ because it has identical equations to $\psi^M_1$. Writing $\psi^M_1=(\psi_1,\psi_2)^T$, the Euler-Lagrange equation becomes \begin{align} \label{fermion1} \partial_t\psi_1&=-\partial_x\psi_2+g\phi\psi_2,\\ \label{fermion2} \partial_t\psi_2&=-\partial_x\psi_1-g\phi\psi_1. 
\end{align} \subsection{Scalar field solutions} \label{scalar} The solutions discussed in this section were originally found in \cite{adam2019phi}. Let us consider static solutions of the scalar field first. The first interesting static solution is the kink-on-impurity $K_0$ given by $\phi_{K_0}=\tanh(x)$. The subscript $0$ indicates that it is bound to the impurity. As discussed before, the model was constructed such that this is still a solution of the field equations, as can be seen in Eq.~\ref{phiEL}. The second interesting static solution is the antikink. As the BPS property is preserved for antikink solutions, they consist of a family of solutions related by a generalized translational symmetry, as shown in \cite{adam2019phi}. These solutions $\phi_{AL}(x;x_0)$ are the full BPS antikink solutions. They can be parameterized by a coordinate $x_0$ and usually consist of a $\phi^4$ antikink $A$ and a lump $L$. They can be found by numerically integrating Eq.~\ref{BPS} with different initial conditions. We choose the parameter $x_0$ such that the initial condition is $\phi_{AL}(x_0;x_0)=0$. We fix $\lambda=-1.0$ for a reason that will be discussed shortly. The antikink solution symmetric around the origin is called the antikink-on-impurity $A_0$. It is given by $\phi_{A_0}(x)\equiv\phi_{AL}(x;0)$ and is shown in Fig.~\ref{antikink-lump} (dotted line). This solution resembles a kink at the origin surrounded by two symmetric antikinks. As we translate the antikink away from the origin, the solution becomes a translated antikink $A$ and a lump which stays near the origin, where the impurity is. This is shown in Fig.~\ref{antikink-lump} (dashed line). Finally, in the limit where the antikink is translated to plus or minus infinity, the solution consists of only a lump $\phi_{L^\pm}(x)=\phi_{AL}(x;\pm\infty)$, as shown in Fig.~\ref{antikink-lump} (solid line) for $L^-$. The two lump solutions $L^\pm$ differ by the property that $\phi_{L^+}(\pm\infty)=1$, while $\phi_{L^-}(\pm\infty)=-1$.
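To illustrate how the BPS family can be obtained in practice, the following minimal script integrates Eq.~\ref{BPS} for $\lambda=-1.0$ starting from $\phi(x_0)=0$; this is only a sketch (the grid sizes, tolerances, and helper names are illustrative choices, not the production code used for the figures):

```python
# Sketch: integrate the BPS equation  phi_x = -sqrt(2)*sigma(x) - (1 - phi^2),
# with the impurity sigma(x) = lambda / cosh(x)^2, starting from phi(x0) = 0.
import numpy as np
from scipy.integrate import solve_ivp

LAM = -1.0  # impurity strength lambda

def bps_rhs(x, phi):
    sigma = LAM / np.cosh(x) ** 2
    return -np.sqrt(2.0) * sigma - (1.0 - phi ** 2)

def bps_solution(x0, x_max=10.0, n=801):
    """Return (x, phi) for the BPS antikink phi_AL(x; x0) on [-x_max, x_max]."""
    xr = np.linspace(x0, x_max, n)    # integrate to the right of x0
    xl = np.linspace(x0, -x_max, n)   # and to the left (backward integration)
    right = solve_ivp(bps_rhs, (x0, x_max), [0.0], t_eval=xr,
                      rtol=1e-10, atol=1e-12)
    left = solve_ivp(bps_rhs, (x0, -x_max), [0.0], t_eval=xl,
                     rtol=1e-10, atol=1e-12)
    x = np.concatenate([xl[::-1], xr[1:]])
    phi = np.concatenate([left.y[0][::-1], right.y[0][1:]])
    return x, phi

x, phi = bps_solution(x0=0.0)  # antikink-on-impurity A_0
```

With $x_0=0$ this produces the antikink-on-impurity profile, interpolating between $\phi=1$ at $x\to-\infty$ and $\phi=-1$ at $x\to+\infty$.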
\begin{figure}[tbp] \centering \includegraphics[width=0.5\linewidth]{antikink-lump.pdf} \caption{BPS solutions of the scalar field. The solid line is the lump solution, the dashed line is an antikink to the left of a lump, and the dotted line is the antikink-on-impurity solution. This figure is a reproduction of results in \cite{adam2019phi}.} \label{antikink-lump} \end{figure} Next, we would like to build approximate composite solutions using the additive ansatz. These can be built from a solution of the complete field equations (such as $\phi_{K_0}$, $\phi_{A_0}$ and $\phi_{L^\pm}$) and a $\phi^4$ solution far from the origin. For instance, \begin{equation} \phi(x)=\phi_{K_0}(x)+\phi_A(x;x_0)+1,\label{additive ansatz} \end{equation} where $\phi_A=-\tanh(x-x_0)$ is the $\phi^4$ antikink solution $A$. This is an approximate solution only for $x_0\ll-1$. If one replaces $+1$ by $-1$ in the above equation, the condition changes to $x_0\gg1$. It is easy to see that Eq.~\ref{additive ansatz} solves the field equations up to an exponentially small overlap correction. The same is true if we add a boosted antikink $\phi_A(x,t;x_0,v)=-\tanh(\gamma(x-vt-x_0))$ to the kink-on-impurity, \begin{equation} \label{inicond1} \phi(x,t)=\phi_{K_0}(x)+\phi_A(x,t;x_0,v)+1, \end{equation} where again $x_0\ll-1$. We will discuss the evolution of this solution in the following sections. Using similar reasoning, we can approximate the BPS solution $\phi_{AL}(x;x_0)$ for $x_0\ll-1$ using the additive ansatz with a static antikink $\phi_A(x;x_0)$, \begin{equation} \phi_{AL}(x;x_0)\simeq\phi_{L^-}(x)+\phi_A(x;x_0)+1, \label{additive ansatz2} \end{equation} or with the boosted one $\phi_A(x,t;x_0,v)$, in which case the solution is close to the BPS regime for small $v$. Moreover, it is also possible to build solutions with the $\phi^4$ kink $K$ in the same way. In the following sections, we will consider the evolution of the aforementioned solutions treating the scalar field classically.
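Setting up the additive ansatz of Eq.~\ref{inicond1} on a grid is straightforward; a minimal sketch (the grid and parameter values are illustrative):

```python
# Sketch: initial data for a kink-on-impurity plus a boosted phi^4 antikink,
# phi(x, t) = tanh(x) - tanh(gamma*(x - v*t - x0)) + 1,  valid for x0 << -1.
import numpy as np

def additive_ansatz(x, t=0.0, x0=-10.0, v=0.3):
    gamma = 1.0 / np.sqrt(1.0 - v ** 2)
    arg = gamma * (x - v * t - x0)
    phi = np.tanh(x) - np.tanh(arg) + 1.0    # Eq. (inicond1)
    phi_t = v * gamma / np.cosh(arg) ** 2    # time derivative of the antikink piece
    return phi, phi_t

x = np.linspace(-40.0, 40.0, 4001)
phi, phi_t = additive_ansatz(x)
```

The field approaches the vacuum $\phi=1$ on both sides, crosses zero at the antikink position $x_0$, and only the boosted antikink contributes to $\partial_t\phi$ at $t=0$.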
\subsection{Fermion bound states} \label{fermion-bs} Now, let us study the fermion field in the presence of the static or boosted solutions of the scalar field discussed in the previous section. The bound states are found by solving Eqs.~\ref{fermion1} and \ref{fermion2}. First, we use the ansatz $\psi_1=\eta_+\cos(\omega t-\theta)$ and $\psi_2=\eta_-\sin(\omega t-\theta)$. After substituting one equation into the other, this gives two decoupled equations for $\eta_\pm$ \begin{equation} \label{schrodinger} -\partial^2_x\eta_\pm+g(g\phi^2\mp\partial_x\phi)\eta_\pm=\omega^2\eta_\pm, \end{equation} which are Schr\"{o}dinger-like equations with the potentials $V_\pm=g(g\phi^2\mp\partial_x\phi)$. These equations have well-known solutions for the $\phi^4$ kink and antikink, as shown in \cite{chu2008fermions,charmchi2014complete}. For instance, the fermion zero mode of the $\phi^4$ kink centered at $x_0$ is given by \begin{equation} \psi_1=\mathcal{N}\cosh^{-g}(x-x_0),\quad\psi_2=0, \end{equation} where $\mathcal{N}$ is a normalization constant. The full discrete spectrum is given by \begin{equation} \omega_n=\sqrt{n(2g-n)},\quad0\leq n<g,\quad n\in\{0,1,2,\dots\}. \end{equation} The fermion zero mode always exists for this model and the first excited state appears for $g\geq 1$. Therefore, we set $g\geq 1$ to include the first excited state of the kink in our analysis. The solutions can be boosted in the standard way. We set $x^\prime=\gamma(x-vt)$ and $t^\prime=\gamma(t-vx)$ together with \begin{align} \psi^\prime_1(x,t)&=\cosh(\chi/2)\psi_1(x^\prime,t^\prime)+\sinh(\chi/2)\psi_2(x^\prime,t^\prime),\\ \psi^\prime_2(x,t)&=\sinh(\chi/2)\psi_1(x^\prime,t^\prime)+\cosh(\chi/2)\psi_2(x^\prime,t^\prime), \end{align} where we defined the rapidity $\chi=\tanh^{-1}v$. For the other static solutions $\phi_{AL}(x;x_0)$, Eq.~\ref{schrodinger} is solved numerically as discussed below.
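As a concrete illustration of how Eq.~\ref{schrodinger} can be solved numerically, the sketch below diagonalizes a finite-difference discretization of $-\partial_x^2+V_+$ in the $\phi^4$ kink background and recovers the analytic spectrum $\omega_n=\sqrt{n(2g-n)}$ (the grid parameters are illustrative choices, not the production ones):

```python
# Sketch: finite-difference diagonalization of -d^2/dx^2 + V_+ with
# V_+ = g*(g*phi^2 - phi_x) for the phi^4 kink phi = tanh(x), phi_x = sech^2(x).
import numpy as np
from scipy.linalg import eigh_tridiagonal

def kink_fermion_spectrum(g=2.0, L=20.0, n=4000):
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    phi = np.tanh(x)
    v_plus = g * (g * phi ** 2 - 1.0 / np.cosh(x) ** 2)
    diag = 2.0 / h ** 2 + v_plus           # main diagonal of -d^2/dx^2 + V_+
    off = -np.ones(n - 1) / h ** 2         # off-diagonal of the FD Laplacian
    w2, _ = eigh_tridiagonal(diag, off, select='i', select_range=(0, 4))
    return w2  # lowest eigenvalues omega^2

w2 = kink_fermion_spectrum(g=2.0)
```

For $g=2.0$ this returns $\omega^2\simeq0$ and $\omega^2\simeq3$ for the two bound states, in agreement with $\omega_n^2=n(2g-n)$, with the (discretized) continuum starting near $\omega^2=g^2=4$.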
We studied the spectrum of the fermion field bound to the lump with $g\geq 1$ and found that there are bound states only if $\lambda<0$, as shown in Fig.~\ref{spectrum} for $g=2.0$. Therefore, we choose the negative value $\lambda=-1.0$. Moreover, for this value of $g$, the system coincides with the supersymmetric case and, similar to the discussion in \cite{adam2019bps}, the spectrum of the fermion bound to the lump is the same as the spectrum of scalar field perturbations around the lump shown in \cite{adam2019phi}. The fermion spectrum in the lump background has no bound states for $\lambda>0$ because $V_\pm$ has no minimum in this case. As we decrease $\lambda$ below zero, a minimum in $V_\pm$ appears together with a fermion bound state. Decreasing $\lambda$ further, we approach the limit where the lump becomes a kink-antikink pair, giving rise to a fermion zero mode and two degenerate discrete modes with $\omega^2=3$. The depth of the potential $V_\pm$ increases with $g$ and more fermion bound states appear accordingly. \begin{figure}[tbp] \centering \includegraphics[width=0.9\linewidth]{Spectrum-lump-and-antikink.pdf} \caption{Spectrum of the fermion field coupled to (a) the lump and (b) to the BPS antikink with $\lambda=-1.0$. We set $g=2.0$ (the supersymmetric case). The spectrum is identical to the scalar field spectrum in \cite{adam2019phi}, as expected.} \label{spectrum} \end{figure} The spectrum of the fermion bound to the BPS antikink can also be computed numerically. It is shown in Fig.~\ref{spectrum}(b) for $\lambda=-1.0$, again in the supersymmetric case $g=2.0$. This can also be compared with the spectrum of the perturbations of the scalar field shown in \cite{adam2019phi}. In the limit $x_0\to\pm\infty$, we see that the fermion spectrum approaches the values of the separate lump and $\phi^4$ antikink, which consist of the fermion zero mode and the first excited state bound to the $\phi^4$ antikink, together with the three discrete fermion states bound to the lump.
For $x_0\simeq 0$, the spectrum is slightly deformed. Notice that the highest excited state of the BPS antikink shown in Fig.~\ref{spectrum}(b) disappears into the continuum for small values of $x_0$, similarly to what happens in \cite{adam2019spectral}. This could have interesting effects and will be the subject of a future investigation, to be reported elsewhere. \section{Results} \label{Results} \subsection{Scalar field collisions} Now let us study collisions between defects of the scalar field, as done in \cite{adam2019phi}. It is necessary to repeat the computation here before including the fermion field; however, we will be brief. The details of the numerical integration are given in appendix \ref{ap1}. Here, we consider two types of collisions, one with and one without BPS interactions, among the ones investigated in \cite{adam2019phi}. The first type of collision is between an antikink and a lump. The initial condition for this collision is given by Eq.~\ref{additive ansatz2} with a boosted antikink, which occurs very close to the BPS regime, as discussed in section \ref{scalar}. For the parameter values used, the antikink passes smoothly through the lump, as can be seen in Fig.~\ref{phi-and-density}(a). The process can be written schematically as \begin{equation} \label{process1} A+L^-\to L^++A. \end{equation} Notice that after the antikink passes through the lump $L^-$, the lump becomes $L^+$ to match the boundary conditions. \begin{figure}[tbp] \centering \begin{subfigure}[b]{0.85\linewidth} \includegraphics[width=\linewidth]{phi.pdf} \end{subfigure} \begin{subfigure}[b]{0.85\linewidth} \includegraphics[width=\linewidth]{density.pdf} \end{subfigure} \caption{Upper graphs: Evolution of the scalar field during a collision between (a) an antikink and a lump and (b) an antikink and a kink-on-impurity. These two graphs reproduce the results of Figs. 14 and 24 in \cite{adam2019phi} with different parameters.
Lower graphs: Evolution of the fermion field during the background collision between (c) the antikink and the lump and (d) the antikink and the kink-on-impurity. Parameters are $g=2.0$, $v=0.3$ and $\lambda=-1.0$.} \label{phi-and-density} \end{figure} The next collision we consider is between an antikink and a kink-on-impurity. The initial condition for this collision is given by Eq.~\ref{inicond1}. The kink is tightly bound to the impurity for the chosen values of the parameters. Therefore, the antikink is reflected and the kink remains at the impurity after the collision, as shown in Fig.~\ref{phi-and-density}(b). This can be written as \begin{equation} \label{process2} A+K_0\to A+K_0. \end{equation} Reflection happens for high velocities only, while for small velocities the kink and the antikink annihilate or resonate. This behavior is reminiscent of the $\phi^4$ theory, and we will consider only the values of $v$ for which the antikink reflects because, otherwise, the system does not have a well-defined final state. In the following subsections, we will study the evolution of the fermion field in these two background collisions. \subsection{Bogoliubov Coefficients} We treat the fermion field quantum mechanically, in contrast with the scalar field, which is treated classically. Now let us discuss the formalism necessary to study the time evolution of the fermion field in the two scenarios discussed in the previous subsection. To do so, we consider a fermion field localized on a defect in the asymptotic past at $t=0$, before the collision, and find the fermion field evolution in time via the Bogoliubov coefficients $B_{j\to k}$ \begin{equation} \psi_{in}^j(t)=\sum_kB_{j\to k}(t)\psi_{out}^k(t), \end{equation} where the indices $j$ and $k$ specify the types of the defects present initially and after the collision, respectively, with the fermion field bound to them.
In the above equation, $\psi_{in}^j(t)$ is the initial fermion bound state evolved in time, initially localized on the defect of type $j$ in one of its associated bound states. Time evolution is performed by integrating the equations of motion. On the other hand, $\psi_{out}^k(t)$ is the final fermion state bound to the defect of type $k$ present after the collision at time $t$. Therefore, the coefficients are given by \begin{equation} B_{j\to k}(t)=(\psi_{out}^k(t),\psi_{in}^j(t))\equiv\int(\psi_{out}^k(t))^T\psi_{in}^j(t)dx. \end{equation} The interpretation of the Bogoliubov coefficient is that $(B_{j\to k})^2$ is the fraction of fermion number transferred to the state $k$ in time $t$, starting from state $j$ in the asymptotic past. This is shown, for example, in \cite{saffin2007particle}, where one can find more details regarding the Bogoliubov coefficients in this context. \subsection{Adiabatic Evolution} The numerical techniques employed here to evolve the fermion field are discussed in appendix \ref{ap1}. Let us first discuss the evolution of the scalar and fermion fields in the BPS case for small velocities. When the velocities are small, the scalar field evolves slowly and smoothly from one BPS state to the next and, thus, the evolution is adiabatic. Moreover, if the evolution is truly adiabatic, the fermion field should evolve smoothly from one bound state configuration to the next corresponding configuration as the BPS antikink moves in moduli space. We will specialize to the case where the fermion field starts in the zero mode of the BPS antikink. A typical collision in the BPS sector is shown in Fig.~\ref{phi-and-density}(a), for the scalar field, and in Fig.~\ref{phi-and-density}(c), for the fermion field, considering $v=0.3$. The plot of the fermion field shows the fermion density $n$ defined as \begin{equation} n=\psi_1^2+\psi_2^2. \end{equation} The adiabatic limit, however, occurs for $v\lesssim0.1$.
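Although the scheme actually used for the figures is described in appendix \ref{ap1}, a minimal explicit scheme for evolving Eqs.~\ref{fermion1} and \ref{fermion2} in a frozen kink background can be sketched as follows (RK4 in time, central differences in space; all discretization choices here are illustrative). A useful consistency check is that the zero mode bound to a static kink has $\omega=0$ and therefore should not evolve:

```python
# Sketch: RK4 time stepping of  psi1_t = -psi2_x + g*phi*psi2,
#                               psi2_t = -psi1_x - g*phi*psi1,
# with central spatial differences, in a frozen background phi = tanh(x).
import numpy as np

g = 2.0
x = np.linspace(-20.0, 20.0, 801)
h = x[1] - x[0]
phi = np.tanh(x)

def dx_central(f):
    d = np.zeros_like(f)
    d[1:-1] = (f[2:] - f[:-2]) / (2.0 * h)  # fields vanish at the boundaries
    return d

def rhs(psi):
    psi1, psi2 = psi
    return np.array([-dx_central(psi2) + g * phi * psi2,
                     -dx_central(psi1) - g * phi * psi1])

def rk4_step(psi, dt):
    k1 = rhs(psi)
    k2 = rhs(psi + 0.5 * dt * k1)
    k3 = rhs(psi + 0.5 * dt * k2)
    k4 = rhs(psi + dt * k3)
    return psi + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Initial condition: zero mode of the kink, psi1 ~ sech^g, psi2 = 0 (normalized).
psi0 = np.array([np.cosh(x) ** -g, np.zeros_like(x)])
psi0 /= np.sqrt(np.sum(psi0[0] ** 2 + psi0[1] ** 2) * h)

psi, dt = psi0.copy(), 0.4 * h
for _ in range(int(1.0 / dt)):
    psi = rk4_step(psi, dt)
```

Up to discretization error, the evolved field stays on the initial zero mode and the total fermion density is conserved.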
The difference between an adiabatic evolution and a nonadiabatic one (such as the one shown in Fig.~\ref{phi-and-density}) is that, in the adiabatic case, the passage of the antikink through the lump is smoother and the vibrational mode of the lump is not excited. Moreover, in this case, there is no fermion density near $x=0$ after the collision, meaning that the fermion field is not transferred from the zero mode to the first excited state, located at the lump. To show that the evolution is adiabatic for small velocities, we present snapshots of the evolution of the fields in the BPS sector for $v=0.02$ and $g=2.0$ in Fig.~\ref{adiabatic}. In Fig.~\ref{adiabatic}(a) we have snapshots of the scalar field configuration when the antikink crosses $x_0\simeq-5.0$, $-3.0$, $0.0$, $3.0$ and $5.0$, from left to right. Superimposed on the figure are dotted curves corresponding to the BPS antikink solution at these positions. The two curves are indistinguishable, corroborating the assumption that the evolution is adiabatic. In Fig.~\ref{adiabatic}(b) we show similar snapshots of the fermion density. As before, we superimpose the density of the fermion zero mode bound to the corresponding BPS antikinks and, again, the two curves are indistinguishable. Therefore, the fermion field also evolves adiabatically. This also means that computing the Bogoliubov coefficient from the initial fermion zero mode of the BPS antikink to the same mode at the new position gives exactly $1.0$ (within the numerical precision) during the whole evolution. \begin{figure}[tbp] \centering \includegraphics[width=0.9\linewidth]{adiabatic-snapshots.pdf} \caption{(a) Snapshots (solid) of the scalar field configuration during an adiabatic evolution of an antikink-lump collision with $v=0.02$ and $g=2.0$. Superimposed on the curves is the static BPS antikink solution (dotted); the two curves are indistinguishable. (b) Same as before for the fermion density.
The dotted curves are now the density of the fermion eigenstates coupled to the corresponding static BPS antikinks.} \label{adiabatic} \end{figure} \subsection{Relativistic Evolution} Now let us discuss the behavior of the fermion field in relativistic collisions. We initialize the fermion field in the fermion zero mode bound to the boosted $\phi^4$ antikink, denoted by $A$ as before. The result of the fermion field evolution can be used to compute Bogoliubov coefficients of the type $B_{A\to j}$, where $j$ is any defect present in the final state with an attached fermion bound state. The fermion density is plotted in Figs.~\ref{phi-and-density}(c) and (d) for a specific set of parameters. In (c), we observe the evolution for the process in Eq.~\ref{process1} and, in (d), for the process in Eq.~\ref{process2}. In both graphs $n$ is localized around the antikink before the collision, reflecting the initial condition chosen for the fermion field. After the collision, the density is split between the defects and, in general, the split is uneven. Interestingly, most of it is still localized on some defect, instead of in the bulk, similarly to what was found in \cite{gibbons2007fermions,saffin2007particle}. The ``amount'' of the fermion field transferred to each defect after the collision can be quantified by the Bogoliubov coefficients and varies with the parameters of the model. Moreover, after each collision, some density may be lost as radiation. We make the following definitions \begin{equation} B_{A\to A}\equiv\alpha,\quad B_{A\to K}\equiv\beta,\quad B_{A\to AE}\equiv\gamma,\quad B_{A\to KE}\equiv\delta,\quad B_{A\to L}\equiv\xi, \end{equation} where $K$ denotes the fermion zero mode bound to the $\phi^4$ kink, while adding $E$ means we are considering the first excited fermion bound state instead. Also, $L$ denotes the lowest fermion state bound to the lump. Then, we investigate how the Bogoliubov coefficients evolve with time $t$.
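In practice, each coefficient is evaluated as a grid approximation of the overlap integral $B_{j\to k}=\int(\psi_{out}^k)^T\psi_{in}^j\,dx$ between two-component (real) spinors; a minimal sketch, using $\phi^4$ zero modes as illustrative profiles:

```python
# Sketch: Bogoliubov coefficient as the overlap B = \int (psi_out)^T psi_in dx
# of two-component real spinors, evaluated as a Riemann sum on a uniform grid.
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

def normalize(psi):
    """Normalize a two-component spinor so that \int (psi1^2 + psi2^2) dx = 1."""
    return psi / np.sqrt(np.sum(psi[0] ** 2 + psi[1] ** 2) * dx)

def bogoliubov(psi_out, psi_in):
    """Overlap of two two-component spinors on the grid."""
    return float(np.sum(psi_out[0] * psi_in[0] + psi_out[1] * psi_in[1]) * dx)

def zero_mode(x0, g=2.0):
    """Fermion zero mode of a phi^4 kink centred at x0 (illustrative profile)."""
    return normalize(np.array([np.cosh(x - x0) ** -g, np.zeros_like(x)]))

b_self = bogoliubov(zero_mode(0.0), zero_mode(0.0))   # unit norm
b_shift = bogoliubov(zero_mode(0.0), zero_mode(2.0))  # partial overlap, < 1
```

By construction the self-overlap is $1$, while the overlap between zero modes of kinks at different positions is strictly between $0$ and $1$.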
However, we must be careful with the definition of the coefficients because the defects present before the collision may be different from the defects present after the collision. In particular, for the process in Eq.~\ref{process1}, the lump $L^-$ becomes $L^+$ after the collision. Thus, we define $\xi$ to be the amount of fermion number transferred from $A\to L^-$ before the collision and from $A\to L^+$ after the collision. For the process in Eq.~\ref{process2}, there is no confusion in the definitions because there are a kink(-on-impurity) and an antikink both before and after the collision. The evolution of the Bogoliubov coefficients with time is shown in Fig.~\ref{bogvst}(a) and (b) for the processes in Eqs.~\ref{process1} and \ref{process2}, respectively. The parameters are the same as in Fig.~\ref{phi-and-density}. We observe that, before the collision, the fermion is completely localized on the antikink, that is, $\alpha^2=1$ and the other coefficients are zero, due to our choice of initial conditions. During the collision, our analysis is not reliable due to the fact that one cannot separate different defects. After the collision, the coefficients rapidly reach a steady state, meaning that the fermion is now bound to the final defects. \begin{figure}[tbp] \centering \includegraphics[width=0.95\linewidth]{bogvst.pdf} \caption{Evolution of the Bogoliubov coefficients as a function of the final time $t_f$. (a) Corresponds to the antikink-lump collision and (b) to the collision between antikink and kink-on-impurity. Parameters are $v=0.3$, $g=2.0$ and $\lambda=-1.0$.} \label{bogvst} \end{figure} Some points should be noticed. First, the sine (or cosine) dependence in the ansatz (above Eq.~\ref{schrodinger}) of the fermion bound states means that the components of the fermion fields oscillate with phase $\omega t-\theta$. As $\omega\neq 0$ for $\xi$, $\gamma$ and $\delta$ we must be careful when we compute these Bogoliubov coefficients. 
If the fermion is in one of these states, the fermion field $\psi$ will oscillate with phase $\omega t-\theta_0$ for some unknown $\theta_0$. As we do not know the phase $\theta_0$, we compute the Bogoliubov coefficients by projecting the fermion field onto the bound state with a phase $\omega t-\theta$, where $\theta$ is fixed to an arbitrary constant. The two phases coincide only once in a full period. Thus, the coefficients oscillate with time and the amplitude of this oscillation should be taken as the real coefficient. Second, we also see an oscillation in $\beta^2$ in Fig.~\ref{bogvst}(b). This oscillation is accompanied by a negatively correlated oscillation in the amplitude of $\delta^2$. Observing Fig.~\ref{phi-and-density}(b) closely, this can be traced back to the oscillation of $K_0$ that occurs after the collision. This means that $\psi^K$ and $\psi^{KE}$ are not exact bound states of this oscillating $K_0$ and, therefore, there is a transition between the states, which is an interesting phenomenon. To clarify why this transition occurs, recall from section \ref{fermion-bs} that we know how to compute the Bogoliubov coefficients for two cases: static solutions and their boosts. After the collision, the kinks and lumps can also have their vibrational modes excited, but this effect is usually small and the Bogoliubov coefficients can still be computed with high accuracy neglecting it. However, in the collision between an antikink and the kink-on-impurity the final state is neither a static solution nor a boosted one; it is an oscillating kink. The deviation from the static kink solution is not small and cannot be neglected. Luckily, even an oscillating kink has a confining potential and the fermion density stays bound to it, with the difference that the fermion states bound to the static kink are not the exact bound states of the oscillating kink.
Hence, there appears a transition between the states and, consequently, an oscillation in the Bogoliubov coefficients. Now let us investigate the behavior of the final Bogoliubov coefficients as a function of the parameters of the model, $v$ and $g$. These parameters measure, respectively, the velocity of the incoming antikink and the strength of the coupling between the fermion and scalar fields. The results are shown in Figs.~\ref{fermion-antikink-lump} and \ref{fermion-antikink-kinkonimp} for the processes in Eqs.~\ref{process1} and \ref{process2}, respectively. \begin{figure}[tbp] \centering \includegraphics[width=0.9\linewidth]{bog-antikinklump.pdf} \caption{Bogoliubov coefficients versus $g$ for an antikink-lump collision with different values of $v$. We take $\lambda=-1.0$.} \label{fermion-antikink-lump} \end{figure} Consider the antikink-lump collision first. In Fig.~\ref{fermion-antikink-lump}(a) we see the amount of fermion number associated with the zero mode that stays bound to the antikink after the collision, $\alpha^2$. For small $v$, the collision is close to the BPS regime and most fermions stay in this mode. Moreover, in this case the system is closer to the adiabatic limit, where only the fermion zero mode is excited. On the other hand, as we increase $v$, i.e., move further from the BPS regime, more fermions are transferred to the excited state or the lump. This is quantified by $\gamma^2$ and $\xi^2$, shown in Figs.~\ref{fermion-antikink-lump}(b) and (c). Similarly, if we increase $g$, the fermions are more likely to be affected by the collision and be transferred to the lump, even near the BPS regime. Clearly, $\alpha^2$ must be negatively correlated with $\gamma^2$ and $\xi^2$, as shown in the figures. Finally, in Fig.~\ref{fermion-antikink-lump}(d) we plot the sum of the Bogoliubov coefficients in the previous graphs. The left columns of Table \ref{Bogul-sum} show some example values of the sum.
We find that close to the BPS regime the sum is equal to $1$, meaning that almost all fermions stay at the lowest bound states. Nevertheless, as we increase $v$ more fermions are lost as radiation or transferred to higher excited states, as expected intuitively. \begin{table} \centering \begin{tabular}{c c c|c c c} $v$ & $g$ & $\alpha^2+\gamma^2+\xi^2$ & $v$ & $g$ & $\alpha^2+\beta^2+\gamma^2+\delta^2$\\ \hline 0.1&1.0& 1.000 &0.3&1.0& 0.500\\ 0.1&2.0& 1.000 &0.3&2.0& 0.953\\ 0.1&3.0& 1.000 &0.3&3.0& 0.883\\ 0.1&4.0& 1.000 &0.3&4.0& 0.809\\ 0.1&5.0& 1.000 &0.3&5.0& 0.754\\ 0.2&1.0& 1.000 &0.4&1.0& 0.651\\ 0.2&2.0& 1.000 &0.4&2.0& 0.946\\ 0.2&3.0& 0.998 &0.4&3.0& 0.892\\ 0.2&4.0& 0.995 &0.4&4.0& 0.875\\ 0.2&5.0& 0.998 &0.4&5.0& 0.830 \end{tabular} \caption{The sum of the Bogoliubov coefficients for some values of $g$ and $v$. The left columns correspond to the BPS case and the right columns to the non-BPS case.} \label{Bogul-sum} \end{table} \begin{figure}[tbp] \centering \includegraphics[width=0.9\linewidth]{bog-antikinkkinkonimp.pdf} \caption{Bogoliubov coefficients versus $g$ for the collision between an antikink and the kink-on-impurity, with different values of $v$. We take $\lambda=-1.0$.} \label{fermion-antikink-kinkonimp} \end{figure} After this analysis, we could compare our results with a non-BPS case from other models such as $\phi^4$. The same analysis we did here was done for the $\phi^4$ model in \cite{gibbons2007fermions,saffin2007particle}. The main difference between the two results is that in the non-BPS case the initial fermion zero mode is much more likely to detach from the antikink and the coefficients are more sensitive to the parameters of the model. Moreover, more fermions are lost as radiation or transferred to higher excited states. Nevertheless, it is also relevant to study a non-BPS case within the same model. 
Therefore, to complete the analysis, we will now study the Bogoliubov coefficients for the collision between an antikink and the kink-on-impurity in our model. We will show that the results are similar to the ones found for the $\phi^4$ model in \cite{gibbons2007fermions,saffin2007particle}. The final Bogoliubov coefficients for a collision between an antikink and the kink-on-impurity are shown in Fig.~\ref{fermion-antikink-kinkonimp}. As mentioned before, we consider only $v\gtrsim 0.3$ because for smaller values of $v$ the kink and antikink annihilate and we do not have a well-defined final state. The curve for the coefficient $\alpha^2$ in Fig.~\ref{fermion-antikink-kinkonimp}(a) shows that, for a large interval of the parameters, the fermion does not stay in the zero mode bound to the antikink, in contrast with the BPS case. The behavior is approximately sinusoidal. The curves follow this behavior approximately, as argued in \cite{gibbons2007fermions} by considering the Dirac equation with an ansatz for the fermion field symmetric in $x$, with a time-dependent amplitude and phase, interacting with a scalar field approximated by its maximum value at the collision during a short time. In this simplified model, the authors discarded the bulk fermions and showed that, with these approximations, a sinusoidal behavior is expected. In Fig.~\ref{fermion-antikink-kinkonimp} the curve for $\alpha^2$ is negatively correlated with the sum $\beta^2+\delta^2$ shown in Fig.~\ref{fermion-antikink-kinkonimp}(b). We have plotted the sum instead of the two individual quantities because, as discussed before, there is a transition between the two states and, therefore, the separate quantities are not reliable. It is clear from the curves that varying the parameters of the model in the range considered can give very different results, meaning that the result is more sensitive to these parameters than in the BPS case.
The curves (a) and (b) for $v=0.3$ and small values of $g$ show unusual behavior, such as a jump in $\alpha^2$ near $g\simeq 2$. This can be traced back to the fact, also observed in \cite{gibbons2007fermions,saffin2007particle}, that for small $g$ the fermion bound states are too delocalized compared with the kink size and are more likely to escape when perturbed. In Fig.~\ref{fermion-antikink-kinkonimp}(c) we have $\gamma^2$, the fermion excited state bound to the antikink, which usually corresponds to a small fraction of the fermion number, but can go as high as $45\%$. Finally, in Fig.~\ref{fermion-antikink-kinkonimp}(d) we have the total probability of the fermions staying in the lowest bound states. Again, we show some values of the sum in Table \ref{Bogul-sum} for reference. The difference from unity is equal to the amount of fermion number that is transferred to higher excited states or radiated away. We observe that as we increase $v$ the difference from unity becomes larger (except for $g\lesssim 2.0$), as expected to happen when the energy of the system is increased, leading to the loss of a larger fraction of fermions in the form of radiation or excitation to higher states, as in the antikink-lump collision. On the other hand, in the antikink-lump collision close to the BPS regime with small $v$, almost no radiation is produced and higher states are not excited. \section{Conclusion} \label{conclusion} The main goal of this work is to compare the fermion transfer between solitons when these solitons collide in BPS and non-BPS cases. In order to do this, we added a fermion field and a Yukawa interaction to a model recently proposed in the literature \cite{adam2019phi} that consists of the $\phi^4$ model with a half-BPS preserving impurity.
This model contains different defects that may interact in a BPS or a non-BPS way, and it is interesting because it may serve as a guide for higher-dimensional soliton interactions where, contrary to the $(1+1)$-dimensional case, there might be multi-soliton BPS solutions. The same is true for our work: it may also serve as a guide for higher-dimensional cases. We discussed the spectrum of the defects of the model. In particular, we showed that the lump has fermion bound states only for $\lambda<0$ and that the spectrum of the BPS antikink approaches the separate spectra of the lump and the $\phi^4$ antikink for large positions in moduli space. As one expects, the spectrum of the fermion field is similar to the spectrum of scalar field excitations in the supersymmetric limit. Then, we computed the time evolution of the scalar and fermion fields for two scenarios: a collision between an antikink and a lump and between an antikink and the kink-on-impurity. In both cases, the fermion field is initially bound to the antikink in the zero mode. We found that after the collision, when the defects separate, most of the fermion density is found at the defects and not in the bulk, meaning that the fermion stays bound to the defects even after the collision. Moreover, in the special case of non-relativistic velocities, the BPS collision evolves adiabatically, meaning that the scalar field is always in a BPS antikink configuration, slowly evolving in moduli space with time, while at each instant the fermion field lies exactly in its respective zero mode. We quantified fermion transfer between solitons through the computation of Bogoliubov coefficients similar to the ones studied in \cite{gibbons2007fermions,saffin2007particle}. In most cases, after the collision the Bogoliubov coefficients reach a constant value, which quantifies the probability that the fermion is transferred from one state to another.
We found that close to the BPS case most fermions stay localized on the initial soliton, except for high values of the coupling constant $g$. Moreover, as the initial velocity $v$ increases, the system moves further away from the BPS regime and more fermions are transferred to the other defect and to higher excited states, or are lost as radiation. On the other hand, for the non-BPS cases the fermions are much more likely, and in a larger amount, to be transferred to the other defect or to excited states, and the coefficients are more sensitive to the parameters of the model. An interesting continuation of our work would be to include the fermion back-reaction on the defects. The soliton collisions, as well as the soliton shapes, would then be altered. This would make the analysis based on the Bogoliubov coefficients less straightforward. However, we expect that some of our main results should be maintained. We plan to investigate this in a future work.
\section{Introduction} Homological stability is the following question: Given an infinite series of groups $G_n$, such as the general linear groups $\Gl_n$, we consider the sequence of inclusions \[ G_1 \hookrightarrow G_2 \hookrightarrow G_3 \hookrightarrow \cdots. \] Then, if we apply group homology of a fixed degree, does the corresponding sequence of homology modules stabilise eventually? This is an old question and there are many interesting results for various series of classical groups, usually over rings of finite stable rank. An overview of results in this area can be found in \cite[Chapter 2]{Knu:HLG:01} and we will also provide references to the best known results for specific series of groups. Although the method of proof is usually based on a common idea, the action of the larger group on a highly connected simplicial complex, all proofs known to the author are tailored to specific series of groups. In this paper, we present a general method to prove homological stability, valid for all groups with weak spherical Tits systems, that is, groups acting strongly transitively on possibly weak spherical buildings. We then use this method to prove homological stability for various series of classical groups over division rings, usually improving the stability range previously known for larger classes of rings. The method is based on the observation that the simplicial complexes used by Charney in \cite{Cha:HSD:80} and \cite{Cha:gtV:87} and by Vogtmann in \cite{Vog:HSO:79} and \cite{Vog:SPH:81} are closely related to the theory of buildings --- they are the \emph{opposition complexes} studied by von Heydebreck in \cite{vH:HPC:03}. The opposition complexes admit Levi subgroups as vertex stabilisers. Using these complexes, we construct a spectral sequence involving relative group homology of Levi subgroups. 
For the groups we consider, the Levi subgroups split as direct or semidirect products of smaller groups, both of the series of groups we consider and, interestingly, of \emph{general linear groups}. Using strong stability results for general linear groups, we can hence prove stability results for various series of groups. This spectral sequence can probably also be used to show low-dimensional homological stability for groups of types $E_6$, $E_7$ and $E_8$. Additionally, one could try to compare group homology of groups of different types using this method. Finally, homological stability results for all reductive algebraic groups should be possible, albeit with a rather weak stability range. \paragraph{Homological stability results} The method outlined above has originally been used by Charney in \cite{Cha:HSD:80} to prove homological stability of general and special linear groups, but yielding a comparatively weak stability range. For special linear groups, however, it is an interesting observation that terms involving general linear groups appear in the spectral sequence. This allows us to apply a strong theorem by Sah in \cite{Sah:HcL:86} on homological stability for general linear groups to prove homological stability for special linear groups. \begin{SlnTheorem}[Homological stability of special linear groups] If $D$ is an infinite field, then $n\geq 2k-1$ implies \[ H_k(\Sl_{n+1}(D),\Sl_{n}(D);\mathds Z)=0. \] \end{SlnTheorem} \noindent For fields of characteristic zero, there is a far better result by Hutchinson and Tao in \cite{HT:HSS:08} with stability range $n\geq k$. Up to now, the best result known to the author applicable to other infinite fields is a result by van der Kallen in \cite{vdK:HSL:80} for rings with stable rank $1$. It guarantees a stability range of $n\geq 2k$. Vogtmann originally used a version of the construction in this paper to prove homological stability for orthogonal and symplectic groups in \cite{Vog:HSO:79} and \cite{Vog:SPH:81}. 
Here, we investigate the general situation of unitary groups associated to a hermitian form of Witt index $n+1$ on a vector space $V$. This vector space then splits non-canonically as an orthogonal sum of a hyperbolic module $\mathcal H_{n+1}$ and an anisotropic complement $W$. We consider the unitary group induced on the subspace $\mathcal H_n\perp W$ and ask for homological stability. Again, the spectral sequence we consider has terms involving the relative homology of general linear groups. We can hence apply Sah's theorem again to obtain \begin{UnTheorem}[Homological stability of unitary groups] For a division ring $D$ with infinite centre, the relative homology modules \[ H_k\bigl(\U(\mathcal H_{n+1}\perp W), \U(\mathcal H_n\perp W);\mathds Z\bigr) \] vanish for $n\geq 2$ if $k=1$ and for $n\geq k\geq 2$. If the centre of $D$ is finite, relative homology vanishes for $n\geq 2k$. \end{UnTheorem} \noindent This is an improvement over the results by Mirzaii and van der Kallen in \cite{MaB:HSU:02} and \cite{Mir:HSU:05}, where homological stability for unitary groups with stability range $n\geq k+1$ has already been proved. Their result is valid for a much larger class of rings, namely local rings with infinite residue field, but only for the case of maximal Witt index, that is for $W=\{0\}$. The following strong result can also be proved using this method. \begin{SOnTheorem}[Homological stability of special orthogonal groups] For an infinite field $D$, we have \[ H_k\bigl(\SO_{n+1,n+1}(D),\SO_{n,n}(D);\mathds Z\bigr) =0 \] for $n\geq 2$ if $k=1$ and for $n\geq k\geq 2$. If $D$ is a finite field, then the relative homology groups vanish for $n\geq 2k$. \end{SOnTheorem} \paragraph{The construction of a relative spectral sequence} We give an outline of the method used to prove these results and we state the main theorem. 
Consider a group $G$ with a weak spherical Tits system of rank \mbox{$n+1$}, contained in an infinite series of groups for which we prove homological stability. We enumerate the type set $I=\{i_1,\ldots,i_{n+1}\}$ arbitrarily. For $1\leq p\leq n+1$, denote by $L_p$ certain Levi subgroups of $G$ of type $I\backslash\{i_p\}$. For the applications discussed in the previous section, the group $G$ is of type $A_{n+1}$ or $C_{n+1}$ with a linear ordering of the type set. The resulting Coxeter diagrams of $G$ and $L_p$ are illustrated in the following picture. \begin{center} \begin{tikzpicture}[font=\small] \node (G) at (-1,.8) {$G$}; \node (L) at (-1,0) {$L_p$}; \foreach \y in {0,.8} { \foreach \x in {0,1,2,3,4,6,7,8,9,10} { \fill (\x,\y) circle (.7mm);} \draw (1,\y) -- (2,\y); \draw[dotted] (2,\y) -- (3,\y); \draw (3,\y) -- (4,\y); \draw (6,\y) -- (7,\y); \draw[dotted] (7,\y) -- (8,\y); \draw (8,\y) -- (10,\y); \draw (0,\y + .05) -- (1,\y + .05); \draw[dashed] (0,\y - .05) -- (1,\y -.05); } \draw (4,.8) -- (6,.8); \fill (5,.8) circle (.7mm); \node (0) at (0,-.6) {$1$}; \node (1) at (1,-.6) {$2$}; \node (3) at (4,-.6) {$p-1$}; \node (4) at (5,-.63) {$p$}; \node (5) at (6,-.6) {$p+1$}; \node (7) at (9,-.62) {$n$}; \node (8) at (10,-.6) {$n+1$}; \end{tikzpicture} \end{center} We choose a subgroup $G'\leq L_{n+1}$ of type $I\backslash \{i_{n+1}\}$ and write $L'_p=L_p\cap G'$. Again, in the concrete applications, we have the following situation. 
\begin{center} \begin{tikzpicture}[font=\small] \node (G) at (-1,.8) {$G'$}; \node (L) at (-1,0) {$L'_p$}; \foreach \y in {0,.8} { \foreach \x in {0,1,2,3,4,6,7,8,9} { \fill (\x,\y) circle (.7mm);} \draw (1,\y) -- (2,\y); \draw[dotted] (2,\y) -- (3,\y); \draw (3,\y) -- (4,\y); \draw (6,\y) -- (7,\y); \draw[dotted] (7,\y) -- (8,\y); \draw (8,\y) -- (9,\y); \draw (0,\y + .05) -- (1,\y + .05); \draw[dashed] (0,\y - .05) -- (1,\y -.05); } \draw (4,.8) -- (6,.8); \fill (5,.8) circle (.7mm); \node (0) at (0,-.6) {$1$}; \node (1) at (1,-.6) {$2$}; \node (3) at (4,-.6) {$p-1$}; \node (4) at (5,-.63) {$p$}; \node (5) at (6,-.6) {$p+1$}; \node (7) at (9,-.6) {$n$}; \node (empty) at (10,0) {\phantom{$n+1$}}; \end{tikzpicture} \end{center} Using a filtration of the opposition complex by types of vertices, we construct two exact chain complexes of $G$- and $G'$-modules. From these chain complexes, we obtain a spectral sequence involving relative homology of Levi subgroups with coefficient modules $M_p$, which are top-dimensional homology modules of opposition complexes of type $\{i_1,\ldots,i_{p-1}\}$, except for $M_1=\mathds Z$. \begin{RelSpecTheorem}[Relative spectral sequence] There is a spectral sequence with first page \[ E^1_{p,q}=\begin{cases} H_q(G,G';\mathds Z) & p=0 \\ H_q(L_p,L'_p;M_p) & 1\leq p \leq n\\ H_q(L_{n+1},G';M_{n+1}) & p=n+1 \end{cases} \] which converges to zero. \end{RelSpecTheorem} This can be used to prove homological stability for groups of type $A_{n+1}$ and $C_{n+1}$ in the following way: We want to prove that $H_q(G,G';\mathds Z)$ vanishes for all $q$ smaller than a given $k$. Hence we must show that $H_q(L_p,L'_p;M_p)$ vanishes for $p+q\leq k+1$. For $2\leq p\leq n-1$ the Levi subgroups, having disconnected diagrams, usually split as direct or semidirect products of two groups whose types belong to the connected components of the diagrams. 
\begin{center} \begin{tikzpicture}[font=\small] \node (Qp) at (-1,1.6) {$Q_p$}; \node (K) at (-1,0.8) {$K_p$}; \node (Kp) at (-1,0) {$K'_p$}; \foreach \x in {0,1,2,3,4} { \fill (\x,1.6) circle (.7mm);} \draw (1,1.6) -- (2,1.6); \draw[dotted] (2,1.6) -- (3,1.6); \draw (3,1.6) -- (4,1.6); \draw (0,1.6 + .05) -- (1,1.6 + .05); \draw[dashed] (0,1.6 - .05) -- (1,1.6 -.05); \foreach \y in {0,.8} { \foreach \x in {6,7,8,9} { \fill (\x,\y) circle (.7mm);} \draw (6,\y) -- (7,\y); \draw[dotted] (7,\y) -- (8,\y); \draw (8,\y) -- (9,\y); } \draw (9,.8) -- (10,.8); \fill (10,.8) circle (.7mm); \node (0) at (0,-.6) {$1$}; \node (1) at (1,-.6) {$2$}; \node (3) at (4,-.6) {$p-1$}; \node (4) at (5,-.63) {$p$}; \node (5) at (6,-.6) {$p+1$}; \node (7) at (9,-.62) {$n$}; \node (8) at (10,-.6) {$n+1$}; \end{tikzpicture} \end{center} This means that there are groups $Q_p$, $K_p$, $K'_p$ of types $\{i_1,\ldots,i_{p-1}\}$, $\{i_{p+1},\ldots,i_{n+1}\}$ and $\{i_{p+1},\ldots,i_n\}$, respectively, such that $L_p=Q_p\ltimes K_p$ and $L'_p=Q_p\ltimes K'_p$. The modules $M_p$ are constructed in such a fashion that the groups $K_p$ and $K'_p$ act trivially on $M_p$. If we know that relative \emph{integral} homology of the subgroups $K_p$ and $K'_p$ vanishes, we can produce zeroes in the spectral sequence $E^1_{p,q}$ by using a relative Lyndon\slash Hochschild-Serre spectral sequence. The structure of the Levi subgroups and the corresponding semidirect product decompositions depend on the specific series of groups, but note that we always need relative integral homology of groups of type $A_*$! For special linear groups over fields and for unitary groups over division rings, these subgroups of type $A_*$ are general linear groups. Hence, as mentioned above, we can use strong results on the homological stability of general linear groups to obtain homological stability of these series of groups. 
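\noindent In terms of the relative Lyndon\slash Hochschild-Serre spectral sequence constructed in the first part, this step can be sketched as follows: the subgroup $K_p$ is normal in $L_p=Q_p\ltimes K_p$ with quotient $Q_p\cong L'_p/K'_p$, it acts trivially on $M_p$, and $M_p$ is $\mathds Z$-free (being a top-dimensional homology module of a simplicial complex), so there is a convergent spectral sequence \[ L^2_{a,b} = H_a\bigl(Q_p;H_b(K_p,K'_p;\mathds Z)\otimes_\mathds Z M_p\bigr) \Rightarrow H_{a+b}(L_p,L'_p;M_p). \] In particular, if $H_b(K_p,K'_p;\mathds Z)$ vanishes for all $b\leq q$, then every term contributing to total degree $q$ vanishes, and hence $H_q(L_p,L'_p;M_p)=0$.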
Variations of this method using an appropriate type filtration can probably be used to prove homological stability results for different series of reductive groups. As mentioned above, this could be used to study groups of type $E_6$, $E_7$ or $E_8$, or compare group homology of groups of different types. In particular, relations between algebraic K-theory and hermitian K-theory could also be studied by choosing a different type enumeration of a group of type $C_{n+1}$, forcing $G'$ to be of type $A_n$ instead of type $C_n$. \paragraph{} This paper is structured as follows. In the first part, we introduce the required concepts very briefly and give a general procedure to construct relative spectral sequences. At the end of this section, we apply this procedure to obtain a relative Lyndon\slash Hochschild-Serre spectral sequence, which is required to decompose homology of Levi subgroups later on. In the second part, we introduce the groups we work with and their associated opposition complexes. We construct an exact chain complex, analogously to the construction of cellular homology, and use it to prove Theorem \ref{th:stability_pair_spectral_sequence}. Finally, in the third part, we consider explicit series of groups, decompose the Levi subgroups appropriately and apply Theorem \ref{th:stability_pair_spectral_sequence} to prove homological stability inductively. \paragraph{} The author would like to thank Linus Kramer for many discussions, suggestions and encouragement, as well as Ruth Charney and Karen Vogtmann for their help. We also thank Stefan Witzel for very useful discussions. Proposition \ref{prop:filtrated_opposition_complexes_are_spherical} is largely due to him. The author was supported by the \selectlanguage{ngerman}\emph{Graduiertenkolleg: \glqq Analytische Topologie und Me\-ta\-geo\-me\-trie\grqq}\selectlanguage{english} while working on this topic. This work is part of the author's doctoral thesis \cite{Ess:BGL:10} at the Universität Münster. 
\section{Homology} The aim of this part is to give a general construction for relative spectral sequences. Using this, we will construct a relative Lyndon\slash Hochschild-Serre spectral sequence. The relative construction will also be applied in the second part to prove the main theorem. We will assume the reader to be familiar with the concept of spectral sequences. For a textbook on this topic, see \cite{McC:UGS:01}. A very good brief introduction to the subject is \cite{Cho:Ych:06}. \subsection{Group homology} We give a very brief definition of group homology to introduce the terminology. The standard reference is of course \cite{Bro:CoG:82}. Throughout this section, let $G$ be any group; we write $\mathds Z G$ for the group ring over $G$, and we define a \emph{$G$-module} to be a left module over $\mathds Z G$. Tensor products $M\otimes N$ of modules are only defined if $M$ is a right module and $N$ is a left module over a common ring. We want to form tensor products of left modules over the group ring $\mathds Z G$. Note that we can canonically make any left $\mathds Z G$-module $M$ into a right $\mathds Z G$-module by setting \[ mg := (g^{-1})m \quad\text{for any}\quad m\in M, g\in G. \] As in \cite{Bro:CoG:82}, using this construction, we can define tensor products of left $G$-modules $M$ and $N$, denoted by $M\otimes_G N$. \begin{Definition} Associated to a group $G$, consider the modules $F_n(G)$ which are the free abelian groups over $(n+1)$-tuples of elements of $G$. We define the boundary map \[ \partial(g_0,g_1,\ldots,g_n) = \sum_{k=0}^n (-1)^k(g_0,\ldots,\hat g_k,\ldots,g_n), \] where, as usual, the hat $\hat g_k$ indicates omitting that entry in the tuple. We call the associated exact $G$-chain complex of the form \[ \cdots \rightarrow F_2(G)\rightarrow F_1(G) \rightarrow F_0(G) \rightarrow \mathds Z \rightarrow 0 \] the \emph{standard resolution of $\mathds Z$ over $\mathds Z G$}.
For any $G$-module $M$, the \emph{group homology $H_*(G;M)$} is defined to be the homology of the chain complex $F_*(G) \otimes_G M$. \end{Definition} \begin{Remarks} Any group monomorphism $G'\rightarrow G$ induces a chain map $F_*(G')\rightarrow F_*(G)$ which is injective on each chain module. For any other free (even projective) resolution $F_*$ of $\mathds Z$ over $\mathds Z G$, we also have $H_*(G;M)\cong H_*(F_* \otimes_G M)$. We will frequently use the following observation: If $G'\leq G$ and $F_*$ is a free resolution of $\mathds Z$ over $\mathds Z G$, then it is also a free resolution of $\mathds Z$ over $\mathds Z G'$. \end{Remarks} \noindent The introduction of relative homology simplifies the formulation of homological stability considerably. \begin{Definition} Consider a group $G$ and a subgroup $G'\leq G$. Let $M$ be a $G$-module. The \emph{relative group homology} $H_*(G,G';M)$ is defined to be the homology of the quotient complex $F_*(G)\otimes_G M / F_*(G') \otimes_{G'} M$. \end{Definition} \noindent Note that there is canonically an associated long exact sequence of the form \[ \cdots \rightarrow H_k(G';M) \rightarrow H_k(G;M) \rightarrow H_k(G,G';M) \rightarrow H_{k-1}(G';M) \rightarrow \cdots \] \begin{Proposition}\label{prop:relative_homology_of_subgroups} Let $G'$ and $H$ be two subgroups of a given group $G$, write $H'=G'\cap H$. Then we have \[ H_*\Bigl( \frac{F_*(G)\otimes_H M}{F_*(G')\otimes_{H'}M}\Bigr) \cong H_*\Bigl( \frac{F_*(H)\otimes_H M}{F_*(H')\otimes_{H'}M}\Bigr) = H_*(H,H';M). \] \end{Proposition} \begin{Proof} Consider the following diagram: \[ \xymatrix{ 0\ar[r] & F_*(H') \otimes_{H'} M\ar[r]\ar[d] & F_*(H) \otimes_H M\ar[r]\ar[d] & \frac{F_*(H)\otimes_H M}{F_*(H')\otimes_{H'}M}\ar[r]\ar[d] & 0 \\ 0\ar[r] & F_*(G') \otimes_{H'} M\ar[r] & F_*(G) \otimes_H M\ar[r] & \frac{F_*(G)\otimes_H M}{F_*(G')\otimes_{H'}M}\ar[r] & 0 } \] Here, the top row is exact by construction. The bottom row is exact since $H'=G'\cap H$. 
The vertical arrows are all induced by the inclusions. If we now consider the associated long exact sequences on homology, the first and second vertical arrows induce isomorphisms by the remark above. By the 5-lemma, we get the desired isomorphism. \end{Proof} \noindent The following lemma is a special case of \cite[Theorem 6.1.12]{Wei:IHA:94}. \begin{Lemma}\label{l:trivial_coefficient_module} Let $G$ be a group and let $M$ be a trivial and $\mathds Z$-free $G$-module. Then we have \[ H_*(G;M) \cong H_*(G;\mathds Z) \otimes_\mathds Z M. \] \end{Lemma} \begin{Proof} Since $M$ is trivial, we have $F_k(G) \otimes_G M \cong F_k(G) \otimes_G \mathds Z \otimes_\mathds Z M$. Since $M$ is $\mathds Z$-free, it is in particular $\mathds Z$-flat and the functor $(-\otimes_\mathds Z M)$ is exact, which proves the result. \end{Proof} \noindent We will also need the following simple lemma on relative $H_0$. \begin{Lemma}\label{l:relative_h0} For any group $G$, any subgroup $G'$ and any $G$-module $M$, we have \[ H_0(G,G';M) = 0. \] \end{Lemma} \begin{Proof} It is a well-known fact that $H_0(G;M) = \mathds Z \otimes_G M$ and $H_0(G';M)= \mathds Z \otimes_{G'} M$. Now by the long exact sequence we have \[ H_0(G,G';M) \cong (\mathds Z \otimes_G M) / i_*(\mathds Z \otimes_{G'} M), \] where $i_*$ is the map $z \otimes_{G'} m \mapsto z \otimes_G m$. This map is clearly surjective, so the quotient vanishes. \end{Proof} \subsection{The mapping cone chain complex} We briefly recall the definition of the mapping cone complex. Let $(C_*,\partial^C)$, $(C'_*,\partial^{C'})$ be two chain complexes and let $\varphi_*:C'_*\rightarrow C_*$ be a chain map. The \emph{mapping cone chain complex $\Cone_*(\varphi)$} has the modules $\Cone_k(\varphi) = C'_{k-1} \oplus C_k$ with differential \[ \partial^{\Cone}_k (c' + c) \mathrel{\mathop :}= - \partial^{C'}_{k-1}(c') + \varphi_{k-1}(c') + \partial^C_k(c).
\] \noindent As for relative homology, there is an associated long exact sequence \[ \cdots \rightarrow H_k(C') \rightarrow H_k(C) \rightarrow H_k(\Cone(\varphi)) \rightarrow H_{k-1}(C') \rightarrow \cdots. \] \begin{Lemma}[1.5.8 in \cite{Wei:IHA:94}]\label{l:mapping_cone_relative_homology} If $\varphi: C'\rightarrow C$ is the inclusion of subcomplexes $C'\subseteq C$, then \[ H_*(\Cone(\varphi)) \cong H_*(C/C'). \] \end{Lemma} \subsection{The spectral sequences associated to a double complex}\label{ssec:ss_double_complex} In this section, we will recall the definition of the spectral sequence associated to a double complex. Using this, we will construct a relative spectral sequence to be used throughout the paper. First of all, remember that a \emph{double complex $D_{p,q}$} is a bi-graded module with horizontal and vertical differentials $\partial^h:D_{p,q}\rightarrow D_{p-1,q}$ and $\partial^v:D_{p,q}\rightarrow D_{p,q-1}$ such that \[ \partial^v\circ \partial^h - \partial^h\circ \partial^v = (\partial^v)^2 = (\partial^h)^2= 0. \] Associated to a double complex is the \emph{total complex}, defined to be \[ \Tot(D_{p,q})_k \mathrel{\mathop :}= \bigoplus_{p+q=k} D_{p,q} \] with differential induced by \[ \partial^{\Tot}(d_{p,q}) = \partial^h(d_{p,q}) + (-1)^p \partial^v(d_{p,q}) \] The homology of the total complex can be calculated via two spectral sequences. \begin{Proposition}[\cite{Bro:CoG:82}, VII.3]\label{prop:two_spectral_sequences} There are two spectral sequences both converging to the homology of the total complex \[ \begin{array}{l} E^1_{p,q} = H_q((D_{*,p},\partial^h)) \\ L^1_{p,q} = H_q((D_{p,*},\partial^v)) \end{array} \Rightarrow H_{p+q}(\Tot(D)_*). \] The differentials on the first page are up to sign each induced by the other differential, respectively. \end{Proposition} \noindent The simplest example of a double complex arises in the following situation: Let $(F_*,\partial^F)$ and $(C_*,\partial^C)$ be two chain complexes of $G$-modules. 
Then $D_{p,q}=F_p\otimes_G C_q$ with the differentials $(\partial^F \otimes_G\id)$ and $(\id\otimes_G \partial^C)$ is a double complex. The total complex of this double complex coincides with the \emph{tensor product of chain complexes $(F\otimes_G C)_*$}: \[ \Tot(D_{p,q})_k = (F\otimes_G C)_k \mathrel{\mathop :}= \bigoplus_{p+q=k} F_p \otimes_G C_q \] with differential induced by \[ \partial^{F\otimes_G C}(f_p \otimes_G c_q) = \partial^F(f_p) \otimes_G c_q + (-1)^p f_p \otimes_G \partial^C(c_q). \] \noindent Proposition \ref{prop:two_spectral_sequences} specialises to the following corollary. \begin{Corollary}\label{cor:two_spectral_sequences_for_tensor_product} There are two spectral sequences both converging to the homology of the tensor product complex \[ \begin{array}{l} E^1_{p,q} = H_q(F_* \otimes_G C_p) \\ L^1_{p,q} = H_q(F_p \otimes_G C_*) \end{array} \Rightarrow H_{p+q}(F\otimes_G C). \] The differentials on the first page are induced by $\pm(\id\otimes_G\partial^C)$ and $(\partial^F\otimes_G\id)$, respectively. \end{Corollary} Note that, if $F_*$ is a chain complex of free modules, each module $F_p$ is $\mathds Z G$-free and hence $\mathds Z G$-flat, and we obtain \[ L^2_{p,q} = H_p(F_* \otimes_G H_q(C)) \Rightarrow H_{p+q}(F\otimes_G C). \] In particular, if $F_*=F_*(G)$ is the standard resolution of $\mathds Z$ over $\mathds Z G$ and if the chain complex $C_*$ is an exact complex of $G$-modules, we obtain \begin{Corollary}\label{cor:ss_exact_coefficients} Let $G$ be a group and let $C_*$ be an exact chain complex of $G$-modules. Then there is a spectral sequence \[ E^1_{p,q} = H_q(G;C_p) \Rightarrow 0 \] which converges to zero. 
\end{Corollary} \noindent We will later also use transposed double complexes as follows: We denote the \emph{transposed tensor complex} by $F \otimes^T_G C$: \[ (F\otimes^T_G C)_k \mathrel{\mathop :}= \bigoplus_{p+q=k} F_p \otimes_G C_q \] with differential induced by \[ \partial^{F\otimes^T_G C}(f_p \otimes_G c_q) = (-1)^q \partial^F(f_p) \otimes_G c_q + f_p \otimes_G \partial^C(c_q). \] This is obviously the total complex of the double complex $(F_q \otimes_G C_p)_{p,q}$. The proof of the following lemma is a simple calculation. \begin{Lemma}\label{lem:transposed_double_complex} The chain map $F\otimes_G C\rightarrow F\otimes^T_G C$ given on the basis by \[ f_p \otimes_G c_q \mapsto (-1)^{pq} f_p \otimes_G c_q \] induces isomorphisms on homology $H_*(F\otimes_G C) \cong H_*(F\otimes^T_G C)$. \end{Lemma} \subsection{The Lyndon\slash Hochschild-Serre spectral sequence} The Lyndon\slash Hochschild-Serre spectral sequence is a well-known tool in group homology theory. Nevertheless, we give a short summary of its construction in \cite[VII.6]{Bro:CoG:82} since we will need this in the following section. Consider an exact sequence of groups \[ 1\rightarrow H \rightarrow G \rightarrow Q \rightarrow 1. \] Write $F_*(G)$ for the standard resolution of $\mathds Z$ over $\mathds Z G$; this is a free resolution of $\mathds Z$ over $\mathds Z H$ as well. Let $M$ be a $G$-module. It is not difficult to see that \[ F_*(G) \otimes_G M = ( F_*(G) \otimes_H M)\otimes_Q \mathds Z. \] Now set $C_* = (F_*(G) \otimes_H M)$ and consider the standard resolution $F_*(Q)$ of $\mathds Z$ over $\mathds Z Q$. By Corollary \ref{cor:two_spectral_sequences_for_tensor_product}, applied to the tensor product $F(Q) \otimes_Q C$, there are two spectral sequences \[ \begin{array}{l} E^1_{p,q} = H_q(Q; C_p) \\ L^1_{p,q} = H_q(F_p(Q) \otimes_Q C_* ) \end{array} \Rightarrow H_{p+q}(F(Q)\otimes_Q C).
\] Note first of all that $F_p(Q)$ is a free $Q$-module, so the functor $(F_p(Q) \otimes_Q -)$ is exact and we obtain \[ L^2_{p,q} = H_p(Q;H_q(C)) = H_p(Q;H_q(H;M)). \] Now note that $\mathds Z G \otimes_H M \cong \mathds Z (G/H) \otimes_\mathds Z M = \mathds Z Q \otimes_\mathds Z M$. Hence $H_q(Q; \mathds Z G \otimes_H M)=0$ for $q>0$ by \cite[III.5.7]{Bro:CoG:82}, and therefore also $H_q(Q;C_p)=0$ for all $p$ and all $q>0$, since $F_p(G)$ is a free $\mathds Z G$-module for all $p$. We obtain \[ E^1_{p,q}=\begin{cases} H_0(Q;C_p)\cong(F_p(G)\otimes_H M) \otimes_Q \mathds Z \cong F_p(G)\otimes_G M & q=0 \\ 0 & \text{otherwise,} \end{cases} \] hence the spectral sequence collapses on the second page and we have $E^2_{p,0}=H_p(G;M)$. This proves \begin{Theorem}[Lyndon\slash Hochschild-Serre]\label{th:lhs} Given a short exact sequence of groups \[ 1 \rightarrow H \rightarrow G \rightarrow Q \rightarrow 1, \] there is a convergent spectral sequence \[ L^2_{p,q} = H_p(Q;H_q(H;M)) \Rightarrow H_{p+q}(G;M). \] \end{Theorem} \subsection{Relative spectral sequences} In this section, we will discuss a procedure to obtain relative spectral sequences both from the spectral sequence associated to a double complex and from the Lyndon\slash Hochschild-Serre spectral sequence. Consider the following situation: Fix a group $G$ and a subgroup $G'$. Let $F'$, $C'$ and $F$, $C$ be chain complexes of $G'$- and $G$-modules, respectively. Consider the two tensor product double complexes $(F_p \otimes_G C_q)_{p,q}$ and $(F'_p \otimes_{G'} C'_q)_{p,q}$. Assume that there is a map of double complexes \[ i: F'\otimes_{G'} C' \rightarrow F\otimes_G C.
\] We denote the induced maps on the vertical and horizontal chain complexes by \begin{align*} i_{p,\bullet}: F'_p \otimes_{G'} C'_\bullet & \rightarrow F_p \otimes_G C_\bullet \\ i_{\bullet,q}: F'_\bullet \otimes_{G'} C'_q & \rightarrow F_\bullet \otimes_G C_q \end{align*} For every $q$, we then consider the mapping cone chain complexes $\Cone_*(i_{\bullet,q})$ and $\Cone_*(i_{q,\bullet})$. It is a simple calculation to see that \[ D_{p,q}=\Cone_p(i_{\bullet,q})\qquad\text{and}\qquad D^T_{p,q}=\Cone_p(i_{q,\bullet}) \] are actually double complexes with respect to the cone differentials and the differentials \begin{align*} \partial: (F'_{p-1} \otimes_{G'} C'_q) \oplus (F_p \otimes_G C_q) &\longrightarrow (F'_{p-1} \otimes_{G'} C'_{q-1}) \oplus (F_p \otimes_G C_{q-1}) \\ f' \otimes_{G'} c' + f \otimes_G c &\longmapsto f' \otimes_{G'} \partial^{C'}(c') + f\otimes_G\partial^C(c) \\ \partial^T: (F'_q \otimes_{G'} C'_{p-1}) \oplus (F_q \otimes_G C_p) &\longrightarrow (F'_{q-1} \otimes_{G'} C'_{p-1}) \oplus (F_{q-1} \otimes_G C_p) \\ f' \otimes_{G'} c' + f \otimes_G c &\longmapsto \partial^{F'}(f') \otimes_{G'} c' + \partial^F (f)\otimes_G c \end{align*} induced by the differentials $\partial^{C'}$ and $\partial^C$, respectively $\partial^{F'}$ and $\partial^F$. \begin{Theorem}\label{th:relative_spectral_sequence} There are two spectral sequences corresponding to $D$ and $D^T$ satisfying \[ \begin{array}{c} E^1_{p,q} = H_q\bigl(\Cone_*(i_{\bullet,p})\bigr) \\ L^1_{p,q} = H_q\bigl(\Cone_*(i_{p,\bullet})\bigr) \end{array} \Rightarrow H_{p+q}\bigl(\Cone_*(i)\bigr). \] The differentials on the first pages are induced by $\pm\partial$ and $\pm\partial^T$, respectively. \end{Theorem} \begin{Proof} The first spectral sequence is the first one from Proposition \ref{prop:two_spectral_sequences} applied to the double complex $D$. We know that \[ E^1_{p,q} = H_q\bigl(\Cone_*(i_{\bullet,p})\bigr) \Rightarrow H_{p+q}(\Tot(D)). 
\] But the formation of the mapping cone and of the total complex commutes in the following sense: On the one hand \[ \Tot(D)_k = \bigoplus_{p+q=k} (F'_{p-1} \otimes_{G'} C'_q) \oplus (F_p \otimes_G C_q). \] On the other hand \[ \Cone_k(i:(F'_*\otimes_{G'} C'_*)\rightarrow (F_* \otimes_G C_*)) = \bigoplus_{p+q=k-1}(F'_p \otimes_{G'} C'_q) \oplus \bigoplus_{p+q=k}(F_p \otimes_G C_q) \] Hence the modules of the two chain complexes coincide. For the boundary maps, we have \begin{align*} \partial^{\Cone(i)}\Bigl(\sum_{p+q=k-1}f'_p\otimes_{G'} c'_q + \sum_{p+q=k}f_p\otimes_G c_q \Bigr) = &- \sum_{p+q=k-1} \bigl(\partial f'_p \otimes_{G'} c'_q + (-1)^p f'_p\otimes_{G'}\partial c'_q \bigr) +\\ &+ i \bigl( \sum_{p+q=k-1} f'_p \otimes_{G'} c'_q \bigr) +\\ &+ \sum_{p+q=k} \bigl(\partial f_p\otimes_G c_q + (-1)^p f_p\otimes_G\partial c_q\bigl). \end{align*} and \begin{multline*} \partial^{\Tot(D)}\Bigl(\sum_{p+q=k} (f'_{p-1}\otimes_{G'} c'_q + f_p\otimes_G c_q)\Bigr) = \sum_{p+q=k} \bigl( -\partial f'_{p-1}\otimes_{G'} c'_q + i(f'_{p-1}\otimes_{G'} c'_q) + \partial f_p\otimes_G c_q + \\ \qquad\quad+ (-1)^p f'_{p-1}\otimes_{G'} \partial c'_q + (-1)^p f_p\otimes_G \partial c_q\bigr). \end{multline*} which are also easily seen to be identical. For the second spectral sequence, we take the first spectral sequence from Proposition \ref{prop:two_spectral_sequences} associated to the double complex $D^T$. We denote it by $L$, however, to make the notation consistent in the following. Note that we can also write $D^T_{p,q}\cong\Cone_p(i_{\bullet,q}^T)$, where $i^T: (F'_q\otimes_{G'}^T C'_p)_{p,q} \rightarrow (F_q\otimes_G^T C_p)_{p,q}$ is the map induced on the transposed double complexes. 
By applying the above calculation to $i^T$, we obtain \[ L^1_{p,q} = H_q\bigl(\Cone_*(i_{p,\bullet})\bigr) \Rightarrow H_{p+q}(\Tot(D^T)) \cong H_{p+q}(\Cone_*(i^T)). \] We have seen in Lemma \ref{lem:transposed_double_complex} that there is a chain map inducing isomorphisms \[ H_*(F \otimes_G C) \cong H_*(F \otimes_G^T C) \] on homology. Using these maps, one can easily construct a chain map $\Cone(i)\rightarrow \Cone(i^T)$ inducing an isomorphism on homology by the long exact sequence associated to the mapping cone and the 5-lemma. \end{Proof} \begin{Remark} If $i$ is injective, we have \[ H_{p+q}(\Cone_*(i))\cong H_{p+q}( F\otimes_G C, i(F'\otimes_{G'} C')) \] by Lemma \ref{l:mapping_cone_relative_homology}. \end{Remark} \noindent Applied to the Lyndon\slash Hochschild-Serre spectral sequence from Theorem \ref{th:lhs}, we obtain \begin{Theorem}[Relative Lyndon\slash Hochschild-Serre]\label{th:relative_lyndon_hochschild_serre} Fix a group $G$ with a normal subgroup $H$, a subgroup $G'\leq G$ and a $G$-module $M$. Write $H'=H\cap G'$. If the quotient $Q=G/H$ is isomorphic to $G'/H'$, there is a spectral sequence \[ L^2_{p,q} = H_p\bigl(Q;H_q(H,H';M)\bigr) \Rightarrow H_{p+q}(G,G';M). \] If the subgroup $H$ acts trivially on $M$ and $M$ is $\mathds Z$-free, we obtain \[ L^2_{p,q} = H_p\bigl(Q;H_q(H,H';\mathds Z)\otimes_\mathds Z M\bigr) \Rightarrow H_{p+q}(G,G';M). \] \end{Theorem} \begin{Proof} Let $F_*(G)$, $F_*(G')$ and $F_*(Q)$ be the standard resolutions of $\mathds Z$ over $\mathds Z G$, $\mathds Z G'$ and $\mathds Z Q$, respectively. As in the construction of the Lyndon\slash Hochschild-Serre spectral sequence, we consider the double complexes $F_*(Q) \otimes_Q (F_*(G') \otimes_{H'} M)$ and $F_*(Q) \otimes_Q (F_*(G) \otimes_H M)$. Let \[ i : F_*(Q) \otimes_Q ( F_*(G') \otimes_{H'} M) \rightarrow F_*(Q) \otimes_Q (F_*(G) \otimes_H M) \] be induced by the inclusions.
Then apply Theorem \ref{th:relative_spectral_sequence} to obtain a spectral sequence with first page term \[ L^1_{p,q}=H_q(\Cone_*(i_{p,\bullet}))\cong H_q\biggl(\frac{F_p(Q) \otimes_Q (F_*(G) \otimes_{H} M)}{F_p(Q) \otimes_Q (F_*(G') \otimes_{H'} M)}\biggr). \] The module $F_p(Q)$ is $\mathds Z Q$-free and hence $\mathds Z Q$-flat. We obtain \[ L^1_{p,q}\cong F_p(Q) \otimes_Q H_q\biggl(\frac{F_*(G)\otimes_{H} M}{F_*(G') \otimes_{H'} M}\biggr) \cong F_p(Q) \otimes_Q H_q\biggl(\frac{F_*(H)\otimes_{H} M}{F_*(H') \otimes_{H'} M}\biggr), \] where the last isomorphism is the content of Proposition \ref{prop:relative_homology_of_subgroups}. This yields \[ L^2_{p,q}\cong H_p\bigl(Q;H_q(H,H';M)\bigr) \Rightarrow H_{p+q}(\Cone_*(i)). \] \noindent On the other hand, consider the spectral sequence $E$ from Theorem \ref{th:relative_spectral_sequence} and apply Lemma \ref{l:mapping_cone_relative_homology} to obtain the following description of the first page. \[ E^1_{p,q} \cong H_q \biggl( \frac{F_*(Q) \otimes_Q ( F_p(G) \otimes_H M)}{F_*(Q) \otimes_Q (F_p(G') \otimes_{H'} M)} \biggr) \Rightarrow H_{p+q}(\Cone_*(i)). \] In the proof of the regular Lyndon\slash Hochschild-Serre spectral sequence (Theorem \ref{th:lhs}), we have seen that $H_q(F_*(Q) \otimes_Q ( F_p(G) \otimes_H M))=0$ for $q\neq 0$ and that \[ H_0(F_*(Q) \otimes_Q ( F_p(G) \otimes_H M)) \cong F_p(G)\otimes_G M. \] The same is true if we replace $G$ and $H$ by $G'$ and $H'$. The map $i$ induces the map \[ H_0(F_*(Q) \otimes_Q ( F_p(G') \otimes_{H'} M)) \rightarrow H_0(F_*(Q) \otimes_Q ( F_p(G) \otimes_H M)) \] which under the above isomorphisms is just the inclusion $F_p(G')\otimes_{G'}M \rightarrow F_p(G)\otimes_G M$. In particular, it is injective.
By the long exact sequence for relative homology, we obtain $E^1_{p,q}=0$ for $q\neq 0$ and \[ E^1_{p,0}\cong \frac{F_p(G) \otimes_G M}{F_p(G')\otimes_{G'} M}, \] so the spectral sequence $E$ collapses on the second page and converges to $H_{p+q}(G,G';M)$, which proves the first statement. The second statement follows from Lemma \ref{l:trivial_coefficient_module}. \end{Proof} \section{Geometry} Homological stability proofs usually consider the action of a group on some simplicial complex and then exhibit smaller groups of the same series of groups as stabiliser subgroups. In this part, we will introduce the opposition complex associated to a group with a weak spherical Tits system and construct a filtration of this complex which leads to a relative spectral sequence involving the group and its Levi subgroups. This will be used in the last part to prove homological stability. \subsection{Spherical buildings}\label{subsec:sph_buildings} We will briefly recall the basic definitions for Coxeter complexes and spherical buildings. The books by Abramenko and Brown \cite{AB:B:08} and by Ronan \cite{Ron:LoB:89} are excellent references, where all of the material of this section can be found. At the end of this section, we will illustrate all of these definitions in the concrete case of the projective space over a division ring. \begin{Definition} Let $I$ be a finite set. A \emph{Coxeter matrix} $M=(m_{i,j})_{i,j\in I}$ is a symmetric matrix with entries 1 on the diagonal and with entries in $\{2,3,\ldots,\infty\}$ else. We associate to $M$ the \emph{Coxeter diagram}, an edge-labelled graph with vertex set $I$ and with edges between $i$ and $j$ if $m_{i,j}>2$. We label all edges by the corresponding matrix elements $m_{i,j}$. It is customary to omit the label if $m_{i,j}=3,4$ and to draw a double edge for $m_{i,j}=4$. 
\end{Definition} \noindent In this paper, we will need the following two diagrams, where the type set $I=\{i_1,\ldots,i_n\}$ is enumerated linearly as follows \begin{center} \begin{tikzpicture}[font=\small] \node (G) at (-1,.8) {$A_n$}; \node (L) at (-1,0) {$C_n$}; \foreach \y in {0,.8} { \foreach \x in {0,1,2,4,5} { \fill (\x,\y) circle (.7mm);} \draw (1,\y) -- (2,\y); \draw[dotted] (2,\y) -- (4,\y); \draw (4,\y) -- (5,\y); } \draw (0,.8) -- (1,.8); \draw (0,0.05) -- (1,0.05); \draw (0,-0.05) -- (1,-0.05); \node (0) at (0,-.6) {$i_1$}; \node (1) at (1,-.6) {$i_2$}; \node (3) at (2,-.6) {$i_3$}; \node (4) at (4,-.6) {$i_{n-1}$}; \node (5) at (5,-.6) {$i_n$}; \node (dot) at (3,-.6) {$\cdots$}; \end{tikzpicture} \end{center} \noindent Associated to every Coxeter matrix, there is a finitely presented group, the Coxeter group. \begin{Definition} Let $I$ be a finite set and let $M$ be a Coxeter matrix. The associated \emph{Coxeter group $W$} is a finitely presented group with generator set $S=\{s_i:i\in I\}$ and with presentation \[ W = \langle S \,|\, (s_is_j)^{m_{i,j}}=1\quad\forall i,j\in I \text{ with } m_{i,j}<\infty \rangle. \] The pair $(W,S)$ is called a \emph{Coxeter system}; the cardinality $|I|$ is called its \emph{rank}. The set of all cosets \[ \Sigma(W,S)=\{w\langle S'\rangle: w\in W, S'\subseteq S\}, \] partially ordered by reverse inclusion, forms a simplicial complex, the \emph{Coxeter complex} associated to $(W,S)$. If $\sigma\in\Sigma(W,S)$ is of the form $\sigma=w\langle \{s_i : i \in I' \}\rangle$, we write $\type(\sigma) = I\setminus I'$. In particular, the types of vertices are single elements of $I$. Top-dimensional simplices of $\Sigma(W,S)$ are called \emph{chambers}, codimension-1-simplices are called \emph{panels}. \end{Definition} \noindent The definition of opposition in spherical buildings is central to our discussion later on. \begin{Definition} We say that $W$ is \emph{spherical} if $W$ is finite.
In this case $\Sigma(W,S)$ can be realised naturally as a triangulated sphere of dimension $|I|-1$. Then the antipodal map of the sphere induces a simplicial involutory automorphism $\rho$ of $\Sigma(W,S)$. A simplex $\sigma$ and its image $\rho(\sigma)$ are said to be \emph{opposite}. The notion of opposition can also be expressed in a combinatorial fashion. \end{Definition} \noindent Buildings are simplicial complexes covered by their apartments, which are copies of a fixed Coxeter complex. \begin{Definition} A simplicial complex $\Delta$ together with a collection of subcomplexes $\mathcal A$ called \emph{apartments} is a \emph{building} if \begin{itemize} \item Every apartment is a Coxeter complex. \item Every two simplices of $\Delta$ are contained in a common apartment. \item For any two apartments $A_1,A_2\in \mathcal A$ containing a common chamber, there is an isomorphism $A_1\rightarrow A_2$ fixing $A_1\cap A_2$ pointwise. \end{itemize} The building $\Delta$ is called \emph{thick} if every panel is contained in at least three chambers. We will call a building \emph{weak} if it is not necessarily thick. \end{Definition} \noindent The axioms force all apartments to be isomorphic; in particular, there is a unique Coxeter system $(W,S)$ associated to $\Delta$. \begin{Definition} If this Coxeter group $W$ is spherical, then $\Delta$ is called spherical as well. Likewise, we say that the type of $\Delta$ is the type of $(W,S)$. The \emph{rank} of the building is the cardinality of the type set $I$. \end{Definition} \noindent We will need the following properties of buildings: \begin{Lemma} The link of a simplex $\lk_\Delta(\sigma)$ is again a building of type $I\setminus \type(\sigma)$. \end{Lemma} \begin{Theorem}[Solomon-Tits] A spherical building of rank $n$ has the homotopy type of a bouquet of $(n-1)$-spheres. \end{Theorem} \noindent The following example will be discussed again in section \ref{sec:general_linear_groups}.
\begin{Example} The simplest example of a rank $n$ spherical building is the flag complex $\Delta$ over the $n$-dimensional projective space over any division ring $D$. Simplices in $\Delta$ are then ascending flags of subspaces of $D^{n+1}$: \[ \Delta = \{ (V_1 \subsetneq V_2 \subsetneq \cdots\subsetneq V_k) : V_1\neq 0, V_k\subsetneq D^{n+1} \} \] Apartments in $\Delta$ consist of all flags whose elements are spanned by subsets of a fixed basis of $D^{n+1}$. The associated Coxeter group $W$ is the symmetric group on $n+1$ letters permuting the basis vectors. This building is of type $A_n$. Two simplices $(V_1\subsetneq V_2 \subsetneq\cdots\subsetneq V_k)$ and $(V'_1\subsetneq V'_2 \subsetneq\cdots\subsetneq V'_k)$ are opposite if and only if $V_i\oplus V'_{k+1-i} = D^{n+1}$ for all $i=1,\ldots,k$. The type of a simplex $\sigma=(V_1\subsetneq V_2 \subsetneq\cdots\subsetneq V_k)$ is the set of dimensions of subspaces in the flag: $\type(\sigma)=\{ i_l : \dim(V_j)=l \text{ for some } 1\leq j\leq k\}$. \end{Example} \noindent Our main result will involve groups with weak spherical Tits systems or, equivalently, groups acting strongly transitively on weak spherical buildings. \begin{Definition} A group $G$ acts \emph{strongly transitively} on a weak spherical building $\Delta$ if it acts transitively on pairs $(c,A)$ consisting of a chamber $c$ and an apartment $A$ containing it. \end{Definition} \noindent Strongly transitive actions give rise to a group theoretic datum called a Tits system. \begin{Definition} Let $G$ be a group. Let $B$ and $N$ be subgroups of $G$ such that their intersection $H=B\cap N$ is normal in $N$. Assume also that the quotient group $W\mathrel{\mathop :}= N/H$ is generated by a set $S\subset W$. The quadruple $(G,B,N,S)$ is called a \emph{weak Tits system} if \begin{itemize} \item $G=\langle B \cup N\rangle$, \item $(W,S)$ is a Coxeter system and \item for $s\in S$ and $w\in W$, we have $BsBwB\subseteq BwB \sqcup BswB$.
\end{itemize} If in addition $sBs\neq B$ for all $s\in S$, we call $(G,B,N,S)$ a \emph{Tits system} for $G$. \end{Definition} \noindent Given a strongly transitive action of a group $G$ on a weak spherical building, we can construct a weak Tits system as follows. \begin{Construction} Fix an apartment $\Sigma$ and a chamber $c_0\in\Sigma$. Denote by $B$ the stabiliser of $c_0$ and by $N$ the normaliser of $\Sigma$. Their intersection $H=B\cap N$ is the pointwise stabiliser of $\Sigma$ and the group $H$ is normal in $N$. The quotient $N/H$ acts chamber-regularly on the apartment $\Sigma$ and is hence isomorphic to the associated Coxeter group $W$, for which we fix the generating set $S$. \end{Construction} \begin{Theorem} The quadruple $(G,B,N,S)$ is a weak Tits system for $G$. If the building is thick, then $(G,B,N,S)$ is a Tits system. Conversely, given a (weak) Tits system of spherical type for $G$, a (weak) spherical building can be constructed on which $G$ acts strongly transitively. \end{Theorem} \noindent The proof of this result can be found in \cite[Chapter 6]{AB:B:08}. \begin{Remark} We will later describe the buildings and the weak Tits systems we use explicitly. The class of groups with weak spherical Tits systems includes general and special linear groups over division rings and unitary groups over hyperbolic modules, in particular symplectic groups and special orthogonal groups of maximal Witt index. Note that the latter groups have a (non-weak) Tits system of type $D_n$, but also a weak Tits system of type $C_n$, as described in \cite[6.7]{AB:B:08}. \end{Remark} \begin{Definition} Any stabiliser in $G$ of a simplex in $\Delta$ is called a \emph{parabolic subgroup}. \end{Definition} \begin{Example} The general linear group $\Gl_{n+1}(D)$ acts strongly transitively on the building $\Delta$ of type $A_n$, which is the flag complex over projective space, as discussed above.
If $e_1,e_2,\ldots,e_{n+1}$ is the standard basis of $D^{n+1}$, the parabolic subgroup associated to the vertex $v=(\langle e_1,\ldots, e_k\rangle)$ is given by \[ G_v = \begin{pmatrix} \Gl_{k}(D) & * \\ 0 & \Gl_{n+1-k}(D) \end{pmatrix}. \] \end{Example} \subsection{The opposition complex} Let $n\geq 2$ and let $\Delta$ be a weak spherical building of rank $n$. Enumerate the type set $I=\{i_1,\ldots,i_n\}$ of $\Delta$ arbitrarily. In addition, we fix a group $G$ that acts strongly transitively on $\Delta$. The basic geometry on which we will study the action of $G$ is not the building $\Delta$, but its associated opposition complex. \begin{Definition} The \emph{opposition complex $O(\Delta)$} is the simplicial complex consisting of pairs of opposite simplices \[ O(\Delta) \mathrel{\mathop :}= \{ \sigma= (\sigma^+,\sigma^-) \in \Delta\times \Delta : \sigma^+ \text{ opposite } \sigma^- \} \] with the induced inclusion relations. Set $\type((\sigma^+,\sigma^-)) \mathrel{\mathop :}= \type(\sigma^+)$. \end{Definition} \noindent The opposition complex $O(\Delta)$ is a $G$-simplicial complex since the $G$-action preserves opposition. The $G$-action is transitive on vertices of the same type of $O(\Delta)$, since it is transitive on pairs of opposite vertices of a fixed type in the building. \begin{Example} The vertices of the opposition complex associated to the general linear group $\Gl_{n+1}(D)$ acting on the associated projective space over $D^{n+1}$ as in Section \ref{subsec:sph_buildings} are pairs of complementary subspaces of $D^{n+1}$. \end{Example} \noindent The significance of the opposition complex for this paper lies in the following theorem. \begin{Theorem}[von Heydebreck, \cite{vH:HPC:03}, Theorem 3.1]\label{th:opposition_complex_is_spherical} The opposition complex of a weak spherical building $\Delta$ of rank $n$ is homotopy equivalent to a bouquet of $(n-1)$-spheres, we will also say that it is \emph{$(n-1)$-spherical}. 
\end{Theorem} \noindent We fix a set of representative vertices for the $G$-action. \begin{Definition}\label{def:situation} We fix a standard apartment $\Sigma$ in $\Delta$ and a chamber $c_0\in\Sigma$. In the following, we write $v_p^+$ for the vertex of $c_0$ of type $\{i_p\}$ and we denote the corresponding opposite vertex in $\Sigma$ by $v_p^-$. We write $v_p=(v_p^+,v_p^-)\in O(\Delta)$ for the vertex in $O(\Delta)$. \end{Definition} \noindent For the inductive arguments to come, we investigate the structure of stabilisers. \begin{Definition} Denote the stabilisers as follows: \[ L_p \mathrel{\mathop :}= G_{v_p} = G_{v_p^+} \cap G_{v_p^-} \] These are intersections of two opposite parabolic subgroups, called \emph{Levi subgroups of $G$}. \end{Definition} \begin{Example} For the general linear group $\Gl_{n+1}(D)$ acting on the associated building $\Delta$ of type $A_n$, as above, the stabiliser of the vertex $v=(\langle e_1,\ldots,e_k\rangle,\langle e_{k+1},\ldots,e_{n+1}\rangle)$ in $O(\Delta)$ is the subgroup \[ L_v = \begin{pmatrix} \Gl_{k}(D) & 0 \\ 0 & \Gl_{n+1-k}(D) \end{pmatrix}. \] Note that the Levi subgroup splits as a direct product of smaller general linear groups. \end{Example} \noindent The opposition complex commutes with the formation of links as follows. \begin{Proposition}[von Heydebreck, \cite{vH:HPC:03}, Proposition 2.1]\label{prop:links_opposition_complex} For a simplex \\ $(\sigma^+,\sigma^-)\in O(\Delta)$ we have \[ \lk_{O(\Delta)}((\sigma^+,\sigma^-)) \cong O(\lk_\Delta(\sigma^+)). \] This isomorphism is $G_{\{\sigma^+,\sigma^-\}}$-equivariant. In particular, if $\sigma^+$ has $k$ vertices, the link $\lk_{O(\Delta)}((\sigma^+,\sigma^-))$ is $(n-1-k)$-spherical. \end{Proposition} \subsection{A Filtration} We construct an exact chain complex of $G$-modules associated to $O(\Delta)$. The construction is similar to the construction of cellular chains of a CW complex. The filtration by skeletons is replaced by a filtration by types.
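\noindent To fix ideas, consider again the flag complex of Section \ref{subsec:sph_buildings}, and assume — purely for illustration — that the types are enumerated by dimension, so that $i_l$ is the type of $l$-dimensional subspaces (this is the ordering that will be used in Section \ref{sec:general_linear_groups}). The type filtration defined below then takes the following concrete form.

```latex
% Type filtration of O(\Delta) for the A_n building of Gl_{n+1}(D),
% under the illustrative assumption that i_l is the type of
% l-dimensional subspaces:
\[
O(\Delta)_p=\bigl\{(\sigma^+,\sigma^-)\in O(\Delta) :
  \dim V\leq p \text{ for every subspace } V \text{ occurring in } \sigma^+\bigr\}
\]
% For p=1 this is the discrete set of pairs (line, complementary
% hyperplane) in D^{n+1}; for p=n it is all of O(\Delta).
```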
\begin{Definition} For $1\leq p \leq n$ let $I_p=\{i_1,\ldots,i_p\}$. Write \[ O(\Delta)_p \mathrel{\mathop :}= \{ \sigma\in O(\Delta) : \type (\sigma) \subseteq I_p\}, \] this is a $G$-invariant filtration of $O(\Delta)$. We set $O(\Delta)_0\mathrel{\mathop :}= \emptyset$. \end{Definition} \noindent Observe that $O(\Delta)_p$ is of rank $p$ and hence of dimension $p-1$. \begin{Definition} Let $v$ be a vertex in $O(\Delta)_p$. We define the \emph{filtrated residue}, \emph{link} and \emph{star} by: \begin{align*} R(v)_p &= R_{O(\Delta)}(v) \cap O(\Delta)_p = \{\sigma\in O(\Delta)_p: v\in\sigma\},\\ \lk(v)_p &= \lk_{O(\Delta)}(v) \cap O(\Delta)_p = \{\sigma\in O(\Delta)_p: v\cup\sigma\in O(\Delta), v\cap\sigma=\emptyset\},\\ \st(v)_p &= \lk(v)_p \sqcup R(v)_p. \end{align*} \end{Definition} \begin{Remark} From the definition it is obvious that \[ \st(v)_p \cap O(\Delta)_{p-1} = \lk(v)_p = \lk(v)_{p-1} \] if $\type(v)=\{i_p\}$. \end{Remark} \begin{Proposition}\label{prop:filtered_homology} For $2\leq p\leq n$ we have \[ H_i(O(\Delta)_p,O(\Delta)_{p-1}) \cong \bigoplus_{\type(v)=i_p} \tilde H_{i-1}(\lk(v)_{p-1}). \] \end{Proposition} \begin{Proof} We have \begin{align*} O(\Delta)_p \setminus O(\Delta)_{p-1} &= \{\sigma\in O(\Delta) : i_p \in \type(\sigma) \subseteq I_p \} \\ &= \coprod_{\type(v)=i_p} \{\sigma\in O(\Delta)_p : v\in \sigma \} \\ &= \coprod_{\type(v)=i_p} R(v)_p. \end{align*} Since $\st(v)_p\cap O(\Delta)_{p-1} = \lk(v)_{p-1}$, we obtain the following pushout diagram \[ \xymatrix{ \coprod_{\type(v)=i_p} \lk(v)_{p-1}\ar[r]\ar[d] & O(\Delta)_{p-1}\ar[d] \\ \coprod_{\type(v)=i_p} \st(v)_p\ar[r] & O(\Delta)_p. } \] By excision, we obtain \begin{align*} H_i(O(\Delta)_p,O(\Delta)_{p-1}) &\cong \bigoplus_{\type(v)=i_p} H_i(\st(v)_p,\lk(v)_{p-1}) \\ &\cong \bigoplus_{\type(v)=i_p} \tilde H_i(\st(v)_p / \lk(v)_{p-1}) \\ &\cong \bigoplus_{\type(v)=i_p} \tilde H_{i-1}(\lk(v)_{p-1}). 
\end{align*} The last line follows from the fact that $\st(v)_p$ is the simplicial cone over $\lk(v)_{p-1}$. \end{Proof} \noindent This description of relative homology allows us to show that each filtration subcomplex of the opposition complex is also spherical. \begin{Proposition}\label{prop:filtrated_opposition_complexes_are_spherical} For any $1\leq p \leq n$, the homology groups $\tilde H_i(O(\Delta)_p)$ are trivial except for $i=p-1$. The group $\tilde H_{p-1}(O(\Delta)_p)$ is $\mathds Z$-free. \end{Proposition} \begin{Proof} Since the complexes $O(\Delta)_p$ are $(p-1)$-dimensional, their top-dimensional homology group $H_{p-1}(O(\Delta)_p)$ is automatically $\mathds Z$-free. It remains to show that all other reduced homology groups vanish. We prove this by induction on $n=\rank(\Delta)$. In any case, the statement is true for $O(\Delta)_n=O(\Delta)$ by Theorem \ref{th:opposition_complex_is_spherical} and it is trivial for $O(\Delta)_1$. Combining these facts, we obtain the statement for $n=2$. Now assume $n\geq 3$. We prove the statement for $O(\Delta)_p$ for all $2\leq p\leq n$ by reverse induction. The case $p=n$ is already known by Theorem \ref{th:opposition_complex_is_spherical}. Hence assume we have the statement for $O(\Delta)_p$ and prove it for $O(\Delta)_{p-1}$. First of all note that by Proposition \ref{prop:filtered_homology}, we have \[ H_i(O(\Delta)_p,O(\Delta)_{p-1}) \cong \bigoplus_{\type(v)=i_p} \tilde H_{i-1}(\lk(v)_{p-1}) \] and we have the induction hypothesis for $\lk(v)_{p-1}$, since $\rank(\lk_\Delta(v^+))=\rank(\Delta)-1$. We see in particular that $H_i(O(\Delta)_p,O(\Delta)_{p-1})$ vanishes for $i\neq p-1$. By the long exact sequence for the pair $(O(\Delta)_p,O(\Delta)_{p-1})$, we obtain that $\tilde H_i(O(\Delta)_{p-1})$ vanishes for $i\neq p-2$. \end{Proof} \noindent The following modules $M_p$ will be the coefficient modules in the spectral sequence. 
\begin{Definition} We define a sequence of $L_p$-modules as follows: \[ M_p \mathrel{\mathop :}=\begin{cases} \mathds Z & p=1 \\ \tilde H_{p-2}(\lk(v_p)_{p-1}) & 2\leq p \leq n. \end{cases} \] These are $L_p$-modules, since $L_p$ stabilises $\lk(v_p)$ and is type-preserving. It hence also stabilises the subcomplex $\lk(v_p)_{p-1}$. Note additionally that $M_p$ is $\mathds Z$-free by the previous proposition. \end{Definition} \noindent With this definition, we obtain a simple description of the relative homology modules. \begin{Proposition}\label{prop:structure_of_filtrated_homology} For $1\leq p\leq n$, we have \[ H_i(O(\Delta)_p,O(\Delta)_{p-1}) \cong\begin{cases} 0 & i \neq p-1\\ \mathds Z G \otimes_{L_p} M_p & i=p-1. \end{cases} \] \end{Proposition} \begin{Proof} If $p=1$, then $O(\Delta)_1 = \coprod_{\type(v)=i_1} \{v\}$. Then obviously \[ H_0(O(\Delta)_1) \cong \mathds Z G \otimes_{L_1} \mathds Z \] since $G$ acts transitively on pairs of opposite vertices of the same type. For $p>1$, by Proposition \ref{prop:filtered_homology}, we have \begin{align*} H_i(O(\Delta)_p,O(\Delta)_{p-1}) &\cong \bigoplus_{\type(v)=i_p} \tilde H_{i-1}(\lk(v)_{p-1}), \\ &\cong \mathds Z G \otimes_{L_p} \tilde H_{i-1}(\lk(v_p)_{p-1} ), \end{align*} again since $G$ acts transitively on pairs of opposite vertices of the same type. The claim now follows from Proposition \ref{prop:filtrated_opposition_complexes_are_spherical}. \end{Proof} \noindent The filtration allows us to obtain an exact complex of $G$-modules, which will be used to construct a relative spectral sequence. 
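\noindent As a sanity check, let us record the simplest instance of Proposition \ref{prop:structure_of_filtrated_homology}: for $p=1$ the homology in question is simply a permutation module.

```latex
% O(\Delta)_1 is the discrete G-set of type-i_1 vertices of O(\Delta),
% on which G acts transitively with stabiliser L_1, so
\[
H_0(O(\Delta)_1)
  \cong \mathds Z G\otimes_{L_1}\mathds Z
  \cong \mathds Z[G/L_1],
\]
% the free abelian group on the set of pairs of opposite type-i_1
% vertices, with G permuting the basis elements.
```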
\begin{Definition}\label{def:exact_chain_complexes} Consider the following sequence of $G$-modules: \[ C_p\mathrel{\mathop :}=\begin{cases} H_{n-1}(O(\Delta)) & p=n+1 \\ H_{p-1}(O(\Delta)_p,O(\Delta)_{p-1}) & 1 \leq p \leq n \\ \mathds Z & p=0 \\ 0 & \text{otherwise.} \end{cases} \] We then have \[ C_p \cong \mathds Z G \otimes_{L_p}M_p \] for $1\leq p \leq n$ by Proposition \ref{prop:structure_of_filtrated_homology}. \end{Definition} \noindent As for cellular chains, there is a chain complex structure on $C_*$. Note that, in contrast to the situation of cellular chains, we have added the modules $C_0$ and $C_{n+1}$ to make the chain complex $C_*$ exact. \begin{Lemma}\label{l:kdelta_exact_chain_complex} There is a boundary map $\partial^C_*$ on $C_*$, which makes $C_*$ into an exact chain complex of $G$-modules. \end{Lemma} \begin{Proof} The filtration $(O(\Delta)_p)_{p\in\mathds Z}$ induces a $G$-equivariant filtration on cellular chains of $O(\Delta)$. There is hence a spectral sequence of $G$-modules \[ E^1_{p,q} = H_{p+q}(O(\Delta)_{p+1},O(\Delta)_p) \quad\Rightarrow\quad H_{p+q}(O(\Delta)). \] By Propositions \ref{prop:filtered_homology} and \ref{prop:filtrated_opposition_complexes_are_spherical}, the latter applied to the links, we obtain \[ E^1_{p,q} = \begin{cases} H_p(O(\Delta)_{p+1},O(\Delta)_p) & q=0 \\ 0 & q \neq 0. \end{cases} \] The spectral sequence hence collapses on the second page; its first-page differentials, together with the edge homomorphisms onto $C_0=\mathds Z$ and from $C_{n+1}=H_{n-1}(O(\Delta))$, form an exact chain complex, since the abutment $H_*(O(\Delta))$ is concentrated in degrees $0$ and $n-1$. \end{Proof} \begin{Remark} A closer comparison to the situation of cellular chains shows that this boundary map is given by the composition \[ H_p(O(\Delta)_{p+1},O(\Delta)_p) \stackrel{\delta}{\rightarrow} H_{p-1}(O(\Delta)_p) \stackrel{H(\pi)}{\rightarrow} H_{p-1}(O(\Delta)_p,O(\Delta)_{p-1}) \] where $\delta$ is the connecting homomorphism of the long exact sequence associated to the pair $(O(\Delta)_{p+1},O(\Delta)_p)$ and $H(\pi)$ is induced by the projection as in the long exact sequence associated to the pair $(O(\Delta)_p,O(\Delta)_{p-1})$.
\end{Remark} \subsection{The relative spectral sequence} For $n\geq 2$, let $G$ be a group with a weak Tits system of rank $n+1$ with associated building $\Delta$ whose type set $I=\{i_1,\ldots,i_{n+1}\}$ is ordered arbitrarily. We adapt the notation from the previous section. In this general setting, of course, part of the problem of homological stability is its precise formulation. Which subgroups of a given group $G$ should be considered as the ones to yield stability? These subgroups cannot be expressed explicitly in this generality; they depend on the chosen series of groups. Consequently, we allow for a certain amount of flexibility in the choice of the subgroup $G'$. \begin{Definition} Let $G'$ be any subgroup of $L_{n+1}$ that still acts strongly transitively on the link $\Delta'\mathrel{\mathop :}=\lk_\Delta(v_{n+1}^+)$, which is a building of type $I_n=\{i_1,\ldots,i_n\}$. In particular, $G'$ admits a weak Tits system of type $I_n$. We call the pair $(G,G')$ a \emph{stability pair}. \end{Definition} \begin{Remark} We will always choose $G'$ such that this assumption of strong transitivity is obviously fulfilled. In fact, if the group $G$ admits a root datum, then every Levi subgroup does as well by \cite[6.2.3]{Rem:GKM:02}. The group $L_{n+1}^\dagger=[L_{n+1},L_{n+1}]$ is then the associated little projective group which also admits a root datum and we will always choose $G'$ to contain $L_{n+1}^\dagger$. \end{Remark} \noindent The aim of this section is to associate a spectral sequence to every stability pair. It will be a relative version of the spectral sequence in Corollary \ref{cor:ss_exact_coefficients}. \begin{Definition} By Proposition \ref{prop:links_opposition_complex}, we have \[ O(\Delta)' \mathrel{\mathop :}= \lk_{O(\Delta)}(v_{n+1}) \cong O(\lk_\Delta(v_{n+1}^+))=O(\Delta'), \] and we denote the complex on the left by $O(\Delta)'$. This isomorphism is $G'$-equivariant.
Define a type filtration on $O(\Delta')$ analogously to the one defined on $O(\Delta)$. In addition, the filtration on $O(\Delta)$ induces a filtration on $O(\Delta)'$. The isomorphism is then also filtration-preserving. \end{Definition} \noindent Now we consider the exact chain complexes $C_*$ and $C'_*$ associated to the groups $G$ and $G'$ as in Definition \ref{def:exact_chain_complexes}. We obtain the following descriptions: \begin{align*} C_p&\mathrel{\mathop :}=\begin{cases} H_n(O(\Delta)) & p=n+2 \\ H_{p-1}(O(\Delta)_p,O(\Delta)_{p-1}) & 1 \leq p \leq n+1 \\ \mathds Z & p=0 \\ 0 & \text{otherwise.} \end{cases}\\ \intertext{and} C_p'&\mathrel{\mathop :}=\begin{cases} H_{n-1}(O(\Delta)') & p=n+1 \\ H_{p-1}(O(\Delta)'_p,O(\Delta)'_{p-1}) & 1 \leq p \leq n \\ \mathds Z & p=0 \\ 0 & \text{otherwise.} \end{cases} \end{align*} We see that \[ C_p' \cong \mathds Z G' \otimes_{L_p'} M_p\qquad\text{ and }\qquad C_p\cong \mathds Z G \otimes_{L_p} M_p \] for $1\leq p\leq n$ (and likewise $C_{n+1}\cong \mathds Z G \otimes_{L_{n+1}} M_{n+1}$) and $C'_{n+1} = H_{n-1}(\lk_{O(\Delta)}(v_{n+1})) = M_{n+1}$, where \[ M_p = \tilde H_{p-2}(\lk_{O(\Delta)}(v_p)_{p-1}) \cong \tilde H_{p-2}(\lk_{O(\Delta)'}(v_p)_{p-1})\qquad\text{for } 2\leq p\leq n \] are the same modules for both groups $G$ and $G'$. \begin{Lemma} The inclusion $O(\Delta)'\hookrightarrow O(\Delta)$ induces a $G'$-equivariant chain map \[ \iota: C_*' \rightarrow C_*. \] \end{Lemma} \begin{Proof} The inclusion $O(\Delta)'\hookrightarrow O(\Delta)$ is $G'$-equivariant and filtration-preserving. The inclusions of pairs then induce maps \[ \iota_p: \underbrace{H_{p-1}(O(\Delta)'_p,O(\Delta)'_{p-1})}_{C_p'} \rightarrow \underbrace{H_{p-1}(O(\Delta)_p,O(\Delta)_{p-1})}_{C_p} \] for $1\leq p \leq n$, which are compatible with the boundary maps $\partial^C$ and $\partial^{C'}$. Of course $\iota_p=0$ for $p\leq -1$ and $p\geq n+2$.
Consider \[ \xymatrix{ 0 & \mathds Z\ar[l] & H_0(O(\Delta)_1) \ar[l]_-{\partial^C_1} & \ar[l]\cdots \\ 0 & \mathds Z\ar[l]\ar@.[u]^{\iota_0} & H_0(O(\Delta)'_1)\ar[l]_-{\partial^{C'}_1}\ar[u]^{\iota_1} & \ar[l]\cdots } \] Here $\iota_0\mathrel{\mathop :}= \partial^C_1 \circ \iota_1 \circ (\partial^{C'}_1)^{-1}$ is well-defined by a diagram chase. For $p=n+1$, consider \[ \xymatrix{ \cdots & H_{n-1}(O(\Delta)_n,O(\Delta)_{n-1})\ar[l] & H_n(O(\Delta)_{n+1},O(\Delta)_n)\ar[l]_-{\partial^C_{n+1}} & H_{n}(O(\Delta)_{n+1})\ar[l] \\ \cdots & H_{n-1}(O(\Delta)'_n,O(\Delta)'_{n-1})\ar[l]\ar[u]^{\iota_n} & \tilde H_{n-1}(O(\Delta)'_n) \ar[l]_-{\partial^{C'}_{n+1}}\ar@.[u]_{\iota_{n+1}} & 0.\ar[l] } \] Note that $O(\Delta)'_n=\lk_{O(\Delta)}(v_{n+1}) = \lk(v_{n+1})_n$, so \[ \tilde H_{n-1}(O(\Delta)'_n) \cong H_n\bigl(\st(v_{n+1})_{n+1},\lk(v_{n+1})_n\bigr) \] as in the proof of Proposition \ref{prop:filtered_homology}. The inclusion of pairs \[ \bigl(\st(v_{n+1})_{n+1},\lk(v_{n+1})_n\bigr) \hookrightarrow \bigl(O(\Delta)_{n+1},O(\Delta)_n\bigr) \] then induces the map $\iota_{n+1}$ on homology. A closer inspection using the explicit description of the boundary maps above shows that $\iota$ is a chain map. \end{Proof} \begin{Remark} It is not difficult to see that, under the above isomorphisms, the map $\iota_p$ is induced by the canonical inclusions \[ \mathds Z G'\otimes_{L'_p} M_p \hookrightarrow \mathds Z G \otimes_{L_p} M_p \] for $1\leq p \leq n$ and that $\iota_{n+1}: M_{n+1} \rightarrow \mathds Z G \otimes_{L_{n+1}} M_{n+1}$ is given by $m\mapsto 1\otimes_{L_{n+1}} m$. \end{Remark} \noindent With these chain complexes and the chain map, we are able to construct the following spectral sequence.
\begin{Theorem}[Stability pair spectral sequence]\label{th:stability_pair_spectral_sequence} For each stability pair $(G,G')$, there is a first-quadrant spectral sequence \[ E^1_{p,q}=\begin{cases} H_q(G,G';\mathds Z) & p=0 \\ H_q(L_p,L_p';M_p) & 1\leq p\leq n \\ H_q(L_{n+1},G';M_{n+1}) & p=n+1 \end{cases} \] which converges to zero. \end{Theorem} \begin{Proof} Let $F_*(G)$ and $F_*(G')$ be the standard resolutions of $\mathds Z$ over $\mathds Z G$ and $\mathds Z G'$, respectively. Consider the two double complexes $F_*(G')\otimes_{G'} C'_*$ and $F_*(G)\otimes_G C_*$ and the map of double complexes \begin{align*} i: F(G')\otimes_{G'} C' &\rightarrow F(G)\otimes_G C \\ f \otimes_{G'} c &\mapsto f \otimes_G \iota(c) \end{align*} induced by the inclusion $G'\hookrightarrow G$ and by $\iota$. We can apply Theorem \ref{th:relative_spectral_sequence} to obtain the relative spectral sequence \[ E^1_{p,q} = H_q(\Cone_*(i_{\bullet,p})) \Rightarrow 0, \] converging to zero by Corollary \ref{cor:ss_exact_coefficients} and the long exact sequence associated to the mapping cone complex. All that remains is the description of the first page terms. For $1\leq p\leq n$, we apply Lemma \ref{l:mapping_cone_relative_homology} to obtain \[ E^1_{p,q} \cong H_q\bigl( (F_*(G) \otimes_G C_p ) / (F_*(G') \otimes_{G'} C'_p)\bigr) \cong H_q\bigl( (F_*(G) \otimes_{L_p} M_p ) / (F_*(G') \otimes_{L'_p} M_p ) \bigr), \] which is isomorphic to $H_q(L_p,L'_p;M_p)$ by Proposition \ref{prop:relative_homology_of_subgroups}. For $p=n+1$, note that \begin{align*} E^1_{n+1,q} &= H_q\bigl( (F_*(G) \otimes_G C_{n+1} ) / (F_*(G') \otimes_{G'} C'_{n+1})\bigr) \\ &\cong H_q\bigl( (F_*(G) \otimes_{L_{n+1}} M_{n+1} ) / (F_*(G') \otimes_{G'} M_{n+1} ) \bigr), \end{align*} which is isomorphic to $H_q(L_{n+1},G';M_{n+1})$. \end{Proof} \begin{Remark}\label{rem:e1_vanishes_at_n+1} Note that by Lemma \ref{l:relative_h0}, we have $E^1_{p,0}=0$ for all $0\leq p\leq n+1$. 
\end{Remark} \section{Group theory} In the third part, we will apply the results of the first two parts to specific series of groups. We will only sketch the original application to general linear groups by Charney in \cite{Cha:HSD:80}. Instead, we cite a strong stability theorem for general linear groups by Sah, which we will use for the following proofs. In the following sections, we then prove homological stability for special linear groups, for unitary groups and for special orthogonal groups. \subsection{General linear groups}\label{sec:general_linear_groups} Let $D$ be any division ring. We consider the case where $G=\Gl_{n+2}(D)$ for $n\geq 2$. The associated building $\Delta$ is the flag complex over $(n+1)$-dimensional projective space, the opposition complex $O(\Delta)$ consists of pairs of complementary vector subspaces of $D^{n+2}$. We choose a basis $e_1,e_2,\ldots,e_{n+2}$ of $D^{n+2}$ corresponding to the standard apartment $\Sigma$. We choose the type filtration and the chamber $c_0$ in $\Sigma$ with vertices $v_p^+ = \langle e_1,\ldots,e_p \rangle$, hence $v_p^- = \langle e_{p+1},\ldots,e_{n+2}\rangle$. Consequently \[ L_p = \begin{pmatrix} \Gl_p(D) & 0 \\ 0 & \Gl_{n+2-p}(D) \end{pmatrix} \] for $1\leq p \leq n+1$. In particular \[ L_{n+1}\cong \begin{pmatrix} \Gl_{n+1}(D) & 0 \\ 0 & D^\times \end{pmatrix}. \] We choose $G' = \begin{pmatrix} \Gl_{n+1}(D) & 0 \\ 0 & 1 \end{pmatrix}$ which implies that \[ L'_p = \begin{pmatrix} \Gl_p(D) & 0 & 0 \\ 0 & \Gl_{n+1-p}(D) & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}. \] Obviously, we obtain $L_p\cong \Gl_p(D) \times \Gl_{n+2-p}(D)$ and $L'_p \cong \Gl_p(D)\times \Gl_{n+1-p}(D)$. By Theorem \ref{th:stability_pair_spectral_sequence} we obtain as in \cite{Cha:HSD:80} the following spectral sequence. 
\begin{Theorem}[Charney, Theorem 2.2 in \cite{Cha:HSD:80}] For any $n\geq 2$, there is a first-quadrant spectral sequence \[ E^1_{p,q} =\begin{cases} H_q\bigl(\Gl_{n+2}(D),\Gl_{n+1}(D);\mathds Z\bigr) & p=0 \\ H_q\bigl(\Gl_{p}(D)\times \Gl_{n+2-p}(D),\Gl_p(D)\times \Gl_{n+1-p}(D); M_p\bigr) & 1\leq p \leq n\\ H_q\bigl(\Gl_{n+1}(D)\times D^\times, \Gl_{n+1}(D);M_{n+1}\bigr) & p=n+1 \end{cases} \] which converges to zero. \end{Theorem} \begin{Remark} It is not difficult to see that the maps \[ E^1_{1,q} = H_q\bigl(D^\times \times \Gl_{n+1}(D), D^\times \times \Gl_n(D);\mathds Z\bigr) \rightarrow H_q\bigl(\Gl_{n+2}(D),\Gl_{n+1}(D);\mathds Z\bigr) = E^1_{0,q} \] are induced by the inclusion of pairs, which is an important ingredient in the homological stability proof by Charney described below. \end{Remark} \noindent Note that the vertices of $\lk(v_p)_{p-1}$ are given by \[ \lk(v_p)_{p-1}^0 = \bigl\{ (V,W) : 0\neq V\subsetneq \langle e_1,\ldots,e_p\rangle, W \supsetneq \langle e_{p+1},\ldots,e_{n+2}\rangle, V\oplus W=D^{n+2}\bigr\}. \] In particular, the following factors of the Levi subgroups \[ \begin{pmatrix} \mathds 1_p & 0 \\ 0 & \Gl_{n+2-p}(D) \end{pmatrix}\qquad\text{and}\qquad\begin{pmatrix} \mathds 1_p & 0 & 0\\ 0 & \Gl_{n+1-p}(D) & 0 \\ 0 & 0 & 1 \end{pmatrix} \] act trivially on $\lk(v_p)_{p-1}$ and hence on $M_p$. We can hence apply the relative Lyndon\slash Hochschild-Serre spectral sequence (Theorem \ref{th:relative_lyndon_hochschild_serre}) to obtain a description of the first page of the aforementioned spectral sequence in terms of integral relative group homology of smaller general linear groups. This can be used to apply an ingenious ``bootstrap procedure'' to prove homological stability inductively for general linear groups as in \cite{Cha:HSD:80}. The result, originally proved for Dedekind rings, is \begin{Theorem}[Charney, Theorem 3.2 in \cite{Cha:HSD:80}] For $n\geq 3k$ we have \[ H_k(\Gl_{n+1}(D),\Gl_{n}(D);\mathds Z) = 0.
\] \end{Theorem} \noindent This theorem is far from optimal; to the author's knowledge, the following result by Sah is the strongest one for division rings with infinite centre. \begin{Theorem}[Sah, Appendix B in \cite{Sah:HcL:86}]\label{th:sah} If $D$ has infinite centre, then $n\geq k$ implies \[ H_k(\Gl_{n+1}(D),\Gl_n(D);\mathds Z) =0. \] \end{Theorem} The discussion in \cite[Appendix B]{Sah:HcL:86} is very brief. A detailed exposition of this result can be found in \cite{Ess:HSG:06}. A different proof is given in \cite[2.3]{Knu:HLG:01}. For arbitrary division rings, compare this very special case of a general result by van der Kallen: \begin{Theorem}[van der Kallen, \cite{vdK:HSL:80}]\label{th:vdk_gln} For any division ring $D$ we have \[ H_k(\Gl_{n+1}(D),\Gl_n(D);\mathds Z) =0 \] for $n\geq 2k$. \end{Theorem} \subsection{Special linear groups} In this section, we consider special linear groups $\Sl_{n+2}(D)$ over infinite fields $D$. It is well known that the groups $\Sl_{n+2}(D)$ also act strongly transitively on the associated building $\Delta$ from the previous section. The opposition complex $O(\Delta)$ and hence also the modules $M_p$ are the same as for general linear groups. In the case $G=\Sl_{n+2}(D)$ for $n\geq 2$, the Levi subgroups admit the following structure: \[ L_p = \biggl\{\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} : A \in \Gl_{p}(D), B \in \Gl_{n+2-p}(D), \det(A)\det(B)=1 \biggr\}, \] which splits as a semidirect product \begin{align*} L_p &= \biggl\{\begin{pmatrix} A & 0 & 0 \\ 0 & \det(A)^{-1} & 0 \\ 0 & 0 & \mathds 1_{n+1-p} \end{pmatrix} : A \in \Gl_p(D) \biggr\}\ltimes\begin{pmatrix} \mathds 1_{p} & 0 \\ 0 & \Sl_{n+2-p}(D) \end{pmatrix}\\ & \cong \Gl_{p}(D) \ltimes \Sl_{n+2-p}(D), \end{align*} and the group $\Sl_{n+2-p}(D)$ acts trivially on $M_p$ by the same argument as in Section \ref{sec:general_linear_groups}.
In particular \[ L_{n+1} = \biggl\{\begin{pmatrix} A & 0 \\ 0 & \det(A)^{-1} \end{pmatrix} : A\in\Gl_{n+1}(D) \biggr\}\text{, and we choose } G'=\begin{pmatrix} \Sl_{n+1}(D) & 0 \\ 0 & 1 \end{pmatrix}. \] We obtain \begin{align*} L_{p}' &=\biggl\{\begin{pmatrix} A & 0 & 0 \\ 0 & \det(A)^{-1} & 0 \\ 0 & 0 & \mathds 1_{n+1-p} \end{pmatrix} : A \in \Gl_p(D) \biggr\}\ltimes\begin{pmatrix} \mathds 1_{p} & 0 &0 \\ 0 & \Sl_{n+1-p}(D) &0 \\ 0 & 0 & 1 \end{pmatrix}\\ &\cong \Gl_{p}(D) \ltimes \Sl_{n+1-p}(D), \end{align*} again with $\Sl_{n+1-p}(D)$ acting trivially on $M_p$. In particular, we have \begin{align*} L_1 &= \biggl\{\begin{pmatrix} \det(A)^{-1} & 0 \\ 0 & A \end{pmatrix} : A\in\Gl_{n+1}(D) \biggr\}\cong \Gl_{n+1}(D) \\ L'_1&=\biggl\{\begin{pmatrix} \det(A)^{-1} & 0 & 0 \\ 0 & A & 0 \\ 0 & 0 & 1 \end{pmatrix} : A \in \Gl_{n}(D) \biggr\}\cong \Gl_{n}(D). \end{align*} \noindent As in \cite{Cha:HSD:80}, we apply Theorem \ref{th:stability_pair_spectral_sequence} to obtain \begin{Theorem}[Charney, Theorem 2.3 in \cite{Cha:HSD:80}]\label{th:sln_spectral_sequence} For any $n\geq 2$, there is a first-quadrant spectral sequence \[ E^1_{p,q} =\begin{cases} H_q\bigl(\Sl_{n+2}(D),\Sl_{n+1}(D);\mathds Z\bigr) & p=0 \\ H_q\bigl(\Gl_{p}(D)\ltimes \Sl_{n+2-p}(D),\Gl_p(D)\ltimes \Sl_{n+1-p}(D); M_p\bigr) & 1\leq p \leq n\\ H_q\bigl(\Gl_{n+1}(D),\Sl_{n+1}(D);M_{n+1}\bigr) & p=n+1 \end{cases} \] which converges to zero. \end{Theorem} \noindent Charney then uses this spectral sequence and a version of the relative Lyndon\slash Hochschild-Serre spectral sequence (Theorem \ref{th:relative_lyndon_hochschild_serre}) to prove homological stability for $n\geq 3k$. One can improve this result using Theorem \ref{th:sah}, however. \begin{Theorem}\label{th:sln_result} If $D$ is an infinite field, then $n\geq 2k-1$ implies \[ H_k(\Sl_{n+1}(D),\Sl_{n}(D);\mathds Z)=0. 
\] \end{Theorem} \begin{Proof} Instead of the statement above, we will prove the equivalent claim that $n\geq 2k-2$ implies \[ H_k(\Sl_{n+2}(D),\Sl_{n+1}(D);\mathds Z)=0. \] Since relative $H_0$ always vanishes by Lemma \ref{l:relative_h0} and since $H_1(\Sl_{n+2}(D);\mathds Z)=0$ for all $n\geq 0$ by \cite[2.2.3]{HOM:CGK:89}, we can start an induction over $k$. Let $k\geq 2$. As induction hypothesis, assume that \[ H_l(\Sl_{n+2}(D),\Sl_{n+1}(D);\mathds Z)=0\quad\text{ for all $l<k$ and all $n\geq 2l-2$.} \] We will show that $H_k(\Sl_{n+2}(D),\Sl_{n+1}(D);\mathds Z)=0$ for all $n\geq 2k-2$. For any $n\geq 2k-2\geq 2$, by Theorem \ref{th:sln_spectral_sequence}, we have the spectral sequence $E^1_{p,q}$ converging to zero. Note first of all that \[ E^1_{0,q} = H_q(\Sl_{n+2}(D),\Sl_{n+1}(D);\mathds Z). \] In particular, we have to prove that $E^1_{0,k}=0$. Since the spectral sequence converges to zero, it is enough to prove $E^1_{p,q}=0$ for all $p+q\leq k+1$ and $p\geq 1$, which will then imply $E^1_{0,q}=0$ for all $0\leq q\leq k$. The situation is illustrated in Figure \ref{fig:sln}. We will prove the vanishing of the modules separately for the regions I, II and III.
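For orientation, consider the smallest instance of the induction step, $k=2$ and $n=2$ (so that $n=2k-2$); the following tabulation of the terms with $p+q\leq k+1=3$ and $p\geq 1$, together with the reasons for their vanishing, is only our illustration of the argument that follows.

```latex
\[
\begin{aligned}
&\text{region I:} && E^1_{1,1},\ E^1_{1,2}
  && \text{vanish by Theorem \ref{th:sah}, since } n=2\geq q,\\
&\text{region II:} && E^1_{2,1}
  && \text{vanishes by the induction hypothesis for } l=1,\\
&\text{region III:} && E^1_{1,0},\ E^1_{2,0},\ E^1_{3,0}
  && \text{vanish by the remark after Theorem \ref{th:stability_pair_spectral_sequence}.}
\end{aligned}
\]
```

Convergence to zero then forces $E^1_{0,2} = H_2(\Sl_4(D),\Sl_3(D);\mathds Z)=0$ in this instance.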
\begin{figure} \centering \begin{tikzpicture}[scale=0.5,font=\small] \draw[->] (0,0) -- (0,9); \draw[->] (0,0) -- (10,0); \draw (0,8) -- (2,8) -- (2,7) -- (3,7) -- (3,6) -- (4,6) -- (4,5) -- (5,5) -- (5,4) -- (6,4) -- (6,3) -- (7,3) -- (7,2) -- (8,2) -- (8,1) -- (9,1) -- (9,0); \node (h1) at (-.5,.5) {$0$}; \node (hk) at (-.5,7.5) {$k$}; \node (v1) at (.5,-.5) {$0$}; \node (vk) at (7.5,-.5) {$k$}; \node (p) at (10.5,0) {$p$}; \node (q) at (0,9.5) {$q$}; \node (leq) at (5,7.5) {$2\leq p \leq n$}; \draw[dotted] (1,8.5) -- (1,1); \draw[dotted] (2,8.5) -- (2,1); \draw[dotted] (8,8.5) -- (8,1); \draw[dotted] (-.5,1) -- (9,1) -- (9,-.5); \node (1) at (.5,7.5) {$*$}; \node (2) at (1.5,4) {I}; \node (3) at (3.5,2) {II}; \node (4) at (4.5,.5) {III}; \end{tikzpicture}\caption{$E^1_{p,q}$ in the proof of Theorem \ref{th:sln_result}}\label{fig:sln} \end{figure} For region I, that is for $p=1$ and $1\leq q\leq k$, note that \[ E^1_{1,q} \cong H_q(\Gl_{n+1}(D),\Gl_{n}(D);\mathds Z). \] Since $n\geq 2k-2\geq k\geq q$, we know that $E^1_{1,q}=0$ by Theorem \ref{th:sah}. For region II, that is $p+q\leq k+1$, $q\geq 1$ and $2\leq p\leq n$, we have in particular $q\leq k-1$. Additionally \[ 2q-2+p = (p+q) + q -2 \leq 2k-2 \leq n, \] so $n-p\geq 2q-2$. By the induction hypothesis, we hence have \begin{equation} H_q(\Sl_{n-p+2}(D), \Sl_{n-p+1}(D);\mathds Z) = 0 \label{eq:vanishing_terms_of_relative_lhs} \end{equation} for all $p,q$ with $p+q\leq k+1$ and $p\geq 2$. For the terms $E^1_{p,q}=H_q(L_p,L'_p;M_p)$ in region II, consider the relative Lyndon\slash Hochschild-Serre spectral sequence from Theorem \ref{th:relative_lyndon_hochschild_serre}: \[ L^2_{i,j} = H_i(\Gl_p(D);H_j(\Sl_{n+2-p}(D),\Sl_{n+1-p}(D);\mathds Z)\otimes_\mathds Z M_p) \Rightarrow H_{i+j}(L_p,L'_p;M_p). \] Note that we proved in \eqref{eq:vanishing_terms_of_relative_lhs} that $L^2_{i,j}=0$ for $j\leq q\leq k+1-p$, hence $H_q(L_p,L'_p;M_p)=0$ for all $p$, $q$ in region II. Hence $E^1_{p,q}=0$ for region II. 
Finally, note that the modules in region III, that is, all $E^1_{p,0}$ with $0\leq p\leq k+1\leq n+1$, vanish by the remark after Theorem \ref{th:stability_pair_spectral_sequence}. Inspecting the spectral sequence $E^1_{p,q}$ yields $E^1_{0,k}=E^{\infty}_{0,k}=0$, which proves the theorem. \end{Proof} \begin{Remark} The best known stability result for special linear groups, due to Hutchinson and Tao \cite{HT:HSS:08}, gives $n\geq k$ in the case where $D$ is a field of characteristic zero. For all other infinite fields, the best result previously known to the author is due to van der Kallen \cite{vdK:HSL:80}, proving stability for $n\geq 2k$ for rings with stable rank 1; our result above hence improves the known stability range for these fields by one. \end{Remark} \subsection{Unitary groups} In this section, let $D$ be a division ring. Let $J:D\rightarrow D$ be an involution and let $\varepsilon=\pm 1$. Let $V$ be a finite-dimensional right $D$-vector space. Choose an $(\varepsilon,J)$-hermitian form $h:V\times V\rightarrow D$, that is, $h$ is bi-additive and \begin{align*} h(av,w) &= a^Jh(v,w),& h(v,aw)&=h(v,w)a,& h(w,v)&= h(v,w)^J\varepsilon. \end{align*} If the characteristic of $D$ is two, we have to assume that $h$ is trace-valued; see \cite[Chapter 6]{HOM:CGK:89} for further details. A subspace $X\leq V$ is \emph{totally isotropic} if $h(v,w)=0$ for all $v, w\in X$. Denote by $(n+1)$ the \emph{Witt index of $h$}, the maximal dimension of a totally isotropic subspace of $V$. Then $2(n+1)\leq \dim(V)$. It is known that $V$ splits non-canonically as an orthogonal sum $V=\mathcal H_{n+1}\perp W$, where $W$ is \emph{anisotropic}, that is, $h(v,v)\neq 0$ for $v\in W\setminus\{0\}$, and where $\mathcal H_{n+1}$ is a \emph{hyperbolic module}; in particular $\dim(\mathcal H_{n+1})=2(n+1)$. A good reference for unitary groups over division rings is \cite[Chapter 6]{HOM:CGK:89}.
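A toy example may help fix these notions (this example is ours, not part of the original development): take $V=D^2$ with coordinates written on the right, and set

```latex
\[
h\bigl((v_1,v_2),(w_1,w_2)\bigr) \;=\; v_1^J w_2 \;+\; \varepsilon\, v_2^J w_1 .
\]
```

Bi-additivity and the rules $h(va,w)=a^Jh(v,w)$ and $h(v,wa)=h(v,w)a$ follow coordinatewise, and since $(ab)^J=b^Ja^J$ and $\varepsilon^2=1$, one checks $h(w,v)=w_1^Jv_2+\varepsilon\, w_2^Jv_1=h(v,w)^J\varepsilon$. Both coordinate lines are totally isotropic, so the Witt index is $1$; hence $V=\mathcal H_1$ with $W=\{0\}$.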
\begin{Definition} The \emph{unitary group associated to $h$} is \[ \U(V) = \{ A\in\Gl(V) : h(Av,Aw) = h(v,w) \text{ for all } v,w\in V\}, \] the subgroup of $h$-preserving linear automorphisms of $V$. \end{Definition} \noindent It is well known that $\U(V)$ admits a Tits system of type $C_{n+1}$, which is weak if $J=\id$, $\dim(V)=2(n+1)$ and $\varepsilon\neq -1$; this is the case of orthogonal groups. We also allow this case explicitly. The corresponding building $\Delta$ is isomorphic to the flag complex over all totally isotropic subspaces of $V$. The opposition complex $O(\Delta)$ is then isomorphic to the flag complex over pairs of opposite totally isotropic subspaces of $V$, where $X\leq V$ is \emph{opposite} $Y\leq V$ if $X\oplus Y^\perp = V$. It is known that there is a basis \[ e_{-(n+1)},\ldots,e_{-1},e_1,\ldots,e_{n+1}\] of the hyperbolic module $\mathcal H_{n+1}$ such that \[ h(e_i,e_j) =\begin{cases} \varepsilon\qquad & \text{if } i+j=0 \text{ and } i>0 \\ 1 & \text{if } i+j=0\text{ and } i<0 \\ 0 & \text{otherwise.} \end{cases} \] This basis determines an apartment of the building $\Delta$. We choose the standard chamber $c_0$ of $\Delta$ and the type enumeration such that \[ v_p = ( \langle e_{-(n+1)},\ldots,e_{-p}\rangle, \langle e_{p},\ldots,e_{n+1}\rangle ) \] and we write $\mathcal H_p=\langle e_{-p},\ldots,e_{-1},e_1,\ldots,e_p\rangle$ and $\mathcal H_0=\{0\}$. It is a simple calculation to see that the Levi subgroups $L_p$ then admit the following structure: \begin{align*} L_p &=\biggl\{\begin{pmatrix} S & 0 & 0 & 0 \\ 0 & A & 0 & B \\ 0 & 0 & S^{-J} & 0 \\ 0 & C & 0 & D \end{pmatrix} : S\in \Gl_{n+2-p}(D), \begin{pmatrix} A & B \\ C & D \end{pmatrix}\in \U(\mathcal H_{p-1}\perp W)\biggr\}\\ &\cong \Gl_{n+2-p}(D)\times \U(\mathcal H_{p-1}\perp W).
\end{align*} In particular, we have \[ L_{n+1} = \biggl\{\begin{pmatrix} s & 0 & 0 & 0 \\ 0 & A & 0 & B \\ 0 & 0 & s^{-J} & 0 \\ 0 & C & 0 & D \end{pmatrix} : s\in D^\times, \begin{pmatrix} A & B \\ C & D \end{pmatrix}\in \U(\mathcal H_n\perp W)\biggr\}. \] We choose $G'$ to be \[ G' = \biggl\{\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & A & 0 & B \\ 0 & 0 & 1 & 0 \\ 0 & C & 0 & D \end{pmatrix} : \begin{pmatrix} A & B \\ C & D \end{pmatrix}\in \U(\mathcal H_n\perp W)\biggr\}\cong \U(\mathcal H_n\perp W), \] hence \begin{align*} L'_p &= \biggl\{\begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & S & 0 & 0 & 0 & 0 \\ 0 & 0 & A & 0 & 0 & B \\ 0 & 0 & 0 & S^{-J} & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & C & 0 & 0 & D \end{pmatrix} : S \in\Gl_{n+1-p}(D), \begin{pmatrix} A & B \\ C & D \end{pmatrix}\in \U(\mathcal H_{p-1}\perp W)\biggr\}\\ &\cong \Gl_{n+1-p}(D) \times \U(\mathcal H_{p-1}\perp W). \end{align*} For $n\geq 2$, we apply Theorem \ref{th:stability_pair_spectral_sequence} to obtain a spectral sequence \[ E^1_{p,q} \cong\begin{cases} H_q\bigl(\U(\mathcal H_{n+1}\perp W),\U(\mathcal H_{n}\perp W);\mathds Z\bigr) & p=0 \\ H_q\bigl(\Gl_{n+2-p}(D)\times \U(\mathcal H_{p-1}\perp W),\\ \hspace{4cm} \Gl_{n+1-p}(D) \times \U(\mathcal H_{p-1}\perp W); M_p\bigr) \hspace{1cm} & 1\leq p \leq n\\ H_q\bigl(D^\times\times \U(\mathcal H_n\perp W), \U(\mathcal H_n\perp W);M_{n+1}\bigr) & p=n+1 \end{cases} \] which converges to zero. Using this spectral sequence, we obtain a generalisation of the theorems by Vogtmann for orthogonal groups in \cite{Vog:HSO:79} and \cite{Vog:SPH:81}, which were later generalised to Dedekind rings by Charney in \cite{Cha:gtV:87}. \begin{Theorem}\label{th:un_result} For a division ring $D$ with infinite centre, the relative homology modules \[ H_k\bigl(\U(\mathcal H_{n+1}\perp W), \U(\mathcal H_n\perp W);\mathds Z\bigr) \] vanish for $n\geq 2$ if $k=1$ and for $n\geq k\geq 2$. If the centre of $D$ is finite, relative homology vanishes for $n\geq 2k$. 
\end{Theorem} \begin{Proof} Note first that all assumptions imply $n\geq 2$, so we can construct the spectral sequence $E^1_{p,q}$ as above. For $2\leq p\leq n$, the vertices of $\lk(v_p)_{p-1}$ are given by \[ \lk(v_p)_{p-1}^0 = \bigl\{ (X,Y) : \langle e_{-(n+1)},\ldots,e_{-p}\rangle \subsetneq X, Y \supsetneq \langle e_p,\ldots,e_{n+1}\rangle, X\oplus Y^{\perp}=V\bigr\} \] In particular, the general linear group factors of $L_p$ and $L'_p$ act trivially on this filtrated link and hence on $M_p$. For $p\geq 1$, we can hence apply the relative Lyndon\slash Hochschild-Serre spectral sequence (Theorem \ref{th:relative_lyndon_hochschild_serre}) to obtain \[ L^2_{i,j} = H_i( \U(\mathcal H_{p-1}\perp W); H_j(\Gl_{n+2-p}(D),\Gl_{n+1-p}(D);\mathds Z)\otimes_\mathds Z M_p) \Rightarrow H_{i+j}(L_p,L'_{p};M_p). \] \begin{figure} \centering \begin{tikzpicture}[scale=0.5,font=\small] \draw[->] (0,0) -- (0,9); \draw[->] (0,0) -- (10,0); \draw (0,8) -- (2,8) -- (2,7) -- (3,7) -- (3,6) -- (4,6) -- (4,5) -- (5,5) -- (5,4) -- (6,4) -- (6,3) -- (7,3) -- (7,2) -- (8,2) -- (8,1) -- (9,1) -- (9,0); \node (h1) at (-.5,.5) {$0$}; \node (hk) at (-.5,7.5) {$k$}; \node (v1) at (.5,-.5) {$0$}; \node (vk) at (7.5,-.5) {$k$}; \node (p) at (10.5,0) {$p$}; \node (q) at (0,9.5) {$q$}; \node (leq) at (5,7.5) {$1\leq p \leq k\leq n$}; \draw[dotted] (1,8.5) -- (1,1); \draw[dotted] (8,8.5) -- (8,1); \draw[dotted] (-.5,1) -- (9,1) -- (9,-.5); \node (1) at (.5,7.5) {$*$}; \node (3) at (3,3) {I}; \node (4) at (4.5,.5) {II}; \end{tikzpicture}\caption{$E^1_{p,q}$ in the proof of Theorem \ref{th:un_result}}\label{fig:un} \end{figure} Again, we want to show that $E^1_{0,k}=0$. We do this by showing that $E^1_{p,q}=0$ for $p+q\leq k+1$ and $p\geq 1$. The situation is illustrated in Figure \ref{fig:un}. We will prove the vanishing of these modules separately for the regions I and II. First of all, consider region II, which consists of the modules $E^1_{p,0}$ with \[ 1\leq p\leq k+1\leq n+1. 
\] These modules vanish by the remark after Theorem \ref{th:stability_pair_spectral_sequence}. Region I is given by $p+q\leq k+1$, $q\geq 1$ and $1\leq p\leq k\leq n$. We distinguish cases. If the centre of $D$ is infinite, by Sah's result (Theorem \ref{th:sah}), we have \[ H_j(\Gl_{n+2-p}(D),\Gl_{n+1-p}(D);\mathds Z)=0 \] for $j\leq n+1-p$. So $E^1_{p,q}$ vanishes for $p+q\leq n+1$ and $1\leq p\leq n$ by the relative Lyndon\slash Hochschild-Serre spectral sequence. Since $n\geq k$, the modules $E^1_{p,q}$ vanish in region I. If the centre of $D$ is finite, van der Kallen's result (Theorem \ref{th:vdk_gln}) implies $E^1_{p,q}=0$ for $p+2q\leq n+1$. By hypothesis, $n\geq 2k$, so $p+q\leq k+1$ and $q\leq k$ imply $p+2q\leq 2k+1\leq n+1$, which shows again that $E^1_{p,q}=0$ in region I. This proves the theorem. \end{Proof} \begin{Remark} Mirzaii and van der Kallen proved homological stability for unitary groups over local rings with infinite residue fields in \cite{MaB:HSU:02} and \cite{Mir:HSU:05} with a slightly weaker stability range of $n\geq k+1$. We restrict ourselves to division rings with infinite centre, but on the other hand we allow an anisotropic kernel. \end{Remark} \noindent Note that, if $W=\{0\}$ and $J=\id$, which forces $D$ to be a field, we obtain stability results for the following two special cases. \begin{Theorem} For the \emph{symplectic} and \emph{orthogonal groups over an infinite field $D$}, we obtain \begin{align*} H_k\bigl(\Sp_{2n+2}(D),\Sp_{2n}(D);\mathds Z\bigr) &=0 \\ H_k\bigl(\Orth_{n+1,n+1}(D),\Orth_{n,n}(D);\mathds Z\bigr) &=0 \end{align*} if $k=1$ and $n\geq 2$ or $n\geq k\geq 2$. If $D$ is a finite field, then the relative homology groups vanish for $n\geq 2k$. \end{Theorem} \noindent By a combination of the methods for Theorem \ref{th:sln_result} and Theorem \ref{th:un_result}, results on special unitary groups can also be obtained.
The following result on special orthogonal groups is particularly interesting: \begin{Theorem}\label{th:son_result} For an infinite field $D$, we have \[ H_k\bigl(\SO_{n+1,n+1}(D),\SO_{n,n}(D);\mathds Z\bigr) =0 \] for $k=1$ and $n\geq 2$ or $n \geq k\geq 2$. If $D$ is a finite field, then the relative homology groups vanish for $n\geq 2k$. \end{Theorem} \begin{Proof} In this case, note that $J=\id$. The Levi subgroups hence have the following simple structure: \[ L_p =\biggl\{\begin{pmatrix} S & 0 & 0 & 0 \\ 0 & A & 0 & B \\ 0 & 0 & S^{-1} & 0 \\ 0 & C & 0 & D \end{pmatrix} : S\in \Gl_{n+2-p}(D), \begin{pmatrix} A & B \\ C & D \end{pmatrix}\in \SO_{p-1,p-1}(D)\biggr\} \] Since by assumption $n\geq 2$, we can apply Theorem \ref{th:stability_pair_spectral_sequence} to obtain a relative spectral sequence converging to zero. We have to prove $E^1_{0,k}=0$. As in the proof of Theorem \ref{th:un_result}, the general linear factors of $L_p$, $L'_p$ act trivially on $M_p$ and we consider the relative Lyndon\slash Hochschild-Serre spectral sequence to obtain \[ L^2_{i,j} = H_i\bigl( \SO_{p-1,p-1}(D); H_j(\Gl_{n+2-p}(D),\Gl_{n+1-p}(D);\mathds Z)\otimes_\mathds Z M_p\bigr) \Rightarrow H_{i+j}(L_p,L'_{p};M_p). \] Now, for infinite fields, note that by Theorem \ref{th:sah}, for $j\leq n+1-p$, we have $L^2_{i,j}=0$. In particular, $p+q\leq k+1\leq n+1$ and $p\geq 1$ implies $E^1_{p,q}=0$, except for $p=n+1=k+1$, where we apply the remark after Theorem \ref{th:stability_pair_spectral_sequence} again. We obtain the theorem by inspection of the spectral sequence. For finite fields, by Theorem \ref{th:vdk_gln}, for $2j\leq n+1-p$, we have $L^2_{i,j}=0$. So $p+q\leq k+1$ and $q\leq k$ imply $p+2q\leq 2k+1\leq n+1$, so $E^1_{p,q}=0$, and we obtain the result. \end{Proof} \bibliographystyle{alpha}
\section{Introduction} The problem of learning from several modalities simultaneously has garnered the attention of several deep learning researchers over the past few years~\cite{ngiam2011multimodal}--\cite{sohn2014improved}. This is primarily because of the wide availability of such data, and the numerous real-world applications where multimodal data is used. For instance, speech may be accompanied by text, and the resulting data can be used for training speech-to-text or text-to-speech engines. Even within the same medium, several modalities may exist simultaneously, for instance, the plan and elevation of a 3D object, or multiple translations of a text. The task of learning from several modalities simultaneously is complicated by the fact that the correlations within a modality are often much stronger than the correlations across modalities. Hence, many multi-modal learning approaches such as~\cite{ngiam2011multimodal}\cite{srivastava2012multimodal} try to capture the cross-modal correlations at an abstract latent feature level rather than at the visible feature level. The assumption is that the latent features are comparatively less correlated than the visible features, and hence, the latent features from different modalities can be concatenated and a single distribution can be learnt for the concatenated latent features~\cite{srivastava2012multimodal}. An alternative approach to capture the joint distribution is by modelling the conditional distribution across modalities, as done in~\cite{sohn2014improved}, whereby the authors make the simplifying assumption that the joint log-likelihood is maximized when the conditional log-likelihood of each modality given the other modality is maximized. While the assumption is untrue in general, the idea of learning conditional distributions to capture the joint distribution has several advantages.
In particular, the conditional distributions are often less complex to model, since conditioning on one modality reduces the possibilities for the other modality. Moreover, if the underlying task is to generate one modality given the other, then learning conditional distributions directly addresses this task. Hence, we address the problem of multimodal learning by capturing the conditional distributions. In particular, we use a variational approximation to the joint log-likelihood for training. In this paper, we restrict ourselves to directed graphical models, whereby a latent representation is sampled from one modality (referred to as the conditioning modality) and the other modality (referred to as the generated modality) is then sampled from the latent representation. Hence, the model is referred to as conditional multimodal autoencoder~(CMMA). \section{Problem Formulation and Proposed Solution} A formal description of the problem is as follows. We are given an $i.i.d$ sequence of $N$ datapoints $\{({\mathbf{x}}^{(1)}, {\mathbf{y}}^{(1)}), \dotsc, ({\mathbf{x}}^{(N)}, {\mathbf{y}}^{(N)})\}$. For a fixed datapoint $({\mathbf{x}}, {\mathbf{y}})$, let ${\mathbf{x}}$ be the modality that we wish to generate and ${\mathbf{y}}$ be the modality that we wish to condition on. We assume that ${\mathbf{x}}$ is generated by first sampling a real-valued latent representation ${\mathbf{z}}$ from the distribution $p({\mathbf{z}}|{\mathbf{y}})$, and then sampling ${\mathbf{x}}$ from the distribution $p({\mathbf{x}}|{\mathbf{z}})$. The graphical representation of the model is given in Figure~\ref{fig:graphical}. Furthermore, we assume that the conditional distribution of the latent representation ${\mathbf{z}}$ given ${\mathbf{y}}$ and the distribution of ${\mathbf{x}}$ given ${\mathbf{z}}$ are parametric. 
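As a toy numerical sketch of this two-stage generative process, the following uses hypothetical linear maps in place of the learned functions that will later parametrize $p({\mathbf{z}}|{\mathbf{y}})$ and $p({\mathbf{x}}|{\mathbf{z}})$; all dimensions and weights here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

dim_y, dim_z, dim_x = 4, 3, 5  # toy sizes, not from the paper

# Hypothetical stand-ins for the learned maps: y -> (mean, log-variance) of p(z|y)
Wf_mu = rng.normal(size=(dim_z, dim_y))
Wf_sig = rng.normal(size=(dim_z, dim_y))
# z -> (mean, log-variance) of p(x|z)
Wg_mu = rng.normal(size=(dim_x, dim_z))
Wg_sig = rng.normal(size=(dim_x, dim_z))

def sample_x_given_y(y):
    # First stage: z ~ N(f_mu(y), diag(exp(f_sigma(y))))
    z = Wf_mu @ y + np.exp(0.5 * (Wf_sig @ y)) * rng.normal(size=dim_z)
    # Second stage: x ~ N(g_mu(z), diag(exp(g_sigma(z))))
    return Wg_mu @ z + np.exp(0.5 * (Wg_sig @ z)) * rng.normal(size=dim_x)

x = sample_x_given_y(rng.normal(size=dim_y))
print(x.shape)  # (5,)
```

This mirrors Figure~\ref{fig:graphical}: ${\mathbf{y}}$ determines the distribution of ${\mathbf{z}}$, and ${\mathbf{z}}$ alone determines the distribution of ${\mathbf{x}}$.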
\begin{figure} \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.7\linewidth]{graph.png} \caption{\small A graphical representation of CMMA} \label{fig:graphical} \end{minipage} \hspace{.5cm} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.7\linewidth]{graph2.png} \caption{\small A graphical representation of conditional VAE as well as conditional GAN} \label{fig:graphical2} \end{minipage} \end{figure}% Given the above description of the model, our aim is to find the parameters so as to maximize the joint log-likelihood of ${\mathbf{x}}$ and ${\mathbf{y}}$ for the given sequence of datapoints. The log-likelihood comprises a term for $\log p({\mathbf{x}}| {\mathbf{y}})$ (referred to as the conditional log-likelihood) and a term for $\log p({\mathbf{y}})$. The computation of $p({\mathbf{x}}| {\mathbf{y}})$ requires the marginalization of the latent variable ${\mathbf{z}}$ from the joint distribution $p({\mathbf{x}}, {\mathbf{z}}| {\mathbf{y}})$: \begin{align} p({\mathbf{x}}|{\mathbf{y}}) &= \int p({\mathbf{x}}, {\mathbf{z}}| {\mathbf{y}}) \mathrm{d}{\mathbf{z}} \\ &= \int p({\mathbf{x}}| {\mathbf{z}})p({\mathbf{z}}| {\mathbf{y}}) \mathrm{d}{\mathbf{z}} \end{align} For most choices of $p({\mathbf{x}}| {\mathbf{z}})$ and $p({\mathbf{z}}| {\mathbf{y}})$, the evaluation of the conditional log-likelihood is intractable. Hence, we resort to the maximization of a variational lower bound on the conditional log-likelihood. This is achieved by approximating the posterior distribution of ${\mathbf{z}}$ given ${\mathbf{x}}$ and ${\mathbf{y}}$, that is, $p({\mathbf{z}}| {\mathbf{x}}, {\mathbf{y}})$, by a tractable distribution $q({\mathbf{z}}| {\mathbf{x}}, {\mathbf{y}})$. This is explained in more detail in the following section.
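To make the marginalization concrete, here is a hypothetical one-dimensional example (all distributions invented for illustration) where $p({\mathbf{x}}|{\mathbf{y}})$ happens to have a closed form, so a naive Monte Carlo estimate of the integral can be checked against it:

```python
import numpy as np

rng = np.random.default_rng(1)

def normal_logpdf(x, mu, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

# Toy linear-Gaussian case: p(z|y) = N(y, 1) and p(x|z) = N(z, 1),
# for which the marginal is exactly p(x|y) = N(y, 2).
y, x = 0.7, 1.5

# Naive Monte Carlo: p(x|y) ~= (1/S) * sum_s p(x|z_s)  with  z_s ~ p(z|y)
S = 200_000
z = y + rng.normal(size=S)
log_mc = np.log(np.mean(np.exp(normal_logpdf(x, z, 1.0))))

log_exact = normal_logpdf(x, y, 2.0)
print(round(float(log_mc), 2), round(float(log_exact), 2))
```

For nontrivial parametrizations no such closed form exists, and sampling-based estimates of the integral become expensive, which is precisely why a variational treatment is used in the following section.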
\subsection{The variational bound}\label{sec:var_bound} For a given i.i.d.\ collection of datapoints, $\{({\mathbf{x}}^{(1)}, {\mathbf{y}}^{(1)}), \dotsc, ({\mathbf{x}}^{(N)}, {\mathbf{y}}^{(N)})\}$, the log-likelihood can be written as \begin{table} \centering \begin{tabular}{|c|c|c|} \hline Distribution & Parametric form & Representation\\ \hline $p({\mathbf{z}}| {\mathbf{y}})$ & $\mathcal{N}(f_\mu({\mathbf{y}}), e^{f_\sigma({\mathbf{y}})})$ & $p_f({\mathbf{z}}| {\mathbf{y}})$ \\ \hline $p({\mathbf{x}}| {\mathbf{z}})$ & $\mathcal{N}(g_{\mu}({\mathbf{z}}), e^{g_{\sigma}({\mathbf{z}})})$ & $p_g({\mathbf{x}}| {\mathbf{z}})$\\ \hline $q({\mathbf{z}}|{\mathbf{x}})$ & $\mathcal{N}(h_{1\mu}({\mathbf{x}}), e^{h_{1\sigma}({\mathbf{x}})})$ & $q_{h_1}({\mathbf{z}}|{\mathbf{x}})$\\ \hline $q({\mathbf{y}}|{\mathbf{z}})$ & $\mathcal{N}(h_{2\mu}({\mathbf{z}}), e^{h_{2\sigma}({\mathbf{z}})})$ & $q_{h_2}({\mathbf{y}}|{\mathbf{z}})$\\ \hline \end{tabular} \vspace{.3cm} \caption{Parametric forms for the distributions used in the paper and their representations demonstrating the explicit dependence on $f,g$ and $h$.} \label{tab:parametric} \end{table} \begin{align} &\log p\left(({\mathbf{x}}^{(1)}, {\mathbf{y}}^{(1)}), \dotsc, ({\mathbf{x}}^{(N)}, {\mathbf{y}}^{(N)})\right) \notag\\ & \hspace{1cm} = \sum_{i=1}^N\log p({\mathbf{x}}^{(i)} , {\mathbf{y}}^{(i)}) \end{align} Let the posterior distribution be approximated by a distribution whose graphical representation is shown in Figure~\ref{fig:graphical}. In particular, let $q({\mathbf{z}}|{\mathbf{x}})$ be an approximation to the posterior distribution of the latent variables given ${\mathbf{x}}$, and let $q({\mathbf{y}}|{\mathbf{z}})$ be an approximation to the posterior distribution of ${\mathbf{y}}$ given ${\mathbf{z}}$.
For an individual datapoint, the conditional log-likelihood can be rewritten as \begin{align} &\log p({\mathbf{x}}| {\mathbf{y}}) \notag\\ & = \mathbb{E}_{q({\mathbf{z}}| {\mathbf{x}}) } \log \frac{ p({\mathbf{x}}, {\mathbf{z}}| {\mathbf{y}})}{p({\mathbf{z}} |{\mathbf{x}},{\mathbf{y}})} \\ & = \mathrm{KL}\left[q(.|{\mathbf{x}}) || p(.|{\mathbf{x}}, {\mathbf{y}}) \right] + \mathbb{E}_{q({\mathbf{z}}| {\mathbf{x}}) } \log \frac{ p({\mathbf{x}}, {\mathbf{z}}| {\mathbf{y}})}{ q({\mathbf{z}} |{\mathbf{x}})} \notag\\ &\ge \mathbb{E}_{q({\mathbf{z}}| {\mathbf{x}}) }\log \frac{ p({\mathbf{x}}, {\mathbf{z}}| {\mathbf{y}})}{ q({\mathbf{z}} |{\mathbf{x}})}\,, ~\label{eq:VariationalLowerBound} \end{align} where $\mathrm{KL}(p||q)$ refers to the $\mathrm{KL}$-divergence between the distributions $p$ and $q$ and is always non-negative. Note that the choice of the decomposition of the posterior $q({\mathbf{z}}, {\mathbf{y}}| {\mathbf{x}})$ as $q({\mathbf{z}}|{\mathbf{x}})q({\mathbf{y}}|{\mathbf{z}})$ forces the distribution $q({\mathbf{z}}|{\mathbf{x}})$ to be `close' to the true posterior $p({\mathbf{z}}|{\mathbf{x}}, {\mathbf{y}})$, thereby encouraging the model to learn features from ${\mathbf{x}}$ alone that are representative of ${\mathbf{y}}$ as well. The term in equation~\eqref{eq:VariationalLowerBound} is referred to as the variational lower bound for the conditional log-likelihood for the datapoint $({\mathbf{x}}, {\mathbf{y}})$ and will be denoted by $\mathcal{L}(p,q;{\mathbf{x}}, {\mathbf{y}})$.
It can further be rewritten as \begin{align} &\mathcal{L}(p,q;{\mathbf{x}}, {\mathbf{y}}) \notag\\ &=\mathbb{E}_{q({\mathbf{z}}| {\mathbf{x}}) }\log { p({\mathbf{x}}| {\mathbf{z}})} + \mathbb{E}_{q({\mathbf{z}}| {\mathbf{x}}) }\log \frac{ p({\mathbf{z}}| {\mathbf{y}})}{ q({\mathbf{z}} |{\mathbf{x}})} \notag\\ & = \mathbb{E}_{q({\mathbf{z}}| {\mathbf{x}}) }\log { p({\mathbf{x}}| {\mathbf{z}})} - \mathrm{KL}\left[q({\mathbf{z}}|{\mathbf{x}}) || p({\mathbf{z}}|{\mathbf{y}})\right] \label{simplifiedLowerBound} \end{align} From the last equation, we observe that the variational lower bound can be written as the sum of two terms. The first term is the negative of the reconstruction error of ${\mathbf{x}}$, when reconstructed from the encoding ${\mathbf{z}}$ of ${\mathbf{x}}$. The second term ensures that the encoding of ${\mathbf{x}}$ is `close' to the corresponding encoding of ${\mathbf{y}}$, where closeness is defined in terms of the $\mathrm{KL}$-divergence between the corresponding distributions. Adding $\log p({\mathbf{y}})$ to the above bound, we obtain a lower bound on the joint log-likelihood. It has been shown in~\cite{bengio2013deep} that for learning a distribution from samples, it is sufficient to train the transition operator of a Markov chain whose stationary distribution is the distribution that we wish to model. Using this idea, we replace $\log p({\mathbf{y}})$ by $\mathbf{E}_{p({\mathbf{z}}|{\mathbf{y}})} \log q({\mathbf{y}}| {\mathbf{z}})$. Note that while the two terms will be quite different, the gradients with respect to the parameters for the two terms are expected to be `close'. \subsection{The reparametrization} In order to simplify the computation of the variational lower bound, we assume that conditioned on ${\mathbf{y}}$, the latent representation ${\mathbf{z}}$ is normally distributed with mean $f_\mu({\mathbf{y}})$ and a diagonal covariance matrix whose diagonal entries are given by $e^{f_\sigma({\mathbf{y}})}$.
Moreover, conditioned on ${\mathbf{z}}$, ${\mathbf{x}}$ is normally distributed with mean $g_{\mu}({\mathbf{z}})$ and a diagonal covariance matrix whose diagonal entries are given by $e^{g_{\sigma}({\mathbf{z}})}$. In the rest of the paper, we assume $f_\mu, f_\sigma, g_{\mu}$ and $g_{\sigma}$ to be multi-layer perceptrons. Furthermore, we approximate the posterior distribution of ${\mathbf{z}}$ given ${\mathbf{x}}$ and ${\mathbf{y}}$ by a normal distribution with mean $h_\mu({\mathbf{x}}, {\mathbf{y}})$ and a diagonal covariance matrix whose diagonal entries are given by $e^{h_\sigma({\mathbf{x}}, {\mathbf{y}})}$, where $h_\mu$ and $h_\sigma$ are again multi-layer perceptrons. In order to make the dependence of the distributions on $f,g$ and $h$ explicit, we represent $p({\mathbf{z}}|{\mathbf{y}})$ as $p_f({\mathbf{z}}|{\mathbf{y}})$, $p({\mathbf{x}}|{\mathbf{z}})$ as $p_g({\mathbf{x}}|{\mathbf{z}})$ and $q({\mathbf{z}}|{\mathbf{x}}, {\mathbf{y}})$ as $q_h({\mathbf{z}}|{\mathbf{x}}, {\mathbf{y}})$. For reference, the parametric forms of the likelihood, prior and posterior distributions and their representations demonstrating the explicit dependence on $f,g$ and $h$ are given in Table~\ref{tab:parametric}. The above assumptions simplify the calculation of $\mathrm{KL}$-divergence and $\log p({\mathbf{x}}| {\mathbf{z}})$. Let $f_j$ denote the $j^{th}$ component of the function $f$ and the size of the latent representation be $J$. After ignoring the constant terms, the $\mathrm{KL}$-divergence term of the variational lower bound can be written as \begin{align} &\mathrm{KL}(q_h({\mathbf{z}}| {\mathbf{x}}, {\mathbf{y}})|| p_f({\mathbf{z}}| {\mathbf{y}})) = \notag\\ &\frac{1}{2} \sum_{j=1}^J \bigg[f_{j\sigma}({\mathbf{y}}) - h_{j\sigma}({\mathbf{x}}, {\mathbf{y}}) + \exp\left(h_{j\sigma}({\mathbf{x}}, {\mathbf{y}}) - f_{j\sigma}({\mathbf{y}}) \right) \notag\\ &\hspace{3.5cm} + \left.
\frac{(h_{j\mu}({\mathbf{x}}, {\mathbf{y}}) - f_{j\mu}({\mathbf{y}}))^2}{\exp(f_{j\sigma}({\mathbf{y}}))}\right] \label{eq:KLDiv} \end{align} The negative reconstruction error term in the variational lower bound in~\eqref{simplifiedLowerBound} can be obtained by generating samples from the posterior distribution of ${\mathbf{z}}$ given ${\mathbf{x}}$ and ${\mathbf{y}}$, and then averaging the negative reconstruction error over these samples. For a fixed ${\mathbf{z}}$, the term can be written, up to an additive constant, as \begin{align} \log{p_g({\mathbf{x}}|{\mathbf{z}})} = -\left[ \sum_{l=1}^m \frac{(g_{l\mu}({\mathbf{z}}) - {\mathbf{x}}_l )^2}{2\exp(g_{l\sigma}({\mathbf{z}}))} + \frac{1}{2}\sum_{l=1}^m g_{l\sigma}({\mathbf{z}}) \right] \label{eq:recon} \end{align} The choice of the posterior allows us to sample ${\mathbf{z}}$ as follows: \begin{align} \boldsymbol{\epsilon}& \sim \mathcal{N}(0, I) \\ {\mathbf{z}} & = h_{\mu}({\mathbf{x}}, {\mathbf{y}}) + \boldsymbol{\epsilon} \odot e^{h_{\sigma} ({\mathbf{x}}, {\mathbf{y}})/2} \end{align} where $\odot$ denotes elementwise multiplication. Hence, the negative reconstruction error can alternatively be rewritten as \begin{equation} \mathbb{E}_{\boldsymbol{\epsilon \small{\sim} \mathcal{N}(0,I)} }\log { p_g({\mathbf{x}}| h_{\mu}({\mathbf{x}}, {\mathbf{y}}) + \boldsymbol{\epsilon} \odot e^{h_{\sigma} ({\mathbf{x}}, {\mathbf{y}})/2})} \,, \label{eq:totalRecon} \end{equation} where $\log{p_g({\mathbf{x}}|.)}$ is as defined in \eqref{eq:recon}. In order to train the model using first-order methods, we need to compute the derivative of the variational lower bound with respect to the parameters of the model. Let $\theta_f, \theta_g$ and $\theta_h$ be the parameters of $\{f_\mu, f_\sigma\}$, $\{g_\mu, g_\sigma\}$ and $\{h_\mu, h_\sigma\}$ respectively. Note that the KL-divergence term in \eqref{eq:KLDiv} depends only on $\{f_\mu, f_\sigma\}$ and $\{h_\mu, h_\sigma\}$. Its derivatives with respect to $\theta_f$ and $\theta_h$ can be computed via the chain rule.
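As a numerical sanity check (toy values, not from the paper), the expression in \eqref{eq:KLDiv} can be compared against the textbook KL-divergence between univariate Gaussians; since \eqref{eq:KLDiv} ignores constant terms, it differs from the full KL by $-J/2$:

```python
import numpy as np

rng = np.random.default_rng(2)
J = 4  # toy latent dimension

# Invented means and log-variances for q = N(h_mu, diag(exp(h_sig)))
# and p = N(f_mu, diag(exp(f_sig)))
h_mu, h_sig = rng.normal(size=J), rng.normal(size=J)
f_mu, f_sig = rng.normal(size=J), rng.normal(size=J)

# The KL term as written in the text (constant terms ignored):
kl_paper = 0.5 * np.sum(
    f_sig - h_sig + np.exp(h_sig - f_sig) + (h_mu - f_mu) ** 2 / np.exp(f_sig)
)

# Standard per-dimension KL( N(m1, s1^2) || N(m2, s2^2) )
#   = log(s2/s1) + (s1^2 + (m1 - m2)^2) / (2 s2^2) - 1/2
s1, s2 = np.exp(0.5 * h_sig), np.exp(0.5 * f_sig)
kl_full = np.sum(np.log(s2 / s1) + (s1**2 + (h_mu - f_mu) ** 2) / (2 * s2**2) - 0.5)

print(np.isclose(kl_paper - J / 2, kl_full))  # True
```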
From \eqref{eq:totalRecon}, the derivative of the negative reconstruction error with respect to $\{\theta_g, \theta_h\}$ is given by \begin{equation} \mathbb{E}_{\boldsymbol{\epsilon \small{\sim} \mathcal{N}(0,I)} } \nabla_{\{\theta_g, \theta_h\}}\log { p_g({\mathbf{x}}| h_{\mu}({\mathbf{x}}, {\mathbf{y}}) + \boldsymbol{\epsilon} \odot e^{h_{\sigma} ({\mathbf{x}}, {\mathbf{y}})/2})} \end{equation} The term inside the expectation can again be evaluated using the chain rule. \begin{figure*} \centering \includegraphics[width=.8\linewidth, height = .35\linewidth]{implementation2.png} \caption{\small A pictorial representation of the implemented model. The KL-divergence between $q_h({\mathbf{z}}|{\mathbf{x}}, {\mathbf{y}})$ and $p_f({\mathbf{z}}|{\mathbf{y}})$ is computed using~\eqref{eq:KLDiv} and backpropagated to update the parameters $\theta_h$ and $\theta_f$. Similarly, the negative reconstruction error is computed using equation~\eqref{eq:recon} for the specific ${\mathbf{z}}$ and its gradient is backpropagated to update the parameters $\theta_g$ and $\theta_h$.} \label{fig:representation} \end{figure*}% \subsection{Implementation details} We use minibatch training to learn the parameters of the model, whereby the gradient of the model with respect to the model parameters $\{\theta_f,\theta_g, \theta_h\}$ is computed for every minibatch and the corresponding parameters updated. While the gradient of the KL-divergence can be computed exactly from~\eqref{eq:KLDiv}, the gradient of the negative reconstruction error in~\eqref{eq:totalRecon} requires one to sample standard normal random vectors, compute the gradient for each sampled vector, and then take the mean. In practice, when the minibatch size is large enough, it is sufficient to sample one standard normal random vector per training example, and then compute the gradient of the negative reconstruction error with respect to the parameters for this vector.
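The single-sample approximation can be illustrated on a stripped-down stand-in for the reconstruction term (plain squared error instead of \eqref{eq:recon}; every quantity below is an invented toy): averaged over a large batch, one noise sample per example already matches the exact per-example expectation closely.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000  # a large toy "minibatch"

# Hypothetical per-example posterior means, standard deviations and targets
mu = rng.normal(size=N)
sig = np.exp(0.3 * rng.normal(size=N))
x = rng.normal(size=N)

# Exact per-example expectation of the negative squared error:
#   E_eps[ -(x - (mu + eps * sig))^2 ] = -((x - mu)^2 + sig^2)
exact = np.mean(-((x - mu) ** 2 + sig**2))

# One standard normal sample per example, as in minibatch training:
eps = rng.normal(size=N)
one_sample = np.mean(-((x - (mu + eps * sig)) ** 2))

print(round(float(exact), 2), round(float(one_sample), 2))
```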
This one-sample approximation has also been observed to suffice for the variational autoencoder in~\cite{kingma2013auto}. A pictorial representation of the implemented model is given in Figure~\ref{fig:representation}. First, ${\mathbf{x}}$ and ${\mathbf{y}}$ are fed to the neural network $h$ to generate the mean and log-variance of the distribution $q_h({\mathbf{z}}|{\mathbf{x}}, {\mathbf{y}})$. In parallel, ${\mathbf{y}}$ is fed to the neural network $f$ to generate the mean and log-variance of the distribution $p_f({\mathbf{z}}|{\mathbf{y}})$. The KL-divergence between $q_h({\mathbf{z}}|{\mathbf{x}}, {\mathbf{y}})$ and $p_f({\mathbf{z}}|{\mathbf{y}})$ is computed using~\eqref{eq:KLDiv}, and its gradient is backpropagated to update the parameters $\theta_f$ and $\theta_h$. Furthermore, the mean and log-variance of $q_h({\mathbf{z}}|{\mathbf{x}}, {\mathbf{y}})$ are used to sample ${\mathbf{z}}$, which is then forwarded to the neural network $g$ to compute the mean and log-variance of the distribution $p_g({\mathbf{x}}|{\mathbf{z}})$. Finally, the negative reconstruction error is computed using equation~\eqref{eq:recon} for the specific ${\mathbf{z}}$ and its gradient is backpropagated to update the parameters $\theta_g$ and $\theta_h$. \section{Related Works} \label{relatedWorks} Over the past few years, several deep generative models have been proposed. They include deep Boltzmann machines (DBM)~\cite{salakhutdinov2009deep}, generative adversarial networks (GAN)~\cite{goodfellow2014generative}, variational autoencoders (VAE)~\cite{kingma2013auto} and generative stochastic networks (GSN)~\cite{bengio2013deep}. DBMs learn a Markov random field with multiple latent layers, and have been effective in modelling MNIST and NORB data. However, the training of DBMs involves a mean-field approximation step for every instance in the training data, and hence, they are computationally expensive. Moreover, there are no tractable extensions of deep Boltzmann machines for handling spatial equivariance.
All the other models mentioned above can be trained using backpropagation or its stochastic variant, and hence can incorporate the recent advances in training deep neural networks, such as faster libraries and better optimization methods. In particular, a GAN learns a distribution on data by forcing the generator to generate samples that are `indistinguishable' from training data. This is achieved by learning a discriminator whose task is to distinguish between the generated samples and samples in the training data. The generator is then trained to fool the discriminator. Though this approach is intuitive, it requires a careful selection of hyperparameters. Moreover, given the data, one cannot sample the latent variables from which it was generated, since the posterior is never learnt by the model. In a VAE, the posterior distribution of the latent variables conditioned on the data is approximated by a normal distribution, whose mean and variance are the output of a neural network (distributions other than normal can also be used). This allows approximate estimation of the variational log-likelihood, which can be optimized using stochastic backpropagation~\cite{icml2014c2_rezende14}. Both GAN and VAE are directed probabilistic models with an edge from the latent layer to the data. Conditional extensions of both these models for incorporating attributes/labels have also been proposed~\cite{kingma2014semi}\cite{gauthierconditional}\cite{mirza2014conditional}. The graphical representation of a conditional GAN or conditional VAE is shown in Figure~\ref{fig:graphical2}. As can be observed, both these models assume the latent layer to be independent of the attributes/labels. This is in stark contrast with our model CMMA, which assumes that the latent layer is sampled conditioned on the attributes. It is also informative to compare the variational lower bound of the conditional log-likelihood for a CVAE with \eqref{simplifiedLowerBound}.
The lower bound for a CVAE is given by \begin{equation} \log p({\mathbf{x}}|{\mathbf{y}}) \ge \mathbb{E}_{q({\mathbf{z}}|{\mathbf{x}}, {\mathbf{y}})} \log p({\mathbf{x}}| {\mathbf{y}}, {\mathbf{z}}) - \mathrm{KL}(q({\mathbf{z}}|{\mathbf{x}}, {\mathbf{y}}) || p({\mathbf{z}})) \label{CVAElowerBound} \end{equation} Note that while the lower bound in the proposed model CMMA contains a KL-divergence term to explicitly force the latent representation from ${\mathbf{y}}$ to be `close' to the latent representation from both ${\mathbf{x}}$ and ${\mathbf{y}}$, there is no such term in the lower bound of the CVAE. This proves to be a disadvantage for the CVAE, as is reflected in the experiments section. \begin{figure}% \centering \subfloat[The prior distribution over the latent representations $p({\mathbf{z}}| {\mathbf{y}})$ for several randomly selected individuals. The width of the circles corresponds to the standard deviation. Note the high overlap between the priors, despite the attributes being different for the individuals.]{{\includegraphics[width=5cm]{prior.jpg} }}% \qquad \subfloat[The contours of the prior distribution for individual 1 (in blue), and the contours of the posterior distributions for individuals 1, 2 and 3 (in red). The high uncertainty in the prior causes the posterior distributions of several individuals to lie within the prior distribution of individual 1.]{{\includegraphics[width=5cm]{contour_multiple.jpg} }}% \caption{The prior and posterior distributions of the latent representation when the dimension of the latent layer is fixed to $2$.} \label{prior_and_posterior} \end{figure} \section{Experiments} We consider the task of learning a conditional distribution for the faces given the attributes. For this task, we use the cropped Labelled Faces in the Wild dataset\footnote{The dataset is available at http://conradsanderson.id.au/lfwcrop/} (LFW)~\cite{LFWTech}, which consists of $13,233$ faces of $5749$ people, of which $4069$ people have only one image.
The images are of size $64\times 64$ and contain $3$ channels (red, green and blue). Of the $13,233$ faces, $13,143$ faces have $73$ attributes associated with them, obtained partially using Amazon Mechanical Turk and partially using attribute classifiers~\cite{kumar2009attribute}. The attributes include `Male', `Asian', `No eye-wear', `Eyeglasses', `Moustache', `Mouth open', `Big nose', `Pointy nose', `Smiling', `Frowning', `Big lips' etc. The data also contains attributes for hair, necklace, earrings etc., though these attributes are not visible in the cropped images. We use the first $10,000$ faces and the corresponding $73$ attributes for training the model, the next $1000$ faces for validation, and keep the remaining faces and their corresponding attributes for testing. The LFW dataset is very challenging since most people in the dataset have only one image. Moreover, any possible combination of the $73$ attributes occurs at most once in the dataset. This forces the model to learn a mapping from attributes to faces that is shared across all possible combinations of attributes. In contrast, the face dataset used in~\cite{kulkarni2015deep} consists of several subsets of faces where only one attribute changes while others remain unchanged. Hence, one can tune the mapping from attributes to faces one attribute at a time. This, however, is not possible for LFW. In order to emphasize this factor, we show the prior ($p({\mathbf{z}}| {\mathbf{y}})$) and the posterior distribution ($q({\mathbf{z}}|{\mathbf{x}}, {\mathbf{y}})$) for a 2-dimensional latent representation of a few randomly selected individuals with the modalities $({\mathbf{x}}, {\mathbf{y}})$ in Figure~\ref{prior_and_posterior}. Note that despite conditioning on the attributes, the prior distributions have high uncertainty, and the prior distributions for several attribute combinations $p({\mathbf{z}}| {\mathbf{y}})$ overlap considerably, particularly in lower dimensions.
This overlap, however, decreases as the dimension of the latent layer increases. A VAE, on the other hand, assumes a common prior for all the individuals. Hence, one can think of conditioning in CMMA as tilting the prior of VAE in the direction of the conditioning modality. Moreover, the posterior always has much lower variance than the prior. Other than the fact that access to ${\mathbf{x}}$ substantially reduces the uncertainty, the reduced variance is also an artifact of variational methods in general. In particular, for the 2-dimensional latent representations, we observed an average standard deviation of $0.04$ for CMMA, and $0.036$ for VAE, in the posterior distribution of latent representations after $5000$ iterations, which did not reduce further. \subsection{CMMA architecture} The MLP $\{f_\mu, f_\sigma\}$ of the CMMA used in this paper (refer Figure~\ref{fig:representation}) encodes the attributes, and is a neural network with $800$ hidden units, a soft thresholding unit of the form $\log(1+e^x)$ and two parallel output layers, each comprising $500$ units. The MLPs $\{h_\mu, h_\sigma\}$ and $\{g_\mu, g_\sigma\}$ are convolution and deconvolution neural networks respectively. The corresponding architectures are given in Figure~\ref{fig:CMMA}. \begin{figure*} \centering \includegraphics[width=.7\linewidth]{CMMA.png} \caption{\small The architecture of the MLPs $\{h_\mu, h_\sigma\}$ (top) and $\{g_\mu, g_\sigma\}$ (bottom) of CMMA used in experiments.} \label{fig:CMMA} \end{figure*} \subsection{Models used for comparison} We compare the quantitative and qualitative performance of CMMA against conditional Generative Adversarial Networks~\cite{mirza2014conditional}\cite{gauthierconditional} (CGAN) and conditional Variational Autoencoders~\cite{kingma2013auto} (CVAE). We have tried to ensure that the architecture of the models used for comparison is as close as possible to the architecture of the CMMA used in our experiments.
Hence, the generator and discriminator of CGAN and the encoder and decoder of CVAE closely mimic the MLPs $g$ and $h$ of CMMA as described in the previous section. \subsection{Training} We coded all the models in Torch~\cite{collobert2011torch7} and trained each of them for $500$ iterations on a Tesla K40 GPU. For each model, the training time was approximately $1$ day. The Adagrad optimization algorithm was used~\cite{duchi2011adaptive}. The proposed model CMMA was found to be relatively stable to the selection of the initial learning rate and the variance of the randomly initialized weights in various layers. For CGAN, we selected the learning rate of the generator and discriminator and the variance of weights by monitoring the conditional log-likelihood on the validation set. Only the results from the best hyperparameters have been reported. We found the CGAN model to be quite unstable to the selection of hyperparameters. \begin{figure*} \subfloat[Faces generated from the proposed model CMMA.]{{\includegraphics[width=5.5cm]{CMMA.jpg} }} \qquad \subfloat[Faces generated from CVAE~\cite{kingma2014semi}.]{{\includegraphics[width=5.5cm]{CVAE.jpg} }} \qquad \subfloat[Faces generated from CGAN~\cite{gauthierconditional} using the hyperparameters used in~\cite{gauthierconditional}.]{{\includegraphics[width=5.5cm]{experiments/images_from_tags/new/tags_to_face_GAN.png} }} \qquad \subfloat[Faces generated from CGAN~\cite{gauthierconditional} using the hyperparameters selected by us.]{{\includegraphics[width=5.5cm]{CGAN.jpg} }} \caption{\small Faces generated from the attributes using various models (Best viewed in color). For a fixed model, the $4$ rows correspond to `Female Asian', `Female Not-Asian', `Male Asian' and `Male Not-Asian' in order. The remaining attributes are varied one at a time to generate the $7$ columns.
In particular, for each model, the $7$ columns of faces correspond to i) no change, ii) mouth open, iii) spectacles, iv) bushy eyebrows, v) big nose, vi) pointy nose and vii) thick lips. Note that, for our model CMMA, any change in attributes, such as mouth open, spectacles etc., is clearly reflected in the corresponding face. For other models, this change is not very evident from the corresponding faces.} \label{tag_to_face} \end{figure*}% \subsection{Quantitative results} For the first set of experiments, we compare the conditional log-likelihood of the faces given the attributes on the test set for the three models: CMMA, CGAN, and CVAE. A direct evaluation of the conditional log-likelihood is infeasible, and for the size of the latent layer used in our experiments (500), MCMC estimates of the conditional log-likelihood are unreliable. For the proposed model CMMA, a variational lower bound to the log-likelihood of the test data can be computed as the difference between the negative reconstruction error and the KL-divergence (see \eqref{eq:VariationalLowerBound}). The same can also be done for the CVAE model using \eqref{CVAElowerBound}. Since we cannot obtain the variational lower bound for the other models, we also use a Parzen-window based log-likelihood estimation method for comparing the three models. In particular, for a fixed test instance, we condition on the attributes to generate samples from the three models. A Gaussian Parzen window is fit to the generated samples, and the log-probability of the face in the test instance is computed for the obtained Gaussian Parzen window. The $\sigma$-parameter of the Parzen window estimator is obtained via cross-validation on the validation set. The corresponding log-likelihood estimates for the three models are given in Table~\ref{tab:Parzen}.
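The Parzen-window estimate described above can be sketched in plain Python (the helper name is ours; an isotropic Gaussian kernel of bandwidth $\sigma$ is assumed, and the log-probability is computed with log-sum-exp for numerical stability):

```python
import math

def parzen_log_likelihood(x, samples, sigma):
    """log of the Gaussian Parzen-window density at test point x:
    log (1/N) * sum_i N(x; s_i, sigma^2 I), for vector-valued samples."""
    d = len(x)
    log_norm = -0.5 * d * math.log(2 * math.pi * sigma ** 2)
    # per-sample Gaussian log-densities at x
    log_terms = [log_norm - sum((xi - si) ** 2 for xi, si in zip(x, s))
                 / (2 * sigma ** 2)
                 for s in samples]
    # log-sum-exp over the samples, minus log N
    m = max(log_terms)
    return m + math.log(sum(math.exp(t - m) for t in log_terms)) - math.log(len(samples))
```

In the experiments the bandwidth $\sigma$ would be chosen by cross-validation on the validation set, as stated in the text.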
\begin{table} \centering \begin{tabular}{|c|c|c|} \hline Model & Conditional Log-likelihood & Variational Lower Bound \\ \hline \textbf{CMMA} & \textbf{9,487} & \textbf{17,973}\\ CVAE & 8,714 & 14,356\\ CGAN & 8,320 & -\\ \hline \end{tabular} \vspace{.3cm} \caption{\small Parzen window based estimates and variational lower bound to conditional log-likelihood for the test data (Higher means better).} \label{tab:Parzen} \end{table} In both cases, the proposed model CMMA was able to achieve a better conditional log-likelihood than the other models. \subsection{Qualitative results} While the quantitative results do convey a sense of superiority of the proposed model over the other models used in comparison, it is more convincing to look at the actual samples generated by these models. Hence, we compare the three models CGAN, CVAE and CMMA for the task of generating faces from attributes. We also compare the two models CVAE and CMMA for modifying an existing face by changing the attributes. CGAN cannot be used for modifying faces because of the uni-directional nature of the model, that is, it is not possible to sample the latent layer from an image in a generative adversarial network. \subsubsection{Generating faces from attributes} In our first set of experiments, we generate samples from the attributes using the three already-trained models. In a CGAN, the images are generated by feeding noise and attributes to the generator. Similarly, in a CVAE, noise and attributes are fed to the MLP that corresponds to $p({\mathbf{x}}|{\mathbf{z}}, {\mathbf{y}})$ (see \eqref{CVAElowerBound}) to sample the images. In order to generate images from attributes in a CMMA, we prune the MLP $\{h_\mu, h_\sigma\}$ from the CMMA model (refer Figure~\ref{fig:representation}), and connect the MLP $\{f_\mu, f_\sigma\}$ in its stead as shown in Figure \ref{tag_to_face_rep}.
\begin{figure} \centering \includegraphics[width=.7\linewidth]{tag_to_face.png} \caption{\small The model used for generating faces from attributes in CMMA is obtained by removing the MLP $\{h_\mu, h_\sigma\}$ from the CMMA model (refer Figure~\ref{fig:representation}), and connecting the MLP $\{f_\mu, f_\sigma\}$ in its stead.} \label{tag_to_face_rep} \end{figure}% \begin{figure*} \subfloat[Modifying faces with CMMA.]{{\includegraphics[width=.45\linewidth]{experiments/from_faces_with_changed_tags/random.png} }} \qquad \subfloat[Modifying faces with CVAE.]{{\includegraphics[width=.45\linewidth]{experiments/from_faces_with_changed_tags/CVAE.png} }} \caption{\small Modifying the faces in the training data by modifying the corresponding attributes using CMMA and CVAE respectively (Best viewed in color). The rows in each of the above figures correspond to i) No change, ii) Big Nose, iii) Spectacles, iv) Moustache and v) Big lips. Except for spectacles, any other change in attributes is not reflected in the faces modified by CVAE.} \label{face_to_face} \end{figure*}% \begin{figure}[t] \subfloat[]{\includegraphics[width=.45\linewidth]{young1.png}} \qquad \subfloat[]{ \includegraphics[width=.45\linewidth]{old1.png}} \caption{Modification of faces by setting the `old' attribute.} \label{fig:old} \end{figure} We set/reset the `Male' and `Asian' attributes to generate four possible combinations. The faces are then generated by varying the other attributes one at a time. In order to remove any bias from the selection of images, we set the variance parameter of the noise level to $0$ in CMMA, CVAE and CGAN. The corresponding faces for our model CMMA, and the other models (CVAE~\cite{kingma2014semi} and CGAN~\cite{gauthierconditional}) are listed in Figure~\ref{tag_to_face}.
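With the noise variance set to $0$, generation from attributes in this pruned configuration reduces to decoding the mean of the conditional prior. A minimal sketch (the names `f_mu` and `g_mu` are ours, standing for the mean heads of the MLPs $\{f_\mu, f_\sigma\}$ and $\{g_\mu, g_\sigma\}$):

```python
def generate_face_from_attributes(y, f_mu, g_mu):
    """Deterministic generation from attributes alone: z is set to the
    mean of the conditional prior p(z | y) (zero noise variance), and is
    then decoded to the mean of p(x | z)."""
    z = f_mu(y)       # mean of the conditional prior p(z | y)
    return g_mu(z)    # mean of the decoder distribution p(x | z)
```

With a nonzero noise level, one would instead draw $z$ around $f_\mu(y)$ before decoding, yielding diverse faces for the same attribute combination.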
We have also presented the results from the implementation of CGAN\footnote{https://github.com/hans/adversarial} by the author of~\cite{gauthierconditional}, since the images sampled from the CGAN trained by us were quite noisy. The $7$ columns of images for each model correspond to the attributes i) no change, ii) mouth open, iii) spectacles, iv) bushy eyebrows, v) big nose, vi) pointy nose and vii) thick lips. As is evident from the first image in Figure~\ref{tag_to_face}, CMMA can incorporate any change in attribute such as `open mouth' or `spectacles' in the corresponding face for each of the $4$ rows. However, this does not seem to be the case for the other models. We hypothesize that this is because our model explicitly minimizes the KL-divergence between the latent representation of attributes and the joint representation of face and attributes. \subsubsection{Varying the attributes in existing faces} In our next set of experiments, we select a face from the training data, and vary the attributes to generate a modified face. For a CMMA, this can be achieved as follows (also refer Figure~\ref{fig:representation}): \begin{enumerate} \item Let \textit{attr\_orig} be the original attributes of the face and \textit{attr\_new} be the new attributes that we wish the face to possess. \item Pass the selected face and \textit{attr\_new} through the MLP $\{h_\mu, h_\sigma\}$. \item Pass \textit{attr\_orig} and \textit{attr\_new} through the MLP $\{f_\mu, f_\sigma\}$ and compute the difference. \item Add the difference to the output of the MLP $\{h_\mu, h_\sigma\}$. \item Pass the resultant sum through the decoder $\{g_\mu, g_\sigma\}$. \end{enumerate} As in the previous case, we have set the variance parameter of the noise level to $0$. Note that we cannot use CGAN for this set of experiments, since, given a face, it is not possible to sample the latent layer in a CGAN. Hence, we only present the results corresponding to our model CMMA and CVAE.
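The five-step procedure above can be sketched as follows (hypothetical names for the mean heads of the three MLPs; the direction of the difference in step 3, new minus original, is our assumption, since the text only says to compute the difference):

```python
def modify_face(x, attr_orig, attr_new, h_mu, f_mu, g_mu):
    """Edit a face by changing its attributes (noise variance set to 0):
    encode (x, attr_new), shift the latent code by the prior-mean
    difference f(attr_new) - f(attr_orig), then decode."""
    z = h_mu(x, attr_new)                                   # step 2
    shift = [n - o for n, o in zip(f_mu(attr_new),          # step 3
                                   f_mu(attr_orig))]
    z = [zi + si for zi, si in zip(z, shift)]               # step 4
    return g_mu(z)                                          # step 5
```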
The corresponding transformed faces are given in Figure~\ref{face_to_face}. As can be observed, for most of the attributes, our model, CMMA, is successfully able to transform images by removing moustaches, adding spectacles, and making the nose bigger or pointier, etc. \subsubsection{Modifying faces with missing attributes} Next, we select a few faces from the web and evaluate the performance of the model for modifying these faces. Since these faces come without annotations, one first needs to sample the attributes conditioned on the faces. The algorithm for modifying the faces mentioned in the previous section is then applied. The corresponding results are given in Figure~\ref{face_to_face_new}. \begin{figure*} \centering \subfloat[Unchanged image]{\includegraphics[width=.9\linewidth]{experiments/new_face_to_face/original.png}} \qquad \subfloat[With a big nose] {\includegraphics[width=.9\linewidth]{experiments/new_face_to_face/big_nose.png}} \qquad \subfloat[With moustache]{\includegraphics[width=.9\linewidth]{experiments/new_face_to_face/only_moustache.png}} \qquad \subfloat[With spectacles and moustache]{\includegraphics[width=.9\linewidth]{experiments/new_face_to_face/spectacles_moustache.png}} \caption{\small Modification of images with missing attributes (Best viewed in color).} \label{face_to_face_new} \end{figure*} \section{Concluding Remarks} In this paper, we proposed a model for conditional modality generation that forces the latent representation of one modality to be `close' to the joint representation for multiple modalities. We explored the applicability of the model for generating and modifying images using attributes. Quantitative and qualitative results suggest that our model is more suitable for this task than CGAN~\cite{gauthierconditional} and CVAE~\cite{kingma2014semi}.
The proposed model is general and can be used for other tasks in which some modalities need to be conditioned on while others need to be generated, for instance, translation of text or transliteration of speech. We wish to explore the applicability of the model to such problems in the future. \bibliographystyle{splncs}
\section{Introduction} \label{sec:Introduction} \IEEEPARstart{D}{eep} Neural Networks, and specifically models based on Convolutional Neural Networks (CNNs), have achieved remarkable success in several computer vision tasks during the last decade \cite{deng2009imagenet, lin2014microsoft, cordts2016cityscapes}. New advances in image databases, CNN architectures and training schemes have pushed forward the state-of-the-art in computer vision. However, the success of deep models usually comes hand in hand with the need for huge computational and memory resources to process vast databases for training them \cite{dosovitskiy2020image}. In this vein, there exists a line of research focused on using smaller models that need fewer computational resources for training while obtaining similar results to larger models. Techniques such as quantization \cite{jacob2018quantization}, network pruning \cite{luo2017thinet, zhou2018online, xiao2019autoprune, liu2018rethinking}, Knowledge Distillation \cite{hinton2015distilling, gou2021knowledge} or the design of efficient new architectures \cite{tan2019efficientnet, howard2019searching, cui2019fast} have been of great importance to achieve fast, compact, and easily deployable CNN models. \subsubsection*{\textbf{Knowledge Distillation}} Among these, Knowledge Distillation (KD) is of key relevance given its proven effectiveness in different computer vision tasks such as image classification, object detection and semantic segmentation \cite{gou2021knowledge}. KD was originally proposed by Hinton \textit{et al.} \cite{hinton2015distilling} as a strategy to improve the efficiency of CNNs by passing on knowledge from a teacher to a student model. Generally, the student model, usually defined as a smaller network, leverages the knowledge learnt by the teacher model, usually a bigger one, via training supervision.
Specifically, in Hinton's KD \cite{hinton2015distilling}, the student model is trained using supervision not only from the ground-truth labels, but also from the teacher's predicted logits. Compared to just relying on hard-label annotations, the additional use of the teacher's predictions as extra supervision provides an automatic label smoothing regularization \cite{muller2019does,yuan2020revisiting}. Feature-based Knowledge Distillation expanded the seminal KD scheme by building on the concept of representation learning: CNNs are effective at encoding knowledge at multiple levels of feature representation \cite{bengio2013representation}. The idea was first introduced by the FitNets \cite{romero2014fitnets}, which proposed to use the matching of intermediate CNN representations as the source knowledge that is transferred from the teacher to the student. \begin{figure*}[t] \centering \includegraphics[width=0.7\textwidth]{Images/Comparison_AM_2.pdf} \caption{Example of the obtained activation maps, at different levels of depth, for the scene recognition task (the scene class is \textit{hotel room}). Top rows represent activation maps for vanilla ResNet-18 and ResNet-50 CNNs, respectively. Bottom row represents the activation maps obtained by the proposed DCT Attention-based KD method when ResNet-50 acts as the teacher network and ResNet-18 acts as the student. AT \cite{komodakis2017paying} activation maps are also included for comparison.} \label{fig:ActivationMaps} \end{figure*} A specific subgroup of Feature-based KD methods is that of the Attention-based KD ones. This category was pioneered by Komodakis \textit{et al.} \cite{komodakis2017paying}. They proposed to further optimize FitNets by simplifying complete CNN features into attention/activation maps. The matching between the student activation maps and the teacher ones serves as supervision for the KD scheme.
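Such activation maps are typically obtained by collapsing a $C\times H\times W$ feature tensor along the channel dimension, e.g. by summing the $p$-th power of the absolute activations. A minimal plain-Python sketch follows (the tensor is a nested list indexed as [channel][row][col]; in the original AT method the vectorized maps are additionally normalized before the comparison, which is omitted here for brevity):

```python
def activation_map(features, p=2):
    """Collapse a C x H x W feature tensor into an H x W attention map
    by summing |activation|^p over channels (p = 2 in AT)."""
    C, H, W = len(features), len(features[0]), len(features[0][0])
    return [[sum(abs(features[c][i][j]) ** p for c in range(C))
             for j in range(W)] for i in range(H)]

def l2_map_distance(a, b):
    """Pixel-wise l2 distance between two H x W maps, the conventional
    way of matching student and teacher attention maps."""
    return sum((ai - bi) ** 2
               for ra, rb in zip(a, b)
               for ai, bi in zip(ra, rb)) ** 0.5
```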
The use of activation maps provides several advantages with respect to the direct use of features: first, as the matching of maps does not depend on channel dimensions, more architectures can be used in the KD process; second, it avoids the problem of semantic mismatching between features when KD is used between two significantly different architectures in terms of depth \cite{chen2020cross}. As depicted in Figure \ref{fig:ActivationMaps}, activation areas, although not being placed in the same image areas, are correlated in terms of the semantic concepts detected even when comparing considerably different models like ResNet-18 and ResNet-50. Due to its computational simplicity and convenient mathematical properties (it is differentiable, symmetric and satisfies the triangle inequality), as already stated by Gou \textit{et al.} \cite{gou2021knowledge}, the convention to compare either two feature tensors or a pair of activation maps is to compute the \(\ell_{2}\) norm of their difference. However, the performance of the \(\ell_{2}\) norm when used to simulate human perception of visual similarities has already been demonstrated to be poor \cite{zhao2015loss}: it might yield, due to its point-wise accumulation of differences, similar results for completely visually different images \cite{wang2009mean}. Furthermore, in the scope of Attention-based KD, another key problem of the \(\ell_{2}\) norm is its tendency towards \textit{desaturation} when it is used to guide an optimization process. Visual evidence of this problem is the \textit{sepia} effect in colorization \cite{zhang2016colorful}. We posit that the usage of the pixel-wise \(\ell_{2}\) norm for the comparison of activation maps can be replaced by global image-wise estimates for a better matching and knowledge transferring in Feature-based KD. \subsubsection*{\textbf{Contributions}} In this vein, we propose a novel matching approach based on a 2D discrete linear transform of the activation maps.
This novel technique, for which we here leverage the simple yet effective Discrete Cosine Transform (DCT) \cite{oppenheim2001discrete}, is based on the 2D relationships captured by the transformed coefficients, so that the matching is moved from a \textit{pixel-to-pixel} fashion to a correlation in the frequency domain, where each of the coefficients integrates spatial information from the whole image. Figure \ref{fig:ActivationMaps} depicts an example of the obtained activation maps when using the proposed DCT approach to match ResNet-50 ones. Note how the similarity is higher with respect to the ones obtained by AT \cite{komodakis2017paying}, a method based on an \(\ell_{2}\)-driven metric. In order to verify the effectiveness of the proposed method, this paper proposes an evaluation of KD in scene recognition, a task defined by strong spatial and contextual relationships among stuff and objects. Scene recognition models are associated with highly variable and sparse attention maps that have been proven to be of crucial relevance for better knowledge modelling and to explain overall performance \cite{lopez2020semantic}. Moreover, we claim that the state-of-the-art in KD is over-fitted to the canonical image classification task (Table \ref{tab:CIFAR100Results}, \cite{chen2021distilling}), where image concepts are represented by a single, usually centered, object (CIFAR and ImageNet datasets). We believe that moving KD research to a more complex task that uses more realistic datasets may be beneficial not only to assess the potential benefits of each KD method in an alternative scenario, but also to widen the scope of KD research and, in particular, to boost the efficiency of scene recognition models by using models with the same performance but with a significantly lower number of parameters.
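The idea can be sketched as follows: transform each activation map with a 2D DCT-II (a direct implementation suffices for small maps) and compare the resulting coefficient matrices, so that every compared value integrates spatial information from the whole map. The exact coefficient weighting of the proposed method is not reproduced here; the squared coefficient error below is one plausible instantiation, and the helper names are ours:

```python
import math

def dct2(x):
    """Unnormalized 2D DCT-II of an H x W map (direct O(H^2 * W^2)
    evaluation, adequate for small attention maps)."""
    H, W = len(x), len(x[0])
    return [[sum(x[i][j]
                 * math.cos(math.pi * (2 * i + 1) * u / (2 * H))
                 * math.cos(math.pi * (2 * j + 1) * v / (2 * W))
                 for i in range(H) for j in range(W))
             for v in range(W)] for u in range(H)]

def dct_matching_loss(map_s, map_t):
    """Compare a student and a teacher activation map through their DCT
    coefficients rather than pixel to pixel."""
    cs, ct = dct2(map_s), dct2(map_t)
    return sum((a - b) ** 2 for ru, rv in zip(cs, ct) for a, b in zip(ru, rv))
```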
In summary, this paper contributes to the KD task by: \begin{itemize} \item Proposing a novel DCT-based metric to compare 2D structures by evaluating their similarity in the DCT domain. We propose to use this technique in an Attention-based KD approach to compare activation maps from intermediate CNN layers more adequately. \item Presenting a thorough benchmark of Knowledge Distillation methods on three publicly available scene recognition datasets and reporting strong evidence that the proposed DCT-based metric enables a student network to better focus on the relevant image areas learnt by a teacher model, hence increasing the overall performance for scene recognition. \item Publicly releasing the KD framework used to train and evaluate the scene recognition models from the paper. This framework, given its simplicity and modularity, will enable the research community to develop novel KD approaches that can be effortlessly evaluated under the same conditions for scene recognition. \end{itemize} \section{Related Work} \label{sec:Related Work} \subsection{Knowledge-Distillation} As already introduced, KD is a strategy that defines a set of knowledge-transfer pathways to improve the efficiency of Deep Learning models. A teacher model is used to provide training supervision for a student model, usually a shallower one. Gou \textit{et al.} \cite{gou2021knowledge} propose to arrange KD into three different groups depending on the \textit{distilled} knowledge: response-based, relation-based and feature-based KD. The original KD idea, enclosed in the response-based group, was pioneered by Hinton \textit{et al.} \cite{hinton2015distilling}. They proposed to use teacher outputs in the form of logits to supervise, cooperatively with ground-truth labels, the training of the student network.
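This response-based scheme can be sketched as follows (plain Python; the temperature $T$, the weight $\alpha$ and the customary $T^2$ scaling of the soft term are hyperparameter choices, not values from the cited works):

```python
import math

def softmax_t(logits, T):
    """Temperature-softened softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.9):
    """Weighted sum of (i) cross-entropy between the softened teacher and
    student distributions and (ii) ordinary cross-entropy with the
    ground-truth label."""
    p_t = softmax_t(teacher_logits, T)
    p_s = softmax_t(student_logits, T)
    soft = -sum(t * math.log(s) for t, s in zip(p_t, p_s))
    hard = -math.log(softmax_t(student_logits, 1.0)[label])
    return alpha * (T ** 2) * soft + (1.0 - alpha) * hard
```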
The training using soft labels predicted by the teacher provided a strong regularization that benefited the student's performance in the image classification task \cite{muller2019does,yuan2020revisiting}. The seminal KD was improved by changing the way logits were compared. Passalis \textit{et al.} \cite{passalis2020probabilistic} proposed to use a divergence metric (the Kullback–Leibler divergence) to match the probability distributions obtained by the teacher and the student. In the same line, Tian \textit{et al.} proposed the use of contrastive learning \cite{tian2019contrastive}, which pushed response-based KD performance even further. Relation-based KD transfers the relationships between different activations, neurons or pairs of samples that are encoded by the teacher model to the student one. Yim \textit{et al.} \cite{yim2017gift} proposed a Flow of Solution Process (FSP), which is defined by the Gram matrix between two layers. The FSP matrix summarizes the relations between pairs of feature maps. Passalis \textit{et al.} \cite{passalis2020probabilistic} proposed to model abstract feature representations of the data samples by estimating their distribution using a kernel function. These estimated distributions, rather than the raw features, were then transferred to the student. Feature-based KD, as originally proposed by the FitNets transferring scheme \cite{romero2014fitnets}, deals with using the matching of intermediate CNN representations as the source knowledge that is transferred from the teacher to the student. Building on top of this idea, a variety of methods have been proposed. Ahn \textit{et al.} \cite{ahn2019variational} formulated feature KD as the maximization of the mutual information between teacher and student features. Guan \textit{et al.} \cite{guan2020differentiable} proposed a student-to-teacher path and a teacher-to-student path to properly obtain feature aggregations.
Chen \textit{et al.} \cite{chen2020cross} detected a decrease in performance when distilling knowledge caused by semantic mismatch between certain teacher-student layer pairs, and proposed to use attention mechanisms to automatically weight layers' combinations. Chen \textit{et al.} \cite{chen2021distilling} revealed the importance of connecting features across different levels between teacher and student networks. Within feature-based KD methods one can find the attention-based ones. Komodakis \textit{et al.} \cite{komodakis2017paying} proposed to simplify the intermediate features to create activation maps that were compared using an \(\ell_{2}\) difference. As already stated in Section \ref{sec:Introduction} and indicated by Gou \textit{et al.} \cite{gou2021knowledge}, it is a convention, not only in attention-based but also in feature-based KD methods, to build the matching metric on the \(\ell_{2}\) norm. We argue that this pixel-wise comparison might not be adequate when comparing multi-modal spatial structures such as attention maps. \subsection{Scene Recognition} Scene recognition is an active research topic and, according to reported performances \cite{lopez2020semantic}, one of the most complex tasks in image understanding. Its complexity lies partially in the ambiguity between different scene categories showing similar appearance and objects' distributions: inter-class boundaries can be blurry, as the set of objects that defines one scene might be highly similar to that of another. Nowadays, top-performing strategies are fully based on CNN architectures. Based on context information, Xie \textit{et al.} \cite{xie2017lg} proposed to enhance fine-grained recognition by identifying relevant part candidates based on saliency detection and by constructing a CNN architecture driven by both these local parts and global discrimination.
Zhao \textit{et al.} \cite{zhao2018volcano}, similarly, proposed a discriminative discovery network (DisNet) that generates a discriminative map (Dis-Map) for the input image. This map is then used to select scale-aware discriminative locations which are finally forwarded to a multi-scale pipeline for CNN feature extraction. A specific group of approaches in scene recognition is that trying to model relations between object information and scenes. Herranz-Perdiguero \textit{et al.} \cite{herranz2018pixels} extended the DeepLab network by introducing SVM classifiers to enhance scene recognition by estimating scene objects and stuff distribution based on semantic segmentation cues. In the same vein, Wang \textit{et al.} \cite{wang2017weakly} defined semantic representations of a given scene by extracting patch-based features from object-based CNNs. The proposed scene recognition method built on these representations\textemdash Vectors of Semantically Aggregated Descriptors (VSAD)\textemdash outperformed the state-of-the-art on standard scene recognition benchmarks. VSAD's performance was later enhanced by measuring correlations between objects among different scene classes \cite{cheng2018scene}. These correlations were then used to reduce the effect of common objects in scene misclassification and to enhance the effect of discriminative objects through a Semantic Descriptor with Objectness (SDO). Finally, L\'{o}pez-Cifuentes \textit{et al.} \cite{lopez2020semantic} argued that these methods relied on object information obtained by using patch-based object classification techniques, which entails severe and sensitive parametrization (scale, patch size, stride, overlap, ...). To solve this issue they proposed to exploit visual context by using semantic segmentation instead of object information to guide the network's attention.
By gating RGB features from information encoded in the semantic representation, their approach reinforced the learning of relevant scene contents and enhanced scene disambiguation by refocusing the receptive fields of the CNN towards the relevant scene contents. According to the literature, we posit that the differential characteristics of the scene recognition task with respect to the classical image classification one might be beneficial to boost and widen the scope of KD techniques. These characteristics include performance results that are not yet saturated, high ambiguity between different scene categories, and relevant image features that are spread throughout the image instead of being localized in a specific area\textemdash usually the center region of the image. \section{Attention-based Knowledge Distillation Driven by DCT Coefficients} \label{sec:Method} \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{Images/NetworkArchitecture.pdf} \caption{Example of the proposed gangways between two ResNet architectures representing the teacher and the student models. In this case, the intermediate feature representations for the Knowledge Distillation are extracted from the basic Residual Blocks. Besides this example, the proposed method can be applied to the whole set of ResNet, MobileNets, VGGs, ShuffleNets, GoogleNet and DenseNets families.} \label{fig:ProposedMethod} \end{figure*} Following the organization of KD methods proposed by Gou \textit{et al.} \cite{gou2021knowledge}, the following Section is divided into Knowledge (Section \ref{subsec:Knowledge}) and Distillation (Section \ref{subsec:Distillation Scheme}). Figure \ref{fig:ProposedMethod} depicts the proposed DCT gangways in an architecture exemplified with two ResNet branches.
\subsection{Knowledge} \label{subsec:Knowledge} \textbf{Attention Maps: } We rely on mean feature activation areas \cite{komodakis2017paying}, or attention maps, as the source of knowledge to be transferred from a teacher network to a student network. Given an image \(\textbf{I} \in \mathbb{R}^{3 \times W_I \times H_I}\), a forward pass until a depth \(l\) in a teacher CNN \(\psi_t\) and in a student CNN \(\psi_s\) yields feature tensors \(\psi_t(\mathbf{I},l) = \textbf{F}_{t,l} \in \mathbb{R}^{C_t \times W \times H}\) and \(\psi_s(\mathbf{I},l)=\textbf{F}_{s,l} \in \mathbb{R}^{C_s \times W \times H}\) respectively, with \(W\), \(H\) being the spatial dimensions and \(C_{t}\) and \(C_{s}\) the channel dimensions of the teacher and student features. An activation map for the teacher network \(\textbf{f}_{t,l} \in \mathbb{R}^{W \times H}\) can be obtained from these feature tensors by defining a mapping function \(\mathcal{H}\) that aggregates information from the channel dimensions: \begin{equation} \mathcal{H}: \textbf{F}_{t,l} \in \mathbb{R}^{C_t \times W \times H} \rightarrow \textbf{f}_{t,l} \in \mathbb{R}^{W \times H}. \end{equation} The mean squared activations of neurons can be used as an aggregated indicator of the attention of the given CNN with respect to the input image. Accordingly, we define the mapping function \(\mathcal{H}\) as: \begin{equation} \textbf{f}_{t,l} = \mathcal{H}(\textbf{F}_{t,l}) = \frac{1}{C_t} \sum_{C_{t}} \textbf{F}_{t,l}^2, \end{equation} obtaining the activation map \(\textbf{f}_{t,l}\). This map is then rescaled to the range \([0, 1]\) by a min-max normalization, yielding \(\overline{\textbf{f}}_{t,l}\). This process is similarly applied to the student network to obtain \(\overline{\textbf{f}}_{s,l}\). Figure \ref{fig:ActivationMaps} depicts an example of the normalized activation maps for ResNet-18 and ResNet-50 at different depths.
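As an illustrative NumPy sketch (helper names are ours, not those of the released PyTorch framework), the mapping \(\mathcal{H}\) followed by the min-max normalization can be written as:

```python
import numpy as np

def attention_map(feats):
    """H(F): collapse a (C, W, H) feature tensor into a (W, H) attention
    map as the mean of squared channel activations, then min-max
    normalise the result to [0, 1]."""
    amap = np.mean(feats ** 2, axis=0)        # (1/C) * sum_c F_c^2
    lo, hi = amap.min(), amap.max()
    return (amap - lo) / (hi - lo + 1e-12)    # rescale to [0, 1]
```

The small epsilon guards against constant maps, where the min-max range would otherwise be zero.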
\textbf{Comparing Attention Maps via the DCT: } We first propose to apply the DCT \cite{oppenheim2001discrete} to the two activation maps \( \overline{\textbf{f}}_{t,l}\) and \(\overline{\textbf{f}}_{s,l}\) before comparing them. For the teacher map, \(\overline{\textbf{f}}_{t,l}\), the DCT yields a set of coefficients \(\mathcal{D}_{t,l} = \lbrace \mathcal{D}(x, y),\ 0 \leq x < W,\ 0 \leq y < H \rbrace\), each representing the resemblance or similarity between the whole distribution of \(\overline{\textbf{f}}_{t,l}\) values and a specific 2D pattern represented by the corresponding basis function of the transform. Specifically, in the case of the DCT, these basis functions show increasing variability in the horizontal and vertical dimensions. The DCT is used here over other transformations given its simplicity, its computational efficiency and its differentiability. Given the lossless nature of the DCT, applying the \(\ell_{2}\) metric to the obtained coefficients of the transformed maps would be equivalent to applying it over the activation maps, as in Komodakis \textit{et al.} \cite{komodakis2017paying}. However, we propose to modify the DCT coefficients in two ways: first, in order to compare the spatial structure of activation maps disregarding the global mean activation, we set to zero the first coefficient, the DC coefficient associated with a constant basis function \cite{oppenheim2001discrete}. Then, we rescale the remaining coefficients to the range \([0, 1]\), again using the min-max normalization, to obtain \(\overline{\mathcal{D}}_{t,l}\), which permits scaling the DCT term to levels similar to those of the Cross-Entropy Loss, hence enabling their combination without the need of additional weighting terms.
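A minimal NumPy/SciPy sketch of these operations (2D DCT, DC suppression and min-max normalization), together with the \(\ell_{2}\) comparison described next, could look as follows; the released implementation is in PyTorch, so this version is only illustrative and the helper names are ours:

```python
import numpy as np
from scipy.fft import dctn  # 2-D type-II DCT

def normalized_dct(amap):
    """2-D DCT of an attention map with the DC coefficient zeroed
    (dropping the global mean activation) and the remaining
    coefficients min-max normalised to [0, 1]."""
    coeffs = dctn(amap, norm='ortho')
    coeffs[0, 0] = 0.0                        # suppress DC term
    lo, hi = coeffs.min(), coeffs.max()
    return (coeffs - lo) / (hi - lo + 1e-12)

def dct_distance(map_t, map_s):
    """l2 distance between normalised DCT coefficients of the
    teacher and student attention maps."""
    diff = normalized_dct(map_t) - normalized_dct(map_s)
    return float(np.sqrt(np.sum(diff ** 2)))
```

Because the DCT is orthogonal, skipping the DC removal and normalization would make this metric collapse back to the plain pixel-wise \(\ell_{2}\) comparison.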
The combination of these three operations (DCT transform, DC coefficient removal and coefficient normalization) in the maps is a simple yet effective change that makes the comparison focus on the distribution of the attention maps rather than on their single (monomodal) maximum. After extracting the DCT transform for the student map, the two activation maps are compared using the \(\ell_{2}\) norm between the normalized remaining coefficients by: \begin{equation} \label{eq:Euclidean Distance DCT} d_{t,s,l}(\textbf{f}_{t,l}, \textbf{f}_{s,l}) = \sqrt{\sum (\overline{\mathcal{D}}_{t,l} - \overline{\mathcal{D}}_{s,l})^2}. \end{equation} By using the \(\ell_{2}\) norm over the DCT coefficients rather than directly on the activation map pixels, we move the matching from a pixel-wise computation of differences towards a metric that describes full-image differences. In addition, the proposed DCT-based metric focuses on the complete spatial structure while maintaining the mathematical properties of the \(\ell_{2}\) metric: it is a differentiable convex function, it has a distance-preserving property under orthogonal transformations and its gradient and Hessian matrix can be easily computed. All of these are desirable and advantageous properties when using this distance in numerical optimization frameworks. \subsection{Distillation} \label{subsec:Distillation Scheme} As stated before, the objective of the proposed distillation scheme is to properly transfer the localization of activation areas for a prediction obtained by the teacher model, $\psi_t$, for a given input \(\textbf{I}\), to the student one, \(\psi_s\). To this aim, we define the KD loss $\mathcal{L}_{\textrm{\scriptsize DCT}}$ by accumulating the DCT differences along the \(L\) explored gangways: \begin{equation} \label{eq:Summation L} \mathcal{L}_{\textrm{\scriptsize DCT}} = \sum_l^L d_{t,s,l}.
\end{equation} During training, we refine this loss by only using the teacher maps for correct class predictions. This removes the effect of using distracting maps resulting from the teacher's mispredictions in the knowledge transfer process. In other words, we propose to transfer the knowledge only when the final logit prediction \(\psi_t(\textbf{I})\) is correct. Accordingly, we refine Eq. \ref{eq:Summation L} as: \begin{equation}\label{eq:TeacherPred} \mathcal{L}_{\textrm{\scriptsize DCT}}=\left\{ \begin{array}{ll} \sum_l^L d_{t,s,l} & \textrm{ if } \psi_t(\textbf{I}) \textrm{ is correct} \\ [1em] 0 & \textrm{ else} \end{array}\right. \end{equation} The overall loss used to train the student CNN \(\psi_s\) is obtained via: \begin{equation} \label{eq:Final Loss} \mathcal{L} = \alpha \mathcal{L}_{\textrm{\scriptsize DCT}} + \beta \mathcal{L}_{\textrm{\scriptsize CE}}, \end{equation} where \(\mathcal{L}_{\textrm{\scriptsize CE}}\) is the regular Cross-Entropy Loss and \(\alpha\) and \(\beta\) are weighting parameters to control the contribution of each term to the final loss. As usually done with other KD methods \cite{komodakis2017paying, tian2019contrastive, chen2020cross}, the proposed approach can also be combined with the original Response-based KD loss proposed by Hinton \textit{et al.} \cite{hinton2015distilling} by including it in Eq. \ref{eq:Final Loss}: \begin{equation} \label{eq:Final Loss with KD} \mathcal{L} = \alpha \mathcal{L}_{\textrm{\scriptsize DCT}} + \beta \mathcal{L}_{\textrm{\scriptsize CE}} + \delta \mathcal{L}_{\textrm{\scriptsize KD}}, \end{equation} where \(\mathcal{L}_{\textrm{\scriptsize KD}}\) is defined as in Hinton \textit{et al.} \cite{hinton2015distilling} and \(\delta\) weights its contribution to the final loss \(\mathcal{L}\). \section{Experimental Evaluation} \label{sec:Results} This Section describes the experiments carried out for validating the proposed approach.
First, Section \ref{subsec:Validation on Scene Recognition Benchmarks} delves into the reasons why a new KD benchmark is needed and motivates our choice of the scene recognition task for it. Second, to ease the reproducibility of the method, Section \ref{subsec:Implementation Details} provides a complete review of the implementation details. Section \ref{subsec:Ablation Study} motivates a series of ablation studies for the proposed method. Section \ref{subsec:State-of-the-art} reports state-of-the-art results on the standard CIFAR 100 benchmark and a thorough state-of-the-art comparison in the scene recognition task. Quantitative and qualitative results for the obtained distilled activation maps are presented in Section \ref{subsec:AnalisysAMs}. \subsection{Validation on Scene Recognition Benchmarks} \label{subsec:Validation on Scene Recognition Benchmarks} All feature and attention-based KD methods reviewed in Sections \ref{sec:Introduction} and \ref{sec:Related Work} have been mainly evaluated so far using image classification benchmarks on the ImageNet \cite{deng2009imagenet}, CIFAR 10/100 \cite{krizhevsky2009learning} and MNIST \cite{deng2012mnist} datasets. We claim that scene recognition is a task better suited to evaluate KD methods for a variety of reasons: First, reported performances on scene recognition benchmarks \cite{lopez2020semantic, chen2020scene, li2021place} are not saturated. This means that results highly differ between shallow and deep architectures, providing a wider and more representative performance gap to be filled by KD methods than that existing for image classification in standard CIFAR10/100 evaluations. Note how the performance difference between a Teacher and a Vanilla baseline is just $3\%$ in CIFAR100 (Table \ref{tab:CIFAR100Results}) while that difference grows to $30\%$ in the ADE20K scene recognition dataset (Table \ref{tab:ADEResults}). Second, attention is a secondary factor for succeeding in ImageNet-like datasets.
Due to the nature of the images, models' attention is usually concentrated around the center of the image \cite{mohsenzadeh2020emergence}. This image-center bias causes different models to focus on very similar image areas at different depth levels, suggesting that the performance is mainly driven by the representativity and discriminability of the extracted features rather than by the areas of predominant attention. Figure \ref{fig:AMs_CIFAR100} in Section \ref{subsubsec:CIFAR100} provides examples of this observation. In contrast, in scene recognition the gist of a scene is defined by several image features including stuff, objects, textures and spatial relationships between stuff and objects, which are, in turn, spread throughout the image representing the scene. The areas of attention on which different models primarily focus have been shown to be critical and to have a strong correlation with performance \cite{lopez2020semantic}. Actually, shallower networks can end up having better performance than deeper networks if their attention is properly guided. In this case, Attention-based KD might be a paramount strategy to build better and simpler models. Given these reasons, we believe that setting up a KD benchmark that uses scene recognition rather than classical ImageNet-like image classification is helpful to spread the use of KD to other research scenarios, build a novel state-of-the-art and widen its application to more challenging tasks. In this section, our approach is evaluated on three well-known and publicly available scene recognition datasets: ADE20K \cite{zhou2017scene}, MIT Indoor 67 \cite{quattoni2009recognizing} and SUN 397 \cite{xiao2010sun}. However, as we understand that our approach should also be compared with respect to the KD literature on a standard benchmark, results for the CIFAR 100 dataset \cite{krizhevsky2009learning} are also presented in Section \ref{subsubsec:CIFAR100}.
\subsection{Implementation Details} \label{subsec:Implementation Details} We provide and publicly release a novel training and evaluation KD framework for scene recognition including all the code and methods reported in this paper \footnote{http://www-vpu.eps.uam.es/publications/DCTBasedKDForSceneRecognition}. This framework enables the reproducibility of all the results in the paper and, given its modular design, enables future methods to be easily trained and evaluated under the same conditions as the presented approaches. The implementation details regarding architectures, hyper-parameters and evaluation metrics are as follows: \textbf{Architectures:} The proposed method and the state-of-the-art approaches are evaluated using different combinations of Residual Networks \cite{he2016deep} and Mobile Networks \cite{sandler2018mobilenetv2}. \textbf{Data Normalization and Augmentation:} Each input image is spatially adapted to the network by re-sizing the smaller dimension to \(256\), while the other is resized to maintain the aspect ratio. In terms of data augmentation, we adopt the common data augmentation transformations: random crop to \(224 \times 224\) and random horizontal flipping. We also apply image normalization using ImageNet mean and standard deviation values. \textbf{Knowledge Distillation Layers}: For the proposed method, we select the intermediate features from ResNets \cite{he2016deep} and MobileNetV2 \cite{sandler2018mobilenetv2} Networks with the following spatial sizes \([H,W]\): \([56,56]\), \([28,28]\), \([14,14]\) and \([7,7]\), analyzing \(L=4\) levels of depth. We assume that both Teacher and Student architectures share the same spatial sizes (in Width and Height, not in Channel dimension) at some points in their architectures. This assumption may preclude the application of the method (to some extent) for pairs of disparate architectures.
However, the assumption holds for the most popular architectures (at least those concerning KD and the image classification tasks): the whole set of ResNet, MobileNets, VGGs, ShuffleNets, GoogleNet and DenseNets families. All of these CNN families share the same spatial sizes [H, W] at some points of their architectures. \begin{table*}[t] \renewcommand{\arraystretch}{1.25} \begin{centering} \caption{Ablation study regarding different stages of the proposed method. \textit{DCT}: DCT to transform the activation maps. \textit{DC Removal}: suppression of the DC coefficient. \textit{DCT Normalization}: min-max normalization of the DCT coefficients. \textit{Teacher Predictions}: use of teacher predictions to refine the Knowledge Distillation in Eq. \ref{eq:TeacherPred}. Bold values indicate best results.} \resizebox{\textwidth}{!}{ \begin{tabular}{ccccccccc} \hline DCT & \makecell{DC Removal} & \makecell{DCT Normalization} & \makecell{Teacher Predictions} & \makecell{Hinton's KD \cite{hinton2015distilling}} & Top@1 & Top@5 & MCA & $\Delta$ Top@1 \tabularnewline \hline & & & & & 40.97 & 63.94 & 10.24 & - \tabularnewline \checkmark{} & & & & & 42.54 & 63.12 & 11.10 & + 3.83 $\%$ \tabularnewline \checkmark{} &\checkmark{} & & & & 46.51 & 68.92 & 12.45 & + 9.33 $\%$ \tabularnewline \checkmark{} &\checkmark{} & \checkmark{} & & & 46.84 & 67.41 & 12.88 & + 0.70 $\%$ \tabularnewline \checkmark{} &\checkmark{} & \checkmark{} & \checkmark{} & & \textbf{47.35} & \textbf{70.40} & \textbf{13.11} & + 1.08 $\%$ \tabularnewline \checkmark{} &\checkmark{} & \checkmark{} & \checkmark{} & \checkmark{} & \textbf{54.27} & \textbf{76.15} & \textbf{18.05} & + 14.61 $\%$ \tabularnewline \hline \end{tabular}} \par\end{centering} \label{tab:Ablation} \end{table*} \begin{figure*}[t] \centering \includegraphics[width=0.7\textwidth,keepaspectratio]{Images/Normalization_Comparisson.pdf} \caption{Training and validation losses for ADE20K dataset. 
Classification curves represent Cross-Entropy loss values. Distill curves represent the proposed DCT-based loss values, either without normalization (a) or using min-max normalization (b).} \label{fig:CurvesNormalization} \end{figure*} \textbf{Hyper-parameters:} All the reported models have been trained following the same procedure. Stochastic Gradient Descent (SGD) with $0.9$ default momentum and $10^{-4}$ weight decay has been used to minimize the loss function and optimize the student network's trainable parameters. The initial learning rate was set to \(0.1\). All the models have been trained for \(70\) epochs and the learning rate was decayed every \(25\) epochs by a \(0.1\) factor. The batch size was set to \(128\) images. Unless otherwise specified in the Results Section, we set \(\alpha=\beta=1\) in the final loss equation when using the proposed approach. When combining it with Hinton's KD \cite{hinton2015distilling}, we follow the original publication and set \(\beta=0.1\) and \(\delta=1\) while maintaining \(\alpha=1\). All the models, to get rid of potential biases from pretrainings, have been trained from scratch. All the state-of-the-art reported methods have been trained by us for the scene recognition task using authors' original implementations and implementations from Tian \textit{et al.} \cite{tian2019contrastive}\footnote{\url{https://github.com/HobbitLong/RepDistiller}}. To provide a fair comparison, and in order to adapt them to the scene recognition task, an extensive $\alpha$ grid-search starting from the optimal values reported in the original papers has been performed and presented in Section \ref{subsec:State-of-the-art}. Additionally, for the CIFAR100 experiment in Section \ref{subsubsec:CIFAR100}, the optimal hyper-parameter configurations reported in the original papers have been preserved. We refer to each of the individual publications for details.
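For reference, the step schedule just described (initial learning rate \(0.1\), decayed by a factor of \(0.1\) every \(25\) epochs over \(70\) epochs) reduces, in a hypothetical Python helper of our own, to:

```python
def learning_rate(epoch, base_lr=0.1, decay=0.1, step=25):
    """Step learning-rate schedule used for all reported trainings:
    base_lr is multiplied by `decay` once every `step` epochs."""
    return base_lr * decay ** (epoch // step)
```

In a PyTorch training loop this behaviour would typically be obtained with a step scheduler rather than a hand-rolled function; the sketch only makes the decay rule explicit.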
\textbf{Evaluation Metrics:} Following the common scene recognition procedure \cite{lopez2020semantic}, the Top@\(k\) accuracy metric, with \(k \in [1, K]\) where \(K\) is the total number of scene classes, has been chosen to evaluate the methods. Specifically, Top@\(\lbrace k=1,5 \rbrace\) accuracy metrics have been chosen. Furthermore, as the Top@\(k\) accuracy metrics are biased towards classes over-represented in the validation set, we also use an additional performance metric, the Mean Class Accuracy (MCA) \cite{lopez2020semantic}. For the CIFAR100 dataset experiment, following \cite{tian2019contrastive} and \cite{chen2021distilling}, regular accuracy is computed. \textbf{Hardware and Software:} The model design, training and evaluation have been carried out using the PyTorch 1.7.1 Deep Learning framework \cite{paszke2017automatic} running on a PC using an 8-core CPU, 50 GB of RAM and a NVIDIA RTX 24GB Graphics Processing Unit. \subsection{Ablation Studies} \label{subsec:Ablation Study} The aim of this Section is to gauge the influence of design choices, parameters and computational needs of the method. The performance impact of the different stages of the method is analyzed in Section \ref{subsubsec:Knowledge Distillation Design}, the influence of the $\alpha$ value, which weights the contribution of the proposed DCT-based loss to the global loss function (Eq. \ref{eq:Final Loss}), is measured in Section \ref{subsubsec:Influence of alpha}, and the computational overhead introduced by the proposed DCT-based metric is discussed in Section \ref{subsubsec:Computational overhead}. \subsubsection{Knowledge Distillation Design} \label{subsubsec:Knowledge Distillation Design} Table \ref{tab:Ablation} quantifies the incremental influence of every step in the proposed approach. For this experiment we use the ADE20K dataset, and ResNet-50 and ResNet-18 for the teacher and student models respectively. Results suggest that even the simplest approach (second row), i.e.
when activation maps are distilled from the teacher to the student using the complete non-normalized DCT, outperforms the vanilla baseline (first row). Note that when the DC coefficient is suppressed, results improve further. This suggests that using a metric that captures 2D differences while disregarding the mean intensity value of an activation map helps to increase the performance of the student network. Normalization of the DCT coefficients slightly enhances results, but, more importantly, scales the DCT loss to a range similar to that of the Cross-Entropy Loss. To further stress the impact of the normalization, Figure \ref{fig:CurvesNormalization} (a) includes loss-evolution graphs for the proposed DCT-based method when DCT coefficients are not normalized, whereas Figure \ref{fig:CurvesNormalization} (b), on the contrary, represents losses when min-max normalization, as described in Section \ref{sec:Method}, is applied prior to the comparison with the $\ell_2$ loss. As can be observed, the normalization plays a crucial role in scaling the proposed DCT loss. If normalization is not used, the distillation loss term is two orders of magnitude larger than the classification loss term, hence dominating the global loss after their combination. To balance the impact of the losses in their combination without normalization, $\alpha$ values larger than $\alpha=1$ would be required, thereby increasing the complexity of setting adequate hyper-parameters. Back to Table \ref{tab:Ablation}, when teacher predictions are taken into account and mispredictions are suppressed from the KD pipeline, results are further improved. Finally, the combination of the proposed approach and KD \cite{hinton2015distilling} suggests a high complementarity that can boost results even further.
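For clarity, the Top@\(k\) and MCA figures reported in Table \ref{tab:Ablation} and in the following experiments can be sketched in NumPy as follows (helper names are ours, not those of the released framework):

```python
import numpy as np

def top_k_accuracy(scores, labels, k=1):
    """Fraction of samples whose true class is among the k
    highest-scoring classes. `scores` is (N, K), `labels` is (N,)."""
    topk = np.argsort(scores, axis=1)[:, -k:]   # indices of the k best classes
    return float(np.mean([labels[i] in topk[i] for i in range(len(labels))]))

def mean_class_accuracy(preds, labels):
    """MCA: average of per-class Top@1 accuracies, which compensates
    for classes over-represented in the validation set."""
    accs = [np.mean(preds[labels == c] == c) for c in np.unique(labels)]
    return float(np.mean(accs))
```

Unlike plain Top@1, MCA weights every class equally, which is why it complements the Top@\(k\) metrics on imbalanced validation sets.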
\subsubsection{Influence of $\alpha$} \label{subsubsec:Influence of alpha} \begin{figure}[t] \centering \includegraphics[width=0.7\columnwidth]{Images/Ablation_Alfa.pdf} \caption{Influence of \(\alpha\) on the performance of the model measured over the ADE20K dataset. ResNet-50 acts as the teacher and ResNet-18 as the student.} \label{fig:AlphaAblation} \end{figure} \begin{table}[t] \renewcommand{\arraystretch}{1.2} \begin{centering} \caption{Computational cost comparison measured in extra trainable parameters needed and minutes per training epoch.} \resizebox{\columnwidth}{!}{ \begin{tabular}{ccc} \hline Method (ResNet-18) & Extra Trainable Parameters & Time per Epoch (Min)\tabularnewline \hline Baseline & - & 0.79\tabularnewline AT \cite{komodakis2017paying} & 0 M & 1.11\tabularnewline KD \cite{hinton2015distilling} & 0 M & 1.09\tabularnewline VID \cite{ahn2019variational} & 12.3 M & 1.53\tabularnewline Review \cite{chen2021distilling} & 28 M & 1.79\tabularnewline CKD \cite{chen2020cross} & 634 M & 5.03\tabularnewline \hline DCT (Ours) & 0 M & 1.14\tabularnewline \hline \end{tabular}} \label{tab:ComputationalCost} \par\end{centering} \end{table} The influence of the \(\alpha\) hyper-parameter (Eq. \ref{eq:Final Loss}) has also been analyzed. Figure \ref{fig:AlphaAblation} shows performance curves (teacher: ResNet-50, student: ResNet-18) obtained with values of \(\alpha\) ranging from \(0.1\) to \(5\) in the ADE20K dataset. For a clearer comparison, the performance of the vanilla ResNet-18 is also plotted. It can be observed that our method outperforms vanilla ResNet-18 training for all \(\alpha\) values, suggesting stable performance for a wide range of \(\alpha\) values. We use \(\alpha=1\) in all the experiments ahead as a trade-off between accuracy and balance of the distillation \(\mathcal{L}_{\textrm{\scriptsize DCT}}\) and the cross-entropy \(\mathcal{L}_{\textrm{\scriptsize CE}}\) terms into the final loss.
However, it is important to remark that, differently from reported KD methods that usually need values of \(\alpha\) ranging from \(1\) to \(30000\) (Tables \ref{tab:ADEResults}, \ref{tab:MITResults} and \ref{tab:SUNResults}), the proposed approach is more stable across $\alpha$ values thanks to the normalization described in Section \ref{sec:Method}, which facilitates a smooth combination of the \(\mathcal{L}_{\textrm{\scriptsize DCT}}\) and \(\mathcal{L}_{\textrm{\scriptsize CE}}\) losses. \subsubsection{Computational Overhead} \label{subsubsec:Computational overhead} Bearing in mind that computational resources are a key aspect that should always be taken into account, Table \ref{tab:ComputationalCost} presents the overhead derived from including the proposed DCT-based metric with respect to other KD approaches. Results indicate that our approach has a computational time per training epoch similar to that of AT \cite{komodakis2017paying} and KD \cite{hinton2015distilling}. Our implementation leverages the GPU implementation of the Fast Fourier Transform (FFT), which has already been demonstrated to be highly efficient in computational terms. This is also one of the advantages of using the DCT with respect to other alternative transformations. In addition, the proposed method, unlike many others from the state-of-the-art, does not add trainable parameters beyond the student's own, hence requiring no extra memory resources. \subsection{Comparison with the State-of-the-Art} \label{subsec:State-of-the-art} \subsubsection{\textbf{CIFAR 100 Results}} \label{subsubsec:CIFAR100} Although one of the aims of our work is to extend and enhance the performance of KD in the scene recognition task, we are aware that an evaluation in the classical KD benchmark on image classification is also needed to help assess our contributions. To this aim, this section presents the performance of the proposed DCT-based approach on the CIFAR-100 dataset.
For the sake of consistency, and to provide a fair comparison, we have followed the training and evaluation protocols described in the CRD paper \cite{tian2019contrastive}. In our case, the $\alpha$ parameter from Eq. \ref{eq:Final Loss} has not been modified and remains set to $\alpha=1$. All the performances reported in Table \ref{tab:CIFAR100Results} but those for our method are obtained from already published works \cite{tian2019contrastive, chen2021distilling}. \begin{table*}[t] \renewcommand{\arraystretch}{1.25} \begin{centering} \caption{CIFAR100 accuracy results with 4 different Teacher-Student combinations. All the state-of-the-art results are extracted from CRD \cite{tian2019contrastive} and Review \cite{chen2021distilling} papers. Methods are sorted based on their average results.} \resizebox{0.8\textwidth}{!}{ \begin{tabular}{lcccccc} \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Year} & T: ResNet-56 & T: ResNet-110 & T: ResNet-110 & T: ResNet-32x4 & \multirow{2}{*}{\textcolor{blue}{Average}}\tabularnewline \cline{3-6} & & S: ResNet-20 & S: ResNet-20 & S: ResNet-32 & S: ResNet-8x4 & \tabularnewline \hline Teacher & - & 72.34 & 74.31 & 74.31 & 79.42 & \textcolor{blue}{75.09}\tabularnewline Vanilla & - & 69.04 & 69.06 & 71.14 & 72.50 & \textcolor{blue}{70.43}\tabularnewline \hline RKD \cite{park2019relational} & 2019 & 69.61 & 69.25 & 71.82 & 71.90 & \textcolor{blue}{70.64}\tabularnewline FitNet \cite{romero2014fitnets} & 2014 & 69.21 & 68.99 & 71.06 & 73.50 & \textcolor{blue}{70.69}\tabularnewline CC \cite{peng2019correlation} & 2019 & 69.63 & 69.48 & 71.48 & 72.97 & \textcolor{blue}{70.89}\tabularnewline NST \cite{huang2017like} & 2017 & 69.60 & 69.53 & 71.96 & 73.30 & \textcolor{blue}{71.09}\tabularnewline FSP \cite{yim2017gift} & 2017 & 69.95 & 70.11 & 71.89 & 72.62 & \textcolor{blue}{71.14}\tabularnewline FT \cite{kim2018paraphrasing} & 2018 & 69.84 & 70.22 & 72.37 & 72.86 & \textcolor{blue}{71.32}\tabularnewline SP \cite{tung2019similarity} & 2019 
& 69.67 & 70.04 & 72.69 & 72.94 & \textcolor{blue}{71.33}\tabularnewline VID \cite{ahn2019variational} & 2019 & 70.38 & 70.16 & 72.61 & 73.09 & \textcolor{blue}{71.50}\tabularnewline AT \cite{komodakis2017paying} & 2017 & 70.55 & 70.22 & 72.31 & 73.44 & \textcolor{blue}{71.63}\tabularnewline PKT \cite{passalis2020probabilistic} & 2020 & 70.34 & 70.25 & 72.61 & 73.64 & \textcolor{blue}{71.71}\tabularnewline AB \cite{heo2019knowledge} & 2019 & 69.47 & 69.53 & 70.98 & 73.17 & \textcolor{blue}{71.78}\tabularnewline KD \cite{hinton2015distilling} & 2015 & 70.66 & 70.67 & 73.08 & 73.33 & \textcolor{blue}{71.93}\tabularnewline CRD \cite{tian2019contrastive} & 2019 & 71.16 & 71.46 & 73.48 & 75.51 & \textcolor{blue}{72.90}\tabularnewline \textbf{Review \cite{chen2021distilling}} & \textbf{2021} & \textbf{71.89} & \textbf{71.60} & \textbf{73.89} & \textbf{75.63} & \textbf{\textcolor{blue}{73.25}}\tabularnewline \hline DCT (Ours) & 2022 & 70.45 & 70.10 & 72.42 & 73.52 & \textcolor{blue}{71.55}\tabularnewline \hline \end{tabular}} \par\end{centering} \label{tab:CIFAR100Results} \end{table*} \begin{table*}[t] \renewcommand{\arraystretch}{1.2} \begin{centering} \caption{ResNet-20 activation maps' similarity using SSIM with respect to a ResNet-56 model trained on the CIFAR100 dataset.
SSIM values close to 1 indicate identical maps and values close to 0 indicate no similarity.} \resizebox{0.8\textwidth}{!}{ \begin{tabular}{lcccccccc} \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{Training} & \multicolumn{4}{c}{Validation}\tabularnewline \cline{2-9} & Level 1 & Level 2 & Level 3 & Average & Level 1 & Level 2 & Level 3 & Average\tabularnewline \hline Vanilla ResNet-20 & 0.71 & 0.70 & 0.84 & 0.75 & 0.71 & 0.70 & 0.84 & 0.75\tabularnewline AT \cite{komodakis2017paying} & 0.92 & 0.92 & \textbf{0.94} & 0.93 & 0.93 & 0.92 & \textbf{0.94} & 0.93\tabularnewline \textbf{DCT (Ours)} & \textbf{0.97} & \textbf{0.95} & 0.93 & \textbf{0.95} & \textbf{0.97} & \textbf{0.95} & 0.92 & \textbf{0.95}\tabularnewline \hline \end{tabular}} \par\end{centering} \label{tab:CIFAR100SSIM} \end{table*} Table \ref{tab:CIFAR100Results} presents accuracy results for the state-of-the-art in KD and the proposed approach for several network combinations. To ease the comparison, an average column (in blue) is also included. These results suggest that: (1) all the reported methods perform similarly, with most of them within a range of $1\%$ to $3\%$ of accuracy difference; and (2) our method achieves results comparable to other state-of-the-art methods even in a single object/concept dataset like CIFAR100. Our approach is specifically targeted to tasks that benefit from the aggregation of information spatially spread throughout the image, e.g., scene recognition. However, when used for tasks that can be solved by just extracting features from a single (usually image-centered) region, such as the CIFAR 10/100 image classification benchmark \cite{krizhevsky2009learning}, our proposal is neutral.
Contributions from attention-based approaches are hindered due to the similar, centered and compact attention patterns that result from this dataset at all levels of the different CNN vanilla models: as depicted in Figure \ref{fig:AMs_CIFAR100}, highly dissimilar architectures yield similar mono-modal attention maps around the object defining the image class. Note how different these attention maps are from the ones depicted in Figure \ref{fig:ActivationMaps}. \begin{figure*}[t] \centering \includegraphics[width=\textwidth,keepaspectratio]{Images/Comparison_AM_CIFAR100.pdf} \caption{Example of obtained activation maps at three different levels for two different architectures in the CIFAR100 dataset. Note the similarity between activation maps from different architectures and the centered and compact patterns in Level $2$ and Level $3$.} \label{fig:AMs_CIFAR100} \end{figure*} \begin{figure}[!t] \centering \includegraphics[width=0.7\columnwidth]{Images/SOTA_Alpha_Study_Boxplot.pdf} \caption{Box plot representing state-of-the-art results using $21$ different $\alpha$ values in a range of $\pm100 \%$ from the original value proposed by the corresponding works with a step of $\pm 10 \%$. The study has been performed using ResNet-50 as teacher and ResNet-18 as student in the ADE20K dataset. The red line represents the performance of our approach. Blue crosses represent the performance of each method using the $\alpha$ value reported in the original publications.} \label{fig:SotaAlphaStudy} \end{figure} This attention map bias can also be noticed quantitatively in the experiment reported in Table \ref{tab:CIFAR100SSIM}. Here we quantify the similarity between ResNet-56's (Teacher) and some selected models' activation maps for the whole set of training and validation samples in the CIFAR100 dataset.
We use the Structural Similarity Index Measure (SSIM) \cite{wang2004image} to evaluate such similarity, hence avoiding potential biases inherited from the metrics used in the training stage. It can be observed that attention maps for the vanilla ResNet-20 model are, on average, $75\%$ similar to those of ResNet-56, a model with more than twice its capacity. It is worth noting that, when this experiment is carried out for scene recognition (Table \ref{tab:QuantitativeAM}), this average similarity decreases by $36.00\%$ (from 0.75 to 0.48), indicating that the correlation between attention maps is substantially higher for CIFAR100 than for scene recognition datasets. In other words, activation maps in CIFAR100 are already matched by most of the methods. Nevertheless, considering results from Tables \ref{tab:CIFAR100Results} and \ref{tab:CIFAR100SSIM}, one can conclude that the proposed DCT-based loss yields a better matching between Teacher and Student activation maps than a method driven by the $\ell_2$ norm (the AT \cite{komodakis2017paying} method selected for comparison in Table \ref{tab:CIFAR100SSIM}). This supports the motivation of the paper: using a 2D frequency transform of the activation maps before transferring them benefits the comparison of the 2D global information by leveraging the spatial relationships captured by the transformed coefficients.
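To make this concrete, the transform step can be sketched in a few lines. The snippet below is an illustrative sketch, not the exact training loss: it assumes an orthonormal 2D DCT-II and a plain $\ell_2$ comparison of the coefficients, and omits any normalisation of the activation maps that the proposed loss may prescribe; the function names are ours.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix ("ortho" normalisation).
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0] *= np.sqrt(1 / n)
    basis[1:] *= np.sqrt(2 / n)
    return basis

def dct2(activation_map):
    # Separable 2D DCT-II: transform rows, then columns.
    c_rows = dct_matrix(activation_map.shape[0])
    c_cols = dct_matrix(activation_map.shape[1])
    return c_rows @ activation_map @ c_cols.T

def dct_distance(teacher_map, student_map):
    # Compare the two maps in the transform domain.
    return float(np.linalg.norm(dct2(teacher_map) - dct2(student_map)))
```

With a plain $\ell_2$ comparison the orthonormal transform is distance-preserving (Parseval), so the practical benefit arises from how the coefficients are subsequently treated (e.g., normalised or weighted), which we leave unspecified here; each coefficient nevertheless summarises a global 2D pattern of the map rather than a single spatial location.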
\begin{figure*}[t] \centering \includegraphics[width=\textwidth]{Images/Joined_Curves.pdf} \caption{Validation set Accuracy (\%) per epoch for the teacher model (ResNet-50), the vanilla network (ResNet-18), state-of-the-art methods, the proposed DCT approach and their combinations with KD \cite{hinton2015distilling} for ADE20K (a), SUN397 (b), and MIT67 (c) datasets.} \label{fig:ComparativeTrainingCurves} \end{figure*} \begin{table*}[!h] \renewcommand{\arraystretch}{1.25} \begin{centering} \caption{Comparison with respect to state-of-the-art methods in the ADE20K dataset with different Teacher (T) - Student (S) combinations. For computational cost comparison the number of additional parameters is indicated. Results are obtained with one run of training. The \(\alpha\) value extracted from Figure \ref{fig:SotaAlphaStudy} and used to train the models is also indicated. Best results in bold.} \resizebox{\textwidth}{!}{ \begin{tabular}{lcccccccccccc} \hline \multirow{2}{2cm}{Method} & \multirow{2}{0.9cm}{\centering{}Year} & \multirow{2}{1.3cm}{\centering{}Extra Trainable Params} & \multirow{2}{1cm}{\centering{}\(\alpha\)} & \multicolumn{3}{>{\centering}m{3cm}}{T: ResNet-50 (25.6 M)\\S: ResNet-18 (11.7 M)} & \multicolumn{3}{>{\centering}m{3cm}}{T: ResNet-152 (60.3 M)\\ S: ResNet-34 (21.8 M)} & \multicolumn{3}{>{\centering}m{3.2cm}}{T: ResNet-50 (25.6 M)\\S: MobileNet-V2 (3.5 M)}\tabularnewline \cline{5-13} & & & & Top1 & Top5 & MCA & Top1 & Top5 & MCA & Top1 & Top5 & MCA\tabularnewline \hline Teacher & - & \centering{}- & - & 58.34 & 79.15 & 21.80 & 60.07 & 79.65 & 24.19 & 58.34 & 79.15 & 21.80\tabularnewline Vanilla & - & \centering{}- & - & 40.97 & 63.94 & 10.24 & 41.63 & 65.15 & 10.03 & 44.29 & 67.69 & 10.44\tabularnewline \hline AT \cite{komodakis2017paying} & 2017 & \centering{}0 M & 1100 & 45.43 & 66.70 & 12.29 & 44.80 & 65.21 & 11.39 & 46.65 & 65.69 & 11.85\tabularnewline VID \cite{ahn2019variational} & 2019 & \centering{}12.3 M & 1.5 & 43.11 & 65.78 & 10.70 & 
41.03 & 62.41 & 9.24 & 43.73 & 66.70 & 10.35\tabularnewline CRD \cite{tian2019contrastive} & 2019 & \centering{}0.3 M & 1.4 & 45.92 & 67.87 & 11.91 & 43.09 & 66.53 & 10.30 & 45.14 & 69.11 & 10.27\tabularnewline PKT \cite{passalis2020probabilistic} & 2020 & \centering{}0 M & 30000 & 44.59 & 65.46 & 11.89 & 42.38 & 62.98 & 10.74 & 46.42 & 67.32 & 11.81\tabularnewline CKD \cite{chen2020cross} & 2021 & \centering{}634 M & 400 & 46.89 & 69.55 & 12.70 & 45.01 & 65.70 & 11.89 & 47.30 & 68.60 & 12.30\tabularnewline Review \cite{chen2021distilling} & 2021 & \centering{}28 M & 1.8 & 45.88 & 68.20 & 12.71 & 43.03 & 65.34 & 10.84 & 45.30 & 69.74 & 11.48\tabularnewline \hline DCT (Ours) & 2022 & \centering{}0 M & 1 & \textbf{47.35} & \textbf{70.40} & \textbf{13.11} & \textbf{45.63} & \textbf{66.05} & \textbf{12.02} & \textbf{47.39} & \textbf{68.52} & \textbf{12.35}\tabularnewline \hline \hline KD \cite{hinton2015distilling} & 2015 & \centering{}0 M & 0.8 & 50.54 & 73.49 & 15.39 & 48.91 & 73.37 & 14.51 & 48.37 & 71.47 & 12.55\tabularnewline AT \cite{komodakis2017paying} + KD & 2017 & \centering{}0 M & 1100 & 48.87 & 73.01 & 13.29 & 49.35 & 72.09 & 14.16 & 47.67 & 72.97 & 12.93\tabularnewline VID \cite{ahn2019variational} + KD & 2019 & \centering{}12.3 M & 1.5 & 49.69 & 72.36 & 19.89 & 49.34 & 71.57 & 14.19 & 48.14 & 71.88 & 12.90\tabularnewline CRD \cite{tian2019contrastive} + KD & 2019 & \centering{}0.3 M & 1.4 & 48.78 & 73.76 & 12.31 & 48.16 & 72.15 & 15.36 & 47.88 & 71.97 & 11.36\tabularnewline PKT \cite{passalis2020probabilistic} + KD & 2020 & \centering{}0 M & 30000 & 49.31 & 73.41 & 14.48 & 49.70 & 73.33 & 14.64 & 49.43 & 72.76 & 13.59\tabularnewline CKD \cite{chen2020cross} + KD & 2021 & \centering{}634 M & 400 & 52.10 & 76.90 & 15.54 & \textbf{53.54} & \textbf{75.20} & \textbf{17.98} & 49.15 & 70.25 & 13.32\tabularnewline Review \cite{chen2021distilling} + KD & 2021 & \centering{}28 M & 1.8 & 50.63 & 73.73 & 14.86 & 49.59 & 72.56 & 14.99 & 48.32 & 71.84 & 
12.12\tabularnewline \hline DCT (Ours) + KD & 2022 & \centering{}0 M & 1 & \textbf{54.25} & \textbf{76.15} & \textbf{18.05} & 52.68 & 74.60 & 17.07 & \textbf{50.75} & \textbf{72.53} & \textbf{14.05}\tabularnewline \hline \end{tabular}} \label{tab:ADEResults} \par\end{centering} \end{table*} \subsubsection{\textbf{Scene Recognition Results}} This Section presents a state-of-the-art benchmark for KD methods. Following common evaluations \cite{tian2019contrastive, chen2020cross, chen2021distilling} we have selected top performing KD methods: KD \cite{hinton2015distilling}, AT \cite{komodakis2017paying}, PKT \cite{passalis2020probabilistic}, VID \cite{ahn2019variational}, CRD \cite{tian2019contrastive}, CKD \cite{chen2020cross} and Review \cite{chen2021distilling}. Obtained results for ADE20K, SUN397 and MIT67 datasets are presented in Tables \ref{tab:ADEResults}, \ref{tab:SUNResults} and \ref{tab:MITResults} respectively. Performance metrics are included for three different pairs of teacher/student models: two sharing the same architecture, ResNet-50/ResNet-18 and ResNet-152/ResNet-34, and one with different backbones, ResNet-50/MobileNetV2. In addition, the combination of all these models with Hinton's KD \cite{hinton2015distilling} is also reported. First, to provide a fair comparison, Figure \ref{fig:SotaAlphaStudy} compiles the performance ranges of an extensive search of the optimal $\alpha$ value for each of the compared methods for the scene recognition task. The search has been carried out modifying the $\alpha$ values reported in the original publications (which we understand optimal for the image classification task) in a range between $\pm100 \%$ with a step of $\pm 10 \%$. The search has been performed using ResNet-50 as teacher and ResNet-18 as student in the ADE20K dataset. To ease the comparison, the performance obtained by the original $\alpha$ value and the proposed method is also included. 
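Generating this grid of candidates is straightforward: each method's published $\alpha$ is expanded into $21$ values covering $-100\%$ to $+100\%$ of it in $10\%$ steps. A minimal sketch (the helper name is ours):

```python
def alpha_grid(alpha_original):
    # 21 candidates from -100% to +100% of the published value, 10% steps.
    return [alpha_original * (1 + step / 10) for step in range(-10, 11)]

# For instance, for AT (published alpha = 1100) the grid starts at 0,
# ends at 2200, and contains the original value in the middle.
candidates = alpha_grid(1100)
```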
The models trained using $\alpha$ values resulting in the best performance for each method have been used to obtain the results from Tables \ref{tab:ADEResults}, \ref{tab:SUNResults} and \ref{tab:MITResults}. Average results from Tables \ref{tab:ADEResults}, \ref{tab:SUNResults} and \ref{tab:MITResults} indicate that the proposed approach outperforms both the vanilla training of the student and all the reported KD methods. The validation accuracy curves depicted in Figures \ref{fig:ComparativeTrainingCurves}(a), \ref{fig:ComparativeTrainingCurves}(b) and \ref{fig:ComparativeTrainingCurves}(c) support this claim, providing a graphical comparison between all the reported methods for the ADE20K, SUN397 and MIT67 datasets, respectively. \begin{table*}[!h] \renewcommand{\arraystretch}{1.25} \begin{centering} \caption{Comparison with respect to state-of-the-art methods in the SUN397 dataset with different Teacher (T) - Student (S) combinations.} \resizebox{\textwidth}{!}{ \begin{tabular}{lcccccccccccc} \hline \multirow{2}{2cm}{Method} & \multirow{2}{0.9cm}{\centering{}Year} & \multirow{2}{1.3cm}{\centering{}Extra Trainable Params} & \multirow{2}{1cm}{\centering{}\(\alpha\)} & \multicolumn{3}{>{\centering}m{3cm}}{T: ResNet-50 (25.6 M)\\S: ResNet-18 (11.7 M)} & \multicolumn{3}{>{\centering}m{3cm}}{T: ResNet-152 (60.3 M)\\ S: ResNet-34 (21.8 M)} & \multicolumn{3}{>{\centering}m{3.2cm}}{T: ResNet-50 (25.6 M)\\S: MobileNet-V2 (3.5 M)}\tabularnewline \cline{5-13} & & & & Top1 & Top5 & MCA & Top1 & Top5 & MCA & Top1 & Top5 & MCA\tabularnewline \hline Teacher & - & \centering{}- & - & 61.69 & 87.50 & 61.74 & 62.56 & 87.53 & 62.63 & 61.69 & 87.50 & 61.74\tabularnewline Vanilla & - & \centering{}- & - & 38.77 & 67.05 & 38.83 & 39.66 & 69.36 & 40.10 & 41.18 & 70.58 & 41.23\tabularnewline \hline AT \cite{komodakis2017paying} & 2017 & \centering{}0 M & 1100 & 41.52 & 69.87 & 41.58 & 40.75 & 69.53 & 40.81 & 38.84 & 68.08 & 38.91\tabularnewline VID
\cite{ahn2019variational} & 2019 & \centering{}12.3 M & 1.5 & 41.16 & 69.15 & 41.21 & 39.02 & 67.77 & 39.05 & 40.59 & 69.79 & 40.64\tabularnewline CRD \cite{tian2019contrastive} & 2019 & \centering{}0.3 M & 1.4 & 43.89 & 73.55 & 43.95 & 42.13 & 71.51 & 42.14 & 42.69 & 72.98 & 42.73\tabularnewline PKT \cite{passalis2020probabilistic} & 2020 & \centering{}0 M & 30000 & 38.70 & 67.34 & 38.72 & 37.70 & 66.06 & 37.72 & 40.17 & 68.89 & 40.2\tabularnewline Review \cite{chen2021distilling} & 2021 & \centering{}28 M & 1.8 & 43.26 & 72.77 & 43.29 & 42.69 & 70.92 & 42.73 & 42.68 & 71.72 & 42.74\tabularnewline \hline DCT (Ours) & 2022 & \centering{}0 M & 1 & \textbf{45.75} & \textbf{74.59} & \textbf{45.80} & \textbf{43.50} & \textbf{72.33} & \textbf{43.54} & \textbf{43.16} & \textbf{70.59} & \textbf{43.19}\tabularnewline \hline \hline KD \cite{hinton2015distilling} & 2015 & \centering{}0 M & 0.8 & 48.83 & 77.66 & 48.90 & 48.26 & 76.79 & 48.30 & 47.31 & 77.80 & 47.38\tabularnewline AT \cite{komodakis2017paying} + KD & 2017 & \centering{}0 M & 1100 & 49.44 & 78.06 & 49.52 & 47.05 & 75.39 & 49.10 & 46.60 & 76.42 & 46.08\tabularnewline VID \cite{ahn2019variational} + KD & 2019 & \centering{}12.3 M & 1.5 & 49.26 & 78.16 & 49.32 & 47.08 & 75.95 & 47.12 & 46.64 & 76.87 & 46.71\tabularnewline CRD \cite{tian2019contrastive} + KD & 2019 & \centering{}0.3 M & 1.4 & 49.79 & 78.69 & 49.82 & 48.39 & 77.00 & 48.44 & 46.77 & 77.30 & 46.82\tabularnewline PKT \cite{passalis2020probabilistic} + KD & 2020 & \centering{}0 M & 30000 & 49.13 & 78.16 & 49.16 & 48.08 & 76.75 & 48.15 & 47.54 & 77.51 & 47.56\tabularnewline Review \cite{chen2021distilling} + KD & 2021 & \centering{}28 M & 1.8 & 49.90 & 78.71 & 49.96 & 47.05 & 76.30 & 47.07 & 47.05 & 77.44 & 47.10\tabularnewline \hline DCT (Ours) + KD & 2022 & \centering{}0 M & 1 & \textbf{55.15} & \textbf{83.20} & \textbf{55.19} & \textbf{50.51} & \textbf{79.25} & \textbf{50.55} & \textbf{49.25} & \textbf{79.35} & \textbf{49.30}\tabularnewline \hline 
\end{tabular}} \label{tab:SUNResults} \par\end{centering} \end{table*} \begin{table*}[!h] \renewcommand{\arraystretch}{1.25} \begin{centering} \caption{Comparison with respect to state-of-the-art methods in the MIT67 dataset with different Teacher (T) - Student (S) combinations.} \resizebox{\textwidth}{!}{ \begin{tabular}{lcccccccccccc} \hline \multirow{2}{2cm}{Method} & \multirow{2}{0.9cm}{\centering{}Year} & \multirow{2}{1.3cm}{\centering{}Extra Trainable Params} & \multirow{2}{1cm}{\centering{}\(\alpha\)} & \multicolumn{3}{>{\centering}m{3cm}}{T: ResNet-50 (25.6 M)\\S: ResNet-18 (11.7 M)} & \multicolumn{3}{>{\centering}m{3cm}}{T: ResNet-152 (60.3 M)\\ S: ResNet-34 (21.8 M)} & \multicolumn{3}{>{\centering}m{3.2cm}}{T: ResNet-50 (25.6 M)\\S: MobileNet-V2 (3.5 M)}\tabularnewline \cline{5-13} & & & & Top1 & Top5 & MCA & Top1 & Top5 & MCA & Top1 & Top5 & MCA\tabularnewline \hline Teacher & - & \centering{}- & - & 77.32 & 95.20 & 79.00 & 78.11 & 95.02 & 78.91 & 77.32 & 95.20 & 79.00\tabularnewline Vanilla & - & \centering{}- & - & 49.26 & 77.02 & 46.87 & 38.84 & 67.52 & 38.88 & 49.06 & 79.08 & 48.66\tabularnewline \hline AT \cite{komodakis2017paying} & 2017 & \centering{}0 M & 1100 & 50.41 & 79.30 & 50.42 & 49.66 & 76.84 & 49.03 & 45.13 & 75.51 & 44.32\tabularnewline VID \cite{ahn2019variational} & 2019 & \centering{}12.3 M & 1.5 & 48.21 & 76.71 & 47.60 & 44.22 & 72.77 & 43.23 & 47.76 & 75.96 & 47.14\tabularnewline CRD \cite{tian2019contrastive} & 2019 & \centering{}0.3 M & 1.4 & 51.45 & 78.56 & 51.14 & 41.95 & 72.95 & 41.87 & 50.10 & 77.22 & 47.20\tabularnewline PKT \cite{passalis2020probabilistic} & 2020 & \centering{}0 M & 30000 & 51.03 & 79.15 & 49.56 & 46.32 & 74.34 & 45.55 & 50.23 & 78.80 & 47.92\tabularnewline Review \cite{chen2021distilling} & 2021 & \centering{}28 M & 1.8 & 51.73 & 80.78 & 51.18 & 44.43 & 75.36 & 44.09 & 50.25 & 78.60 & 49.43\tabularnewline \hline DCT (Ours) & 2022 & \centering{}0 M & 1 & \textbf{56.32} & \textbf{84.90} & \textbf{55.39} &
\textbf{52.14} & \textbf{80.98} & \textbf{50.98} & \textbf{50.42} & \textbf{78.68} & \textbf{48.38}\tabularnewline \hline \hline KD \cite{hinton2015distilling} & 2015 & \centering{}0 M & 0.8 & 54.87 & 83.42 & 54.91 & 51.55 & 79.61 & 51.24 & 56.14 & 82.51 & 56.04\tabularnewline AT \cite{komodakis2017paying} + KD & 2017 & \centering{}0 M & 1100 & 58.41 & 83.78 & 57.81 & 52.30 & 80.10 & 52.48 & 52.17 & 80.53 & 51.34\tabularnewline VID \cite{ahn2019variational} + KD & 2019 & \centering{}12.3 M & 1.5 & 54.20 & 81.51 & 54.54 & 51.79 & 80.23 & 51.88 & 55.75 & 81.94 & 55.60\tabularnewline CRD \cite{tian2019contrastive} + KD & 2019 & \centering{}0.3 M & 1.4 & 55.23 & 83.83 & 54.83 & 50.54 & 79.92 & 50.53 & 55.16 & 81.78 & 54.79\tabularnewline PKT \cite{passalis2020probabilistic} + KD & 2020 & \centering{}0 M & 30000 & 53.83 & 80.83 & 53.77 & 50.52 & 79.37 & 50.71 & 53.05 & 81.87 & 52.90\tabularnewline Review \cite{chen2021distilling} + KD & 2021 & \centering{}28 M & 1.8 & 56.48 & 81.89 & 57.17 & 51.42 & 78.96 & 51.05 & 56.99 & 81.59 & 56.98\tabularnewline \hline DCT (Ours) + KD & 2022 & \centering{}0 M & 1 & \textbf{60.11} & \textbf{86.88} & \textbf{60.53} & \textbf{55.18} & \textbf{81.64} & \textbf{55.62} & \textbf{57.35} & \textbf{84.79} & \textbf{56.89}\tabularnewline \hline \end{tabular}} \label{tab:MITResults} \par\end{centering} \end{table*} Results from the proposed method, compared with the rest of the approaches, reinforce the hypothesis that properly learnt CNN attention is crucial for scene recognition. Results from smaller networks can be boosted if their attention is properly guided towards representative image areas, which are better identified by deeper and more complex architectures.
The increase in performance of the method with respect to AT \cite{komodakis2017paying} suggests that, even though it adopts a similar knowledge source, the proposed loss is able to consistently achieve better results by better quantifying the differences between attention maps. CKD \cite{chen2020cross} outperforms our method in a specific combination of Table \ref{tab:ADEResults} (T: ResNet-152 and S: ResNet-34 + KD) for the ADE20K dataset, while falling behind ours in the other two combinations evaluated. Nevertheless, the number of extra trainable parameters required by CKD grows with the resolution of the images: whereas CKD is reasonable for datasets composed of low-resolution images (CIFAR 10/100 datasets), here the number of parameters is $30$ times larger than that of the teacher from where the knowledge is transferred. Given this amount of extra trainable parameters, it may be preferable to train a vanilla model with that capacity. Therefore, we do not include the evaluation for CKD in the SUN397 and MIT67 datasets. Results from Tables \ref{tab:ADEResults}, \ref{tab:SUNResults} and \ref{tab:MITResults} also indicate that, when dealing with scene recognition datasets, a proper selection of the architectures to be used in KD is important. Note how using a deeper architecture like ResNet-152 might not be as beneficial as using ResNet-50, maybe due to overfitting, or how extremely efficient models like MobileNet-V2 can obtain results similar to those of ResNet-18 or ResNet-34. When the proposed method is combined with KD \cite{hinton2015distilling}, results show an increase in performance with respect to the rest of the methods, which indicates that the proposed DCT-based method can be properly combined with KD, benefiting from the extra regularization that seminal KD provides at the response level. \begin{table}[t] \scriptsize \renewcommand{\arraystretch}{1.2} \begin{centering} \caption{Error rates when transferring learning from ImageNet to the MIT67 scene recognition dataset.
All results except DCT (Ours) are extracted from Zagoruyko \textit{et al.} \cite{komodakis2017paying}.} \resizebox{0.7\columnwidth}{!}{ \begin{tabular}{llc} \hline Method & Backbone & Error Rate\tabularnewline \hline Teacher & ResNet-34 & 26.0\tabularnewline Student & ResNet-18 & 28.2\tabularnewline AT \cite{komodakis2017paying} & ResNet-18 & 27.1\tabularnewline KD \cite{hinton2015distilling} & ResNet-18 & 28.1\tabularnewline DCT (Ours) & ResNet-18 & \textbf{26.35}\tabularnewline \hline \end{tabular}} \par\end{centering} \label{tab:TransferLearning} \end{table} \subsubsection{\textbf{Transfer Learning Results}} Table \ref{tab:TransferLearning} presents a Transfer Learning experiment for scene recognition. We have followed the same training and evaluation protocol for the AT method as that proposed by Zagoruyko \textit{et al.} \cite{komodakis2017paying}. The aim of the experiment is to illustrate that our method also works when transferring attention in a Transfer Learning scenario, i.e., fine-tuning a model on the MIT67 dataset from a model with ImageNet pre-trained weights. Results indicate that the proposed approach helps the transfer learning process by decreasing the error rate by \(6.56\%\) and \(2.76\%\) with respect to the student and the AT-transferred model, respectively. \subsection{Analysis of Activation Maps}\label{subsec:AnalisysAMs} \begin{figure*}[!t] \centering \includegraphics[width=0.8\textwidth, height=0.8\textheight, keepaspectratio]{Images/Comparison_AM_Appendix_1.pdf} \caption{Obtained activation maps for the proposed method using ResNet-50 as teacher and ResNet-18 as student.
Note how the proposed approach enables a ResNet-18 architecture to have similar activation maps to the ones obtained by a ResNet-50.} \label{fig:Comparison AMs} \end{figure*} \begin{figure*}[!t] \centering \includegraphics[width=0.8\textwidth, height=0.8\textheight, keepaspectratio]{Images/Comparison_AM_Appendix_2.pdf} \caption{Obtained activation maps for the proposed method using ResNet-50 as teacher and ResNet-18 as student. AT \cite{komodakis2017paying} activation maps are also included for comparison. Note how the proposed approach enables a ResNet-18 architecture to have similar activation maps to the ones obtained by a ResNet-50. Note also how the matching is better than the one achieved by AT \cite{komodakis2017paying}.} \label{fig:Comparison AT} \end{figure*} \begin{table*}[t] \scriptsize \renewcommand{\arraystretch}{1.25} \begin{centering} \caption{Similarity between ResNet-50 activation maps, trained in ADE20K dataset, and the corresponding level's activation maps of several models. 
SSIM values close to 1 indicate identical maps and values close to 0 indicate no similarity.} \resizebox{\textwidth}{!}{ \begin{tabular}{llccccccccc} \hline \multirow{2}{*}{Method} & \multicolumn{5}{c}{Training Set} & \multicolumn{5}{c}{Validation Set}\tabularnewline \cline{2-11} & Level 1 & Level 2 & Level 3 & Level 4 & Average & Level 1 & Level 2 & Level 3 & Level 4 & Average\tabularnewline \hline ResNet-18 & 0.46 & 0.32 & 0.39 & 0.72 & 0.48 & 0.47 & 0.32 & 0.40 & 0.71 & 0.47\tabularnewline AT \cite{komodakis2017paying} & 0.66 & 0.73 & 0.76 & \textbf{0.90} & 0.76 & 0.67 & 0.74 & 0.77 & \textbf{0.83} & 0.75\tabularnewline DCT (Ours) & \textbf{0.89} & \textbf{0.87} & \textbf{0.81} & 0.82 & \textbf{0.85} & \textbf{0.89} & \textbf{0.87} & \textbf{0.81} & 0.79 & \textbf{0.84}\tabularnewline KD \cite{hinton2015distilling} & 0.48 & 0.55 & 0.42 & 0.78 & 0.56 & 0.48 & 0.56 & 0.43 & 0.73 & 0.56\tabularnewline DCT (Ours) + KD & \textbf{0.90} & \textbf{0.88} & \textbf{0.82} & 0.87 & \textbf{0.87} & \textbf{0.90} & \textbf{0.88} & \textbf{0.83} & \textbf{0.83} & \textbf{0.86}\tabularnewline \hline \end{tabular}} \par\end{centering} \label{tab:QuantitativeAM} \end{table*} Figures \ref{fig:ActivationMaps}, \ref{fig:Comparison AMs} and \ref{fig:Comparison AT} present qualitative results of the obtained activation maps by the proposed method. In addition, Figures \ref{fig:ActivationMaps} and \ref{fig:Comparison AT} include those obtained by AT \cite{komodakis2017paying} for comparison. Specifically, Figure \ref{fig:ActivationMaps} shows how AT maps resemble teacher ones only in the wider and intense areas of activation, i.e., the \textit{bed} and the \textit{wardrobe} in Level 3, while the proposed approach yields more similar maps in all the image areas where the teacher is focused on, i.e., the \textit{bed}, and the \textit{wardrobe} but also the \textit{lamps}, the \textit{paintings} and even the \textit{book} on the table. 
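The comparison reported in Table \ref{tab:QuantitativeAM} can be sketched as follows. The snippet is a simplified illustration: it assumes AT-style attention maps (channel-wise sum of squared activations, $\ell_2$-normalised) and an SSIM computed from global, single-window statistics, whereas the reported numbers use the SSIM of \cite{wang2004image}; the function names are ours.

```python
import numpy as np

def attention_map(features):
    # Collapse a (C, H, W) feature tensor into a single (H, W) map:
    # channel-wise sum of squared activations, l2-normalised (AT-style).
    amap = (features ** 2).sum(axis=0)
    norm = np.linalg.norm(amap)
    return amap / norm if norm > 0 else amap

def ssim_single_window(x, y, data_range=1.0):
    # Simplified SSIM using global statistics instead of a sliding window.
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical maps score exactly $1$, matching the interpretation given in the table captions.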
This suggests that the proposed DCT-based metric achieves a better matching when activation patterns are diverse and spread throughout the image. Table \ref{tab:QuantitativeAM} quantifies the qualitative observations from Figures \ref{fig:ActivationMaps}, \ref{fig:Comparison AMs} and \ref{fig:Comparison AT} by repeating the experiment presented in Section \ref{subsubsec:CIFAR100}, i.e., computing the similarity between ResNet-50's (Teacher) and some models' activation maps for the whole set of training and validation samples in the ADE20K dataset using the SSIM. Results in Table \ref{tab:QuantitativeAM} confirm the qualitative analysis presented in Figures \ref{fig:ActivationMaps}, \ref{fig:Comparison AMs} and \ref{fig:Comparison AT}: the similarity for levels $L=1$ to $L=3$, in both Training and Validation sets, increases when the proposed DCT-based loss is used. Level $L=4$ similarity is slightly better for AT, mainly because activation maps at this level tend to be image-centered, continuous, and mono-modal, which benefits the $\ell_2$ measure. Overall, the average similarity achieved by the proposed DCT method is \(11.84\%\) higher for the training set and \(12\%\) higher for the validation set with respect to AT. Finally, it is remarkable how similarity is even higher when the DCT+KD combination is used, which again indicates a high complementarity between both losses. \section{Conclusions} \label{sec:Conclusions} This paper proposes a novel approach to globally compare 2D structures or distributions by evaluating their similarity in the Discrete Cosine Transform domain. The proposed technique is the core of an Attention-based Knowledge Distillation method that aims to transfer knowledge from a teacher to a student model. Specifically, intermediate feature representations from the teacher and the student are used to obtain activation maps that are spatially matched using a DCT-based loss.
The proposal is applied to the scene recognition task, where the attention of trained models is highly correlated with performance. The reported results show that the proposed approach outperforms the state-of-the-art Knowledge Distillation approaches by better comparing attention maps. The presented results provide promising evidence that the use of 2D discrete linear transforms that efficiently capture 2D patterns might be helpful, not only for the Knowledge Distillation task, but also for other Computer Vision tasks where vectorial metrics, e.g., the \(\ell_{2}\) metric, are nowadays used by default. \section*{Acknowledgments} This study has been supported by the Spanish Government through the \textit{Formación de Personal Investigador} (FPI) program (PRE2018-084916 grant) from the TEC2017-88169-R MobiNetVideo project.
\section{Introduction} \symbolfootnote[0]{ Research partially supported by NSF grant No. DMS-0503735} Let $N^n$ be a complete noncompact Riemannian manifold of dimension $n$ and assume that the Ricci curvature has the lower bound $Ric\geq -(n-1).$ As a consequence of Cheng's theorem (\cite{C}) we know that if $\lambda _1\left( N\right) $ denotes the bottom of the spectrum of the Laplacian on $N $ then $\lambda _1\left( N\right) \leq \frac{\left( n-1\right) ^2}4.$ This is a sharp upper bound for $\lambda _1\left( N\right) $; the hyperbolic space form $\mathbb{H}^n$ is an example where equality is achieved. Recall that the proof of Cheng's theorem relies on the Laplacian comparison theorem for Riemannian manifolds, that is to say, on an upper bound of the Laplacian of the distance function on $N$. An interesting question is to study all manifolds satisfying the equality case in Cheng's theorem, i.e. those manifolds for which $Ric\geq -(n-1)$ and $\lambda _1\left( N\right) =\frac{\left( n-1\right) ^2}4.$ In \cite{L-W2} P. Li and J. Wang proved that if equality holds in Cheng's upper bound and $n\geq 3$ then either the manifold has one end or the manifold has two ends, in which case $N$ must either be (1) a warped product $N=\mathbb{R}\times P$ with $P$ compact and metric given by $ds_N^2=dt^2+\exp \left( 2t\right) ds_P^2$, or (2) if $n=3$ a warped product $N=\mathbb{R}\times P$ with $P$ compact and metric given by $ds_N^2=dt^2+\cosh ^2\left( t\right) ds_P^2$. In \cite{L-W3} P. Li and J. Wang have proved that Cheng's theorem has an analogue in the K\"{a}hler setting.
Throughout this paper $M^m$ is a complete noncompact K\"{a}hler manifold of complex dimension $m.$ If $ds^2=h_{\alpha \overline{\beta }}dz^\alpha d\overline{z}^\beta $ denotes the K\"{a}hler metric on $M,$ then $\mathrm{Re}\left( ds^2\right) $ defines a Riemannian metric on $M.$ Suppose $\left\{ e_1,e_2,...,e_{2m}\right\} $ with $e_{2k}=Je_{2k-1}$ for any $k\in \left\{ 1,...,m\right\} $ is an orthonormal frame with respect to this Riemannian metric, then $\left\{ v_1,...,v_m\right\} $ is a unitary frame of $T_x^{1,0}M,$ where \[ v_k=\frac 12(e_{2k-1}-\sqrt{-1}e_{2k}). \] Recall that the bisectional curvature $BK_M$ of $M$ is defined by \[ R_{\alpha \bar{\alpha}\beta \bar{\beta}}=<R_{v_\alpha v_{\bar{\alpha}}}v_\beta ,v_{\bar{\beta}}> \] and we say that $BK_M\geq -1$ on $M$ if for any $\alpha$ and $\beta$ \[ R_{\alpha \bar{\alpha}\beta \bar{\beta}}\geq -(1+\delta _{\alpha \bar{\beta}}). \] Note that for the space form $\mathbb{C}\mathbb{H}^m$ we have $BK_{\mathbb{C}\mathbb{H}^m}=-1.$ \begin{theorem} (P. Li and J. Wang) If $M^m$ is a complete noncompact K\"{a}hler manifold of complex dimension $m$ with $BK_M\geq -1$ then \[ \lambda _1\left( M\right) \leq m^2=\lambda _1\left( \mathbb{C}\mathbb{H}^m\right) . \] \end{theorem} Li-Wang proved this theorem in the spirit of Cheng's proof: they first obtained a Laplacian comparison theorem for manifolds with $BK_M\geq -1$ (Theorem 1.6 in \cite{L-W3}) and then the sharp estimate for $\lambda _1\left( M\right) $ follows. We would like to point out that the bisectional curvature assumption is essential in their proof of the Laplacian comparison theorem. An interesting question that one can ask is if the sharp estimate for $\lambda _1\left( M\right)$ from Theorem 1 remains true under a Ricci curvature bound from below. This question is motivated in part by the following situation in the compact K\"{a}hler case, where we have a version of Lichnerowicz's theorem.
Namely, if for a compact K\"{a}hler manifold $N^m$ the Ricci curvature has the lower bound $Ric_N\geq 2(m+1)$, then the first eigenvalue of the Laplacian has a sharp lower bound, $\lambda_1\left( N\right)\geq 4(m+1)$. We are grateful to Lei Ni for pointing out this result to us; for a simple proof of it see \cite{U}. In this paper, our first goal is to show that indeed there is a sharp estimate for $\lambda _1\left( M\right)$ under only a lower bound on the Ricci curvature. Our proof is based on the variational principle for $\lambda _1\left( M\right) $ and integration by parts. In fact, our argument can be localized on each end of the manifold. \begin{theorem} Let $M^m$ be a complete noncompact K\"{a}hler manifold of complex dimension $m$ such that the Ricci curvature is bounded from below by \[ Ric\geq -2\left( m+1\right) . \] Then if $E$ is an end of $M$ and $\lambda _1\left( E\right) $ is the infimum of the Dirichlet spectrum of the Laplacian on $E$, then \[ \lambda _1\left( E\right) \leq m^2. \] In particular, we have the sharp estimate \[ \lambda _1\left( M\right) \leq m^2. \] \end{theorem} Note that the condition on the Ricci curvature in Theorem 2 means \[ Ric(e_k,e_j)\geq -2\left( m+1\right) \delta _{kj} \] for any $k,j\in \{1,...,2m\},$ which is equivalent to \[ Ric_{\alpha \overline{\beta }}\geq -\left( m+1\right) \delta _{\alpha \overline{\beta }}, \] for the unitary frame $\left\{ v_1,v_2,...,v_m\right\} .$ As in the Riemannian setting, it is interesting to study the K\"{a}hler manifolds for which equality is achieved in Theorem 2. Let us recall that for a bisectional curvature lower bound, Li-Wang proved in \cite{L-W} that such manifolds have at most two ends. \begin{theorem} (P. Li and J. Wang) If $M^m$ is a complete noncompact K\"{a}hler manifold with $\lambda _1\left( M\right) =m^2$ and $BK_M\geq -1$ then $M$ has at most two ends.
\end{theorem} Their proof relies on a study of the Busemann function $\beta $ on $M,$ so the Laplacian comparison theorem again plays an important role. An interesting fact about their proof is that it gives a unified approach to the question in the Riemannian and K\"{a}hler cases. We now want to make some comments on the case when $M$ has exactly two ends. In this case, the proof of Li-Wang provides some structure information about the manifold. Namely, not only is the level set $\beta=t$ diffeomorphic, for any $t$, to the level set $\beta=t_0$ for some fixed $t_0$, but the metric on $\beta=t$ is also determined by the metric on $\beta=t_0$. Our second goal in this paper is to obtain the same conclusion on the number of ends if equality is achieved in Theorem 2. This will be done by a careful study of the estimates in Theorem 2. If $M$ is assumed to have exactly two ends, using our approach we will be able to deduce the same structure information about the manifold as discussed above, for the level sets of a function defined by the Li-Tam theory. \textbf{Remark}. After this paper was written the author was informed by Peter Li that the analysis of the two ends case can be deepened and in fact if $M$ has bounded curvature, then it is isometrically covered by $\mathbb{C}\mathbb{H}^m$. Examples are also known, with both bounded and unbounded curvature, see \cite{L-W}. We expect that this result can be recovered by our methods; in fact, this will become apparent towards the end of the proof of Theorem 4. \begin{theorem} Let $M^m$ be a complete noncompact K\"{a}hler manifold of complex dimension $m$ such that the Ricci curvature is bounded from below by \[ Ric\geq -2\left( m+1\right) . \] If $\lambda _1\left( M\right) =m^2$ then $M$ has at most two ends. \end{theorem} \textbf{Acknowledgement}. The author would like to express his deep gratitude to his advisor, Professor Peter Li, for his constant help, support and many valuable discussions.
\section{The proofs} To prove Theorem 2 and Theorem 4 we first need the following preparation. Let $E$ be a nonparabolic end of $M.$ Without loss of generality we will henceforth assume that $\lambda _1\left( E\right) >0.$ From the theory of Li-Tam (\cite{L-T}) we know that there exists a harmonic function $f$ on $E$ that is obtained by the following procedure. Let $f_R$ be the harmonic function with Dirichlet boundary conditions: $f_R=1$ on $\partial E,$ $f_R=0$ on $\partial E_p(R),$ where $E_p(R)=E\cap B_p\left( R\right) .$ Then $f_R$ admits a subsequence converging to $f,$ with the properties: $0<f<1$ on $E,$ $f=1$ on $\partial E$ and $f$ has finite Dirichlet integral. Moreover, since $\lambda _1\left( E\right) >0,$ we know by a theorem of Li-Wang (\cite{L-W1}) that \[ \int_{E_p\left( R+1\right) \backslash E_p\left( R\right) }f^2\leq c_1\exp \left( -2\sqrt{\lambda _1(E)}R\right) . \] Further on, integration on the level sets of $f$ will play a central role in our proofs, so let us recall the following important property of $f$ (\cite{L-W4}). Namely, for $t,a,b<1$ let \[ l\left( t\right) =\left\{ x\in E\left| \;f\left( x\right) =t\right. \right\} \] and define the set \[ L\left( a,b\right) =\left\{ x\in E\left| \;a<f\left( x\right) <b\right. \right\} . \] Then for almost all $t<1$ \[ \int_{l\left( t\right) }\left| \nabla f\right| =const<\infty \] (the constant being independent of $t$) and we have: \[ \int_{L\left( a,b\right) }\left| \nabla f\right| ^2=\left( b-a\right) \int_{l\left( t_0\right) }\left| \nabla f\right| . \] Let us denote \[ L=L(\frac 12\delta \varepsilon ,2\varepsilon ), \] where $\delta ,\varepsilon >0$ are sufficiently small fixed numbers to be chosen later. Since we will often integrate by parts on $L$, let us construct a cut-off $\phi $ with compact support in $L.$ Define $\phi =\psi \varphi $ with $\psi $ depending on the distance function \[ \psi =\left\{ \begin{array}{c} 1 \\ R-r \\ 0 \end{array} \left.
\begin{array}{c} \text{on} \\ \text{on} \\ \text{on} \end{array} \right. \left. \begin{array}{l} E_p\left( R-1\right) \\ E_p\left( R\right) \backslash E_p\left( R-1\right) \\ E\backslash E_p\left( R\right) \end{array} \right. \right. \] and $\varphi $ defined on the level sets of $f$% \[ \varphi =\left\{ \begin{array}{c} (\log 2)^{-1}(\log f-\log (\frac 12\delta \varepsilon )) \\ 1 \\ (\log 2)^{-1}(\log 2\varepsilon -\log f) \\ 0 \end{array} \left. \begin{array}{l} \text{on}\;\;L(\frac 12\delta \varepsilon ,\delta \varepsilon ) \\ \text{on}\;L\left( \delta \varepsilon ,\varepsilon \right) \\ \text{on}\;\;L\left( \varepsilon ,2\varepsilon \right) \\ \text{otherwise.} \end{array} \right. \right. \] For convenience, let us assume $R=\frac 1{\delta \varepsilon }.$ We have the following result: \begin{lemma} For any $0<a<2$ the following inequality holds: \begin{eqnarray*} \frac 1{16}\left( \frac 1m-\frac{\left( 1-a\right) ^2}{a\left( 2-a\right) }% \right) \frac 1{(-\log \delta )}\int_L\frac{\left| \nabla f\right| ^4}{f^3}% \phi ^2 &\leq &\frac a{2-a}\frac{m+1}4\int_{l\left( t_0\right) }\left| \nabla f\right| \\ &&+\frac c{(-\log \delta )^{\frac 12}}, \end{eqnarray*} where $c$ is a constant not depending on $\delta $ or $\varepsilon .$ \end{lemma} \textbf{Proof of Lemma 1.} Note that the gradient and the Laplacian satisfy: \begin{eqnarray*} \nabla f\cdot \nabla g &=&2\left( f_\alpha g_{\overline{\alpha }}+f_{% \overline{\alpha }}g_\alpha \right) \\ \Delta f &=&4f_{\alpha \overline{\alpha }}. \end{eqnarray*} Let $u=\log f,$ then a simple computation shows that \[ u_{\alpha \overline{\beta }}=f^{-1}f_{\alpha \overline{\beta }% }-f^{-2}f_\alpha f_{\overline{\beta }}. \] Consider now \[ \int_Lf\left| u_{\alpha \overline{\beta }}\right| ^2\phi ^2 \] which we estimate from above and from below to prove our claim. 
To begin with, \[ \int_Lf\left| u_{\alpha \overline{\beta }}\right| ^2\phi ^2=\int_Lf^{-1}\left| f_{\alpha \overline{\beta }}\right| ^2\phi ^2-2\int_Lf^{-2}(f_{\alpha \overline{\beta }}f_{\overline{\alpha }}f_\beta )\phi ^2+\frac 1{16}\int_Lf^{-3}\left| \nabla f\right| ^4\phi ^2. \] The first term is \begin{eqnarray*} \int_Lf^{-1}\left| f_{\alpha \overline{\beta }}\right| ^2\phi ^2 &=&\int_Lf^{-1}(f_{\alpha \overline{\beta }}\cdot f_{\overline{\alpha }\beta })\phi ^2=-\int_Lf_\alpha \left( f^{-1}f_{\overline{\alpha }\beta }\phi ^2\right) _{\overline{\beta }} \\ &=&\int_Lf^{-2}(f_{\overline{\alpha }\beta }f_\alpha f_{\overline{\beta }})\phi ^2-\int_Lf^{-1}f_\alpha f_{\overline{\alpha }\beta \overline{\beta }}\phi ^2-\int_Lf^{-1}f_{\overline{\alpha }\beta }f_\alpha \left( \phi ^2\right) _{\overline{\beta }} \end{eqnarray*} and using the Ricci identities and $\Delta f=0$ we see that $f_{\overline{\alpha }\beta \overline{\beta }}=0.$ This also shows that the last integral must be real. This proves that \begin{eqnarray} \int_Lf\left| u_{\alpha \overline{\beta }}\right| ^2\phi ^2 &=&-\int_Lf^{-2}(f_{\alpha \overline{\beta }}f_{\overline{\alpha }}f_\beta )\phi ^2 \nonumber \\ &&+\frac 1{16}\int_Lf^{-3}\left| \nabla f\right| ^4\phi ^2-\int_Lf^{-1}f_{\overline{\alpha }\beta }f_\alpha \left( \phi ^2\right) _{\overline{\beta }}. \label{1} \end{eqnarray} Let us again use integration by parts to see that \begin{eqnarray*} -\int_Lf^{-2}(f_{\alpha \overline{\beta }}f_{\overline{\alpha }}f_\beta )\phi ^2 &=&\int_Lf_\alpha \left( f^{-2}f_{\overline{\alpha }}f_\beta \phi ^2\right) _{\overline{\beta }}= \\ &=&-2\int_Lf^{-3}f_\alpha f_{\overline{\alpha }}f_\beta f_{\overline{\beta }}\phi ^2+\int_Lf^{-2}f_{\overline{\alpha }\overline{\beta }}f_\alpha f_\beta \phi ^2 \\ &&+\int_Lf^{-2}f_\alpha f_{\overline{\alpha }}f_\beta \left( \phi ^2\right) _{\overline{\beta }}.
\end{eqnarray*} Similarly, one finds \begin{eqnarray*} -\int_Lf^{-2}(f_{\alpha \overline{\beta }}f_{\overline{\alpha }}f_\beta )\phi ^2 &=&-\int_Lf^{-2}(f_{\overline{\alpha }\beta }f_\alpha f_{\bar{\beta}})\phi ^2=\int_Lf_{\bar{\alpha}}\left( f^{-2}f_\alpha f_{\bar{\beta}}\phi ^2\right) _\beta \\ &=&-2\int_Lf^{-3}f_\alpha f_{\overline{\alpha }}f_\beta f_{\overline{\beta }}\phi ^2+\int_Lf^{-2}f_{\alpha \beta }f_{\bar{\alpha}}f_{\bar{\beta}}\phi ^2 \\ &&+\int_Lf^{-2}f_\alpha f_{\overline{\alpha }}f_{\bar{\beta}}\left( \phi ^2\right) _\beta . \end{eqnarray*} Combining the two identities we get \begin{eqnarray} -\int_Lf^{-2}(f_{\alpha \overline{\beta }}f_{\overline{\alpha }}f_\beta )\phi ^2 &=&-\frac 18\int_Lf^{-3}\left| \nabla f\right| ^4\phi ^2+\int_Lf^{-2}Re\left( f_{\overline{\alpha }\overline{\beta }}f_\alpha f_\beta \right) \phi ^2 \nonumber \\ &&+\frac 14\int_Lf^{-2}\left| \nabla f\right| ^2Re(f_{\bar{\beta}}\left( \phi ^2\right) _\beta ). \label{2} \end{eqnarray} Note that the following inequality holds on $E$: \begin{equation} \left| f_{\overline{\alpha }\overline{\beta }}f_\alpha f_\beta \right| \leq \frac 14\left| f_{\alpha \beta }\right| \left| \nabla f\right| ^2. \label{3} \end{equation} We give the proof of this inequality in detail because it will matter when we study the manifolds with $\lambda _1\left( M\right) =m^2$. Since both sides of (\ref{3}) are independent of the unitary frame, let us choose a frame at the fixed point $x\in E$ such that \[ e_1=\frac 1{\left| \nabla f\right| }\nabla f. \] Of course, we need $\left| \nabla f\right| \left( x\right) \neq 0,$ which we may assume without loss of generality because if $\left| \nabla f\right| \left( x\right) =0$ there is nothing to prove. Then one can see that \[ f_{e_1}=\left| \nabla f\right| ,\;f_{e_2}=0,\dots ,f_{e_{2m}}=0 \] or, in the unitary frame \[ f_1=f_{\bar{1}}=\frac 12\left| \nabla f\right| ,\;\;\;f_\alpha =f_{\bar{\alpha}}=0\;\;\text{if}\;\alpha >1.
\] This proves the inequality because \[ \left| f_{\overline{\alpha }\overline{\beta }}f_\alpha f_\beta \right| =\frac 14\left| \nabla f\right| ^2\left| f_{11}\right| \leq \frac 14\left| f_{\alpha \beta }\right| \left| \nabla f\right| ^2. \] Moreover, we learn that equality holds in (\ref{3}) if and only if \[ f_{\alpha \beta }=0\;\;\text{for}\;(\alpha ,\beta )\neq \left( 1,1\right) , \] with respect to the frame chosen above. Since the following holds: \[ Re\left( f_{\overline{\alpha }\overline{\beta }}f_\alpha f_\beta \right) \leq \left| f_{\overline{\alpha }\overline{\beta }}f_\alpha f_\beta \right| \leq \frac 14\left| f_{\alpha \beta }\right| \left| \nabla f\right| ^2, \] we get for an arbitrary $a>0$ \begin{gather} 2\int_Lf^{-2}Re\left( f_{\overline{\alpha }\overline{\beta }}f_\alpha f_\beta \right) \phi ^2\leq \int_L2\left( f^{-1/2}\left| f_{\alpha \beta }\right| \phi \right) \left( \frac 14f^{-3/2}\left| \nabla f\right| ^2\phi \right) \nonumber \\ \leq a\int_Lf^{-1}\left| f_{\alpha \beta }\right| ^2\phi ^2+\frac 1{16a}\int_Lf^{-3}\left| \nabla f\right| ^4\phi ^2. 
\label{4} \end{gather} Moreover, again integrating by parts we have \begin{eqnarray*} \int_Lf^{-1}\left| f_{\alpha \beta }\right| ^2\phi ^2 &=&\int_Lf^{-1}f_{\alpha \beta }f_{\bar{\alpha}\bar{\beta}}\phi ^2=-\int_Lf_\alpha \left( f^{-1}f_{\bar{\alpha}\bar{\beta}}\phi ^2\right) _\beta \\ &=&\int_Lf^{-2}f_{\overline{\alpha }\overline{\beta }}f_\alpha f_\beta \phi ^2-\int_Lf^{-1}f_\alpha f_{\bar{\alpha}\bar{\beta}\beta }\phi ^2-\int_Lf^{-1}f_\alpha f_{\bar{\alpha}\bar{\beta}}\left( \phi ^2\right) _\beta \end{eqnarray*} and on the other hand \begin{eqnarray*} \int_Lf^{-1}\left| f_{\alpha \beta }\right| ^2\phi ^2 &=&\int_Lf^{-1}f_{\alpha \beta }f_{\bar{\alpha}\bar{\beta}}\phi ^2=-\int_Lf_{\bar{\alpha}}\left( f^{-1}f_{\alpha \beta }\phi ^2\right) _{\bar{\beta}} \\ &=&\int_Lf^{-2}f_{\alpha \beta }f_{\bar{\alpha}}f_{\bar{\beta}}\phi ^2-\int_Lf^{-1}f_{\bar{\alpha}}f_{\alpha \beta \bar{\beta}}\phi ^2-\int_Lf^{-1}f_{\bar{\alpha}}f_{\alpha \beta }\left( \phi ^2\right) _{\bar{\beta}} \end{eqnarray*} so that combining the two identities we get \begin{eqnarray*} \int_Lf^{-1}\left| f_{\alpha \beta }\right| ^2\phi ^2 &=&\int_Lf^{-2}Re(f_{\overline{\alpha }\overline{\beta }}f_\alpha f_\beta )\phi ^2-\int_Lf^{-1}f_\alpha f_{\bar{\alpha}\bar{\beta}\beta }\phi ^2 \\ &&-\int_Lf^{-1}Re(f_\alpha f_{\bar{\alpha}\bar{\beta}}\left( \phi ^2\right) _\beta ). \end{eqnarray*} Note that the Ricci identities imply \[ f_{\bar{\alpha}\bar{\beta}\beta }=f_{\bar{\beta}\bar{\alpha}\beta }=f_{\bar{\beta}\beta \bar{\alpha}}+Ric_{\beta \bar{\alpha}}f_{\bar{\beta}}, \] and since $f$ is harmonic we have $f_{\bar{\beta}\beta \bar{\alpha}}=0.$ Therefore, using the Ricci curvature lower bound we obtain \begin{eqnarray*} \int_Lf^{-1}\left| f_{\alpha \beta }\right| ^2\phi ^2 &\leq &\int_Lf^{-2}Re(f_{\overline{\alpha }\overline{\beta }}f_\alpha f_\beta )\phi ^2+\frac{m+1}4\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^2 \\ &&-\int_Lf^{-1}Re(f_\alpha f_{\bar{\alpha}\bar{\beta}}\left( \phi ^2\right) _\beta ).
\end{eqnarray*} Plugging this inequality into (\ref{4}), it follows that \begin{gather*} \left( 2-a\right) \int_Lf^{-2}Re\left( f_{\overline{\alpha }\overline{\beta }}f_\alpha f_\beta \right) \phi ^2\leq a\frac{m+1}4\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^2 \\ +\frac 1{16a}\int_Lf^{-3}\left| \nabla f\right| ^4\phi ^2-a\int_Lf^{-1}Re(f_\alpha f_{\bar{\alpha}\bar{\beta}}\left( \phi ^2\right) _\beta ). \end{gather*} Let us henceforth fix $0<a<2,$ so that $2-a>0.$ Returning now to (\ref{2}), we obtain \begin{gather} -\int_Lf^{-2}(f_{\alpha \overline{\beta }}f_{\overline{\alpha }}f_\beta )\phi ^2\leq \left( -\frac 18+\frac 1{16a(2-a)}\right) \int_Lf^{-3}\left| \nabla f\right| ^4\phi ^2 \nonumber \\ +\frac a{2-a}\frac{m+1}4\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^2-\frac a{2-a}\int_Lf^{-1}Re(f_\alpha f_{\bar{\alpha}\bar{\beta}}\left( \phi ^2\right) _\beta ) \nonumber \\ +\frac 14\int_Lf^{-2}\left| \nabla f\right| ^2Re(f_{\bar{\beta}}\left( \phi ^2\right) _\beta ). \label{5} \end{gather} We have thus proved that \begin{gather} \int_Lf\left| u_{\alpha \overline{\beta }}\right| ^2\phi ^2\leq \left( -\frac 1{16}+\frac 1{16a(2-a)}\right) \int_Lf^{-3}\left| \nabla f\right| ^4\phi ^2 \nonumber \\ +\frac a{2-a}\frac{m+1}4\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^2+\frac 14\int_Lf^{-2}\left| \nabla f\right| ^2Re(f_{\bar{\beta}}\left( \phi ^2\right) _\beta ) \nonumber \\ -\int_Lf^{-1}f_{\overline{\alpha }\beta }f_\alpha \left( \phi ^2\right) _{\overline{\beta }}-\frac a{2-a}\int_Lf^{-1}Re(f_\alpha f_{\bar{\alpha}\bar{\beta}}\left( \phi ^2\right) _\beta ).
\label{6} \end{gather} To finish the upper estimate of $\int_Lf\left| u_{\alpha \overline{\beta }}\right| ^2\phi ^2$ we need to estimate the terms involving $\left( \phi ^2\right) _\beta .$ We will prove that they can be bounded from above by a constant multiple of $(-\log \delta )^{1/2}.$ Start with \begin{eqnarray*} 2\int_Lf^{-2}\left| \nabla f\right| ^2Re(f_{\bar{\beta}}\left( \phi ^2\right) _\beta ) &\leq &\frac 12\int_Lf^{-2}\left| \nabla f\right| ^3\left| \nabla \phi ^2\right| \\ &\leq &\int_Lf^{-2}\left| \nabla f\right| ^3\left| \nabla \varphi \right| \psi +\int_Lf^{-2}\left| \nabla f\right| ^3\left| \nabla \psi \right| \varphi . \end{eqnarray*} Now it is easy to see that by the gradient estimate and the co-area formula \begin{eqnarray*} \int_Lf^{-2}\left| \nabla f\right| ^3\left| \nabla \varphi \right| &\leq &c_2(\int_{L(\frac 12\delta \varepsilon ,\delta \varepsilon )}f^{-1}\left| \nabla f\right| ^2+\int_{L(\varepsilon ,2\varepsilon )}f^{-1}\left| \nabla f\right| ^2) \\ &\leq &c_3, \end{eqnarray*} while by the decay rate of $f^2$ we get \[ \int_Lf^{-2}\left| \nabla f\right| ^3\left| \nabla \psi \right| \leq c_4\frac 1{\delta \varepsilon }\exp \left( -2\sqrt{\lambda _1(E)}R\right) \leq c_5, \] using that $R=\frac 1{\delta \varepsilon }.$ Clearly, the constants so far do not depend on the choice of $\delta $ or $\varepsilon .$ To estimate the other terms one proceeds similarly.
For example, \begin{eqnarray*} -2\int_Lf^{-1}Re(f_\alpha f_{\bar{\alpha}\bar{\beta}}\left( \phi ^2\right) _\beta ) &\leq &\int_Lf^{-1}\left| f_{\bar{\alpha}\bar{\beta}}\right| \left| \nabla f\right| \phi \left| \nabla \phi \right| \\ &\leq &\left( \int_Lf^{-1}\left| \nabla f\right| ^2\left| \nabla \phi \right| ^2\right) ^{\frac 12}\left( \int_Lf^{-1}\left| f_{\bar{\alpha}\bar{\beta}}\right| ^2\phi ^2\right) ^{\frac 12} \\ &\leq &c_6\left( \int_Lf^{-1}\left| f_{\bar{\alpha}\bar{\beta}}\right| ^2\phi ^2\right) ^{\frac 12}. \end{eqnarray*} Moreover, using an inequality proved above we get \begin{eqnarray*} \int_Lf^{-1}\left| f_{\alpha \beta }\right| ^2\phi ^2 &\leq &\int_Lf^{-2}Re(f_{\overline{\alpha }\overline{\beta }}f_\alpha f_\beta )\phi ^2+\frac{m+1}4\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^2 \\ &&-\int_Lf^{-1}Re(f_\alpha f_{\bar{\alpha}\bar{\beta}}\left( \phi ^2\right) _\beta ) \\ &\leq &\frac 14\int_Lf^{-2}\left| f_{\alpha \beta }\right| \left| \nabla f\right| ^2\phi ^2+\frac{m+1}4\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^2 \\ &&+\frac 12\int_Lf^{-1}\left| f_{\alpha \beta }\right| \left| \nabla f\right| \phi \left| \nabla \phi \right| \\ &\leq &\frac 18\int_Lf^{-1}\left| f_{\alpha \beta }\right| ^2\phi ^2+\frac 18\int_Lf^{-3}\left| \nabla f\right| ^4\phi ^2 \\ &&+\frac{m+1}4\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^2 \\ &&+\frac 14\int_Lf^{-1}\left| f_{\alpha \beta }\right| ^2\phi ^2+\frac 14\int_Lf^{-1}\left| \nabla f\right| ^2\left| \nabla \phi \right| ^2, \end{eqnarray*} which shows that there exist constants $c_7$ and $c_8$ such that \begin{eqnarray*} \int_Lf^{-1}\left| f_{\bar{\alpha}\bar{\beta}}\right| ^2\phi ^2 &\leq &c_7\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^2+c_8\int_Lf^{-1}\left| \nabla f\right| ^2\left| \nabla \phi \right| ^2 \\ &\leq &c_9\left( -\log \delta \right) .
\end{eqnarray*} We have proved that \[ \int_Lf^{-1}\left| f_{\bar{\alpha}\bar{\beta}}\right| \left| \nabla f\right| \phi \left| \nabla \phi \right| \leq c_{10}(-\log \delta )^{\frac 12}. \] Let us gather the information we have so far: \begin{eqnarray*} \int_Lf\left| u_{\alpha \overline{\beta }}\right| ^2\phi ^2 &\leq &\left( -\frac 1{16}+\frac 1{16a(2-a)}\right) \int_Lf^{-3}\left| \nabla f\right| ^4\phi ^2 \\ &&+\frac a{2-a}\frac{m+1}4\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^2+c(-\log \delta )^{\frac 12}. \end{eqnarray*} The estimate from below is straightforward: \begin{equation} \left| u_{\alpha \overline{\beta }}\right| ^2\geq \sum_\alpha \left| u_{\alpha \bar{\alpha}}\right| ^2\geq \frac 1m\left| \sum_\alpha u_{\alpha \overline{\alpha }}\right| ^2=\frac 1{16m}f^{-4}\left| \nabla f\right| ^4. \label{7} \end{equation} Hence, this shows that \begin{eqnarray*} \frac 1{16}\left( 1+\frac 1m-\frac 1{a(2-a)}\right) \int_Lf^{-3}\left| \nabla f\right| ^4\phi ^2 &\leq &\frac a{2-a}\frac{m+1}4\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^2 \\ &&+c(-\log \delta )^{\frac 12}, \end{eqnarray*} which proves the Lemma. \textbf{Q.E.D.} In the following lemma, we will estimate $\int_Lf^{-3}\left| \nabla f\right| ^4\phi ^2$ from below. To serve our purpose, we need this estimate to depend on $\lambda _1\left( E\right) ,$ and this is done using the variational principle. Recall that $E$ is a nonparabolic end, $\lambda _1\left( E\right) >0$ and we set $L=L\left( \frac 12\delta \varepsilon ,2\varepsilon \right) $ for $\delta ,\varepsilon $ sufficiently small. \begin{lemma} \[ \frac 1{(-\log \delta )}\int_Lf^{-3}\left| \nabla f\right| ^4\phi ^2\geq 4\lambda _1\left( E\right) \int_{l(t_0)}\left| \nabla f\right| -\frac{c_0}{(-\log \delta )^{\frac 12}}.
\] \end{lemma} \textbf{Proof of Lemma 2.} By the variational principle for $\lambda _1\left( E\right) ,$ \[ \lambda _1\left( E\right) \int_Ef\phi ^2\leq \int_E\left| \nabla \left( \phi f^{\frac 12}\right) \right| ^2, \] which means that \begin{eqnarray*} \lambda _1\left( E\right) \int_Lf\phi ^2 &\leq &\frac 14\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^2+\int_Lf\left| \nabla \phi \right| ^2+\int_L\phi \left| \nabla f\right| \left| \nabla \phi \right| \\ &\leq &\frac 14\int_{L\left( \delta \varepsilon ,\varepsilon \right) }f^{-1}\left| \nabla f\right| ^2+c_{11}, \end{eqnarray*} based on estimates similar to those in the proof of Lemma 1. This implies that \[ \frac 1{(-\log \delta )}\int_Lf\phi ^2\leq \frac 1{4\lambda _1\left( E\right) }\int_{l(t_0)}\left| \nabla f\right| +\frac{c_{11}}{(-\log \delta )}. \] Finally, using the Schwarz inequality we get \begin{eqnarray*} \int_{l(t_0)}\left| \nabla f\right| &=&\frac 1{(-\log \delta )}\int_{L\left( \delta \varepsilon ,\varepsilon \right) }f^{-1}\left| \nabla f\right| ^2 \\ &\leq &\frac 1{(-\log \delta )}\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^2+\frac 1{(-\log \delta )}\int_{L\cap \left( E\backslash E_p\left( R-1\right) \right) }f^{-1}\left| \nabla f\right| ^2 \\ &\leq &\left( \frac 1{(-\log \delta )}\int_Lf^{-3}\left| \nabla f\right| ^4\phi ^2\right) ^{\frac 12}\left( \frac 1{(-\log \delta )}\int_Lf\phi ^2\right) ^{\frac 12} \\ &&+\frac{c_{12}}{(-\log \delta )}\frac 1{\delta \varepsilon }\exp \left( -2\sqrt{\lambda _1(E)}R\right) \\ &\leq &\left( \frac 1{(-\log \delta )}\int_Lf^{-3}\left| \nabla f\right| ^4\phi ^2\right) ^{\frac 12}\times \\ &&\times \left( \frac 1{4\lambda _1\left( E\right) }\int_{l(t_0)}\left| \nabla f\right| +\frac{c_{11}}{(-\log \delta )}\right) ^{\frac 12}+\frac{c_{13}}{(-\log \delta )}, \end{eqnarray*} which proves the Lemma. \textbf{Q.E.D.}
Suppose now that $M$ has a parabolic end $F.$ A theorem of Nakai (\cite{N}, see also \cite{N-R}) states that there exists an exhaustion function $f$ on $\overline{F}$ which is harmonic on $F$ and $f=0$ on $\partial F.$ In this case we consider for $T,\beta >0$ fixed \[ \phi =\left\{ \begin{array}{c} (\log 2)^{-1}(\log f-\log (\frac 12T)) \\ 1 \\ (\log 2)^{-1}(\log (2\beta T)-\log f) \\ 0 \end{array} \left. \begin{array}{l} \text{on}\;\;L(\frac 12T,T) \\ \text{on}\;L\left( T,\beta T\right) \\ \text{on}\;\;L\left( \beta T,2\beta T\right) \\ \text{otherwise,} \end{array} \right. \right. \] where the level sets are now defined on $F.$ Since $f$ is proper, there is no need for a cut-off depending on the distance function. Our point now is that Lemma 1 and Lemma 2 hold for this choice of $\phi $ as well; the proofs are identical. Note that if \[ \tilde{L}=L(\frac 12T,2\beta T) \] then the following inequalities hold on $\tilde{L}$: \begin{eqnarray*} \frac 1{16}\left( \frac 1m-\frac{\left( 1-a\right) ^2}{a\left( 2-a\right) }\right) \frac 1{\log \beta }\int_{\tilde{L}}\frac{\left| \nabla f\right| ^4}{f^3}\phi ^2 &\leq &\frac a{2-a}\frac{m+1}4\int_{l\left( t_0\right) }\left| \nabla f\right| \\ &&+\frac{\widetilde{c}}{(\log \beta )^{\frac 12}}, \end{eqnarray*} and \[ \frac 1{\log \beta }\int_{\tilde{L}}f^{-3}\left| \nabla f\right| ^4\phi ^2\geq 4\lambda _1\left( F\right) \int_{l(t_0)}\left| \nabla f\right| -\frac{\widetilde{c}_0}{(\log \beta )^{\frac 12}}. \] \bigskip Now we are ready to prove Theorem 2.
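Before giving the proof, let us record an elementary remark (included here only for the reader's convenience) explaining the choice of the parameter $a$ made below. Combining Lemma 1 and Lemma 2 and letting $\delta \rightarrow 0$ will give a bound of the form \[ \lambda _1\left( E\right) \leq F\left( a\right) ,\;\;\;F\left( a\right) =\frac{a\left( m+1\right) }{2-a}\left( \frac 1m-\frac{\left( 1-a\right) ^2}{a\left( 2-a\right) }\right) ^{-1}=\frac{m\left( m+1\right) a^2}{2\left( m+1\right) a-\left( m+1\right) a^2-m}, \] valid for every $0<a<2$ for which the expression in the parenthesis is positive. A direct computation shows that on this range \[ F^{\prime }\left( a\right) =0\;\;\text{if and only if}\;\;a=\frac m{m+1},\;\;\;\text{and}\;\;F\left( \frac m{m+1}\right) =m^2, \] so the choice $a=\frac m{m+1}$ is optimal and yields the sharp constant $m^2.$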
\vspace{0.5in} \textbf{Proof of Theorem 2.} Let us first prove the Theorem for a nonparabolic end $E.$ We know from Lemma 1 and Lemma 2 that \[ \lambda _1\left( E\right) \frac 14\left( \frac 1m-\frac{\left( 1-a\right) ^2}{a\left( 2-a\right) }\right) \int_{l(t_0)}\left| \nabla f\right| \leq \frac a{2-a}\frac{m+1}4\int_{l\left( t_0\right) }\left| \nabla f\right| +\frac C{(-\log \delta )^{\frac 12}}, \] an inequality that holds for any $\delta >0$ and any $0<a<2.$ Therefore, letting $\delta \rightarrow 0$ we get that for any $0<a<2,$ \[ \lambda _1\left( E\right) \leq \frac{a\left( m+1\right) }{2-a}\left( \frac 1m-\frac{\left( 1-a\right) ^2}{a(2-a)}\right) ^{-1}. \] Let us choose $a=\frac m{m+1}$; then \[ \frac 1m-\frac{\left( 1-a\right) ^2}{a(2-a)}=\frac 1m\frac{m+1}{m+2}\;;\;\frac{a\left( m+1\right) }{2-a}=\frac{m(m+1)}{m+2}, \] which shows that \[ \lambda _1\left( E\right) \leq m^2. \] This proves Theorem 2 for a nonparabolic end $E$. The proof for a parabolic end $F$ is identical. \textbf{Q.E.D.} \vspace{0.5in} \textbf{Proof of Theorem 4:} Suppose that $M$ has more than one end. We know from \cite{L-W} that if $\lambda _1\left( M\right) >\frac{m+1}2$ the manifold has only one nonparabolic end. Hence let us denote by $E$ this nonparabolic end, and consequently $F=M\backslash E$ will be a parabolic end. Note that an end of $M$ is defined with respect to a compact subset of $M$, so that writing $M=E\cup F$ with $E$ nonparabolic and $F$ parabolic we are not losing generality; in fact $M$ can have many ends with respect to other compact subsets. The construction of Li-Tam implies that there exists a harmonic function $f:M\rightarrow \left( 0,\infty \right) $ with the following properties: 1. On $E$ the function has the decay rate \[ \int_{E_p\left( R\right) \backslash E_p\left( R-1\right) }f^2\leq c_1\exp \left( -2\sqrt{\lambda _1(M)}R\right) , \] 2. On $F$ the function is proper. 3. We have: \[ \sup_{x\in F}f\left( x\right) =\infty ,\;\;\inf_{x\in E}f\left( x\right) =0.
\] Let us point out some facts about the proofs of Lemma 1 and Lemma 2. In the two lemmata, the function $f$ was defined only on a single end, which was first assumed to be nonparabolic, and then we observed that the proofs still work on a parabolic end. In the framework of Theorem 4, we know that $f $ is defined on the whole manifold, so now $L=L\left( b_0,b_1\right) =\left\{ x\in M\left| \;b_0<f\left( x\right) <b_1\right. \right\} $ makes sense for any $0<b_0<b_1$. One can see that the computations proved in Lemma 1 are true for $L$ and moreover we may replace everywhere $\phi ^2$ with $% \phi ^3.$ With this in mind, let us fix $b_0=\delta \varepsilon ,$ $% b_1=\beta T,$ where $0<\delta \varepsilon <\varepsilon <T<\beta T$ and for convenience choose $\beta =\frac 1\delta $. Hence, everywhere in this proof \[ L=L\left( \delta \varepsilon ,\beta T\right) , \] and $a=\frac m{m+1}.$ The proof of this theorem is based on a more detailed study of inequalities in Lemma 1 and Lemma 2. We want to prove that $\lambda _1\left( M\right) =m^2 $ forces all the inequalities to become equalities on $L\left( \varepsilon ,T\right) .$ Since $\varepsilon ,T$ are arbitrary, it will follow that we need to have equalities everywhere on $M.$ Choose $\phi =\varphi \psi ,$ where \[ \psi =\left\{ \begin{array}{c} 1 \\ R-r \\ 0 \end{array} \left. \begin{array}{c} \text{on} \\ \text{on} \\ \text{on} \end{array} \left. \begin{array}{l} E_p\left( R-1\right) \cup F \\ E_p\left( R\right) \backslash E_p\left( R-1\right) \\ E\backslash E_p\left( R\right) \end{array} \right. \right. \right. \] and \[ \varphi =\left\{ \begin{array}{c} (-\log \delta )^{-1}(\log f-\log \left( \delta \varepsilon \right) ) \\ 0 \\ (\log \beta )^{-1}(\log (\beta T)-\log f) \\ 1 \end{array} \left. 
\begin{array}{l} \text{on}\;\;\;L\left( \delta \varepsilon ,\varepsilon \right) \\ \text{on}\;\;\;L\left( 0, \delta \varepsilon \right) \cup \left(L\left(\beta T, \infty \right)\cap F\right) \\ \text{on}\;\;\;L(T,\beta T)\cap F \\ \text{otherwise.} \end{array} \right. \right. \] Recall that by (\ref{6}) and (\ref{7}) we have \begin{gather} \frac 1{16}(\frac 1m-\frac{\left( 1-a\right) ^2}{a(2-a)})\int_Lf^{-3}\left| \nabla f\right| ^4\phi ^3\leq \frac a{2-a}\frac{m+1}4\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^3 \nonumber \\ +\frac 14\int_Lf^{-2}\left| \nabla f\right| ^2Re(f_{\bar{\beta}}\left( \phi ^3\right) _\beta )-\int_Lf^{-1}f_{\overline{\alpha }\beta }f_\alpha \left( \phi ^3\right) _{\overline{\beta }} \nonumber \\ -\frac a{2-a}\int_Lf^{-1}Re(f_\alpha f_{\bar{\alpha}\bar{\beta}}\left( \phi ^3\right) _\beta ). \label{8} \end{gather} On the other hand, Schwarz inequality implies \begin{equation} \left( \int_Lf^{-1}\left| \nabla f\right| ^2\phi ^3\right) ^2\leq \left( \int_Lf^{-3}\left| \nabla f\right| ^4\phi ^3\right) \left( \int_Lf\phi ^3\right) , \label{9} \end{equation} and by the variational principle it follows \begin{eqnarray*} \lambda _1\left( M\right) \int_Lf\phi ^3 &\leq &\int_L\left| \nabla \left( f^{\frac 12}\phi ^{\frac 32}\right) \right| ^2 \\ &=&\frac 14\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^3+\frac 94\int_Lf\phi \left| \nabla \phi \right| ^2+\frac 32\int_L\phi ^2\nabla f\cdot \nabla \phi . \end{eqnarray*} Our point now is that a careful study of the two $\nabla \phi -$terms shows that they converge to zero as $\beta \rightarrow \infty $ (and $\delta =\frac 1\beta \rightarrow 0$). It is clear that $\frac 94\int_Lf\phi \left| \nabla \phi \right| ^2\leq \frac{c_1}{\log \beta },$ while \[ \int_L\phi ^2\nabla f\cdot \nabla \phi =\frac 1{(-\log \delta )}\int_{L\left( \delta \varepsilon ,\varepsilon \right) }f^{-1}\left| \nabla f\right| ^2\phi ^2-\frac 1{\log \beta }\int_{L\left( T,\beta T\right) \cap F}f^{-1}\left| \nabla f\right| ^2\phi ^2. 
\] The integral on $F$ is readily found by the co-area formula: \begin{eqnarray*} \frac 1{\log \beta }\int_{L\left( T,\beta T\right) \cap F}f^{-1}\left| \nabla f\right| ^2\phi ^2 &=&(\int_{l\left( t_0\right) }\left| \nabla f\right| )\int_T^{\beta T}t^{-1}\left( \frac{\log (\beta T)-\log t}{\log \beta }\right) ^2dt \\ &=&\frac 13\int_{l\left( t_0\right) }\left| \nabla f\right| . \end{eqnarray*} It is clear that the same formula holds on $E$ if we integrate against $% \varphi ^2$ and therefore: \[ \frac 1{(-\log \delta )}\int_{L\left( \delta \varepsilon ,\varepsilon \right) }f^{-1}\left| \nabla f\right| ^2\phi ^2\leq \frac 1{(-\log \delta )}\int_{L\left( \delta \varepsilon ,\varepsilon \right) }\varphi ^2\left| \nabla f\right| ^2f^{-1}=\frac 13\int_{l\left( t_0\right) }\left| \nabla f\right| . \] For later use, observe that a converse of the latter inequality also holds: \begin{eqnarray*} \frac 1{(-\log \delta )}\int_{L\left( \delta \varepsilon ,\varepsilon \right) }f^{-1}\left| \nabla f\right| ^2\phi ^2 &\geq &\frac 1{(-\log \delta )}\int_{L\left( \delta \varepsilon ,\varepsilon \right) }f^{-1}\left| \nabla f\right| ^2\varphi ^2 \\ &&-\frac 1{(-\log \delta )}\int_{L\left( \delta \varepsilon ,\varepsilon \right) \cap \left( E\backslash E_p\left( R-1\right) \right) }f^{-1}\left| \nabla f\right| ^2\varphi ^2 \\ &\geq &\frac 13\int_{l\left( t_0\right) }\left| \nabla f\right| -\frac{c_2}{% (-\log \delta )}. \end{eqnarray*} In particular, from the above estimates it follows \[ \int_L\phi ^2\nabla f\cdot \nabla \phi \leq 0. 
\] We have thus proved that \[ \lambda _1\left( M\right) \int_Lf\phi ^3\leq \frac 14\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^3+\frac{c_1}{\log \beta }, \] which, plugged into (\ref{9}), yields \begin{eqnarray*} \int_Lf^{-3}\left| \nabla f\right| ^4\phi ^3 &\geq &4\lambda _1\left( M\right) \frac{\left( \int_Lf^{-1}\left| \nabla f\right| ^2\phi ^3\right) ^2}{\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^3+\frac{c_3}{\log \beta }} \\ &=&4\lambda _1\left( M\right) \int_Lf^{-1}\left| \nabla f\right| ^2\phi ^3-\frac{c_4}{\log \beta }\frac{\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^3}{\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^3+\frac{c_3}{\log \beta }} \\ &\geq &4\lambda _1\left( M\right) \int_Lf^{-1}\left| \nabla f\right| ^2\phi ^3-\frac{c_4}{\log \beta }. \end{eqnarray*} Returning now to (\ref{8}) and using this lower bound, it follows that \begin{eqnarray} 0 &\leq &\frac{c_5}{\log \beta }+\frac 14\int_Lf^{-2}\left| \nabla f\right| ^2Re(f_{\bar{\beta}}\left( \phi ^3\right) _\beta ) \nonumber \\ &&-\int_Lf^{-1}f_{\overline{\alpha }\beta }f_\alpha \left( \phi ^3\right) _{\overline{\beta }}-\frac a{2-a}\int_Lf^{-1}Re(f_\alpha f_{\bar{\alpha}\bar{\beta}}\left( \phi ^3\right) _\beta ). \label{10} \end{eqnarray} \textbf{Claim:} There exists a constant $c\geq 0$ such that \begin{gather*} \frac 14\int_Lf^{-2}\left| \nabla f\right| ^2Re(f_{\bar{\beta}}\left( \phi ^3\right) _\beta )-\int_Lf^{-1}f_{\overline{\alpha }\beta }f_\alpha \left( \phi ^3\right) _{\overline{\beta }} \\ -\frac a{2-a}\int_Lf^{-1}Re(f_\alpha f_{\bar{\alpha}\bar{\beta}}\left( \phi ^3\right) _\beta )\leq \frac c{(\log \beta )^{\frac 12}}. \end{gather*} \textbf{Proof of the claim}. Let us study each of the three terms on the left hand side. I.
We have:
\begin{eqnarray*}
\frac 14\int_Lf^{-2}\left| \nabla f\right| ^2Re(f_{\bar{\beta}}\left( \phi ^3\right) _\beta ) &=&\frac 3{16}\int_L\phi ^2f^{-2}\left| \nabla f\right| ^2\nabla f\cdot \nabla \phi \\
&=&\frac 3{16}\frac 1{(-\log \delta )}\int_{L\left( \delta \varepsilon ,\varepsilon \right) }f^{-3}\left| \nabla f\right| ^4\phi ^2 \\
&&-\frac 3{16}\frac 1{\log \beta }\int_{L\left( T,\beta T\right) \cap F}f^{-3}\left| \nabla f\right| ^4\phi ^2.
\end{eqnarray*}
As we stressed above, the estimates in Lemma 1 and Lemma 2 are true on any end. Certainly, the $\phi $ here is not the same on $L\left( \delta \varepsilon ,\varepsilon \right) $ as the $\phi $ from Lemma 1. Nevertheless, the computations are the same. In fact, in this case there is no need to consider a cut-off $\varphi $ on $L\left( \frac 12\delta \varepsilon ,\delta \varepsilon \right) ,$ because $\phi $ is already zero there. On $L\left( \varepsilon ,2\varepsilon \right) $ the cut-off $\varphi $ is the same as in Lemma 1. Therefore, one can use Lemma 1 for $L\left( \delta \varepsilon ,\varepsilon \right) $ and Lemma 2 for $L(T,\beta T)\cap F$ to estimate the above difference. By Lemma 1 we know that
\begin{eqnarray*}
\frac 1{(-\log \delta )}\int_{L\left( \delta \varepsilon ,\varepsilon \right) }f^{-3}\left| \nabla f\right| ^4\phi ^2 &\leq &4\lambda _1\left( M\right) \frac 1{(-\log \delta )}\int_{L\left( \delta \varepsilon ,\varepsilon \right) }f^{-1}\left| \nabla f\right| ^2\phi ^2 \\
&&+\frac{c_6}{(-\log \delta )^{\frac 12}} \\
&\leq &\frac 43\lambda _1\left( M\right) \int_{l\left( t_0\right) }\left| \nabla f\right| +\frac{c_6}{(-\log \delta )^{\frac 12}},
\end{eqnarray*}
while Lemma 2 implies that
\[
\frac 1{\log \beta }\int_{L\left( T,\beta T\right) \cap F}f^{-3}\left| \nabla f\right| ^4\phi ^2\geq \frac 43\lambda _1\left( M\right) \int_{l\left( t_0\right) }\left| \nabla f\right| -\frac{c_7}{(\log \beta )^{\frac 12}}.
\]
Combining the two estimates, we obtain
\[
\frac 14\int_Lf^{-2}\left| \nabla f\right| ^2Re(f_{\bar{\beta}}\left( \phi ^3\right) _\beta )\leq \frac{c_8}{(\log \beta )^{\frac 12}}.
\]
II. Start with
\begin{eqnarray*}
-\int_Lf^{-1}f_{\overline{\alpha }\beta }f_\alpha \left( \phi ^3\right) _{\overline{\beta }} &=&-\frac 3{(-\log \delta )}\int_{L\left( \delta \varepsilon ,\varepsilon \right) }f^{-2}(f_{\overline{\alpha }\beta }f_\alpha f_{\bar{\beta}})\phi ^2 \\
&&+\frac 3{\log \beta }\int_{L\left( T,\beta T\right) \cap F}f^{-2}(f_{\overline{\alpha }\beta }f_\alpha f_{\bar{\beta}})\phi ^2.
\end{eqnarray*}
From (\ref{1}) and (\ref{7}) we have:
\begin{gather*}
\frac 1{\log \beta }\int_{L\left( T,\beta T\right) \cap F}f^{-2}(f_{\overline{\alpha }\beta }f_\alpha f_{\bar{\beta}})\phi ^2\leq \left( \frac 1{16}-\frac 1{16m}\right) \frac 1{\log \beta }\times \\
\times \int_{L\left( T,\beta T\right) \cap F}f^{-3}\left| \nabla f\right| ^4\phi ^2+\frac{c_9}{(\log \beta )^{\frac 12}} \\
\leq \left( \frac 1{16}-\frac 1{16m}\right) \frac 43\lambda _1\left( M\right) \int_{l\left( t_0\right) }\left| \nabla f\right| +\frac{c_{10}}{(\log \beta )^{\frac 12}},
\end{gather*}
while from (\ref{5}) we know that
\begin{gather*}
-\frac 1{(-\log \delta )}\int_{L\left( \delta \varepsilon ,\varepsilon \right) }f^{-2}(f_{\overline{\alpha }\beta }f_\alpha f_{\bar{\beta}})\phi ^2\leq \left( -\frac 18+\frac 1{16a\left( 2-a\right) }\right) \frac 1{(-\log \delta )}\times \\
\times \int_{L\left( \delta \varepsilon ,\varepsilon \right) }f^{-3}\left| \nabla f\right| ^4\phi ^2 \\
+\frac a{2-a}\frac{m+1}4\frac 1{(-\log \delta )}\int_{L\left( \delta \varepsilon ,\varepsilon \right) }f^{-1}\left| \nabla f\right| ^2\phi ^2+\frac{c_{11}}{(-\log \delta )^{\frac 12}} \\
\leq \left( \left( -\frac 18+\frac 1{16a\left( 2-a\right) }\right) \frac 43\lambda _1\left( M\right) +\frac a{2-a}\frac{m+1}4\frac 13\right) \int_{l\left( t_0\right) }\left| \nabla f\right| \\
+\frac{c_{12}}{(-\log \delta )^{\frac 12}},
\end{gather*} using the estimates in I. It is easy to see that the coefficients of $% \int_{l\left( t_0\right) }\left| \nabla f\right| $ cancel out (this comes as no surprise) and therefore \[ -\int_Lf^{-1}f_{\overline{\alpha }\beta }f_\alpha \left( \phi ^3\right) _{% \overline{\beta }}\leq \frac{c_{13}}{(\log \beta )^{\frac 12}}. \] Note also that in a similar fashion it can be proved that \[ \int_Lf^{-1}f_{\overline{\alpha }\beta }f_\alpha \left( \phi ^3\right) _{% \overline{\beta }}\leq \frac{c_{14}}{(\log \beta )^{\frac 12}}. \] III. Finally, by (\ref{2}) one has: \begin{gather*} -\int_Lf^{-1}Re(f_\alpha f_{\bar{\alpha}\bar{\beta}}\left( \phi ^3\right) _\beta )=-\frac 3{(-\log \delta )}\int_{L\left( \delta \varepsilon ,\varepsilon \right) }f^{-2}Re(f_{\overline{\alpha }\bar{\beta}}f_\alpha f_{% \bar{\beta}})\phi ^2 \\ +\frac 3{\log \beta }\int_{L\left( T,\beta T\right) \cap F}f^{-2}Re(f_{% \overline{\alpha }\beta }f_\alpha f_{\bar{\beta}})\phi ^2 \\ \leq -\frac 3{8(-\log \delta )}\int_{L\left( \delta \varepsilon ,\varepsilon \right) }f^{-3}\left| \nabla f\right| ^4\phi ^2+\frac 3{(-\log \delta )}\int_{L\left( \delta \varepsilon ,\varepsilon \right) }f^{-2}(f_{\alpha \bar{\beta}}f_{\bar{\alpha}}f_\beta )\phi ^2 \\ +\frac 3{8\log \beta }\int_{L\left( T,\beta T\right) \cap F}f^{-3}\left| \nabla f\right| ^4\phi ^2-\frac 3{\log \beta }\int_{L\left( T,\beta T\right) \cap F}f^{-2}(f_{\alpha \bar{\beta}}f_{\bar{\alpha}}f_\beta )\phi ^2 \\ +\frac{c_{15}}{(\log \beta )^{\frac 12}}. \end{gather*} By I and II it can be proved that \[ \int_Lf^{-1}Re(f_\alpha f_{\bar{\alpha}\bar{\beta}}\left( \phi ^3\right) _\beta )\leq \frac{c_{16}}{(\log \beta )^{\frac 12}}. \] This proves the claim. 
\textbf{Q.E.D.}

Using this result in (\ref{10}), we infer that
\begin{gather}
0\leq \frac{c_5}{\log \beta }+\frac 14\int_Lf^{-2}\left| \nabla f\right| ^2Re(f_{\bar{\beta}}\left( \phi ^3\right) _\beta )  \nonumber \\
-\int_Lf^{-1}f_{\overline{\alpha }\beta }f_\alpha \left( \phi ^3\right) _{\overline{\beta }}-\frac a{2-a}\int_Lf^{-1}Re(f_\alpha f_{\bar{\alpha}\bar{\beta}}\left( \phi ^3\right) _\beta )\leq \frac C{(\log \beta )^{\frac 12}}.  \label{11}
\end{gather}
Since $\beta $ (and $\delta =\frac 1\beta $) is arbitrary, it follows that for fixed $\varepsilon $ and $T$ the above inequality becomes an equality by letting $\beta \rightarrow \infty .$ From (\ref{11}) we conclude that the following formulas must hold on $M$:
\begin{eqnarray}
Ric_{1\bar{1}} &=&-(m+1)  \nonumber \\
\left| \nabla f\right| &=&2\sqrt{\lambda _1\left( M\right) }f  \label{12} \\
u_{\alpha \bar{\beta}} &=&-m\delta _{\alpha \bar{\beta}}  \nonumber \\
u_{\alpha \beta } &=&m\delta _{1\alpha }\delta _{1\beta }  \nonumber
\end{eqnarray}
with respect to the frame
\begin{eqnarray*}
v_\alpha &=&\frac 12\left( e_{2\alpha -1}-\sqrt{-1}Je_{2\alpha -1}\right) , \\
e_1 &=&\frac 1{\left| \nabla f\right| }\nabla f,\;\;Je_{2k-1}=e_{2k}.
\end{eqnarray*}
Note that in view of (\ref{12}) this frame is globally defined on $M.$ Let us prove that indeed we have these relations on $M.$ Suppose that there exists a point $x_0\in M$ and a positive $\eta _0$ such that:
\[
Ric_{1\bar{1}}(x_0)\geq -(m+1)+\eta _0.
\]
Let us choose $\varepsilon $ and $T$ such that $x_0\in L\left( \varepsilon ,T\right) .$ Recall that $L=L\left( \delta \varepsilon ,\beta T\right) ,$ for arbitrary $\beta $ and for $\delta =\frac 1\beta .$ Then one can see that there exists $\eta _1>0$ such that
\[
-\int_Lf^{-1}f_\alpha f_{\bar{\alpha}\bar{\beta}\beta }\phi ^3\leq \frac{m+1}4\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^3-\eta _1.
\]
It is easy to check that (\ref{11}) now becomes
\begin{eqnarray*}
0 &<&\eta _1\leq \frac{c_5}{\log \beta }+\frac 14\int_Lf^{-2}\left| \nabla f\right| ^2Re(f_{\bar{\beta}}\left( \phi ^3\right) _\beta ) \\
&&-\int_Lf^{-1}f_{\overline{\alpha }\beta }f_\alpha \left( \phi ^3\right) _{\overline{\beta }}-\frac a{2-a}\int_Lf^{-1}Re(f_\alpha f_{\bar{\alpha}\bar{\beta}}\left( \phi ^3\right) _\beta )\leq \frac C{(\log \beta )^{\frac 12}},
\end{eqnarray*}
which gives a contradiction if we let $\beta \rightarrow \infty .$ Next, let us focus on the Schwarz inequality (\ref{9}). Suppose, for the sake of contradiction, that there exists no constant $a\neq 0$ such that
\[
\left| \nabla f\right| \left( x\right) =af\left( x\right) \;\text{for any}\;x\in U,
\]
where $U\subset L\left( \varepsilon ,T\right) $ is a fixed open set. It is clear that if
\[
h=f^{-\frac 32}\left| \nabla f\right| ^2\phi ^{\frac 32},\;\;g=f^{\frac 12}\phi ^{\frac 32},
\]
then there exists no $a\in \mathbb{R}$ such that $g=ah$ on $U,$ which implies that
\[
\eta _0:=\min_{a\in \mathbb{R}}\int_U\left( g-ah\right) ^2>0.
\]
This shows that
\begin{eqnarray*}
\eta _0 &\leq &a^2\int_Uh^2-2a\int_Ugh+\int_Ug^2, \\
0 &\leq &a^2\int_{L\backslash U}h^2-2a\int_{L\backslash U}gh+\int_{L\backslash U}g^2,
\end{eqnarray*}
for any $a\in \mathbb{R}.$ As a consequence, the following inequality is true for any $a\in \mathbb{R}:$
\[
0\leq a^2\int_Lh^2-2a\int_Lgh+\left( \int_Lg^2-\eta _0\right) .
\]
It follows that
\[
\left( \int_Lgh\right) ^2\leq \left( \int_Lh^2\right) \left( \int_Lg^2-\eta _0\right) .
\]
Similarly, one can see that there exists an $\eta _1>0$ such that
\[
\left( \int_Lgh\right) ^2\leq \left( \int_Lg^2\right) \left( \int_Lh^2-\eta _1\right) .
\]
Adding these two inequalities and using the arithmetic mean inequality we get that there exists $\eta _2>0$ with the property
\[
\left( \int_Lgh+\eta _2\right) ^2\leq \int_Lg^2\int_Lh^2.
\]
We have thus proved that there exists a constant $\eta _2>0$ depending on $U$ but not on $\beta $ (and $\delta $) such that
\[
\left( \int_Lf^{-1}\left| \nabla f\right| ^2\phi ^3+\eta _2\right) ^2\leq \left( \int_Lf^{-3}\left| \nabla f\right| ^4\phi ^3\right) \left( \int_Lf\phi ^3\right) ,
\]
an inequality that will be used instead of (\ref{9}) in the argument above. Consequently,
\begin{eqnarray*}
\int_Lf^{-3}\left| \nabla f\right| ^4\phi ^3 &\geq &4\lambda _1\left( M\right) \frac{\left( \int_Lf^{-1}\left| \nabla f\right| ^2\phi ^3+\eta _2\right) ^2}{\int_Lf^{-1}\left| \nabla f\right| ^2\phi ^3+\frac{c_3}{\log \beta }} \\
&\geq &4\lambda _1\left( M\right) \int_Lf^{-1}\left| \nabla f\right| ^2\phi ^3+8\lambda _1\left( M\right) \eta _2-c_{17}\frac 1{\log \beta }.
\end{eqnarray*}
However, using the same reasoning as for the Ricci curvature, one can see that this yields a contradiction. Summing up, we have proved that there exists a constant $a>0$ such that $\left| \nabla f\right| =af$ on $M.$ Using Lemma 1 and Lemma 2 one can see that $a=2\sqrt{\lambda _1\left( M\right) }.$ The proofs for the remaining two formulas use the same ideas. Note that in (\ref{7}) we need to have equality everywhere on $M,$ therefore there exists a function $\mu $ on $M$ such that
\[
u_{\alpha \bar{\beta}}=\mu \delta _{\alpha \bar{\beta}}.
\]
However, taking the trace and using that $f$ is harmonic one can show that $\mu =-m.$ Finally, we point out that if equality holds in (\ref{3}) then
\[
f_{\alpha \beta }=0\;\;\text{for}\;(\alpha ,\beta )\neq \left( 1,1\right) ,
\]
and on the other hand equality holds in (\ref{4}) if and only if
\begin{eqnarray*}
\left| f_{\alpha \beta }\right| &=&\frac 1a\frac{\left| \nabla f\right| ^2}{4f}=m(m+1)f, \\
Re(f_{11}) &=&\left| f_{11}\right| .
\end{eqnarray*}
This means that
\[
f_{11}=m(m+1)f,
\]
or in terms of $u$ one has
\[
u_{\alpha \beta }=m\delta _{1\alpha }\delta _{1\beta }
\]
as claimed.
Now we are ready to complete the proof of Theorem 4. Let us compute the real Hessian of
\[
B:=\frac 1{2m}u.
\]
We have:
\begin{eqnarray*}
B_{e_1e_1} &=&B_{11}+B_{\bar{1}\bar{1}}+2B_{1\bar{1}}=1-1=0, \\
B_{e_2e_2} &=&-\left( B_{11}+B_{\bar{1}\bar{1}}-2B_{1\bar{1}}\right) =-2, \\
B_{e_{2k-1}e_{2k-1}} &=&B_{kk}+B_{\bar{k}\bar{k}}+2B_{k\bar{k}}=-1, \\
B_{e_{2k}e_{2k}} &=&-B_{kk}-B_{\bar{k}\bar{k}}+2B_{k\bar{k}}=-1, \\
B_{e_ke_j} &=&0\;\;\text{if}\;\;k\neq j,
\end{eqnarray*}
for $k\in \{2,...,m\}.$ Also, notice that $\left| \nabla B\right| =1$ on $M.$ Since all the computations from now on will be done in the real frame $\left\{ e_1,....,e_{2m}\right\} $ with $Je_{2k-1}=e_{2k}$ and $e_1=\frac 1{\left| \nabla f\right| }\nabla f,$ for convenience we will drop the $e_k$ index and use only $k$ in the formulas for the real Hessian and the curvature. Also, let us make the convention that Roman letters $i,j,k$ run from $1$ to $2m$ and Greek letters $\alpha ,\beta ,\gamma $ run from $3$ to $2m.$ We have proved that there exists a smooth function $B$ on $M$ with real Hessian
\[
\left( B_{ij}\right) =\left(
\begin{array}{lllllll}
0 & 0 & 0 & 0 & . & . & 0 \\
0 & -2 & 0 & 0 & . & . & 0 \\
0 & 0 & -1 & 0 & . & . & 0 \\
0 & 0 & 0 & -1 & . & . & 0 \\
0 & . & . & . & . & . & . \\
0 & . & . & . & . & . & . \\
0 & 0 & 0 & 0 & . & . & -1
\end{array}
\right)
\]
and with unit gradient, $\left| \nabla B\right| =1$ on $M.$ Note that our function $B$ satisfies the same properties as the Busemann function $\beta$ in \cite{L-W}. Using the Hessian of $\beta$, Li-Wang proved that the manifold has at most two ends, and in the two-ends case they inferred some information about the structure of $M$. We will give an outline of their argument below. Denote the level set of $B$ by
\[
N_t=\left\{ x\in M\left| \;B\left( x\right) =t\right. \right\} .
\]
Since $\left| \nabla B\right| =1$, $M$ is diffeomorphic to $\mathbb{R}\times N_0 $ and $e_1=\nabla B$ is the unit normal to $N_t$ for any $t.$ If $N_0$ is noncompact, then $M$ will have one end, which contradicts our assumption that $M$ has more than one end. Consequently, $N_0$ is compact, and this implies that $M$ has two ends. For the remainder of the proof $M$ has two ends, and we want to determine the metric on $N_t$ in terms of the metric on $N_0$. Knowing $B_{ij}$ is equivalent to knowing the second fundamental form of $N_t,$ which implies that if
\[
\nabla e_i=\omega _{ik}e_k,
\]
then one can find
\[
\omega _{i1}\left( e_j\right) =\left\{
\begin{array}{l}
0\;\;\;\text{for}\;\;i\neq j \\
2\;\;\;\text{for\ }\;i=j=2 \\
1\;\;\;\text{for}\;\;3\leq i=j\leq 2m.
\end{array}
\right.
\]
Also, using the K\"{a}hler property we know that
\[
\omega _{1k}Je_k=J\nabla e_1=\nabla Je_1=\nabla e_2=\omega _{2k}e_k,
\]
which implies
\[
\omega _{\alpha 2}\left( e_j\right) =\left\{
\begin{array}{l}
\;0\;\;\;\;\text{for}\;\;j=1\;\text{or}\;j=2 \\
-1\;\;\;\text{for}\;\;\alpha =2p+1,\;j=2p+2\;\; \\
\;1\;\;\;\text{for\ }\;\alpha =2p+2,\;j=2p+1.
\end{array}
\right.
\]
It is clear that the flow $\phi _t:M\rightarrow M$ generated by $e_1$ is a geodesic flow. Since
\[
\nabla _{e_1}e_2=\nabla _{e_1}Je_1=J\nabla _{e_1}e_1=0
\]
we can conclude that $e_2$ is parallel along the geodesic $\tau $ defined by $e_1$. We choose the rest of the frame so that it is also parallel along this geodesic. The next step is to prove that
\begin{eqnarray*}
V_2\left( t\right) &=&e^{-2t}e_2 \\
V_\alpha \left( t\right) &=&e^{-t}e_\alpha
\end{eqnarray*}
are Jacobi fields along the geodesic $\tau $ with initial conditions
\begin{eqnarray*}
V_2\left( 0\right) &=&e_2,\;\;V_2^{\prime }\left( 0\right) =-2e_2 \\
V_\alpha \left( 0\right) &=&e_\alpha ,\;\;V_\alpha ^{\prime }\left( 0\right) =-e_\alpha .
\end{eqnarray*}
This is true because the information on $\omega _{i1}$ and $\omega _{\alpha 2}$ allows one to compute enough components of the curvature tensor. Using the second structural equations one can show that
\begin{eqnarray}
R_{1212} &=&-4,\;\;R_{121\alpha }=0,  \nonumber \\
R_{1\alpha 1\beta } &=&-\delta _{\alpha \beta },  \label{13}
\end{eqnarray}
and this indeed shows that $V_k\left( t\right) $ are Jacobi fields for $k\in \{2,...,2m\}.$ However, $d\phi _t\left( e_k\right) $ for $k\geq 2$ are also Jacobi fields with the same initial conditions as $V_k\left( t\right) ,$ so they must coincide. The conclusion is that the metrics on $N_t,$ viewed as a one-parameter family of metrics on $N_0,$ are
\[
ds_t^2=e^{-4t}\omega _2^2\left( 0\right) +e^{-2t}\left( \omega _3^2\left( 0\right) +....+\omega _{2m}^2\left( 0\right) \right) ,
\]
where $\left\{ \omega _1,...,\omega _{2m}\right\} $ is the dual frame of $\left\{ e_1,...,e_{2m}\right\} $. \textbf{Q.E.D.}
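For the reader's convenience, let us record why (\ref{13}) shows that the fields $V_k\left( t\right) $ are Jacobi fields; we adopt here the sign convention for which $R_{1k1k}$ is the sectional curvature of the plane spanned by $e_1$ and $e_k,$ so that the Jacobi equation along $\tau $ reads $V^{\prime \prime }+R\left( V,e_1\right) e_1=0.$ Since the frame is parallel along $\tau ,$ we have
\begin{eqnarray*}
V_2^{\prime \prime }\left( t\right) &=&4e^{-2t}e_2=-R\left( V_2,e_1\right) e_1, \\
V_\alpha ^{\prime \prime }\left( t\right) &=&e^{-t}e_\alpha =-R\left( V_\alpha ,e_1\right) e_1,
\end{eqnarray*}
by (\ref{13}), which is precisely the Jacobi equation.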
\section{Introduction}

In the last few decades, we can note an increasing research interest in the asymptotic behavior of the solutions of the linear differential equations
\begin{gather}
\label{lineq} \frac{dx}{dt}=A(t)x\ ,\ t\in[0,\infty)\ ,\ x\in\mathbb X\ ,
\end{gather}
where $A(t)$ is in general an unbounded linear operator on a Banach space $\mathbb X$, for every fixed $t$. In the case that $A(t)$ is a continuous matrix function, O.~Perron \cite{per} first observed a connection between the asymptotic behavior of the solutions of the above equation and the properties of the differential operator $\frac{d}{dt} - A(t)$ as an operator on the space $C_b(\mathbb R_+,\mathbb R^n)$ of $\mathbb R^n$-valued, bounded and continuous functions on the half-line $\mathbb R_+$. This result became a milestone for many works on the qualitative theory of solutions of differential equations. We refer the reader to the monograph by Massera and Sch\"{a}ffer \cite{masseraschaffer}, and to Daleckij and Krein \cite{dalkre}, for a characterization of the exponential dichotomy of the solutions of the above equation in terms of surjectiveness of the differential operator $\frac{d}{dt} - A(t)$ in the case of bounded $A(t)$, and to Levitan and Zhikov \cite{levzhi} for an extension to the infinite-dimensional setting for equations defined on the whole line. Recently, much work has been done in the study of the asymptotic behavior of the solutions of differential equations in Banach spaces, in particular in the unbounded case (see e.g., \cite{EngelNagel,Neervenbook,prepogpre3,prepogpre5,schnaubelt2001}). Also, there is an increasing interest, from the point of view of applications, in the Cauchy problems associated with the above evolution equations, since many physical situations can be interpreted as Cauchy problems by choosing the ``right'' state space.
Among the physical equations which are eligible for such an approach we can mention the heat equation, the Schr\"odinger equation, certain population equations, Maxwell's equations, the wave equation, and also seemingly unrelated problems such as delay equations, Markov processes or Boltzmann's equation. In the present paper, we will continue the approach initiated by Perron for semilinear nonautonomous evolution equations of the form
\begin{equation}
\label{semilineq} \frac{dx}{dt}= A(t)x+f(t,x) \ .
\end{equation}
Here $A(t)$ is a (possibly unbounded) linear operator acting on a real or complex Banach space $\mathbb X$ and $f: \mathbb R\times\mathbb X\to\mathbb X$ is a (possibly nonlinear) continuous function. Following \cite{aulmin}, we assume that the linear equation \eqref{lineq} is well-posed (i.e. there exists a continuous linear evolution family \ensuremath{\{U(t,s)\}_{(t,s)\in\Delta}} such that for every $s\in\mathbb R_+$ and $x\in D(A(s))$, the function $x(t) = U(t, s) x$ is the uniquely determined solution of equation \eqref{lineq} satisfying $x(s) = x$). Then we can consider the \defnemph{mild solution} of the semilinear equation \eqref{semilineq} (defined on some interval $[s , s + \delta), \delta > 0$) as being the solution of the integral equation
\begin{equation}
x(t) = U(t, s)x + \int_s^t U(t, \tau)f(\tau, x(\tau)) d\tau \quad,\quad t\geq s\ .
\end{equation}
Furthermore, if we assume also that the nonlinear function $f(t, x)$ is jointly continuous with respect to $t$ and $x$ and Lipschitz continuous with respect to $x$ (uniformly in $t\in\mathbb R_+$, with $f(t,0) = 0$ for all $t\in\mathbb R_+$), then we can generate a (nonlinear) evolution family \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} , in the sense that the map $t\mapsto X(t,s)x:[s,\infty)\to\mathbb X$ is the unique solution of equation \eqref{integreq}, for every $x\in\mathbb X$ and $s\in\mathbb R_+$.
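To fix ideas, consider the simplest scalar situation (a mere illustration, not needed in what follows): if $\mathbb X=\mathbb R$ and $A(t)=a(t)$ is a continuous scalar function, then $U(t,s)=e^{\int_s^ta(\tau )d\tau }$ and the integral equation above is the classical variation-of-constants formula
\[
x(t)=e^{\int_s^ta\left( \tau \right) d\tau }x+\int_s^te^{\int_\tau ^ta\left( \sigma \right) d\sigma }f\left( \tau ,x\left( \tau \right) \right) d\tau \ ,\quad t\geq s\ .
\]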
Considering the \emph{Green's operator} $(\mathbb G f)(t)=\int_0^t X(t,s)f(s)ds$ we prove that if the following conditions hold
\begin{itemize}
\item \quad the map $\mathbb G f$ lies in $L^q(\mathbb R_+,\mathbb X)$ for all $f\in L^{p}(\mathbb R_+,\mathbb X)$, and
\item \quad $\mathbb G:L^{p}(\mathbb R_+,\mathbb X)\to L^{q}(\mathbb R_+,\mathbb X)$ is Lipschitz continuous, i.e. there exists $K>0$ such that $$\|\mathbb G f-\mathbb G g\|_{q} \leq K\|f-g\|_{p}\ ,\ \mbox{for all}\ f,g\in L^p(\mathbb R_+,\mathbb X)\ ,$$
\end{itemize}
then the above mild solution decays exponentially. It is worth noting that, although the autonomous case (i.e., time-invariant evolution equations) has been analyzed much more than the nonautonomous case, the latter often arises quite naturally, not only in physics and mechanics, but also in the mathematical theory of differential equations when one linearizes an autonomous equation along a nonstationary solution. For particular cases of autonomous evolution equations arising from the linearization along a compact invariant manifold it has been shown (see e.g. \cite{sacsel}) that one can define a skew-product semiflow which allows one to apply the methods of classical dynamical systems to the underlying time-dependent equations. For the case of a time-invariant linear part of equation \eqref{semilineq} the existence problem for solutions has been investigated by many authors (see e.g. \cite{iwamiya,komatsu,martin,oharu,oharu2,pavel,pazy,webb} and the references therein).

\section{Semilinear evolution equations. Examples}

First, let us recall some notations and definitions. Throughout this paper, $\mathbb X$ will denote a Banach space, $\mathbb R$ the set of all real numbers, $\mathbb R_+$ the subset of all nonnegative real numbers, and we put $\Delta=\{(t,s)\in\mathbb R^2_+ : t\geq s\}$.
If $\mathbb{Y}$ also denotes a Banach space, then the set of all maps $T:\mathbb X\to\mathbb{Y}$ such that
$$\|T\| _{lip} := \inf\{ M > 0 : \| Tx-Ty\| \leq M\| x-y \|\ ,\ \mbox{for all}\ x, y\in \mathbb X\}\ <\ \infty$$
will be denoted by $\ensuremath{\textsl{Lip}}(\mathbb X,\mathbb{Y})$. Also, if $\mathbb X=\mathbb{Y}$ we will simply write $\ensuremath{\textsl{Lip}}(\mathbb X)$ instead of $\ensuremath{\textsl{Lip}}(\mathbb X,\mathbb X)$. It is easy to see that $(\ensuremath{\textsl{Lip}}(\mathbb X), \| \cdot \| _{lip})$ is a seminormed vector space which has the property
$$\| T \circ S \|_{lip} \leq \| T \| _{lip}\| S \| _{lip}\ ,\ \mbox{for all} \ T,S \in\ensuremath{\textsl{Lip}}(\mathbb X)\ .$$
For a given interval $\mathbb{J}$ of the real line, we denote by
$$L^{p}(\mathbb J,\mathbb X) = \{ f: \mathbb J\rightarrow \mathbb X: f\ \mbox{is measurable and}\ \int_{\mathbb J}\! \| f(t)\|^{p}dt < \infty \}\quad ,$$
for all $p \in [1,\infty)$ and by
$$ L^{\infty}(\mathbb J,\mathbb X)= \{ f:\mathbb J\rightarrow \mathbb X\,:\,f\ \mbox{is measurable and}\ {\rm ess}\sup\limits_{\hspace{-4mm}t\in \mathbb J}\| f(t)\| < \infty \} . $$
It is well-known that $L^{p}(\mathbb J,\mathbb X), L^{\infty}(\mathbb J,\mathbb X)$ are Banach spaces endowed with the norms
$$\| f\| _{p} = \left(\int_{\mathbb J}\| f(t)\|^{p}dt\right)^{1/p}\quad,$$
$$\| f\| _{\infty} = {\rm ess}\sup\limits_{\hspace{-4mm}t\in\mathbb J} \| f(t)\| \quad,$$
respectively.
\begin{definition}
\label{defn:evolproc} A family \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} of (possibly nonlinear) operators acting on $\mathbb X$ is called an \defnemph{evolution family} if it satisfies the following conditions:
\begin{enumerate}
\item[$(e_1)$] \quad $X(t,t)x = x$ for all $t\geq 0$ and $x \in \mathbb X;$
\item[$(e_2)$] \quad $X(t,s) = X(t,r)\circ X(r,s)$ for all $t\geq r \geq s \geq 0$.
\end{enumerate}
Such an evolution family is called \defnemph{continuous} if there exist $M, \omega> 0$ such that
\begin{enumerate}
\item[$(e_3)$] \quad $\|X(t,s)\|_{lip}\leq Me^{\omega(t-s)}$
\item[$(e_4)$] \quad $X(t,s)x$ is jointly continuous with respect to $t$, $s$ and $x$.
\end{enumerate}
\end{definition}
If the operators $X(t,s)$ are linear, then by $(e_3)$ they are also bounded and the family will be called a \defnemph{linear evolution family}.
\begin{remark}
Condition $(e_3)$ is equivalent to the existence of some locally bounded function $\varphi$ such that
\begin{enumerate}
\item[$(e'_3)$] $\|X(t,s)x-X(t,s)y\|\leq \varphi(t-s)\|x-y\|$ for all $x,y\in \mathbb X$.
\end{enumerate}
Indeed, if $(e_3)$ holds one can take $\varphi(t)=Me^{\omega t}$. Conversely, the constants $M=\sup_{t\in[0,1]}\varphi(t)$ and $\omega=\max\{1,\ln\varphi(1)\}$ satisfy $(e_3)$.
\end{remark}
\begin{definition}
\label{defn:wellposedness} The linear equation \eqref{lineq} is said to be \defnemph{well-posed} if there exists a continuous linear evolution family \ensuremath{\{U(t,s)\}_{(t,s)\in\Delta}} such that for every $s\in\mathbb R_+$ and $x\in D(A(s))$, the function $x(t) = U(t, s) x$ is the uniquely determined solution of equation \eqref{lineq} satisfying $x(s) = x$.
\end{definition}
\begin{definition}
Suppose the linear equation \eqref{lineq} is well-posed. Then, every solution $x(t)$ (defined on some interval $[s , s + \delta), \delta > 0$) of the integral equation
\begin{equation}
\label{integreq} x(t) = U(t, s)x + \int_s^t U(t, \tau)f(\tau, x(\tau)) d\tau \quad,\quad t\geq s\ ,
\end{equation}
is called a \defnemph{mild solution} of the semilinear equation \eqref{semilineq} starting from $x$ at $t = s$.
Furthermore, \defnemph{equation \eqref{semilineq} is said to generate an evolution family} \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} if for every $x\in\mathbb X$ and $s\in\mathbb R_+$, the map $t\mapsto X(t,s)x:[s,\infty)\to\mathbb X$ is the unique solution of equation \eqref{integreq}.
\end{definition}
\begin{proposition}
Suppose the following conditions are satisfied:
\begin{enumerate}
\item[(i)] The linear equation \eqref{lineq} is well-posed;
\item[(ii)] The nonlinear function $f(t, x)$ is jointly continuous with respect to $t$ and $x$ and Lipschitz continuous with respect to $x$, uniformly in $t\in\mathbb R_+$, and $f(t,0) = 0$ for all $t\in\mathbb R_+$.
\end{enumerate}
Then, the semilinear equation \eqref{semilineq} generates a continuous evolution family.
\end{proposition}
\begin{proof}
Using standard arguments (see for instance \cite{segal}) it can be shown that the equation \eqref{semilineq} generates an evolution family. Moreover, from \cite{segal}, it follows that $X(t,s)x$ is jointly continuous with respect to $t$, $s$ and $x$. We show briefly below that $X(t,s)x$ also fulfills condition $(e_3)$ in Definition~\ref{defn:evolproc}. We have that
\begin{gather*}
\begin{split}
\|X(t,s)x -X(t,s)y\|\leq&\, \|U(t,s)x-U(t,s)y\|+ \int_s^t \|U(t,\xi)\|\,\|f(\xi,X(\xi,s)x)-f(\xi,X(\xi,s)y)\|d\xi\\
\leq&\, Ke^{\omega(t-s)}\|x-y\| + \int_s^t Ke^{\omega(t-\xi)}L\|X(\xi,s)x-X(\xi,s)y\|d\xi\ ,
\end{split}
\end{gather*}
where $L$ is a Lipschitz constant of $f(t,x)$ with respect to $x$ and $K,\omega$ stem from the well-posedness of the linear equation \eqref{lineq}; for convenience we choose $\omega$ to be positive. Applying Gronwall's Lemma we get
\begin{gather*}
\|X(t,s)x-X(t,s)y\|\leq Ke^{(\omega+KL)(t-s)}\|x-y\|\ ,
\end{gather*}
for any $(t,s)\in\Delta$ and $x,y\in\mathbb X$.
\end{proof}
\begin{remark}
It is worth noting that one of the goals with respect to the asymptotic behavior of solutions of the equation \eqref{semilineq} is also to point out conditions for that equation to admit invariant (stable, unstable or center) manifolds (see, e.g., \cite{aulmin,batesjones,dalkre,halemagalhaes,henry,hps,minh,sellyou}). As far as we know, the most popular conditions for the existence of invariant manifolds are the exponential stability or dichotomy of the linear part \eqref{lineq} and the uniform Lipschitz continuity of the nonlinear part $f(t, x)$ with sufficiently small Lipschitz constants (i.e., $\|f(t, x)-f(t, y)\|\leq M \|x-y\|$ for $M$ small enough). Moreover, the manifolds considered in the existing literature are mostly constituted by trajectories of solutions bounded on the positive (or negative) half-line. We refer the reader to \cite{aulmin,batesjones,halemagalhaes,henry,hps,minh,sellyou} and references therein for more details on this topic.
\end{remark}
\begin{example}
\label{firstex} For a given continuous map $h:{{\mathbb R } }\rightarrow[1/2,1]$ consider the differential equation on $\mathbb R$
\begin{equation}
\label{(2.1.)} \dot{u}(t) = h(u(t))\ .
\end{equation}
We claim that this equation leads to a continuous evolution family. Indeed, consider the map $H: {{\mathbb R } }\rightarrow {{\mathbb R } }$ given by
$$H(t)=\int^{t}_{0}\frac{ds}{h(s)}\quad .$$
One can easily check that
$$\vert u-v\vert \leq \vert H(u)-H(v)\vert \leq 2\vert u-v\vert$$
for all $u,v\in\mathbb R$. It follows that $H$ is bijective and so it is easy to see that
\begin{equation}
X(t,s):\mathbb R\to\mathbb R \quad,\quad X(t,s)x = H^{-1}(t-s+H(x))
\end{equation}
is an evolution family which has the property $(e_4)$.
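Indeed, conditions $(e_1)$ and $(e_2)$ follow directly from the formula, since $H$ is bijective:
\begin{eqnarray*}
X(t,t)x &=&H^{-1}(H(x))=x\ , \\
X(t,r)X(r,s)x &=&H^{-1}\left( t-r+H\left( H^{-1}(r-s+H(x))\right) \right) =H^{-1}(t-s+H(x))=X(t,s)x\ ,
\end{eqnarray*}
for all $t\geq r\geq s\geq 0$ and $x\in\mathbb R$.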
Also, we have
\begin{gather*}
\begin{split}
\vert X(t,s)x-X(t,s)y\vert =& \vert H^{-1}(t-s+H(x))-H^{-1}(t-s+H(y))\vert \\
\leq & \vert (t-s+H(x)) - (t-s+H(y))\vert \\
\leq & 2\vert x-y\vert \quad ,
\end{split}
\end{gather*}
for all $(t,s)\in\Delta$ and all $x,y\in\mathbb R$. Thus we obtain that \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} is a continuous evolution family on $\mathbb R$.
\end{example}
\begin{example}
\label{secondex} Consider $h:\mathbb R\to\mathbb R$ continuously differentiable with $h^{\prime}\in L^{\infty}(\mathbb R,\mathbb R)$ and the problem
\begin{gather}
\label{(2.5.)}
\begin{cases}
\displaystyle\frac{\partial u}{\partial t}(x,t)=\frac{\partial ^{2}u}{\partial x^{2}}(x,t)+h(u(x,t))\\[3mm]
\displaystyle\frac{\partial u}{\partial x}(0,t)=\frac{\partial u}{\partial x}(1,t)=0
\end{cases}
\end{gather}
If we denote $x(t)=u(\,\cdot\,,t)$, the problem \eqref{(2.5.)} is equivalent to
\begin{equation}\label{(2.6.)}
\dot{x}(t)=Ax(t)+f(x(t)),
\end{equation}
where $A:D(A)\subset L^{2}([0,1],\mathbb R)\to L^{2}([0,1],\mathbb R)$, $D(A)$ is defined as the set of all functions $z\in L^{2}([0,1],\mathbb R)$ such that $z, z^{\prime}$ are absolutely continuous with $z^{\prime \prime}\in L^{2}([0,1],\mathbb R)$ and $z^{\prime}(0)=z^{\prime}(1) = 0$, and for each $z\in D(A)$, we define $Az=z^{\prime \prime}$. Also we consider $B:L^{2}([0,1],{{\mathbb R } })\rightarrow L^{2}([0,1],{{\mathbb R } })$ which is given by $Bz=h\circ z$. It is well known that $A$ generates a strongly continuous semigroup of linear operators $\{ T(t)\}_{t\geq 0}$ in $L^{2}([0,1],{{\mathbb R } })$, and it is easy to check that $B$ is Lipschitz continuous. By Example~\ref{firstex} it is clear that the equation (\ref{(2.6.)}), as an abstract variant of (\ref{(2.5.)}), generates a continuous evolution family.
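In this setting, the mild solutions of (\ref{(2.6.)}) are given by the autonomous instance of the integral equation \eqref{integreq}, namely
\[
x(t)=T(t-s)x+\int_s^tT(t-\tau )Bx(\tau )d\tau \ ,\quad t\geq s\ ,
\]
corresponding to $U(t,s)=T(t-s)$.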
\end{example}
For \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} an evolution family, the \emph{trajectory} determined by $t_0\in\mathbb R_+$ and $x\in\mathbb X$ will be denoted by
\begin{equation}
u_{t_{0},x}:\mathbb R_+\to\mathbb X\quad,\quad u_{t_{0},x}(t) =
\begin{cases}
X(t,t_{0})x &,\ t\geq t_{0}\\
0 &,\ 0\leq t\leq t_0
\end{cases}
\quad.
\end{equation}
\begin{definition}
\label{def 2.2.} An evolution family \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} is said to be
\begin{enumerate}
\item[(u.e.s.)] \defnemph{uniformly exponentially stable}, if there exist $N, \nu > 0$ such that
\begin{gather}
\label{defn:expstab} \|X(t,s)\|_{lip} \leq Ne^{-\nu(t-s)}\ ,\ \mbox{for all}\ (t,s)\in\Delta\ ;
\end{gather}
\item[(u.s.)] \defnemph{uniformly stable} if there exists $N>0$ such that
\begin{equation}
\label{defn:unifstab} \|X(t,s)\|_{lip} \leq N\ ,\ \mbox{for all}\ (t,s)\in\Delta\ ;
\end{equation}
\item[(a.s.)] \defnemph{asymptotically stable} if all its trajectories decay to zero, i.e.
\begin{equation}
\label{defn:asymptstab} \lim_{t\to\infty}{u_{t_0,x_0}(t)}=0 \ ,\ \mbox{for all}\ t_0\in\mathbb R_+\ \mbox{and}\ x_0\in\mathbb X\ .
\end{equation}
\end{enumerate}
\end{definition}
We will need the following additional lemma.
\begin{lemma}
Let \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} be a continuous evolution family on $\mathbb X$. Then, the function $s\mapsto X(t,s)f(s):\mathbb R_+\to\mathbb X$ is locally integrable provided that $f:\mathbb R_+\to\mathbb X$ is locally integrable.
\end{lemma}
\begin{proof}
This claim follows easily using conditions $(e_2)$, $(e_3)$ and $(e_4)$ of \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} :
\begin{gather*}
\begin{split}
\int_0^t \|X(t,s)f(s)-X(t,0)f(0)\|ds \leq &\, \int_0^t{\|X(t,s)f(s)-X(t,s)X(s,0)f(0)\|}ds \\
\leq &\, Me^{\omega t}\int_0^t \left(\|f(s)\|+\|X(s,0)f(0)\|\right)ds
\end{split}
\end{gather*}
which holds for any $t\in\mathbb R_+$.
\end{proof} Given a continuous evolution family \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} \!, we will denote by $\mathbb G$ the \emph{Green's operator} \begin{equation} \mathbb G:L^1_{loc}(\mathbb R_+,\mathbb X)\to L^1_{loc}(\mathbb R_+,\mathbb X)\quad,\quad (\mathbb G f)(t)=\int_0^t X(t,s)f(s)ds\quad. \end{equation} (where $L^1_{loc}(\mathbb R_+,\mathbb X)$ denotes the space of all locally integrable functions from $\mathbb R_+$ into $\mathbb X$). Set now $${\mathcal A}_{X} = \{ \chi _{[a,b]}u_{t_{0},x}:x\in \mathbb X,\; t_{0}\geq0,\; 0 \leq a \leq b \}\ ,$$ where $\chi_{[a,b]}$ denotes the characteristic function of the interval $[a,b]$. Then, ${\mathcal A}_X$ is contained in $L^p(\mathbb R_+,\mathbb X)$, for every $p\in[1,\infty]$. \begin{definition} \label{Definition 2.3.} The pair $(L^{p}(\mathbb R_+,\mathbb X),L^{q}(\mathbb R_+,\mathbb X))$ is said to be \defnemph{admissible to \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} } if the following conditions hold \begin{itemize} \item \quad the map $\mathbb G f$ lies in $L^q(\mathbb R_+,\mathbb X)$ for all $f\in L^{p}(\mathbb R_+,\mathbb X)$, and \item \quad $\mathbb G:L^{p}(\mathbb R_+,\mathbb X)\to L^{q}(\mathbb R_+,\mathbb X)$ is Lipschitz continuous, i.e. there exists $K>0$ such that $$\|\mathbb G f-\mathbb G g\|_{q} \leq K\|f-g\|_{p}\ ,\ \mbox{for all}\ f,g\in L^p(\mathbb R_+,\mathbb X)\ .$$ \end{itemize} \end{definition} \section{Main results} \begin{lemma} \label{lemma:unifboundforh} Let $h\in L^{q}(\mathbb R_+,\mathbb R)$, with $q\in[1,\infty]$, be such that $h(0)\geq0$. If $$h(r)\leq m\,h(t)\quad,\ \mbox{for all}\ r\in[t,t+1]\ \mbox{and for all}\ t\geq0\ ,$$ then $h\in L^{\infty}(\mathbb R_+,\mathbb R)$ and $\|h\|_{\infty}\leq mh(0)+ m\|h\|_{q}$. \end{lemma} \begin{proof} Let $r\geq1$. Since $h(r)\leq m h(t)$ for all $t\in[r-1,r]$, it follows that $$h(r)\,\leq\, m\int_{r-1}^r\!h(t)\,dt\,\leq\, m\|h\|_{q}\ ,$$ by H\"{o}lder's inequality. If $r\in[0,1]$, from the hypothesis we have that $h(r)\leq m\,h(0)$.
Therefore, $$h(r)\leq m (h(0) + \|h\|_{q})\ ,$$ for every $r\geq0$, and the above claim follows immediately. \end{proof} \begin{lemma} \label{lem:masseraschaffer} Let $g:\Delta\to\mathbb R_+$ be a function with the following properties: \begin{enumerate} \item $ g(t,t_{0}) \leq g(t,s)g(s,t_{0})$, for all $t \geq s \geq t_{0}$; \item there exist $M, d > 0$ and $c \in (0,1)$ satisfying \begin{gather*} \begin{split} & g(t,t_{0}) \leq M, \; \mbox{for all}\ t\in[t_{0},t_{0}+d]\,,\ t_0\geq0\ \mbox{and}\\ & g(t_{0}+d, t_{0})\leq c\,,\ \mbox{for all}\ t_{0}\geq 0\ . \end{split} \end{gather*} \end{enumerate} Then, there exist $N, \nu > 0$ such that $$g(t,t_{0})\leq Ne^{-\nu (t-t_{0})}\ ,\ \mbox{for all}\; t \geq t_{0} \geq 0\ .$$ \end{lemma} \begin{proof} Let $(t,t_0)\in\Delta$ and $n = \left[\frac{t-t_{0}}{d}\right]$. Then, we have that \begin{gather*} \begin{split} g(t,t_{0})\leq&\, g(t,t_{0}+nd)g(t_{0}+nd,t_{0})\leq g(t,t_{0}+nd)c^{n} \\ \leq&\, Mc^{n}=Me^{-\nu nd}\leq Ne^{-\nu (t-t_{0})}\ , \end{split} \end{gather*} where $\nu = - \frac{1}{d}\ln c$, $N = Me^{\nu d}$. \end{proof} \medskip Next we define the functions $a_{p}, b_{p}:\mathbb R_{+}\rightarrow \mathbb R$ given by \[ a_{p}(t) = \left \{ \begin{array}{lll} t^{1 - \frac{1}{p}} & , & p \in [1,\infty)\\ t & , & p = \infty \end{array} \right. , b_{p}(t)= \| \chi _{[0,t]}\| _{p} = \left \{ \begin{array}{lll} t^{\frac{1}{p}} & , & p \in [1,\infty) \\ 1 & , & p = \infty . \end{array} \right. \] \begin{remark} \label{rem 3.1.} One can easily check that $$\int\limits^{t_{0}+t}_{t_{0}} \| f(s)\| ds \leq a_{p}(t)\| f\| _{p}\ ,$$ for all $t,t_{0}\geq 0$ and $f \in L^{p}(\mathbb R_+,\mathbb X)$. \end{remark} \begin{theorem} \label{the 3.2} Let \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} be a continuous evolution family and $p,q\in[1,\infty]$ such that $(p,q)\neq(1,\infty)$. 
If the pair $(L^{p}(\mathbb R_+,\mathbb X), L^{q}(\mathbb R_+,\mathbb X))$ is admissible to \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} , then \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} is uniformly exponentially stable. \end{theorem} \begin{proof} Let $t_{0}\geq 0$, $x_{1}, x_{2} \in\mathbb X$ and consider $f_{1}, f_{2}: {{\mathbb R } }_{+}\rightarrow \mathbb X$ given by \begin{gather} f_{i}(t) = \begin{cases} X(t,t_{0})x_{i} &,\ t\in [t_{0}, t_{0}+1] \\ 0 &,\ t\in\mathbb R_+\setminus [t_{0}, t_{0}+1] \end{cases} \quad,\ i =1,2 \ . \end{gather} Clearly, $f_{1}, f_{2} \in {\mathcal A}_{X}$ with $\| f_{1}-f_{2}\|_{p}\leq Me^{\omega} \| x_{1}-x_{2}\| $ and $$ (\mathbb G f_{i})(t) = \int\limits^{t}_{0}X(t,s)f_{i}(s)ds=\int\limits^{t_{0}+1}_{t_{0}}X(t,s)X(s,t_{0})x_{i}ds= X(t,t_{0})x_{i}=u_{t_{0},x_{i}}(t)\ ,$$ for all $t\geq t_{0}+1$, $i =1,2$. For $t \in [t_{0}, t_{0}+1]$, we have that $$\| u_{t_{0},x_{1}}(t)-u_{t_{0},x_{2}}(t)\| \leq \| X(t,t_{0})\|_{lip}\| x_{1}-x_{2}\| \leq Me^{\omega}\| x_{1}-x_{2}\|\ .$$ It follows that $u_{t_{0},x_{1}}-u_{t_{0},x_{2}} \in L^{q}(\mathbb R_+,\mathbb X)$ and \begin{gather} \begin{split} \|u_{t_{0},x_{1}}-u_{t_{0},x_{2}}\|_{q} \leq&\, Me^{\omega} \|x_{1}-x_{2}\| + \|(\mathbb G f_{1}-\mathbb G f_{2})\,\chi _{[t_{0}+1,\infty)}\|_{q}\\ \leq&\, Me^{\omega}\|x_{1}-x_{2}\| + \| \mathbb G f_{1}- \mathbb G f_{2}\|_{q}\\ \leq&\, Me^{\omega}\|x_{1}-x_{2}\| + K \| f_{1}-f_{2}\|_{p}\\ \leq&\, (K+1)Me^{\omega}\| x_{1}-x_{2}\| . \end{split} \end{gather} Let us define the map $h:\mathbb R_{+}\to\mathbb R_{+}$, $h(t) =\| u_{t_{0},x_{1}}(t_{0}+t) - u_{t_{0},x_{2}}(t_{0}+t)\|$.
Then, $h\in L^{q}(\mathbb R_+,\mathbb R)$ with $\| h\|_{q} = \| u_{t_{0},x_{1}} -u_{t_{0},x_{2}}\|_{q} \leq (K+1)Me^{\omega}\| x_{1}-x_{2}\|$, and \begin{gather*} \begin{split} h(r) =&\, \| X(t_{0}+r,t_{0}+t)X(t_{0}+t,t_{0})x_{1} - X(t_{0}+r,t_{0}+t)X(t_{0}+t,t_{0})x_{2}\| \\ \leq&\, \| X(t_{0}+r,t_{0}+t)\| _{lip}\|X(t_{0}+t,t_{0})x_{1}-X(t_{0}+t,t_{0})x_{2}\| \\ \leq&\, Me^{\omega(r-t)}h(t)\leq Me^{\omega}h(t)\quad,\quad 0\leq t\leq r\leq t+1\ . \end{split} \end{gather*} By Lemma \ref{lemma:unifboundforh} we obtain that $h\in L^{\infty}(\mathbb R_+,\mathbb R)$ and \begin{gather*} \|h \| _{\infty} \leq Me^{\omega}\|h\| _{q} + Me^{\omega}h(0)\leq (K+1)M^{2}e^{2\omega}\|x_{1}-x_{2}\| + Me^{\omega}\|x_{1}-x_{2}\|\ . \end{gather*} It follows that there exists $C > 0$ such that \begin{gather*} \|X(t,s)x_{1}-X(t,s)x_{2}\| \leq C\| x_{1}-x_{2}\| \ , \end{gather*} for all $(t,s)\in\Delta$ and $x_{1},x_{2}\in \mathbb X$, and hence \begin{gather} \label{rel:unifstability} \| X(t,s)\|_{lip}\leq C\ , \quad \mbox{for all}\quad (t,s)\in\Delta\ . \end{gather} Consider again $x_{1},x_{2}\in \mathbb X$, $t_{0}\geq 0$, $\delta > 0$, and $f_{3}, f_{4}:\mathbb R_{+}\rightarrow \mathbb X$ given by \begin{gather} f_{i}(t)= \begin{cases} X(t,t_{0})x_{i-2} &,\ t\in [t_{0}, t_{0}+\delta] \\ 0 &,\ t\in\mathbb R_{+}\setminus [t_{0},t_{0}+\delta ] \end{cases} \quad,\ i = 3,4\ . \end{gather} Then $f_{3}, f_{4}\in {\mathcal A}_{X}$ with $\| f_{3}-f_{4}\| _{p}\leq C b_{p}(\delta)\| x_{1}-x_{2}\|$ and $$ (\mathbb G f_{i})(t) = \int\limits^{t}_{0}X(t,s)f_{i}(s)ds= \int\limits^{t}_{t_{0}}X(t,s)X(s,t_{0})x_{i-2}\,ds = (t-t_{0})X(t,t_{0})x_{i-2}\ , $$ for all $t\in[t_{0},t_{0}+\delta]$, $i =3,4$.
Using the last equality we have that \begin{gather} \begin{split} \frac{\delta ^{2}}{2}\| X(t_{0}+\delta,t_{0})x_{1} -X(t_{0}+\delta,t_{0})x_{2}\| =&\, \int\limits^{t_{0}+\delta}_{t_{0}}(t-t_{0})\|X(t_{0}+\delta,t_{0})x_{1}-X(t_{0}+\delta, t_{0})x_{2}\|dt\\ \leq&\, C\int\limits^{t_{0}+\delta}_{t_{0}}(t-t_0)\|X(t,t_{0})x_{1}-X(t,t_{0})x_{2}\| dt\\ =&\,C\int\limits^{t_{0}+\delta}_{t_{0}}\| (\mathbb G f_{3})(t)- (\mathbb G f_{4})(t)\| dt\\ \leq&\, C a_{q}(\delta) \| \mathbb G f_{3}- \mathbb G f_{4}\|_{q}\\ \leq&\, KCa_{q}(\delta)\|f_{3}-f_{4}\|_{p}\\ \leq&\, KC^{2}a_{q}(\delta)b_{p}(\delta)\|x_{1}-x_{2}\|\\ =&\, \frac{KC^{2}\delta^{2}}{a_{p}(\delta)b_{q}(\delta)}\|x_{1}-x_{2}\|\ . \end{split} \end{gather} It follows that $$\|X(t_{0}+\delta, t_{0})\| _{lip}\leq\frac{2KC^{2}}{a_{p}(\delta)b_{q}(\delta)}\ ,$$ for all $t_{0}\geq 0$, $\delta > 0$. Since $(p,q)\not=(1,\infty)$, we have that $\lim\limits_{\delta \to\infty}a_{p}(\delta)b_{q}(\delta)=\infty$, and therefore we can choose $d>0$ such that \begin{gather} \|X(t_{0}+d,t_{0})\|_{lip}\leq \frac{1}{2}\ , \end{gather} for all $t_{0}\geq0$. Applying Lemma~\ref{lem:masseraschaffer} to the map $g:\Delta\rightarrow \mathbb R_{+}$ defined by $ g(t,s)=\| X(t,s)\|_{lip}$, we obtain that \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} is uniformly exponentially stable. \end{proof} \begin{theorem}\label{the 3.1} The pair $(L^{1}(\mathbb R_+,\mathbb X),L^{\infty}(\mathbb R_+,\mathbb X))$ is admissible to the continuous evolution fa\-mi\-ly \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} if and only if the following statements hold \begin{enumerate} \item[$(i)$]\quad there exists $\psi\in L^1(\mathbb R_+,\mathbb X)$ such that $\mathbb G\psi\in L^{\infty}(\mathbb R_+,\mathbb X)$, and \item[$(ii)$]\quad there exists $N>0$ such that $\|X(t,s)\|_{lip} \leq N$, for all $(t,s)\in\Delta$.
\end{enumerate} \end{theorem} \begin{proof} The {\it necessity} follows from the proof of Theorem~\ref{the 3.2}, since we proved \eqref{rel:unifstability} without the $(p,q)\neq(1,\infty)$ assumption. {\it Sufficiency}. Let $f\in L^{1}(\mathbb R_+,\mathbb X)$. Since $\psi, f\in L^1(\mathbb R_+,\mathbb X)$, we have that \begin{gather} \begin{split} \label{unifstabil:estim1} \| (\mathbb G\psi)(t)-(\mathbb G f)(t)\| \leq&\, \int\limits^{t}_{0}\| X(t,s)\psi(s)- X(t,s)f (s)\| ds \\ \leq&\, \int\limits^{t}_{0}\|X(t,s)\|_{lip}\|\psi(s) - f(s)\|ds \\ \leq&\, N\int\limits^{t}_{0}\| \psi (s) -f (s)\| ds \\ \leq& N \| \psi - f \| _{1} \quad, \end{split} \end{gather} for all $t \geq0$; since $\mathbb G\psi\in L^{\infty}(\mathbb R_+,\mathbb X)$, this implies that $\mathbb G f \in L^\infty (\mathbb R_+,\mathbb X)$. It remains to prove that $\mathbb G: L^1(\mathbb R_+,\mathbb X)\to L^{\infty}(\mathbb R_+,\mathbb X)$ is Lipschitz continuous. But, using similar arguments as in \eqref{unifstabil:estim1}, we obtain that \begin{gather} \|\mathbb G f- \mathbb G g\|_{L^{\infty}(\mathbb R_+,\mathbb X)}\leq N\|f-g\|_{L^1(\mathbb R_+,\mathbb X)}\ , \end{gather} for all $f,g \in L^1(\mathbb R_+,\mathbb X)$. \end{proof} \begin{remark} Theorem \ref{the 3.1} extends a similar result for linear equations (see e.g. \cite{cop,masseraschaffer}). Note that in the linear case, condition $(ii)$ is automatically satisfied. \end{remark} In the following theorem we address the converse of Theorem~\ref{the 3.2}. With elementary arguments we can show that the admissibility of the pair $(L^p(\mathbb R_+,\mathbb X),L^q(\mathbb R_+,\mathbb X))$ with $p\leq q$ is a necessary condition for the uniform exponential stability of a continuous evolution family.
The idea is based on the use of Fubini's theorem, H\"{o}lder's inequality and the observation that if $f\in L^p(\mathbb R_+,\mathbb X)\cap L^q(\mathbb R_+,\mathbb X)$, then $f\in L^r(\mathbb R_+,\mathbb X)$ with $\|f\|_r\leq\max\{\|f\|_p,\|f\|_q\}$, for any $1\leq p\leq r\leq q\leq\infty$. \begin{theorem} Let \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} be a continuous evolution family and $1\leq p\leq q\leq \infty$. Then, the pair $(L^p(\mathbb R_+,\mathbb X), L^q(\mathbb R_+,\mathbb X))$ is admissible to \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} provided that \begin{enumerate} \item[$(a)$] there is a function $\psi\in L^p(\mathbb R_+,\mathbb X)$ such that $\mathbb G\psi \in L^q(\mathbb R_+,\mathbb X)$, and \item[$(b)$] \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} is uniformly exponentially stable. \end{enumerate} \end{theorem} \begin{proof} Let $N,\nu>0$ be as in Definition~\ref{def 2.2.}. Fix arbitrarily $f,g\in L^p(\mathbb R_+,\mathbb X)$. We have that \begin{eqnarray*} \| (\mathbb G f)(t) -(\mathbb G g)(t)\| &\leq& \int_0^t \|X(t,s)f(s)-X(t,s)g(s)\|ds \leq \int_0^t \|X(t,s)\|_{lip}\|f(s)-g(s)\|ds\\ &\leq& N\int_0^t e^{-\nu(t-s)}\|f(s)-g(s)\|ds\ . \end{eqnarray*} Consider $h,H:\mathbb R_+\to\mathbb R_+$ given by $h(t)=\|f(t)-g(t)\|$ and \begin{equation} H(t)=\int_0^t e^{-\nu(t-s)}h(s)ds\ \end{equation} for any $t\in\mathbb R_+$. In what follows, we will prove that if $h\in L^p(\mathbb R_+,\mathbb R)$ then $H\in L^q(\mathbb R_+,\mathbb R)$ (under the hypothesis $1\leq p\leq q\leq\infty$). \textit{Case 1.} If $p=\infty$, then $q=\infty$ and since \begin{eqnarray*} H(t) &\leq& \int_0^t e^{-\nu(t-s)}\|h\|_{\infty}ds\leq \|h\|_{\infty}\int_0^t e^{-\nu\tau}d\tau \leq \frac{1}{\nu}\|h\|_{\infty}\ , \end{eqnarray*} for all $t\in\mathbb R_+$, it follows that $H\in L^{\infty}(\mathbb R_+,\mathbb R)$ with $\|H\|_{\infty}\leq \frac{1}{\nu}\|h\|_{\infty}$.
\textit{Case 2.} If $p=1$, note that \begin{eqnarray*} H(t) = \int_0^t e^{-\nu(t-s)}h(s)ds\leq \int_0^t h(s)ds \leq \|h\|_1\ , \end{eqnarray*} for all $t\in\mathbb R_+$ and thus $H\in L^{\infty}(\mathbb R_+,\mathbb R)$ with $\|H\|_{\infty}\leq \|h\|_1$. Also, using Fubini's theorem we have that \begin{eqnarray*} \int_0^{\infty} H(t)dt &=& \int_0^{\infty}\!\int_0^{t}e^{-\nu(t-s)}h(s)\,ds\,dt = \int_0^{\infty}\!\int_s^{\infty} e^{-\nu(t-s)}h(s)\,dt\,ds \\ &=& \int_{0}^{\infty}e^{\nu s}h(s)\int_s^{\infty}\!e^{-\nu t}dt\,ds= \int_0^{\infty}e^{\nu s}h(s)\frac{e^{-\nu s}}{\nu}ds\\ &=& \frac{1}{\nu}\|h\|_1\ , \end{eqnarray*} which implies that $H\in L^1(\mathbb R_+,\mathbb R)$ with $\|H\|_1\leq \frac{1}{\nu}\|h\|_1$. Then, $H\in L^q(\mathbb R_+,\mathbb R)$ and $\|H\|_q\leq \max\{1,1/\nu\} \|h\|_1$. \textit{Case 3.} If $p\in(1,\infty)$, let $p'\in(1,\infty)$ such that $\frac{1}{p}+\frac{1}{p'}=1$ and let $\alpha,\beta\in(0,1)$ such that $\alpha+\beta=1$. For any $t\in\mathbb R_+$, we can write \begin{eqnarray*} H(t)\leq \left(\int_0^t h(s)^p ds\right)^{1/p} \left(\int_0^t e^{-\nu p'\tau}d\tau\right)^{1/p'}\leq \frac{1}{(\nu p')^{1/p'}}\|h\|_p\ . \end{eqnarray*} We obtain that $H\in L^{\infty}(\mathbb R_+,\mathbb R)$ and $\|H\|_{\infty}\leq (\nu p')^{-1/p'}\|h\|_p$. Next, we prove that $H\in L^p(\mathbb R_+,\mathbb R)$. Via H\"{o}lder's inequality, we have that \begin{eqnarray*} \int_0^t e^{-\nu(t-s)}h(s)ds &=& \int_0^t e^{-\nu\alpha(t-s)}e^{-\nu\beta(t-s)}h(s)ds \\ &\leq& \left(\int_0^t e^{-\nu\alpha p'(t-s)}ds\right)^{1/p'}\left(\int_0^t e^{-\nu\beta p(t-s)}h(s)^p\,ds\right)^{1/p}\\ &\leq& \left[\left(\frac{1}{\nu\alpha p'}\right)^{p-1} \int_0^t e^{-\nu\beta p(t-s)}h(s)^p\,ds\right]^{1/p}\ .
\end{eqnarray*} Then, denoting $C:=(\nu\alpha p')^{1-p}>0$, we can write \begin{eqnarray*} \int_0^{\infty}H(t)^pdt &=& \int_0^{\infty}\left(\int_0^t e^{-\nu(t-s)}h(s)ds\right)^p dt \leq C \int_{0}^{\infty}\int_0^{t} e^{-\nu\beta p(t-s)}h(s)^pds\,dt \\ &\leq& \frac{C}{\nu\beta p} \|h\|_p^p \end{eqnarray*} (the last step follows as in \textit{Case 2}, using Fubini's theorem). From here, it follows that $H\in L^p(\mathbb R_+,\mathbb R)$ with $\|H\|_p\leq C^{1/p}(\nu\beta p)^{-1/p}\|h\|_p$. Therefore, $H\in L^q(\mathbb R_+,\mathbb R)$ and we have that $\|H\|_q\leq\max\{(\nu p')^{-1/p'},C^{1/p}(\nu\beta p)^{-1/p}\}\|h\|_p$. In each case, we obtained that $H\in L^q(\mathbb R_+,\mathbb R)$ (provided that $h\in L^p(\mathbb R_+,\mathbb R)$), together with the existence of some $K>0$ (independent of $h$) such that $\|H\|_q\leq K\|h\|_p$. To complete the proof, note that from condition $(a)$ we have that $\psi-f\in L^p(\mathbb R_+,\mathbb X)$ and $\mathbb G\psi\in L^q(\mathbb R_+,\mathbb X)$; by virtue of all the above we obtain that $\mathbb G\psi - \mathbb G f\in L^q(\mathbb R_+,\mathbb X)$, and hence $\mathbb G f\in L^q(\mathbb R_+,\mathbb X)$, no matter how we take $f\in L^p(\mathbb R_+,\mathbb X)$. Moreover, we can state that $\|\mathbb G f - \mathbb G g\|_q\leq NK\|f-g\|_p$, for all $f,g\in L^p(\mathbb R_+,\mathbb X)$. Hence, $\mathbb G\in\ensuremath{\textsl{Lip}}(L^p(\mathbb R_+,\mathbb X),L^q(\mathbb R_+,\mathbb X))$. \end{proof} \begin{theorem} If the pair $(L^1(\mathbb R_+,\mathbb X),L_0^{\infty}(\mathbb R_+,\mathbb X))$ is admissible to the continuous evolution family \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} \!, then \ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} is asymptotically stable.
\end{theorem} \begin{proof} Let $x\in\mathbb X$, $t_0\in\mathbb R_+$ and consider the function $f:\mathbb R_+\to\mathbb X$ $$f(t) = \begin{cases} X(t,t_0)x &,\ t\in[t_0,t_0+1]\\ 0 &,\ t\in\mathbb R_+\setminus[t_0,t_0+1] \end{cases}\quad.$$ We have that $f\in L^1(\mathbb R_+,\mathbb X)$ and note that $$(\mathbb G f)(t) =\int_{t_0}^{t_0+1}X(t,s)X(s,t_0)xds=u_{t_0,x}(t)\ ,$$ for all $t\geq t_0+1$. Since $\mathbb G f\in L_0^{\infty}(\mathbb R_+,\mathbb X)$, we get that $\lim_{t\to\infty}u_{t_0,x}(t)=0$. \end{proof} \section{Applications} In this section, as a model for applications of the results obtained in the previous section, we consider equations of the form \begin{eqnarray*} \frac{\partial u(t,x)}{\partial t} &=& \frac{\partial^2 u(t,x)}{\partial x^2} +g(t,u(t,x)), \quad t>0 ,x\in (0,\pi ),\\ \frac{\partial u(t,x)}{\partial x} &=& 0, \quad x=0,\pi , \end{eqnarray*} where $u(t,x)$ is a scalar function of $(t,x)\in \mathbb R_+\times[0,\pi ]$, $g(t,y)$ is uniformly Lipschitz continuous in $y\in\mathbb R$, and $g(\cdot , 0)\in L^r(\mathbb R_+,\mathbb R)$ with $1\le r < \infty$. This equation can be re-written in the following abstract form in a Banach space $ \mathbb X$ \begin{equation}\label{ex1} \frac{d}{dt} u(t)= \Delta u(t) + G(t,u(t)), \end{equation} where $\mathbb X= \{ v\in W^{2,2}(0,\pi ): v' =0 \ \mbox{at} \ x=0,\pi \}$, $ \Delta v= v^{\prime\prime}$ for $v\in\mathbb X$, and $G(t,u(t))= g(t,u(t,\cdot ))$. As is well known, (an extension of) $\Delta$ generates a strongly continuous analytic semigroup in $\mathbb X$ that we denote by $(T(t))_{t\ge 0}$. By a standard argument (see e.g. \cite{aulmin}), we can prove that (\ref{ex1}) generates a continuous evolution family $\ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} $ in $\mathbb X$ that is determined by the equation \begin{equation} X(t,s)x= T(t-s)x + \int^t_s T(t-\xi ) G(\xi ,X(\xi ,s)x)d\xi , \quad \ \mbox{for all} \ t\ge s \ge 0.
\end{equation} \begin{theorem}\label{the 3.3} Assume that the pair $(L^p(\mathbb R_+,\mathbb X),L^q(\mathbb R_+,\mathbb X))$, with $(p,q)\neq(1,\infty)$, is admissible to the continuous evolution family $\ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} $. Then there exists a mild solution $u\in L^r(\mathbb R_+,\mathbb X)$ of (\ref{ex1}) that attracts all other mild solutions of the equation at an exponential rate. \end{theorem} \begin{proof} Let us consider the evolution semigroup $(T^h)_{h\ge 0}$ in $L^r(\mathbb R_+,\mathbb X)$ associated with the linear equation $u'=\Delta u$, defined as \begin{equation}\label{semi} [T^hf](t):= \begin{cases} T(h)f(t-h), \ \mbox{if} \ h\le t \\ 0, \ \mbox{if} \ 0\le t< h , \end{cases} \end{equation} for all $f\in L^r(\mathbb R_+,\mathbb X)$. Since $1\le r<\infty$, this semigroup is strongly continuous (see e.g. \cite{chilat,minrabsch}). Let us denote by ${\mathcal L}$ the generator of this semigroup. As is well known (see e.g. \cite{minrabsch}), $u\in D({\mathcal L})$ and ${\mathcal L}u=-f$ if and only if \begin{equation}\label{ex 2} u(t)= \int^t_0 T(t-\xi )f(\xi )\,d\xi\ , \quad t\ge 0. \end{equation} Consider the operator ${\mathcal L} +{\mathcal G}$ on $L^r(\mathbb R_+,\mathbb X)$, where ${\mathcal G}$ is the Nemytsky operator associated with $G$, defined as $L^r(\mathbb R_+,\mathbb X) \ni \phi (\cdot ) \mapsto G(\cdot , \phi (\cdot )) \in L^r(\mathbb R_+,\mathbb X)$. Note that under the assumption on $g$, the Nemytsky operator acts in $L^r(\mathbb R_+,\mathbb X)$ as a Lipschitz continuous operator. So, in the same way as in \cite{aulmin}, we can show that the operator ${\mathcal L} +{\mathcal G}$ associated with (\ref{ex1}) generates a strongly continuous semigroup $(S(h))_{h\ge 0}$ in $L^r(\mathbb R_+,\mathbb X)$ that is referred to as the evolution semigroup associated with (\ref{ex1}). Moreover, this semigroup is determined by \begin{equation}\label{semi-1} [S(h)f](t):= \begin{cases} X(t,t-h)f(t-h), \ \mbox{if} \ h\le t \\ 0, \ \mbox{if} \ 0\le t< h .
\end{cases} \end{equation} for all $f\in L^{r}(\mathbb R_+,\mathbb X)$. By the admissibility of the pair $(L^p(\mathbb R_+,\mathbb X),L^q(\mathbb R_+,\mathbb X))$ and Theorem~\ref{the 3.2}, $\ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} $ is uniformly exponentially stable. This implies that $S(h)$ is a strict contraction for sufficiently large $h$. In fact, \begin{eqnarray*} \| S(h)\phi -S(h)\psi \| _r &\le& Ne^{-\alpha h} \| \phi -\psi\|_r \end{eqnarray*} for all $\phi, \psi \in L^r(\mathbb R_+,\mathbb X)$, so that $Ne^{-\alpha h} <1$ if $h$ is sufficiently large. Let us fix an integer $n_0$ with $Ne^{-\alpha n_0}<1$. This yields that $S(n_0)$ has a unique fixed point $\varphi\in L^r(\mathbb R_+,\mathbb X)$. Since $S(h)$ commutes with $S(n_0)$ for all $h\ge 0$, it is easy to see that $\varphi$ is the unique common fixed point for the entire semigroup $(S(h))_{h\ge 0}$. This implies that $({\mathcal L}+{\mathcal G})\varphi =0$. So, ${\mathcal L}\varphi = -{\mathcal G}\varphi$, and by the formula (\ref{ex 2}), we have $$ \varphi (t)=\int^t_0 T(t-\xi )G(\xi ,\varphi (\xi ))d\xi ,\quad t\ge 0 . $$ This means that $\varphi\in L^r(\mathbb R_+,\mathbb X)$ is a mild solution starting at zero of (\ref{ex1}), so $\varphi (t) =X(t,0)0.$ Now we show that this solution attracts all other solutions at an exponential rate. In fact, every other solution starting, say, at $x\in \mathbb X$ is of the form $X(t,0)x$. Therefore, $$ \| X(t,0)x-X(t,0)0\| \le Ne^{-\alpha t} \| x\| , \quad t\ge 0 . $$ This completes the proof of the theorem. \end{proof} Before concluding this section we give an application of Theorem \ref{the 3.1}. \begin{proposition} Let the pair $(L^1(\mathbb R_+,\mathbb X),L^\infty(\mathbb R_+,\mathbb X))$ be admissible to the continuous evolution family $\ensuremath{\{X(t,s)\}_{(t,s)\in\Delta}} $ (generated by (\ref{ex1})). Moreover, assume that $g(t,0)=0$ for all $t\ge 0$. Then every mild solution $u$ of (\ref{ex1}) is bounded. \end{proposition} \begin{proof} Obviously, $u(t)=0$ is the trivial mild solution of (\ref{ex1}).
Therefore, for every mild solution $u(t) = X(t,s)x$, where $x\in \mathbb X$, we have $$ \| u(t)\| = \| X(t,s)x\| =\| X(t,s)x-X(t,s)0\| \le \| X(t,s)\| _{lip} \| x\| \le N \| x\| \quad \ \mbox{for all} \ t\ge s, $$ where $N$ is a positive number whose existence is guaranteed by Theorem \ref{the 3.1}. \end{proof} \bibliographystyle{amsplain}
\section{Introduction}\label{intro} Gravitational wave astronomy has tremendous potential for discovery, as has been spectacularly demonstrated by the ground-based LIGO/Virgo observatories~\cite{PhysRevLett.116.061102, PhysRevLett.119.161101}. The signals that have been detected to date have all been from binary systems, and are accurately modeled by theoretical templates. Going forward, it is hoped that entirely new classes of signals will be discovered, many of which we will not have templates for, either due to the difficulty in calculating the waveform (such as for supernovae), or from our ignorance about the existence of the source. Detecting signals of unknown morphology is challenging since the instruments themselves produce non-Gaussian transients, or glitches, that can be mistaken for signals of astrophysical origin. The Laser Interferometer Space Antenna (LISA)~\cite{2017arXiv170200786A}, like its ground based cousins, will very likely be afflicted by glitches. Glitches were seen in data from the LISA Pathfinder mission~\cite{PhysRevLett.116.231101, PhysRevLett.120.061101}, and it is hard to imagine that they will be absent from the more complex LISA measurement system. Characterizing these glitches, and accurately estimating their waveforms, will be an important component of the LISA global data analysis program. Unlike the situation on the ground, where the availability of multiple independent interferometers simplifies the task of separating glitches from signals, with LISA we will have a single instrument. Nor will we have any ``off-source'' data, free of loud gravitational wave signals, with which to perform a measurement of the instrument noise. With LISA the signal and noise measurement must be done simultaneously~\cite{0264-9381-34-24-244002} as part of a global analysis. Similar concerns led to the development of burst and glitch characterization analyses for LIGO.
One such analysis was the wavelet-based Bayesian algorithm BayesWave~\cite{0264-9381-32-13-135012}. This algorithm has played the key role in model-independent waveform reconstructions for most of the detected mergers seen by LIGO~\cite{PhysRevLett.116.061102}. Its broad capabilities were best demonstrated by the binary neutron star merger. BayesWave's ability to characterize a loud instrumental glitch, obscuring a large fraction of the all-important late inspiral, allowed for an accurate reconstruction of the astrophysical signal with the glitch removed~\cite{Pankow:2018qpo}, so that other analyses could properly characterize the binary neutron star's physical parameters~\cite{PhysRevLett.119.161101}. For LISA, we wish to develop an algorithm to serve the similar purpose of analyzing glitches and bursts. The instrumental glitches studied here fall into two categories: optical path and acceleration. Optical path glitches reflect non-Gaussian deviations in the optical path length of any of LISA's 6 laser links. Acceleration glitches result from disturbances to the acceleration of LISA's spacecraft. Laser phase noise glitches will be neglected in this study since they will be suppressed in the time delay interferometry (TDI) data channels~\cite{PhysRevD.65.082003}. Glitches can be represented through a superposition of sine-Gaussian wavelets in each component of the instrument. Gravitational wave bursts can similarly be represented by a superposition of wavelets. The signal is referenced to the solar system barycenter (SSB) and then projected onto LISA by computing the instrument response. As a first step, we consider signals and glitches that are described by a single wavelet and defer the generalization to multi-wavelet fits to future work.
To investigate our ability to characterize glitches and bursts consisting of a single wavelet, we will use Bayesian probability theory to calculate our degree of belief in the parameters which describe the injected signal, as quantified by the posterior distribution. The duration of these signals ranges from tens of seconds to roughly a day. Their duration and frequency content, in relation to the light travel time between the spacecraft, will have important implications for our ability to characterize these signals, and will also play a key role in our ability to distinguish whether the data contains a glitch (and which kind) or a burst. Glitches will enter the time delay interferometry (TDI) data channels with time delays of the light travel time between spacecraft. This time is about 8.3 seconds for the nominal $L= 2.5$ Gm separation, which sets the LISA response transfer frequency $f_* = c/(2\pi L)$ to be 19.1 mHz. Wavelets with frequencies below the transfer frequency will be harder to characterize and distinguish. An additional piece of the puzzle is in which data channels the wavelet's power appears, and in what proportion. Acceleration glitches enter the data stream by afflicting 2 different phase measurements, while optical path glitches afflict only 1. Bursts, on the other hand, enter all phase measurements through time delays which depend on the projected arm lengths, which in turn depend on where the incident gravitational wave originates on the sky. While the power distribution is the most useful discriminant, the phasing becomes most important in the case of a malfunctioning LISA arm, {\it i.e.} when we would be left with only one data channel. In this study we will address these considerations and investigate what we can learn and what features are most informative. This work is organized as follows: Section~\ref{sec:model} discusses the waveforms for optical path and acceleration glitches and for gravitational wave bursts.
Section~\ref{sec:bayes} reviews Bayesian inference and then describes the Markov Chain Monte Carlo algorithm we employ to carry out the parameter estimation and model selection analyses in this paper. Section~\ref{sec:PE} shows how well we can characterize glitch and burst parameters and recover the injected waveform. In section~\ref{sec:ModSel} we explore under what conditions we are able to distinguish a glitch from a burst, and identify what features of the signal are most responsible for making the distinction. We end with a discussion of future work to be carried out in section~\ref{sec:discuss}. Note that we work in units where $G=c=1$. \section{Glitch and Burst Models}\label{sec:model} The LISA constellation consists of 3 spacecraft in the shape of a quasi-equilateral triangle trailing behind Earth. The spacecraft have a total of 6 laser links, 2 for each arm. The phase of each laser is measured by phasemeters onboard the LISA spacecraft. A photon sent from the laser situated on spacecraft $i$, pointing towards spacecraft $j$, is emitted at time $t - L_{ij}$, where $L_{ij}$ is the arm length connecting spacecraft $i$ and $j$. The phase of this photon is measured at time $t$ by the phasemeter on spacecraft $j$. This phase measurement can be approximated as~\cite{PhysRevD.69.082003} \begin{align} \Phi_{ij}(t) &= C_{i}(t-\mathrm{L}_{ij}) - C_{j}(t) + \psi_{ij}(t) + n^{\mathrm{o}}_{ij}(t) \nonumber \\ &~~~-\hat{\textbf{r}}_{ij}(t)\cdot\left(\textbf{n}^{\mathrm{a}}_{ji}(t-L_{ij}) - \textbf{n}^{\mathrm{a}}_{ij}(t) \right)\,\,. \label{equ:phase} \end{align} The noise in the laser phase itself is described by the terms $C_{i}$. The term $\psi_{ij}$ describes the phase shift induced by the presence of gravitational waves. The term $n^{\mathrm{o}}_{ij}$ represents the contribution from the optical bench on spacecraft $j$ that receives light from spacecraft $i$.
The last term represents the contribution to the phase measurement incurred by the acceleration noise of the spacecraft. Note that in this simplified model, where we are neglecting higher order features of LISA's motion such as the flexing of the arms, the only component of the acceleration that is relevant is the differential acceleration along the line $\hat{\textbf{r}}_{ij}$ connecting the center of mass of the two spacecraft. These laser phase measurements are expected to be dominated by laser phase noise. Current estimates indicate that the laser phase noise will be roughly 10 orders of magnitude greater than the changes in phase induced by the gravitational waves of interest~\cite{PhysRevD.65.082003}. The phase noise can be canceled using time delay interferometry (TDI). The TDI data combinations synthesize light paths of equal length by adding together phase measurements with time delays given by multiples of the instantaneous light travel times. This superposition cancels the laser phase noise. When higher order corrections to the spacecraft motion are taken into account, the superposition of time delayed phase measurements becomes more complicated. Here we use the simpler first generation TDI data combinations. Three Michelson-like TDI channels can be formed from the signals extracted at each vertex of the observatory. These are denoted as $X$, $Y$, and $Z$. The $X$ TDI channel is constructed as follows: \begin{align} X(t) &= \Phi_{12}(t-3\mathrm{L}) - \Phi_{13}(t - 3\mathrm{L}) + \Phi_{21}(t-2\mathrm{L}) \nonumber \\ &~~~ - \Phi_{31}(t-2\mathrm{L}) + \Phi_{13}(t-\mathrm{L}) - \Phi_{12}(t-\mathrm{L}) \nonumber \\ &~~~+ \Phi_{31}(t) - \Phi_{21}(t) \,\,, \label{equ:X} \end{align} where we have assumed that the LISA arms are of constant and equal length, i.e. $L_{ij}(t) = L$. The $Y$ and $Z$ channels are constructed through a cyclic permutation of the spacecraft labels---e.g. $1\rightarrow2$, $2\rightarrow3$, and $3\rightarrow1$ to construct the $Y$ TDI channel.
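The laser-noise cancellation in the $X$ combination can be demonstrated with a short numerical sketch. The discrete-time toy model below keeps only the laser-noise terms $C_i$ of eqn.~(\ref{equ:phase}), takes the (equal, constant) arm length to be an integer number of samples, and checks that the $X$ combination of eqn.~(\ref{equ:X}) cancels the $C_i$ identically; the sample counts are illustrative choices, not mission parameters.

```python
import random

# Discrete-time sketch of laser phase noise cancellation in the
# first-generation TDI X channel, for equal constant arms.
random.seed(0)
L = 5                      # light travel time, in samples (illustrative)
N = 200                    # length of the simulated record
C = {i: [random.gauss(0.0, 1.0) for _ in range(N)] for i in (1, 2, 3)}

def phi(i, j, t):
    # One-way phase measurement, keeping only the laser-noise terms:
    # Phi_ij(t) = C_i(t - L) - C_j(t).
    return C[i][t - L] - C[j][t]

def X(t):
    # First-generation Michelson TDI combination.
    return (phi(1, 2, t - 3 * L) - phi(1, 3, t - 3 * L)
            + phi(2, 1, t - 2 * L) - phi(3, 1, t - 2 * L)
            + phi(1, 3, t - L) - phi(1, 2, t - L)
            + phi(3, 1, t) - phi(2, 1, t))

# Each C_i sample enters twice with opposite signs, so X(t) vanishes
# up to floating-point round-off.
assert all(abs(X(t)) < 1e-12 for t in range(4 * L, N))
```

Adding a glitch or gravitational-wave term to `phi` would leave that contribution in $X$ while the $C_i$ still cancel, which is the point of the TDI construction.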
It is often convenient to work with the following linear combinations of the $X,Y,Z$ channels: \begin{subequations} \begin{align} A &= \frac{1}{3}(2X-Y-Z) \\ E &= \frac{1}{\sqrt{3}}(Z-Y) \\ T &= \frac{1}{3}(X+Y+Z) \,\,. \label{equ:AET} \end{align} \end{subequations} Below the transfer frequency, the $A$ and $E$ channels synthesize two right angle interferometers with a relative orientation of $45^\circ$, and provide instantaneous measures of the plus and cross polarization states of a gravitational wave. At these frequencies the $T$ channel is mostly sensitive to the scalar breathing mode polarization state, which is absent in Einstein gravity, and thus provides a null channel that is useful for measuring a particular combination of the noise contributions. When the noise levels are equal on each spacecraft, the cross-spectral densities of the noise in the $A,E,T$ channels vanish~\cite{PhysRevD.82.022002,PhysRevD.89.022001}. An arbitrary signal seen in the TDI data channels may be reconstructed by a superposition of sine-Gaussian wavelets. In this study we use Gabor-Morlet wavelets. In the time-domain they are given by \begin{equation} \Psi = A e^{-(t-t_{0})^{2}/\tau^{2}}\cos\left[2\pi f_{0}(t-t_{0}) + \phi_{0}\right] \,\,, \label{equ:wavelet} \end{equation} where $A$ is the wavelet amplitude, $t_{0}$ and $f_{0}$ are the central time and frequency, the wavelet time scale is $\tau$---related to the wavelet quality factor $Q$ through $\tau~=~Q/(2\pi f_{0})$---and $\phi_{0}$ is the initial phase. Occasionally we will use the variable $\bar{\phi} = \phi_{0} - 2\pi f_{0} t_{0}$. The Fourier transform of the Gabor-Morlet wavelet is \begin{align} \tilde{\Psi} &= \frac{\sqrt{\pi}\tau A}{2}e^{-i(2\pi f t_{0} + \phi_{0})} \nonumber \\ &~~~~~ \times\left[e^{-\left(\pi\tau(f+f_{0})\right)^{2}} + e^{2i\phi_{0}}e^{-\left(\pi\tau(f-f_{0})\right)^{2}} \right] \,\,.
\label{equ:FTwavelet} \end{align} In the Fourier domain we see that in the large quality factor $Q$, or equivalently large $\tau$, regime the second term in eqn.~(\ref{equ:FTwavelet}) is dominant. Ignoring the sub-dominant term, we can estimate the signal-to-noise ratio (SNR) in the case of white noise as \begin{align} \rho^{2} \approx& \frac{4}{S_{n}(f_{0})}\int_{0}^{\infty} \left(\frac{\sqrt{\pi}\tau A}{2}\right)^{2}e^{-2\left(\pi\tau(f-f_{0})\right)^{2}} df \nonumber \\ \approx & \sqrt{\frac{\pi}{2}}\frac{A^{2} \tau}{S_{n}(f_{0})} \,\,, \label{eq:snrEST} \end{align} where $S_{n}(f_{0})$ is an appropriate noise power spectral density, assumed constant so that we may approximate the integral. This result will become useful later when we wish to estimate a reasonable bandwidth in the frequency domain over which to calculate these signals. \subsection{Instrumental Glitches}\label{glitch} To model instrumental glitches we inject a Gabor-Morlet wavelet into the appropriate term in eqn. (\ref{equ:phase}). For example, a glitch in the optical path length pointing from spacecraft 1 to 2 is modeled as $\Phi_{12}(t) = n^{\mathrm{o}}_{12}(t) = \Psi(t)$. We will label such a glitch as $\Phi_{12}^{\mathrm{op}}$. An acceleration glitch associated with the proof mass on spacecraft 2, referenced against spacecraft 1, will appear in two phase measurements: $\Phi_{12}(t) = -\Psi(t)$ and $\Phi_{21}(t) = \Psi(t-L)$. This acceleration glitch will be denoted as $\Phi_{12}^{\mathrm{ac}}$. Laser phase glitches are neglected in this work since the TDI channels are constructed such that laser phase noise is canceled. The $X$, $Y$, and $Z$ TDI channels can be constructed for both optical path and acceleration glitches analytically in the frequency domain.
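The closed-form estimate in eqn.~(\ref{eq:snrEST}) can be verified numerically; the sketch below uses purely illustrative parameter values:

```python
import numpy as np

# Numerical check of the high-Q white-noise SNR estimate, eqn (snrEST).
A, tau, f0, Sn = 2.0, 50.0, 0.1, 1e-3      # illustrative; Q = 2*pi*f0*tau ~ 31

f = np.linspace(0.0, 1.0, 200001)
df = f[1] - f[0]
# dominant (f ~ f0) term of |Psi~|^2 for white noise of level Sn
integrand = (np.sqrt(np.pi) * tau * A / 2) ** 2 \
    * np.exp(-2.0 * (np.pi * tau * (f - f0)) ** 2)
rho2_numeric = (4.0 / Sn) * np.sum(integrand) * df
rho2_approx = np.sqrt(np.pi / 2) * A ** 2 * tau / Sn
```

For these values the direct sum and the closed form agree to better than a tenth of a percent.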
For the optical path glitch $\Phi_{12}^{\mathrm{op}}$ the response is \begin{subequations} \begin{align} \tilde{X} &= 2i\tilde{\Psi}e^{-2if/f_{*}}\sin\frac{f}{f_{*}} \,\,, \\ \tilde{Y} &= -2i\tilde{\Psi}e^{-if/f_{*}}\sin\frac{f}{f_{*}} \,\,, \\ \tilde{Z} &= 0 \, . \end{align} \end{subequations} Note that there is no response in the $Z$ channel. The factor of $\sin f/f_{*}$ is due to differencing the disturbance by the time delay. The only other optical path glitch that has no response in the $Z$ channel is $\Phi_{21}^{\mathrm{op}}$, which produces the response \begin{subequations} \begin{align} \tilde{X} &= 2i\tilde{\Psi}e^{-if/f_{*}}\sin\frac{f}{f_{*}} \,\,,\\ \tilde{Y} &= -2i\tilde{\Psi}e^{-2if/f_{*}}\sin\frac{f}{f_{*}} \,\,, \\ \tilde{Z} &= 0 \, . \end{align} \end{subequations} We can already glean insight into how optical path glitches can be identified. When all 6 laser links are functioning, none of the optical path glitches can be made to look like another. For example, suppose we try to match the $X$ channel response of $\Phi_{12}^{\mathrm{op}}$ to that of $\Phi_{21}^{\mathrm{op}}$. This would require a time shift $t \rightarrow t+L$, i.e. a factor of $e^{if/f_{*}}$ in the frequency domain. This time shift would, of course, shift the $Y$ response in the opposite of the desired direction in time. We cannot find a transformation of wavelet parameters such that any optical path glitch looks like another when all 6 laser links are functioning. If we are unfortunate enough to have only 2 functioning arms, we will be at a loss when attempting to distinguish these two glitches. That is, if we have only the $X$ channel, we will not be able to distinguish $\Phi_{12}^{\mathrm{op}}$ from a time shifted $\Phi_{21}^{\mathrm{op}}$. We must also contend with acceleration glitches. The acceleration glitch $\Phi^{\mathrm{ac}}_{12}$ has the TDI response \begin{equation} \tilde{Y} = 4 \tilde{\Psi} e^{-2if/f_{*}}\sin^{2}\frac{f}{f_{*}} \,\,, \end{equation} while both the $X$ and $Z$ channels are null.
All acceleration glitches have a response in only one of the $X$, $Y$, and $Z$ data channels. Acceleration glitches also have an additional suppression from an extra factor of the transfer function $\sin f/f_{*}$. This is due to the acceleration glitch appearing in two phase measurements separated by the light travel time between spacecraft. With acceleration glitches, however, we are unable to unambiguously determine their origin even when all 6 laser links are functioning. There are perfect degeneracies between pairs of acceleration glitches. For example, the response to the acceleration glitch $\Phi^{\mathrm{ac}}_{32}$, \begin{equation} \tilde{Y} = -4 \tilde{\Psi} e^{-2if/f_{*}}\sin^{2}\frac{f}{f_{*}} \,\,, \end{equation} has precisely the same form as that of $\Phi^{\mathrm{ac}}_{12}$ except for a shift in its initial phase (by $\pi$). In the scenario that we lose one arm of the constellation we will be no worse off with respect to distinguishing acceleration glitches. The responses to glitches in other components are shown in table~\ref{tbl:TDI}.
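This degeneracy is easy to verify from the analytic responses; the following sketch (with an arbitrary stand-in for $\tilde{\Psi}$) checks that $\Phi^{\mathrm{ac}}_{32}$ is indistinguishable from $\Phi^{\mathrm{ac}}_{12}$ with a sign-flipped, i.e. phase-shifted by $\pi$, wavelet spectrum:

```python
import numpy as np

def op12_response(psi_f, x):
    """(X, Y, Z) response to the optical path glitch Phi^op_12; x = f/f_*."""
    X = 2j * psi_f * np.exp(-2j * x) * np.sin(x)
    Y = -2j * psi_f * np.exp(-1j * x) * np.sin(x)
    return X, Y, np.zeros_like(X)

def ac_response(psi_f, x, sign=+1):
    """Y response to Phi^ac_12 (sign=+1) or Phi^ac_32 (sign=-1);
    X and Z are null for these two glitches."""
    return 4.0 * sign * psi_f * np.exp(-2j * x) * np.sin(x) ** 2
```

A $\pi$ shift of $\phi_{0}$ flips the sign of the Gabor-Morlet spectrum, so the two acceleration responses map onto one another exactly.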
\begin{table*}[t] \centering \renewcommand\arraystretch{1.5} \begin{tabular}{ |c|c|c|c| } \hline & $\tilde{X}$ & $\tilde{Y}$ & $\tilde{Z}$ \\ \hline $\Phi^{\mathrm{op}}_{12}$ & $2i\tilde{\Psi}e^{-2i f/f_{*}}\sin \left(f/f_{*}\right)$ & $-2i\tilde{\Psi}e^{-i f/f_{*}}\sin\left(f/f_{*}\right)$ & 0 \\ $\Phi^{\mathrm{op}}_{21}$ & $2i\tilde{\Psi}e^{-i f/f_{*}}\sin\left(f/f_{*}\right)$ & $-2i\tilde{\Psi}e^{-2i f/f_{*}}\sin\left(f/f_{*}\right)$ & 0 \\ $\Phi^{\mathrm{op}}_{13}$ & 0 & $2i\tilde{\Psi}e^{-i f/f_{*}}\sin\left(f/f_{*}\right)$ & $-2i\tilde{\Psi}e^{-2i f/f_{*}}\sin\left(f/f_{*}\right)$ \\ $\Phi^{\mathrm{op}}_{31}$ & 0 & $-2i\tilde{\Psi}e^{-i f/f_{*}}\sin\left(f/f_{*}\right)$ & $2i\tilde{\Psi}e^{-2i f/f_{*}}\sin\left(f/f_{*}\right)$ \\ $\Phi^{\mathrm{op}}_{23}$ & $2i\tilde{\Psi}e^{-i f/f_{*}}\sin\left(f/f_{*}\right)$ & 0 & $-2i\tilde{\Psi}e^{-2i f/f_{*}}\sin\left(f/f_{*}\right)$ \\ $\Phi^{\mathrm{op}}_{32}$ & $2i\tilde{\Psi}e^{-i f/f_{*}}\sin\left(f/f_{*}\right)$ & 0 & $-2i\tilde{\Psi}e^{-2i f/f_{*}}\sin\left(f/f_{*}\right)$ \\ \hline $\Phi^{\mathrm{ac}}_{12}$ & 0 & $4\tilde{\Psi}e^{-2i f/f_{*}}\sin^{2}\left(f/f_{*}\right)$ & 0 \\ $\Phi^{\mathrm{ac}}_{21}$ & $-4\tilde{\Psi}e^{-2i f/f_{*}}\sin^{2}\left(f/f_{*}\right)$ & 0 & 0 \\ $\Phi^{\mathrm{ac}}_{13}$ & 0 & 0 & $-4\tilde{\Psi}e^{-2i f/f_{*}}\sin^{2}\left(f/f_{*}\right)$ \\ $\Phi^{\mathrm{ac}}_{31}$ & $4\tilde{\Psi}e^{-2i f/f_{*}}\sin^{2}\left(f/f_{*}\right)$ & 0 & 0 \\ $\Phi^{\mathrm{ac}}_{23}$ & 0 & 0 & $4\tilde{\Psi}e^{-2i f/f_{*}}\sin^{2}\left(f/f_{*}\right)$ \\ $\Phi^{\mathrm{ac}}_{32}$ & 0 & $-4\tilde{\Psi}e^{-2i f/f_{*}}\sin^{2}\left(f/f_{*}\right)$ & 0 \\ \hline \end{tabular} \caption{\label{tbl:TDI} This table contains the analytic first generation TDI variables for optical path and acceleration glitches. Note that optical path glitches occupy 2 of the $XYZ$ TDI channels while the acceleration glitches only occupy 1. Acceleration glitches pick up an additional factor of the transfer function $\sin f/f_{*}$.
A change in wavelet parameters, specifically the initial phase, leads to a perfect degeneracy between pairs of acceleration glitches when all three TDI channels are functioning. This is not the case for optical path glitches.} \end{table*} When generating these waveforms we wish to sample an appropriate bandwidth economically. The signal-to-noise ratio for an optical path glitch, given one data channel, can be estimated as \begin{equation} \rho_{\mathrm{est}}^{2} = \frac{\sqrt{\pi/2} A^{2} \tau}{S_{X,M}(f_{0})} \,\,, \end{equation} in the large $\tau$ limit obtained from eqn.~(\ref{eq:snrEST}). $S_{X,M}$ is the Michelson-equivalent power spectral density defined as $S_{X,M} = S_{X}/\left(4 \sin^{2}(f/f_{*})\right)$. Similarly, the SNR for an acceleration glitch can be estimated as \begin{equation} \rho_{\mathrm{est}}^{2} = \frac{4 \sqrt{\pi/2} A^{2} \tau \sin^{2}(f_{0}/f_{*})}{S_{X,M}(f_{0})} \,\,. \end{equation} A bandwidth of $\Delta f = 4 (\rho_{\mathrm{est}}/5)^{2}/\tau$ was used to capture in excess of $99.9\%$ of the SNR in each glitch. In addition to distributing power in different TDI channels, glitches in different components produce different phasing in the response. The phasing information depends critically on the frequency of the glitch $f_0$ and its duration $\tau$. Higher frequency glitches are heavily modulated by the transfer functions, making it easier to determine their origin. In Figure~\ref{fig:ThreeRegimes} the $AET$ TDI channels for optical path glitches $\Phi_{12}^{\mathrm{op}}$ are displayed in red and acceleration glitches $\Phi_{21}^{\mathrm{ac}}$ in blue. \begin{figure*}[!htb] \centering \includegraphics[width=1.0\textwidth]{ExampleWavelets.pdf} \caption{\label{fig:ThreeRegimes} This figure displays the $AET$ TDI channel responses for various glitches ($\Phi_{12}^{\mathrm{op}}$ in red, and $\Phi_{21}^{\mathrm{ac}}$ in blue) and gravitational wave bursts (in black).
The top row shows wavelets with durations much longer than the light travel time between LISA spacecraft. The middle row shows wavelets with durations comparable to the light travel time. The bottom row shows wavelets with durations less than the light travel time, which leads to a clean separation of the glitch wavelets in the TDI channels. Note that the glitch wavelets only appear in a subset of the TDI channels.} \end{figure*} The amplitudes of the optical and acceleration glitches were chosen for ease of comparison, while maintaining the correct relative amplitudes in the different TDI channels. The top row (case 1) displays glitches with the parameters $\tau = 0.2$~hours and $f_{0} = 2/\tau$, i.e. $2.7$~mHz, placing this glitch well below the transfer frequency. These parameters give the glitch a quality factor of 12.6. Since the wavelet has a low frequency, its amplitude does not change substantially in a light travel time. This means that the construction of the TDI channels acts like a derivative of the input. In the middle row (case 2) the parameters of the wavelet are $\tau = L = 8.33$~seconds and $f_{0} = 1.3/\tau = 156$~mHz ($Q\sim 8$). This wavelet has a temporal extent comparable to that of the light travel time between spacecraft. This results in a waveform that is the superposition of two wavelets with a small time shift between them. Lastly, in the bottom row (case 3) we see a wavelet with $\tau=1$~second and $f_{0} = 800$~mHz, i.e. $Q = 5.1$. Here the frequency of the signal is substantially larger than the transfer frequency and the duration of the signal in time is much less than the light travel time, leading to a clean separation of the wavelets in the TDI channels. Note that in the low frequency regime the optical path glitch has a suppressed output in the $T$ channel. We also see that the $E$ channel response to the acceleration glitch is totally suppressed.
This is because there is no $Z$ or $Y$ response for this specific acceleration glitch and the $E$ channel has no $X$ channel dependence. \subsection{Gravitational Wave Bursts}\label{burst} The optical path length change due to a gravitational wave signal in the laser link connecting the $i^{\mathrm{th}}$ and $j^{\mathrm{th}}$ spacecraft is given by \begin{equation} \delta \ell_{ij}(t) = \textbf{D}_{ij}:\int_{\xi_{i}}^{\xi_{j}}\textbf{h}(t) dt\,\,, \label{equ:dl} \end{equation} where the colon denotes full contraction between the tensors, i.e. $\textbf{A}:\textbf{B} = A^{jk}B_{jk}$. The time $t$ is Solar System Barycenter (SSB) time, and $\xi_{i} = t_{i} - \hat{\textbf{k}}\cdot\textbf{x}_{i}$ is the wave variable defining surfaces of constant phase for the gravitational wave. The position of the $i^{\mathrm{th}}$ spacecraft is $\textbf{x}_{i}$, $t_{i}$ is the time of emission of the laser photon from spacecraft $i$, and $t_{j}$ is the time of reception of the laser photon at spacecraft $j$. The detector tensor $\textbf{D}$ is given by \begin{equation} \textbf{D} = \frac{1}{2}\frac{\hat{\textbf{r}}_{ij}\otimes\hat{\textbf{r}}_{ij}}{1-\hat{\textbf{k}}\cdot\hat{\textbf{r}}_{ij}} \,\,, \label{equ:det_tensor} \end{equation} where $\hat{\textbf{k}} = -(\sin\theta \cos\phi, \sin\theta \sin\phi, \cos\theta)$ is the gravitational wave propagation direction---$\theta$ and $\phi$ designate the source's position in spherical polar coordinates in the SSB frame. The quantity $\hat{\textbf{r}}_{ij}$ is the unit-separation vector between the LISA spacecraft pointing from spacecraft $i$ to spacecraft $j$. In this study the LISA orbits are kept to leading order in eccentricity, thereby fixing the LISA arm lengths to be constant~\cite{PhysRevD.69.082003} $\mathrm{L} = |\textbf{r}_{ij}|$ for all $i,j$ combinations.
The gravitational wave tensor $\textbf{h}$ is given by \begin{equation} \textbf{h} = h_{+}(t) \textbf{e}'_{+}(\psi,\theta,\phi) + h_{\times}(t)\textbf{e}'_{\times}(\psi,\theta,\phi) \,\,, \label{equ:gw_tensor} \end{equation} where $\textbf{e}'_{+,\times}$ are the polarization tensors $\textbf{e}_{+,\times}$ rotated by the polarization angle $\psi \in \left[0, \pi\right]$. In this work we assume that the gravitational waves are elliptically polarized such that, in the frequency domain, $\tilde{h}_{\times} = i \epsilon \tilde{h}_{+}$, parameterized by the ellipticity $\epsilon \in \left[0,1\right]$. We model the integrated gravitational wave polarizations as Gabor-Morlet wavelets such that $\int^{t}h_{+}(t')dt' = L \Psi(t)$. We may approximate the detector as static for the duration of a wavelet, since corrections would be on the order of $\tau/(1\,\mathrm{yr})$. This means we may safely evaluate all terms associated with the position of the detector at the central time of the wavelet $t_{0}$ and assume the value is constant. The response to the wavelet in the frequency domain is then \begin{align} \tilde{y}_{ij}&= \frac{f}{f_{*}} \left(F^{+'}_{ij} + i \epsilon F^{\times'}_{ij}\right) \mathcal{T}_{ij}(f; \hat{\textbf{k}}) \tilde{\Psi}(f)e^{-2\pi i f \hat{\textbf{k}} \cdot \textbf{x}_{i}(t_{0})} \label{eq:gwRep} \end{align} where $y_{ij} = \delta \ell_{ij}/(2L)$ and $F^{+,\times}_{ij} = \left[\hat{\textbf{r}}_{ij}\otimes \hat{\textbf{r}}_{ij}\right]: \textbf{e}_{+,\times}$. $\mathcal{T}_{ij}$, the transfer function, is given by \begin{align} \mathcal{T}_{ij} =& \frac{1}{4} e^{i\left( \frac{f}{2 f_{*}}(1 - \hat{\textbf{k}}\cdot\hat{\textbf{r}}_{ij})\right)} \sinc \left( \frac{f}{2 f_{*}}(1 - \hat{\textbf{k}}\cdot\hat{\textbf{r}}_{ij})\right)\,\,.
\label{eq:transfer} \end{align} The wavelet has its central time shifted by the light travel time between the SSB origin and spacecraft $i$ through the phase factor $e^{-2\pi i f \hat{\textbf{k}} \cdot \textbf{x}_{i}(t_{0})}$ present in eqn.~(\ref{eq:gwRep}). The ellipticity $\epsilon$ and polarization angle $\psi$ simply modulate the amplitude of the response. We see that the sky angles modulate the amplitude too, but also enter into the phasing. Unlike instrumental glitches, gravitational wave bursts will induce responses in all TDI channels. It is important to note, though, that for frequencies below the transfer frequency $f_{*}$ the gravitational wave response in the $T$ channel is heavily suppressed~\cite{PhysRevD.82.022002}. This can be seen in the $T$ channel response for case 1 in Figure~\ref{fig:ThreeRegimes}. The signal in each panel of Figure~\ref{fig:ThreeRegimes} represents a gravitational wave burst. The sky angles were chosen such that $\cos\theta = 0.23$ and $\phi=2.31$. The polarization angle was $0.45$ and the ellipticity $0.5$. The wavelet parameters are precisely the same as those for the glitches in the panel the burst shares (up to an amplitude factor or time shift chosen for ease of comparison). Note that for case 3 the signal response is distinctly different from that of the glitch. Recall that the glitches in this case were cleanly separated. This is because glitches enter the data stream with time delays equal to the light travel time between spacecraft, which is longer than their extent in time. Gravitational waves enter the data stream with time delays equal to the projected arm lengths. This can lead to foreshortened arms, allowing for some overlap between the wavelets as seen in case 3.
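A minimal sketch of the single-link transfer function of eqn.~(\ref{eq:transfer}), with $\hat{\textbf{k}}$ built from the SSB sky angles as above (note that NumPy's \texttt{sinc} is the normalized $\sin(\pi x)/(\pi x)$, hence the division by $\pi$; the unit separation vector below is a placeholder):

```python
import numpy as np

def khat(theta, phi):
    """Gravitational wave propagation direction in the SSB frame."""
    return -np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])

def transfer(f, fstar, k, rhat):
    """Single-link transfer function T_ij, eqn (transfer)."""
    u = (f / (2.0 * fstar)) * (1.0 - np.dot(k, rhat))
    # np.sinc(x) = sin(pi x) / (pi x), so rescale the argument by 1/pi
    return 0.25 * np.exp(1j * u) * np.sinc(u / np.pi)
```

In the low frequency limit $f \ll f_{*}$ the magnitude of the transfer function tends to $1/4$, recovering the long-wavelength response.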
\section{Bayesian Inference }\label{sec:bayes} With the glitch and burst models established we now turn to the methods used to infer the properties of the gravitational wave signals and instrument glitches and develop probability distributions for the parameters of the models. These probabilities are quantified by the posterior distribution $p(\vec{\lambda}_{\tiny \mathcal{M}}|\textbf{s}, \mathcal{M})$, which reflects our belief about a given set of parameters $\vec{\lambda}_{\tiny \mathcal{M}}$ that specify model $\mathcal{M}$ given data $\textbf{s}$. The posterior distribution is obtained via Bayes' theorem: \begin{equation} p(\vec{\lambda}_{\tiny \mathcal{M}}|\textbf{s}, \mathcal{M}) = \frac{p(\textbf{s}|\vec{\lambda}_{\tiny \mathcal{M}}, \mathcal{M})p(\vec{\lambda}_{\tiny \mathcal{M}}|\mathcal{M})}{p(\textbf{s}|\mathcal{M})} \,\,, \end{equation} where $p(\vec{\lambda}_{\tiny \mathcal{M}}|\mathcal{M})$ is the prior distribution for the parameters $\vec{\lambda}_{\tiny \mathcal{M}}$, $p(\textbf{s}|\vec{\lambda}_{\tiny \mathcal{M}}, \mathcal{M})$ is the likelihood of the data given the parameters, and $p(\textbf{s}|\mathcal{M})$ is the evidence for the model $\mathcal{M}$. Along with the assumptions we have already made in the construction of the TDI channels, we further assume that, aside from the glitches modeled here, the noise is stationary and Gaussian. The likelihood function for the data then takes the form \begin{equation} p(\textbf{s}|\vec{\lambda}, \mathcal{M}) \propto \exp\left[ -\frac{1}{2} \sum_{I} \left(\textbf{s}_{I}-\textbf{h}_{I}(\vec{\lambda})|\textbf{s}_{I}-\textbf{h}_{I}(\vec{\lambda})\right) \right] \,\,, \end{equation} where the $\mathcal{M}$ subscript on the parameters has been dropped for simplicity. The sum is over TDI data streams $I = \lbrace A, E, T\rbrace$ (or just $I = \lbrace X \rbrace$ for some of our investigations).
The noise-weighted inner product is defined as \begin{equation} (\textbf{a}_{I}|\textbf{b}_{I}) = 4\mathcal{R}\int_{0}^{\infty} \frac{\tilde{a}_{I}(f)\tilde{b}_{I}^{*}(f)}{S_{n,I}(f)}df \,\,. \label{equ:nwip} \end{equation} The noise strain spectral densities in these data channels are given by \begin{widetext} \begin{subequations} \begin{align} S_{AE} &= \frac{16}{3}\sin^{2}\frac{f}{f_{*}}\left[\left(2+\cos\frac{f}{f_{*}}\right)P_{\mathrm{OMS}} + 2\left(3 + 2\cos\frac{f}{f_{*}} + \cos \frac{2f}{f_{*}}\right)\frac{P_{\mathrm{acc}}}{(2\pi f)^{4}}\right]\frac{1}{(2\mathrm{L})^{2}} \\ S_{T} &= \frac{16}{3}\sin^{2}\frac{f}{f_{*}}\left[\left(1-\cos\frac{f}{f_{*}}\right)P_{\mathrm{OMS}} + \left(3-4\cos \frac{f}{f_{*}} + \cos\frac{2f}{f_{*}}\right)\frac{P_{\mathrm{acc}}}{(2\pi f)^{4}} \right]\frac{1}{(2\mathrm{L})^{2}} \\ S_{X} &= 4 \sin^{2}\frac{f}{f_{*}}\left[4 P_{\mathrm{OMS}} + 8 \left(1+ \cos^{2} \frac{f}{f_{*}}\right)\frac{P_{\mathrm{acc}}}{(2\pi f)^{4}} \right]\frac{1}{(2\mathrm{L})^{2}} \, . \end{align} \end{subequations} \end{widetext} The noise in the $A$ and $E$ channels, $S_{AE}$, is the same, and the noise in the $T$ channel is $S_{T}$. The single-link optical metrology noise power $P_{\mathrm{OMS}}$ and single test-mass acceleration noise power $P_{\mathrm{acc}}$ are quoted in~\cite{2018arXiv180301944C}. Another contribution to the measured noise comes from millions of unresolved galactic binaries~\cite{1742-6596-840-1-012024} emitting gravitational waves to which LISA is sensitive. Estimates of the unresolved galactic binary confusion noise for various observation periods can also be found in the same reference. \subsection{Maximization over nuisance parameters} In a fully Bayesian analysis we would compute the joint posterior distribution of all parameters in the model.
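On a discrete frequency grid, the inner product of eqn.~(\ref{equ:nwip}) and the Gaussian log-likelihood reduce to simple sums. A sketch (one-sided spectra and a uniform grid assumed; names are illustrative):

```python
import numpy as np

def inner(a_f, b_f, Sn, df):
    """Discrete noise-weighted inner product (a|b), eqn (nwip)."""
    return 4.0 * np.real(np.sum(a_f * np.conj(b_f) / Sn)) * df

def log_like(s_f, h_f, Sn, df):
    """Gaussian log-likelihood for one channel, up to a
    model-independent normalization constant."""
    r = s_f - h_f
    return -0.5 * inner(r, r, Sn, df)
```

The multi-channel likelihood is the sum of `log_like` over the $A$, $E$, $T$ (or $X$) streams, and the SNR follows as $\rho = (\textbf{h}|\textbf{h})^{1/2}$.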
To simplify the analysis and achieve more rapid convergence, we chose to eliminate certain nuisance parameters by analytically maximizing the likelihood with respect to these parameters using the F-statistic approach (we could have analytically {\em marginalized} over the nuisance parameters instead~\cite{Thrane:2018qnx}, but it is much faster and simpler to maximize). The F-statistic~\cite{PhysRevD.72.043005} provides a way to maximize the likelihood over the extrinsic parameters---$A$, $\phi_{0}$ for a glitch, and $A$, $\phi_{0}$, $\psi$, $\epsilon$ for a burst. Through the use of several filters, constructed from the burst or glitch wavelet with specific choices of extrinsic parameters, one may construct the maximized likelihood. To understand how to construct the F-statistic it is useful to consider the burst model in the large $\tau$ and low frequency limit \begin{align} y_{ij} =&-\frac{f_{0}}{4 f_{*}} \left[ F_{ij}^{+'} \sin\left(2\pi f_{0} t_{i} + \bar{\phi} \right) \right.\nonumber \\ &\left. + \epsilon F_{ij}^{\times '} \cos\left(2\pi f_{0} t_{i} + \bar{\phi} \right) \right] \,\,, \end{align} where $t_{i} = t - \hat{\textbf{k}}\cdot\textbf{x}_{i}$. This signal may be deconstructed into four terms, each consisting of a constant amplitude dependent on the extrinsic parameters multiplying a time-dependent factor that additionally depends on the intrinsic parameters ($f_{0}$, $\tau$, $t_{0}$, $\theta$, $\phi$) \begin{equation} y_{ij} = \sum_{k} a_{k} A^{k}(t) \,\,.
\end{equation} The four filters $A^{k}(t)$ \begin{subequations} \begin{align} A^{1} &= -\frac{f_{0}}{4f_{*}}F_{ij}^{+} \sin\left(2\pi f_{0} t_{i} \right) \\ A^{2} &= -\frac{f_{0}}{4f_{*}}F_{ij}^{\times} \sin\left(2\pi f_{0} t_{i} \right) \\ A^{3} &= -\frac{f_{0}}{4f_{*}}F_{ij}^{+}\cos\left(2\pi f_{0} t_{i} \right) \\ A^{4} &=- \frac{f_{0}}{4f_{*}}F_{ij}^{\times} \cos\left(2\pi f_{0} t_{i} \right) \end{align}\label{eq:filters} \end{subequations} may be constructed by inserting the extrinsic parameters listed in table~\ref{tbl:Fstat} into the burst waveform generator. \begin{table}[t] \renewcommand\arraystretch{1.4} \centering \begin{tabular}{ |p{10mm}|p{10mm}|p{10mm}|p{10mm}|p{10mm}| } \hline Filter & $A$ & $\bar{\phi}$ & $ \psi$ & $\epsilon$ \\ \hline $A^{1}$ & 1 & 0 & 0 & 0 \\ $A^{2}$ & 1 & 0 & $-\pi/4$ & 0 \\ $A^{3}$ & 1 & $\pi/2$ & 0 & 0\\ $A^{4}$ & 1 & $\pi/2$ & $-\pi/4$ & 0 \\ \hline \end{tabular} \caption{\label{tbl:Fstat} Plugging these parameters into the gravitational wave burst waveform generator will construct the filters eqns.~(\ref{eq:filters}). The resulting filters can then be used to maximize the likelihood over the extrinsic parameters.} \end{table} The glitch F-statistic filters can be constructed by the parameter choices: 1) $A=1, \bar{\phi} = 0$ and 2) $A=1, \bar{\phi} = -\pi/4$. The extrinsic parameter coefficients are \begin{subequations} \begin{align} a_{1} = & A\left( \cos 2\psi \cos \bar{\phi} - \epsilon \sin 2\psi \sin\bar{\phi} \right) \\ a_{2} = & A\left( -\sin 2\psi \cos \bar{\phi} - \epsilon \cos 2\psi \sin\bar{\phi} \right) \\ a_{3} = & A\left( \cos 2\psi \sin \bar{\phi} + \epsilon \sin 2\psi \cos\bar{\phi} \right) \\ a_{4} = & A\left( -\sin 2\psi \sin \bar{\phi} + \epsilon \cos 2\psi \cos\bar{\phi} \right) \,\,.
\end{align} \end{subequations} The noise-weighted inner product of these filters with the data, $N^{k} = (\textbf{s}|\textbf{A}^{k})$, can be used to construct the maximized relative likelihood \begin{equation} \mathcal{F} = \log \mathcal{L} = \frac{1}{2}\left(M^{-1}\right)_{mn} N^{m} N^{n} \,\,. \end{equation} The value $\mathcal{L}$ is the relative likelihood, i.e. the ratio between the likelihood assuming $\textbf{h}$ contains a burst and the likelihood assuming there is no such signal, i.e. $\textbf{h} = 0$, such that \begin{equation} \log \mathcal{L} = (\textbf{s}|\textbf{h}) - \frac{1}{2}(\textbf{h}|\textbf{h}) \,\,. \end{equation} These results hold when summing over multiple data channels, such as when we use the $AET$ TDI channels. The matrix $M^{m n} = (\textbf{A}^{m}|\textbf{A}^{n})$ is simply the inner product matrix of the filters. Although in this study we do not make use of the extrinsic parameters that maximize the likelihood, it may prove useful in future studies to be able to calculate them. Inverting the equations for the filter coefficients returns the extremized extrinsic parameters \begin{subequations} \begin{align} A =& \sqrt{\frac{1}{2}\left(s + \sqrt{pq}\right)} \\ \epsilon =& \frac{s - \sqrt{pq}}{2\left (a_{1}a_{4} - a_{2}a_{3}\right) } \\ \tan(2\psi) =& \frac{a_{1}^{3} + 2 a_{2} a_{3}a_{4} + a_{1}(a_{2}^{2} +a_{3}^{2} - a_{4}^{2} + \sqrt{pq})}{a_{1}^{2}a_{2} + 2 a_{1} a_{3}a_{4} + a_{2}(a_{2}^{2} -a_{3}^{2} + a_{4}^{2} + \sqrt{pq})} \\ \tan\bar{\phi} =& \frac{a_{1}^{2} + a_{2}^{2} - a_{3}^{2} - a_{4}^{2} + \sqrt{pq}}{-2(a_{1}a_{3} + a_{2}a_{4})} \end{align} \end{subequations} where $s = a_{1}^{2} + a_{2}^{2} + a_{3}^{2} + a_{4}^{2}$, $p=(a_{2}+a_{3})^{2} + (a_{1}-a_{4})^{2}$, and $q=(a_{2}-a_{3})^{2} + (a_{1}+a_{4})^{2}$. For glitches the amplitude and phase can be extracted via \begin{subequations} \begin{align} A =& \sqrt{a_{1}^{2} - a_{2}^{2}} \\ \tan \bar{\phi} =& \frac{-a_{2}}{a_{1}} \,\,.
\end{align} \end{subequations} \subsection{Markov Chain Monte Carlo} In this study we wish to characterize what we can learn about a wavelet present in the data. To accomplish this we marginalize the posterior distribution via the Markov Chain Monte Carlo (MCMC) algorithm. Suppose we inject a signal into our data $\textbf{s}$. Upon choosing a model specified by the initial set of parameters $\vec{x}$ we generate a proposed set of parameters from a probability density $q(\vec{y}|\vec{x})$. The chance that we accept this new set of parameters $\vec{y}$ is given by the Hastings ratio \begin{equation} H = \min \bigg \lbrace 1, \frac{p(\textbf{s}|\vec{y},\mathcal{M}) p(\vec{y}|\mathcal{M}) q(\vec{x}|\vec{y})}{p(\textbf{s}|\vec{x},\mathcal{M}) p(\vec{x}|\mathcal{M}) q(\vec{y}|\vec{x})} \bigg \rbrace \,\,. \end{equation} The sequence of parameters we accept, called a chain, constitutes samples from the posterior distribution $p(\vec{\lambda}_{\tiny \mathcal{M}}|\textbf{s}, \mathcal{M})$. The MCMC we created used the F-statistic likelihood, extremizing the likelihood over the extrinsic parameters of the signal. This effectively reduces the search space of the MCMC, greatly improving its convergence, especially for the burst model, which otherwise converges slowly when the sky location is poorly constrained. For the MCMC developed in this study uniform priors were set for the parameter set $\lbrace \log{A}$, $f_{0}$, $t_{0}$, $\log{\tau}$, $\bar{\phi}$, $\cos\theta$, $\phi$, $\psi$, $\epsilon \rbrace$. To aid in the convergence of the MCMC we used a mixture of proposal distributions.
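The accept/reject step built on the Hastings ratio can be sketched as follows for a symmetric Gaussian proposal, in which case the proposal densities $q$ cancel (a toy illustration, not our production sampler; the prior is assumed folded into \texttt{log\_post}):

```python
import numpy as np

def mh_step(x, log_post, rng, step=0.5):
    """One Metropolis-Hastings update with a symmetric Gaussian proposal."""
    y = x + step * rng.standard_normal(np.shape(x))
    # log Hastings ratio; the symmetric proposal densities cancel
    if np.log(rng.uniform()) < log_post(y) - log_post(x):
        return y
    return x

def run_chain(log_post, x0, n, rng, step=0.5):
    """Collect n (correlated) samples starting from x0."""
    chain, x = [], x0
    for _ in range(n):
        x = mh_step(x, log_post, rng, step)
        chain.append(x)
    return np.array(chain)
```

Run on a one-dimensional standard normal target, the chain recovers the correct mean and variance to within sampling error.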
We utilized local Gaussian approximations to the posterior through the Fisher matrix (which approximates the inverse covariance matrix) \begin{equation} \Gamma_{ij} = \sum_{\mathrm{I}} \left(\textbf{h}_{\mathrm{I},i}|\textbf{h}_{\mathrm{I},j}\right) \,\,, \label{equ:Fisher} \end{equation} where $\textbf{h}_{I,i}$ represents the derivative of the waveform (in the $I^\mathrm{th}$ data channel) with respect to the $i^{\mathrm{th}}$ parameter $\lambda_{i}$. These derivatives were calculated numerically using finite differencing of the waveforms discussed in section~\ref{sec:model}. We occasionally used proposals from the prior distribution. Since we are not currently developing a detection algorithm, only an MCMC which characterizes the signal, we used a targeting distribution to help the MCMC find appropriate central frequencies $f_{0}$ and decay factors $\tau$. For $f_{0}$ and $\tau$ individually, this proposal consisted of a Gaussian distribution centered on the true parameter used to generate the injection. The width of the Gaussian was chosen based on the Fisher matrix estimate of the error in that parameter. To improve the acceptance rate of this proposal distribution the Gaussian was mixed with a 20\% by weight uniform distribution covering the prior range. Differential evolution~\cite{Braak2006} proposals were also used. Lastly, a time shift proposal was used for highly oscillatory wavelets, in which the central time of the wavelet was shifted forwards or backwards by the wavelet's period (with a corresponding shift in the initial phase). To further improve convergence, and to ensure a thorough exploration of parameter space---such as investigating the existence of secondary modes on the sky for bursts---parallel tempering~\cite{PhysRevLett.57.2607} was utilized. During parallel tempering multiple chains are simulated simultaneously at different temperatures, i.e.
their likelihoods are flattened $p(\textbf{s}|\vec{\lambda},\mathcal{M})^{\beta_{j}}$, where $\beta_{j} = 1/T_{j}$ is the inverse temperature of the $j^{\mathrm{th}}$ chain. The cold chain, i.e. $T_{0} = 1$, represents samples from the posterior distribution. The chains at various temperatures propose and accept new parameters just as before, but with the flattened likelihood. Occasionally, swaps of parameters between chains neighboring in temperature are proposed based on the probability \begin{equation} H_{\mathrm{PT}} = \min \bigg \lbrace 1, \frac{ p(\textbf{s} | \vec{\lambda}_{j}, \mathcal{M})^{\beta_{j+1}} p(\textbf{s} | \vec{\lambda}_{j+1}, \mathcal{M})^{\beta_{j}} }{p(\textbf{s} | \vec{\lambda}_{j}, \mathcal{M})^{\beta_{j}} p(\textbf{s} | \vec{\lambda}_{j+1}, \mathcal{M})^{\beta_{j+1}} }\bigg \rbrace \,\,. \end{equation} Parallel tempering vastly improves convergence once a proper selection of temperatures is made. The maximum temperature is chosen such that the hottest chain freely explores the parameters' prior volume, while not being so hot that its exploration of the prior space is redundant with that of the cooler chains. In section~\ref{sec:TI} we see how parallel tempering additionally aids us in determining whether a glitch (and which one) or a burst best explains the data. \section{Parameter Estimation }\label{sec:PE} \begin{figure*}[th] \centering \includegraphics[width=3.5in]{lowFreq_OP12_corner.pdf} \includegraphics[width=3.5in]{lowFreq_Burst_corner.pdf} \caption{\label{fig:lowFreq_OP12} Marginalized posteriors for the parameters $f_{0}$, $t_{0}$, and $\tau$ are displayed for an optical path glitch $\Phi_{12}^{\mathrm{op}}$ in the left panel and a gravitational wave burst in the right panel. The fully marginalized posteriors for $f_{0}$ and $\tau$ are similar for the two injections (while the joint posterior exhibits some correlation for the burst), but the central time posteriors differ significantly. The injected parameters are marked by the red lines.
} \end{figure*} The MCMC may now be used to address questions such as how well we can characterize the parameters of the signal, and how well we can recover the waveform itself. The central frequency $f_{0}$ and time damping factor $\tau$ are typically well determined for bursts and glitches. An example marginalized posterior for these parameters is seen in Figure~\ref{fig:lowFreq_OP12}. The left panel shows marginalized posterior distributions for an optical path glitch $\Phi_{12}^{\mathrm{op}}$ and the right panel shows marginalized posteriors for the same parameters for a burst. The injected signals both had a signal-to-noise ratio of 8. The signal-to-noise ratio (SNR) is given by \begin{equation} \rho^{2} = \sum_{I}(\textbf{h}_{I}|\textbf{h}_{I})\,\,. \end{equation} They shared the parameter values $f_{0} = 15$~mHz, $t_{0}=0.5 T$ (where $T$ is the observation period), and $\tau=53$~seconds (giving the wavelets a quality factor of 5.0). The burst injection had the following additional parameters: $\cos\theta=0.23$, $\phi=2.31$, $\psi = 0.45$, and $\epsilon = 0.5$. We see that the fully marginalized posterior distributions for the central frequency $f_{0}$ and $\tau$ are rather similar for these two injections. However, the posteriors for the central time $t_{0}$ differ significantly. One can show through a simple Fisher matrix calculation~\cite{0264-9381-32-13-135012} that the standard deviation in $t_{0}$ for a wavelet scales as $1/\rho\tau$. The injected glitch has a measured standard deviation of $53$~seconds while the Fisher matrix estimate is $70$~seconds, in reasonable agreement. The standard deviation in $t_{0}$ for the burst is $4.6$~minutes, much larger than that of the glitch, which must be attributed to the more complex response of a burst compared to a glitch. The reason for the increase in error associated with the central time of the burst can be seen in Figure~\ref{fig:t0_phi}.
\begin{figure}[th] \centering \includegraphics[scale=0.5]{t0_phi.pdf} \caption{\label{fig:t0_phi} The figure displays the joint posterior for the azimuthal angle $\phi$ and the wavelet's central time $t_{0}$ as well as their fully marginalized posteriors. The injected parameters are again marked by the red lines. There exists a very strong correlation between these two parameters.} \end{figure} There exists a substantial correlation between the azimuthal sky angle $\phi$ and $t_{0}$. Unless the central time is appropriately constrained we cannot determine the sky location, as is the case for this example burst. We can understand this by looking at the low frequency response to a GW burst signal. In this regime, the Michelson-equivalent $A$ and $E$ TDI channels are proportional to $\frac{f}{f_{*}}\tilde{\Psi} e^{-2\pi i f \hat{\textbf{k}}\cdot\textbf{x}_{1}}$ modulo overall constants that differ between the channels. The $T$ channel is null in this limit. We see that the sky angles enter the phasing through a time-shift factor multiplying the Fourier transform of the Gabor-Morlet wavelet. Since this factor is a time shift, the sky angles are almost perfectly degenerate with the central time of the wavelet. The likelihood is approximately constant under mappings of the azimuthal sky location and central time that keep fixed the combination $t_{0} - R \sin\theta\cos(2\pi f_{m} t_{0} - \phi) \,,$ where $f_{m} = 1/\mathrm{yr}$ is the orbital modulation frequency and $R = 1$~AU (in units where $c=1$). This relationship holds to leading order in the orbital eccentricity. Higher order corrections to the phasing incorporate additional information about the sky location in the form of the projected arm lengths $L_{ij} = L\left(1-\hat{\textbf{k}}\cdot\hat{\textbf{r}}_{ij}\right)$.
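The degenerate combination above can be illustrated numerically: shifting the azimuth $\phi$ and solving for the central time that keeps the combination fixed produces a distinct $(t_{0}, \phi)$ pair with essentially the same low-frequency phasing. The sketch below does this with a fixed-point iteration (a strong contraction, since $f_{m}$ is tiny); the specific $\theta$, $\phi$, and $t_{0}$ values are taken from the example injection, while the iteration scheme itself is just one convenient solver choice.

```python
import numpy as np

R = 499.0                      # light travel time for 1 AU, in seconds
F_M = 1.0 / 3.15581e7          # orbital modulation frequency, 1/yr in Hz

def invariant(t0, theta, phi):
    """Combination t0 - R sin(theta) cos(2 pi f_m t0 - phi) that the
    low-frequency likelihood approximately preserves."""
    return t0 - R * np.sin(theta) * np.cos(2 * np.pi * F_M * t0 - phi)

def shifted_t0(t0, theta, phi, phi_new):
    """Central time that keeps the invariant fixed when phi -> phi_new."""
    c = invariant(t0, theta, phi)
    t_new = t0
    for _ in range(50):        # fixed-point iteration; contraction since f_m is tiny
        t_new = c + R * np.sin(theta) * np.cos(2 * np.pi * F_M * t_new - phi_new)
    return t_new

theta, phi, t0 = np.arccos(0.23), 2.31, 8190.0   # example values from the text
t0_new = shifted_t0(t0, theta, phi, phi + 1.0)   # degenerate partner of (t0, phi)
```

The shifted central time differs from the original by a few hundred seconds, yet the invariant combination agrees to machine precision, which is why the joint $(t_{0},\phi)$ posterior shows the strong correlation seen in Figure~\ref{fig:t0_phi}.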
\begin{figure}[th] \centering \includegraphics[scale=0.4]{time_delays.pdf} \caption{\label{fig:tDelay} The posterior distributions for the quantities \\$1-\hat{\textbf{k}}\cdot\hat{\textbf{r}}_{ij}$ are displayed here, i.e. LISA's projected arm lengths for a high frequency gravitational wave burst, normalized by the arm length $L=2.5$~Gm. The red lines indicate the injected values for the GW burst signal.} \end{figure} In Figure~\ref{fig:tDelay} we see the posterior distributions for the projected arm lengths from a burst injection with the same sky location, polarization, ellipticity, and SNR as in the previous example, but now the central frequency is $50$~mHz and $\tau = 16$~seconds. This shorter envelope allows the central time to be measured and therefore the sky location to be better determined. The duration of the wavelet is more important in determining the sky location than the central frequency. In Figure~\ref{fig:sky} we see the posterior distribution for the sky location for two different high frequency bursts. Both bursts in Figure~\ref{fig:sky} have the same sky location, which is denoted by the blue dot in the sky map on the left. The central frequency of each source is $50$~mHz. The duration of the source shown on the left is $\tau=16$ seconds (i.e. the same burst used in Figure~\ref{fig:tDelay}) while the sky map on the right is for a source with $\tau=2.8$~minutes. We see that the origin of the burst on the sky has been localized for the short duration burst. This is due to the tight constraint on the central time of the wavelet. When the central time of the wavelet is measured to better than the light travel time between spacecraft we begin to be able to localize the source on the sky. The source shown on the right had a longer duration and a poorer constraint on the central time, and therefore a poorer constraint on the sky location. Interesting structure emerges in the sky posterior for this source.
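The localization criterion stated above is easy to quantify: with $L = 2.5$~Gm the one-way light travel time is about $8.3$ seconds, so a central-time posterior must be narrower than this before timing can triangulate the source. The sketch below compares that threshold to the two $\sigma_{t_0}$ values quoted earlier for the low-frequency injections (the dictionary labels are ours, purely for illustration).

```python
C_LIGHT = 299_792_458.0        # speed of light, m/s
L_ARM = 2.5e9                  # LISA arm length, metres

t_arm = L_ARM / C_LIGHT        # one-way light travel time, ~8.3 s

# Central-time uncertainties quoted in the text, in seconds:
sigma_t0 = {"low-f glitch": 53.0, "low-f burst": 4.6 * 60.0}
localizable = {k: v < t_arm for k, v in sigma_t0.items()}
```

Neither low-frequency example beats the $\sim 8$~s threshold, consistent with the poor sky localization seen there; the short $\tau = 16$~s, $50$~mHz burst is the case where the timing constraint becomes competitive.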
The most important factor in determining the sky location is the set of measured values for the projected arm lengths. The projected arm lengths can be related to the sky location via the relations \begin{widetext} \begin{subequations} \begin{align} \hat{\textbf{k}}\cdot\hat{\textbf{r}}_{12}(t) =& - \frac{1}{2}\cos\theta\left[\cos \alpha + \cos(\alpha + \frac{\pi}{3})\right] + \frac{1}{8 \sqrt{3}}\sin\theta \left[ 2 \cos(2\alpha - \phi) - 9 \cos\phi + 3\sqrt{3}\sin\phi + 2 \sin (\phi - 2 \alpha + \frac{\pi}{6})\right] \\ \hat{\textbf{k}}\cdot\hat{\textbf{r}}_{13}(t) =& -\frac{1}{2} \cos\theta\left[ \cos\alpha + \sin(\alpha + \frac{\pi}{6})\right] + \frac{\sqrt{3}}{24}\sin\theta\left[2\cos(2\alpha - \phi) -9 \cos\phi -3\sqrt{3}\sin\phi+ 2\sin(2\alpha - \phi + \frac{\pi}{6}) \right]\\ \hat{\textbf{k}}\cdot\hat{\textbf{r}}_{23}(t) =&-\frac{\sqrt{3}}{2}\cos\theta\sin\alpha + \sin\theta\left[ \sin(2\alpha - \phi + \frac{\pi}{6}) - 3\sqrt{3}\sin\phi + \sin(2\alpha - \phi - \frac{\pi}{6})\right] \,\,, \end{align} \label{eq:timeD} \end{subequations} \end{widetext} where $\alpha = 2 \pi f_{m} t$. When these values $\hat{\textbf{k}}\cdot\hat{\textbf{r}}_{ij}(t)$, which enter through the phase, are well measured, the sky location can be determined. In the right panel of Figure~\ref{fig:sky} we see the curves on the sky defined by equations~(\ref{eq:timeD}). The blue curve defines the sky locations which give the same time delay (and hence the same projected arm length) along the arm connecting spacecraft $1$ and $2$ as the true sky location of the injected burst. The red line displays the sky locations which maintain the same time delay between spacecraft $1$ and $3$ as the true sky location. The black line applies to the arm spanned by spacecraft $2$ and $3$. We see that these curves intersect at two specific sky locations, one of them coinciding with the true sky location by construction. The other intersection constitutes a second mode that the MCMC explored.
There exists one other mode, corresponding to a different central time, that also provides a good fit to the data. This secondary mode is shifted by a half period in time from the true value. In addition to the time delays, the sky localization is also impacted by the antenna patterns, which change the amplitude of the signal in each channel, but this is a weaker effect. \begin{figure*}[ht] \centering \includegraphics[width=3.5in]{sky.pdf} \includegraphics[width=3.5in]{skyTimeDelay.png} \caption{\label{fig:sky}These figures display the joint posteriors for the sky location for two bursts of the same central frequency. The burst shown on the left has a short duration ($\tau = 17$~seconds), while the burst shown on the right has a longer duration ($\tau = 2.8$~minutes). The blue dot on the left figure represents the true sky location, which is the same for both sources. Lines of constant projected arm length are shown in the sky map on the right: the blue line for spacecraft $1$ and $2$, the red line for $1$ and $3$, and the black line for $2$ and $3$. There are two sky locations which satisfy these constraints. There is an additional maximum away from the intersection of these lines that corresponds to a secondary mode with an overall half-period time shift.} \end{figure*} Lastly, we are also concerned with the accuracy of our waveform reconstruction, especially for future work in which signals will consist of a superposition of wavelets. Figure~\ref{fig:recOP12} shows an example optical path glitch $\Phi_{12}^{\tiny \mathrm{op}}$ with a signal-to-noise ratio of $8$, a central frequency of $15$~mHz, and $\tau = 2$~minutes ($Q=17.0$). The observation period was set to $4.55$ hours. The dot-dash black line denotes the signal corresponding to the injected parameters. The red lines denote waveforms for parameters sampled from the MCMC.
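A reconstruction band like the red traces in Figure~\ref{fig:recOP12} can be summarized with pointwise quantiles over posterior waveform draws. The sketch below is schematic: the "draws" are hypothetical stand-ins generated by jittering the injected parameters (the jitter scales are invented for illustration), where a real analysis would use the actual MCMC samples.

```python
import numpy as np

def wavelet(t, A, f0, t0, tau, phi0):
    """Assumed sine-Gaussian wavelet form."""
    return A * np.exp(-((t - t0) / tau) ** 2) * np.cos(2 * np.pi * f0 * (t - t0) + phi0)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 16380.0, 4096)        # 4.55 h observation period
true = dict(A=1.0, f0=15e-3, t0=8190.0, tau=120.0, phi0=0.0)

# Stand-in for MCMC draws: jitter the injected parameters (illustrative scales).
draws = [dict(true, t0=true["t0"] + rng.normal(0.0, 5.0),
              tau=true["tau"] * np.exp(rng.normal(0.0, 0.05))) for _ in range(200)]

wf = np.array([wavelet(t, **d) for d in draws])
lo, hi = np.percentile(wf, [5, 95], axis=0)  # pointwise 90% reconstruction band
```

The band is widest where the waveform is most sensitive to the sampled parameters and collapses to zero outside the envelope, mirroring the behavior described for Figure~\ref{fig:recOP12}.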
\begin{figure*}[th] \centering \includegraphics[width=1.0\textwidth]{recoveredGlitchOP12.png} \caption{\label{fig:recOP12} The left, center, and right panels display the $A E T$ TDI responses respectively for an optical path glitch injection denoted by the dot-dash black line. The red lines are MCMC samples of the waveform. The central frequency for this wavelet was $15$ mHz, and $\tau=2$~minutes.} \end{figure*} Where the amplitude of the wavelet is largest we see that the MCMC sampled wavelets hug the injected waveform more tightly; the errors in the wavelets are greater near the edges. \section{Model Selection }\label{sec:ModSel} We now investigate whether a glitch can be confused with a gravitational wave signal when identifying the origin of a feature in the data. Recall that glitches enter the data stream with time delays equal to the arm light travel times. Gravitational wave bursts, on the other hand, enter the data stream with delays equal to the \textit{projected} arm lengths $L(1-\hat{\textbf{k}}~\cdot~\hat{\textbf{r}}_{ij})$. This impacts the phasing of their response, which can be used to infer the origin of the disturbance. Additionally, there are important differences between GW bursts and instrument glitches in where they place power in the TDI channels. Gravitational wave signals whose frequency is below the transfer frequency have a greatly diminished response in the $T$ channel. Furthermore, the fact that gravitational waves are seen in at least two TDI channels, while glitches enter in only one or two, will be of great importance. In this section we first demonstrate through a simple argument that we do not expect to be confused between GW signals and glitches when all six laser links are operational. Later in this section we more rigorously demonstrate this conclusion through calculation of the Bayesian evidence.
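The power-distribution argument can be made concrete. Assuming one common convention for the noise-orthogonal channels, $A=(2X-Y-Z)/3$, $E=(Z-Y)/\sqrt{3}$, $T=(X+Y+Z)/3$ (normalizations differ in the literature), a disturbance confined to the $X$ channel necessarily deposits a fixed fraction of its power in $T$, whereas a low-frequency GW leaves $T$ nearly empty:

```python
import numpy as np

def aet(x, y, z):
    """One common convention for the noise-orthogonal TDI combinations."""
    a = (2 * x - y - z) / 3.0
    e = (z - y) / np.sqrt(3.0)
    t = (x + y + z) / 3.0
    return a, e, t

# A glitch that enters only the X channel (arbitrary waveform):
x = np.random.default_rng(1).normal(size=1024)
a, e, t = aet(x, np.zeros_like(x), np.zeros_like(x))

pw = np.array([np.sum(a**2), np.sum(e**2), np.sum(t**2)])
frac = pw / pw.sum()           # power fractions in A, E, T -> [0.8, 0.0, 0.2]
```

An $X$-only glitch thus carries $80\%$ of its power in $A$ and $20\%$ in $T$ regardless of its waveform, which is the source of the $2/3$ and $1/3$ factors in the overlap calculation below and of the discriminating power of the $T$ channel.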
We will also study whether it is the phasing of the response or the power distributed in the TDI data channels that provides the greatest leverage for separating GW bursts from instrument glitches. If one has access to the $A$, $E$, $T$ data channels it is easy to argue that we will almost never confuse a glitch for a signal. Consider the following: noise in the data streams will affect our ability to match the true signal. A measure of this match is the fitting factor (FF): the normalized noise-weighted inner product between the data and model waveform, maximized over all model parameters (a value of 1 indicates a perfect reconstruction of the signal). The noise in the data leads to statistical deviations in the fitting factor, even if the true parameters and model are used. The expected deviation from a fitting factor of 1 is described by~\cite{PhysRevLett.118.051101} \begin{equation} 1- FF = \frac{D-1}{2\rho^2} \,\,, \label{eq:FF_dev} \end{equation} where $D$ is the dimension of the model. Let us consider the scenario of using an acceleration glitch model, which crops up in the $X$ channel only, when the data actually contains a gravitational wave burst. In the low frequency limit, where we would expect to be most confused, the burst does not have significant power in the $T$ channel and also $A\sim h_{+}$, $E\sim h_{\times}$. Recall that in the frequency domain $\tilde{h}_{\times} = i \epsilon \tilde{h}_{+}$.
The overlap (normalized noise-weighted inner product) between the acceleration glitch and burst is \begin{align} &\frac{\sum_{I}(\textbf{s}_{I}|\textbf{h}_{I})}{\sqrt{\left(\sum_{I}(\textbf{s}_{I}|\textbf{s}_{I})\right)\left(\sum_{J}(\textbf{h}_{J}|\textbf{h}_{J})\right)}} = \nonumber \\ &~~~~~~~\frac{\frac{2}{3}(\textbf{A}_{B}|\textbf{X}_{\mathrm{ac}})}{\rho\sqrt{\left(\frac{2}{3}\right)^{2}(\textbf{X}_{\mathrm{ac}}|\textbf{X}_{\mathrm{ac}}) + \left(\frac{1}{3}\right)^{2}(\textbf{X}_{\mathrm{ac}}|\textbf{X}_{\mathrm{ac}})}} \,\,, \end{align} where $\rho$ is the SNR of the burst injection. This overlap is maximized if somehow the acceleration glitch conspires to be proportional to the burst's $A$ channel response, $\textbf{X}_{\mathrm{ac}} \propto \textbf{A}_{\mathrm{B}}$. Let us also assume that the $A$ channel response to the burst accounts for a fraction $x$ of the squared SNR, i.e. $(\textbf{A}_{\mathrm{B}}|\textbf{A}_{\mathrm{B}}) = x \rho^{2}$. We then find that the overlap simplifies to $2\sqrt{x/5}$. In the worst case scenario, where all of the burst's SNR is in the $A$ channel, the largest fitting factor that can be obtained is $0.89$---similar considerations for all other glitches demonstrate that this glitch is indeed the worst case in the regime under consideration. Are we to be concerned by a fitting factor this large? To answer this question we can use eqn.~(\ref{eq:FF_dev}) to understand the statistical error in the fitting factor. Inserting the value $0.89$ into this equation results in an SNR of $4.3$. We can loosely interpret this SNR as the largest burst SNR that could result in a confusion of the origin of the data (i.e. whether it was a glitch or signal). So we see that, under some very general assumptions, it is only when a burst is marginally detectable that we might confuse it for an instrument glitch.
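The two numbers in this argument can be reproduced directly. The sketch below evaluates the worst-case overlap $2\sqrt{x/5}$ and inverts $1-FF = (D-1)/(2\rho^{2})$ for the confusion SNR; the choice $D=5$ is our assumption for the wavelet model dimension, made here because it reproduces the quoted value of $4.3$.

```python
import numpy as np

def worst_case_overlap(x):
    """Max overlap of an X-channel glitch with a burst whose A channel
    carries a fraction x of the squared SNR (overlap = 2 sqrt(x/5))."""
    return 2.0 * np.sqrt(x / 5.0)

def confusion_snr(ff, dim):
    """SNR at which the noise scatter 1 - FF = (D-1)/(2 rho^2)
    matches the model mismatch 1 - ff."""
    return np.sqrt((dim - 1) / (2.0 * (1.0 - ff)))

ff = worst_case_overlap(1.0)        # ~0.894 when all burst SNR is in A
rho = confusion_snr(ff, dim=5)      # ~4.3 for an assumed D = 5 wavelet model
```

Any burst louder than this threshold produces a residual too large to be explained by noise scatter, so the glitch model is disfavored.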
\subsection{Bayesian Evidence} \label{sec:TI} To determine more rigorously which model best explains the data we must calculate the ratio of evidences $p(\textbf{s}|\mathcal{M})$ for two given models. This quantity is known as the Bayes factor \begin{equation} B_{ij} = \frac{p(\textbf{s}|\mathcal{M}_{i})}{p(\textbf{s}|\mathcal{M}_{j})} \,\,. \end{equation} In this subsection we calculate the Bayes factor for competing glitch and burst models for different injections, so that we can understand our ability to distinguish a signal's origin and determine when we cannot. We calculate the Bayes factor via thermodynamic integration~\cite{PhysRevD.80.063007}. Since we have utilized parallel tempering in our MCMC, we can calculate the average log likelihood $E_{\beta}\left[ \log p(\textbf{s}|\vec{\lambda},\mathcal{M})\right]$ for each temperature of the MCMC by simply computing the sample mean of the log likelihood values over the samples in each chain. With these in hand one may calculate the evidence for a model via the integral \begin{equation} \ln p(\textbf{s}|\mathcal{M}) = \int_{0}^{1} d\beta E_{\beta}\left[ \log p(\textbf{s}|\vec{\lambda},\mathcal{M})\right] \,\,. \end{equation} We perform the integral using the methods described in Refs.~\cite{PhysRevD.91.084034,0264-9381-32-13-135012}. The covariance matrix between the log likelihood values at each temperature is estimated and used to define a likelihood for the thermodynamic integration integrand~\cite{PhysRevD.91.084034}. The integrand is fit by a cubic spline whose control points and locations are marginalized over via a reversible jump MCMC~\cite{Green1995}. The MCMC gives us estimates for the evidence integral (upon integrating the cubic spline) and its associated error. \begin{figure}[th] \centering \includegraphics[scale=0.45]{Bayes_low_freq_Burst.pdf} \caption{\label{fig:BF_burst1} Bayes factors as a function of signal-to-noise ratio.
A burst was injected into the data stream with a central frequency of $15$~mHz and $\tau =2$~minutes. The purple and orange lines represent the Bayes factors when the $AET$ TDI channels were used. Note that the $AET$ lines lie on top of each other. The red and green lines represent the case when only the $X$ TDI channel was used. Triangle markers denote the GW burst Bayes factors vs $\Phi_{12}^{\mathrm{op}}$, while circle markers represent GW burst Bayes factors vs $\Phi_{21}^{\mathrm{ac}}$. With the $AET$ channels, confidence in the true model grows swiftly with SNR. The growth in the Bayes factor is much slower when only the $X$ channel is available.} \end{figure} Figure~\ref{fig:BF_burst1} shows the Bayes factor between the glitch and GW burst models for data containing a simulated burst. We use the notation $\mathcal{B}_{A,B} = p(\textbf{s}|A)/p(\textbf{s}|B)$ to represent the Bayes factor in the figure legends. Additionally, the label $B$ is used to denote the burst model. The burst injection has the same values for $\theta$, $\phi$, $\psi$, and $\epsilon$ as the burst discussed in section~\ref{sec:PE}. The other important parameters are the central frequency, set to $15$~mHz, and $\tau$, set to $2$~minutes (giving a quality factor of $17.0$). The orange and purple lines denote the Bayes factor when the $AET$ TDI channels are used and the red and green lines show the Bayes factor when only the $X$ channel was used. The lines marked by upside-down triangles represent the model comparison between the burst model and an optical path glitch between spacecraft $1$ and $2$. The lines marked with circles represent the Bayes factor between the burst model and an acceleration glitch between the same two spacecraft. The dashed green line represents a Bayes factor of 1, {\it i.e.} no preference between the two competing models. A Bayes factor between $3$ and $20$ indicates positive evidence~\cite{Romano2017} for the true model.
The evidence for the correct model is strong if the Bayes factor lies in the range $20-150$, and considered very strong if the Bayes factor is greater than 150. These regions are denoted by the dashed horizontal black lines in figures~\ref{fig:BF_burst1} and~\ref{fig:BF_burst2}. With the $AET$ channel combination we see that the Bayes factor grows rapidly with signal-to-noise ratio, and for SNRs greater than 5 we are confident that the signal is astrophysical. This supports our argument that GW bursts and glitches are easily separated when we have the full collection of TDI channels. With just the $X$ channel the prospects are not as good, and it is not until the signal reaches SNR 10 that it can be confidently distinguished from a glitch. In this low frequency regime we find that a burst injection recovered with an optical path glitch model yields biased central frequency and damping time scales. In Figure~\ref{fig:BF_burst2} we see the Bayes factors for a high frequency burst injection, where $f_{0} = 50$~mHz and $\tau=16.9$~seconds ($Q=5.0$). We see that it is much easier to differentiate a gravitational wave burst from an acceleration glitch. Our ability to distinguish this burst from the optical path glitch $\Phi_{12}^{\mathrm{op}}$ is not enhanced as much, but is still improved. \begin{figure}[th] \centering \includegraphics[scale=0.45]{Bayes_high_freq_Burst.pdf} \caption{\label{fig:BF_burst2} Bayes factors as a function of signal-to-noise ratio for a high frequency burst injection. The lines are labeled according to the same scheme as in Figure~\ref{fig:BF_burst1}. The GW burst is much easier to separate from an instrument glitch in this case.} \end{figure} Lastly, we wish to determine how well we can differentiate models for glitch injections.
For low frequency injections the story is similar: with the full $AET$ data stream we are able to differentiate a glitch, both acceleration and optical path, through the distribution of power in the different data channels. When only the $X$ channel is available, discrimination once again becomes challenging until the SNR becomes large. In Figure~\ref{fig:glitchBayes} Bayes factors are displayed for high frequency glitch injections. The central frequency of the glitches was $50$~mHz and $\tau=11$~seconds ($Q = 3.5)$. The blue lines represent Bayes factors for an optical path glitch injection $\Phi_{12}^{\mathrm{op}}$ and the red lines an acceleration glitch injection $\Phi_{21}^{\mathrm{ac}}$. We see that in this high frequency regime there will be little issue in discriminating the origin of the signal. This figure also suggests, as seen before, that it may be more challenging to discriminate this optical path glitch from a burst. \begin{figure}[th] \centering \includegraphics[scale=0.45]{Bayes_OP12lo_AC21hi.pdf} \caption{\label{fig:glitchBayes} This figure displays Bayes factors for high frequency glitch injections. The blue lines denote Bayes factors for an optical path glitch $\Phi_{12}^{\mathrm{op}}$ injection and the red lines denote a $\Phi_{21}^{\mathrm{ac}}$ injection. } \end{figure} \section{Discussion }\label{sec:discuss} To realize the full discovery potential of the LISA observatory we need to be in a position to detect unexpected and unknown signals. We have developed a forward model for a wavelet basis to represent instrumental glitches and gravitational wave bursts, as a first step towards this goal. Ideally, to separate unmodeled signals from noise, we would have multiple independent LISA observatories. We have shown that the separation is possible with a single LISA detector, and even with a single TDI data channel, though the performance is much better when all three TDI channels are available.
The properties of the signals and glitches can be recovered with good accuracy, though degeneracies in some parameters can degrade sky localization. There are several extensions that will need to be made to handle generic glitches and signals. In our analysis we assumed that the Gaussian noise levels were not only equal, but also known. In reality the power spectral density of the noise in each component will have to be estimated from the data, as was done in Refs.~\cite{PhysRevD.82.022002,PhysRevD.89.022001}. We will also need to generalize the analysis to model non-stationary noise, a complication we know LISA will experience owing at least in part to the significant contribution to the noise from unresolved galactic binaries~\cite{1742-6596-840-1-012024}. Our analysis in this paper took a quasi-Bayesian approach via the maximization of the likelihood over extrinsic parameters through the F-statistic. In the future a full marginalization will have to be done, though the F-statistic could be used to produce very effective proposal distributions for the MCMC based on maps of the F-statistic likelihood. For gravitational wave bursts we will generalize beyond elliptical polarization. One last crucial extension is the use of multiple wavelets in the analysis~\cite{0264-9381-32-13-135012}. Not only will we need to characterize multiple wavelets, but we will also need to marginalize over the number of wavelets in the data stream. Due to the sheer number of combinations of wavelets we expect in the data stream, we anticipate the need for an efficient reversible jump MCMC~\cite{Green1995} implementation to determine an appropriate number of wavelets and the evidence that a GW signal or an instrument glitch is present in the data. There may be additional information gathered by LISA in the form of instrument monitors.
These could provide crucial information in characterizing glitches and assessing whether a glitch has indeed occurred. \section*{Acknowledgments} TR and NJC appreciate the support of the NASA grant NNX16AB98G.
\section{Introduction} \input{introduction} \section{Family of Laplace--Beltrami operators on the embedded surface} \label{sec:general} \input{general} \section{Canonical stochastic process on the embedded surface} \label{sec:stochastic} \input{stochastic} \section{Stochastic processes on quadric surfaces in the Heisenberg group} \label{sec:heisenberg} \input{heisenberg} \subsection{Paraboloid of revolution} \label{sec:para} \input{paraboloid} \subsection{Ellipsoid of revolution} \label{sec:sph} \input{spheroid} \subsection{Hyperbolic paraboloid} \label{sec:hyperpara} \input{hyperpara} \section{Stochastic processes on canonical surfaces in \texorpdfstring{${\rm SU}(2)$}{SU(2)} and \texorpdfstring{${\rm SL}(2,\R)$}{SL(2,R)}} \label{sec:model} \input{model} \subsection{Special unitary group \texorpdfstring{${\rm SU}(2)$}{SU(2)}} \label{sec:SU} \input{unitary} \subsection{Special linear group \texorpdfstring{${\rm SL}(2,\R)$}{SL(2,R)}} \label{sec:SL} \input{linear} \subsection{A unified viewpoint} \label{s:unif} \input{unified} \bibliographystyle{plain}
\section{Background} Prior to Hyperledger Fabric, all blockchain platforms, permissioned or permissionless, followed the \emph{order-execute} pattern: network participants use a consensus protocol to order transactions, and only once the order is decided are all transactions executed sequentially, thus essentially implementing active state machine replication~\cite{SMR}. The \emph{order-execute} approach poses a set of limitations; the fact that transactions have to be executed sequentially leads to throughput degradation and becomes a bottleneck. Another important issue, which also stems from the deficiency of the \emph{order-execute} model, is the possible non-deterministic outcome of transactions. Active state machine replication implies that transaction results have to be deterministic, because the execution phase follows the consensus-ordering stage; non-determinism would lead to state ``forks''. Most current blockchains implement a domain-specific language to overcome the problem of non-determinism. \begin{figure} \centering \includegraphics[width=\columnwidth]{fabric_overview} \caption{High level structure of a Hyperledger Fabric blockchain network. It includes three organizations \textbf{OrgA}, \textbf{OrgB} and \textbf{OrgC}, including three, two and three peers respectively; the chaincode SampleCC with an endorsement policy which requires the signature of at least one peer from each organization; and the ordering service, which is responsible for the total order of transactions.} \label{fig:overview} \end{figure} Hyperledger Fabric provides a modular architecture and introduces a novel \emph{execute-order-validate} approach to address the limitations mentioned in the previous paragraph.
A distributed application in Hyperledger Fabric is composed of two main parts: \begin{enumerate} \item \textbf{Chaincode} - business logic implemented in a general purpose programming language (Java, Go, NodeJS) and invoked during the \emph{execution} phase. The chaincode is a synonym for the well known concept of \emph{smart contracts} and is a core element of Hyperledger Fabric which is executed in a distributed fashion. \item \textbf{Endorsement policies} - rules which specify the correct set of peers responsible for the execution and approval of a given chaincode. Such peers, called \emph{endorsing peers}, govern the validity of the chaincode execution results by providing a signature over those results. Endorsement policies are defined with logical expressions such as: $Org1\vee (Org2\wedge Org3)$ \end{enumerate} The Hyperledger Fabric blockchain network is formed by nodes which can be classified into three categories based on their roles: \begin{enumerate} \item \textbf{Clients} - network nodes running the application code, which coordinate transaction execution \item \textbf{Peers} - maintain a record of transactions within an append-only ledger and are responsible for the execution of the chaincode and its lifecycle. To allow load balancing, not all peers are responsible for execution of the chaincode, but only a subset of peers called \emph{endorsing peers} \item \textbf{Ordering nodes} - a cluster of replica nodes which exposes an abstraction of atomic broadcast to establish a total order between all transactions within Hyperledger Fabric. Ordering nodes are completely oblivious to the application state and do not take any part in transaction validation or execution. \end{enumerate} To provide finer grained privacy and confidentiality, Hyperledger Fabric introduces the concept of \emph{channels}, a high level abstraction which essentially represents a separate blockchain network.
Each channel can contain a different, or even disjoint, set of peers, segregating application state and allowing greater privacy control by partitioning data across different channels. \subsection{Transaction execution flow}\label{tx_execution} \begin{figure} \centering \includegraphics[scale=0.25]{flow} \caption{Hyperledger Fabric - high level transaction flow.} \label{fig:flow} \end{figure} The following summarises the execution flow of a transaction submitted by a client into Hyperledger Fabric, depicted in Fig.~\ref{fig:flow}: \begin{enumerate} \item The client uses the SDK to form a \emph{transaction proposal}, which includes: the channel name, the name of the chaincode to invoke, and the input parameters for the chaincode to be executed. Next, the client sends the transaction proposal to all endorsing peers needed to satisfy the endorsement policy of the given chaincode. \item Endorsing peers simulate the transaction based on the parameters received from the client, interacting with the chaincode to record state updates and produce output in the form of a read-write set, then sign the read-write set and return the results to the client. \item The client collects responses from all endorsing peers, validates that the results are consistent, i.e. that all endorsing peers have signed the same payload, and then concatenates all signatures of the endorsing peers along with the read-write sets, creating a transaction which is submitted to the ordering service. \item The ordering service collects all incoming transactions, orders them to impose a total order of transactions within the channel context, and periodically cuts blocks which include those ordered transactions. \item Dedicated peers of each organization pull new blocks from the ordering service and disseminate them using scalable middleware for ledger replication, whose implementation is based on an epidemic diffusion protocol - gossip~\cite{gossip}.
\item Each peer, upon receiving a new block, iterates over its transactions to a) validate the endorsement policy, i.e. whether the set of endorsing peers' signatures satisfies the endorsement policy associated with the chaincode; and b) perform multi-value concurrency control checks. \item Once the transaction validation has finished, the peer appends the block to the ledger and updates its state based on the valid transactions. After the block is committed, the peer emits events to update the clients connected to it. \end{enumerate} \section{Endorsement Policies} An endorsement policy is represented in a structure that is called a ``signature policy''. Before we explain what a signature policy means, let us describe how endorsement policies are validated. In Hyperledger Fabric, policy evaluation can be thought of as a function: \[ f \colon \mathcal{P}(Identity)\times \mathcal{P}(Signature) \rightarrow \{0, 1\} \] Informally, the function takes as input a set of identities and a set of signatures, and either accepts or rejects. The computation of the policy evaluation is comprised of two parts: \begin{enumerate} \item The policy evaluation determines whether the signatures correspond to identities that are privileged to endorse the transaction. The privilege to endorse the transaction is denoted as a set of principals. Each identity can satisfy at least some principal, and usually identities of peers satisfy at most one principal, which is their organisational association. There can be several sets of principals, each of which on its own is deemed sufficient to make the transaction accepted by the policy evaluation. \item The signatures are verified with their corresponding public keys found in the identities.
\end{enumerate} Therefore, the endorsement policy itself (what step (1) evaluates) can be described as a set of sets of principals, such that if the endorsement contains signatures that are validated under identities which satisfy some set of principals, the endorsement policy accepts the signatures and identities, and otherwise rejects. An endorsement policy is comprised of two objects: \begin{itemize} \item $P$ - An array of principals \item $T$ - A tree that represents the principal sets, at least one of which the policy expects to be satisfied by the identities. The tree has two types of vertices: \begin{itemize} \item Inner vertices (non-leaves) are quantifiers (NOutOf). Each such vertex specifies how many of its direct descendants should be satisfied for itself to be satisfied. \item Leaf vertices are pointers into the array of principals. \end{itemize} \end{itemize} Even though quantifiers can denote any number from 1 to the out-degree of a vertex, two common use cases are 1OutOf and NOutOf (with $N$ equal to the out-degree), which represent OR and AND respectively. \subsection{Endorsement policy evaluation for service discovery} In the discovery service, an endorsement policy is evaluated to produce combinations of principals such that every combination satisfies the endorsement policy on its own. To compute the combinations of principals, the policy tree $T$ is traversed and sub-trees of $T$ are computed such that the leaf level in every such sub-tree is a combination of principals that satisfies the policy. This is done by computing, for each inner vertex with an \emph{NOutOf} quantifier of $n$, all combinations of its direct descendants of size $n$. Afterwards, the tree $T$ is traversed in BFS order and for each inner vertex the sub-tree is duplicated to accommodate all such combinations, until all the vertices are visited.
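The enumeration of satisfying principal combinations can be sketched recursively. The code below is a simplified illustration of the idea, not Fabric's actual implementation (which duplicates sub-trees as described above): a node is either a principal name or a hypothetical `(n, children)` tuple standing for an NOutOf quantifier, and each returned multiset of principals satisfies the policy on its own.

```python
from collections import Counter
from itertools import combinations

def layouts(node):
    """Return all principal multisets (as Counters) satisfying the policy.

    A node is either a principal name (leaf) or a tuple (n, [children])
    meaning 'n out of these children must be satisfied' (NOutOf).
    """
    if isinstance(node, str):                  # leaf: a single principal
        return [Counter([node])]
    n, children = node
    out = []
    for subset in combinations(children, n):   # choose which n children to satisfy
        partial = [Counter()]
        for child in subset:                   # combine every way of satisfying them
            partial = [p + q for p in partial for q in layouts(child)]
        out.extend(partial)
    return out

# Org1 OR (Org2 AND Org3), as in the example policy:
policy = (1, ["Org1", (2, ["Org2", "Org3"])])
result = layouts(policy)   # two layouts: {Org1: 1} or {Org2: 1, Org3: 1}
```

Using Counters naturally captures the pluralities mentioned below, e.g. a policy requiring two signatures from the same organisation yields a layout with count 2 for that principal.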
Given these sub-trees, each of whose leaf sets consists of principals that satisfy the endorsement policy, the leaves of each sub-tree are aggregated into a histogram that maps each principal to its multiplicity among the leaves of the sub-tree. Each such mapping is a combination of principals and is called a ``layout''. The multiplicities are used for endorsement policies that require multiple signatures under the same principal, e.g., three different identities that correspond to organisation A need to sign. Aside from layouts, the discovery service computes a mapping from principals to peers. This is done by creating a bipartite graph $G=(V_1\cup V_2, E)$: \[ V_1 := \{ principals \in T\}, V_2 := \{ peers \}, E := \{(u, v) \mid v \text{ satisfies } u\} \] Finally, for each principal we take all the peers connected to it to produce a mapping from principals to sets of peers. This information is then packaged into an object called an EndorsementDescriptor, which contains information about which peers can be asked for endorsements so that the endorsement policy is fulfilled. It contains two objects: \begin{itemize} \item Endorsers by groups: maps principals to sets of peers \item Layouts: mappings of principals to their needed quantities. \end{itemize} Given an EndorsementDescriptor, clients can compute which peers to ask for endorsements using the following algorithm: \begin{enumerate} \item Let $e: G \rightarrow P$ be the ``endorsers by groups'' field that maps a group to a set of peers. \emph{Note that applying $e$ to a group $g$ yields a set of peers.} \item Select a layout $l: G \rightarrow N$ out of the layouts given. $l$ maps each group of principals to an integer representing the number of peers to select from that group in order for the endorsement policy to be satisfied under the selected layout $l$.
\item $R := \emptyset$ \item For each group $g \in l$, compute $n=l(g)$ \begin{enumerate} \item Denote by $P_g$ a set of $n$ random peers $\{p_1, p_2, \dots p_n\}$ selected from $e(g)$. \item $R := R \cup P_g$ \end{enumerate} \item The set $R$ contains the peers from which the client needs to request endorsements. \end{enumerate} Note: this selection does not take into account network-level information that would allow a more appropriate selection of peers, such as preferring peers with higher ledger heights over other peers, or excluding peers that the client has found to be offline. \section{Introduction} Blockchain technology is gaining significant traction, becoming one of the most appealing and intriguing areas of interest for both research communities and industrial parties. The popularity of blockchain technologies stems from their huge potential for developing a wide range of distributed applications, allowing safe collaboration between mutually distrusting parties without the use of a central trusted authority. A blockchain can be viewed as an append-only immutable data structure: a distributed \emph{ledger} that maintains transaction records between distrusting parties. The transactions are usually grouped into blocks. Every party involved in the blockchain network takes part in a consensus protocol to validate transactions and agree on an order between blocks, consequently building a hash chain over these blocks. This process forms a ledger of ordered transactions and is crucial for consistency and integrity. Each party is responsible for maintaining its own copy of the distributed ledger without assuming trust in anyone else. Therefore, blockchain protocols exhibit traits that achieve some of the properties of Byzantine fault tolerance. Much of the increasing enthusiasm around Bitcoin~\cite{bitcoin} is attributed to blockchain as a promising technology to run trusted exchanges in the digital world.
Bitcoin operates in public, where anyone can join or leave the blockchain network, and no one is required to reveal their real identity. Such blockchain systems are known as public or permissionless blockchains. Public blockchains inherently involve the notion of a native cryptocurrency and are mostly based on the \emph{proof-of-work} consensus protocol to compensate for the lack of identity and the open group model. The \emph{proof-of-work} consensus protocol has several salient disadvantages: (1) a huge computational cost, which manifests in prohibitive power consumption; (2) the probabilistic nature of transaction confirmation, leading to large confirmation latency; and (3) low transaction throughput. These factors make public blockchains unsuitable for enterprise-grade applications. Therefore, growing interest from industry triggered the development of new blockchain platforms designed for permissioned settings, where the blockchain protocol runs among a set of known, authenticated participants. This is a natural evolution to address requirements posed by business applications running blockchain among a set of identifiable participants that do not fully trust each other. It is possible to embed business rules into a Turing-complete programmable transaction logic, to be executed by the blockchain in the form of a \emph{smart contract}, as introduced by Ethereum~\cite{Ethereum}. The Bitcoin script was a predecessor of this concept, allowing the transfer of native crypto-coins (bitcoins) from one owner to another. A smart contract provides an abstraction that resembles the functionality of a \emph{trusted distributed application}, leveraging the underlying blockchain facilities to gain security and consistency guarantees. Both Bitcoin scripts and Ethereum smart contracts resemble a replicated state machine~\cite{SMR}, a well-known technique to build resilient distributed applications.
Many permissioned blockchains use the same paradigm: they order the transactions and then execute them on all peers. This is known as the \emph{order-execute} architecture, which cannot tolerate non-deterministic smart contracts and forces sequential execution of transactions, severely limiting performance~\cite{fabric}. Hyperledger Fabric~\cite{fabric} (HLF) is an open source project, released to the Linux Foundation\footnote{www.linuxfoundation.org}. It introduces a new architecture for enterprise-grade permissioned blockchain platforms following the novel paradigm of \emph{execute-order-validate} for distributed execution of smart contracts (\emph{chaincode} in HLF). In contrast to the \emph{order-execute} paradigm, in HLF transactions are first \emph{executed} (endorsed) by a \emph{subset} of peers. Transactions (with their results) are then grouped into blocks and \emph{ordered}, and finally a \emph{validation} phase makes sure that transactions were properly endorsed and are not in conflict with other transactions. This architecture allows multiple transactions to be executed in parallel by disjoint subsets of peers, increasing throughput, and tolerates non-deterministic chaincode. Invalid transactions are dropped in the validation phase. The \emph{endorsement policy} is the set of rules that determines which subset of peers should execute a transaction, and what constitutes a valid execution. In a sense, HLF benefits from the combination of two well-known approaches to replication, passive and active~\cite{budhiraja1993primary, charron2010replication}. Blockchain applications are typically comprised of two tiers: the first, called the ``platform tier'', focuses on the modelling of the data schema and the embedding of business rules into the blockchain by means of \emph{chaincode} and \emph{endorsement policies}. The second, called the ``client tier'', uses the SDK (Software Development Kit) provided by HLF to implement client-side application logic.
However, there is a gap between the two tiers that hinders the rapid adoption of changes in the platform tier within the client tier. Currently\footnote{www.hyperledger.org}, the chaincode identifier and location, as well as the endorsement policies, are statically configured into the HLF client. That is, the client is statically configured with the addresses of the peers that need to execute and endorse a transaction proposal. This limits the reliability and availability of the client in the event of changes in the platform: whenever the endorsement policy changes, a peer is added or removed, or the chaincode evolves, the client needs to be reconfigured. Moreover, the configuration is complicated and technical, which makes the platform more difficult to use. In this work we describe the design and implementation of the \emph{Service Discovery} component, which extends the architecture and capabilities of HLF, increasing the availability and resiliency of client-side applications. Service Discovery provides APIs that allow the client application to dynamically discover the configuration details of the endorsement policies and chaincode it needs to use. It therefore relieves the client application developer of the burden of painstakingly reconfiguring the client every time these change. Service Discovery leverages the membership and gossip capabilities of the HLF replication layer~\cite{gossip} to gather and disseminate the information needed to implement these APIs. The rest of the paper briefly describes the internal structure of HLF, outlines endorsement policies, and finally presents the design and implementation of the new service discovery component. \section{Service Discovery} In order to execute chaincode on peers, submit transactions to orderers, and be updated about the status of transactions, applications connect to an API exposed by an SDK, as outlined in section~\ref{tx_execution}.
However, the SDK needs a lot of information in order to allow applications to connect to the relevant network nodes. In addition to the enrollment CA and TLS CA certificates of the orderers and peers on the channel, as well as their IP addresses and port numbers, it must know the relevant endorsement policies along with which peers have the chaincode installed on them (so the application knows which peers to send chaincode proposals to). In previous versions of Hyperledger Fabric, this information was statically encoded. However, such a static configuration cannot react to network changes (such as the addition of peers that have installed the relevant chaincode, or peers that are temporarily offline). Static configurations also do not allow applications to react to changes in the endorsement policy itself (as might happen when a new organization joins a channel). Furthermore, the client application has no way of knowing which peers have updated ledgers and which do not, so it might submit proposals to peers whose ledger data is not in sync with the rest of the network, resulting in transactions being invalidated upon commit. This is a waste of both time and resources. The \emph{discovery service} improves this process by having the peers compute the needed information dynamically and present it to the SDK in a consumable manner. \subsection{How service discovery works in Fabric} The application is bootstrapped knowing about a group of peers which are trusted by the application developer or administrator to provide authentic responses to discovery queries. A good candidate peer to be used by the client application is one in the same organization. The application issues a configuration query to the discovery service and obtains all the static information it would otherwise have needed to communicate with the rest of the nodes of the network. This information can be refreshed at any point by sending a subsequent query to the discovery service of a peer.
The service runs on peers -- not on the application -- and uses the network metadata maintained by the gossip~\cite{gossip} communication layer to render the list of peers that are online. It also fetches information, such as the relevant endorsement policies, from the peer's state database. With service discovery, applications no longer need to specify which peers they need endorsements from. The SDK can simply send a query to the discovery service asking which peers are needed given a channel and a chaincode ID. The discovery service can respond to the following queries: \begin{itemize} \item \textbf{Configuration query} returns the configuration required for initialization, namely the CA certificates of all organizations in the channel, along with the orderer endpoints of the channel. \item \textbf{Peer membership query} returns the peers that have joined the channel. \item \textbf{Endorsement query} returns an endorsement descriptor for the given chaincode(s). The descriptor allows easy selection of a set of peers such that, if endorsements are obtained from that set, the endorsement policy is satisfied. \item \textbf{Local peer membership query} returns the local membership information of the peer that responds to the query. \end{itemize}
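As a concrete illustration of how a client consumes the response to an endorsement query, the following sketch implements the peer-selection algorithm from the Endorsement Policies section. The field names mirror the EndorsementDescriptor informally and are illustrative; the real HLF SDKs differ:

```python
import random

def select_endorsers(descriptor, rng=random):
    """Pick a set of peers satisfying the endorsement policy, per the
    client-side algorithm of the Endorsement Policies section.
    `descriptor` mirrors an EndorsementDescriptor (names illustrative):
    'endorsers_by_groups' maps each group to its peers, and each layout
    maps a group to the number of peers required from it."""
    by_group = descriptor["endorsers_by_groups"]
    for layout in descriptor["layouts"]:
        # A layout is usable only if every group has enough known peers.
        if all(len(by_group.get(g, [])) >= n for g, n in layout.items()):
            chosen = set()
            for g, n in layout.items():
                chosen.update(rng.sample(by_group[g], n))  # n random peers of g
            return chosen
    raise ValueError("no layout can be satisfied by the known peers")

desc = {
    "endorsers_by_groups": {"OrgA": ["peerA0", "peerA1"], "OrgB": ["peerB0"]},
    "layouts": [{"OrgA": 1, "OrgB": 1}],
}
print(select_endorsers(desc))  # one random OrgA peer plus peerB0
```

As noted earlier, such a selection is oblivious to network-level information; a production client would additionally prefer peers with higher ledger heights and skip peers known to be offline.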
\section{Introduction} \subsection{Summary} A {\em polygonal chain\/} $P=(v_0,v_1,\ldots,v_n)$ is a sequence of consecutively joined segments $s_i =v_iv_{i+1}$ of fixed lengths $\ell_i = |s_i|$, embedded in space. A chain is {\em closed\/} if the line segments are joined in cyclic fashion, i.e., if $v_n=v_0$; otherwise, it is {\em open}. A {\em polygonal tree\/} is a collection of segments joined into a tree structure. A chain or tree is {\em simple\/} if only adjacent edges intersect, and only then at the endpoint they share. We study reconfigurations of simple polygonal chains and trees, continuous motions that preserve the lengths of all edges while maintaining simplicity. One basic goal is to determine whether an open chain can be {\em straightened\/}---stretched out in a straight line---and whether a closed chain can be {\em convexified\/}---reconfigured to a planar convex polygon. For trees, straightening permits noncrossing violations of simplicity to allow the segments to align along the common straight line. If an open chain or tree cannot be straightened, or a closed chain convexified, it is called {\em locked}. This terminology is borrowed from~\cite{bddlloorstw-lupc3d-99} and~\cite{bddloorsw-orfltcl-98}.\footnote{ Straightening for trees is never defined in~\cite{bddloorsw-orfltcl-98}. Instead they rely on mutually unreachable simple configurations. } Most of the work in this area was fueled by the longstanding open problem of determining whether every open (or closed) chain in 2D can be straightened (or convexified). This was recently settled~\cite{cdr-epcbu-00} in the affirmative: 2D chains cannot lock. In contrast, it was earlier established that trees in 2D~\cite{bddloorsw-orfltcl-98}, and both open and closed chains in 3D~\cite{cj-nepiu-98,bddlloorstw-lupc3d-99}, can lock. In this paper we prove that, for all dimensions $d \ge 4$, neither chains (open or closed) nor trees can lock.
We partition our results into four main theorems: \begin{theorem} Every simple open chain in 4D may be straightened, by an algorithm that runs in $O(n^2)$ time and $O(n)$ space, and which accomplishes the straightening in $O(n)$ moves. \theolab{open.4D} \end{theorem} Here ``move'' is used in the sense defined in~\cite{bddlloorstw-lupc3d-99}.\footnote{ ``During each move, a (small) constant number of individual joint moves occur, where for each a vertex $v_{i+1}$ rotates monotonically about an axis through joint $v_i$, with the axis of rotation fixed in a reference frame attached to some edges.'' } Essentially each move is a simple monotonic rotation of a few joints. We have implemented this algorithm for the case when the vertices are in general position, in which case it is straightforward. Nearly the same algorithm proves the same result for trees, within the same bounds: \begin{theorem} Every simple tree in 4D may be straightened, by an algorithm that runs in $O(n^2)$ time and $O(n)$ space, and which accomplishes the straightening in $O(n)$ moves. \theolab{tree.4D} \end{theorem} \noindent Closed chains require more effort: \begin{theorem} Every simple closed chain in 4D may be convexified, by an algorithm that runs in $O(n^6 \log n)$ time, and which accomplishes the convexification in $O(n^6)$ moves. \theolab{closed.4D} \end{theorem} \noindent All these results easily extend to higher dimensions: \begin{theorem} Theorems~\theoref{open.4D}, \theoref{tree.4D}, and~\theoref{closed.4D} hold for all dimensions $d \ge 4$, i.e., neither polygonal chains nor trees can lock in dimensions greater than three. \theolab{d.ge.4} \end{theorem} We summarize our results in the context of earlier work in the table below.
\begin{center}\begin{tabular}{| c | c | c | } \hline Dimension & Chains & Trees \\ \hline \hline $2$ & Cannot lock & Lockable \\ \hline $3$ & Lockable & Lockable \\ \hline $d \ge 4$ & {\em Cannot lock\/} & {\em Cannot lock\/} \\ \hline \end{tabular} \end{center} \subsection{Background} \seclab{Background} Before commencing with our technical arguments, we start with some background, with the intent of providing intuition to support our results. \paragraph{No Knots in 4D.} In~\cite{cj-nepiu-98} and~\cite{bddlloorstw-lupc3d-99}, the same example of a locked open chain in 3D is provided. The version in the latter paper is shown in Fig.~\figref{knitting}. \begin{figure}[htbp] \centering \includegraphics[width=10cm]{knitting.eps} \caption{The ``knitting needles'' example, based on Fig.~1 in \protect\cite{bddlloorstw-lupc3d-99} (by permission). } \figlab{knitting} \end{figure} One proof (used in~\cite{bddlloorstw-lupc3d-99}) that this chain $K$ is locked depends on closing the chain by connecting $v_0$ to $v_5$ to form $K'$, and then arguing that $K$ can be straightened iff the corresponding trefoil knot $K'$ can be unknotted, which of course it cannot. Thus there is a close connection in 3D between unknotted, locked chains and knots. However, the following theorem is well known: \begin{theorem} No 1D closed, tame,\footnote{ A curve is {\em tame\/} if it is topologically equivalent to a polygonal curve~\cite[p.5]{cf-ikt-65}. Any curve that is continuously differentiable, i.e., in class $C^1$, is tame. } non-self-intersecting curve $C$ is knotted in $\R^4$. \end{theorem} See, e.g.,~\cite[pp.270-1]{a-kb-94} for an informal proof. Because proofs of this theorem employ topological deformations, it seems they are not easily modified to help settle our questions about chains in 4D. The rigidity of the links prevents any easy translation of the knot proof technique to polygonal chains. 
However, it does suggest that it would be difficult to construct a locked chain by extending the methods used in 3D. \paragraph{No Cages in 4D.} A second consideration lends support to the intuition behind our main claim. This is the inability to confine one segment in a ``cage'' composed of other segments in 4D. Consider segment $s_0 = v_0 v_1$ in Fig.~\figref{knitting}. It is surrounded by other segments in the sense that it cannot be rotated freely about one endpoint (say $v_0$) without colliding with the other segments. Let $S$ be the $2$-sphere in $\R^3$ of radius $\ell_0$ centered at $v_0$. Each point on $S$ is a possible location for $v_1$. Segment $s_0$ is confined in the sense that there are points of $S$ that cannot be reached from $s_0$'s initial position without collision with the other segments. This can be seen by centrally projecting the segments from $v_0$ onto $S$, producing an ``obstruction diagram.'' It should be clear that $v_1$ is confined to a cell of this diagram. Although this by no means implies that the chain in Fig.~\figref{knitting} is locked, it is at least part of the reason that the chain might be locked. We now argue informally that such confinement is not possible in 4D. Again let $s_0 = v_0 v_1$ be fixed at $v_0$, and let $S$ be the $3$-sphere in $\R^4$ of radius $\ell_0$ centered on $v_0$ that represents the possible locations for $v_1$. Again we project the other segments onto $S$ producing an obstruction diagram. As in the lower dimensional case, this diagram is composed of 1D curves, being the projection of 1D segments. But in the $3$-sphere $S$, $v_1$ has three degrees of freedom, and cannot be confined by a (finite) set of 1D curves. Our next task is to make this intuitive argument more precise. \section{Straightening Open Chains in 4D} \seclab{Open} Let $P$ be a simple, open polygonal chain in 4D with $n \ge 2$ vertices. Each vertex $v_i$ is also called a {\em joint\/} of the chain. 
The segment $s_i = v_i v_{i+1}$ we sometimes call a {\em link\/} of the chain. We say a joint $v_i$ is {\em straightened\/} if $(v_{i-1}, v_i, v_{i+1})$ are collinear and form a simple chain; in this case, the angle at $v_i$ is $\pi$. We prove Theorem~\theoref{open.4D} by straightening the first joint $v_1$, ``freezing'' it, and repeating the process until the entire chain has been straightened. This is a procedure which, of course, could not be carried out in 3D. But there is much more room for maneuvering in 4D. We have two different algorithms for accomplishing this task. The first (Algorithm~1a) is easier to understand, but only establishes a bound of $O(n^4)$ on the number of moves, and requires $O(n^4 \log n)$ time. The second (Algorithm~1b) is a bit more intricate but achieves $O(n)$ moves in $O(n^2)$ time. Both follow the rough outline just sketched. We provide full details for Algorithm~1a, but only sketch Algorithm~1b. Define the {\em goal position\/} $v_g$ for $v_0$ (and $s_g=v_g v_1$ the goal position for $s_0$) as the unique position that represents straightening of joint $v_1$. Call the goal position {\em intersected\/} if $s_g \cap s_i \neq \emptyset$ for some $i > 2$; and otherwise call it {\em free}. 
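These definitions are directly computable. The sketch below (assuming numpy; the segment-segment distance uses a standard clamped closest-point computation, which is a workable approximation for a sketch) constructs the goal position $v_g$ and tests whether the goal segment $s_g$ is free:

```python
import numpy as np

def goal_position(v1, v2, ell0):
    """Goal v_g for v_0: on the ray from v_2 through v_1, at distance
    ell0 beyond v_1, so that the chain is straight at joint v_1."""
    d = v1 - v2
    return v1 + ell0 * d / np.linalg.norm(d)

def seg_seg_dist(p0, p1, q0, q1):
    """Minimum distance between segments p0p1 and q0q1 in R^d, via the
    usual clamped closest-point parameters."""
    u, v, w = p1 - p0, q1 - q0, p0 - q0
    a, b, c = u @ u, u @ v, v @ v
    d, e = u @ w, v @ w
    denom = a * c - b * b
    s = 0.0 if denom < 1e-12 else np.clip((b * e - c * d) / denom, 0.0, 1.0)
    t = np.clip((b * s + e) / c, 0.0, 1.0) if c > 1e-12 else 0.0
    s = np.clip((b * t - d) / a, 0.0, 1.0) if a > 1e-12 else s
    return float(np.linalg.norm(w + s * u - t * v))

def goal_is_free(verts, ell0, eps=1e-9):
    """True iff the goal segment s_g = v_g v_1 misses every s_i, i > 2."""
    vg = goal_position(verts[1], verts[2], ell0)
    return all(seg_seg_dist(vg, verts[1], verts[i], verts[i + 1]) > eps
               for i in range(3, len(verts) - 1))

# A 5-vertex chain in R^4 whose goal position is unobstructed:
verts = np.array([[1., 1, 0, 0], [0, 0, 0, 0], [1, 0, 0, 0],
                  [2, 0, 0, 0], [3, 0, 0, 0]])
assert goal_is_free(verts, np.linalg.norm(verts[0] - verts[1]))
```

The exact algorithms of the paper of course work with the arrangement of obstruction arcs rather than a numeric tolerance; the sketch only illustrates the definitions.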
\subsection{Algorithm 1a} A high-level view of the algorithm is as follows: \begin{center} \fbox{% \begin{minipage}{56mm} \begin{tabbing} \hspace*{3mm}\=\hspace*{3mm}\=\hspace*{3mm}\=\kill {\bf Algorithm 1a: Open Chains}\\ {\sf repeat until} chain straightened {\sf do}\\ \> {\sf 1: if} $s_g$ is {\em free\/} {\sf then}\\ \> \> Construct obstruction diagram Ob$(v_0)$ on $3$-sphere.\\ \> \> Apply motion planning to move $v_0$ to $v_g$.\\ \> {\sf 2: else} $s_g$ is {\em intersected\/}\\ \> \> Construct obstruction diagram Ob$(v_1)$ on $2$-sphere.\\ \> \> Move $v_1$ so that the goal position is not intersected.\\ \end{tabbing} \end{minipage} } \end{center} \subsubsection{Step~1: $s_g$ is free} Our argument depends on some basic intersection facts, which we formulate in $\R^d$ in a series of lemmas before specializing to the $d=3$ and $d=4$ cases we need. \paragraph{Geometric Intersections in $\R^d$.} \seclab{Geometric.Intersections} Let the coordinates of $\R^d$ be $x_1,x_2,\ldots,x_d$. A {\em $k$-flat\/} is the translate of a subspace spanned by $k$ linearly independent vectors. Flats for $k=0,1,2$ are also called points, lines, and planes. A $k$-sphere is the set of points in a $(k+1)$-flat at a fixed radius from a point (its {\em center\/}) in that flat. A $0$-sphere is a set of two points, a circle is a $1$-sphere, and the surface of a ball in $\R^3$ is a $2$-sphere. When emphasizing the topology of a $k$-sphere, we will use the symbol $\Sph^k$. \begin{lemma} The intersection of a $2$-flat $H$ (i.e., a plane) with a $(d{-}1)$-sphere $S$ in $\R^d$ is a circle, a point, or empty. \lemlab{plane.sphere} \end{lemma} \begin{pf} Translate and rotate the sphere and plane so that the sphere is centered on the origin, and the plane is parallel to the $x_1x_2$-plane. 
The equations of the sphere $S$ and the plane $H$ are then: \begin{eqnarray} S & : & x_1^2 + x_2^2 + \cdots + x_d^2 = r^2 \\ H & : & x_3 = a_3 \,,\, x_4 = a_4 \,,\, \cdots ,\, x_d=a_d \end{eqnarray} where the $a_i$ are constants. Let $A^2 = \sum_{i=3}^d a_i^2$. Then \begin{eqnarray} S \cap H & : & x_1^2 + x_2^2 + A^2 = r^2 \\ & & x_1^2 + x_2^2 = r^2 - A^2 \end{eqnarray} If $r^2 < A^2$, the intersection is empty. If $r^2 = A^2$, the intersection is the point $(0,0,a_3,\ldots,a_d)$. If $r^2 > A^2$, the intersection is a circle in $H$ with radius $\sqrt{r^2 - A^2}$, and center $(0,0,a_3,\ldots,a_d)$. \end{pf} \begin{lemma} The intersection of a (1D) line, ray, or segment with a $(d{-}1)$-sphere $S$ in $\R^d$ is at most two points, i.e., it either contains one or two points or is the empty set. \lemlab{line.sphere} \end{lemma} \begin{pf} Let $s = ab$ be a segment, and let the sphere center be $c$. Let $H$ be the 2D plane determined by the three points $a,b,c$, i.e., $H$ is the affine span of $\{a,b,c\}$. Because $s \subset H$, we must have $s = s \cap H$. So \begin{eqnarray} s \cap S & = & (s \cap H) \cap S \\ & = & s \cap (H \cap S) \end{eqnarray} By Lemma~\lemref{plane.sphere}, $H \cap S$ is a circle, a point, or empty, and the claim for segments follows because a segment intersects a circle in at most two points. Rays and lines yield the same result by taking $a$ and $b$ sufficiently far away. \end{pf} Let $a$, $b$, and $c$ be three distinct points in $\R^d$, such that $c$ does not lie on the segment $ab$. Call the set of points that lie on rays that start at $c$ and pass through a point of $ab$ a {\em triangle cone\/} ${\triangle}_c(a,b)$. If $(a,b,c)$ are collinear, the triangle cone degenerates to a ray. \begin{lemma} The intersection of a triangle cone ${\triangle}_c(a,b)$ with a $(d{-}1)$-sphere $S$ in $\R^d$ consists of at most two connected components---and, if $c$ is the center of $S$, of at most one component---each of which is a circular arc or a point.
\lemlab{tricone.sphere} \end{lemma} \begin{pf} Let ${\triangle} = {\triangle}_c(a,b)$, and let $H$ be the 2D plane containing ${\triangle}$. Because ${\triangle} \subset H$, ${\triangle} = {\triangle} \cap H$. So ${\triangle} \cap S = {\triangle} \cap (H \cap S)$. By Lemma~\lemref{plane.sphere}, $H \cap S$ is a circle $C$ in the plane containing ${\triangle}$. So the problem reduces to the intersection of a triangle cone with a circle. As illustrated in Fig.~\figref{tricone}a, this intersection is at most one arc if the cone's apex $c$ is at the center of $C$ (${\triangle}_1$ in the figure), and at most two arcs otherwise (${\triangle}_2$ in the figure). Any of the arcs illustrated could degenerate to points if the cone is a ray. (When $c$ is not the center of $S$, the arc could be the whole circle $C$.) \end{pf} \begin{figure}[htbp] \centering \includegraphics[width=0.9\linewidth]{tricone.eps} \caption{(a) Intersections of triangle cones ${\triangle}_1={\triangle}_{c_1}(a_1,b_1)$ and ${\triangle}_2={\triangle}_{c_2}(a_2,b_2)$ with a circle $C$ centered at $c_1$; (b) Intersections of quadrilateral cones $Q_1$ and $Q_2$ with $C$.} \figlab{tricone} \end{figure} We will need a slight extension of this lemma. Define a {\em quadrilateral cone\/} $Q_c(a,b)$ to be the closure of ${\triangle}_c(a,b) \setminus t$, where $t$ is the triangle determined by $(a,b,c)$. Thus $Q_c(a,b)$ is all the points on the rays from $c$ at or beyond $ab$. The next lemma says that the conclusion of the previous lemma holds for quadrilateral cones as well. \begin{lemma} The intersection of a quadrilateral cone $Q_c(a,b)$ with a $(d{-}1)$-sphere $S$ in $\R^d$ consists of at most two connected components---and, if $c$ is the center of $S$, of at most one component---each of which is a circular arc or a point. \lemlab{quadcone.sphere} \end{lemma} \begin{pf} As Fig.~\figref{tricone}b makes clear, $Q_c(a,b)$ is just ${\triangle}_c(a,b)$ intersected with a closed halfplane in $H$ containing $ab$.
Intersecting the components from Lemma~\lemref{tricone.sphere} with a halfplane cannot increase their number, and so the claim follows. \end{pf} \paragraph{Obstruction Diagram Ob$(v_0)$.} Let ${\cal{C}}_0$ be the {\em configuration space\/} for vertex $v_0$ when $v_1$ is fixed: the set of all possible positions for $v_0$ that preserve the length of $v_1 v_0$. ${\cal{C}}_0$ is a $3$-sphere $S$ in $\R^4$ centered at $v_1$ with radius $\ell_0$. Let ${\cal{F}}_0$ be the {\em free space\/} for vertex $v_0$ with all other vertices $v_i$ of the chain fixed: the subset of ${\cal{C}}_0$ for which the chain is simple, i.e., for which $s_0$ does not intersect $s_i$, $i > 1$, and $s_0$ intersects $s_1$ only at $v_1$. We define the {\em obstruction diagram\/} $\B0$ for $v_0$ as the set such that ${\cal{F}}_0 = {\cal{C}}_0 \setminus \B0$. Our goal is to describe, and ultimately construct, $\B0$. To ease notation, let $_j{\triangle}_i={\triangle}_{v_j}(v_i,v_{i+1})$ be the triangle cone with apex $v_j$ determined by segment $i$, and define ${_jQ_i} \subseteq {_j{\triangle}_i}$ as the similar quadrilateral cone. \begin{lemma} The set of points $\B0 \subset {\cal{C}}_0$ in the $3$-sphere $S$ consists of at most $n-1$ components, each of which is a circular arc or a point. \lemlab{Obv0} \end{lemma} \begin{pf} $\B0$ is the union of the obstructions contributed by each segment $s_i$, $i > 1$, plus the single point disallowing overlap with $s_1$. If $s_0$ intersects $s_i$, then $v_0$ lies in the set ${_1Q_i}$ in $\R^4$, for then $v_0$ lies on a ray from $v_1$ along $s_0$, beyond the crossing with $s_i$. (For example, in Fig.~\figref{tricone}b, we have $c_1 = v_1$, $a_1 = v_i$, and $b_1=v_{i+1}$.) Thus ${_1Q_i} \cap S$ is precisely the locus of positions of $v_0$ for which $s_0$ intersects $s_i$. By Lemma~\lemref{quadcone.sphere}, this intersection is a circular arc or a point. Unioning over all $i>1$ establishes the claim.
\end{pf} \noindent The following lemma is now immediate: \begin{lemma} If $v_0$'s goal position $v_g$ is free, then $v_1$ may be straightened. \lemlab{free.straighten} \end{lemma} \begin{pf} Because $v_g$ is free, $v_g \not\in {\rm Ob}(v_0)$. Because the given chain is assumed simple, the initial position $v_0 \not\in {\rm Ob}(v_0)$. The locus of possible $v_0$ positions forms the $3$-sphere $S$. The obstacles $\B0$ are a finite set of circular arcs and points. The removal of $\B0$ from $S$ cannot disconnect $v_0$ from $v_g$. This follows from the fact that $\R^d$ cannot be separated by a subset of dimension at most $d{-}2$~\cite[Thm.~3-61, p.~148]{hy-t-61}. Neither then can $\Sph^d$ be so disconnected. For suppose set $X$ disconnects two points $p$ and $q$ of $\Sph^d$. Then stereographically project $\Sph^d$ to $\R^d$, from a center not in $X$ and distinct from the two points. This produces a set $X'$ that disconnects $p'$ from $q'$ in $\R^d$, contradicting the quoted theorem. Therefore there is a path in ${\cal{F}}_0 = S \setminus \B0$ from $v_0$ to $v_g$, which represents a continuous motion of $s_0$ that straightens $v_1$. \end{pf} It is this lemma which justifies the claim made in Section~\secref{Background} that there can be no cages in 4D. We defer the construction of the path guaranteed by this lemma to Section~\secref{Canny.1}. \subsubsection{Step~2: $s_g$ is intersected} If $s_g$ is intersected, then rotating $s_0$ to the goal position necessarily violates simplicity at the goal position. In this case, we slightly move $v_1$, the joint between $s_0$ and $s_1$, so that the new goal position $s'_g$ is no longer intersected. \noindent That we can ``break'' the degeneracy of an intersected goal is established by the following lemma: \begin{lemma} $v_1$ may be moved to $v'_1$ while keeping all other vertices fixed, so that the chain remains simple, and the new goal $s'_g$ is not intersected.
\lemlab{inter.break} \end{lemma} \begin{pf} Fix the positions of $v_0, v_2, v_3, \ldots, v_n$. The $2$-sphere $$S = \{z \in \R^4 : |z-v_0|=\ell_0, |z-v_2|=\ell_1\}$$ represents all the possible positions for $v_1$ that preserve the lengths of its incident links. Note that $S$ consists of the intersection of two $3$-spheres. Because we may assume that the angle at $v_1$ is not already straightened, $S$ does not degenerate to a single point. Thus $S$ is a $2$-sphere. Now we construct an obstruction diagram $\D1$ on $S$ that is a superset of all those positions of $v_1$ for which~(1) the goal position $s_g$ (of $s_0$) is intersected, or for which~(2) the chain $(v_0,v_1,v_2)$ intersects the remaining, fixed chain $(v_2,\ldots,v_n)$. We construct a superset rather than the precise obstruction set because the former is easier but equally effective computationally. \begin{enumerate} \item Intersected goal positions $s_g$. A goal segment $s_g$ lies on the ray from $v_2$ through $v_1$, for it is exactly those $s_g$ that are straight at $v_1$. For $s_g$ to intersect $s_i$, $v_1$ must lie in $_2{\triangle}_i$, the triangle cone with apex at $v_2$ and delimited by $s_i$. See Fig.~\figref{sgint}. \begin{figure}[htbp] \centering \includegraphics[height=7cm]{sgint.eps} \caption{The triangle cone $_2{\triangle}_i$ intersects the sphere $S$ in at most two circular arcs.} \figlab{sgint} \end{figure} Not every $v_1 \in {_2{\triangle}_i}$ leads to intersection of $s_g$ with $s_i$: $s_g$ must reach $s_i$. The relevant subset of ${_2{\triangle}_i}$ could be detailed, but because it has one curved edge, we content ourselves with a superset of the obstructions by forbidding $v_1$ anywhere in ${_2{\triangle}_i}$. Applying Lemma~\lemref{tricone.sphere} shows that $S \cap {_2{\triangle}_i}$ contributes at most two arcs or points to $\D1$, for each $i \not\in \{0,1\}$. \item Intersections between $s_0$ and $s_1$ and the remainder of the chain.
$\D1$ also contains all the positions of $v_1$ that cause the two adjacent links to intersect any of the other segments. The link $v_2 v_1$ is clearly covered by $_2{\triangle}_i$. The link $v_0 v_1$ can be handled by the analogous triangle cone $_0{\triangle}_i$ with apex at $v_0$ and through $s_i$. Again these sets provide a superset of the obstructions, and Lemma~\lemref{tricone.sphere} again applies. \end{enumerate} Summing over all $i$ yields the obstruction superset $\D1$ composed of at most $2 \cdot 3 (n-2) = O(n)$ arcs or points on $S$. Thus $\D1$ is an arrangement of $O(n)$ arcs on a $2$-sphere, with the initial position of $v_1$ lying on at least one arc (because by hypothesis, $s_g$ is intersected). Choosing any point $v'_1 \in S \setminus \D1$ interior to an arrangement cell on whose boundary $v_1$ lies suffices to establish the claim. \end{pf} Note that it is quite possible for $v_1$ to be confined within a cell of the arrangement $\D1$, but this ``cage'' is no impediment. We do not need a path from $v_1$ to an arbitrary point of $S$; rather we only need a path to any unobstructed point $v'_1$. Although we could construct the arrangement $\D1$ in $O(n^2\alpha(n))$ time and $O(n^2)$ space~\cite{egppss-acptc-92,h-a-97}, for our limited goal of constructing just one point, we can do better: \begin{lemma} A move of $v_1$ to the position guaranteed by Lemma~\lemref{inter.break} may be computed in $O(n)$ time and $O(n)$ space. \lemlab{inter.break.move} \end{lemma} \begin{pf} Let $Z = \{a_1,\ldots,a_m\}$ be the collection of arcs of $\D1$ that contain $v_1$. $Z$ may be found by a brute-force check of each of the $O(n)$ arcs. Pick two arcs $a_1$ and $a_j$ angularly consecutive about $v_1$. This can be accomplished in $O(n)$ time by fixing $a_1$, and letting $a_j$ be the arc that makes the smallest angle with $a_1$.
Let $a$ be a circular arc ray (i.e., a directed great circle starting and ending at $v_1$) that bisects this angle; or if $Z$ only contains one arc, let $a$ be orthogonal to it; or if $Z$ only contains one point, let $a$ be any ray from $v_1$. Intersect $a$ with every arc and point of $\D1$, again in $O(n)$ time. Let ${\delta}$ be the distance from $v_1$ along $a$ to the closest intersection. Finally, choose $v'_1$ as the point ${\delta}/2$ along $a$. This point is guaranteed to be off $\D1$, and therefore unobstructed. Moving (in one move) $v_1$ to $v'_1$ establishes a new goal $s'_g$ that is not intersected. \end{pf} \subsubsection{Motion Planning} \seclab{Canny.1} Now that we know we can perform Step~2 of Algorithm~1a in $O(n)$ time per iteration, we return to finding a path through the $3$-sphere $S$ for $v_0$, as guaranteed by Lemma~\lemref{free.straighten}. Motion planning between two points of the free space ${\cal{F}}_0$ may be achieved by any general motion planning algorithm~\cite[Sec.~40.1.1]{s-amp-97}. For example, Canny's Roadmap algorithm achieves a time and space complexity of $O(n^k \log n)$, where $n$ is the number of obstacles, and $k$ the number of degrees of freedom in the robot's placements. In our case, $k=3$. His algorithm produces a piecewise algebraic path through ${\cal{F}}_0$, of $O(n^k)$ pieces. Each piece constitutes a constant number of moves, with the constant depending on the algebraic degree of the curves, which is bounded as a function of $k$. Therefore each joint straightening can be accomplished in $O(n^3)$ moves. Repeating the planning and straightening $n$ times leads to $O(n^4)$ moves in $O(n^4 \log n)$ time. In the next section we reduce the $O(n^3)$ moves per joint straightening to just $3$ moves per straightening. \subsection{Algorithm 1b} We have now established Theorem~\theoref{open.4D}, but with weaker complexity bounds than claimed.
It is not surprising that applying a general motion planning algorithm is wasteful in our relatively simple situation. In fact a significant improvement over Algorithm~1a can be achieved by switching attention from the absolute position of $v_0$ to the direction in which $s_0$ rotates. Let the vector along $s_0$ be $w_0 = v_0 - v_1$, and similarly let $w_g = v_g - v_1$. Let $w$ be the {\em goal direction\/}: a unit vector orthogonal to $w_g$ that represents the direction in which $w_0$ should be rotated to move it to its goal position. See Fig.~\figref{goal.direction}. \begin{figure}[htbp] \centering \includegraphics[height=2.5in]{goal.direction.eps} \caption{The goal direction vector $w$ defines the direction that $w_0$ should be rotated to reach $w_g$. The shaded triangle cone ${_1{\triangle}(v_0,v_g)}$ is not crossed by any links of the chain if $w$ is unobstructed.} \figlab{goal.direction} \end{figure} Thus $w$ is the unique unit vector pointing in the direction of the component of $w_g - w_0$ orthogonal to $w_g$: \begin{equation} a_1 w_g + b_1 w = w_g - w_0 \eqlab{goal.dir} \end{equation} for some reals $a_1 > 0$ and $b_1 > 0$. The space of possible directions $w$ forms a $2$-sphere rather than the $3$-sphere we faced in Step~1 of Algorithm~1a. This permits replacing the $O(n^3)$ moves per joint straightening from motion planning with at most two moves. We now proceed to describe this. Because this represents a computational improvement only, the proofs are only sketched. More detailed proofs are contained in~\cite{Roxana}. Algorithm~1b distinguishes three possibilities: \begin{enumerate} \item The goal position is {\em intersected\/} by some other link of the chain (just as in Algorithm~1a). \item The goal direction is {\em obstructed\/} in that rotation of $s_0$ in the direction $w$ might hit some link of the chain along its direct rotation to the goal position.
We again define a direction to be obstructed conservatively, working with a superset of the true obstructions: $w$ is obstructed if the triangular cone ${\triangle}_{v_1}(v_0,v_g)={_1{\triangle}(v_0,v_g)}$ is intersected by any $s_i$, $i > 1$. \item The goal direction is {\em free\/}: it is not obstructed (and so the goal position is not intersected). \end{enumerate} A high-level view of our second algorithm is as follows: \begin{center} \fbox{% \begin{minipage}{56mm} \begin{tabbing} \hspace*{3mm}\=\hspace*{3mm}\=\hspace*{3mm}\=\kill {\bf Algorithm 1b: Open Chains}\\ {\sf repeat until} chain straightened {\sf do}\\ \> {\sf 1: if} $w$ is {\em free\/} {\sf then}\\ \> \> Rotate $s_0$ directly to $s_g$.\\ \> {\sf 2: else if} $w$ is {\em obstructed\/} {\sf then}\\ \> \> Rotate $s_0$ to new position whose goal direction is free.\\ \> {\sf 3: else if} $s_g$ is {\em intersected\/} {\sf then}\\ \> \> Move $v_1$ so that the goal position is not intersected. \end{tabbing} \end{minipage} } \end{center} Step~3 is identical to Step~2 of Algorithm~1a, so we only discuss the first two steps. \subsubsection{Step~1: $w$ is free} By our definitions, $s_0$ may be rotated directly to $s_g$ without hitting any other segment of the chain. Because the goal position $s_g$ is not intersected, the chain remains simple even after the rotation has been completed. Therefore, the link $s_0$ can be straightened in one move. Note that this is the generic situation, in that for a ``random'' chain, e.g., one whose vertex coordinates are chosen randomly from a 4D box, each link can be straightened with Step~1 of the algorithm with probability~$1$. Steps~2 and~3 handle ``degenerate'' cases. We exploit this in our implementation (Section~\secref{Implementation}). \subsubsection{Step~2: $w$ is obstructed (but $s_g$ is not intersected)} \paragraph{Detecting obstructions.} When $w$ is obstructed, we again rely on construction of an obstruction diagram. 
First we describe the space in which the obstruction diagram is embedded. Consider the space of possible directions from which $s_0$ might approach $s_g$. In 3D, this set of unit vectors forms a $1$-sphere, a circle, which can be viewed as orthogonal to $s_g$ and centered at $v_g$; see Fig.~\figref{sphere.directions}a. Similarly, in 4D, the set of possible approach directions toward $s_g$ forms a unit $2$-sphere $S$, which again we center on $v_g$. Every point on this sphere represents a direction of approach to $s_g$; see Fig.~\figref{sphere.directions}b. \begin{figure}[htbp] \centering \includegraphics[width=0.7\linewidth]{sphere.directions.eps} \caption{(a) Directions approaching the goal position in 3D; (b) $S$ is a $2$-sphere in $\R^4$.} \figlab{sphere.directions} \end{figure} The {\em obstruction diagram\/} Ob$(s_g)$ is the set of vectors $w$ representing obstructed goal directions for $s_g$. \begin{lemma} If the goal $s_g$ is not intersected, the obstruction diagram Ob$(s_g)$ consists of at most $n$ arcs on $S$. \lemlab{Ob} \end{lemma} \begin{pf} Take an arbitrary segment $s_i$ of the chain, and ``project'' it to $s'_i$ in the $3$-flat $\Pi \supset S$ orthogonal to $s_g$; i.e., $s'_i = {_1{\triangle}_i} \cap \Pi$. See Fig.~\figref{sphere.directions}a for the 3D analog. We first claim that the set of directions $w$ obstructed by $s'_i$ is identical to the set obstructed by $s_i$. Next we determine this set of directions. Every vector $w$ determined by a point on $S$ and its center $v_g$ is orthogonal to $s_g$ by our choice of $\Pi$. So the set of $w$ obstructed by $s'_i$ is just those $w$ determined by the intersection of ${_g{\triangle}}(s'_i)$ with $S$. By Lemma~\lemref{tricone.sphere}, this is at most one arc on the sphere. See Fig.~\figref{sphere.4D}.
\begin{figure}[htbp] \centering \includegraphics[width=0.8\linewidth]{sphere.4D.eps} \caption{In 4D, $s_i$ projects to $s_i'$ in the $3$-flat containing $S$, and produces an arc of the obstruction diagram determined by the intersection of the triangle cone ${_g{\triangle}}(s'_i)$ with $S$.} \figlab{sphere.4D} \end{figure} \end{pf} Detection of obstruction therefore reduces to deciding if $w$ lies on one or more arcs of an arrangement of circular arcs on a $2$-sphere $S$, which can be accomplished in $O(n)$ time and space as in Lemma~\lemref{inter.break.move}. \paragraph{Skirting obstructions.} Our next task is to move $s_0$ when $w$ is obstructed so that its new goal direction is free. This task is similar to that handled in Lemma~\lemref{inter.break.move}---stepping off the arcs meeting at $w$---with one additional constraint: the move must maintain the simplicity of the chain. Note that Ob$(s_g)$ does not record chain simplicity, but rather records free goal directions. So we need to find a $\Delta w$ that will move $w$ to be free, while simultaneously maintaining simplicity during the motion of $s_0$. \begin{lemma} If $w$ is obstructed, $s_0$ can be moved, maintaining simplicity throughout, so that its new goal direction $w' = w + \Delta w$ is unobstructed. $\Delta w$ may be computed in $O(n)$ time and space. \lemlab{Obs.avoidance} \end{lemma} \begin{pf} Because the chain is initially simple, there must exist a ${\beta} > 0$ such that rotation of $s_0$ about $v_1$ by an angle less than ${\beta}$ leaves the chain simple. This ${\beta}$ can be computed by finding the smallest distance $d$ from $s_0$ to any other segment, and using the angle of a cone centered at $s_0$ of radius $d/2$. Now $\Delta w$ is selected just as in Lemma~\lemref{inter.break.move}, but subject to this angle constraint. 
\end{pf} Note that because we have based our analysis on a fixed $s_g$, moving $s_0$ does not alter the obstruction diagram, which records obstructed directions of approach to $s_g$. \subsubsection{Algorithm~1b Complexity} The algorithm straightens one joint in at most three moves: one to move $v_1$ so the goal is not intersected (Step~3), one to move $v_0$ so that the goal is not obstructed (Step~2), and one to rotate directly to the goal (Step~1). The total number of moves used by the algorithm is then at most $3n = O(n)$. For each of the $n$ iterations, Lemma~\lemref{Obs.avoidance} shows that the computations can be performed in linear time and space. This then establishes the total time complexity of $O(n^2)$ claimed in Theorem~\theoref{open.4D}. Because each move is performed independently, the obstruction diagram arcs may be discarded after each iteration. Thus the space requirements remain at $O(n)$. \subsection{Implementation} \seclab{Implementation} We have implemented Algorithm~1b for chains in ``general position'' in C++. The program accepts a chain as input, and first checks if it is simple. If it is, the straightening process starts; otherwise the program exits. The program then straightens the chain link-by-link using Step~1, one move per link. It also detects whether the goal is obstructed (Step~2) or intersected (Step~3) by solving sets of linear equations, but in those cases it simply halts; we have not implemented the obstruction diagrams or the obstruction-avoiding motions. For a chain whose vertex coordinates are chosen randomly, the program straightens it with probability~$1$, for then the degenerate cases handled by Steps~2 and~3 (when a point, $w$ or $v_1$, hits an arc on a $2$-sphere, e.g., Fig.~\figref{sphere.4D}) occur with probability~$0$. The output of the program is a set of Geomview or Postscript files that animate the straightening process.
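As a concrete illustration of the quantities driving Step~1, the following Python sketch (illustrative only; it is independent of the C++ implementation, and the helper names are ours) computes the goal position $v_g$ of $v_0$ and the goal direction $w$ of Eq.~\eqref{goal.dir} for a joint in $\R^4$.

```python
import math

# Minimal vector helpers (illustrative; not from the implementation).
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return math.sqrt(dot(a, a))

def goal_position(v0, v1, v2):
    """v_g: where v_0 lies once the joint v_1 is straight, i.e., s_0 laid
    along the extension of s_1 beyond v_1, preserving the link length l_0."""
    l0 = norm(sub(v0, v1))
    u = sub(v1, v2)
    u = tuple(x * (l0 / norm(u)) for x in u)
    return tuple(x + y for x, y in zip(v1, u))

def goal_direction(v0, v1, v2):
    """Unit vector w orthogonal to w_g = v_g - v_1: the direction in which
    w_0 = v_0 - v_1 should rotate toward w_g (cf. Eq. (goal.dir))."""
    vg = goal_position(v0, v1, v2)
    w0, wg = sub(v0, v1), sub(vg, v1)
    d = sub(wg, w0)
    coef = dot(d, wg) / dot(wg, wg)      # strip the w_g-component of d
    perp = tuple(x - coef * y for x, y in zip(d, wg))
    n = norm(perp)                       # n > 0 unless w_0 = w_g already
    return tuple(x / n for x in perp)
```

By construction $v_g$ lies on the ray from $v_2$ through $v_1$ at distance $\ell_0$ from $v_1$, and $w$ is a unit vector orthogonal to $w_g$, as Eq.~\eqref{goal.dir} requires.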
Fig.~\figref{chain.anim} shows output for a chain whose $n=100$ vertices were chosen randomly and uniformly in $[0,1]^4$. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{fig.ps} \caption{Snapshots of the algorithm straightening a chain of $n=100$ vertices, initially $(0)$, and after $25$, $50$, $75$, and all $99$ joints have been straightened (left to right). (a) Scale approx. 50:1; the entire chain is visible in each frame. (b) Scale approx. 1:1; the straightened tail is ``off-screen.'' (The apparent link length changes are an artifact of the orthographic projection of the 4D chain down to 2D.)} \figlab{chain.anim} \end{figure} \section{Straightening Trees in 4D} \seclab{Trees} It will come as no surprise that essentially the same algorithm as just described can straighten trees in 4D. The reason is that each segment was considered a fixed obstruction in the chain straightening algorithm, and whether those segments form a chain or a tree is largely irrelevant, as long as there is a free end. There is one spot at which the difference between a chain and a tree does matter, however: freeing up an intersected goal position. We concentrate on this difference in the description below. 
\begin{figure}[htbp] \centering \includegraphics[width=0.7\linewidth]{tree.eps} \caption{(a) Tree $T$ rooted at $z$; (b) After straightening chains $C$ incident to $x$; $C'_1$ is the set of straightened chains excluding one distinguished chain $(v_0, v_1, \ldots)$.} \figlab{tree} \end{figure} \begin{center} \fbox{% \begin{minipage}{56mm} \begin{tabbing} \hspace*{3mm}\=\hspace*{3mm}\=\hspace*{3mm}\=\kill {\bf Algorithm 2: Trees}\\ {\sf repeat until} straightened {\sf do}\\ \> {\sf 1:} Identify a node $x$ with chain descendants $C$.\\ \> {\sf 2:} Straighten each chain in $C$, forming $C'$.\\ \> {\sf 3: if} $r_g$ is intersected {\sf then}\\ \> \> Construct obstruction diagram Ob$(x)$ on 2-sphere.\\ \> \> Move $x$ so that $r_g$ not intersected.\\ \> {\sf 4:} Rotate each segment in $C'$ to $r_g$ and coalesce. \end{tabbing} \end{minipage} } \end{center} Algorithm~2 chooses a leaf $z$ of the given tree $T$ as root, and then identifies some node $x$ all of whose descendant subtrees are chains (Step~1). Call this set $C$; see Fig.~\figref{tree}a. Each chain in $C$ can be straightened one at a time via Algorithm~1, leaving a set of straightened chains, or segments, $C'$ (Step~2). Define the goal ray $r_g$ to be the extension of the parent segment $yx$ incident to $x$; see Fig.~\figref{tree}b. If $r_g$ is not intersected by any segment of $T \setminus C'$, then each segment in $C'$ can be rotated to $r_g$, each lying on top of one another (Step~4). We can view them as coalesced into a single link, reducing the degree of $x$ to $2$. The process then repeats. If, however, $r_g$ is intersected (Step~3), we need to move $x$ so that the goal ray becomes free. There are several ways to achieve this; we choose to parallel Step~2 of Algorithm~1a. Let $(v_0,v_1, \ldots, v_m)$ be one of the chains of $C'$, with $v_m$ adjacent to $x$. We distinguish this chain from the others in $C'$; call the set of others $C'_1$.
Let the $2$-link chain $(v_0, x, y)$ play the role of $(v_0, v_1, v_2)$ in Algorithm~1a. In that algorithm we argued that Ob$(v_1)$ is a set of arcs and points on a $2$-sphere (Fig.~\figref{sgint}). Here we will reach the same conclusion for Ob$(x)$ on the $2$-sphere $S$ of positions for $x$. The only difference is that in the current situation, the {\em star\/} of segments $C'_1$ is attached to $x$, and we need to augment Ob$(x)$ to reflect its obstructions. We opt to translate $C'_1$ as $x$ moves; this gives rise to two sets of constraints: (1)~those caused by a segment in $C'_1$ intersecting a segment of $T' = T \setminus \{ C'_1 \cup xy \cup xv_0 \}$; (2)~those caused by $xy$ or $xv_0$ intersecting a segment in $C'_1$. For the first, the locus of positions of $x$ that cause some $s \in C'_1$ to intersect some $s_i \in T'$ is a parallelogram, congruent to the Minkowski sum $s \oplus s_i$. Analogous to Lemma~\lemref{tricone.sphere}, it is easy to see that this holds: \begin{lemma} The intersection of a parallelogram with a $(d{-}1)$-sphere $S$ in $\R^d$ consists of at most four connected components, each of which is an arc or a point. \lemlab{para.sphere} \end{lemma} \unskip{\hfill $\Box$} Thus the constraints~(1) add $O(n)$ arcs or points to Ob$(x)$. Constraints~(2) can be seen to consist of $O(n)$ points on $S$: translating the star $C'_1$ to $y$ determines the rays that $xy$ might align with to cause $xy$ to intersect $C'_1$; and similarly translating $C'_1$ to $v_0$ determines rays for intersection with $x v_0$. The two placements of $C'_1$ therefore generate $O(n)$ additional point obstructions. With Ob$(x)$ again a set of $O(n)$ arcs and points on a $2$-sphere, Lemmas~\lemref{inter.break} and~\lemref{inter.break.move} hold, leading to the same time complexities claimed for Algorithm~1, and establishing Theorem~\theoref{tree.4D}.
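The parallelogram of constraint~(1) can be made concrete with a short Python sketch (ours, for illustration; all helper names are assumptions, not from the implementation): sweeping the parameters $(u,t)$ over $[0,1]^2$ below traces exactly the forbidden positions of $x$, and a standard segment--segment closest-point computation confirms that each such position forces an intersection.

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(s * x for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def clamp(v): return max(0.0, min(1.0, v))

def seg_seg_dist2(p1, q1, p2, q2):
    """Squared minimum distance between nondegenerate segments [p1,q1]
    and [p2,q2] in R^d (standard closest-point computation, any d)."""
    d1, d2, r = sub(q1, p1), sub(q2, p2), sub(p1, p2)
    a, e, f = dot(d1, d1), dot(d2, d2), dot(d2, r)
    c, b = dot(d1, r), dot(d1, d2)
    denom = a * e - b * b
    s = clamp((b * f - c * e) / denom) if denom > 1e-12 else 0.0
    t = (b * s + f) / e
    if t < 0.0:
        t, s = 0.0, clamp(-c / a)
    elif t > 1.0:
        t, s = 1.0, clamp((b - c) / a)
    w = sub(add(p1, scale(d1, s)), add(p2, scale(d2, t)))
    return dot(w, w)

def forbidden_position(p, q, a, b, u, t):
    """Position x of the star's attachment point for which the translated
    segment [x+a, x+b] meets s_i = [p, q] at parameters (u, t) in [0,1]^2.
    Sweeping (u, t) traces the forbidden parallelogram (a Minkowski sum)."""
    return tuple(p[k] - a[k] + u * (q[k] - p[k]) - t * (b[k] - a[k])
                 for k in range(len(p)))
```

Here $a$ and $b$ are the fixed offsets of the star segment's endpoints from $x$; the parallelogram has vertex $p - a$ and edge vectors $q-p$ and $-(b-a)$, congruent to the Minkowski sum named in the text.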
\section{Convexifying Closed Chains in 4D} \seclab{Closed} Our algorithm for convexifying closed chains employs the {\em line tracking\/} motions introduced in~\cite{lw-rcpce-95}. Indeed our algorithm mimics theirs in that we repeatedly apply line tracking motions, each of which straightens at least one joint, until a triangle is obtained (which is a planar convex polygon, as desired). Although the overall design of our algorithm is identical, the details are quite different, for there is a major difference with~\cite{lw-rcpce-95}: they permitted self-intersections of the chain, whereas we do not. This greatly complicates our task.% \footnote{ An alternative convexifying algorithm, again permitting self-intersections, is described in~\cite{s-scsc-73}. Sallee accomplishes the same result by a different basic motion, involving four consecutive vertices rather than the five used in~\cite{lw-rcpce-95}. } Let $(v_0,v_1,v_2,v_3,v_4)$ be five consecutive vertices of a closed polygonal chain. We allow $v_0 = v_4$. A {\em line tracking\/} motion of $v_2$ moves $v_2$ along some line $L$ in space, while keeping both $v_0$ and $v_4$ fixed. As long as the angles at the joints $v_1$ and $v_3$ (the {\em elbows\/}) are neither $\pi$ (straight) nor $0$ (folded), such a motion is possible. Neither angle can be $0$ because that would violate the simplicity of the chain. Straightening one joint is precisely our goal, so we assume that neither joint is straight; and therefore a line tracking motion is possible. We will choose $L$ and a direction along it so that the movement increases the distance from $v_2$ to both $v_0$ and $v_4$ simultaneously. This necessarily opens both elbow angles. The motion stops when one elbow straightens. The only issue is whether this can be done while maintaining simplicity.
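The claim that a suitable direction along $L$ opens both elbows at once is easily checked numerically: the derivative of $|v_2-v_0|^2$ as $v_2$ moves in direction $d$ is $2\,d\cdot(v_2-v_0)$, so any $d$ with positive inner product with both $v_2-v_0$ and $v_2-v_4$ increases both distances. The following Python sketch (illustrative only; it does not compute the specific point $q$ chosen in the next subsection) uses the bisector of the two outward unit vectors.

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return math.sqrt(dot(a, a))
def unit(a): return tuple(x / norm(a) for x in a)

def opening_direction(v0, v2, v4):
    """A direction for moving v_2 that increases |v_2 - v_0| and
    |v_2 - v_4| simultaneously: the bisector of the unit vectors pointing
    away from v_0 and v_4.  (Degenerate only when v_2 lies strictly
    between v_0 and v_4 on a common line.)"""
    d = tuple(x + y for x, y in zip(unit(sub(v2, v0)), unit(sub(v2, v4))))
    return unit(d)

def track(v2, d, step):
    """One small line tracking step: slide v_2 by `step` along d."""
    return tuple(x + step * dx for x, dx in zip(v2, d))
```

The chosen bisector makes a positive inner product with both outward vectors, so each small step along it opens both elbow angles, as the text requires.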
Our aim is to prove this theorem: \begin{theorem} For a simple 4D chain $(v_0,\ldots,v_4)$, there exists a line tracking motion of $v_2$ that straightens either $v_1$ or $v_3$ (or both) while maintaining simplicity of the chain throughout the motion. \theolab{line.tracking} \end{theorem} A high-level view of the algorithm is as follows: \begin{center} \fbox{% \begin{minipage}{56mm} \begin{tabbing} \hspace*{3mm}\=\hspace*{3mm}\=\hspace*{3mm}\=\kill {\bf Algorithm 3: Closed Chains}\\ {\sf repeat until} chain is a triangle {\sf do}\\ \> Compute a line $L$ along which to move $v_2$.\\ \> Compute free paths $\pi_1$ and $\pi_3$ for $v_1$ and $v_3$.\\ \> Move $v_2$ along $L$, $v_1$ along $\pi_1$, and $v_3$ along $\pi_3$.\\ \> Freeze the straightened joint $v_1$ or $v_3$. \end{tabbing} \end{minipage} } \end{center} \subsection{Choosing $L$} To fix $L$, the ray along which $v_2$ moves, we choose a point $q \in \R^4$ different from $v_2$, and let $L$ be the ray from $v_2$ that contains $v_2q$. We will choose $q$ so that it is itself the point where one of the two joints $v_1$ or $v_3$ becomes straight while moving $v_2$ along $L$. \begin{lemma} A point $q$ determining an appropriate $L$ may always be found, and in time and space $O(n^4)$. \lemlab{Choosing.L} \end{lemma} \begin{pf} We choose $q$ so that it satisfies these conditions: \begin{enumerate} \item Moving $v_2$ along $L$ increases the distance from $v_2$ to $v_0$ and to $v_4$. \item Either $v_1$ or $v_3$ becomes straight, i.e., $|qv_0|=|v_0v_1|+|v_1v_2|=r_0$, or $|qv_4|=|v_2v_3|+|v_3v_4|=r_4$. \item \begin{enumerate} \item If $|qv_0|=r_0$, then $qv_0$ does not intersect any segment of the chain other than those to which it is incident. \item If $|qv_4|=r_4$, then $qv_4$ does not intersect any segment of the chain other than those to which it is incident. \end{enumerate} \item $v_2 q$ does not intersect a segment $s_i$, $i > 4$.
\end{enumerate} Condition~3 ensures that our ``goal'' is not itself intersected, in the sense used in Section~\secref{Open}. Let $R_i$ be the set of points (the ``region'') of $\R^4$ that satisfy Condition~$i$ above. $R_1$ is the intersection of two closed half-spaces containing $v_2$, orthogonal to $v_0v_2$ and $v_2v_4$ respectively. Note that $v_2 \in R_1$. If $v_0v_2$ and $v_2v_4$ lie on the same line, $R_1$ degenerates to a $3$-flat orthogonal to that line; otherwise it is a $4$-dimensional set.\footnote{ Although we could remove this possible degeneracy by moving $v_2$ in a neighborhood (while preserving simplicity) to break the collinearity, this is not necessary, as the proof goes through regardless. } See Fig.~\figref{L} for a lower-dimensional analog of the situation. \begin{figure}[htbp] \centering \includegraphics[width=0.6\linewidth]{L.eps} \caption{Choosing $q \in L$. $R_1 \cap R_2 = R_1 \cap (S_0 \cup S_4)$.} \figlab{L} \end{figure} The set of points $R_2 = S_0 \cup S_4$ in 4D that satisfy Condition~2 is the union of two 3-spheres, $S_0$ and $S_4$, centered at $v_0$ and $v_4$ and of radius $r_0$ and $r_4$, respectively. Because $|v_0v_2|<r_0$, $v_2$ is inside the $4$-ball bounded by $S_0$. Therefore, $R_1 \cap S_0 \neq \emptyset$. Similarly, $R_1 \cap S_4 \neq \emptyset$. So $R_1 \cap R_2 \neq \emptyset$. The dimensionality of this set depends on whether or not $\{v_0, v_2, v_4\}$ are collinear: if they are, the $3$-spheres are intersected by a $3$-flat producing $2$-spheres; if they are not, the $3$-spheres are intersected by a $4$-dimensional wedge, producing $3$-dimensional regions of the $3$-spheres. Consider Condition~3a; clearly 3b is analogous. We want all those points $q$ such that $qv_0$ does not intersect any other link of the chain. Clearly the points forbidden by segment $s_i$ lie in the triangle cone ${_0{\triangle}_i}={\triangle}_{v_0}(v_i,v_{i+1})$, just as in the proof of Lemma~\lemref{inter.break}.
Intersecting ${_0{\triangle}_i}$ for all $i$ with $R_1 \cap R_2$ marks the set of points that must be avoided in our choice of $q$: $R_3 \supset \R^4 \setminus \bigcup_i {_0{\triangle}_i}$. It is easiest to concentrate on the intersection of $_0{\triangle}_i$ with the spheres in $R_2$. By Lemma~\lemref{tricone.sphere}, we know this intersection is at most two arcs or points, independent of the dimension of the spheres. So whether or not $\{v_0, v_2, v_4\}$ are collinear, the intersection produces $O(n)$ arcs or points. Similarly, Condition~4 leads to $R_4 \supset \R^4 \setminus \bigcup_{i>4} {_2{\triangle}_i}$, for $v_2q$ can intersect $s_i$ only if $q$ lies in ${_2{\triangle}_i}$. Again, $O(n)$ arcs or points need be avoided in $R_1 \cap R_2$. No union of arcs and points can cover the set $R_1 \cap R_2$, which is either $2$- or $3$-dimensional. Thus $\bigcap_i R_i \neq \emptyset$. We need only choose a $q$ in this set. There are a variety of ways to choose such a $q$ algorithmically. A naive method is to first construct an arrangement of $2$-flats in $\R^4$, each containing a triangle $_0{\triangle}_i$ or $_2{\triangle}_i$. This computation could be performed in $O(n^4)$ time and space~\cite{ess-ztha-93}. Intersecting this arrangement with the halfspaces delimiting $R_1$ and the $3$-spheres $S_0$ and $S_4$ leaves cells bounded by algebraic surfaces inside $\bigcap_i R_i$. The centroid of any such cell can be selected as $q$. \end{pf} \subsection{Line Tracking in 3D} We start by thinking about the analogous situation in 3D. This will both set notation and ground intuition by showing why Theorem~\theoref{line.tracking} does not hold in 3D. \subsubsection{Topology of Configuration Space in 3D} Let $\R_{[0,1)}$ be the interval $[0,1)$ on the real line, open at $1$. We will parametrize the location of $v_2$ along $L$ by $t \in [0,1)$, with $t=0$ the start, and $t=1$ when $v_2$ reaches the $q$ of Lemma~\lemref{Choosing.L}, the first time at which a joint straightens.
Let this joint be $v_1$ without loss of generality. Let ${\cal{C}}'$ be the configuration space of the four-link system in isolation, permitting intersections between the links, the prime reminding us that $t=1$ has been excluded. We claim that \begin{equation} {\cal{C}}' = \Sph^1 \times \Sph^1 \times \R_{[0,1)} . \eqlab{C3} \end{equation} This can be seen as follows. Fix some $t$ so that $v_2$ is fixed. Then each of $v_1$ and $v_3$ is free to rotate (independently) on a circle in $\R^3$ centered on the axis $v_0 v_2$ or $v_2 v_4$, respectively. As $t$ varies from $0$ to $1$, these circles move in space, and grow and shrink in radius; see Fig.~\figref{circles.3D}. \begin{figure}[htbp] \centering \includegraphics[height=4.5cm]{circles.3D.eps} \caption{In 3D, the circle on which $v_1$ may lie moves in space as $v_2$ slides up $L$.} \figlab{circles.3D} \end{figure} At $t=1$ the $v_1$ circle shrinks to a point. But for $t \in [0,1)$, both circles retain a positive radius. Thus the configuration space ${\cal{C}}'$ has the topology of $\Sph^1 \times \Sph^1$ for each $t$, and the claim follows. \subsubsection{Obstruction Diagram in 3D} As in Section~\secref{Open}, we incorporate the obstacles representing the other links via an ``obstruction diagram.'' We start by ignoring the four moving links as obstructions, and only consider the remaining, fixed links of the polygonal chain as obstacles. We develop the obstruction diagram first for fixed $t$, so that the relevant configuration space is $\Sph^1 \times \Sph^1$. Because we are ignoring the moving links as obstructions, movement on the two circles is independent, so it suffices to determine the obstruction diagram Ob$(v_1)$ on one $1$-sphere/circle $S_1$, that for $v_1$. The following lemma will be key in 4D: \begin{lemma} In 3D, if $(v_2 - v_0) \cdot (v_1 - v_0) \neq 0$ and $(v_2 - v_0) \cdot (v_1 - v_2) \neq 0$, then a single segment contributes at most four points to Ob$(v_1)$.
Otherwise, if either dot product is zero, a segment could obstruct a finite-length arc of the $S_1$ circle for $v_1$. \lemlab{obs.pts.3D} \end{lemma} \begin{pf} We only sketch a proof, leaving details for the 4D case considered below. Spinning $v_1$ along its circle of freedom while maintaining $v_0$ and $v_2$ fixed traces out a ``spindle'' shape, which can be viewed as the union of two cones. A segment $s$ that does not lie along a line through either $v_0$ or $v_2$ can intersect each cone in at most two points, and so intersect the spindle in at most four points. See Fig.~\figref{four.obstacles}. \begin{figure}[htbp] \begin{minipage}[b]{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{four.obstacles.eps} \caption{One segment $s$ can contribute four points to Ob$(v_1)$.} \figlab{four.obstacles} \end{minipage} \hspace{0.1\linewidth} \begin{minipage}[b]{0.45\linewidth} \centering \includegraphics[width=0.5\linewidth]{dot.zero.eps} \caption{$(v_2-v_0) \cdot (v_1-v_2)=0$ and segment $s$ (which lies in the plane of the circle) contributes an arc to the obstruction diagram Ob$(v_1)$.} \figlab{dot.zero} \end{minipage} \end{figure} These four segment-cone intersection points correspond one-to-one with four $v_1$ positions on $S_1$ at which there is an intersection between the $2$-link chain $(v_0,v_1,v_2)$ and $s$. If the segment $s$ lies on the surface of the cone, then it contributes just one point to the diagram, corresponding to the angle of spin that aligns one of the two links with the obstacle segment. Finally, if either of the two links $v_0v_1$ or $v_1v_2$ is orthogonal to the axis of the spindle, i.e., either dot product is zero, then a segment obstacle could obstruct an entire arc of the circle, for one of the cones is then degenerately flat. As Fig.~\figref{dot.zero} illustrates, here a segment might obstruct a range of rotations of $v_1-v_2$, producing an arc in Ob$(v_1)$.
\end{pf} \subsubsection{Disconnected Free Space in 3D} Let $v_1(t)$ represent the position of $v_1$ on its circle $S_1$ at a particular time $t$. The goal is for the links $(v_0,v_1,v_2)$ to avoid all obstacles, which means that $v_1(t)$ should avoid points of the obstruction diagram. If we ignore for now the orthogonality case, then we have the situation that a finite set of links produces an obstruction diagram consisting of a finite set of points on $S_1$. As $t$ moves, these points wander around the circle, disappear, enter, join, or split. The moving links, previously ignored, just add a few more points to the obstruction diagram, moving in a different manner. The diagram for the configuration space for $v_1$ then looks like arcs on the tube-like $\Sph^1 \times \R_{[0,1)}$. It is clear that it is possible for the point $v_1(t)$ to be ``captured'' between two points of the obstruction diagram which move together and squeeze $v_1(t)$ into a collision. See Fig.~\figref{config.tube}. In this case, the free space for the point $v_1$ is not connected from $v_1(0)$ to $v_1(1)$. \begin{figure}[htbp] \centering \includegraphics[height=6cm]{config.tube.eps} \caption{Point $v_1(t)$ is ``captured'' by two obstacle points in configuration space, the tube-like surface.} \figlab{config.tube} \end{figure} And indeed it is easy to ``cage in'' the moving links by the fixed links so that no straightening is possible. Our next task is to show that such caging-in is impossible in 4D. \subsection{Line Tracking in 4D} \subsubsection{Topology of Configuration Space in 4D} Turning now to 4D, exactly analogous to the situation in 3D, an elbow at the joint of two links has a space of possible motions in 4D that is topologically $\Sph^2$, for it is the intersection of two 3-spheres. Thus the configuration space ${\cal{C}}'$ of our four-link chain for $t\in[0,1)$, ignoring self-intersections, is \begin{equation} {\cal{C}}' = \Sph^2 \times \Sph^2 \times \R_{[0,1)} \;.
\eqlab{C5} \end{equation} At $t=1$ at least one of the $2$-spheres shrinks to a point. \subsubsection{Obstruction Diagram in 4D} As in 3D, we analyze the obstruction diagram on one $2$-sphere $S_1$, that for $v_1$, at a fixed value of $t$: Ob$(v_1)$. Let $v_1(t)$ represent the position of $v_1$ on its sphere $S_1$ at time $t$. We seek the set of points Ob$(v_1)$ for which the links $(v_0,v_1,v_2)$ intersect some other segment of the chain, $s_4, s_5, \ldots, s_n$. Just as in 3D, Ob$(v_1)$ is (in nondegenerate situations) a finite set of points. This claim relies on how a line may intersect a cone. Define a {\em $(d{-}1)$-cone\/} $C(a,b,{\theta})$, for apex point $a$, axis point $b$, and cone angle ${\theta} \in [0,\pi/2]$, to be the set of points $p \in \R^d$ that form an angle ${\theta}$ with respect to the axis, i.e., which satisfy: \begin{equation} ( p - a ) \cdot (b - a ) = |p - a| |b - a| \cos {\theta} \;. \eqlab{cone} \end{equation} For the extreme values of ${\theta}$, $C(a,b,0)$ is a ray from $a$ through $b$, and $C(a,b,\pi/2)$ is a $(d{-}1)$-flat containing $a$ and orthogonal to $ab$. Note that a $1$-cone is not the triangle cone from Section~\secref{Geometric.Intersections}; rather a $1$-cone is the union of two rays from $a$. In 3D, $C(a,b,{\theta})$ is the surface of a right circular cone whose axis is the ray from $a$ through $b$, and whose points form the angle ${\theta}$ with the axis at $a$ (cf.~Fig.~\figref{four.obstacles}). Its intersection with a plane orthogonal to $ab$ is a circle. In 4D, $C(a,b,{\theta})$ is a ``right spherical cone,'' whose intersection with a $3$-flat orthogonal to $ab$ is a $2$-sphere. Note that it is no restriction to insist that ${\theta} \in [0,\pi/2]$, for we can ensure this for ${\theta} > \pi/2$ by selecting an axis point $b'$ for the cone on the other side of the apex $a$, on the line containing $ab$, thereby ``reflecting'' ${\theta}$ to $\pi - {\theta}$.
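The membership test of Eq.~\eqref{cone}, and the reflection maneuver for ${\theta} > \pi/2$, can be sketched directly in Python (for illustration only; the function names are ours):

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return math.sqrt(dot(a, a))

def on_cone(p, a, b, theta, eps=1e-9):
    """Membership in the (d-1)-cone C(a, b, theta) of Eq. (cone):
    points p whose vector from the apex a makes angle theta with the
    axis ab (works in any dimension d)."""
    pa, ba = sub(p, a), sub(b, a)
    return abs(dot(pa, ba) - norm(pa) * norm(ba) * math.cos(theta)) < eps

def reflect_axis(a, b, theta):
    """For theta > pi/2, replace the axis point b by its reflection b'
    through the apex a, turning the cone angle into pi - theta,
    which lies in [0, pi/2]."""
    bp = tuple(2 * ax - bx for ax, bx in zip(a, b))
    return bp, math.pi - theta
```

With this test, $C(a,b,0)$ accepts exactly the ray from $a$ through $b$, and the reflected cone $C(a,b',\pi-{\theta})$ accepts the same point set as $C(a,b,{\theta})$.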
\begin{lemma} The intersection of the $(d{-}1)$-cone $C(a,b,{\theta})$, ${\theta} \neq \pi/2$, with a line, ray, or segment whose containing line does not include the apex $a$, is at most two points: two points, one point, or empty. \lemlab{line.cone} \end{lemma} This claim can be seen intuitively as follows. Let $C$ be the cone and $s$ a segment in $\R^d$. If $s$ is contained in a $(d{-}1)$-flat $\Pi$ orthogonal to $ab$, then because $\Pi \cap C$ is a sphere, the result follows from Lemma~\lemref{line.sphere}. Otherwise $s$ is contained in a flat whose intersection with $C$ is an ellipsoid, and the result follows because an ellipsoid is affinely equivalent to a sphere~\cite[p.~95]{s-pg-88}. \begin{pf} Let $|ab| = 1$ without loss of generality. Translate and rotate $C$ so that $a = (0,0,\ldots,0)$ and $b=(1,0,0,0,\ldots,0)$. For a point $p=(x_1,\ldots,x_d)$, Eq.~\eqref{cone} reduces to \begin{eqnarray} p \cdot b & = & |p| \cos {\theta} \\ (x_1,\ldots,x_d) \cdot (1,0,0,0,\ldots,0) & = & \sqrt{x_1^2+\cdots+x_d^2} \cos {\theta} \\ x_1^2 & = & (x_1^2+\cdots+x_d^2) \cos^2 {\theta} \eqlab{cone.dot} \end{eqnarray} Represent the point $p$ via the parameter $t$: \begin{equation} p = ({\alpha}_1 + {\beta}_1 t,\ldots,{\alpha}_d + {\beta}_d t ) \; . \end{equation} Substitution of this into Eq.~\eqref{cone.dot} yields a quadratic equation in $t$, which has at most two roots. We now examine the degenerate solutions. Because we assumed that ${\theta} \neq \pi/2$, $\cos {\theta} \neq 0$. Thus the right-hand side of Eq.~\eqref{cone.dot} can only be zero when $x_1^2+\cdots+x_d^2 = 0$, i.e., when $p=(0,0,\ldots,0)$ is the apex $a$. This corresponds to a line through $a$, excluded by our assumptions. \end{pf} \begin{lemma} In 4D, if $(v_2 - v_0) \cdot (v_1 - v_0) \neq 0$ and $(v_2 - v_0) \cdot (v_1 - v_2) \neq 0$, then a single segment $s$ contributes at most four points to Ob$(v_1)$.
\lemlab{obs.4pts.4D} \end{lemma} \begin{pf} Moving $v_1$ sweeps out two finite cones, which are truncations of the infinite cones $C(v_0, v_2, {\theta}_0)$ and $C(v_2, v_0, {\theta}_2)$, with \begin{eqnarray} (v_2 - v_0) \cdot (v_1 - v_0) & = & |v_2 - v_0| |v_1 - v_0| \cos {\theta}_0 \eqlab{t0} \\ (v_2 - v_0) \cdot (v_1 - v_2) & = & |v_2 - v_0| |v_1 - v_2| \cos {\theta}_2 \end{eqnarray} By the preconditions of the lemma, we have ${\theta}_j \neq \pi/2$, $j=0,2$, so we may assume ${\theta}_j \in [0,\pi/2)$ by the reflection maneuver suggested previously. Consider two cases: \begin{enumerate} \item The line containing $s$ does not pass through either cone apex, $v_0$ or $v_2$. The conditions of Lemma~\lemref{line.cone} are satisfied, establishing that $s$ intersects the two cones in at most four points. Each of these points fixes a position of $v_1$ corresponding to an obstruction, and so contributes this point to Ob$(v_1)$. \item The line $H$ containing $s$ passes through $v_0$ (the case through $v_2$ is exactly analogous and will not be treated separately). Then it may be that $s \cap C(v_0, v_2, {\theta}_0)$ is a subsegment of $s$. This is because the vector $p - v_0$ makes the same angle with $v_2 - v_0$ for all $p \in s$ (cf.~Eq.~\eqref{cone}). In this case, $s$ obstructs the unique position of $v_1$ that places it on $H$, and so contributes just one point to Ob$(v_1)$. Together with the at most two points from the other cone, $s$ generates at most three points of Ob$(v_1)$. \end{enumerate} \end{pf} The case excluded by the precondition of Lemma~\lemref{obs.4pts.4D} refers to the situation in which one cone is degenerately flat, as previously illustrated in Fig.~\figref{dot.zero}. We now analyze this situation in detail. \begin{lemma} If $(v_2 - v_0) \cdot (v_1 - v_0) = 0$, then Ob$(v_1)$ is a finite set of points and arcs on $S_1$ (the $2$-sphere of $v_1$ positions).
\lemlab{cone.flat.4D} \end{lemma} \begin{pf} In this case ${\theta}_0 = \pi/2$ from Eq.~\eqref{t0}, and the infinite cone $C(v_0,v_2,\pi/2)$ degenerates to the $3$-flat orthogonal to the axis $v_0 v_2$ and including the apex $v_0$. The finite cone swept out by the link $s_0 = v_0 v_1$ is a ball $B_0$ of radius $\ell_0$ centered at $v_0$. In the 3D situation, $B_0$ is a disk (cf.~Fig.~\figref{dot.zero}); in 4D, it is a solid ball whose boundary is a $2$-sphere $S_1$ representing the possible positions for $v_1$. The obstructed positions on $S_1$ are those for which $s_0$ intersects some segment $s_i$. Consider two possibilities: \begin{enumerate} \item $s_i$ does not lie in the same $3$-flat of $\R^4$ as $S_1$. Then $s_i$ intersects $B_0$ in at most one point $p$ (because it can intersect the flat in at most one point), and then only when $s_0$ passes through $p$ do we have an obstruction. Thus $s_i$ contributes at most one point to Ob$(v_1)$. \item $s_i$ is in the same $3$-flat as $S_1$. Now we have a situation exactly analogous to that shown in Fig.~\figref{sphere.4D}: the obstruction is the intersection of the triangle cone ${_0{\triangle}_i}$ with $S_1$. Lemma~\lemref{tricone.sphere} then establishes that $s_i$ adds at most two arcs or points to Ob$(v_1)$. \end{enumerate} \end{pf} \begin{lemma} The condition $(v_2 - v_0) \cdot (v_1 - v_0) = 0$ can hold for at most one value of $t \in [0,1]$ during the movement of $v_2$ along $L$. \lemlab{dotzero.once} \end{lemma} \begin{pf} This follows immediately from our choice of $L$, which guarantees that the distance $|v_0 v_2|$ strictly increases. The condition says that the angle at $v_0$ is $\pi/2$, i.e., that $\ell_1^2 = \ell_0^2 + |v_0 v_2|^2$; a strictly increasing distance $|v_0 v_2|$ can satisfy this equation at most once. See Fig.~\figref{dotzero.once}.
\begin{figure}[htbp] \centering \includegraphics[height=5.5cm]{dotzero.once.eps} \caption{The special condition $(v_2 - v_0) \cdot (v_1 - v_0) = 0$ holds at most once.} \figlab{dotzero.once} \end{figure} \end{pf} \subsubsection{Connected Free Space in 4D} Again let $v_1(t)$ represent the position of $v_1$ on its $2$-sphere $S_1$ of possible positions. We first describe the free space for the motion of the $2$-link chain $(v_0,v_1,v_2)$, avoiding the fixed links $s_4, s_5, \ldots, s_n$. It is a subset of $\Sph^2 \times \R_{[0,1)}$. For each $t \in [0,1)$, we know from Lemmas~\lemref{obs.4pts.4D} and~\lemref{cone.flat.4D} that Ob$(v_1)$ is a finite set of points or arcs; and from Lemma~\lemref{dotzero.once} we know Ob$(v_1)$ is a finite set of points, except for at most one $t$, at which it is a set of points and arcs. Thus if $v_1(t)$ avoids these obstructions, it avoids intersection with the remainder of the chain. But now it should be clear that it is easy for $v_1(t)$ to ``run away'' from the obstructions. Think of its sphere of possible positions growing and shrinking with time $t$. $v_1(t)$ must avoid a set of points at any one time, and once (cf.~Lemma~\lemref{dotzero.once}), a set of arcs. This is easily done: there is no way to ``cage in'' $v_1(t)$ with these obstacles. Another view of this situation is that the configuration space $\Sph^2 \times \R_{[0,1)}$ is $3$-dimensional, the obstructions Ob$(v_1(t))$ for $t \in [0,1)$ are $1$- or $0$-dimensional, and the removal of a 1D set cannot disconnect a 3D set (cf.~proof of Lemma~\lemref{free.straighten}). The remainder of this subsection establishes this claim more formally. A {\em path\/} in a topological space $X$ is a continuous function $\gamma :[0,1]\rightarrow X$. A space is {\em path-connected\/} if any two of its points can be joined by a path~\cite{a-bt-79}. We first work with the space ${\cal{C}}'_1$: the positions for $v_1$, for $t \in [0,1)$. Later we will add in $t=1$, and positions for $v_3$.
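As a concrete cross-check of Lemma~\lemref{dotzero.once} (ours, not the paper's): with rigid link lengths $\ell_0 = |v_0 v_1|$ and $\ell_1 = |v_1 v_2|$, the law of cosines gives $(v_2 - v_0)\cdot(v_1 - v_0) = (d^2 + \ell_0^2 - \ell_1^2)/2$, where $d = |v_0 v_2|$. This is strictly increasing in $d$, so along a motion in which $d$ strictly increases the dot product can vanish at most once. A small numeric sketch with illustrative names:

```python
def dot_v20_v10(d, l0, l1):
    """(v2 - v0)·(v1 - v0) for rigid links |v0 v1| = l0, |v1 v2| = l1 and
    separation d = |v0 v2|.  By the law of cosines,
    l1^2 = l0^2 + d^2 - 2 (v1 - v0)·(v2 - v0),
    so the dot product equals (d^2 + l0^2 - l1^2)/2, increasing in d."""
    return 0.5 * (d*d + l0*l0 - l1*l1)

def sign_changes(values):
    """Count sign changes in a sequence of nonzero values."""
    signs = [v > 0 for v in values if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)
```

For $\ell_0 = 1$, $\ell_1 = 2$ the dot product vanishes exactly at $d = \sqrt{3}$, so any increasing sequence of $d$ values exhibits at most one sign change.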
\begin{lemma} The free space ${\cal{F}}'_1 \subset {\cal{C}}'_1$ for $v_1$ in the configuration space ${\cal{C}}'_1 = \Sph^2 \times \R_{[0,1)}$ is path-connected. \lemlab{connected.1} \end{lemma} \begin{pf} It will help to view our configuration space as follows. The $2$-sphere $S_1$ is represented by a flat two-dimensional sheet, and $\R_{[0,1)}$ is represented as a vertical axis. The result is a three-dimensional space, analogous to Fig.~\figref{config.tube}, that could look as depicted in Fig.~\figref{connected.space}. The point obstacles Ob$(v_1)$ become paths monotone with respect to the vertical $t$-axis. At one $t=t_1$ we may have arc obstacles as well. We need to show that $v_1(0)$ is connected by a path to $v_1(t')$, for any $t' < 1$. \begin{figure}[htbp] \centering \includegraphics[height=8cm]{connected.space.eps} \caption{The free space ${\cal{F}}_1$ for $v_1$ is path-connected. $\pi_1$ (dark) connects $p_1(0)$ to $p_1(1)$. Ob$(v_1)$ includes points at a fixed $t$, forming curves (shaded) over time. The shaded subspace at time $t=t_1$ includes arcs in Ob$(v_1)$.} \figlab{connected.space} \end{figure} We proceed in two cases. \begin{enumerate} \item Ob$(v_1)$ contains only points for all $t \in [0,1)$. Let $N$ be the maximum number of points in Ob$(v_1)$ over all $t$; we know $N \le 2n$. A $2$-sphere with a finite number $N$ of points removed is path-connected. For each $t$, remove $N$ points from the corresponding $S_1(t)$: those in Ob$(v_1)$ at that $t$, and extra distinct points to ``pad out'' to $N$. Any two spheres with the same number of points removed are homeomorphic. Therefore ${\cal{F}}'_1$ is homeomorphic to the product of $S_1(0)$ with $N$ points removed and $\R_{[0,1)}$. Because each of those spaces is path-connected, and the product of two path-connected spaces is path-connected, we have established the claim. \item Ob$(v_1)$ contains arcs at $t=t_1$.
The main idea here is to choose a point $p_1 = v_1(t_1)$ that is unobstructed at time $t=t_1$, and then connect from $v_1(0)$ to $p_1$, and from $p_1$ to $v_1(t')$. It is clear, as we have shown in Case~1, that the spaces ${\cal{F}}_- = {\cal{F}}'_{1/t \in [0,t_1)}$ and ${\cal{F}}_+ = {\cal{F}}'_{1/t \in (t_1,1)}$ are path-connected. We will prove that there exist points $p_0 \in {\cal{F}}_-$ and $p_2 \in {\cal{F}}_+$ such that $p_0$ and $p_2$ are connected by a path. We will call a point $p$ {\em free\/} if it does not belong to any obstruction diagram. Let $p_1 \in S_1(t_1)$ be a free point on $S_1$ at $t_1$. It is clear that such a point exists, since the obstruction diagram is a finite set of arcs and points. It is also clear that there exists a neighborhood $U \subset {\cal{F}}'_1$ of $p_1$ all of whose points are free. Choose $p_0 \in U$, $p_0 \in S_1(t_0)$, $t_0 < t_1$ and $p_2 \in U$, $p_2 \in S_1(t_2)$, $t_1 < t_2$. See Fig.~\figref{connected.space}. Both points are free and can be connected by a path in $U$ to $p_1$. But $p_0 \in {\cal{F}}_-$ and $p_2 \in {\cal{F}}_+$, both path-connected spaces. Thus we may connect $v_1(0)$ to $p_0$ to $p_1$ to $p_2$ to $v_1(t')$. \end{enumerate} \end{pf} We now address the endpoint $t=1$, extending ${\cal{C}}'_1$ to ${\cal{C}}_1$ for $t \in [0,1]$. As $v_2$ approaches $q$ on $L$, one of the spheres, that for $v_1$ by our assumptions, shrinks to zero radius. Thus Fig.~\figref{connected.space} is not an accurate depiction near $t=1$, for the configuration space narrows to a point there. \begin{lemma} The free space ${\cal{F}}_1$ for $v_1$ in the full configuration space ${\cal{C}}_1$ is path-connected. \lemlab{connected.2} \end{lemma} \begin{pf} We have chosen $q$ and $L$ in Lemma~\lemref{Choosing.L} so that the $t=1$ endpoint is free in the sense that the straightened chain $v_0 v_1 v_2$ does not intersect the fixed portion of the chain.
Thus there is a neighborhood $U$ of $t=1$ such that ${\cal{C}}_1$ is devoid of all obstructions within that neighborhood. Choose $t' \in U$ and apply Lemma~\lemref{connected.1} to yield a path from $v_1(0)$ to $v_1(t')$. Connect within $U$ from $v_1(t')$ to the endpoint $v_1(1)$. \end{pf} Now we include $v_3$ in the analysis. \begin{lemma} The free space ${\cal{F}} \subset {\cal{C}}$ for both $v_1$ and $v_3$ in the configuration space ${\cal{C}}$ for $t \in [0,1]$ is path-connected. \lemlab{connected.3} \end{lemma} \begin{pf} The key here is the independence of the motions of $v_1$ and $v_3$. Let $\pi_1$ be a path for $v_1(t)$ through ${\cal{F}}_1$, whose existence is guaranteed by Lemmas~\lemref{connected.1} and~\lemref{connected.2}. Now construct ${\cal{F}}_3$ as the possible positions $v_3(t)$ for $v_3$, avoiding at each time Ob$(v_3(t))$, where this time the obstructions include not only the fixed links $s_4, s_5, \ldots, s_n$, but also the two moving links $s_0$ and $s_1$, determined by $\pi_1$. If $v_3(t)$ avoids Ob$(v_3(t))$ for each $t$, then all intersections are avoided: we do not need to include the moving links in ${\cal{F}}_1$, because intersection is symmetric---if the links $s_2$ and $s_3$ do not intersect $s_0$ and $s_1$, then $s_0$ and $s_1$ do not intersect $s_2$ and $s_3$. For a fixed $t$, the obstacles are fixed segments, and Ob$(v_3)$ is again a finite set of points, or, for at most one $t$, a set of arcs: Lemmas~\lemref{obs.4pts.4D} and~\lemref{dotzero.once} apply unchanged. The independence of the motion of $v_3$ from $v_1$ permits us to treat the moving segments $s_0$ and $s_1$ on par with the fixed segments: the only difference is that their obstacle points move through ${\cal{C}}_3$ differently. Therefore a path $\pi_3$ for $v_3(t)$ may be found in ${\cal{F}}_3 \subset {\cal{C}}_3$. 
The two paths $\pi_1$ and $\pi_3$, together with the ray $L$ for $v_2$, constitute a path for moving the $4$-link chain $(v_0,v_1,v_2,v_3,v_4)$ through ${\cal{C}}$ while maintaining simplicity. \end{pf} \noindent This finally completes the proof of Theorem~\theoref{line.tracking}. \subsection{Motion Planning} \seclab{Canny.2} We now know that a path avoiding self-intersection exists, i.e., that either the joint $v_1$ or $v_3$ can be straightened. The next step is to compute such a path algorithmically. We rely on general motion planning algorithms, as in Section~\secref{Canny.1}. Our ``robot'' consists of the four links $(v_0,v_1,v_2,v_3,v_4)$ moving in the 5-dimensional configuration space ${\cal{C}}$, Eq.~\eqref{C5}. The subspace ${\cal{C}}_0$ that avoids self-intersection between the four links is some semialgebraic subset of ${\cal{C}}$, semialgebraic because the constraints on self-intersection may be written in Tarski sentences (see, e.g., \cite{m-crag-97}). The free configuration space ${\cal{F}}$ is composed of the points of ${\cal{C}}_0$ that avoid the obstacles, which is again a semialgebraic set. Canny's Roadmap algorithm achieves a time and space complexity of $O(n^k \log n)$ for a $k$-dimensional configuration space with $n$ obstacles; in our case $k=5$, giving $O(n^5 \log n)$. The algorithm produces a piecewise algebraic path through ${\cal{F}}$, of $O(n^5)$ pieces. Each piece constitutes a constant number of moves, and so each joint straightening can be accomplished in $O(n^5)$ moves. Repeating the planning and straightening $n$ times leads to $O(n^6)$ moves in $O(n^6 \log n)$ time. Because each choice of $L$ requires at most $O(n^4)$ time by Lemma~\lemref{Choosing.L}, the time complexity is dominated by the path planning, thereby establishing the bounds claimed in Theorem~\theoref{closed.4D}. In the same way that Algorithm~1b improved on Algorithm~1a by avoiding motion planning, it is likely that Algorithm~3 could be improved by an {\em ad hoc\/} algorithm.
\section{Higher Dimensions} We have already shown that every simple open chain or tree in 4D can be straightened, and every closed chain convexified. Our final task is to prove that these results hold in higher dimensions, using the results from 4D. For an open chain, we straighten four links at a time and then repeat the procedure until the chain is straight. If the chain or tree contains fewer than four links, then it spans at most a $k$-flat for $k \le 3$, and it can be included in $\R^4$. For a closed chain, our algorithm also moves four links at a time. Four links determine at most a $k$-flat for $k \le 4$, which means that they can be included in a $4$-flat $H$ of $\R^d$, $d \ge 4$. We have already shown that these four links, for all types of chains, can be straightened in 4D; therefore, they can be straightened in this $4$-flat $H \subset \R^d$. We only have to worry about the pieces of the remainder of the chain that intersect $H$. Since we are dealing with segments, their intersection with $H$ is either a point or a segment, and these are precisely the kinds of obstructions that we have proven can be avoided in $\R^4$. Therefore, the straightening of these four links can be completed in $H$, and therefore in $\R^d$, while maintaining rigidity and simplicity. The complexity of the algorithms in $\R^d$, $d \ge 4$, is the same as for the algorithms in 4D, for all computations are performed in $4$-flats. This proves Theorem~\theoref{d.ge.4}. \vspace{2mm} \small \noindent{\bf Acknowledgements.} We thank Erik Demaine and Godfried Toussaint for helpful comments, and Lee Rudolph for help with topology. We are grateful for the perceptive comments of the referees, which not only led to increased clarity throughout, but also improved the complexities of Algorithms~1a and~1b. \normalsize \bibliographystyle{alpha}
\section{Introduction} To provide the most precise theoretical predictions for observables at colliders, it is helpful to resum various kinds of large logarithms to all orders in the coupling constants. These large logarithms are usually induced by soft and collinear radiation. The resummation of these large logarithms has been achieved by making use of the factorization of the cross section into a set of functions at different energy scales and the renormalization group evolution controlled by the corresponding anomalous dimensions \cite{Sterman:1986aj,Catani:1989ne,Korchemsky:1993uz,Contopanagos:1996nh,Forte:2002ni,Banfi:2004yd,Becher:2006nr,Luisoni:2015xha}. So far, the formulas and results at leading power have been extensively explored. Beyond leading power, there are a number of studies aimed at understanding the subleading power threshold effects for colorless final states \cite{Bonocore:2014wua,Bonocore:2015esa,Bonocore:2016awd,DelDuca:2017twk,Bahjat-Abbas:2018hpv} or colored final states \cite{Bhattacharya:2018vph,vanBeekveld:2019prq,vanBeekveld:2019cks,Boughezal:2019ggi}, the subleading power corrections for $N$-jettiness subtractions at next-to-next-to-leading order in the strong coupling constant $\alpha_s$ \cite{Moult:2016fqy, Boughezal:2016zws,Moult:2017jsg,Boughezal:2018mvf,Ebert:2018lzn}, the subleading power corrections to the transverse momentum spectrum of a Higgs boson or a gauge boson \cite{Balitsky:2017flc,Balitsky:2017gis,Ebert:2018gsn,Cieri:2019tfv}, the anomalous dimensions of subleading power operators \cite{Hill:2004if,Beneke:2005gs,Freedman:2014uta,Goerke:2017lei,Beneke:2017ztn,Beneke:2018rbh,Beneke:2019kgv}, and the resummation of double logarithms at subleading power for the thrust observable \cite{Moult:2018jjd,Moult:2019mog,Moult:2019uhz}, the threshold cross section of Drell-Yan-like processes \cite{Beneke:2018gvs,Bahjat-Abbas:2019fqa,Beneke:2019mua,Beneke:2019oqx}, and the energy-energy correlator in the back-to-back limit
\cite{Moult:2019vou}. One kind of large double logarithm at subleading power appears in loop-induced processes \cite{Jikia:1996bi,Fadin:1997sn,Kotsky:1997rq,Akhoury:2001mz,Melnikov:2016emg,Braaten:2017lxx,Braaten:2017ukc}, such as the Higgs boson decay $H\to \gamma\gamma$ via a massive quark loop, or similar processes with a massive quark propagator \cite{Penin:2014msa,Penin:2016wiw,Liu:2017vkm,Alte:2018nbn,Liu:2018czl,Alte:2019iug}. If the mass of the quark in the loop, for example the bottom quark mass $m_b$, is much less than the Higgs boson mass $m_h$, the amplitude contains large double logarithms $\alpha_s^n\ln^{2n}( m_h^2/m_b^2 )$. In contrast to the Sudakov double logarithms, which are induced by soft and collinear gauge bosons, these double logarithms are induced by a fermion. For some specific processes, they have been obtained up to the two-loop level \cite{Inoue:1994jq,Spira:1995rr,Fleischer:2004vb,Harlander:2005rq,Anastasiou:2006hc,Aglietti:2006tp,Spira:2016ztx}, and resummed to all orders by using the off-shell Sudakov form factor \cite{Kotsky:1997rq,Akhoury:2001mz} or by applying a sequence of identities graphically \cite{Liu:2017vkm,Liu:2018czl}. In this work, we propose a method to resum the large double logarithms in loop-induced processes with an effective field theory. This study can help in understanding the all-order structure of subleading-power logarithms, and the method developed in this work may also be useful for resumming general large logarithms at subleading power, especially for processes in which the subleading power corrections are numerically significant. We illustrate our resummation scheme with the example of the process $H\to \gamma\gamma$ via a bottom-quark loop, leaving the generalization to non-Abelian cases to future work. We notice that a different method has been developed in \cite{Liu:2019oav} to deal with the same problem.
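To get a feeling for the size of these logarithms, one can plug in illustrative masses (our numbers, $m_h = 125$~GeV and $m_b = 4.18$~GeV, are assumptions for illustration, not values quoted here):

```python
import math

# Illustrative inputs (assumed, not taken from the text)
m_h = 125.0   # GeV, Higgs boson mass
m_b = 4.18    # GeV, bottom quark mass

# The logarithm entering the tower alpha_s^n * L^(2n)
L = math.log(m_h**2 / m_b**2)
print(f"L = {L:.2f}, L^2 = {L**2:.1f}")
```

With these numbers $L \approx 6.8$ and $L^2 \approx 46$, so the nominally power-suppressed double-logarithmic terms are numerically large, which is why their all-order resummation matters.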
\begin{figure}[h] \centering \includegraphics[width=0.48\linewidth]{LO.pdf} \caption{The leading order Feynman diagram of the decay $H\to \gamma\gamma$ via a bottom-quark loop. The other diagram with inverted fermion flow is not shown.} \label{fig:sketch} \end{figure} \section{Factorization} As shown in Fig.~\ref{fig:sketch}, the leading order amplitude of $H \to \gamma(k_1)\gamma(k_2)$ via a bottom-quark loop can be written as \begin{align} \mathcal{A} = &ie_q^2y_bN_c \int \frac{d^dp}{(2\pi)^d} \frac{\textrm{Tr}[(p\!\!\!\slash +m_b) \ep^*\!\!\!\!\!\slash~(k_2) (p\!\!\!\slash+k\!\!\!\slash_2 +m_b) (p\!\!\!\slash-k\!\!\!\slash_1+m_b) \ep^*\!\!\!\!\!\slash~(k_1)]}{(p^2-m_b^2) [(p+k_2)^2-m_b^2] [(p-k_1)^2-m_b^2]} \label{eqlo} \end{align} with $y_b$ the bottom-quark Yukawa coupling, $e_q=-\frac{1}{3}e$ its electric charge, and $d=4-2\epsilon$. It is convenient to choose two light-like directions $n$ and $\bar{n}$ such that \begin{align} k_1^{\mu} = \frac{m_h}{2}n^{\mu}, \qquad k_2^{\mu} = \frac{m_h}{2}\bar{n}^{\mu}. \end{align} Then, any momentum $q$ can be decomposed as \begin{align} q^{\mu} = q^+\frac{\bar{n}^{\mu}}{2} + q^- \frac{n^{\mu}}{2} + q_{\perp}^{\mu} \end{align} with $q^+ \equiv q\cdot n, q^- \equiv q\cdot \bar{n}.$ The $n$-collinear momentum scales as $(q^+,q^-,q_{\perp})\sim m_h (\lambda^2,1,\lambda)$, while the soft momentum scales as $(q^+,q^-,q_{\perp})\sim m_h(\lambda,\lambda,\lambda)$ with $\lambda=m_b/m_h$. For later analyses, we also need the hard-collinear mode $m_h(\lambda, 1, \sqrt{\lambda})$ as well as the quasi-hard-collinear mode $m_h(\lambda, 1, \lambda)$. This quasi-hard-collinear momentum is present only for specific loop momenta. It has the same offshellness as the hard-collinear momentum, but with smaller transverse momentum fluctuations. The amplitude in Eq.(\ref{eqlo}) (or higher-order results) can be expanded in a series in $\lambda$ using the method of expansion by regions \cite{Smirnov:1997gx,Beneke:1997zp}.
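As a numeric sanity check of this decomposition (ours; the explicit vectors $n = (1,0,0,1)$ and $\bar{n} = (1,0,0,-1)$ in the mostly-minus metric, with $n\cdot\bar{n} = 2$, are an assumed standard choice rather than something fixed by the text):

```python
def mink(a, b):
    """Minkowski product with metric diag(+,-,-,-)."""
    return a[0]*b[0] - sum(x*y for x, y in zip(a[1:], b[1:]))

# Assumed explicit light-like reference vectors
n    = (1.0, 0.0, 0.0,  1.0)
nbar = (1.0, 0.0, 0.0, -1.0)

def decompose(q):
    """Return (q+, q-, q_perp) with q+ = q·n, q- = q·nbar and
    q_perp = q - q+ nbar/2 - q- n/2."""
    qp = mink(q, n)
    qm = mink(q, nbar)
    qperp = tuple(qi - qp*nbi/2 - qm*ni/2
                  for qi, ni, nbi in zip(q, n, nbar))
    return qp, qm, qperp

def reconstruct(qp, qm, qperp):
    """Rebuild q from its light-cone components."""
    return tuple(qp*nbi/2 + qm*ni/2 + qpi
                 for ni, nbi, qpi in zip(n, nbar, qperp))
```

For example, $q = (5,1,2,3)$ gives $q^+ = 2$, $q^- = 8$, $q_\perp = (0,1,2,0)$; the three pieces reassemble to $q$, and $q_\perp$ is orthogonal to both $n$ and $\bar{n}$.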
However, we will derive the leading contributions with soft-collinear effective theory (SCET) \cite{Bauer:2000ew,Bauer:2000yr,Bauer:2001yt,Bauer:2002nz,Beneke:2002ph}. Because the power counting is explicit in the effective Lagrangian and operators, we gain systematic control over the large logarithms in the power expansion and can resum them to all orders in $\alpha_s$. First, we present a factorized form of the amplitude to all orders in $\alpha_s$. The QCD current that induces the process $H\to \gamma\gamma$ is given by \begin{align} O(x,y_1,y_2)= J(x) \cdot i \mathcal{L}_{\gamma}(y_1) \cdot i\mathcal{L}_{\gamma}(y_2) \end{align} where $J(x)\equiv -y_b \phi(x)\bar{\psi}(x) \psi (x)$ and $\mathcal{L}_{\gamma}(y) \equiv e_q \bar{\psi}(y)A\!\!\!\slash(y) \psi(y) $ with $ \phi(x), \psi(x), A(x) $ the Higgs, bottom quark, and photon fields, respectively. The amplitude can be written as \begin{align} \mathcal{A}=&\langle k_1; k_2| \int d^d x \int d^d y_1 \int d^d y_2 \bold{T}\big[ O(x,y_1,y_2)\big]| p_h \rangle, \label{eqamp} \end{align} where $k_1,k_2$ are the momenta of the photons, and $p_h$ is the momentum of the Higgs boson. We can eliminate the Higgs field and set $x=0$. Now we match the QCD current to SCET, \begin{align} &O(y_1,y_2) \to O_h(y_1,y_2) + \int ds dt [ \tilde{C}_n(s,t) O_n(y_1,y_2,s,t) \nonumber\\ &+\tilde{C}_{\bar{n}}(s,t) O_{\bar{n}}(y_1,y_2,s,t) +\tilde{C}_{s}(s,t) O_{s}(y_1,y_2,s,t) ]. \label{eqmatch} \end{align} Here $O_h(y_1,y_2) $ is obtained from $O(y_1,y_2)$ by neglecting the quark mass and results in the amplitude $\mathcal{A}_H$.
The other terms are given by \begin{subequations} \begin{align} &\int d^d y_2 \bold{T}[O_n(y_1,y_2,s,t)] =\bold{T}[J^{B1}_n(s,t), i\mathcal{L}_{m,n}^{(0)}(y_1) ] , \label{eq:coll}\\ &\int d^d y_1 \bold{T}[O_{\bar{n}}(y_1,y_2,s,t)] =\bold{T}[J^{B1}_{\bar n}(s,t), i \mathcal{L}_{m,\bar{n}}^{(0)}(y_2) ] , \label{eq:anticoll}\\ &\bold{T}[O_s(y_1,y_2,s,t)] =\bold{T}[J^{(A0,A0)}_{n\bar{n}}(s,t), i\mathcal{L}_{s,n}^{(1/2)}(y_1), i\mathcal{L}_{s,\bar{n}}^{(1/2)}(y_2) ].\label{eq:soft} \end{align} \end{subequations} The currents are defined following the convention in \cite{Beneke:2017ztn}, \begin{align} J^{B1}_n(s,t)&=\frac{e_q y_b}{\bar{\mathcal{P}}_1} \bar{\chi}_n(t\bar{n}) \ep^*\!\!\!\!\!\slash_{\perp}(k_2) \frac{\bar{n}\!\!\!\slash}{2} \chi_n(s\bar{n}), \label{eq:JB1}\\ J^{(A0,A0)}_{n\bar{n}}(s,t) &=-y_bJ^{A0}_{\bar{n}}(s) J^{A0}_{n}(t)=-y_b \bar{\zeta}_{\bar{n}}(sn) \zeta_n(t\bar{n}), \end{align} where the collinear and hard-collinear quark fields are \cite{Bauer:2002nz} \begin{align} \chi_n(x) \equiv W_c^{\dagger}\xi_c(x), ~~~~ \zeta_n(x) \equiv W_{hc}^{\dagger}\xi_{hc}(x), \end{align} and the operator $\bar{\mathcal{P}}_1$ picks out the $O(\lambda^0)$ momentum component of the $\bar{\chi}_n $ field. One of the inserted vertices can be found in \cite{Leibovich:2003jd} \begin{align} \mathcal{L}_{m,n}^{(0)}(y) =\frac{e_q m_b m_h}{\bar{\mathcal{P}}_1\bar{\mathcal{P}}_2} \bar{\chi}_n(y)A\!\!\!\slash_{\perp}(y) \frac{\bar{n}\!\!\!\slash}{2} \chi_n(y), \end{align} where we have omitted those terms that do not contribute to the amplitude. The superscript of $\mathcal{L}_{m,n}^{(0)}$ denotes that this is a leading-power interaction. The operators $\bar{\mathcal{P}}_{1}$ and $\bar{\mathcal{P}}_{2}$ here act on the $\chi_n$ and $\bar{\chi}_n$ fields, respectively.
The other inserted vertices are \begin{align} \mathcal{L}_{s,n}^{(1/2)}(y) =e_q \bar{\xi}_{\rm qhc}(y)A\!\!\!\slash_{\perp}(y)q_{s}(y) + {\rm h.c.}, \label{eqLsn} \end{align} where we introduce the offshell field $ \bar{\xi}_{\rm qhc}(y)$ with the momentum $p_{\rm qhc}\sim (\lambda,1,\lambda)$ due to momentum conservation. Compared with the hard-collinear mode, it has smaller transverse momentum. We emphasise that this offshell field appears only as an intermediate state. The power counting of $ \mathcal{L}_{s,n}^{(1/2)}$ is $O(\lambda^{1/2})$. The appearance of the subleading current $J^{B1}$ in Eqs.(\ref{eq:coll},\ref{eq:anticoll}) and two inserted vertices $ \mathcal{L}_{s,n}^{(1/2)}$ in Eq.(\ref{eq:soft}) indicates that the large double logarithms in this process are logarithms at subleading power. The matching in Eq.(\ref{eqmatch}) can be obtained by integrating out hard momentum in the loop or expanding the QCD amplitude with the method of expansion by regions, and has been verified by reproducing the leading logarithms in fixed-order QCD results. Then we define the hard function for $O_n$ as \begin{align} H_n(z)&= \int ds dt e^{im_h z t} e^{im_h\bar{z}s} \tilde{C}_n(s,t) , \end{align} where we have used $z$ to denote the momentum fraction of one jet in the total collinear momentum and $\bar{z}\equiv 1-z$. The jet function for $O_n$ is defined as \begin{align} &\frac{-e_q m_b m_h N_c}{16\pi^2}\textrm{Tr}[\ep^*\!\!\!\!\!\slash_{\perp}(k_1)\frac{n\!\!\!\slash}{2} \Gamma ] J_n(z) \equiv \int d^d y e^{ik_1\cdot y} \langle k_1| \bold{T}[ \bar{\chi}_{n,z}(0)\Gamma \chi_{n,\bar{z}}(0), i \mathcal{L}_{m,n}^{(0)}(y) ]| 0 \rangle , \label{eqJc} \end{align} where $\bar{\chi}_{n,z}(0)$ denotes the collinear jet field with a momentum fraction $z$ of the total collinear momentum, i.e., $p^-\equiv m_h z$, and $\Gamma$ represents any combination of Dirac matrices.
The amplitude induced by $O_n$ is given by \begin{align} \mathcal{A}_C=&\frac{y_b e_q^2N_c}{8\pi^2}\epsilon^*_{\perp}(k_1)\cdot \epsilon^*_{\perp}(k_2) m_b \int_0^1 d z \frac{1}{z}H_n (z) J_n(z), \label{eqAc} \end{align} where the factor $1/z$ comes from the denominator in Eq.(\ref{eq:JB1}). Similarly we obtain the amplitude induced by $O_{\bar{n}}$, denoted as $\mathcal{A}_{\bar{C}}$. The hard function for $O_s$ is given by \begin{align} H^s&= \int ds dt e^{im_h s } e^{im_h t} \tilde{C}_s(s,t). \end{align} We define the jet function in the $n$-collinear direction \begin{align} J_n^{s}(k_1,p)=\int d^d y e^{-ip\cdot y}\langle k_1| \bold{T}[\zeta(0),i\bar{\xi}_{\rm qhc}(y)A\!\!\!\slash_{\perp}(y)]|0\rangle \end{align} and in the $\bar{n}$-collinear direction \begin{align} J_{\bar{n}}^{s}(k_2,\bar{p})=\int d^d y e^{i\bar{p}\cdot y}\langle k_2| \bold{T}[\bar{\zeta}(0),iA\!\!\!\slash_{\perp}(y)\xi_{\rm qhc}(y)]|0\rangle . \end{align} The soft function is then defined as \begin{align} S^s(p,\bar{p})=& \int d^{d}y_1 \int d^{d} y_2 e^{ip\cdot y_1 -i\bar{p}\cdot y_2 } \langle 0| \bold{T}[ \hat{Y}^{\dagger}_{\bar{n}}(0,\bar{p})q_s(y_1)\bar{q}_s(y_2) \hat{Y}_{n}(0,p)]|0 \rangle , \label{eqSs} \end{align} where we have inserted the soft Wilson line along the hard-collinear particle, \begin{align} \hat{Y}_n(x,p)=\bold{\bar{P}} \exp\left( -i g_s \int^{\infty}_0 ds n\cdot A_s(x+s n) e^{-isn\cdot p}\right), \end{align} and the soft Wilson line along the hard-anti-collinear particle, \begin{align} \hat{Y}^{\dagger}_{\bar{n}}(x,\bar{p})=\bold{P} \exp\left( i g_s \int^{\infty}_0 ds \bar{n}\cdot A_s(x+s \bar{n}) e^{is\bar{n}\cdot \bar{p}}\right), \end{align} in order to decouple the soft interaction from the hard-(anti-)collinear particles. These Wilson lines are obtained following the method in \cite{Chay:2004zn}. 
To the leading logarithmic accuracy the jet function and soft function are given by \begin{align} J_n^{s}(k_1,p)&=\ep^*\!\!\!\!\!\slash_{\perp}(k_1)\frac{n\!\!\!\slash}{2}\frac{1}{p^+}J_n^{s}(p^+),\\ J_{\bar{n}}^{s}(k_2,\bar{p})&=\ep^*\!\!\!\!\!\slash_{\perp}(k_2)\frac{\bar{n}\!\!\!\slash}{2}\frac{1}{\bar{p}^-}J_{\bar{n}}^{s}(\bar{p}^-),\\ S^s(p,\bar{p})&= \frac{im_b}{p^2-m_b^2} (2\pi)^d \delta^{(d)}(p-\bar{p}) S^s(p),\label{eqSsll} \end{align} where the scalar functions $J_n^{s}(p^+),J_{\bar{n}}^{s}(\bar{p}^-),S^s(p)$ are just one at leading order in $\alpha_s$. We keep the factor $m_b$ in Eq.(\ref{eqSsll}) because the helicity must be flipped on the soft quark propagator. After contracting the Lorentz indices, we obtain the amplitude induced by $O_s$ \begin{align} \mathcal{A}_S = & 2i y_b e_q^2 N_c \epsilon^*_{\perp}(k_1)\cdot \epsilon^*_{\perp}(k_2) m_bH^s \int \frac{d^{d}p}{(2\pi)^{d}} \frac{J_n^s(p^+) S^s(p) J_{\bar{n}}^s(\bar{p}^-)}{p^+ p^- (p^2 - m_b^2)} . \label{eqAs1} \end{align} Summarising the above results, we obtain the amplitude \begin{align} \label{eq0} \mathcal{A}& =(\mathcal{A}_H+\mathcal{A}_C+\mathcal{A}_{\bar{C}}- \mathcal{A}_S)[1+O(\lambda)]. \end{align} The minus sign of $\mathcal{A}_S$ arises because the zero-bin subtraction in the (anti-)collinear sectors has been performed \cite{Manohar:2006nz,Idilbi:2007ff,Idilbi:2007yi}. The collinear, anti-collinear and soft sectors are separated by the rapidity of the momentum $p$. There exist rapidity divergences, as shown in Eqs.(\ref{eqAc},\ref{eqAs1}), which must be regularised in the intermediate steps but cancel in the end among the collinear, anti-collinear and soft sectors. We choose the $\Delta$-regulator \cite{Chiu:2009yx} to regularise these rapidity divergences \footnote{The analytic regulator \cite{Becher:2012xx} is often chosen to regularise the rapidity divergences. However, it is not appropriate in this case of $H\to \gamma\gamma$ if we want to resum the large logarithms to all orders. 
The reason is that the leading-order result contains poles like $(p^-)^{-1-\alpha}$. Higher-order loop corrections would generate structures like $(p^-)^{-\epsilon}/\epsilon$. As a result, the amplitude at higher orders contains $(p^-)^{-1-\alpha-\epsilon}/\epsilon$. One needs to expand the result first in $\alpha$ and then in $\epsilon$ to get the correct result. This means that one cannot perform renormalization simply before integrating over $p$. But after integrating over $p$, it is not easy to distinguish the different origins of the large logarithms, i.e., the factorization structure becomes unclear.} Accordingly, we make the replacements in the denominators, \begin{align} \frac{1}{ (p-k_1)^2-m_b^2 }& \to \frac{1}{ (p-k_1)^2-m_b^2 +\Delta_1}, \\ \frac{1}{ (p+k_2)^2-m_b^2 }& \to \frac{1}{ (p+k_2)^2-m_b^2 +\Delta_2}. \end{align} These regulators $\Delta_{1,2}$ have mass dimension two, and are assumed to be much less than $m_b^2$ but cannot be dropped in the denominators even after the power expansion. Therefore we rewrite Eqs.(\ref{eqAc},\ref{eqAs1}) as \begin{align} \mathcal{A}_C=&\frac{y_b e_q^2N_c}{8\pi^2} \epsilon^*_{\perp}(k_1)\cdot \epsilon^*_{\perp}(k_2) m_b \int_0^1 d z \frac{1}{z+\Delta_2/m_h^2}H_n (z) J_n(z) \label{eqAc2} \end{align} and \begin{align} \mathcal{A}_S = & 2i y_b e_q^2 N_c \epsilon^*_{\perp}(k_1)\cdot \epsilon^*_{\perp}(k_2) m_bH^s\nonumber\\ & \int \frac{d^{d}p}{(2\pi)^{d}} \frac{J_n^s(p^+) S^s(p) J_{\bar{n}}^s(\bar{p}^-)}{(p^+ - \Delta_1/m_h) (p^-+\Delta_2/m_h) (p^2 - m_b^2)} . \label{eqAs} \end{align} Notice that the $\Delta$-regulator is applied only to the integration over the outermost quark loop momentum. It is not implemented for the higher-order loop integrations induced by gluons. Therefore, with this regulator, the leading-order rapidity divergences exist in the form of $\ln^n \Delta_{1,2}/m_h^2$, while higher-order divergences are still in the form of $1/\epsilon^n$.
The large logarithms associated with these higher-order divergences can then be separated from the leading-order ones without ambiguity, since the two now take different forms. Besides the rapidity divergences, there are the usual infrared and ultraviolet divergences in the hard, collinear and soft sectors, respectively. They can be tamed with dimensional regularisation. The loop-induced processes are different from those having tree-level contributions, since the leading-order contributions in the effective theory, i.e., the hard, collinear and soft sectors, already contain divergences. These leading-order divergences are not renormalized in the usual multiplicative renormalization scheme. Instead, they cancel each other among the hard, collinear and soft sectors. To see the structure of divergences more clearly, we rewrite Eq.(\ref{eq0}) as (dropping the $O(\lambda)$ corrections) \begin{align} \mathcal{A}=&\mathcal{A}_H\left(\epsilon\right) +\mathcal{A}_{C}\left(\epsilon,\Delta_2\right) +\mathcal{A}_{\bar{C}}\left(\epsilon,\Delta_1\right) -\mathcal{A}_S\left(\epsilon,\Delta_1,\Delta_2\right) . \end{align} The $\epsilon$-poles in $\mathcal{A}_H$ are infrared divergences since the propagators are all massless. The $\epsilon$-poles in $\mathcal{A}_{C},\mathcal{A}_{\bar{C}}$ and $\mathcal{A}_S$ are ultraviolet divergences, generated when the transverse momentum $p_T$ is integrated up to infinity. Then we can rearrange the above equation as \begin{align} \mathcal{A}&=\mathcal{A}_H\left(\epsilon\right) +\left[\mathcal{A}_{C}\left(\epsilon,\Delta_2\right)\right]_{p_T > m_h} +\left[\mathcal{A}_{\bar{C}}\left(\epsilon,\Delta_1\right)\right]_{p_T > m_h} -\left[\mathcal{A}_S\left(\epsilon,\Delta_1,\Delta_2\right)\right]_{p_T > m_h}\nonumber\\ &+\left[\mathcal{A}_{C}\left(\Delta_2\right)\right]_{p_T \le m_h} +\left[\mathcal{A}_{\bar{C}}\left(\Delta_1\right)\right]_{p_T \le m_h} -\left[\mathcal{A}_S\left(\Delta_1,\Delta_2\right)\right]_{p_T \le m_h}.
\label{eq:splitpt} \end{align} Notice again that we divide only the transverse momentum integration for the outermost quark loop. The pieces in the first line contain $1/\epsilon^n$ poles, which cancel each other, while the pieces in the second line contain no such poles and are thus finite. The cancellation of the $1/\epsilon^n$ poles is guaranteed to all orders in $\alpha_s$ because there are no such divergences on the left-hand side of this equation and the right-hand side is a complete leading-power expansion. In fact, one can view the division of the integration range of $p_T$ as a renormalization procedure, with the cutoff $m_h$ playing the role of the renormalization scale. It is possible to choose another renormalization scale without changing the final result. Since the intrinsic scale of $\mathcal{A}_H$ is $m_h$, setting $m_h$ as the cutoff scale ensures that the sum of the first line contains no logarithms. The cancellation of the $\Delta$-regulators in the last line holds to all orders in $\alpha_s$. This is because the left-hand side does not depend on these regulators, and the regulators enter only the $p^\pm$ integration, rather than the $p_T$ integration, similarly to the situation in transverse momentum resummation. Therefore, at each fixed value of $p_T$, the $\Delta$-dependences cancel out. Moreover, the pieces in the first line have a single intrinsic scale $m_h$ and therefore generate no large logarithms. So we can neglect these contributions if we want to study only large logarithms \footnote{We have checked that the poles in this part cancel out up to the two-loop level. See the appendix.}. In the following, we use a subscript $R$ to denote the quantities with the constraint $p_T\le m_h$. Each piece in the second line receives QCD corrections at higher orders, but can be renormalized as usual, since the leading order is now finite. We will show the next-to-leading order (NLO) QCD corrections in the following section.
\section{NLO corrections} \label{sec:nlo} We first consider the contribution from the $O_n$ current. The NLO hard function $H_n$ can be obtained by calculating the one-loop corrections, \begin{align} &H_n(z) =1 - \frac{\alpha_s C_F }{2\pi} \ln z\left( \frac{1}{\epsilon}- \ln \frac{-m_h^2}{\mu^2} -\frac{1}{2} \ln z \right) . \end{align} We show only the double logarithms in the result. The above result arises from the expansion of $$\frac{1}{\epsilon^2}\bigg[ \bigg(\frac{-m_h^2}{\mu^2} \bigg)^{-\epsilon} - \bigg(\frac{-m_h^2 z}{\mu^2} \bigg)^{-\epsilon} \bigg] .$$ The two scales reflect the fact that there are two collinear jet fields in the collinear direction, which is a feature of subleading power operators. By power counting, $m_h^2$ and $m_h^2 z$ are of the same scale, i.e., $O(1)$. From the definition in Eq.(\ref{eqJc}), we obtain the NLO jet function, \begin{align} &J_{n,R} (z) =\ln \frac{m_h^2}{m_b^2} \bigg\{ 1+ \frac{\alpha_s C_F}{2\pi} \ln z \bigg[ \frac{ 1}{ \epsilon} - \ln \frac{m_h^2}{\mu^2} + \frac{1}{2} \ln z +\frac{1}{2} \ln \frac{m_h^2}{m_b^2} \bigg]\bigg\}. \end{align} As shown in Eq.(\ref{eqJc}), the jet function is a function of $z$ and the bottom quark's mass $m_b$. The scale $m_h$ appears because we use the cutoff renormalization scheme. Before the integration over $p_T$, we can see that the intrinsic jet scale is of order $\sqrt{p_T^2+m_b^2}$. Since the corresponding hard function is insensitive to the transverse momentum of the external particles of the operators, we need to integrate over $p_T$; as a result, the scale dependence takes the complicated form above.
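For reference, the double-logarithmic structure of $H_n(z)$ quoted above indeed follows from expanding the two-scale expression in $\epsilon$ (the intermediate steps here are a short check of ours): \begin{align} \frac{1}{\epsilon^2}\bigg[ \bigg(\frac{-m_h^2}{\mu^2} \bigg)^{-\epsilon} - \bigg(\frac{-m_h^2 z}{\mu^2} \bigg)^{-\epsilon} \bigg] &= \frac{1}{\epsilon^2}\Big[ e^{-\epsilon \ln \frac{-m_h^2}{\mu^2}} - e^{-\epsilon \big(\ln \frac{-m_h^2}{\mu^2}+\ln z\big)} \Big] \nonumber\\ &= \ln z\left( \frac{1}{\epsilon}- \ln \frac{-m_h^2}{\mu^2} -\frac{1}{2} \ln z \right) + O(\epsilon), \end{align} which reproduces the $O(\alpha_s)$ term of $H_n(z)$ up to the overall factor $-\alpha_s C_F/(2\pi)$.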
After performing the convolution between the hard and jet functions in Eq.(\ref{eqAc2}), we obtain the amplitude induced by $O_n$, \begin{align} \mathcal{A}_{C,R} &=\frac{y_b e_q^2}{8\pi^{2}} N_c m_b \epsilon^*_{\perp}(k_1)\cdot \epsilon^*_{\perp}(k_2) \bigg\{ \ln \frac{m_h^2}{\Delta_2}\ln \frac{m_h^2}{m_b^2}\\& -\frac{\alpha_s C_F }{2\pi} \left[ \frac{1}{3}\ln^3 \frac{\Delta_2}{m_h^2} \ln \frac{m_h^2}{m_b^2} + \frac{1}{4} \ln^2\frac{\Delta_2}{m_h^2}\ln ^2 \frac{-m_h^2}{m_b^2} \right] \bigg\},\nonumber \end{align} where we have kept only the leading logarithms. We see that the $1/\epsilon$-poles and $\mu$ scale dependent terms, which are induced by higher-order gluon loops, cancel between the hard and jet function in this sector. Similarly, we get $\mathcal{A}_{\bar{C},R}$ by replacing $\Delta_2 \to \Delta_1$. Then we consider the contribution from the $O_s$ current. The hard function in this sector is straightforward to calculate, \begin{align} H^s= 1-\frac{\alpha_s C_F }{2\pi} \left[ \frac{ 1}{\epsilon^2} -\frac{\ln (-m_h^2/\mu^2)}{\epsilon}+\frac{1}{2} \ln^2 \frac{-m_h^2}{\mu^2} \right] . \label{eq:Hs} \end{align} From the definitions, we can also calculate the NLO jet functions \begin{align} J^s_n(p^+)=&1+ \frac{\alpha_s C_F }{2\pi} \left[ \frac{ 1}{\epsilon^2}-\frac{\ln (m_hp^+/\mu^2)}{\epsilon}+\frac{1}{2} \ln^2 \frac{m_hp^+}{\mu^2} \right] , \\ J^s_{\bar{n}}(\bar{p}^-)=& 1 +\frac{\alpha_s C_F }{2\pi} \left[ \frac{ 1}{\epsilon^2}-\frac{\ln (-m_h\bar{p}^-/\mu^2)}{\epsilon}+\frac{1}{2} \ln^2 \frac{-m_h\bar{p}^-}{\mu^2} \right] , \end{align} and the NLO soft function \begin{align} S^s(p)=& 1-\frac{\alpha_s C_F }{2\pi} \left[ \frac{ 1}{\epsilon^2}-\frac{\ln (p^+p^-/\mu^2)}{\epsilon}+\frac{1}{2} \ln^2 \frac{p^+p^-}{\mu^2} \right] . \label{eq:Ss} \end{align} Notice that these jet and soft functions do not contain any plus distributions. 
This is due to our choice of regulators (see Eq.(\ref{eqAs})), which makes the integrations over $p^+$ and $p^-$ well-defined even if $p^+\to 0$ or $p^-\to 0$. Inserting the hard, jet and soft functions into Eq.(\ref{eqAs}), we obtain the amplitude induced by $O_s$, \begin{align} &\mathcal{A}_{S,R} = \frac{y_b e_q^2}{8\pi^2}N_c m_b \epsilon^*_{\perp}(k_1)\cdot \epsilon^*_{\perp}(k_2) \nonumber\\& \times \bigg\{ \ln\frac{m_h^2}{\Delta_1}\ln\frac{m_h^2}{m_b^2} +\ln\frac{m_h^2}{\Delta_2}\ln\frac{m_h^2}{m_b^2}-\frac{1}{2} L^2 \nonumber\\& -\frac{\alpha_s C_F}{2\pi} \bigg[ \bigg( \frac{1}{3}\ln^3 \frac{\Delta_2}{m_h^2} \ln \frac{m_h^2}{m_b^2} + \frac{1}{4} \ln^2\frac{\Delta_2}{m_h^2}\ln ^2 \frac{-m_h^2}{m_b^2} \bigg) \nonumber \\ &+\bigg( \frac{1}{3}\ln^3 \frac{\Delta_1}{m_h^2} \ln \frac{m_h^2}{m_b^2} + \frac{1}{4} \ln^2\frac{\Delta_1}{m_h^2}\ln ^2 \frac{-m_h^2}{m_b^2} -\frac{1}{24}L^4 \bigg)\bigg]\bigg\} \label{eq:AsR} \end{align} with $L\equiv \ln( -m_h^2/m_b^2-i0)$. Once again, we find that the poles and scale-dependent terms from the gluon loops cancel. Combining the above contributions from the (anti-)collinear and soft sectors, we obtain \begin{align} &\mathcal{A}_{C,R} +\mathcal{A}_{\bar{C},R} - \mathcal{A}_{S,R} =\frac{ y_b e_q^2 }{8\pi^2 } N_c m_b \epsilon^*_{\perp}(k_1)\cdot \epsilon^*_{\perp}(k_2) \bigg( \frac{1}{2} L^2- \frac{\alpha_s C_F}{2\pi}\frac{1}{24} L^4 +O(\alpha_s^2) \bigg). \end{align} All the dependence on the regulators $\Delta_1$ and $\Delta_2$ cancels, and we reproduce the leading large logarithms of the QCD result up to the second order in $\alpha_s$ \cite{Fleischer:2004vb,Harlander:2005rq,Aglietti:2006tp}. Another important observation is that $\mathcal{A}_{C,R} \propto \ln\frac{m_h^2}{\Delta_2} $ because of the $\Delta$-regulator in Eq.(\ref{eqAc2}). A similar argument shows that $\mathcal{A}_{\bar{C},R} \propto \ln\frac{m_h^2}{\Delta_1}$.
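The regulator cancellation just described can also be checked numerically. The following sketch (ours, not part of the paper) encodes only the leading-logarithmic expressions for $\mathcal{A}_{C,R}$, $\mathcal{A}_{\bar{C},R}$ and $\mathcal{A}_{S,R}$ quoted in this section, with the common prefactor stripped, and verifies that their combination is independent of $\Delta_{1,2}$ and equals $\frac{1}{2}L^2-\frac{\alpha_s C_F}{2\pi}\frac{1}{24}L^4$:

```python
import math

def combination(mh2, mb2, delta1, delta2, a):
    """A_{C,R} + A_{Cbar,R} - A_{S,R} at LL accuracy, common prefactor stripped.

    a = alpha_s*C_F/(2*pi); L = ln(-mh^2/mb^2 - i0) = ln(mh^2/mb^2) - i*pi.
    """
    l = math.log(mh2 / mb2)
    L = l - 1j * math.pi           # -i0 prescription
    d1 = math.log(delta1 / mh2)    # ln(Delta_1/m_h^2)
    d2 = math.log(delta2 / mh2)    # ln(Delta_2/m_h^2)
    A_C = -d2 * l - a * (d2**3 * l / 3 + d2**2 * L**2 / 4)
    A_Cbar = -d1 * l - a * (d1**3 * l / 3 + d1**2 * L**2 / 4)
    A_S = (-d1 * l - d2 * l - L**2 / 2
           - a * (d2**3 * l / 3 + d2**2 * L**2 / 4
                  + d1**3 * l / 3 + d1**2 * L**2 / 4 - L**4 / 24))
    return A_C + A_Cbar - A_S

mh2, mb2 = 125.0**2, 5.0**2
a = 0.113 * (4.0 / 3.0) / (2 * math.pi)
r1 = combination(mh2, mb2, 1.0, 2.0, a)     # arbitrary regulator values
r2 = combination(mh2, mb2, 0.01, 37.0, a)
L = math.log(mh2 / mb2) - 1j * math.pi
print(abs(r1 - r2), abs(r1 - (L**2 / 2 - a * L**4 / 24)))  # both ~ 0
```

Both printed differences vanish up to rounding, confirming that the regulator dependence drops out of the sum, as it must, since the full amplitude does not depend on $\Delta_{1,2}$.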
Therefore, after evaluating the integrations, we can set $\Delta_{1,2}=m_h^2$ so that $\mathcal{A}_{C,R} =\mathcal{A}_{\bar{C},R}=0$ and the final result receives a contribution only from $\mathcal{A}_{S,R}$. This feature indicates that we need to analyze only the soft sector to resum the large double logarithms. \section{Resummation} From the above analysis, we have found that the main task is to calculate the result in the soft sector $\mathcal{A}_{S,R}\left(\Delta_1,\Delta_2\right)$. All the logarithms in the hard, jet and soft functions of this sector are scale-dependent, and therefore can be resummed by using the corresponding renormalization group evolution equations. From the NLO results for the bare functions given in Eqs.(\ref{eq:Hs}-\ref{eq:Ss}), we derive the renormalization group evolution equations of the renormalized functions in the $\overline{\rm MS}$ scheme, \begin{align} \frac{dH^s(\mu)}{d\ln \mu} & = \bigg[ C_F \gamma_{\rm cusp} \ln \frac{-m_h^2}{\mu^2} \bigg]H^s(\mu), \label{eq:h}\\ \frac{dJ_n^s(\mu)}{d\ln \mu} & = \bigg[ -C_F \gamma_{\rm cusp} \ln \frac{m_hp^+}{\mu^2} \bigg]J_n^s(\mu),\\ \frac{dJ_{\bar{n}}^s(\mu)}{d\ln \mu} & = \bigg[- C_F \gamma_{\rm cusp} \ln \frac{-m_h p^-}{\mu^2} \bigg]J^s_{\bar{n}}(\mu),\\ \frac{dS^s(\mu)}{d\ln \mu} & = \bigg[ C_F \gamma_{\rm cusp} \ln \frac{p^+ p^-}{\mu^2} \bigg]S^s(\mu), \label{eq:s} \end{align} where $\gamma_{\rm cusp}$ is the cusp anomalous dimension \cite{Becher:2009qa}. It is evident that \begin{align} \frac{d\ln [H^s(\mu)J_n^s(\mu)S^s(\mu)J^s_{\bar{n}}(\mu)] }{d\ln \mu}=0.
\end{align} Solving the evolution equations in Eqs.(\ref{eq:h}-\ref{eq:s}), we get \begin{align} \label{eq15} H^s(\mu)J_n^s(\mu)S^s(\mu)J^s_{\bar{n}}(\mu)&= \exp\big[2C_F\big(S(\mu_h,\mu)-S(\mu_c,\mu)-S(\mu_{\bar{c}},\mu)+S(\mu_s,\mu) \big)\big] \nonumber \\ &= \exp \bigg[ -\frac{\alpha_s C_F}{2\pi} \ln \frac{-p^+}{m_h} \ln \frac{p^-}{m_h} +O(\alpha_s^2) \bigg], \end{align} where the function $S(\nu,\mu)$ is defined by \cite{Neubert:2004dd} \begin{align} S(\nu,\mu) =-\int_{\alpha_s(\nu)}^{\alpha_s(\mu)}d\alpha \frac{\gamma_{\rm cusp}}{\beta(\alpha)} \int_{\alpha_s(\nu)}^{\alpha}\frac{d\alpha'}{\beta(\alpha')}. \end{align} In the last line of Eq.(\ref{eq15}), we have neglected terms of $O(\alpha_s^2)$, which contribute at most $\alpha_s^2 L^3$. We have chosen the typical scales to be $\mu_h^2=-m_h^2, \mu_{c}^2=m_hp^+,\mu_{\bar{c}}^2=-m_hp^-,\mu_s^2=p^+ p^-$ as indicated in the fixed-order calculation, though their explicit values do not affect the final result. Inserting Eq.(\ref{eq15}) into Eq.(\ref{eqAs}), we can perform the integrations over $p$, while keeping $\Delta_{1,2}$ as small regulators, to obtain the result of $\mathcal{A}_{S,R}(\Delta_1,\Delta_2)$ to all orders in $\alpha_s$; the first two orders have been given in Eq.(\ref{eq:AsR}). Then we set $\Delta_{1,2}=m_h^2$ so that $\mathcal{A}_{C,R} $ and $\mathcal{A}_{\bar{C},R} $ vanish. As a consequence, we obtain \begin{align} & \mathcal{A}_{C,R} + \mathcal{A}_{\bar{C},R} -\mathcal{A}_{S,R} = \frac{ y_b e_q^2 }{8 \pi^2 } N_c m_b \epsilon^*_{\perp}(k_1)\cdot \epsilon^*_{\perp}(k_2) \frac{1}{2} L^2 \times {}_{2}F_{2}(1,1;\frac{3}{2},2; -\frac{\alpha_s C_F}{8\pi}L^2 ), \label{eq16} \end{align} which agrees with the result in Ref.\cite{Akhoury:2001mz}, except that we have reproduced the double logarithms in the form $ \ln^2 (-m_h^2/m_b^2-i0) $, instead of $ \ln^2(m_h^2/m_b^2) $. This is because we have considered the hard and collinear sectors besides the soft sector.
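As a quick numerical illustration of Eq.(\ref{eq16}) (a sketch of ours; the truncated-series implementation of the ${}_{2}F_{2}$ below is standard, not taken from the paper), one can evaluate the resummation factor for $m_h=125$ GeV, $m_b=5$ GeV and $\alpha_s(m_h)=0.113$:

```python
import math

def hyp2f2_11_32_2(z, terms=60):
    """Truncated power series of 2F2(1,1;3/2,2;z) = sum_n n! z^n / ((3/2)_n (2)_n)."""
    total, term = 0j, 1 + 0j                         # n = 0 term is 1
    for n in range(terms):
        total += term
        term *= (n + 1) * z / ((1.5 + n) * (2 + n))  # ratio of consecutive terms
    return total

alpha_s, CF, mh, mb = 0.113, 4.0 / 3.0, 125.0, 5.0
L2 = (math.log(mh**2 / mb**2) - 1j * math.pi) ** 2   # L^2 with the -i0 prescription
R = hyp2f2_11_32_2(-alpha_s * CF * L2 / (8 * math.pi))
print(R)   # ratio of the resummed result over the leading-order one
```

For these inputs the argument of the ${}_{2}F_{2}$ has modulus of about $0.3$, so the series converges rapidly, and the printed ratio agrees with the value $0.935+0.073i$ quoted in the text.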
As a consequence, it allows us to resum the large $\pi^2$ terms in the perturbative calculations too. The generalized hypergeometric function ${}_{2}F_{2}(1,1;\frac{3}{2},2; -x)$ looks strange but actually has an exponential structure $\sqrt{\pi}x^{3/2}e^{-x}/2$ at $x\to \infty$. The impact of the resummed result in Eq.(\ref{eq16}) is shown in Fig.\ref{fig2} and Fig.\ref{fig3}. For the standard model value $m_h=125$ GeV, the ratio of the resummed result over the leading order result is $0.935+0.073i$. The impact becomes more significant if the Higgs boson mass $m_h$ increases. For comparison, the result of Ref.\cite{Akhoury:2001mz} is shown as the black dashed line in Fig.\ref{fig2}. It differs from the real part of our result by less than $2\%$, but it contains no imaginary part. We also show the comparison with the result in which the $\pi^2$ terms are kept, i.e., replacing $L^2$ by $\ln^2(m_h^2/m_b^2)-\pi^2$. We see that the difference is tiny around $m_h=125$ GeV, growing to about $2\%$ around $m_h=1000$ GeV. In Fig.\ref{fig3}, we expand the resummed result to the first few orders. The real part converges quickly, since the contribution from the first three orders (NNLO) already overlaps with the resummed result over a large range of $m_h$. The imaginary part converges more slowly, but the sum of the first four orders is already a good approximation of the resummed result. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.48\linewidth]{Ratio-Re} \includegraphics[width=0.48\linewidth]{Ratio-Im} \\ \caption{The ratio $R$ of the resummed result over the leading order result. Its real and imaginary parts are shown in the left and right plots, respectively. The dashed black and dotted red lines (on the left plot) denote the result of Eq.(\ref{eq16}) with $L^2$ replaced by $\ln^2(m_h^2/m_b^2)$ and $\ln^2(m_h^2/m_b^2)-\pi^2$, respectively. The blue line represents the result with $L^2=\ln^2(m_h^2/m_b^2)-\pi^2-2i\pi \ln (m_h^2/m_b^2)$.
We have used $\alpha_s(m_h)=0.113, m_b=5 $ GeV. } \label{fig2} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.45\linewidth]{Ratio-Re-Exp} \includegraphics[width=0.45\linewidth]{Ratio-Im-Exp} \\ \caption{Same as Fig. \ref{fig2} with $L^2=\ln^2(m_h^2/m_b^2)-\pi^2-2i\pi \ln (m_h^2/m_b^2)$ but with more lines to show the expanded results. Notice that the legends NLO, NNLO, NNNLO do not denote the full fixed-order results, but only the leading logarithms. } \label{fig3} \end{center} \end{figure} \section{Conclusions} We have provided a method to resum the large logarithms in loop-induced processes with soft-collinear effective theory. This method is different from the conventional threshold or transverse momentum resummation, because the leading-order contributions already contain divergences in the soft and collinear limits. By adopting a $\Delta$-regulator and a cut-off renormalization for the quark loop transverse momentum, we can make the leading-order result finite. Further, there are several sectors in the effective field theory contributing to the leading power expansion of the QCD amplitude. Each of them depends on the rapidity regulators. In particular, the contributions from the (anti-)collinear sectors are proportional to $\ln (\Delta_{1,2}/m_h^2)$, so that we can choose a special value of the regulator to make them vanish. Then one only needs to consider the contribution from the soft sector, which has a structure ready to be resummed. Expanding the resummed result to the first two orders, we find agreement with previous QCD calculations. Compared with the resummation method using an off-shell Sudakov form factor, we reproduce the full double logarithms $\ln ^2 (-m_h^2/m_b^2-i0)$. As a consequence, the large $\pi^2$ terms and the leading contribution to the imaginary part of the amplitude can also be resummed. In the future, it would be interesting to explore the resummation beyond leading logarithmic accuracy.
It is also promising to extend our scheme to more general cases, such as processes with non-Abelian gauge bosons or processes with more external particles, which are more important for collider phenomenology. \section*{Acknowledgments} We are very grateful to Matthias Neubert for suggesting this project and for many inspiring comments. We would like to thank Ze Long Liu and Ben D. Pecjak for useful discussions. The work of JW has been supported by the BMBF projects No. 05H15WOCAA and 05H18WOCA1 while he was at Technische Universit\"at M\"unchen, and by the program for Taishan scholars.
\section{Introduction} \label{section_intro} In recent years, there has been growing interest in distributed algorithms for networks of rational agents that may deviate from the prescribed algorithm in order to increase their profit \cite{DisMeetsGame,DistRobust,LeaderADH,BARFault,RationalSecret}. For example, an agent may have a higher profit if zero is decided in a consensus algorithm, or an agent may prefer to be (or not to be) the elected leader in a leader election algorithm. The goal is to design distributed algorithms that reach \emph{equilibrium}, that is, where no agent can profit by cheating. In this paper we study the consensus problem in a network of rational agents, in which each agent has a preferred decision value. We consider $(n-1)$-resilient equilibrium, that is, an equilibrium that is resilient to any coalition of up to $n-1$ agents that may collude in order to increase their expected profit (utility). This problem was proposed in \cite{LeaderADH} and studied also in \cite{GTBlocks}, where the authors suggest an $(n-1)$-resilient equilibrium for binary consensus in a synchronous ring. We prove that in any $(n-1)$-resilient equilibrium for binary consensus, the output of the agents must be the XOR of the inputs of all agents. Thus, due to validity, there is \emph{no} $(n-1)$-resilient equilibrium for binary consensus in {\em even} sized networks, and the algorithm in \cite{GTBlocks} works well only for odd sized networks. Still, we show that the algorithm in \cite{GTBlocks} reaches $(n-2)$-resilient equilibrium for binary consensus with uniform input distribution, for \emph{any} $n$. We further show that multi-valued consensus is impossible, i.e., there is no $(n-1)$-resilient equilibrium for multi-valued consensus for $r>2$, where $r$ is the number of possible values; thus, surprisingly, there is a computational gap between binary and multi-valued consensus in this model.
Note that it was previously shown that in this game theoretic model, leader election is also not equivalent to consensus \cite{GTBlocks}. Furthermore, we show that in this model, {\em deterministic} binary consensus is equivalent to resilient input sharing (RIS), a natural problem in distributed computing in which each agent $i$ shares its input with all other agents in the network (a variant of the knowledge sharing problem defined in \cite{GTBlocks}). That is, in any odd sized network with uniform input distribution, any algorithm for RIS can be transformed into an $(n-1)$-resilient equilibrium for deterministic binary consensus and vice versa. This provides a necessary and sufficient condition for $(n-1)$-resilient equilibrium for deterministic binary consensus. \subsection{Our Contributions} \label{sub_contrib} Our contributions are as follows: \begin{description} \item[$(\S \ref{section_xor})$] Any $(n-1)$-resilient equilibrium for binary consensus decides on the XOR of all input values. \item[$(\S \ref{section_xor_lemmas})$] In any $(n-1)$-resilient equilibrium for binary consensus the input and output distributions are uniform. \item[$(\S \ref{section_cover_bin_cons})$] The protocol suggested in \cite{GTBlocks} reaches $(n-2)$-resilient equilibrium for binary consensus with uniform input distribution, for any $n$. \item[$(\S \ref{section_multi})$] There is no $(n-1)$-resilient equilibrium for multi-valued consensus for $r>2$ possible inputs. \item[$(\S \ref{section_sufficient})$] \emph{Deterministic} $(n-1)$-resilient equilibrium for binary consensus in a network exists \emph{iff}: \begin{enumerate} \item The network size is odd. \item The input distribution is uniform. \item An equilibrium for Resilient Input Sharing (RIS) is possible in the network topology. \end{enumerate} \end{description} The model, notations and some definitions are given in Section \ref{section_model}, and we discuss our results and further thoughts in Section \ref{section_discussion}.
\subsection{Related Work} The secret sharing problem \cite{HowToShare} initiated the connection between distributed computing and game theory. Further works in this line of research considered multiparty communication with Byzantine and rational agents \cite{DisMeetsGame,ScalableRational,EfficientRational,ByzantineWithRational,RationalityAndAdversarial}. In \cite{LeaderADH}, the first distributed protocols for a network of rational agents are presented, specifically protocols for \emph{fair} leader election. In \cite{GTBlocks}, the authors continue this line of research by providing basic building blocks for game theoretic distributed algorithms, namely wake-up and knowledge-sharing building blocks that are in equilibrium; equilibria for consensus, renaming, and leader election are presented using these building blocks. The consensus algorithm in \cite{GTBlocks} is claimed to reach $(n-1)$-resilient equilibrium in a ring or complete network, using the knowledge sharing building block to share the input of all processors in the network, and outputting the XOR of all inputs. Consensus was further researched in \cite{RationalConsensus}, where the authors show that there is no ex-post Nash equilibrium for rational consensus, and present a Nash equilibrium that tolerates $f$ failures under some minimal assumptions on the failure pattern. Equilibria for fair leader election and fair coin toss are also presented and discussed in \cite{YifrachLeader}, where they are shown to be resilient only to coalitions of sub-linear size, and a modification to the leader election protocol from \cite{LeaderADH,GTBlocks} that is resilient to every coalition of size $\Theta(\sqrt{n})$ is proposed. In \cite{GTColor}, the authors examine the impact of a-priori knowledge of the network size on the equilibrium of distributed algorithms, assuming the $id$ space is unlimited and thus vulnerable to a Sybil attack \cite{SybilAttack}.
In \cite{GTidspace} the authors remove this assumption and assume the $id$ space is bounded, examining the relation between the size of the $id$ space and the number of agents in the network for which an equilibrium is possible. \section{Model} \label{section_model} We use the standard message-passing model, where the network is a bidirectional graph $G=(V,E)$ with $|V|=n$ nodes, each node representing a \emph{rational} agent, following the model in \cite{DistRobust,LeaderADH}. We assume $n$ is a-priori known to all agents, $G$ is $2$-vertex-connected, and all agents start the protocol together, i.e., all agents wake up at the same time. We can use the Wake-Up \cite{GTBlocks} building block to relax this assumption. In Sections~\ref{section_necessary} and~\ref{section_multi} the results apply to both synchronous and asynchronous communication networks, while Section~\ref{section_sufficient} assumes a synchronous network. In the consensus problem, each agent $i$ has an id $id_i$ and an input $I_i \in \{0,\dots,r-1\}$ and must output a decision $D_i \in \{0,\dots,r-1, \bot\}$. An agent can output $\bot$ to abort the protocol when it detects a deviation by another agent. A protocol achieves consensus if it satisfies the following \cite{Cons4}: \begin{itemize} \item \textbf{Agreement}: All agents decide on the same value, $\forall{i,j}: D_i=D_j$. \item \textbf{Validity}: If $v$ was decided then it was the input of some agent, $\forall j \exists i: D_j = I_i$. \item \textbf{Termination}: Every agent eventually decides, $\forall{i}: D_i \neq \bot$. \end{itemize} \begin{definition}[Protocol Outcome] The outcome of the protocol is determined by the input and output of all agents. An outcome is \emph{legal} if it satisfies agreement, validity, and termination; otherwise the outcome is \emph{erroneous}. \end{definition} Considering individual rational agents, each agent $i$ has a utility function $U_i$ over the possible outcomes of the protocol.
The higher the value assigned by $U_i$ to an outcome, the better this outcome is for $i$. We assume the utility function $U_i$ of each agent $i$ satisfies \emph{Solution Preference} \cite{LeaderADH}: \begin{definition}[Solution Preference] The utility function $U_i$ of any agent $i$ never assigns a higher utility to an erroneous outcome than to a legal one. \end{definition} Thus, the Solution Preference guarantees that an agent never has an incentive to sabotage the protocol, that is, to prefer an outcome that violates agreement, validity, or termination. However, agents may take risks that might lead to erroneous outcomes if these risks may also lead to a legal outcome that increases their expected utility. An intuitive example of a utility function of an agent $i$ with a preference towards a decision value of $1$ is: $$ U_i = \begin{cases} 1 &\quad \exists j: I_j=1 \land \forall k: D_k=1 \text{ ($1$ is decided by all agents)} \\ 0 &\quad \text{otherwise ($0$ is decided or erroneous outcome)} \\ \end{cases} $$ All agents are given a protocol at the start of the execution, but any agent may deviate and execute a different protocol if it increases its expected utility. A protocol is said to \emph{reach equilibrium} if no agent can unilaterally increase its expected utility by deviating from the protocol. \begin{definition}[Nash Equilibrium\footnotemark] A protocol $\Phi$ is said to reach equilibrium if, for any agent $i$, there is no protocol $\Psi \neq \Phi$ that $i$ may execute and that leads to a higher expected utility for $i$, assuming all other agents follow $\Phi$. \end{definition} \footnotetext{ Previous works defined equilibrium over each step of the protocol.
For convenience, this definition is slightly different, but it is easy to see that it is equivalent.} \subsection{Coalitions} We define a coalition of size $t$ as a set of $t$ rational agents that cooperate to increase the utility of each agent in the coalition. A protocol that reaches $t$-resilient equilibrium \cite{LeaderADH} is resilient to coalitions of size up to $t$, that is, no group of $t$ agents or fewer has an incentive to collude and deviate from the protocol. We assume coalition members may agree on a deviation from the protocol in advance, but can communicate only over the network links during the protocol execution. \begin{definition}[$t$-resilient Equilibrium] A protocol $\Phi$ is said to reach $t$-resilient equilibrium if, for any group of agents $C \subset V$ such that $|C| \leq t$, there is no protocol $\Psi (\neq\Phi)$ that agents in $C$ may execute and which would lead to a higher expected utility for each agent in $C$, assuming all agents not in $C$ follow $\Phi$. \end{definition} The intuitive example of a utility function above also applies to a coalition, with the coalition preferring the decision value $1$. \subsection{Notations} The following notations are used throughout this paper: \begin{itemize} \item $S_{-i}$ - all possible input vectors of agents in $V\setminus\{i\}$. \item $\#(b)$ - the number of agents in $V$ that receive $b$ as input. \item $\#_{-i}(b)$ - the number of agents in $V\setminus\{i\}$ that receive $b$ as input. \item $I_i$ - the input of agent $i$. \item $D_i$ - the output value decided by agent $i$ at the end of the algorithm. \item $r$ - the number of possible input and output values. For binary consensus: $r=2$.
\end{itemize} \section{Necessary Conditions for ($n-1$)-resilient Consensus} \label{section_necessary} \begin{theorem} \label{th_xor} The decision of any $(n-1)$-resilient equilibrium for binary consensus must be the XOR of all inputs, that is, $\forall{i}: D_i = \bigoplus\limits_{j \in V} I_j = \sum\limits_{j \in V} I_j \mod 2$. \end{theorem} Before we turn to the proof of Theorem~\ref{th_xor}, given in Sections \ref{section_xor} and \ref{section_xor_lemmas}, note that according to this theorem, if $n$ is even and all inputs are $1$ the decision must be $0$, contradicting validity and leading to the following corollary: \begin{corollary} \label{cor_noeven} There is no $(n-1)$-resilient equilibrium for binary consensus in even sized networks. \end{corollary} \subsection{Output is the XOR of the Inputs} \label{section_xor} Here we prove Theorem~\ref{th_xor} based on the following two theorems, which are proved in Section~\ref{section_xor_lemmas}: \begin{theorem} \label{th_uniform} If the distribution over the inputs is not uniform, there is no $(n-1)$-resilient equilibrium for consensus, i.e.: $\forall v_1,v_2: P[I_i=v_1]=P[I_i=v_2] = \frac{1}{r}$. \end{theorem} \begin{theorem} \label{th_uniform_out} In any $(n-1)$-resilient equilibrium for consensus, given any $n-1$ inputs, the distribution over the possible decision values is uniform: $\forall s \in S_{-i},v \in \{0,\dots,r-1\}: P[D_i=v|s] = \frac{1}{r}$. \end{theorem} Notice that while the proof of Theorem~\ref{th_xor} holds only for \emph{binary} consensus, Theorems~\ref{th_uniform} and~\ref{th_uniform_out} are correct for multi-valued consensus as well. \begin{proof}[Proof of Theorem~\ref{th_xor}] We prove that the decision value of binary consensus must be the XOR of all inputs using induction on $\#(1)$, the number of agents in the network whose input value is $1$. In the base case $\#(1)=0$, the input of all agents is $0$. By validity, the decision must be $0$.
For clarity of exposition, we spell out the next case of the induction, $\#(1)=1$, i.e., the input of one agent is $1$ and of all other $n-1$ agents is $0$. Assume by contradiction that the probability that $0$ is decided in this case is greater than $0$, i.e., $$\exists i\in V:\ P[D_i = 0\ |\ \#_{-i}(1)=0\ \land\ I_i = 1] = p > 0 $$ Let $s_{0}$ be an input configuration for a coalition in which all members of the coalition (i.e., $V\setminus \{i\}$) claim to receive 0 as input, i.e., $\#_{-i}(1)=0$. Notice that: \begin{align} P[D_i = 0 |\ s_0] &= P[I_i=0]\cdot P[D_i = 0\ |\ s_0 \land I_i = 0] + P[I_i=1]\cdot P[D_i = 0\ |\ s_0 \land I_i = 1] \notag\\ &= P[I_i=0]\cdot P[D_i = 0\ |\ base\ case\ ] + P[I_i=1]\cdot P[D_i = 0\ |\ s_0 \land I_i = 1] \notag\\ &= P[I_i=0]\cdot 1 + P[I_i=1]\cdot p \notag \end{align} By Theorem~\ref{th_uniform} (and since this is binary consensus) it follows that: $$P[D_i = 0 |\ s_0] = \frac{1}{2} \cdot 1 + \frac{1}{2} \cdot p > \frac{1}{2}$$ This contradicts Theorem~\ref{th_uniform_out}, proving that $\forall i\in V$: $ P[D_i = 0\ |\ \#_{-i}(1)=0\ \land\ I_i = 1] = 0$. Thus, if $\#(1)=1$, the decision value must be $1$, proving the first induction step. By the inductive assumption, $\forall \#(1) < m$ the decision value of the consensus must be the XOR of all inputs, i.e., $\#(1)\ mod\ 2$. Let $s_{m-1}$ be an input configuration for the coalition ($V\setminus \{i\}$) in which $\#_{-i}(1) = m-1$, that is, $m-1$ members of the coalition claim to receive $1$, and the rest $0$. From Theorem~\ref{th_uniform_out} (and since this is binary consensus) we get: $$P[D_i = (m\ mod\ 2)\ |\ s_{m-1}] = \frac{1}{2}$$ If $I_i=0$ (which from Theorem~\ref{th_uniform} happens with probability $\frac{1}{2}$) and the coalition acts as if its input is $s_{m-1}$, then $\#(1) = m-1$. By the induction hypothesis, in such a case the decision value of the consensus must be $m-1\ mod\ 2$.
To satisfy the equation above it must hold that: $$P[D_i = (m \bmod 2)\ |\ s_{m-1}\land I_i=1] = 1$$ Hence, in case $\#(1) = m$, the decision value must be $m \bmod 2$, the XOR of all inputs. \end{proof} \subsection{Proving Theorems~\ref{th_uniform} and~\ref{th_uniform_out}} \label{section_xor_lemmas} While the above proof holds only for \emph{binary} consensus, the following lemmas and theorems are correct for multi-valued consensus. \begin{lemma} \label{lemma_set_equal} In any $(n-1)$-resilient equilibrium for consensus, for any $v \in \{0,\dots,r-1\}$, given any $n-1$ inputs, the probability to decide $v$ is the same:\\ $$\forall i \in V, s_1,s_2 \in S_{-i}, v: P[D_i=v|s_1] = P[D_i=v|s_2]$$ \end{lemma} \begin{proof} Assume by contradiction that $\exists i \in V, s_1,s_2 \in S_{-i},v: P[D_i=v|s_1] < P[D_i=v|s_2]$. A coalition $C = V \setminus \{i\}$ with a preference to decide $v$, and that receives $s_1$ as input, has an incentive to deviate and act as if its input were $s_2$, contradicting equilibrium. \end{proof} \begin{lemma} \label{lemma_out_in} In any $(n-1)$-resilient equilibrium for consensus, for any input $v\in \{0,\dots,r-1\}$, the probability to decide $v$ is the same as the probability to receive $v$ as an input: $$\forall i \in V, s \in S_{-i}, v: P[D_i=v | s] = P[I_i=v]$$ \end{lemma} \begin{proof} For any $v$, if all inputs are $v$ then by validity $v$ is decided. For any agent $i$, let $\tilde{s}=(v,\dots,v) \in S_{-i}$; then due to validity, the probability that $v$ is decided is at least $P[I_i=v]$, i.e., $P[D_i=v|\tilde{s}] \geq P[I_i=v]$. By Lemma~\ref{lemma_set_equal} this is true for any $s \in S_{-i}$. Thus, $P[D_i=v] \geq P[I_i=v]$. Since $\sum\limits_v P[D_i=v]=1$ and $\sum\limits_v P[I_i=v]=1$, it follows that $\forall s \in S_{-i},v: P[D_i=v|s] = P[I_i=v]$. \end{proof} \begin{proof}[Proof of Theorem~\ref{th_uniform}] Assume by contradiction that $\exists v_1,v_2: P[I_i=v_1] > P[I_i=v_2]$.
If all agents receive as input the same value $v_1$, then by validity $v_1$ is decided. Given $s=(v_1,\dots,v_1)\in S_{-i}$, the probability that $v_1$ is decided is at least the probability that the input of agent $i$ is $v_1$, i.e., $P[D_i=v_1 | s] \geq P[I_i=v_1]$. If $n-1$ agents receive $v_1$ as input and one agent receives $v_2 \neq v_1$, the decision must not be $v_1$; otherwise $P[D_i=v_1 | s] > P[I_i=v_1]$, contradicting Lemma~\ref{lemma_out_in}. Thus, due to validity, the decision must be $v_2$ when $n-1$ agents receive $v_1$ and one agent receives $v_2$. Let $s' = (v_2, v_1, \dots, v_1) \in S_{-i}$. If agent $i$ receives $v_1$ as input then, as stated above, $v_2$ is decided; thus $P[D_i=v_2 | s'] \geq P[I_i=v_1] > P[I_i=v_2]$, contradicting Lemma~\ref{lemma_out_in}. Thus, the input distribution must be uniform, i.e., $\forall v_1,v_2: P[I_i=v_1]=P[I_i=v_2]=\frac{1}{r}$. \end{proof} \begin{proof}[Proof of Theorem~\ref{th_uniform_out}] Combining Lemma~\ref{lemma_out_in} with Theorem~\ref{th_uniform}: $$\forall s \in S_{-i},v \in \{0,\dots,r-1\}: P[D_i=v|s] = P[I_i=v]=\frac{1}{r} ~\qedhere$$ \end{proof} \subsubsection{$(n-2)$-resilient Binary Consensus for any $n$} \label{section_cover_bin_cons} A binary consensus protocol for any $n$ is presented in \cite{GTBlocks}, combining a leader election algorithm with an XOR on selected inputs. In Appendix~\ref{section_res_even} we prove that this protocol is an $(n-2)$-resilient equilibrium for binary consensus for any $n$, when the input distribution is uniform. Note that the algorithm in \cite{GTBlocks} does not work on every network topology, but only on networks in which Resilient Input Sharing is possible (see \cite{GTBlocks} and Section~\ref{section_sufficient}). \section{No $(n-1)$-resilient Equilibrium for Multi Valued Consensus} \label{section_multi} Here we discuss multi-valued consensus, where the agreement is among $r>2$ possible values rather than two.
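Before turning to the multi-valued case, the binary XOR characterization (Theorem~\ref{th_xor}) together with the uniformity of the decision given any $n-1$ inputs (Theorem~\ref{th_uniform_out}) can be checked exhaustively for small networks. The following sketch (Python, for illustration only; not part of the protocol) does so for $n=5$:

```python
from itertools import product

def decide(inputs):
    # Theorem th_xor: the decision of any (n-1)-resilient equilibrium
    # for binary consensus must be the XOR of all inputs.
    return sum(inputs) % 2

n = 5  # an odd network size, as required by Corollary cor_noeven

# Theorem th_uniform_out: for every fixed claimed input vector s of the
# coalition V \ {i}, a uniform input bit I_i of the remaining agent
# makes the decision uniform over {0, 1}.
for s in product([0, 1], repeat=n - 1):
    decisions = sorted(decide(s + (b,)) for b in (0, 1))
    assert decisions == [0, 1]
```

The loop verifies that no claimed coalition input vector biases the decision once the honest agent's uniform bit is XORed in, which is exactly why the coalition has no incentive to misreport.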
Applying the same logic as in the proof of Theorem~\ref{th_xor}, one can deduce: \begin{lemma} \label{lemma_multi_val_helper} \quad \begin{enumerate} \item $\forall i\in V, v\in \{0,\dots,r-1\}: P[D_i=v | \#(0) = n-1 \land \#(v) = 1] = 1$ \item $\forall i\in V, v\in \{0,\dots,r-1\}: P[D_i=0 | \#(0) = n-2 \land \#(v) = 2] = 1$ \end{enumerate} \end{lemma} \begin{proof} The proof is the same as the first and second induction steps in the proof of Theorem~\ref{th_xor}. \end{proof} \begin{theorem} \label{th_multi} There is no $(n-1)$-resilient equilibrium for multi-valued consensus for any $r>2$. \end{theorem} \begin{proof} Assume towards a contradiction that there is an $(n-1)$-resilient equilibrium for multi-valued consensus for some $r>2$. Let $v,u \in \{1,\ldots, r-1\}$ s.t. $v \neq u$. Denote by $X$ any configuration in which the input of one agent is $v$, of another is $u$, and of the rest is $0$. In a run of the protocol starting from $X$, due to validity the network's decision value must be $0$, $u$, or $v$. We prove that none of these values can be decided in an equilibrium, reaching a contradiction. Consider some agent $i$ and the coalition $V \setminus \{i\}$. Define $s_v$ and $s_u$ as follows: \begin{itemize} \item $s_v :=$ a configuration in which $\#_{-i}(0) = n-2$, $\#_{-i}(v) = 1$ \item $s_u :=$ a configuration in which $\#_{-i}(0) = n-2$, $\#_{-i}(u) = 1$ \end{itemize} Assume towards a contradiction that $P[D_i=0 |s_v \land I_i = u ] = p > 0$. Notice that $(s_v \land I_i = u) \in X$. By point $2$ of Lemma~\ref{lemma_multi_val_helper}, if $I_i=v$ and the coalition acts as if its input vector is $s_v$, then $i$ must decide $0$. By Theorem~\ref{th_uniform}, $P[I_i=v] = \frac{1}{r}$; therefore, $P[D_i=0 | s_v] \geq \frac{1}{r} + \frac{p}{r} > \frac{1}{r}$, contradicting Lemma~\ref{lemma_out_in}. Thus, in an equilibrium starting from configuration $X$, the decision value cannot be $0$.
Assume towards a contradiction that $P[D_i=v | s_v \land I_i = u ] = p > 0$. By point $1$ of Lemma~\ref{lemma_multi_val_helper}, if $I_i=0$ and the coalition acts as if its input vector is $s_v$, then $i$ must decide $v$. As before we get $P[D_i=v |s_v] \geq \frac{1}{r} + \frac{p}{r} > \frac{1}{r}$, contradicting Lemma~\ref{lemma_out_in}. Thus, in an equilibrium starting from configuration $X$, the decision value cannot be $v$. Applying the symmetric claim for $u$, with a coalition that acts as if its input vector is $s_u$, we get that in an equilibrium starting from configuration $X$, the decision value cannot be $u$. Thus, no value from $\{0,u,v\}$ can be decided in an $(n-1)$-resilient equilibrium for multi-valued consensus starting with configuration $X$. Hence, due to validity there is no $(n-1)$-resilient equilibrium for $r$-valued consensus for any $r>2$. \end{proof} \section{Necessary and Sufficient Conditions for Deterministic Consensus} \label{section_sufficient} The necessary conditions from Section~\ref{section_necessary} are extended here into necessary and \emph{sufficient} conditions for a \emph{deterministic} $(n-1)$-resilient equilibrium for binary consensus. Deterministic means that the step of each agent in each round of the algorithm is determined completely by its input and the history of messages it has received up until the current round. In Appendix~\ref{section_non_det} some difficulties in trying to extend our proof to non-deterministic algorithms are discussed. For the sufficient condition, a new problem, Resilient Input Sharing (RIS), a variant of knowledge sharing \cite{GTBlocks}, is introduced. \begin{theorem} \label{th_iff} A deterministic $(n-1)$-resilient equilibrium for consensus exists \emph{iff}: \begin{enumerate} \item $n$ is odd \item The input distribution is uniform \item There exists an algorithm for deterministic \EDP{} (defined below).
\end{enumerate} \end{theorem} \subsection{The Resilient Input Sharing Problem} \label{section_iks} In the \EDP{} problem, agents in $V$ share their binary inputs while each agent $i$ assumes that the agents in $V \setminus \{i\}$ form a coalition. Intuitively, each agent requires all other agents to commit to their inputs before, or at the same time as, they learn its input. The motivation for this requirement is that we consider problems in which (1) all agents compute the same function on the inputs, and (2) if any one input is unknown, then any output in the range of the function is still equally possible \cite{GTBlocks,GTColor}. Therefore the above requirement ensures that the coalition cannot affect the computation after learning the remaining (honest) agent's input, which is necessary for the computation to reach an $(n-1)$-resilient equilibrium. We use the following definitions: \begin{itemize} \item $K_{j}^{t}$ - Agent $j$'s knowledge at the beginning of round $t$, including any information the coalition could have shared with it. \item Agent $j$ is an $i$-knower($t$) if at the beginning of round $t$ it can make a 'good' guess about $I_i$, i.e., $ \exists b \in\{0,1\}: P[I_i = b |K_{j}^{t}] > P[I_i = b]$ \item $Know(i,t)$ - the group of all $i$-knowers at the beginning of round $t$. In an \EDP{} algorithm, $Know(i,0) = \varnothing$ and $Know(i, \infty) = V \setminus \{i\}$ \end{itemize} Consider, for example, the network in Figure~\ref{fig_know}. At Round $0$, $A$ sends two different messages, whose XOR is its input, to $B$ and $C$. At Round $1$, $B$ and $C$ can pass these messages to $D$, even if this would not happen in a correct run. Thus: $Know(A,2) = \{D\}$, and $Know(A,3)=\{B,C,D\}$.
$R_A$ is a random number chosen by $A$.} \label{fig_know} \end{figure} \subsubsection{The RIS Problem} A solution to the \EDP{} problem satisfies the following conditions: \begin{enumerate} \item \textbf{Termination} - the algorithm must eventually terminate. \item \textbf{Input-sharing} - at termination, each agent knows the inputs of all other agents. \item \textbf{Resilient} - at any round $t$, Agent $i$ does not receive new information from agents in $Know(i,t)$. \end{enumerate} \textbf{Notice}: in a consensus protocol, if $j$ is an $i$-knower($t$), and $j$ can still influence the output at round $t$, then the protocol is not an $(n-1)$-resilient equilibrium. Thus, in an $(n-1)$-resilient equilibrium for consensus, no new information can be sent to $i$ from any $i$-knower($t$) at round $t$. \subsection{The effect of messages in an XOR computation} \label{section_cons_2_IKS} We prove that at the end of a distributed XOR-computing algorithm, if an agent is given all the chains of messages that have affected its run, it can infer the input of every other agent (Theorem~\ref{thm:inputEncoding}). This result applies to both deterministic and non-deterministic XOR algorithms. \textbf{Remark $1$}: In synchronous networks, an agent can pass information to its neighbor through a silent round. Hereafter, every protocol in which informative silent rounds (explained in the proof of Lemma~\ref{lemma:inputEncodingSize3} and defined formally in Appendix~\ref{section_formal_meaningful}) occur is altered, and a special message \emph{EMPTY} is sent instead on the corresponding link. \textbf{Remark $2$}: Hereafter, we consider networks in which every agent knows the topology of the network before the algorithm starts.
Otherwise, the coalition could always cheat and choose a topology in which \EDP{} is not possible (for example, a 1-connected topology). \begin{definition}[Messages recipient]\label{def:RecipientOfMessages))} Let $R$ be a run of the protocol and $C \subseteq V$ a group of agents. $$ Recv(C,t,R)=\{i\in V | i \text{ received a message from } C \text{ in round } t \text{ of } R\} $$ \end{definition} \begin{definition}[Agents affected by a message]\label{def:AgentsAffectedByMessage} In a run $R$, let $m$ be a message sent at round $t_{m}$ from $src_m$ to $dst_m =$ agent $j$. Then: \begin{itemize} \item $Aff_{(m,R,t_{m})} = \{j\}$ - Agent $j$ is directly affected by $m$. \item $\forall k>0: Aff_{(m,R,t_{m}+k)} = Aff_{(m,R,t_{m}+k-1)} \cup Recv(Aff_{(m,R,t_{m}+k-1)},R,t_{m}+k)$ - agents that were recursively affected by $m$. \end{itemize} $Aff_{(m,R,t)}$ illustrates that a message may affect more than just its recipient; its potential effect propagates through the network, reaching different agents through other messages. \begin{definition}[All the (chains of) messages that have an effect on agent $i$ in run $R$]\label{def:MessagesAffectingAgent))} \quad \begin{itemize} \item $Aff_{(i,R)}$ = $\{$ $<m,t_m,src_m,dst_m>$, $m$ sent in $R\ |\ i\in Aff_{(m,R,T_{end})}\}$ ($R$ terminates at $T_{end}$) \end{itemize} \end{definition} \begin{theorem}[The encoding of all inputs]\label{thm:inputEncoding} Let $R$ be a run of a distributed XOR-computing algorithm and let $i,j\in V$. Agent $i$ can compute $I_j$ from the following information: \begin{enumerate} \item $I_{i}$ - its input. \item The decision value, i.e., the XOR of all inputs. \item $Aff_{(i,R)}$ - all the messages in $R$ that have an effect on Agent $i$.
\end{enumerate} \end{theorem} To prove Theorem~\ref{thm:inputEncoding}, assume the following base case is correct (to be proved in the sequel): \begin{lemma}\label{lemma:inputEncodingSize3} Theorem~\ref{thm:inputEncoding} is correct for a network of size $3$, $V=\{i,j,k\}$. \end{lemma} \begin{proof}[Proof of Theorem~\ref{thm:inputEncoding}] Let $G=(V,E)$ be a network where $n > 3$, such that $i,j \in V$. Create a new network $G'$ in which agents $i$ and $j$ are as in $G$, but all other agents in $V \setminus \{i,j\}$ are clustered into one 'virtual' agent $k$. A distributed XOR algorithm for $G'$ is: \begin{itemize} \item Agent $k$ chooses $n - 2$ bits such that the XOR of these bits is its input $I_k$. \item Agents $i$ and $j$ behave in $G'$ as if they were in $G$, explicitly attaching to each message the id of its destination, while $k$ emulates the behavior of the other $n - 2$ agents in $V$, attaching to each message the id of its source. \end{itemize} Let $I_i^R$ and $D_i^R$ be the input and output of $i$ in run $R$. For any run $R$ of the algorithm in $G$, $\exists R'$ - a run of the algorithm in $G'$ - s.t.: \textbf{(1)} $I_i^R$=$I_i^{R'}$ , $I_j^R$=$I_j^{R'}$, \textbf{(2)} $D_i^R$=$D_i^{R'}$ and \textbf{(3)} $Aff_{(i,R)} \supseteq Aff_{(i,R')}$.\\ From Lemma~\ref{lemma:inputEncodingSize3} we know that $I_j^{R'}$ can be computed from $D_i^{R'}$, $I_i^{R'}$ and $Aff_{(i,R')}$. Therefore, $\forall i\neq j\in V$: $D_i^{R}$, $I_i^R$ and $Aff_{(i,R)}$ are enough to compute $I_j^R$. \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma:inputEncodingSize3}] $V = \{i,j,k\}$. Assume towards a contradiction that $\exists R_1, R_2$, two runs of the algorithm such that \begin{enumerate} \item $I_i^{R_1} $ = $ I_i^{R_2}$ - Agent $i$'s inputs in $R_1$ and $R_2$ are the same. \item $\bigoplus_{l \in V} I_l^{R_1}=\bigoplus_{l \in V} I_l^{R_2}$ - The decision value is the same in both $R_1$ and $R_2$.
\item $Aff_{(i,R_1)}$ = $Aff_{(i,R_2)}$ - Exactly the same set of messages affects $i$ in both runs. \item $I_j^{R_1}\neq I_j^{R_2}$ - Agent $j$'s input in $R_1$ is different than in $R_2$. \end{enumerate} Clearly from 1, 2, and 4 it must be that $I_k^{R_1}\neq I_k^{R_2}$. Towards a contradiction we construct a run $R_3$ in which $i$'s and $k$'s inputs are the same as in $R_1$ and $j$'s input is the same as in $R_2$, but in which, as we will show, $i$ nevertheless outputs the decision value of $R_1$, which is incorrect for $R_3$. In $R_3$, agents $i$ and $k$ start to perform their steps according to $R_1$ until the first round in which $i$ or $k$ receives a message that it does not receive in that round in $R_1$. Agent $j$ behaves the same as in $R_2$, until the first round, denoted round $T-1$, in which it receives a message $m$ it does not receive in that round in $R_2$. Notice that it is legal for all agents to act this way in round 0. Further, if $i$ and $k$ can continue according to $R_1$ and $j$ can continue according to $R_2$ until termination, then $i$ outputs the same value as it would in $R_1$, which is incorrect for $R_3$. \begin{description} \item[Observation $1$] From round $T$ until termination $j$ cannot send messages to $i$ in either $R_1$ or $R_2$; otherwise $m$'s effect would propagate to $i$, causing $Aff_{(i,R_1)}\neq Aff_{(i,R_2)}$ and contradicting point $3$ of the assumptions. \item[Observation $2$] Similarly, from round $T$ until termination, $j$ cannot send messages to $i$ in $R_3$; otherwise, let $t\geq T$ be the first round (after $T$) of $R_3$ in which $j$ sends a message to $i$. In $R_1$, $j$ does not send a message to $i$ in round $t$ (see Observation $1$). This means that this silent round $t$ of $R_1$ between $j$ and $i$ is informative (it tells $i$ that the run is $R_1/R_2$ and not $R_3$). Since we do not allow informative silent rounds (see Remark $1$), we reach a contradiction.
\end{description} Notice that by point $3$ of the assumptions, after round $T$, $j$ cannot even communicate with $i$ through $k$, since $m$'s effect would propagate to $i$ through $k$. From the two observations above, from round $T$ of $R_3$, $j$ cannot communicate with $i$, and from $i$'s perspective, $j$ is running $R_1$. The same logic applies to $k$: the first round in which it is illegal for $k$ to act according to $R_1$ is a round after which $k$ cannot send messages to $i$ (not even through $j$). Thus $i$'s experience throughout $R_3$ is the same as in $R_1$, resulting in $i$ producing an incorrect output, a contradiction. \end{proof} \subsection{Deterministic $(n-1)$-resilient Consensus implies \EDP{}, completing the proof} \label{section_obs_det_cons} In a deterministic synchronous binary consensus protocol, in which all agents start at the same round, for each input vector the run of the algorithm is fully determined. Consider a network running some deterministic binary consensus protocol, with an agent $i\in V$ and the coalition $V \setminus \{i\}$. Intuitively, agents in the coalition can choose in advance an input vector to be used in the algorithm. Thus, from the coalition's perspective, there can be only two possible runs: $R_0$, in which $I_i=0$, and $R_1$, in which $I_i=1$. For each agent in the coalition, there is a first round in which $R_0$ and $R_1$ differ; at that point this agent knows $I_i$. Thus, each agent in the coalition is in one of two states: it knows nothing about $I_i$, or it knows $I_i$. This is in contrast to non-deterministic algorithms; see, for example, Figure~\ref{fig_know}. Below we transform any deterministic $(n-1)$-resilient equilibrium for binary consensus into a deterministic \EDP{}. In Appendix~\ref{section_non_det} the difficulties in the non-deterministic case are explained.
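The recursive definition of the affected sets $Aff_{(m,R,t)}$ (Definition~\ref{def:AgentsAffectedByMessage}) can be illustrated with a short sketch (Python, for illustration only; representing a run as a list of (source, destination, round) triples is a choice made here, not part of the paper's formalism):

```python
# Illustrative sketch of Recv and Aff from the definitions above.
def recv(run, group, t):
    # Recv(C, t, R): agents that received a message from group C in round t
    return {dst for (src, dst, rnd) in run if rnd == t and src in group}

def affected(run, m, t_end):
    # Aff_{(m,R,t_end)}: agents recursively affected by message m
    src, dst, t_m = m
    aff = {dst}                       # the recipient is directly affected
    for t in range(t_m + 1, t_end + 1):
        aff |= recv(run, aff, t)      # the effect propagates via later messages
    return aff

# Example run: A->B at round 0, then B->D and C->D at round 1.
run = [("A", "B", 0), ("B", "D", 1), ("C", "D", 1)]
assert affected(run, ("A", "B", 0), t_end=1) == {"B", "D"}
```

In the example, the message sent from $A$ to $B$ affects $D$ as well, because $B$ forwards a message to $D$ in a later round; this is exactly the propagation that the buffers in the construction below must capture.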
\begin{theorem} \label{thm:consImpliesRKS} If there exists a deterministic $(n-1)$-resilient equilibrium for binary consensus $A$ on a network $G=(V,E)$, then there exists an algorithm $\tilde{A}$ for \EDP{} on $G$. \end{theorem} \begin{proof} In $\tilde{A}$, each agent $i$ runs $A$ with the following modifications: \begin{itemize} \item For each message $m$ that $i$ receives, $i$ appends $<m,src_{m},dst_{m},t_{m}>$ to a local buffer $B$ of messages that have affected it. \item Agent $i$ appends $B$ to each message it sends. \item Agent $i$ adds to $B$ all the information piggybacked on incoming messages. \end{itemize} In this new algorithm $\tilde{A}$, every message propagates in the network, reaching all the agents it affects. By the end of the algorithm, the buffer maintained by agent $i$ contains $Aff_{(i,R)}$, where $R$ is the run of the original consensus protocol $A$. By Theorem~\ref{th_xor}, $A$ is an XOR-computing protocol, and by Theorem~\ref{thm:inputEncoding}, $i$'s buffer contains enough information to infer all inputs. Thus $\tilde{A}$ is an \EDP{} protocol. It remains to prove that $\tilde{A}$ is resilient. An input sharing protocol is resilient (Subsection~\ref{section_iks}) if at any round $t$, $i$ does not receive new information from agents in $Know(i,t)$. As stated before, this requirement applies to an $(n-1)$-resilient equilibrium for binary consensus as well. Thus, to show that $\tilde{A}$ is resilient, it is enough to show that $\forall i\in V$: \begin{itemize} \item In each round $t$ of $\tilde{A}$, $i$ receives messages from the same neighbors it receives from in $A$ \item In each round $t$ of $\tilde{A}$, $\forall j\neq i$: $j\in Know(i,t)$ in $\tilde{A}$ $\implies$ $j\in Know(i,t)$ in $A$ \end{itemize} The first point is immediate from the construction of $\tilde{A}$. For the second point, observe some agent $j$ at round $t$ of $A$ which is not an $i$-knower in $A$.
For $j$ to become an $i$-knower($t$) in $\tilde{A}$, the coalition must send $j$ enough information by round $t$ for it to make a 'good' guess about $I_i$. There are two kinds of paths in $G$ by which the coalition can send information to $j$: paths that do not pass through $i$, and paths that do. Through paths not including $i$, the coalition can pass information at the same pace in both $A$ and $\tilde{A}$. Since $j\notin Know(i,t)$ in $A$, using these paths alone is not enough to make $j$ an $i$-knower($t$) in $\tilde{A}$. Regarding paths that include $i$: as argued in the beginning of this subsection, in a \emph{deterministic} $(n-1)$-resilient equilibrium for binary consensus, if a member of the coalition has any information about $I_i$, then that member \emph{knows} $I_i$. Therefore, in $A$, $i$ must not receive messages from members of $Know(i,t)$ at round $t$. Thus if the coalition has information it wants to pass to $j$, it cannot do so using paths including agent $i$, since $i$ does not accept and propagate messages from $i$-knowers. To conclude, if $j$ is an $i$-knower in $\tilde{A}$, then $j$ is an $i$-knower in $A$. Since $A$ is an $(n-1)$-resilient equilibrium for consensus, $\tilde{A}$ is resilient as well. \end{proof} \subsubsection{Completing the proof: necessary and sufficient conditions for deterministic consensus} \label{section_proof} \begin{proof}[{\bf Proof of Theorem~\ref{th_iff}$\Leftarrow$}] Assume that the three conditions hold, and consider a simple $(n-1)$-resilient equilibrium for binary consensus: run the \EDP{} algorithm and output the XOR of all inputs. Since the \EDP{} algorithm is resilient, no coalition has an incentive to cheat. \end{proof} \begin{proof}[{\bf Proof of Theorem~\ref{th_iff}$\Rightarrow$}] Assume that an $(n-1)$-resilient equilibrium for binary consensus exists. By Corollary~\ref{cor_noeven} and Theorem~\ref{th_uniform}, $n$ is odd and the input distribution is uniform. By Theorem~\ref{thm:consImpliesRKS}, \EDP{} is possible.
\end{proof} \section{Discussion} \label{section_discussion} Surprisingly, while there is an equilibrium for binary consensus resilient to coalitions of $n-1$ agents, no such equilibrium exists for multi-valued consensus. This is the first model we know of in which there is a separation between binary and multi-valued consensus. Intuitively, this is because a coalition with a preference towards $v$ has an incentive to cheat and act as if the input of all agents in the coalition is $v$, thus lowering the number of possible decision values (due to validity) to at most two. Consider, for example, the standard bit-by-bit reduction from binary to multi-valued consensus: the probability to decide $v$ is now at least $\frac{1}{2}$ instead of $\frac{1}{r}$, since the decision value is determined by the decision on the first bit of the coalition input that differs from the input of the honest agent. We conjecture that this intuition holds even for smaller coalitions, down to a single cheater. The results in $\S \ref{section_necessary}$ and $\S \ref{section_multi}$ hold regardless of the network topology, scheduling models, or cryptographic solutions, as they are based solely on the input values and utility of the agents. Furthermore, we present necessary and sufficient conditions for an $(n-1)$-resilient equilibrium for binary \emph{deterministic} consensus using the resilient input sharing (RIS) problem. This in fact means that an agent cannot hide its input from the rest of the network in any $(n-1)$-resilient equilibrium protocol that computes XOR, i.e., even though we only compute the XOR of the inputs, at the end of the protocol all agents can deduce the input values of all other agents. There are several open directions for research: \begin{itemize} \item Extending the equivalence result to \emph{non-deterministic} consensus and RIS.
\item Can binary consensus be solved without the conditions of odd network size and uniform input distribution for coalitions of a smaller size, such as $n-2$ or $\frac{n}{2}$? \item Does an equilibrium for multi-valued consensus exist for coalitions of size $n-2$ or less? \end{itemize} \clearpage
\section{Introduction} In recent years the phase diagram of strongly interacting matter has become the focus of theoretical and experimental attention. It is believed that the underlying fundamental theory of the strong interaction is described by Quantum Chromodynamics (QCD). At finite temperature and finite densities QCD predicts two different phase transitions which are associated with two opposite quark mass limits (for reviews see e.g.~\cite{MeyerOrtmanns:1996ea, Rischke:2003mt}). For vanishing quark masses, i.e., in the chiral limit, QCD has an exact global $U(N_f)_L \times U(N_f)_R$ chiral symmetry, where $N_f$ denotes the number of quark flavors. The axial $U(1)_A$ anomaly, induced by instantons, breaks the axial $U(N_f)_A$ part of the chiral symmetry explicitly to $SU(N_f)_A$. In addition, in the vacuum the $SU(N_f)_A$ is spontaneously broken by a finite expectation value of the quark condensate $\langle \bar q q \rangle \neq 0$. As a consequence of the Goldstone theorem, $N_f^2-1$ massless pseudoscalar Goldstone bosons are expected to emerge. For $N_f=2$ the associated Goldstone bosons are the three pseudoscalar pions and for $N_f=3$ one has additionally the four kaons and the pseudoscalar $\eta$ meson which together with the pions constitute the pseudoscalar meson octet. Once the quark masses obtain finite values, i.e., leaving the chiral limit, chiral symmetry is broken explicitly and all these Goldstone bosons acquire masses as measurable in the experiment. However, at high temperatures and densities this symmetry breaking pattern changes drastically. In hot and dense matter the $SU(N_f)_A$ symmetry and additionally, if instantons are sufficiently screened, the explicitly broken axial $U(1)_A$ symmetry will become restored again. As a consequence, the masses of the pseudoscalar Goldstone bosons will degenerate with the masses of the corresponding chiral scalar partners, signaling in this way the restoration of chiral symmetry. 
The associated phase transition is commonly referred to as the chiral phase transition. In the opposite quark mass limit, the so-called quenched limit of QCD with infinitely heavy quark masses, QCD reduces to a pure $SU(N_c)$ gauge theory which is invariant under a global $Z(N_c)$ center symmetry. In contrast to the chiral symmetry, the center symmetry is spontaneously broken at high temperatures and densities, i.e., in the color deconfined quark-gluon plasma phase, and is restored in the hadronic phase at small temperatures and densities. The associated phase transition from the hadronic (glueball) phase to the color deconfined plasma phase is the confinement/deconfinement phase transition. The center symmetry is always broken explicitly when dynamical quarks are present, i.e., when the quenched limit of QCD is left. Both phase transitions are conceptually distinct phenomena of QCD. For the experiment it is important to investigate and understand the interplay between these phase transitions, in particular for realistic quark masses. Based on theoretical models and QCD lattice simulations, a generic phase diagram for $N_f = 2+1$ quark flavors in the temperature and (baryo)chemical potential $\mu$ plane can be drawn as in the left panel of Fig.~\ref{fig:columbia}. Here not only the chiral and deconfinement phase transition from the hadronic fluid phase to the quark-gluon plasma are shown. The regions probed by some already running or planned relativistic heavy-ion collision experiments such as ALICE, RHIC, CBM and SPS are also marked in the figure. So far, it is still an open issue whether both phase transitions, the chiral and deconfinement transition, take place at the same temperatures and densities, thus yielding a single transition or crossover line in the QCD phase diagram as indicated in the figure. For example, McLerran and Pisarski suggested that this is not the case at moderate temperature and large chemical potential.
Based on large-$N_c$ arguments, they concluded that in this phase diagram region there might be a new, so-called quarkyonic phase which is still confining but chirally symmetric \cite{McLerran:2007qj}. \begin{figure}[tb] \begin{minipage}[t]{0.5\textwidth} \epsfig{file=qcd_phase_diagram.eps,height=6.5cm} \end{minipage} \hspace*{1cm} \begin{minipage}[t]{0.4\textwidth} \epsfig{file=columbia_plot.eps,height=7cm} \end{minipage} \begin{center} \begin{minipage}[t]{\textwidth} \caption{Schematic phase transition behavior of $N_f = 2+1$ flavor QCD in the ($T,\mu$) plane (left panel) and for vanishing chemical potential in the ($m_{u,d}, m_s$) quark mass plane (right panel)\cite{Brown:1988qe}. \label{fig:columbia}} \end{minipage} \end{center} \end{figure} The situation is even more intricate since some properties of the chiral phase transition, such as its order, depend on $N_f$ and the strength of the axial anomaly. The status for vanishing chemical potential is partly summarized in the right panel of Fig.~\ref{fig:columbia}: in the limiting case of two massless light quarks, $m_{u,d}=0$, and an infinite strange quark mass $m_s$, which corresponds to $N_f=2$, it is conjectured that the finite temperature chiral phase transition is of second order for a constant anomaly strength and lies in the universality class of the Heisenberg $O(4)$ model in three dimensions. If the anomaly strength is identified with the instanton density, a temperature-dependent strength of the axial anomaly would arise. It is supposed that the strength vanishes at high temperatures. This temperature-dependent axial anomaly can change the chiral transition to first order. The chiral transition will also be of first order once the strange quark mass drops below a certain critical value. This critical mass value is a tricritical point of the second-order transition line.
For three vanishing quark masses the first-order transition has been confirmed by a renormalization group analysis and is independent of the strength of the axial anomaly~\cite{Pisarski:1983ms}. The first-order region in the ($m_{u,d},m_s$) plane still persists for small light and strange quark masses and is finally terminated by a second-order transition boundary. This boundary line separates the first-order region from the crossover region in the ($m_{u,d},m_s$) plane. QCD lattice simulations indicate that the physical mass point, marked as a red point in Fig.~\ref{fig:columbia}, is located in the crossover regime. In contrast to the boundary line at $m_{u,d}=0$, which lies presumably in the $O(4)$ universality class, the universality class changes for finite $m_{u,d}$ to the one of the $Z(2)$ three-dimensional Ising model. For finite chemical potential the area of the first-order regime in the quark mass plane also changes. If it grows for increasing chemical potential, the boundary may hit and pass over the position of the physical mass point, turning the chiral phase transition from a crossover to a second-order or first-order transition; as a consequence, the existence of a critical end point (CEP) in the QCD phase diagram becomes possible. This scenario is denoted as the standard one. On the other hand, if the first-order region shrinks for increasing chemical potential, the chiral phase transition remains a crossover and the existence of a CEP in the phase diagram is not possible; this is commonly labeled the non-standard scenario and has been predicted by de Forcrand and Philipsen \cite{Forcrand2002}. The existence or exclusion of a QCD critical end point has not yet been confirmed by QCD lattice simulations. At finite chemical potential these Monte-Carlo simulations suffer from the notorious sign problem because the quark determinant in the QCD partition function becomes a complex quantity, which spoils its probability interpretation.
Recently, some progress has been achieved in extrapolating zero chemical potential Monte Carlo simulations to finite chemical potentials \cite{Schmidt:2006us}. However, all these extrapolation techniques are still limited to small chemical potentials. Furthermore, both scenarios mentioned above, the standard and the non-standard one, are seen on the lattice depending on the extrapolation technique. So far, only model studies give direct and indirect evidence for the existence of the critical end point in the whole phase diagram, but they cannot predict its precise location. The location, and even the existence, of the CEP also depends on the magnitude of the axial anomaly and on vector-channel interactions \cite{Struber:2007bm, Fukushima:2008is}. This is plausible because the zeroth component of the vector-channel interaction couples directly to the quark density, which certainly modifies the interactions in a finite-density environment. The repulsive vector-vector interaction weakens the first-order transition. As a consequence, a standard scenario could change into a non-standard one, depending, of course, on the strength of the vector coupling. Altogether, the influence of the axial anomaly and of the vector-vector interaction on the QCD phase structure is of relevance and should be investigated quantitatively. Moreover, in order to bridge the gap between existing lattice data at zero chemical potential and interesting regions of the QCD phase diagram at finite chemical potential, effective models are useful and indispensable tools. They share the relevant symmetries with the underlying QCD and allow investigations in a simplified framework. One example of such effective models is the chiral quark-meson model, which allows one to explore the chiral phase transition, see e.g.~\cite{Schaefer:2006sr}.
\section{Chiral effective quark-meson models} Based on a previous analysis within an effective quark-meson model with two quark flavors \cite{Schaefer:2007ep, Schaefer:2006ds, Schaefer:2004en}, the extension to three quark flavors is straightforward and enables the investigation of the chiral $SU(3)\times SU(3)$ symmetry restoration with temperature and quark chemical potential, including the axial $U(1)_A$ anomaly. Here we briefly summarize some results of Ref.~\cite{Schaefer:2008hk} where the three-flavor quark-meson model has been treated in mean-field approximation. In this approximation the quantum and thermal meson fluctuations of the grand potential are neglected while the quarks/antiquarks are retained as quantum fields. The resulting integration over the Grassmann fields finally yields the $T$- and $\mu$-dependent quark/antiquark contribution $\Omega_{\bar q q} (T,\mu)$ of the grand potential, wherein the ultraviolet divergent vacuum contribution has been neglected. The total grand potential is then a sum of $\Omega_{\bar q q} (T,\mu)$ and a meson potential $U(\sigma_x, \sigma_y)$, \begin{equation} \label{eq:qmpot} \Omega(T,\mu) = \Omega_{\bar q q} (T,\mu) +U(\sigma_x, \sigma_y)\ , \end{equation} where $\sigma_x$ and $\sigma_y$ denote the nonstrange and strange condensates, respectively. The condensates are the corresponding chiral order parameters and depend on $T$ and $\mu$. Note that the quark contribution also depends on these condensates implicitly via the quark masses. Since we consider symmetric quark matter, a uniform quark chemical potential has been introduced. The resulting phase diagrams with explicit $U(1)_A$ symmetry breaking for three different values of $m_\sigma$ are shown in the left panel of Fig.~\ref{fig2}.
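For orientation, the mean-field quark/antiquark contribution with the divergent vacuum part dropped takes the standard one-loop form, sketched here with $N_c=3$ colors and quasi-particle energies $E_f=\sqrt{\vec p^{\,2}+m_f^2}$, where the constituent quark masses $m_f$ are generated by the condensates $\sigma_x$ and $\sigma_y$:
\begin{equation}
\Omega_{\bar q q}(T,\mu) = -2 N_c\, T \sum_{f=u,d,s} \int\!\frac{d^3 p}{(2\pi)^3} \left[ \ln\left(1+e^{-(E_f-\mu)/T}\right) + \ln\left(1+e^{-(E_f+\mu)/T}\right) \right]\,.
\end{equation}
The two logarithms collect the thermal quark and antiquark occupations, which explains the explicit $T$- and $\mu$-dependence quoted above.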
\begin{figure}[tb] \begin{minipage}[t]{0.3\textwidth} \epsfig{file=mwmf_phasediag_muB2.eps,height=6cm} \end{minipage} \hspace*{3cm} \begin{minipage}[t]{0.3\textwidth} \epsfig{file=mwmf_phasediag_muB_mpiK.eps,height=6cm} \end{minipage} \begin{center} \begin{minipage}[t]{\textwidth} \caption{ Left: Phase diagrams with axial anomaly for different values of the sigma mass: $m_{\sigma} = 600$ MeV (lower lines), $800$ MeV and $900$ MeV (upper line). Right: Phase diagrams with axial anomaly for $m_\sigma =800$ MeV but different pion masses $m_{\pi}/m_{\pi}^{*}= 0.49$ (lower line), $0.6, 0.8, 1.0, 1.2, 1.36$ (upper line) where the ratio $m_{\pi}/m_{K}=m_{\pi}^{*}/m_{K}^{*}$ is kept fixed with $m_{\pi}^{*}=138$ MeV and $m_K^{*}=496$ MeV. \label{fig2}} \end{minipage} \end{center} \end{figure} For certain values of $m_\sigma$ a CEP is found. Compared to recent lattice simulations, this point is located at smaller temperatures and larger chemical potentials. In any case, its exact location in the phase diagram cannot be predicted by effective models. The mass of the $\sigma$ meson is one model input parameter which is poorly known experimentally. We therefore use different input values for $m_\sigma$ in order to study its mass dependence. For increasing $m_\sigma$ the location of the CEP moves towards the $\mu$-axis. Already for $m_\sigma=900$ MeV the CEP disappears and the chiral phase transition is a smooth crossover over the entire phase diagram. Without the axial anomaly, almost no change of the phase boundary and hence of the location of the CEP is seen. In Ref.~\cite{Struber:2007bm} a gauged chiral $U(2)\times U(2)$ symmetric linear sigma model without quarks within the 2PI resummation scheme has been considered. If the influence of the vector mesons is neglected, the opposite behavior of the chiral transition as a function of $m_\sigma$ is observed: at $\mu=0$ a crossover is found for a small $\sigma$ mass and a first-order transition for a large $\sigma$ mass.
On the other hand, if vector mesons are incorporated, the transition becomes a more rapid crossover, bringing one closer to the second-order critical point. Independent of the $U(1)_A$ symmetry breaking a first-order phase transition in the chiral limit is expected. This behavior is shown in the right panel of Fig.~\ref{fig2} where the phase diagrams including the anomaly for varying pion and kaon masses are shown for $m_\sigma=800$ MeV. For this figure a path in the $(m_\pi, m_K)$-plane through the physical mass point towards the chiral limit has been chosen by varying the pion mass while keeping the ratio $m_\pi/m_K$ fixed. On the one hand, for a pion mass $1.36$ times the physical one, the CEP lies exactly on the $\mu$-axis (for $m_\sigma = 800$ MeV) and the chiral transition is a smooth crossover over the entire phase diagram. On the other hand, for decreasing pion masses the location of the CEP moves towards the $T$-axis, and for a pion mass below half of the physical one the chiral transition turns into a first-order one for all densities and no CEP exists any longer. This behavior of the CEP excludes the non-standard scenario described in the introduction. The chiral critical surface, which is defined by the value of the critical chemical potential of the CEP for a given mass pair ($m_\pi, m_K$), is shown in Fig.~\ref{fig3} as a function of the pion and the kaon masses with (left) and without (right) $U(1)_A$ symmetry breaking. For values of the chemical potential above the surface the chiral transition is of first-order, while for values below the surface the transition lies in the crossover region. With or without anomaly the surface grows out perpendicularly from the mass plane at $\mu=0$ and the tangent plane to the critical surface has a decreasing slope for larger masses. Consequently, the first-order region grows for increasing chemical potentials.
Since the critical chemical potential cannot grow arbitrarily, the surface must have a boundary at larger pion and kaon masses, which is not shown in the figure. \begin{figure}[tb] \begin{minipage}[t]{0.3\textwidth} \epsfig{file=mwmf_pset1003_matte_2.eps,height=7cm} \end{minipage} \hspace*{3cm} \begin{minipage}[t]{0.3\textwidth} \epsfig{file=mwmf_pset3_matte_2.eps,height=7cm} \end{minipage} \begin{center} \begin{minipage}[t]{\textwidth} \caption{Chiral critical surface as a function of the pion and kaon masses for $m_\sigma = 800$ MeV with (left) and without axial anomaly (right). The arrow points to the critical quark chemical potential at realistic pion and kaon masses and is denoted as physical point.\label{fig3}} \end{minipage} \end{center} \end{figure} Furthermore, the effect of the $U(1)_A$ anomaly on the shape of the surface is rather marginal for kaon masses greater than $400$ MeV. This is reasonable since for larger kaon masses the strange sector effectively decouples from the light nonstrange sector and the chiral transition is basically driven by the light nonstrange particles. For kaon masses smaller than $400$ MeV a considerable influence of the anomaly on the shape of the critical surface is seen: without the anomaly the region of first-order phase transitions at $\mu=0$ is considerably reduced. \section{Emulating the Polyakov-loop dynamics} So far only the chiral phase transition has been considered. Recently, the quark-meson model has been combined with the Polyakov loop, which allows one to investigate both the chiral and the deconfinement phase transitions. In general, the Polyakov loop $\Phi$ is a complex scalar field and serves as an order parameter for the confinement/deconfinement transition in the quenched limit. Since it is related to the free energy of a static test quark, it vanishes in the confined phase, where the free energy of a single quark diverges, and takes a finite value in the deconfined phase.
It is linked to the $Z(N_c)$ center symmetry of the $SU(N_c)$ gauge group. Thus, the confining phase is center symmetric, whereas the center symmetry is spontaneously broken in the deconfined phase. \begin{figure}[tb] \begin{minipage}[b]{0.3\textwidth} \epsfig{file=mwmf_pset_121002_pd.eps,totalheight=6cm} \vspace*{0.0mm}% \end{minipage}% \hspace*{3cm}% \begin{minipage}[b]{0.3\textwidth} \epsfig{file=mwmf_lp_pressureer.eps,clip=,totalheight=6.5cm} \end{minipage} \begin{center} \begin{minipage}[t]{\textwidth} \caption{\label{fig:4}Left: Phase diagram for the three flavor PQM model with a logarithmic Polyakov loop potential. Right: Scaled pressure for the PQM model for three different Polyakov loop potentials (labeled as pol, log and Fuku) in comparison with recent lattice data for $2+1$ flavors \cite{Cheng:2007jq}. The scaled pressure without the Polyakov loop dynamics (QM) is also shown. } \end{minipage} \end{center} \end{figure} In the presence of dynamical quarks and nonvanishing chemical potential it is not clear whether the Polyakov loop still serves as an order parameter. In this case, the free energy does not diverge anymore and the order parameter is always nonzero. Because the free energies of quarks and antiquarks are different in the medium, $\Phi$, related to quarks, and the Hermitian (charge) conjugate $\bar\Phi$, related to antiquarks, will differ. In pure Yang-Mills theory the real mean values $\Phi$ and $\bar\Phi$ are given by the minima of an effective Polyakov loop potential $\mathcal U (\Phi,\bar\Phi)$ which can be constructed from lattice data. Finally, the dynamical quark sector of QCD is included by coupling the Polyakov loop to the quark sector of the quark-meson model which leads to the Polyakov-quark-meson (PQM) model with an interaction potential between quarks, mesons and the Polyakov loop variables. 
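A frequently used parametrization of $\mathcal U$ is the logarithmic ansatz in the spirit of~\cite{Roessner:2006xn}, sketched here; the precise temperature-dependent coefficients $a(T)$ and $b(T)$ are fixed by a fit to pure gauge lattice data:
\begin{equation}
\frac{\mathcal U(T;\Phi,\bar\Phi)}{T^4} = -\frac{a(T)}{2}\,\bar\Phi\Phi + b(T)\,\ln\!\left[1-6\,\bar\Phi\Phi+4\left(\Phi^3+\bar\Phi^3\right)-3\left(\bar\Phi\Phi\right)^2\right]\,.
\end{equation}
The argument of the logarithm vanishes as $\Phi,\bar\Phi \to 1$, which automatically limits the Polyakov loop variables to values below one.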
For two quark flavors the PQM model has been introduced in~\cite{Schaefer:2007pw}, wherein a polynomial ansatz for the Polyakov loop potential has been used. Here, we extend the two-flavor PQM model to three quark flavors together with three different realizations of the Polyakov loop potential $\mathcal U (\Phi,\bar\Phi)$ \cite{Fukushima:2008wg, Schaefer:2007pw, Roessner:2006xn}. For three quark flavors the total grand potential of the PQM model is a sum of three contributions \begin{equation} \Omega(T,\mu) = \Omega_{\bar{q}{q}}(T,\mu;\Phi,\bar{\Phi}) + U(\sigma_x,\sigma_y) + \mathcal{U}(T;\Phi,\bar{\Phi})\ , \end{equation} where the quark/antiquark contribution $\Omega_{\bar{q}{q}}$ is modified by the Polyakov loop variables. The mesonic contribution $U(\sigma_x,\sigma_y)$ is the same as for the quark-meson model, Eq.~(\ref{eq:qmpot}). In Fig.~\ref{fig:4} (left panel) the resulting phase diagram of the PQM model with three quark flavors for a logarithmic Polyakov loop potential, adopted from~\cite{Roessner:2006xn}, is shown. The phase boundaries are extracted from the peaks in the temperature derivatives of the corresponding light nonstrange condensate and of the Polyakov loop variable. The model parameters are adjusted in such a way that both phase transitions, the chiral and the deconfinement transition, coincide at $\mu=0$, as indicated by recent lattice simulations~\cite{Cheng:2006qk}. In general, the chiral transition temperature is shifted to higher values if the Polyakov loop is included, and a more rapid crossover is observed. For increasing chemical potential both transitions still coincide initially but then start to deviate before the chiral critical point is reached. The chiral transition always occurs below the deconfinement transition. For small temperatures the chiral transition becomes of first-order and both transitions are well separated.
This separation might be related to the existence of a quarkyonic phase but work in this direction is still in progress. The coupling of the quark dynamics to the Polyakov loop improves the equation of state in the chirally broken phase at low temperatures and densities. This is demonstrated in Fig.~\ref{fig:4} (right panel) where the scaled pressure, normalized to the Stefan-Boltzmann pressure, is shown at $\mu=0$ for three different realizations of the Polyakov loop potential. In comparison, recent $2+1$-flavor lattice data for $N_\tau = 6$~\cite{Cheng:2007jq} are also included and are in reasonable agreement with our results. The suppression of the quark contribution in the confined phase is clearly visible compared to the pure QM model calculation. \subsubsection*{Acknowledgments} BJS is grateful to the organizers of the 30th International School of Nuclear Physics in Erice, Italy, for the invitation and acknowledges the European Physical Society (EPS) for the scholarship. MW was supported by the BMBF grant 06DA123.
\section{Introduction} \label{sec:intro} \setcounter{equation}{0} \setcounter{footnote}{0} The successes of the Standard Model (SM) became so boring that various physicists wonder if they contain an important message: the lack of evidence for new physics pushes many proposed solutions of the Higgs mass hierarchy problem into more-or-less unnatural corners of their parameter space. Global fits do not provide much intuition into the origin of the strongest constraints, or even into the number of new-physics parameters that are strongly constrained. Here we present an efficient and simple general analysis of electroweak precision data using an effective-theory description. Assuming that new physics lies somewhat above the weak scale, its low-energy effects can be described by an effective Lagrangian that contains the leading non-renormalizable terms. Even assuming that the new physics is generation independent (i.e.\ no new flavour physics), previous analyses identified an irreducible set of 10 gauge-invariant operators~\cite{BS} contributing to precision measurements at and below the $Z$-pole. This list of operators grew to about 20~\cite{skiba} after the relevance of LEP2 precision measurements above the $Z$-pole was pointed out~\cite{STWY}. We show here that experiments have so far precisely probed only about 10 combinations of the 20 operators. However, if one follows the traditional route of constraining new physics, one must compute all operators and then perform a global fit to all 20 parameters: otherwise one cannot know whether the new physics corresponds to a strongly or weakly constrained combination of higher dimensional operators. The main aim of this paper is to develop a simpler strategy: we identify a minimal set of parameters that are strongly constrained, extending the $Z$-pole parameters of~\cite{PT}. In this way, cancellations between the various operators, like the ones pointed out in~\cite{GST}, are already built into this formalism.
The data require almost all of these parameters to be compatible with the SM at the {\em per mille} level. Moreover, we want our minimal set to capture the main features of the measurements: a reasonably accurate bound on the scale of new physics can be extracted by considering just our minimal set of parameters, without the need for a complete analysis. We start by identifying the sub-set of most precise measurements, mostly performed at $e^+e^-$ colliders (LEP1, LEP2, SLD). Those experiments studied all $f\bar{f}$ final states, but could measure leptonic final states more precisely than hadronic final states. We will show that the corrections to all leptonic data can be converted into oblique corrections to the vector boson propagators, and condensed into the seven parameters $\hat{S}$, $\hat{T}$, $W$, $Y$, $X$, $\hat U$ and $V$ defined in~\cite{STWY}. (Unlike in~\cite{STWY} we do not restrict our attention to oblique new physics). Indeed, starting with a generic set of higher-dimensional operators, one can use the three equations of motion for $W^+,Z,\gamma$ to eliminate the three currents involving charged leptons from the higher dimensional operators: \begin{eqnarray}\label{eq:currents} \bar{e}_L \gamma_\mu e_L,\qquad \bar{e}_R \gamma_\mu e_R,\qquad \bar{e}_L\gamma_\mu \nu_L+\hbox{h.c.}\,.\end{eqnarray} Parameterizing the new physics in terms of corrections to vector boson propagators is convenient because: i) in many models the oblique parameters can be calculated directly~\cite{arabic,STWY,LHSTWY}, without having first to calculate the general set of induced higher dimensional operators; ii) it is also easier to compute how the observables are affected by oblique corrections; iii) it allows one to unambiguously identify the most relevant corrections to electroweak precision measurements in any generic model.
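Schematically, the elimination proceeds through the gauge field equations of motion; for the hypercharge boson one has, up to normalization and sign conventions,
\begin{equation}
\partial^\nu B_{\nu\mu} = g' \left[ \sum_f Y_f\, \bar f \gamma_\mu f + \frac{i}{2}\left( H^\dagger D_\mu H - (D_\mu H)^\dagger H \right) \right]\,,
\end{equation}
so that any operator containing, say, the current $\bar{e}_R \gamma_\mu e_R$ can be traded for operators built from $\partial^\nu B_{\nu\mu}$, the Higgs current and the remaining fermion currents; the first two pieces correct only the vector propagators and form factors.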
We will show that already this subset of parameters is enough to establish the correct bound on generic models within a `typical' 20\% accuracy. Thus for most models it suffices to calculate the seven generalized oblique parameters to establish a reasonably accurate bound on the scale of new physics, with the caveat that the approximation fails spectacularly if, for some reason, new physics is leptophobic (i.e.\ if quarks are much more strongly affected than leptons). A more accurate approximation is obtained by adding more parameters in the quark sector. Basically, we keep the oblique approximation in the ${\rm U}(1)_Y$ sector but not in the ${\rm SU}(2)_L$ sector. In practice, this amounts to adding 2 more parameters that describe the coupling of the left-handed quarks (which is better measured because the larger SM coupling to the $Z$ enhances the interference term with respect to the right-handed components). Finally, we allow the third generation of quarks to behave differently from lighter quarks, and describe this possibility by adding one extra parameter: the traditional $\varepsilon_b$~\cite{epsb}. This choice is motivated by theoretical considerations (in many models of electroweak symmetry breaking the top sector is special), by experimental considerations ($b$-tagging allows one to probe $b$-quarks more precisely than lighter quarks) and by phenomenological considerations (flavor universality can be significantly violated only in the third generation). We finally present numerical fits for our $7+2+1$ new-physics parameters, $$\hat{S},\hat{T},\hat{U},V,X,W, Y, C_q, \delta \varepsilon_q,\delta\varepsilon_b$$ emphasizing the combinations that are most strongly constrained. Furthermore, in section~\ref{sec:WX} we show that first principles imply the positivity constraints $W, Y \ge 0$. \medskip The paper is organized as follows: in section~\ref{sec:parameters} we introduce our formalism and identify the relevant parameters.
In section~\ref{sec:fit} we fit these parameters and compare the results with the complete analysis, showing how accurate our approximation typically is. In section~\ref{sec:example} we apply the formalism to the specific case of various extra $Z'$ bosons, compiling present constraints. In section~\ref{sec:WX} we demonstrate the positivity constraint on the oblique parameters $W$ and $Y$. In the appendix, we explicitly write the relation between our parameters and a general basis of gauge-invariant operators. \section{The minimal set of constrained parameters} \label{sec:parameters} \setcounter{equation}{0} The effects of heavy new physics on precision electroweak observables can be described by adding to the SM Lagrangian dimension-6 operators that depend on the SM fields: the gauge bosons $W^\pm$, $Z$ and the photon $A$, the Higgs vev $v$, the fermionic currents $J_{ff'} = \bar f \gamma^\mu f'$, and their derivatives: \begin{equation} \mathscr{L}_{\rm BSM}\, \big( W^\pm_\mu, Z_\mu, A_\mu, \partial_\mu, v, J_{ff'} \big)\,. \label{LBSM} \end{equation} We are interested here in terms that do not violate flavor and CP (and, of course, electric charge and color should also be conserved). The electroweak gauge symmetry ${\rm SU}(2)_L\otimes{\rm U}(1)_Y$, spontaneously broken by the Higgs vev, implies some relations among the coefficients of the dimension-6 terms. There are many such operators~\cite{BW}. After eliminating the operators that do not affect precision data and the operators that on-shell are equivalent to combinations of other operators, one still has to deal with many operators: 10 if LEP2 is not included~\cite{BS}, while 20 operators were considered in~\cite{skiba} once LEP2 is included. In agreement with~\cite{GST} (where it was pointed out that two combinations can be expressed in terms of unconstrained operators) we find that precision data are affected by 18 independent operators, listed in Appendix~\ref{app}.
\begin{figure} $$\includegraphics[width=0.8\textwidth]{sigmas}$$ \caption{\em \label{fig:sigmas} The red dots are the ordered eigenvalues of the full error matrix, which describe the sensitivity of present data (upper dots correspond to more precise combinations). Precision data significantly constrain only about 10 new-physics effects. The blue circles show the same eigenvalues recomputed making our simplifying approximation.} \end{figure} In practice, however, many combinations of different operators are poorly constrained. A global analysis contains this information: one can obtain electroweak precision bounds on a model by computing all induced higher dimensional operators. Our aim is to simplify this program by finding suitable variables in which possible cancellations are manifest, and by dropping the unnecessary information. In order to find the number of parameters that are strongly constrained by the electroweak precision data we first perform the traditional global analysis including all relevant higher dimensional operators. In Fig.~\ref{fig:sigmas} we plot the {\em eigenvalues} of the error matrix, computed in the uniformly-normalized basis described in Appendix~\ref{app}.\footnote{We use the $\chi^2$ code employed in~\cite{BS, STWY}, updated to the most recent value of the top mass~\cite{mtop}. It agrees reasonably well with the equivalent $\chi^2$ published in~\cite{skiba}. We emphasize, however, that the Higgs mass dependence is not correctly approximated by keeping only the leading logarithm analytically computed in the heavy Higgs limit, see also~\cite{STWY}.} This automatically identifies all correlations of theoretical, experimental and accidental nature. For example: i) if one measurement constrains one combination of many operators, it will appear here as one constraint; ii) if a combination of operators does not affect any observable, it will appear here as a zero eigenvalue.
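The eigenvalue analysis can be illustrated with a toy example (not the actual fit of this paper): for a Gaussian $\chi^2$, the inverse error matrix is the Fisher matrix $F = A^T \Sigma^{-1} A$ built from the sensitivity matrix $A$ of the observables to the operator coefficients, and its eigenvalues directly exhibit precisely constrained combinations, weakly constrained ones and flat directions. A minimal sketch with invented numbers:

```python
import numpy as np

# Toy sensitivity matrix: 3 "observables" probing 4 "operator
# coefficients". Two observables measure nearly the same combination
# very precisely; the third measures a different combination poorly;
# one combination is probed by nothing at all.
A = np.array([[1.0, 1.000, 0.0,  0.0],
              [1.0, 1.001, 0.0,  0.0],
              [0.0, 0.000, 1.0, -1.0]])
errors = np.array([0.001, 0.001, 0.1])      # experimental precisions

Aw = A / errors[:, None]                    # rows weighted by 1/sigma
F = Aw.T @ Aw                               # Fisher (inverse error) matrix
eig = np.sort(np.linalg.eigvalsh(F))[::-1]  # ordered eigenvalues

# The spectrum spans many orders of magnitude: one very precise
# combination, one moderate, one weak, and one exact flat direction
# (zero eigenvalue), mirroring the structure of Fig. [fig:sigmas].
print(eig)
```

The hierarchy of eigenvalues, not their number, is what justifies keeping only the strongly constrained combinations.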
Fig.~\ref{fig:sigmas} shows that precision data really constrain about 10 new-physics effects, and that a few constraints often dominate the global fit. We want to find a simple, physically motivated basis for the electroweak parameters that automatically separates the strongly constrained combinations from the weakly constrained ones. \medskip We will therefore use a different approach: once a specific set of higher dimensional operators of the form (\ref{LBSM}) is given, we can use the equations of motion of the 3 gauge bosons $W^\pm,Z,\gamma$ to eliminate 3 fermionic currents: we choose to eliminate the currents involving charged leptons listed in eq.~(\ref{eq:currents}). The reason is that most of the precision measurements have been performed at $e^+e^-$ colliders (LEP1, LEP2 and SLD), strongly constraining operators involving charged leptons. Neutrinos, on the other hand, are experimentally more difficult to deal with than charged leptons. This is why we have chosen to use the equations of motion in a way that is not explicitly ${\rm SU}(2)_L$ invariant. Muon decay, which gives the most precise test of neutrino couplings, is fully described by oblique couplings because it involves charged currents and we eliminated all new physics involving the $\bar{e}_L\gamma_\mu \nu_L$ current. In our formalism, the most general effective Lagrangian describing new physics can be split into two parts: \begin{equation} \label{eq:lagrour} \mathscr{L}_{\rm BSM} = \mathscr{L}_{\rm oblique} + \mathscr{L}_{\rm couplings} + \dots \end{equation} where the dots stand for terms that do not affect precision measurements. Note again that, due to our choice for the use of the equations of motion, $\mathscr{L}_{\rm couplings}$ will not contain any currents involving the charged leptons.
Therefore the oblique terms in $\mathscr{L}_{\rm oblique}$ fully encode corrections to the most precisely measured precision observables involving charged lepton final states: $$\alpha_{\rm em}, ~\Gamma(\mu), ~ M_Z, ~M_W, ~ \Gamma(Z\to \ell\bar\ell),~ A^\ell_{FB}, ~A^\ell_{LR}, ~ A_{\rm pol}^\tau, ~\sigma_{\rm LEP2}(e\bar e\to \ell\bar\ell), ~ee\to ee.$$ $\mathscr{L}_{\rm couplings}$, on the other hand, contains corrections to the couplings of quark and neutrino currents: it affects observables involving neutrinos and quarks\footnote{We do not include in the fit precision measurements of $\sigma(\nu\, {\rm Fe})$ because they are limited by the imprecisely known nucleon structure: e.g.\ a strange momentum asymmetry or an isospin breaking can account for the discrepancy with respect to the SM claimed by~\cite{NuTeV}. Although at this stage $\Gamma(Z\to\nu \bar\nu)$ is listed among the effects not fully described by the oblique approximation, a detailed analysis will show that it actually is. }: $$\Gamma(Z\to\nu \bar\nu), ~ \Gamma(Z\to q\bar q),~ A^b_{FB}, ~ A^b_{LR},~ A^c_{LR},~A^c_{FB}, ~\sigma_{\rm LEP2}(e\bar{e}\to q\bar{q}),~ Q_W\,. $$ This formalism therefore allows one to clearly distinguish which parameters are more constrained than others. This approach has already been used in the case of models with universal new physics~\cite{STWY} (e.g.\ gauge bosons in extra dimensions, most little Higgs models~\cite{littlehiggs}, Higgsless models~\cite{hless}), where all corrections involving fermions only appear in combinations proportional to SM gauge currents. As a consequence, all fermion operators can be completely transformed into oblique operators by using the equations of motion for the vectors.
More importantly, in various concrete models one can bypass the step of identifying the set of induced dimension six operators: integrating out the combinations of new-physics vectors not coupled to fermions (rather than the heavy mass eigenstates) directly gives the Lagrangian in terms of the oblique parameters. Thus this method simplifies both the intermediate computations and the final result. Here we show that this formalism is also useful in the case of generic non-universal models (e.g.\ fermions that live in different places in extra dimensions, some Little Higgs models~\cite{simplest}, models with extra $Z'$ bosons). In the next part of this section, we review the standard parametrization of oblique new physics. We later present the generic form for $\mathscr{L}_{\rm couplings}$, emphasizing the (weak) restrictions imposed by ${\rm SU}(2)_L$-invariance, and discuss to what extent $\mathscr{L}_{\rm couplings}$ can be neglected. In Appendix~\ref{app} we also explicitly show how the equations of motion allow us to relate the standard basis of ${\rm SU}(2)_L$-invariant dimension 6 operators to our parametrization. These operators are assumed to have generic coefficients, such that Appendix~\ref{app} applies to generic new physics. Furthermore, in section~\ref{sec:example} we show, in a specific example of new physics (a heavy $Z'$), how one can directly compute the full set of oblique parameters without having to pass through the standard basis. \subsection{The oblique parameters} Here we review how generic heavy new physics can affect the kinetic terms of the vector bosons, $\Pi_{33}(p^2)$, $\Pi_{00}(p^2)$, $\Pi_{30}(p^2)$, $\Pi_{WW}(p^2)$, defined by the effective Lagrangian \begin{equation} \mathscr{L}_{\rm oblique} = -\frac{1}{2} W^3_\mu \Pi_{33} (p^2) W^{3\mu} -\frac{1}{2} B_\mu \Pi_{00} (p^2) B^\mu - W^3_\mu \Pi_{30} (p^2) B^\mu - W_\mu^+ \Pi_{WW} (p^2) W^\mu_-\,.
\end{equation} Since new physics is assumed to be heavy, we can expand the $\Pi$'s in powers of $p^2$: \begin{equation} \Pi (p^2) = \Pi(0) + p^2\, \Pi'(0) + \frac{(p^2)^2}{2}\, \Pi''(0) + \dots \end{equation} neglecting higher order terms, which for dimensional reasons correspond to operators of dimension higher than 6. This expansion contains 12 parameters: 3 can be reabsorbed in the definitions of the SM parameters $g$, $g'$ and $v$, and 2 vanish because of electromagnetic gauge invariance: the photon is massless and couples to $Q= T_3+Y$. New physics is then described by 7 dimensionless oblique parameters, defined as (contrary to~\cite{STWY} we use canonically normalized kinetic terms) \begin{equation} \begin{array}{c}\displaystyle \hat S = \frac{g}{g'} \Pi_{30}'\,, \quad \hat T = \frac{\Pi_{33} - \Pi_{WW}}{M_W^2}\,, \quad W = \frac{M_W^2}{2} \; \Pi''_{33}\,, \quad Y = \frac{M_W^2}{2} \; \Pi''_{00}\,,\\ \displaystyle \hat U =\Pi'_{WW} - \Pi'_{33} \,, \qquad V = \frac{M_W^2}{2} \; (\Pi''_{33} - \Pi''_{WW})\,, \qquad X = \frac{M_W^2}{2} \; \Pi''_{30}\,, \end{array} \end{equation} where all $\Pi$'s are computed at $p^2=0$. These parameters correct the propagators of the gauge bosons, affecting the precision observables. Only 6 combinations actually enter observables involving charged leptons: in particular, $\hat U$ and $V$ enter only through the combination $\hat U - V$. $Z$-pole precision data can be encoded in the $\varepsilon$'s of~\cite{eps}. Low energy data do not depend on $\hat{U}, V$. The $e\bar{e}\to f\bar{f}$ cross sections measured at LEP2 are dominantly affected by $Y$, $W$ and $X$~\cite{STWY}. Using ${\rm SU}(2)_L$-invariance one can show that $V\ll \hat{U}\ll \hat{T}$ and $X\ll \hat{S}$: in the case of universal new physics, the sub-leading form factors $\hat{U},V,X$ can therefore be neglected and new physics is fully described by $\hat{S},\hat{T},W, Y$~\cite{STWY}.
This argument however does not apply in our case, where the same parameters are applied in a different context: to describe how generic heavy new physics (not necessarily universal) affects observables that only involve charged leptons and vectors. To reach the basis in which charged-leptonic data are condensed into vector propagators we made a transformation which is not ${\rm SU}(2)_L$-invariant. As a consequence all oblique parameters generically arise at leading order. \subsection{Vertex corrections} \label{sec:vertex} Here we present the effective Lagrangian that describes new-physics corrections to $Z,\gamma$ couplings, taking into account a) that we eliminated currents involving charged leptons; b) that new physics is heavy, allowing a low-energy expansion in momenta; c) electromagnetic gauge invariance. A convenient parametrization is: \begin{equation}\label{eq:vert} \mathscr{L}_{\rm couplings} = \sum_f (\bar f \gamma^\mu f) \left[ e\, A_\mu\, \frac{C^\gamma_f}{M_W^2}\, p^2 + \sqrt{g^2+{g'}^2}\, Z_\mu \left( \frac{C^Z_f}{M_W^2}\, (p^2 - M_Z^2) + \delta g_{f} \right) \right]\,, \end{equation} where $f = u_L, d_L, u_R, d_R, \nu_L$, and higher orders in the momentum again correspond to subleading effects due to operators with dimension greater than 6. The $\delta g$'s are corrections to on-shell $Z$ couplings, tested by measurements at the $Z$-pole. The $C^\gamma$ and $C^Z$ are equivalent to 4-fermion contributions to $e^+ e^- \rightarrow q \bar q$: the $p$-dependence cancels the propagator of the gauge boson, and we are left with a constant ($p$-independent) contribution. They affect LEP2, atomic parity violation, etc. For the neutrinos only $\delta g_{\nu_L}$ is measured, via the invisible decay width of the $Z$.
${\rm SU}(2)_L$ invariance implies some mild restrictions on these vertex parameters: \begin{enumerate} \item As shown in Appendix~\ref{app}, $\delta g_{L\nu}$ is fixed in terms of oblique parameters as: \begin{equation} \delta g_{L \nu}= V - \frac{1}{2} \hat{U} - \tan \theta_W X\,. \end{equation} Notice that it depends on a different combination of $\hat U$ and $V$ than the one entering corrections to the gauge boson propagators. This means that considering all the 7 oblique parameters defined in the previous subsection is enough to include the relevant neutrino measurements. \item In the quark sector we apparently have 12 new parameters: $\delta g_{L,R \, u,d}$ and $C^{Z,\gamma}_{L,R \, u,d}$. However only 11 of them are independent, and correspond to the 11 quark operators of~\cite{skiba}. Indeed, as explicitly shown in Appendix~\ref{app}, the following relation holds between the 4-fermion coefficients of the left-handed quarks: \begin{equation} (C^\gamma_{dL}-C^\gamma_{uL}) = \cos^2 \theta_W (C^Z_{d L}-C^Z_{u L}) + \frac{X}{\tan \theta_W}. \label{relations} \end{equation} \end{enumerate} \subsection{A simple approximation} We can now proceed with the final counting. We have 18 independent coefficients: the 7 oblique parameters $\hat{S},\hat{T},\hat{U}, V,X,Y,W$, 4 $\delta g$'s for the quarks, 4 quark $C^Z_q$'s and 3 independent $C^\gamma_q$'s. The counting agrees with the results of~\cite{GST}, that shows how 2 combinations of the 21 operators of~\cite{skiba} can be eliminated. (18 arises as $21-2-1$: one further operator, that only affects $e^- e^+\to W^+ W^-$, is ignored here because we do not view this as a `precision' measurement. This view is corroborated by the numerical results of~\cite{skiba,GST}.) In other words, the unconstrained combinations pointed out in~\cite{GST} are automatically eliminated in our formalism. Our basis makes a clear separation of which parameters contribute to which measurements. 
Corrections to observables involving leptons only are expressed in terms of the seven oblique parameters (which as we have seen also include neutrinos). Observables involving quarks in the final state at the $Z$-pole involve in addition only the four $\delta g$'s. The $C^{\gamma,Z}_q$'s are only necessary for $\sigma(e\bar{e}\to q\bar{q})$ at LEP2 and atomic parity violation. As leptonic final states are generically better measured than hadronic ones, this separation already suggests that describing the precision measurements in terms of only the 7 oblique parameters could be a reasonable approximation (oblique approximation). In the next section we will check numerically that this is indeed the case. This approximation also includes the constraints on neutrinos. In order to be more accurate, we want to add a minimal set of parameters describing corrections in the hadronic sector. In fact, not all the quark observables are well measured, so that only a small subset of parameters will actually contribute most strongly to the bound. At the $Z$-pole, the best measured quantity is the hadronic branching ratio of the $Z$. It depends on the combination: $$ g^{\rm SM}_{qL}\, \delta g_{qL} + g^{\rm SM}_{qR}\, \delta g_{qR}\,. $$ Since the couplings of the right-handed components to the $Z$ are generically smaller than the couplings of the left-handed components (by a factor of 0.18 for the down-type quarks, and 0.44 for the up-type), we expect in general that only the corrections involving left-handed quarks will be relevant. Moreover, when the contributions of up and down quarks are summed, the result is proportional to: $$ \delta g_{uL} - \delta g_{dL} - \frac{\tan^2 \theta_W}{3}\, (\delta g_{uL} + \delta g_{dL})\,, $$ so that the difference between the two parameters seems to be more relevant than the sum. Similar arguments apply to the hadronic cross section measured at LEP2.
The main difference is the presence of interference with the SM diagram with a photon exchange, and the presence of 4-fermion operators. We first notice that the interference with the photon is generically suppressed by the gauge coupling $e$ versus $\sqrt{g^2 + {g'}^2}$: this results in a suppression of order $\sin \theta_W$. The contribution of the $\delta g$'s will therefore enter in the same way as in the hadronic branching ratio. A very similar argument can be applied to the 4-fermion contribution, so that only the combinations $C^Z_{Lu} - C^Z_{Ld}$ and $C^\gamma_{Lu} - C^\gamma_{Ld}$ are constrained: as already mentioned in (\ref{relations}) these two parameters are related to each other, so that they correspond to a single parameter. From this rough argument we can thus infer that 2 parameters will be most relevant in the quark sector: \begin{eqnsystem}{sys:sss} \delta \varepsilon_q & = & \delta g_{uL} - \delta g_{dL}\,, \\ \delta C_q & = & C^Z_{uL} - C^Z_{dL}\,. \end{eqnsystem} Again, in the next section we will numerically show that this is indeed the case. Until now we have assumed flavor universality including the third generation, and in particular the bottom quark. However, in many models of electroweak symmetry breaking the third generation of quarks is special due to the heaviness of the top quark, and it is differently affected by new physics. For this reason, we will relax flavor universality for the bottom quark, and deal with it separately. This is also necessary since the bottom final state is well measured. At LEP1, only $\delta g_{bL}$ is well measured, because the SM coupling of the right-handed component is smaller; thus we can define: \begin{eqnarray} \delta g_{bL} = -\frac{1}{2} \delta \varepsilon_b\,; \end{eqnarray} here the parameter $\delta \varepsilon_b$ coincides with the standard definition given in~\cite{epsb}.
Notice that the anomalous $A_{FB}^b$ measurement gives a subleading contribution to the determination of $\delta\varepsilon_b$. The cross section $\sigma (e\bar e \to b\bar b)$ at LEP2 also depends on a combination of 4-fermion operators. In general an extra parameter should also be added to the fit: however, in models of electroweak symmetry breaking involving the top quark, we expect corrections to $\delta \varepsilon_b$ to be more important. The reason is that the 4-fermion operators with the bottom will also involve couplings of new physics to the electron, already tightly constrained by the oblique parameters. This is the case, for example, in models with dynamical symmetry breaking~\cite{contino}, gauge-Higgs unification~\cite{ghu} or Higgsless models~\cite{hless}. Thus, in order to simplify the analysis, we will assume a flavour-universal contribution to the bottom 4-fermion operators. In this way, only one parameter is sufficient to describe the bottom. \section{Global fit} \label{sec:fit} \setcounter{equation}{0} In this section we study the fit of the precision electroweak measurements, and show that the approximations proposed in the previous section are actually sensible, and give a sufficiently reliable bound on generic models of new physics. One can express all the observables in terms of the following 18 parameters: the 7 oblique parameters $\hat S$, $\hat T$, $\hat U$, $W$, $Y$, $V$ and $X$, 4 corrections to the couplings of the $Z$ with quarks $\delta g_{uR}$, $\delta g_{dR}$, $\delta g_{uL}$, $\delta g_{dL}$, and 7 4-fermion parameters (4 involving right-handed quarks $C^\gamma_{uR}$, $C^\gamma_{dR}$, $C^Z_{uR}$, $C^Z_{dR}$, and 3 involving left-handed quarks $C^Z_{uL}$, $C^Z_{dL}$, and $C^\gamma_{uL} + C^\gamma_{dL}$). Note that in doing this we are not yet introducing any approximation: we are just choosing a particular basis for the dimension 6 operators affecting electroweak precision observables.
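The right-handed coupling suppressions quoted in the previous section (0.18 for down-type quarks, 0.44 for up-type) can be checked numerically from the tree-level $Z$ couplings. The following sketch assumes the standard conventions $g_L = T_3 - Q\sin^2\theta_W$, $g_R = -Q\sin^2\theta_W$ and a reference value $\sin^2\theta_W \simeq 0.23$; these inputs are illustrative assumptions, not values taken from the fit itself.

```python
# Tree-level Z couplings: g_L = T3 - Q sin^2(theta_W), g_R = -Q sin^2(theta_W).
# sin^2(theta_W) ~ 0.23 is an assumed reference value.
SIN2W = 0.23

def z_couplings(t3, q, sin2w=SIN2W):
    """Return the (left, right) Z couplings of a fermion with isospin t3, charge q."""
    return t3 - q * sin2w, -q * sin2w

gLd, gRd = z_couplings(-0.5, -1.0 / 3.0)   # down-type quarks
gLu, gRu = z_couplings(+0.5, +2.0 / 3.0)   # up-type quarks

# Right-handed couplings are suppressed relative to the left-handed ones:
print(abs(gRd / gLd))   # ~0.18 for down-type
print(abs(gRu / gLu))   # ~0.44 for up-type
```

This is why corrections involving right-handed quarks contribute much less to the hadronic branching ratio than those involving left-handed quarks.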
The two approximations we want to pursue are the following: first we consider only the 7 oblique parameters $\hat S$, $\hat T$, $\hat U$, $W$, $Y$, $V$ and $X$ (oblique approximation) and set all the others to zero: this allows us to exactly describe the observables only involving vectors and leptons (charged and neutrinos), but, in general, does not correctly describe corrections to quark observables. Next, as argued in the previous section, in the quark sector two parameters should have the strongest effect on the bound on new physics. They are related to corrections to the couplings to the $Z$ and 4-fermion operators involving left-handed components, $\delta \varepsilon_q$ and $\delta C_q$. \medskip We now check how good our approximations are for estimating the bound on the scale $\Lambda$ of new physics in generic models. To do that, we generated many random models by writing each parameter as $r/\Lambda^2$, where $-1\le r \le 1$ are random numbers. This procedure is admittedly arbitrary, but it provides a reasonable sample of generic models. We then extract the bound on $\Lambda$ both from the exact fit and the approximate fits. The result is graphically shown in Fig.~\ref{fig:histo}: in case (a) we show the oblique approximation; in (b) we add the two parameters $\delta C_q$ and $\delta \varepsilon_q$ for the quarks to the oblique parameters; in (c) we include all the parameters except $\delta C_q$ and $\delta \varepsilon_q$. In the following table we report, for the same cases, the average value and the standard deviation of $\Lambda_{\rm approx}/\Lambda_{\rm true}$. \begin{figure}[tb] \begin{center}$$\hspace{-0.06\textwidth} \includegraphics[width=0.4\textwidth]{his1} \hspace{-1cm} \includegraphics[width=0.4\textwidth]{his2} \hspace{-1cm} \includegraphics[width=0.4\textwidth]{his3}$$ \caption{\em Distributions of the ratio of the approximate over the true bound in various approximations. In the first {\it ``oblique''} panel we include in the fit only $\hat S$, $\hat T$, $\hat U$, $W$, $Y$, $V$ and $X$.
In the second panel we add the two parameters $\delta C_q$ and $\delta \varepsilon_q$ for the quarks. Finally we include all the parameters except $\delta C_q$ and $\delta \varepsilon_q$.} \label{fig:histo} \end{center} \end{figure} \vspace{0.3cm} \begin{center} \begin{tabular}{c|c} Approximation & $\Lambda_{\rm approx}/\Lambda_{\rm true}$ \\ \hline Oblique & $0.95 \pm 0.16$ \\ Oblique plus $C_q$, $\delta \varepsilon_q$ & $0.98 \pm 0.06$ \\ All but $C_q$, $\delta \varepsilon_q$ & $0.98 \pm 0.15$ \end{tabular} \end{center} \vspace{0.3cm} We see that the oblique approximation is already reasonable: in most of the cases the approximate bound is less than $25 \%$ away from the correct one. Adding the two parameters $\delta C_q, \delta \varepsilon_q$ improves the approximation significantly: in more than $90 \%$ of the cases the approximate bound reproduces the exact one within $10 \%$. Furthermore, it is important to notice that considering a fit where all the parameters except $\delta C_q , \delta \varepsilon_q$ are added does not improve much the approximation with respect to the oblique case. This is telling us that in the quark sector it is indeed $\delta C_q$ and $ \delta \varepsilon_q$ which are the most constrained parameters, while all the others are much less constrained (and mostly negligible for establishing a reliable bound on the scale of new physics). The arguments we have discussed in section \ref{sec:vertex} thus find a quantitative verification here. Out of the 18 initial parameters only 9 are truly constrained. The remaining 9 can be safely neglected. Fig.~\ref{fig:sigmas} compares the eigenvalues of the full error matrix with the eigenvalues recomputed using our simplified approximation (using of course the same normalization in the two cases). We see that the approximation catches the main constraints, ignoring the remaining weakly constrained combinations. 
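The random-model test described above can be sketched in a few lines. The fragment below is a toy illustration only: a diagonal matrix with arbitrary entries stands in for the actual $\chi^2$ error matrix of the fit, random coefficients $r_i \in [-1,1]$ define each model, and the bound on $\Lambda$ from the full 18-parameter set is compared with the bound from a retained 9-parameter subset. In this simplified setting dropping parameters can only weaken the bound, whereas in the real fit the approximate bound can also slightly overshoot.

```python
import numpy as np

rng = np.random.default_rng(0)

def lam_bound(c, M, chi2_max=9.0):
    """Bound on Lambda for couplings c_i/Lambda^2:
    chi^2 = c^T M c / Lambda^4 <= chi2_max  =>  Lambda = (c^T M c / chi2_max)^(1/4)."""
    return float(c @ M @ c / chi2_max) ** 0.25

n_par = 18                                    # size of the full parameter basis
# Toy stand-in for the experimental error matrix (diagonal, arbitrary precisions):
M = np.diag(1.0 / rng.uniform(0.1, 2.0, n_par) ** 2)
keep = np.arange(9)                           # indices of the retained parameters

ratios = []
for _ in range(1000):
    c = rng.uniform(-1.0, 1.0, n_par)         # random model: coefficients r_i
    full = lam_bound(c, M)
    approx = lam_bound(c[keep], M[np.ix_(keep, keep)])
    ratios.append(approx / full)

print(np.mean(ratios), np.std(ratios))
```

In the actual analysis the full correlated $\chi^2$ matrix of the electroweak fit replaces the toy diagonal one, producing the histograms of Fig.~3 and the table above.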
We do not show the full eigenvalues extracted from the global fit of~\cite{skiba}, which show a similar level of agreement. \bigskip We now present how data determine our 10 parameters by presenting the `eigenvectors' of the global $\chi^2$, i.e.\ we show the orthogonal combinations that have been determined with no statistical correlation with the other combinations, such that a model is excluded if any one of these combinations contradicts experimental data. We order them starting from the most precise ones. They are: \begin{eqnarray} R \cdot \left(\begin{array}{c} \hat{S}\\ \hat{T} \\ \hat{U} \\ V \\ W \\ X \\ Y \\ \delta C_q \\ \delta \varepsilon_b \\ \delta\varepsilon_q \end{array}\right)= 10^{-3} \left(\begin{array}{c} -0.04 + 0.54 \ell \pm 0.21\\ +0.13 + 0.08 \ell \pm 0.43\\ +0.41 + 0.21 \ell \pm 0.50\\ +0.16 + 0.72 \ell \pm 0.54\\ -0.36 - 0.33 \ell \pm 0.75\\ 0 + 0.16 \ell \pm 1.2\\ -0.9 - 0.12 \ell \pm 1.5\\ -5.6 - 0.31 \ell \pm 2.0\\ -0.4 + 0.18 \ell \pm 8.7\\ -26 + 0.66 \ell \pm 18 \end{array}\right) \end{eqnarray} where the factor $\ell = \ln (m_h/M_Z)$ encodes the approximate dependence on the Higgs mass and the orthogonal matrix $R$ equals $$R = 10^{-3} \left( \begin{array}{cccccccccc} -404 & 353 & -133 & 173 & 137 & -753 & 276 & 4 & 18 & 27 \\ -245 & -19 & 492 & -747 & 30 & -37 & 280 & 15 & -40 & -235 \\ -16 & 208 & 146 & -152 & -724 & -224 & -407 & 319 & 33 & 260 \\ -222 & 691 & -76 & 5 & -120 & 550 & 285 & -129 & 55 & 216 \\ -17 & -330 & 177 & -36 & 114 & -31 & 273 & -12 & 1 & 876 \\ 3 & 232 & -7 & -283 & 303 & -118 & -589 & -581 & -175 & 209 \\ -42 & -68 & 132 & 31 & -44 & -37 & -66 & -288 & 939 & -33 \\ -203 & -200 & 350 & 375 & -445 & -9 & 126 & -587 & -282 & -124 \\ -642 & -381 & -575 & -219 & -161 & 147 & -112 & -41 & 9 & 11 \\ 519 & 0 & -458 & -341 & -329 & -199 & 376 & -337 & -1 & 2 \end{array} \right). $$ The last two combinations have large uncertainties and can be ignored.
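The uncorrelated combinations above are obtained, in practice, by diagonalizing the $\chi^2$ matrix of the global fit: the rows of the orthogonal matrix $R$ are its eigenvectors, and the quoted uncertainties scale as the inverse square roots of the eigenvalues. A schematic numpy illustration follows; the randomly generated positive-definite matrix is only a stand-in for the actual 10-parameter fit matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random positive-definite matrix standing in for the 10-parameter chi^2 matrix:
A = rng.normal(size=(10, 10))
chi2_matrix = A @ A.T + 10.0 * np.eye(10)

# Rows of R are the statistically uncorrelated combinations; their 1-sigma
# errors go as 1/sqrt(eigenvalue), ordered from the best determined combination.
eigvals, eigvecs = np.linalg.eigh(chi2_matrix)   # eigenvalues in ascending order
R = eigvecs.T[::-1]                              # largest eigenvalue first
sigmas = 1.0 / np.sqrt(eigvals[::-1])

print(np.allclose(R @ R.T, np.eye(10)))          # orthogonality of R: True
```

A model is then excluded if any single row of $R$ applied to its predicted parameter vector falls outside the corresponding interval.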
The flavour-universal limit is obtained by setting $\delta \varepsilon_b = \delta \varepsilon_q$. \section{Example: a generic $Z'$} \label{sec:example} \setcounter{equation}{0} We now apply our results to a specific concrete example: a generic heavy non-universal $Z'$ vector boson, with mass $M_{Z'}$, gauge coupling $g_{Z'}$ and gauge charges $Z_X$ under the various SM fields $X=\{ H,E,L,Q,U,D\}$. The parameters defined in section \ref{sec:parameters} can be computed in various ways. One can integrate out the heavy mass eigenstate, obtaining a set of effective operators that can be converted into our parameters using the expressions in Appendix \ref{app}. A simpler technique~\cite{arabic} allows one to compute our parameters directly. In the specific case of a $Z'$ this technique was described in section~7 of \cite{LHSTWY}: it consists of integrating out the combination of $Z'$ and $Z$ (which, in general, is not a mass eigenstate) that does not couple to charged leptons. In practice, one rewrites the Lagrangian in terms of \begin{equation} \tilde B_\mu = B_\mu - \frac{g_{Z'}Z_E}{g' Y_E} Z'_\mu, \;\;\;\;\;\;\; \tilde W_\mu^3 = W_\mu^3 - \frac{2 g_{Z'}}{g Y_E} (Z_E Y_L-Z_L Y_E) Z'_\mu \end{equation} such that in the new basis the $Z'$ no longer couples to charged leptons and can be integrated out without generating any operator involving charged leptons. One can then directly extract our 9 parameters from the effective Lagrangian, since it is already in the form of eq.~(\ref{eq:lagrour}).
The explicit result is: \begin{eqnsystem}{sys:Z'} \hat S &=& \frac{2 M_W^2 g_{Z'}^2}{g^2 g'^2 M_{Z'}^2} (Z_E-Z_H+Z_L)(g^2 Z_E +g'^2 (Z_E+2Z_L)) \,,\\ \hat T &=& \frac{4 M_W^2 g_{Z'}^2}{g^2 M_{Z'}^2} (Z_E-Z_H+Z_L)^2 \,,\\ \hat U & = & \frac{4 M_W^2 g_{Z'}^2}{g^2 M_{Z'}^2} (Z_E-Z_H+Z_L)(Z_E+2Z_L) \,,\\ W &=& \frac{M_W^2 g_{Z'}^2}{g^2 M_{Z'}^2} (Z_E+2Z_L)^2 \,,\\ Y &=& \frac{M_W^2 g_{Z'}^2}{g'^2 M_{Z'}^2} Z_E^2 \,,\\ V &=& \frac{M_W^2 g_{Z'}^2}{g^2 M_{Z'}^2} (Z_E+2Z_L)^2 \,,\\ X &=& - \frac{M_W^2 g_{Z'}^2}{g g' M_{Z'}^2} Z_E (Z_E+2Z_L) \,, \\ \delta \varepsilon_q &=& \frac{2 M_W^2 g_{Z'}^2}{g^2 M_{Z'}^2} Z_H (Z_E+2 Z_L)\,, \\ \delta C_q &=& \frac{2 M_W^2 g_{Z'}^2}{(g^2+g'^2) M_{Z'}^2} (Z_E+2Z_L) (Z_E+Z_L) \,. \end{eqnsystem} It is important to notice a point missed in section~7 of \cite{LHSTWY}: $\hat U$, $V$ and $X$ are not subdominant with respect to $\hat{S}$, $\hat{T}$, $W$, $Y$. (The bounds presented here numerically differ from the ones in \cite{LHSTWY} also because we here updated the measurement of the top mass~\cite{mtop}). One can check that only with the correct full expressions of eq.~(\ref{sys:Z'}) the corrections to the parameters $\delta \varepsilon_{1,2,3}$ that summarize LEP1 observables are all proportional to $Z_H$ and therefore all vanish if the Higgs is neutral under the heavy $Z'$. This must happen because $Z_H=0$ means no $Z/Z'$ mixing and the $Z'$ manifests itself only as 4-fermion operators invisible at LEP1 and dominantly constrained by LEP2. 
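The explicit expressions above translate directly into code. The sketch below implements them for given $Z'$ charges; the numerical inputs ($M_W$, $g$, $g'$, and the choice $g_{Z'}=1$ at $M_{Z'}=3$ TeV) are assumed reference values for illustration only, not results of the fit.

```python
import math

# Assumed reference inputs (illustrative only):
MW = 80.4                      # W mass in GeV
G2, GP2 = 0.65**2, 0.36**2     # g^2 and g'^2, approximate SM values

def zprime_parameters(ZH, ZL, ZE, gZp=1.0, MZp=3000.0):
    """Oblique and quark parameters of a heavy Z' (explicit formulas of the text)."""
    pref = MW**2 * gZp**2 / MZp**2
    d = ZE - ZH + ZL           # combination controlling S-hat, T-hat, U-hat
    s = ZE + 2.0 * ZL
    return {
        "Shat": 2.0 * pref / (G2 * GP2) * d * (G2 * ZE + GP2 * s),
        "That": 4.0 * pref / G2 * d**2,
        "Uhat": 4.0 * pref / G2 * d * s,
        "W":    pref / G2 * s**2,
        "Y":    pref / GP2 * ZE**2,
        "V":    pref / G2 * s**2,
        "X":    -pref / math.sqrt(G2 * GP2) * ZE * s,
        "deps_q": 2.0 * pref / G2 * ZH * s,
        "dC_q":   2.0 * pref / (G2 + GP2) * s * (ZE + ZL),
    }

# U(1)_L example of the table below: Z_L = 1, Z_E = -1, Z_H = 0.
# Here Z_E - Z_H + Z_L = 0, so Shat = That = Uhat = 0, while W = V > 0 and Y > 0.
p = zprime_parameters(ZH=0.0, ZL=1.0, ZE=-1.0)
```

Note that $W$ and $V$ coincide for any $Z'$ in these formulas, and both are positive, consistent with the general positivity argument given later.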
\begin{table}[t] $$ \begin{array}{cc|cccccc|ccc} \hbox{U(1)}& \hbox{universal?}& Z_H & Z_L & Z_D & Z_U & Z_Q & Z_E & \hbox{full} &\hbox{approx} & \hbox{oblique} \\ \hline H & \hbox{yes} & 1 & 0 & 0 & 0 & 0 & 0 & 6.7 & 6.7 & 6.7\\ B' & \hbox{yes} & \frac{1}{2} & -\frac{1}{2} & \frac{1}{3} & -\frac{2}{3} & \frac{1}{6} & 1 & 6.7 & 6.7 & 6.7\\ B'_F & \hbox{yes} & 0 & -\frac{1}{2} & \frac{1}{3} & -\frac{2}{3} & \frac{1}{6} & 1 & 4.8 & 4.8 & 4.8\\ B-L & \hbox{no} & 0 & -1 & -\frac{1}{3} & -\frac{1}{3} & \frac{1}{3} & 1 & 6.7 & 7.1 & 7.1\\ L & \hbox{no} & 0 & 1 & 0 & 0 & 0 & -1 & 6.3 & 7.1 & 7.1\\ 10 & \hbox{no} & 0 & 0 & 0 & 1 & 1 & 1 & 2.5 & 2.9 & 3.4\\ 5 & \hbox{no} & 0 & 1 & 1 & 0 & 0 & 0 & 3.8 & 3.2 & 5.6\\ Y & \hbox{no} & \frac{2}{3} & 1 & 1 & -\frac{1}{3} & -\frac{1}{3} & -\frac{1}{3} & 4.8 & 5.0 & 6.0\\ 16 & \hbox{no} & 0 & 1 & 1 & 1 & 1 & 1 & 4.4 & 4.7 & 6.5\\ \hbox{SLH} & \hbox{no} & \multicolumn{6}{c|}{\hbox{Simplest little Higgs~\cite{simplest}}} & 2.7 & 2.5 & 2.7\\ \hbox{SU6} & \hbox{no} & \multicolumn{6}{c|}{\hbox{Super little Higgs~\cite{superlittle}}} & 3.1 & 3.3 & 3.3\\ \end{array}$$ \caption{\em $99\%$\,\hbox{\rm CL}\ bounds on the ratio $M_{Z'}/g_{Z'}$ in {\rm TeV} for a set of frequently studied $Z'$.} \label{tab:famZprimes} \end{table} \begin{figure}[tb] \begin{center}$$\hspace{-0.03\textwidth} \includegraphics[width=1.05\textwidth]{Zbounds}$$ \caption{\em Bounds on $M_{Z'}/g_{Z'}$ in $\,{\rm TeV}$ at $99\%$\,\hbox{\rm CL}\ for different $Z'$ models. Their effect dominantly depends on the charge of the Higgs and the leptons: we here assume the normalization $Z_L^2 + Z_E^2 + Z_H^2 = 2$ such that $Z_H=0$ at the boundary of the circles. The three plots, done assuming different sets of quark charges (zero, universal-like and SU(5)-unified) are almost identical, confirming the validity of an approximate analysis. The dashed line corresponds to a universal $Z'$, and the dashed ellipse to a $Z'$ compatible with SM Yukawa couplings. 
The dots show some well-known $Z'$s.} \label{fig:Z'} \end{center} \end{figure} In table \ref{tab:famZprimes} we report the $99\%$\,\hbox{\rm CL}\ bounds on $M_{Z'}/g_{Z'}$ for a set of $Z'$s theoretically motivated by extra dimensions, unification models and little Higgs models.\footnote{We presented the results hiding a technical problem. We performed two different global fits: in the operator basis, and in the oblique basis. The simpler oblique analysis naturally allows one to include minor effects. The minor difference between the two $\chi^2$ is comparable to the accuracy of our approximation, such that in the table we compensated for this.} We compare the bound obtained by performing an exact fit, which includes the effects of all the 18 relevant parameters, an approximate fit including the 9 parameters, and the purely oblique approximation. It is interesting to notice that the approximate bounds reproduce the exact one accurately in almost all the cases. There are a few exceptions where the effect of quarks is relevant, and the oblique bound is overestimated. On the other hand, the 9-parameter approximation is always successful. Fig.~\ref{fig:Z'} shows iso-contours of bounds on $M_{Z'}/g_{Z'}$ (computed assuming a light Higgs) that approximately apply to all $Z'$. Indeed the constraint dominantly depends only on the leptonic and Higgs $Z'$ charges: $Z_H,Z_L,Z_E$. We here fixed their arbitrary overall normalization by assuming $Z_H^2 + Z_L^2 +Z_E^2=2$. Without loss of generality we can choose $Z_H\ge 0$, such that all the information lies on the surface of a half-sphere, and is plotted in fig.~\ref{fig:Z'}. The different panels show three different arbitrary choices for the quark $Z'$ charges: vanishing (left panel), universal (middle panel), SU(5)-unified (right panel).
Each panel shows the exact bound on $M_{Z'}/g_{Z'}$: one sees that there are very minor differences between the bounds in the three panels, confirming that leptonic data dominate the present global fit. The dots show the locations of the theoretically-motivated $Z'$ listed in table~\ref{tab:famZprimes}. The dashed lines show special sub-classes of $Z'$: universal $Z'$ (oblique line) and $Z'$s that do not forbid the SM Yukawa couplings (ellipse). For example, only two $Z'$ have both these properties: a) the one denoted as $B'$: a duplicate of the SM hypercharge; b) the one denoted as `SU6' $Z'$, which arises in little-Higgs models~\cite{LHSTWY}. \section{Proof for $W,Y\ge 0$}\label{sec:WX} \setcounter{equation}{0} So far, the oblique parameters $W,Y$ have been computed in various models (in extra dimensions, Higgsless models, and little Higgs at tree level~\cite{STWY, LHSTWY}, and in supersymmetry~\cite{SUSYSTWY} and Minimal Dark Matter~\cite{MDM} at one-loop level). In all of these cases it has been found that $W,Y\ge 0$. Next we discuss the general reason behind this result. The K\"allen--Lehmann representation implied by unitarity~\cite{book} tells us that propagators can be written as \begin{eqnarray} \frac{1}{\Pi(p^2)} =\int_0^\infty d m^2 \frac{\rho(m^2)}{p^2 - m^2 - i\varepsilon}\qquad\hbox{with} \qquad \rho(m^2)\ge 0.\end{eqnarray} One can compute $\Pi''(0)$ and write it in a form in which positivity is manifest: $$ \Pi''(0)= \frac{\int\!\int dm_1^2 dm_2^2\,\rho(m_1^2)\rho(m_2^2) (m_1^2-m_2^2)^2/m_1^6 m_2^6}{[\int dm^2 \,\rho(m^2)/m^2]^3} \ge 0.$$ We could similarly prove that $\Pi'(0)\ge 0$, and this indicates a potential caveat. The K\"allen--Lehmann representation applies to correlators of gauge invariant operators.
In models where the SM gauge group is a subgroup of some larger non-abelian gauge group the relevant propagators are not gauge-invariant quantities: they can have matrix elements with unphysical negative-norm states, possibly giving $\Pi''(0)<0$. As is well known, this is indeed what happens in the case of $\Pi'(0)$, which contributes to the $\beta$-function of gauge couplings: non-abelian vectors contribute negatively to the $\beta$-function. Littlest-Higgs models with $T$-parity~\cite{Tparity} might realize this caveat: the one-loop corrections to physical observables must be computed including the full gauge-invariant set of oblique, vertex and box diagrams. \section{Conclusions} We presented a simple and efficient general analysis of the constraints on heavy new physics from electroweak precision data measured below, at and above the $Z$-peak. We found that, out of a complete basis of 18 independent operators, precision data significantly constrain only about 10 combinations of new-physics parameters, see fig.~\ref{fig:sigmas}. We have condensed the dominant precision data into 7 generalized oblique parameters $\hat{S},\hat{T},\hat{U},V,X,Y,W$ (that fully describe how new physics affects vectors and leptons), plus two parameters that describe the main corrections involving quarks: $\delta \varepsilon_q$, which describes corrections to the on-shell $q\bar{q}Z$ vertex, and $C_q$, which describes the size of $e\bar{e}q\bar{q}$ four-fermion operators. A 10th parameter, the traditional $\delta \varepsilon_b$, is necessary if (as in most models) third-generation quarks have unique properties. We have shown that in most cases the simple oblique approximation (where only the seven oblique parameters are turned on) reasonably estimates the constraints on new physics, and that adding all 9 (or 10) parameters gives a bound that typically is within 10\% of the exact bound.
We have shown how to calculate these parameters from a generic set of higher dimensional operators, and emphasized that an added advantage of our parameters is that in many cases they can be directly computed via integrating out proper combinations of heavy new physics. We applied our methods giving approximate bounds on generic $Z'$s (see fig.~\ref{fig:Z'}), and compared them with exact results in the specific cases of frequently-studied $Z'$ (see table~\ref{tab:famZprimes}). Finally, we have shown that first principles demand positivity constraints $W,Y\ge 0$ on these oblique parameters. \footnotesize \section*{Acknowledgments} We thank R.\ Rattazzi for very useful discussions. G.C. also thanks Graham Kribs for the organization of the UltraMini Workshop at the University of Oregon (Eugene), where part of this work was completed. The research of G.C. and C.C. is supported in part by the DOE OJI grant DE-FG02-01ER41206 and in part by the NSF grants PHY-0139738 and PHY-0098631. The work of G.M. is supported in part by the Department of Energy grant DE-FG02-91ER40674. The research of A.S.\ is supported in part by the European Programme `The Quest For Unification', contract MRTN-CT-2004-503369. \normalsize
\section{#1} \setcounter{equation}{0}} \def\0 {\nonumber} \begin{document} \setcounter{page}{0} \newcommand{\inv}[1]{{#1}^{-1}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \newcommand{\be}{\begin{equation}} \newcommand{\ee}{\end{equation}} \newcommand{\bea}{\begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}} \newcommand{\re}[1]{(\ref{#1})} \newcommand{\qv}{\quad ,} \newcommand{\qp}{\quad .} \def\qp{Q_+} \def\qm{Q_-} \def\qbp{\bar Q_+} \def\qbm{\bar Q_-} \def\sgh{\Sigma_{g,h}} \begin{titlepage} \begin{center} \hfill SISSA 17/2010/FM-EP\\ \vskip .3in \noindent {\Large \bf{Taming open/closed string duality with a Losev trick}} \\ \vskip .2in {\bf Giulio Bonelli, Andrea Prudenziati and Alessandro Tanzini} \vskip .05in {\em\small International School of Advanced Studies (SISSA) \\ and \\ INFN, Sezione di Trieste \\ via Beirut 2-4, 34014 Trieste, Italy} \vskip .5in \end{center} \begin{center} {\bf ABSTRACT } \end{center} \begin{quotation}\noindent A target space string field theory formulation for the open and closed B-model is provided by giving a Batalin-Vilkovisky quantization of the holomorphic Chern-Simons theory with an off-shell gravity background. The target space expressions for the coefficients of the holomorphic anomaly equation for open strings are obtained. Furthermore, open/closed string duality is proved from a judicious integration over the open string fields. In particular, by restriction to the case of independence on continuous open moduli, the shift formulas of \cite{Oog} are reproduced and shown therefore to encode the data of a closed string dual. \end{quotation} \vfill \eject \end{titlepage} \section{Introduction} A proper target space formulation of open plus closed topological strings is important for several reasons, the most compelling in our opinion being a better understanding of open/closed string duality which, once an off-shell formulation of the theory is given, should become manifest.
Actually, this is the main subject of this paper. Open/closed duality is commonly believed \cite{OV1} to be the effect of integrating out open strings in the complete string field theory, leaving a purely closed string theory on a suitably modified background. This program is very hard to realize in the full string theory, but it becomes tractable in its truncation to its BPS protected sectors, namely in topological string theories \cite{Witten,BCOV}. This issue has been investigated by several authors in a first quantized or {\it on shell} framework. Actually, the first examples were discussed in terms of geometric transitions \cite{GV}, which were extended to the brane sector in \cite{OV1}. Then, this picture was refined in terms of a proper world-sheet analysis in \cite{OV2}. More advanced on-shell computations have been prompted by \cite{remodeling} and then further by \cite{tso} and \cite{emanuel}. A distinctive feature of topological strings is that the non-holomorphic dependence of their amplitudes can be recursively computed by means of the holomorphic anomaly equations (HAE) \cite{BCOV}. It turns out that the target space formulation of the closed string in terms of the Kodaira-Spencer gravity is very effective in reproducing these recurrence relations from a Feynman diagram expansion. This also provides a target space interpretation of the various coefficients appearing in the HAE. The latter have been more recently extended to open strings in \cite{Wal1} and \cite{BT}, and further studied in \cite{Alim}. The topological open string target space formulation was actually obtained long ago in \cite{W1}, where it was shown to be given by the Chern-Simons theory for the A-model and its holomorphic version for the B-model. These are formulated for a fixed on shell background geometry; in particular, for the B-model the holomorphic Chern-Simons theory is formulated with respect to an integrable complex structure on the Calabi-Yau target.
Since the aim of this paper is to study a string field theory formulation of topological open plus closed strings on equal footing, we will extend this framework to non-integrable structures. The formulation of holomorphic anomaly equations and the target space interpretation of their structure functions are very important tools to obtain a well defined computational framework for open topological strings. D-brane sources for closed strings are actually represented in the HAE by Walcher's term \cite{Wal1}, whose target space interpretation has been given in terms of the Griffiths normal function (see also \cite{MW}). For the B-model this boils down to the on shell holomorphic Chern-Simons action. A remarkable observation \cite{Oog} consists in the proof that Walcher's term can be reabsorbed by a shift in the string coupling constant and the closed moduli. This indeed realizes an on shell proof of the open/closed duality, although at frozen open moduli. In the following we will study this problem from a second quantized point of view, which turns out to be the most appropriate to study open/closed duality, in particular for the B-model. We will work out the BV formulation of the holomorphic Chern-Simons theory by leaving the gravitational background (Kodaira-Spencer gravity field) off shell. This allows us to reformulate open-closed duality as a process of partial functional integration over the open string fields. From the BV viewpoint this procedure follows by partial integration of a proper subset of fields and anti-fields of a solution of the BV master equation, by which one gets another solution depending on a reduced set of fields. This is known as the Losev trick \cite{Losev}. In particular, at frozen open string moduli, we will show that this partial integration exactly reproduces the shift formulas proposed in \cite{Oog}\cite{Wal2}.
More generally, our BV formulation proves the existence of definite shift formulas also in the presence of open moduli, providing a computational set-up to determine them. Moreover, it yields a target space interpretation of the coefficients of the extended HAE for open string moduli as in \cite{BT}\cite{Wal3}. The paper is organized as follows. In section 2 we discuss the classical complete string field theory action for the open plus closed B-model. In section 3 we proceed to its quantization using the BV formalism. In section 4 we discuss the target space interpretation of the coefficients in the open HAE from the string field theory. In section 5 we formulate and prove in general the open/closed duality for the B-model and apply it to the setting of \cite{Wal1,Oog}. In section 6 we collect a few concluding comments. \section{Open-closed effective field theory} It is well known from \cite{BCOV} that the effective space-time theory corresponding to the B-model for closed strings is given by the Kodaira-Spencer theory of gravity: \begin{equation}\label{d} \lambda^{2}S_{KS} = \int_{X}\frac{1}{2}A'\frac{1}{\partial}\overline{\partial}A' - \frac{1}{3}[(A + x)(A + x)]'(A + x)' \end{equation} where $\lambda$ is the string coupling, and $A$ and $x$ are $(0,1)$-forms with values in $(1,0)$ vector fields, that is, in coordinates, $A = A_{\overline{i}}^{j}dz^{\overline{i}}\frac{\partial}{\partial z^{j}}$ and similarly for $x$. In (\ref{d}) $A'=i_A\Omega_0=3(\Omega_0)_{ijk}A^i_{\bar i}dz^j dz^k dz^{\bar i}$ and similarly for $x'$, where $\Omega_{0}$ is the holomorphic three form on the Calabi-Yau target space $X$\footnote{Factors may change depending on the conventions; we will use the ones of \cite{Tod} and \cite{KNS}.}. $A + x$ is defined to be a deformation of the complex structure of $X$ split into an infinitesimal part, $x$, and a finite one, $A$.
The full deformation, $A + x$, is parametrized by the shift $\overline{\partial}_{\overline{i}} \rightarrow \overline{\partial}_{\overline{i}} - (x_{\overline{i}}^{j} + A_{\overline{i}}^{j})\partial_{j} $. By definition the coefficients of forms with barred indices transform in the same way: $w_{\overline{i}} \rightarrow w_{\overline{i}} - (x_{\overline{i}}^{j} + A_{\overline{i}}^{j})w_{j} $. In addition $dz^{j} \rightarrow dz^{j} + ( x^{j}_{\overline{i}} + A^{j}_{\overline{i}} )dz^{\overline{i}}$, while $\partial$ and $d\overline{z}$ are fixed (their shift refers to the antitopological theory). In this way real objects such as the de Rham differential $d$ or a real form $w_{i}dz^{i} + w_{\overline{i}}dz^{\overline{i}}$ remain unchanged. The integrability condition for the modified complex structure is \[ (\overline{\partial} - x - A)(\overline{\partial} - x - A) = - \overline{\partial}(A + x) + \frac{1}{2}[A + x,A + x] = 0 \] which can be rewritten, due to the fact that $\overline{\partial}x = 0$, $x$ being the background parameter valued in $H^{0,1}_{\overline{\partial}}(TM)$, as \be \overline{\partial}A' = \partial((A + x)\wedge ( A + x ))' . \label{kseom}\ee (\ref{kseom}) is the equation of motion of (\ref{d}). Let us stress that it is {\it crucial} that $x$ does not appear in the kinetic term of (\ref{d}). In addition $A$ is required to satisfy the so-called Tian gauge, $\partial A' = 0$, in order to have a well defined kinetic term. The symmetries of (\ref{d}) are the $\Omega_0$-preserving reparameterizations of the complex coordinates $z^{i} \rightarrow z^{i} + \chi^{i}(z,\overline{z})$ and $z^{\overline{i}} \rightarrow z^{\overline{i}}$, where the condition of being $\Omega_{0}$-preserving reads $\partial\chi' = 0$.
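As a quick check of the invariance of real objects stated above, a real one-form indeed remains unchanged under the shift:
\[
w_{j}dz^{j} + w_{\overline{i}}dz^{\overline{i}} \rightarrow w_{j}\left(dz^{j} + (x^{j}_{\overline{i}} + A^{j}_{\overline{i}})dz^{\overline{i}}\right) + \left(w_{\overline{i}} - (x_{\overline{i}}^{j} + A_{\overline{i}}^{j})w_{j}\right)dz^{\overline{i}} = w_{j}dz^{j} + w_{\overline{i}}dz^{\overline{i}} ,
\]
the two shift terms cancelling each other.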
According to $\overline{\partial} \rightarrow \overline{\partial} - (A + x)$ and owing to the fact that $x$ is a background, $A$ transforms as \begin{equation}\label{e} \delta A = -\overline{\partial}\chi - {\cal L}_{\chi}(A + x) = -\overline{\partial}\chi - [\chi,(A + x)] \end{equation} Reinterpreting $\chi$ as a ghost field, this transformation can be promoted to a nilpotent BRST transformation if \begin{equation} \delta \chi = -\frac{1}{2}{\cal L}_{\chi}\chi = -\chi^{i}\partial_{i}\chi. \end{equation} The open effective theory has been analysed by Witten in \cite{W1}; for the B-model it is given by the holomorphic Chern-Simons action \begin{equation}\label{a} \lambda S_{HCS} = \int_{X}\Omega_{0} Tr(\frac{1}{2}B^{0,1}\overline{\partial}B^{0,1} + \frac{1}{3}B^{0,1}B^{0,1}B^{0,1}) \end{equation} with $B^{0,1}$ a Lie algebra valued $(0,1)$-form. The precise definition of the model has been presented in \cite{Tho}. Indeed (\ref{a}) is globally ill-defined. From the Chern-Weil theorem we know that only the difference of two invariant polynomials with respect to two different connections $\hat{B}$ and $B_{0}$ (dropping for the moment the label $(0,1)$) is an exact form. So, using the reference connection $B_{0}$, we can write \begin{eqnarray}\label{m} -\int_{K_{4}}\frac{\Theta}{2} Tr(\hat{F}^{2} - F_{0}^{2}) &=& -\int_{K_{4}}\Theta Tr \;\overline{\partial}(\frac{1}{2}\hat{B}\overline{\partial}\hat{B} + \frac{1}{3}\hat{B}^{3} - \frac{1}{2}B_{0}\overline{\partial}B_{0} - \frac{1}{3}B_{0}^{3}) = \nonumber \\ &=& \int_{X}\Omega_{0} Tr(\frac{1}{2}\hat{B}\overline{\partial}\hat{B} + \frac{1}{3}\hat{B}^{3} - \frac{1}{2}B_{0}\overline{\partial}B_{0} - \frac{1}{3}B_{0}^{3}) \end{eqnarray} where $K_{4}$ is a fourfold containing $X$ as a divisor, while $\Theta$ is a connection of the associated line bundle $\call_X$, so that $\overline{\partial}\Theta = \Omega_{0}\delta(X)$.
We expand $\hat B$ with respect to the reference connection as \[ \hat{B} = B + B_{0} \] so that (\ref{m}) provides the globally well defined action \begin{equation}\label{b} \lambda S_{HCS} = \int_{X}\Omega_{0} Tr(\frac{1}{2}B^{0,1}\overline{\partial}_{B_{0}^{0,1}}B^{0,1} + \frac{1}{3}(B^{0,1})^{3} + F^{0,2}_{0}B^{0,1}) \end{equation} with $\overline{\partial}_{B_{0}^{0,1}}\varphi \equiv \overline{\partial}\varphi + [B_{0}^{0,1},\varphi]_{\pm}$, the sign depending on the degree of the form $\varphi$. $B_{0}$ is the open string background of the theory and as such it obeys the holomorphicity condition $F_{0}^{0,2} = 0$. The symmetries of (\ref{b}) -- at fixed background $B_0$ -- are given by \begin{equation}\label{gt} \delta B^{0,1} = \overline{\partial}_{B_{0}^{0,1}}\epsilon + [B^{0,1},\epsilon]. \end{equation} Now we want to couple the open theory explicitly to the closed field, that is, we want to deform the complex structure of $X$, over which the theory is defined, using the fields $A$ and $x$. Of course the closed fields are in general not on shell, so the new complex structure (better, almost complex structure) is generically not integrable. In addition we want to write the new action with respect to the undeformed complex structure in order to keep the closed field explicit. Actually, under the deformation $\Omega_{0}$ is mapped to \cite{Tod} \begin{equation}\label{x} \Omega = \Omega_{0} + (A + x)' - [(A + x)(A + x)]' - [(A + x)(A + x)(A + x)]' \end{equation} which is a $(\tilde{3},\tilde{0})$ form with respect to the new complex structure (from now on always indicated with a tilde), while with respect to the old one it decomposes into forms of total degree 3, namely $(p,q)$ forms with $p+q=3$. We can now also deform the remaining $(0,3)$ part of the action, $L_{CS}^{0,3}$, with $ L_{CS}^{0,3} \equiv Tr(\frac{1}{2}B^{0,1} \overline{\partial}_{B_{0}^{0,1}}B^{0,1} + \frac{1}{3}(B^{0,1})^{3} + F^{0,2}_{0}B^{0,1}) $, into a $(\tilde{0},\tilde{3})$ form.
In order to take into account the deformation of the complex structure in the full action, the simplest way is to use a real form for the Chern-Simons term, rewriting \be \int_{X}\Omega^{\tilde{3},\tilde{0}} L_{HCS}^{\tilde{0},\tilde{3}} = \int_{X}\Omega^{\tilde{3},\tilde{0}} L_{CS}=\int_{X}\Omega^{\tilde{3},\tilde{0}}Tr\left(\frac{1}{2}Bd_{B_{0}}B + \frac{1}{3}B^{3} + F_{0}B\right) \label{real}\ee where $B$ is a real Lie algebra valued 1-form on $X$. Indeed, $\Omega$ being a $(\tilde{3},\tilde{0})$ form, the added piece is zero. However, from the path integral quantization viewpoint, we have to define a suitable measure for the new field component $B^{\tilde{1},\tilde{0}}$. We will discuss this issue in the next section by using the Batalin-Vilkovisky formalism. For the Kodaira-Spencer gravity in the antifield formalism see \cite{BCOV}. Let us notice that the real form $L_{CS}$ is completely independent of the closed field, while it is $\Omega$ which takes care of projecting the action onto the new complex structure, selecting the complementary form degree from $L_{CS}$. Let us consider the symmetries of (\ref{real}). As far as the diffeomorphisms (\ref{e}) are concerned, $\Omega$ in (\ref{x}) transforms as ${\cal L}_{\chi}\Omega$, so that the whole action is invariant under the standard action on $B$, namely $\delta B=-\call_\chi B$. The situation for the Chan-Paton gauge symmetry is more subtle. Indeed, the field $A$ being off shell, we do not have $d\Omega = 0$. In fact it can be shown \cite{Tod} that $d\Omega = 0$ is equivalent to the equations of motion of the Kodaira-Spencer action, $\overline{\partial}A' = \partial((A + x ) \wedge ( A + x ))'$. So we expect a variation of the action under the gauge transformations (\ref{gt}) proportional to $d\Omega$.
We find \begin{equation} \delta S_{HCS} = \frac{1}{\lambda }\int_{X}\Omega Tr\; d(\frac{1}{2}\epsilon d_{B_{0}}B + F_{0}\epsilon) \end{equation} We can save the day by adding to the action the term $-\frac{1}{2}\Omega db$, where $b$ is a real 2-form field transforming as \cite{BW}: \begin{eqnarray} \delta B &=& d_{B_{0}}\epsilon + [B,\epsilon] \nonumber \\ \delta b &=& Tr(\epsilon d_{B_{0}}B +2 F_0\epsilon) \end{eqnarray} Indeed, with the same $1/\lambda$ normalization as in (\ref{f}) below, only $b$ varies in the added term and $\delta(-\frac{1}{2\lambda}\int_{X}\Omega\, db) = -\frac{1}{\lambda}\int_{X}\Omega\, Tr\; d(\frac{1}{2}\epsilon d_{B_{0}}B + F_{0}\epsilon)$ exactly cancels $\delta S_{HCS}$. The field $b$ acts as a Lagrange multiplier enforcing the Kodaira-Spencer equations for the closed field $A$. However, implementing the associated delta function also requires a determinant factor such that \begin{equation}\label{fp} \int {\cal D}A{\cal D}b e^{-\frac{1}{2}\int_{X}\Omega db} det_{FP} = 1 \end{equation} This determinant measure has to be included in the very definition of the theory and will be explicitly derived in the next section. This is not really the end of the story, as $b$ has shift symmetries along its $(\tilde{2},\tilde{0})$ and $(\tilde{1},\tilde{1})$ components. In addition we should specify the full nilpotent symmetries and the gauge fixing. This will be the subject of the next section. Summarizing, the classical action for the open plus closed B-model is \begin{equation}\label{f} S_{tot}= \frac{1}{\lambda^{2}}\int_{X}\left(\frac{1}{2}A'\frac{1}{\partial}\overline{\partial}A' - \frac{1}{3}[(A + x)(A + x)]'(A + x)' \right)+ \end{equation} \[ + \frac{1}{\lambda}\int_{X} \Omega Tr(\frac{1}{2}Bd_{B_{0}}B + \frac{1}{3}B^{3} + F_{0}B) -\frac{1}{2}\Omega db \] \section{On the BV quantization of Holomorphic Chern-Simons} In this section we provide the BV action for the holomorphic Chern-Simons theory and a non singular gauge fixing fermion. For simplicity, in this section we will drop the tilde in the notation for forms in the new complex structure. Still, the coupling with the closed field is always present.
The classical action is \be \lambda S_o=\int_X \Omega^{(3,0)} \left[Tr\left(\frac{1}{2} B d_{B_0} B + \frac{1}{3} B^3 + B F_0\right)-\frac{1}{2}db\right] \label{hcsa}\ee This is invariant under the infinitesimal gauge transformations \bea s B &=& d_{B_0}\epsilon +[B,\epsilon]+\psi^{(1,0)} \nonumber\\ s b &=& Tr\left(Bd_{B_0}\epsilon+2F_0\epsilon\right) + d\gamma +\eta^{(1,1)}+\eta^{(2,0)} \label{gsym}\eea where $\epsilon$ is the usual gauge symmetry ghost, while $\psi^{(1,0)}$, $\eta^{(2,0)}$ and $\eta^{(1,1)}$ are the ghosts for the shift symmetries. By further defining \bea s\epsilon &=& -\epsilon^2 \nonumber\\ s\psi^{(1,0)} &=& \left[\epsilon,\psi^{(1,0)}\right] \nonumber\\ s\gamma^{(1,0)} &=& n^{(1,0)}- Tr\left(\epsilon \partial^{(1,0)}_{B_0}\epsilon\right)\nonumber\\ s\gamma^{(0,1)} &=& \partial^{(0,1)}m - Tr\left(\epsilon \partial^{(0,1)}_{B_0}\epsilon\right)\nonumber\\ s\eta^{(1,1)} &=& -Tr\left(\psi^{(1,0)}\partial^{(0,1)}_{B_0}\epsilon\right) -\partial^{(1,0)}\partial^{(0,1)} m -\partial^{(0,1)} n^{(1,0)}\nonumber\\ s\eta^{(2,0)} &=& -Tr\left(\psi^{(1,0)}\partial^{(1,0)}_{B_0}\epsilon\right) -\partial^{(1,0)} n^{(1,0)}\nonumber\\ sn^{(1,0)}&=&0\nonumber\\ sm &=& 0 \label{pseudobrs}\eea we get a {\it pseudo}-BRST operator. Actually the operator $s$ defined by (\ref{gsym}) and (\ref{pseudobrs}) is nilpotent only on shell. Explicitly, one gets \be s^2 b^{(0,2)}=\left(\partial^{(0,1)}\right)^2 m \label{nnp}\ee which vanishes only on shell w.r.t. $b$. Actually, as discussed in \cite{Tod}, the differential of the shifted 3-form (\ref{x}) is proportional to the Nijenhuis tensor. Thus (\ref{nnp}) is proportional to the equation of motion of $b$. On all other fields one gets $s^2=0$. The BV recipe is still simple, since one can check that the expansion closes already at second order in the antifields.
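To make the on-shell nilpotency (\ref{nnp}) more explicit, recall that on an almost complex manifold the de Rham differential decomposes by bidegree (schematically, in conventions that may differ by normalizations from \cite{Tod}) as
\[
d = \partial^{(1,0)} + \partial^{(0,1)} + N + \overline{N} ,
\]
where $N: \Omega^{(p,q)} \rightarrow \Omega^{(p+2,q-1)}$ and $\overline{N}: \Omega^{(p,q)} \rightarrow \Omega^{(p-1,q+2)}$ are tensorial operators built out of the Nijenhuis tensor. The bidegree $(0,2)$ component of $d^2 = 0$ acting on a function $m$ then gives
\[
\left(\partial^{(0,1)}\right)^2 m = -\overline{N}\,\partial^{(1,0)} m ,
\]
since $\overline{N}m = 0$ on $(0,0)$-forms. Hence (\ref{nnp}) is tensorial in the Nijenhuis tensor and vanishes precisely when the almost complex structure is integrable, i.e.\ on shell w.r.t. $b$.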
By labeling all the fields entering (\ref{gsym}) and (\ref{pseudobrs}) as $\phi^i$, we therefore have \footnote{Here we use the $\lor$-operator as in \cite{BCOV}, so that the $\lor$ of a $(3,p)$-form is a $(0,p)$-form.} \be S_{BV}= S_o + \int_X \sum_i \phi^*_i s\phi^i + c \int_X \left((b^*)^{(2,2)}\partial^{(1,0)}m\right)^{\lor} (b^*)^{(3,1)} \label{BV}\ee where $c$ is a non-zero numerical constant which will not be relevant for our calculations (see later). One can explicitly show that $S_{BV}$ satisfies $\Delta S_{BV}=0$, where $\Delta$ is the BV laplacian, and $(S_{BV},S_{BV})=0$ for the corresponding bracket. In our conventions, all antifields have complementary form degree with respect to the fields. Let us notice that a parallel result has been obtained by C.~Imbimbo in \cite{camillo} for the A-model. Indeed, also in the case of the real Chern-Simons theory, the coupling with the gravitational background requires the use of the full BV formalism, giving rise to quadratic terms in the anti-fields. In order to gauge fix, we need to add the anti-ghost multiplets for all the gauge-fixed parameters. Actually we are going to gauge fix our theory only partially, that is, we will keep the ($\epsilon$-)gauge freedom relative to the Chan-Paton bundle. By introducing the relevant anti-ghost multiplets, we define the gauge fixing fermion \bea \Psi= \int_X \left\{ {\bar\psi}^{(1,3)}\left(d_{B_0}B + B^2 + F_0\right)^{(2,0)} +{\bar\eta}^{(2,2)}b^{(1,1)} +{\bar\eta}^{(1,3)}b^{(2,0)} \right.\\\left. +{\bar n}^{(2,3)}\gamma^{(1,0)} +{\bar m}^{(3,3)} \left(\partial^{(0,1)}\right)^\dagger \gamma^{(0,1)} +{\bar\gamma}^{(3,2)}\left[\left(\partial^{(0,1)}\right)^\dagger b^{(0,2)}+\partial^{(0,1)}p\right] \right\} \eea and add the (trivial) anti-ghost part of the BV action in the usual form.
We therefore extend the action of the $s$-operator, that is, the BV bracket with the part of the BV action linear in the anti-fields, to the anti-ghosts in the trivial way, namely for any anti-ghost $\bar\psi$ we have $s\bar\psi=\Lambda_{\bar\psi}$ and $s\Lambda_{\bar\psi}=0$. The anti-ghost gauge freedom is fixed by the addition of the relevant further sector. Finally we can compute the (partially) gauge fixed action by specifying all anti-fields as derivatives of the gauge fermion $\Psi$ with respect to their corresponding fields. All in all, the (partially) gauge fixed action reads \bea S_{g.f.}= S_o + s\Psi + c\int_X\left( {\bar\eta}^{(2,2)}\partial^{(1,0)}m\right)^{\lor}\left(\partial^{(0,1)}\right)^\dagger{\bar\gamma}^{(3,2)} \label{gfa} \eea Let us now perform the path-integral in the different sectors (naming each by the corresponding anti-ghost appearing in the gauge fermion). \begin{itemize} \item The ${\bar\psi}^{(1,3)}$ sector is seen to decouple since $$s\left\{d_{B_0}B + B^2 + F_0\right\}^{(2,0)}=\partial^{(1,0)}_{B_0}\psi^{(1,0)}+\left[B^{(1,0)},\psi^{(1,0)}\right]_+$$ Therefore we get the contribution $$ \int \cald[B^{(1,0)}]\delta\left(\partial^{(1,0)}_{B_0}B^{(1,0)} + B^{(1,0)}B^{(1,0)} + F_0^{(2,0)}\right) {\rm det'}\left\{\partial^{(1,0)}_{B_0}+\left[B^{(1,0)},\cdot\right]_+\right\} $$ which counts the volume of the space of holomorphic connections. \item The two $\bar\eta$-sectors are just algebraic and give a constant contribution to the path-integral. Notice that, while integrating over $\bar\eta^{(2,2)}$, the last term in (\ref{gfa}) also gets involved, being reabsorbed in a shift of $\eta^{(1,1)}$. This gauge fixing of course restricts the field $b$ to be a $(0,2)$-form only and sets $\eta^{(1,1)}$ and $\eta^{(2,0)}$ to zero. \item The ${\bar n}^{(2,3)}$ sector is algebraic too and simply sets to zero $\gamma^{(1,0)}$ and its partner. \item The last part is the standard term for higher-form BV quantization (see for example \cite{HT}).
The fermionic bilinear operator reduces to $$ \calb= \left(\begin{matrix}-{\partial^{(0,1)}}^\dagger\partial^{(0,1)} & -\partial^{(0,1)} \\ {\partial^{(0,1)}}^\dagger & 0 \end{matrix}\right)$$ mapping $\Omega^{(0,1)}(X)\oplus \Omega^{(0,0)}(X)$ to itself. The bosonic bilinear operator is instead the anti-holomorphic laplacian $\Delta^{(0,0)}={\partial^{(0,1)}}^\dagger\partial^{(0,1)}$ on the scalars $\Omega^{(0,0)}(X)$. One is therefore left with the gauge fixed measure \be \int \cald[Y] e^{-\frac{1}{2}\int_X Y \calc Y +\int_X J^tY} \label{circa}\ee where $Y=\left(p,\Lambda_{\bar\gamma},b^{(0,2)}\right)$, $$ \calc= \left( \begin{matrix} 0 & -\partial^{(0,1)} & 0 \\ \partial^{(0,1)}& 0 & {\partial^{(0,1)}}^\dagger\\ 0 & -{\partial^{(0,1)}}^\dagger & 0 \end{matrix} \right) $$ and the source $J=(0,0,d\Omega)$ takes into account the classical action. Being Gaussian, Eq.~(\ref{circa}) can be integrated explicitly. \end{itemize} Therefore, all in all, we find that the quantum measure for the holomorphic Chern-Simons theory is \be \frac{det'[\calb]}{det'[\Delta^{(0,0)}]\left(det'[\calc]\right)^{1/2}} e^{J^t(\calc)^{-1}J} \label{qm}\ee for a (generically non-integrable) almost complex structure. The determinant of the operator $\calc$ is easily obtained by noticing that $$ \{\calc,\calc^\dagger\}= \left( \begin{matrix} \Delta^{(0,0)} & 0 & \left(\partial^{(0,1)}\right)^2 + \left({\partial^{(0,1)}}^\dagger\right)^2\\ 0 & 2 \Delta^{(3,2)} & 0 \\ \left(\partial^{(0,1)}\right)^2 + \left({\partial^{(0,1)}}^\dagger\right)^2 &0 & \Delta^{(2,0)} \end{matrix} \right) $$ We want to compare our open theory, defined as coupled to the closed field $A$, with the standard holomorphic Chern-Simons theory, defined for an integrable complex structure. In particular, the two theories should match once we put the closed field on shell. So the integral over all the additional fields should contribute as unity.
Notice that, if the complex structure is integrable, then $d\Omega=0$ and the source term does not contribute. On top of this, since $\left(\partial^{(0,1)}\right)^2=0$, the bosonic operator block-diagonalizes. Moreover, in this case the determinant of the fermionic operator $\calb$ can be easily computed \footnote{This can be done by writing the eigenvector equation for $\calb$ as $\calb \tbinom{a}{b}=\lambda\tbinom{a}{b}$ and then expanding the 1-form $a=\partial^{(0,1)}x+{\partial^{(0,1)}}^\dagger y$ in exact and co-exact parts. Then one finds that $b=\lambda x$ and that the eigenvalues of $\calb$ coincide with those of $\Delta^{(0,2)}$ for $x=0$ or with the square roots of those of $\Delta^{(0,0)}$ for $y=0$.} to be equal to $det'\Delta^{(0,2)} \left(det'\Delta^{(0,0)}\right)^{1/2}$. All in all, we find an overall \be \frac{det'[\Delta^{(0,2)}] \left(det'[\Delta^{(0,0)}]\right)^{1/2}} {det'[\Delta^{(0,0)}]\left\{\left(det'[\Delta^{(0,2)}]\right)^2 det'[\Delta^{(0,0)}]\right\}^{1/2} }=\frac{1}{det'[\Delta^{(0,0)}]} \label{ciapa}\ee This determines the value of the quantum measure introduced in (\ref{fp}). The factor (\ref{ciapa}) counts the extra degrees of freedom introduced by the $b$ field in the theory. Indeed, the three components of $b^{(0,2)}$ are subject to the gauge freedom of shifting by an exact $\partial^{(0,1)}\gamma^{(0,1)}$ term, up to the ghost-for-ghost freedom of shifting $\gamma^{(0,1)}$ by $\partial^{(0,1)}m$. Therefore the overall counting is $3-3+1=1$ complex modes. \section{String field theory as generating function of open and closed HAEs.} Our claim of having found the effective space-time theory for the open B-model should be checked explicitly. Because of tadpole cancellation (see \cite{nostro} and \cite{Wal3}), we know that the open theory is completely well-defined only in its unoriented version (as is the case for usual string theories), so the most general case to consider is that of open (and closed) unoriented strings.
Closed moduli are known to be unobstructed, so expansions of the amplitudes in their values are always possible. We will proceed similarly for the open moduli. An important result of \cite{BCOV} is that the partition function of the Kodaira-Spencer theory encodes the recurrence relations of the HAE via its Feynman diagram expansion. The generating function of the full HAE of \cite{BT}, generalized to the unoriented case, should be: \begin{equation}\label{t} e^{W(x,u;t,\overline{t})} \sim \exp\left(\sum_{g,h,c,n,m} \frac{\lambda^{2g - 2 + h + c}}{2^{\frac{\chi}{2} + 1}\;n!m!} {\cal F}^{(g,h,c)}_{i_{1}\dots i_{n}\alpha_{1}\dots \alpha_{m}}x^{i_{1}}\dots x^{i_{n}}u^{\alpha_{1}}\dots u^{\alpha_{m}}\right) \end{equation} up to an overall $\lambda$-dependent prefactor which encodes the contact terms in one loop calculations and will be discussed later. This prefactor $\lambda^{\dots}$ is encoded, on the field theory side, in the measure of the path integral, namely in the multiplicative term weighting the regularized determinants with omitted zero modes. From now on, in any case, we will focus on the perturbative expansion in $\lambda$. The notation is as follows: ${\cal F}^{(g,h,c)}_{i_{1}\dots i_{n}\alpha_{1}\dots \alpha_{m}}$ is the string amplitude with genus $g$, $h$ boundaries, $c$ crosscaps, $n$ marginal operator insertions in the bulk and $m$ on the boundary. The $x^{i}$'s are the expansion coefficients of $x$ in a basis of Beltrami differentials, $x = x^{i}\mu_{i}$, and the $u^{\alpha}$'s are the expansion coefficients of $B_{0}$ in a basis $T_\alpha(x)$ of the open moduli $H^{(0,1)}(X,Ad_E)$, namely $B_{0} = u^{\alpha}T_{\alpha}$. Thus the fields appearing as backgrounds in the field theory are the open and closed moduli themselves. The factor $\frac{1}{2^{\frac{\chi}{2} + 1}}$ is explained in \cite{Wal3} and obviously $\chi = 2g -2 + h + c$.
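As an illustration of these normalizations, for the disk and the crosscap ($g = 0$, $h + c = 1$) one has $\chi = -1$, so the corresponding terms in (\ref{t}) are weighted by
\[
\frac{\lambda^{2g - 2 + h + c}}{2^{\frac{\chi}{2} + 1}} = \frac{1}{\sqrt{2}\,\lambda} ,
\]
which is the origin of the $\frac{1}{\sqrt{2}}$ factors appearing in the tree-level identifications of the next subsections.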
If what we are doing is consistent, it should be true that \begin{equation}\label{g} \int {\cal D}A{\cal D}B{\cal D}b\dots e^{-S_{tot}(x,B_{0}(x);t,\overline{t};A,B,b,\dots)} = e^{W(x,u(x);t,\overline{t})}. \end{equation} We want to compare the two sides at tree level, that is at $g = 0, h = 0,1, c = 0$ and $g = 0, h = 0, c = 1$, and obtain in this way explicit expressions for all the basic objects entering the extended HAE of \cite{BT}, computed at a generic background point. These amplitudes are already known, having been computed by worldsheet methods, and the two results should of course match. To this end we will differentiate, at each order in $\lambda$, both sides with respect to the moduli parameters $x^{i}$ and $u^{\alpha}$ and identify the corresponding coefficients. A comment is in order. We should remember that the expression (\ref{t}) is the partition function of the unoriented theory. As explained in \cite{Wal3}, this differs from the oriented one simply by projecting the space of operators of the theory onto the unoriented sector, that is, onto the operators with eigenvalue $+1$ under the parity operator $\cal{P}$. These operators being nothing else than deformations of the moduli space of the theory, we have to consider only its invariant part under $\cal{P}$ and then parametrise its tangent space with $x^{i}$ and $u^{\alpha}$. This means that the $x^{i}$'s and the $u^{\alpha}$'s appearing in (\ref{t}) are really a subset of the ones of the oriented case. Specifically, it implies a restriction on the space of complex structures as far as $x$ is concerned, and a reduction to $Sp(N)/SO(N)$ groups for $u$. Still, some amplitudes, such as the sphere with three insertions, are perfectly meaningful also in the oriented case. This is why we will generically not specify to which space the $x^{i}$'s and the $u^{\alpha}$'s belong: it is possible to restrict their values depending on the case.
\subsection{$g = 0$, $h = c = 0$} Here we start the comparison between the string theory partition function and the space-time path integral (\ref{g}). We begin with the coefficients at lowest order in $\lambda$. From the point of view of (\ref{t}) this is the amplitude at $g = h = c = 0$ with weight $\frac{1}{\lambda^{2}}$; on the field-theory side the contribution should come only from the Kodaira-Spencer action, also at weight $\frac{1}{\lambda^{2}}$. We know that the right-hand side of equation (\ref{g}) at this order in $\lambda$ has no dependence on the open moduli (because without boundaries, $h = 0$, there is no room for open operator insertions), the building block amplitude being $C_{ijk}(x)$: \[ C_{ijk}(x) = {\cal F}^{(0,0,0)}_{ijk}(x) = \sum_{n} \frac{1}{n!}{\cal F}^{(0,0,0)}_{ijki_{1}\dots i_{n}}x^{i_{1}}\dots x^{i_{n}} = \frac{\partial}{\partial x^{i}}\frac{\partial}{\partial x^{j}}\frac{\partial}{\partial x^{k}}W\mid_{({\rm order}\;\lambda^{-2})} \] Being at tree level and given (\ref{g}), the same result can be obtained (see \cite{BCOV}) by differentiating the on-shell Kodaira-Spencer action ($A = A(x)$) with respect to three $x^{i}$'s. The three-derivative term gives\footnote{The factor $-2$ depends on our conventions, which are slightly different from \cite{BCOV}.} \[ -2\int_{M}\left[\left(\mu_{i} + \frac{\partial A(x)}{\partial x^{i}}\right)\wedge \left(\mu_{j} + \frac{\partial A(x)}{\partial x^{j}}\right)\right]' \left(\mu_{k} + \frac{\partial A(x)}{\partial x^{k}}\right)' = C_{ijk}(x) \] The only point of possible confusion for the BCOV-educated reader, both here and in the subsequent computations, comes from the novel cross-dependence of the on-shell open and closed fields on each other through the field equations, which are now modified with respect to those obtained from the open and closed actions taken separately.
This might seem to bring in additional induced derivatives and contributions, as, in this case, an induced open moduli dependence carried by the on-shell closed field, which would lead to the paradox of a non vanishing amplitude corresponding to a sphere with boundary insertions! Fortunately, integrating out the field $b$ does the job of enforcing the closed field solutions that would be obtained from the Kodaira-Spencer action alone! It will be true instead that the on-shell open fields carry some closed field dependence, as the $B$-field equation, $F_{B_{0}}^{\tilde{0},\tilde{2}} \equiv (d_{B_{0}}B + B^{2} + F_{0})\mid^{\tilde{0},\tilde{2}} = 0$, depends both on $B_{0}(u)$ and on $x$. This is a good place to stop and discuss the connection between our result for the coupling of the open theory to the closed one and the comments made by Witten in \cite{W1} on this point. In his paper Witten uses an argument based on the fatgraph description of a string tree level amplitude to infer that, if one considers a diagram with $n$ bulk and $m$ boundary insertions, in general the bulk operators will reduce to exact (with respect to the topological charge) objects and so will decouple. This goes through even in the case $m = 0$, as long as some boundaries are present. The direct consequence is that the on-shell couplings between closed and open strings are zero. How then can one justify the non vanishing $\Delta_{ij}$ amplitudes of \cite{Wal1}, \cite{Wal2} and \cite{Wal3}? Our answer is in a sense a weakened realization of Witten's idea, still allowing non vanishing amplitudes with bulk operators and boundaries. The key role is played by the field $b$, introduced in the action to maintain the gauge symmetries of the Chern-Simons term. This field, once integrated over, fixes the closed field $A$ to be on shell with respect to the original Kodaira-Spencer equations, thus defining a shift by an integrable complex structure.
This translates into the fact that the original genuine coupling between open and closed fields in the action reduces to a coupling between an open, integrated field and an on-shell closed field. That is, it represents a new Chern-Simons expansion around a new, shifted and fixed complex structure. So the path integration over the closed field $A$ reduces to a single contribution coming from the unique deformation of the original complex structure with respect to which the Kodaira-Spencer action is written, this contribution being weighted by the corresponding on-shell Kodaira-Spencer action. If closed strings substantially decouple from the open theory, what is then their role? This is the next point discussed by Witten in \cite{W1}, where their crucial role in anomaly cancellation is pointed out. For example, in the A-model, whose effective theory is the real Chern-Simons theory, a well known topological anomaly is present. It comes from the $\eta$-invariant of \cite{W2}, whose dependence on the metric is compensated by the addition of a gravitational Chern-Simons term. There is in addition a well known anomaly connected to the framing of the target space. In the case of the B-model, however, the $\eta$-invariant is simply zero because the spectrum of eigenvalues of the determinant whose phase is $\eta$ is symmetric around zero \cite{Tho}. Instead we have one loop anomalies corresponding to a dependence on the wrong moduli \cite{new} (K\"ahler moduli in this case), which is cured by tadpole cancellation \cite{Wal3,nostro}, involving unoriented contributions in the closed string sector (Klein bottle). \subsection{$g = 0$, $h + c = 1$} In this subsection we want to compare the worldsheet and target space perspectives at order $1/\lambda$. From the string theory side, the relevant amplitudes of weight $\frac{1}{\lambda}$ ($g = 0$, $h + c = 1$) entering the HAE were discussed in \cite{BT}.
From the field theory perspective, all of them should be reproduced by the holomorphic Chern-Simons action. Let us start with the purely closed moduli dependence. This can come either from the explicit dependence on $x$ in $\Omega$ and from the induced dependence of the on-shell fields $A(x)$ and $B(x,u)$, or implicitly through the background $B_{0}(x)$. We will find that the dependence on the closed moduli which is explicit or through the on-shell fields, both closed and open, corresponds to bulk insertions in the string amplitude, while the dependence on the closed moduli through the open background field corresponds to induced boundary insertions\footnote{An additional closed moduli dependence in the worldsheet action would also come from the Warner term \cite{War}. For the B-model this additional boundary term, needed to make the action invariant, vanishes under the usual boundary conditions \cite{W1}, as discussed in \cite{mirror}.}. The two operators will be indicated as $\phi_{i}$ and $\psi_{i}$ (so, for example, $C_{ijk} = \langle \phi_{i}\phi_{j}\phi_{k}\rangle_{0,0,0}$, where the subscript denotes the triple $g,h,c$). The first amplitude we want to derive is $\Delta_{ij} = \langle \phi_{i}\phi_{j}^{[1]}\rangle_{0,1,0 \; + \; 0,0,1}$, which was computed in \cite{Wal1} and \cite{Wal3} as an additional building block for the extended HAE. This is the disk plus the crosscap with two bulk insertions. In particular, $\phi_i$ is a local insertion while $\phi_j^{[1]}$ is an integrated one, being the second step of the descent equations.
So, from (\ref{f}) we get \begin{equation} \frac{1}{\sqrt{2}}\Delta_{ij}(x) = \int_{X} d_{i}d_{j}\Omega L_{CS} + \int_{X} d_{i}\Omega d_{j}L_{CS} + \int_{X} d_{j}\Omega d_{i}L_{CS} + \int_{X} \Omega d_{i}d_{j}L_{CS} \label{paletta} \end{equation} where all the fields are on shell; $d_{i}$ is the derivative with respect to the closed modulus $x^{i}$, acting both explicitly and through the dependence induced by $A(x)$ and $B(u,x)$; the factor $\frac{1}{\sqrt{2}}$ comes from the normalization in (\ref{t}). Using the field equations for $B$ we obtain the identity \[ 0 = d_{j}\left(\int_{X} \frac{\delta S_{HCS}}{\delta B}\mid_{B=B(u,x)}d_iB(u,x)\right)= d_{j}\left(\int_{X} \Omega d_{i}L_{CS} \right) = \int_{X} d_{j}\Omega d_{i}L_{CS} + \int_{X} \Omega d_{i}d_{j}L_{CS} \] that is, the last two terms in (\ref{paletta}) cancel. This is nothing but the Griffiths transversality condition for the normal function, as stated in \cite{Wal1}. So we get \begin{equation}\label{l} \frac{1}{\sqrt{2}}\Delta_{ij}(x) = \langle \phi_{i}\phi_{j}\rangle_{0,1,0 \; + \; 0,0,1} = \int_{X} d_{i}d_{j}\Omega L_{CS} + \int_{X} d_{i}\Omega d_{j}L_{CS} \end{equation} This differs from the expression derived in \cite{Wal1,Wal3} by the first term. However, notice that (\ref{l}) is valid at a generic value $x$ of the closed string moduli, while the expressions of \cite{Wal1,Wal3} are evaluated at $x = 0$, where the double derivative of $\Omega$ vanishes. This follows from expression (\ref{x}) and from the fact that $A(x) = O(x^{2})$, as one finds by solving the Kodaira-Spencer equations iteratively. Let us now consider the amplitudes with one bulk and one boundary insertion.
The latter, as already stated, is obtained from the derivative with respect to the background open field $B_0$, which depends on $x$: \[ \frac{1}{\sqrt{2}}\Delta'_{ij} = \langle \phi_{i}\psi_{j}^{[1]}\rangle_{0,1,0} = \left(d_j B_{0}(x) \frac{\delta}{\delta B_{0}(x)}\right)d_{i}S_{HCS} \] To compute this term from the space-time point of view, it is easier to start from the action written in terms of $\hat{B}$ and $B_{0}$, (\ref{m}). The result follows immediately as \begin{equation} \frac{1}{\sqrt{2}}\Delta'_{ij} = \langle \phi_{i}\psi_{j}^{[1]}\rangle_{0,1,0} = -\int_{X} d_{i}\Omega Tr(d_{j}B_{0}(x)F_{0}) \end{equation} once the e.o.m.\ of the open field are imposed. Now we pass to the purely open moduli derivatives. The only term is the one derived three times or, equivalently, the one with three boundary operator insertions: $C_{\alpha\beta\gamma}$. Again using the form (\ref{m}), we need only the explicit derivatives with respect to $u^{\alpha}$ (recall that $B_{0} = u^{\alpha}T_{\alpha}$). The result is \begin{equation}\label{n} \frac{1}{\sqrt{2}}C_{\alpha\beta\gamma} = \langle \Theta_{\alpha}\Theta_{\beta}\Theta_{\gamma}\rangle_{0,1,0} = -\int_{X}\Omega Tr(T_{\alpha}T_{\beta}T_{\gamma}) \end{equation} which is the same as would be derived with worldsheet methods, in analogy with $C_{ijk}$. Finally we have the mixed terms. These are obtained similarly, giving \begin{equation}\label{o} \frac{1}{\sqrt{2}}\Pi_{\alpha i} = \langle \Theta_{\alpha}\phi_{i}\rangle_{0,1,0} = -\int_{X} d_{i}\Omega Tr(T_{\alpha}F_{0}) \end{equation} and \begin{equation}\label{p} \frac{1}{\sqrt{2}}\Delta'_{\beta i \alpha } = \langle \Theta_{\beta}\psi_{i}^{[1]}\Theta_{\alpha}\rangle_{0,1,0} = -\int_{X} \Omega Tr(T_{\beta}d_{i}B_{0}T_{\alpha}) \end{equation} \section{Open-Closed string duality as a Losev trick} \label{zio} Let us explain a basic argument about open-closed string duality in second quantization.
This refers to the topological string theory at hand (the B-model), but in principle it should hold in a more general setting. The Losev trick, as explained in \cite{Losev}, consists of a procedure for obtaining solutions of the quantum Master Equation in Batalin-Vilkovisky quantization by partial gauge fixing. In full generality it reads as follows. Let $S(\Phi,\Phi^*)$ be a solution of the quantum Master equation \be \Delta\left(e^{-S/\hbar}\right)=0 \label{qme}\ee where $\Delta=\partial_\Phi\partial_{\Phi^*}$ is the nilpotent BV laplacian. Suppose that the field/anti-field space $\calF$ has the form of a fibration $$ \begin{matrix} \calF_2 & \hookrightarrow & \calF \\ \, & \, & \downarrow \\ \, & \, & \calF_1 \end{matrix} $$ so that one can choose a split coordinate system $(\Phi,\Phi^*)=(\Phi_1,\Phi^*_1, \Phi_2,\Phi^*_2)$ such that the BV laplacian splits consistently as $\Delta=\Delta_1 + \Delta_2$ with $\Delta_1^2=0$. Then, assuming the existence of a non-singular gauge fermion $\Psi$, one can consider the partially gauge fixed BV effective action \be e^{-\frac{1}{\hbar} S_{eff}(\Phi_1,\Phi^*_1)} = \int_{\calF_2}\cald\left[\Phi_2,\Phi^*_2\right] e^{-\frac{1}{\hbar} S(\Phi,\Phi^*)} \delta\left(\Phi_2^*-\partial_{\Phi_2}\Psi\right)
\ee which can be readily seen to satisfy the reduced BV Master equation \be \Delta_1 e^{-\frac{1}{\hbar} S_{eff}(\Phi_1,\Phi^*_1)}=0 \label{qme1}\ee Actually -- the proof is two lines -- one considers (\ref{qme}) partially gauge fixed on the fibers and integrated along the fiber $\calF_2$ $$ 0=\int_{\calF_2}\cald\left[\Phi_2,\Phi^*_2\right] \left\{\Delta_1+\Delta_2\right\} e^{-\frac{1}{\hbar} S(\Phi,\Phi^*)} \delta\left(\Phi_2^*-\partial_{\Phi_2}\Psi\right) = \Delta_1 e^{-\frac{1}{\hbar} S_{eff}(\Phi_1,\Phi^*_1)} + $$ $$+ \int_{\calF_2}\cald\left[\Phi_2\right] \left\{ \frac{d}{d\Phi_2} \left( \left[\partial_{\Phi_2^*} e^{-\frac{1}{\hbar} S(\Phi_1,\Phi_2,\Phi^*_1,\Phi_2^*)} \right]_{\Phi_2^*=\partial_{\Phi_2}\Psi} \right) - \partial^2_{\Phi_2}\Psi \cdot \left(\partial^2_{\Phi_2^*} e^{-\frac{1}{\hbar} S(\Phi,\Phi^*)}\right)\mid_{\Phi_2^*=\partial_{\Phi_2}\Psi} \right\}$$ Now, the last line vanishes because of the translation invariance of the path integral along the fiber and the opposite statistics of fields and anti-fields, so that we recover (\ref{qme1}). Let us notice that the resulting BV effective action depends on the particular gauge fixing chosen to integrate out the fiber degrees of freedom. This dependence is BV trivial in the effective action. Let us now specialize the above setup to open/closed string theory: we identify $\calF$ with the open plus closed string theory, $\calF_2$ with the open strings and $\calF_1$ with the closed strings. The complete theory is given by the BV action \be S_{c+o}(A,B;x,u,\lambda)=S_c(A;x,\lambda)+S_o(A,B;x,u,\lambda) \label{c+o}\ee where $S_c(A;x,\lambda)$ is the closed string BV action, and $S_o(A,B;x,u,\lambda)$ completes the open and closed BV action. The BV laplacian takes the form $\Delta_{c+o}=\Delta_c + \Delta_o$. We assume that both the closed and the open plus closed strings have been BV formulated, so that the corresponding quantum Master equations hold.
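As an elementary consistency check (a special case we add for illustration, not part of the original argument): if $S$ does not depend on $\Phi_2^*$ at all, the delta function is immaterial and the mechanism becomes transparent:

```latex
% If S is \Phi_2^*-independent, then \Delta_2 e^{-S/\hbar} = 0 identically, and
\begin{align*}
e^{-\frac{1}{\hbar} S_{eff}(\Phi_1,\Phi^*_1)}
&= \int_{\calF_2}\cald\left[\Phi_2\right]\, e^{-\frac{1}{\hbar} S(\Phi_1,\Phi^*_1,\Phi_2)}, \\
\Delta_1\, e^{-\frac{1}{\hbar} S_{eff}}
&= \int_{\calF_2}\cald\left[\Phi_2\right] \left(\Delta_1+\Delta_2\right) e^{-\frac{1}{\hbar} S}
 = \int_{\calF_2}\cald\left[\Phi_2\right]\, \Delta\, e^{-\frac{1}{\hbar} S} = 0 ,
\end{align*}
```

i.e. the reduced Master equation follows directly from the full one, with no boundary term along the fiber.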
Moreover, the uniqueness of closed string field theory is taken to mean that all solutions of the quantum Master equation with proper boundary conditions in the string coupling dependence -- namely the background independence of the kinetic term -- are given by $S_c(A;x,\lambda)$ for some background $x$ and some choice of the string coupling constant $\lambda$. For the B-model, this is explicitly proved in \cite{BCOV}. Therefore, by specializing the Losev trick to our case, we find that the effective action obtained from (\ref{c+o}) by partial gauge fixing and integration over the open string field satisfies the quantum Master equation (\ref{qme1}), that is, the quantum Master equation for the {\it closed} string field theory. Notice that, by definition, \be e^{-S_{eff}(A,x,\lambda,u)}= e^{-S_c(A;x,\lambda)} \int_{\tiny \begin{matrix} gauge \\ fixed \end{matrix} } \cald[B]e^{-S_o(A,B;x,u,\lambda)} \label{yyy}\ee approaches the required boundary condition in the string coupling constant dependence. The actions entering (\ref{yyy}) are required to have a canonically normalized kinetic term. Therefore, we conclude that the effective action (\ref{yyy}) has to be the closed string field action (in some gauge determined by the gauge fixing in the open string sector) for a shifted set of background moduli and a redefined string coupling constant, that is, \be \caln\,\, e^{-S_{c}(A;x^\star,\lambda^\star)}= e^{-S_c(A;x,\lambda)} \int_{\tiny \begin{matrix} gauge \\ fixed \end{matrix} } \cald[B]e^{-S_o(A,B;x,u,\lambda)} \label{yygen}\ee up to a field-independent normalization $\caln$. The particular case we have in mind is therefore the topological B-model, where $S_c$ is the Kodaira-Spencer gravity action and $S_o$ the holomorphic Chern-Simons action suitably coupled to the Kodaira-Spencer field as discussed in the previous sections.
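Writing $(x^\star)^i = x^i + \delta^i$ and $1/\lambda^\star = 1/\lambda + \delta$ for the (as yet undetermined) shifts, the lowest order of this matching can be sketched as follows (a schematic expansion we add for illustration; higher orders and the field renormalization are left implicit):

```latex
% Schematic first-order expansion of (\ref{yygen}): using
% S_c(A;x^\star,\lambda^\star) = S_c(A;x,\lambda)
%    + \delta^i \partial_{x^i} S_c + \delta\, \partial_{\lambda^{-1}} S_c + O(\delta^2)
% and taking the logarithm of both sides of (\ref{yygen}) gives
\begin{equation*}
-\log \int_{\rm g.f.} \cald[B]\, e^{-S_o(A,B;x,u,\lambda)}
= \delta^i\, \partial_{x^i} S_c(A;x,\lambda)
+ \delta\, \partial_{\lambda^{-1}} S_c(A;x,\lambda)
- \log\caln + \ldots
\end{equation*}
```

so that the shifts are read off by matching the $A$-dependence of the open string free energy onto derivatives of the closed string action.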
After passing to flat coordinates, (\ref{yygen}) then specializes to \be \caln(u,x,\lambda^{-1}\Omega_0)\,\, e^{-\frac{1}{{\lambda^\star}^2}S_{KS}(A^*,x^\star)}= e^{-\frac{1}{\lambda^2}S_{KS}(A,x)} \int_{\tiny \begin{matrix} gauge \\ fixed \end{matrix} } \cald[B]e^{-\frac{1}{\lambda}S_{HCS}(A,B,x,u)} \label{yy}\ee where the closed string field gets renormalized as $A^\star/\lambda^\star=A/\lambda$. In (\ref{yy}) $\caln$ is a normalization factor\footnote{The particular dependence on the ratio $\Omega_0/\lambda$ is due to the fact that we have chosen flat coordinates $u,x$ for the moduli. See the next section for a specific discussion of the relevance of the normalization factor in the comparison with \cite{Wal2}.} and \be \frac{1}{\lambda^\star}=\frac{1}{\lambda} + \delta(u,x,\lambda) \quad {\rm and} \quad (x^\star)^i=x^i +\delta^i(u,x,\lambda) \label{shift}\ee are some shifted background and string coupling. All these are {\it to be determined} and can be perturbatively computed from (\ref{yy}) by a Feynman diagram expansion, or with non-perturbative techniques when available. The redefinition (\ref{shift}) is a generalization (with tunable open moduli) of the moduli shift in \cite{Oog}. The aim of the next subsection is to show that, at frozen open moduli, the above formulas reproduce the shift of \cite{Oog}. \subsection{Open-closed duality at frozen open moduli}\label{z} In this subsection we apply the general arguments just explained in Section \ref{zio} to the oriented string theory with frozen open moduli \cite{Oog}. Indeed, since we work just at tree level, we do not have to deal with unoriented amplitudes. The effect of freezing the open moduli is easily obtained by replacing the non-abelian field $B$ with $N$ identical copies of an abelian one, reducing the trace simply to a Chan-Paton factor $\beta$, which takes into account the number of boundaries.
Accordingly, we consider a slightly modified version of (\ref{t}) which better fits our purposes: \begin{equation}\label{y} e^{W(x,\lambda^{-1})} = \lambda^{\frac{\chi}{24} - 1 - \beta^{2}\frac{N}{2}}\exp\left(\sum_{g,h,n} \frac{\lambda^{2g - 2 + h + n}}{n!}\beta^{h}{\cal F}^{(g,h)}_{i_{1}\dots i_{n}}x^{i_{1}}\dots x^{i_{n}}\right) \end{equation} Equation (\ref{y}) is obtained from (\ref{t}) by suppressing all the open moduli parameters $u^{\alpha}$, rescaling $x^{i}\rightarrow \lambda x^{i}$, and including the additional $\beta$-parameter dependence. The HAE for open strings of \cite{Wal1} are obtained as a power expansion of (\ref{y}) in $x^{i}$, $\lambda$ and $\beta$ \begin{equation}\label{z1} \left(-\overline{\partial}_{\overline{i}} + \frac{1}{2}C^{jk}_{\overline{i}}\frac{\partial^{2}}{\partial x^{j}\partial x^{k}} + G_{j\overline{i}}x^{j}\frac{\partial}{\partial\lambda^{-1}} - \beta\Delta_{\overline{i}}^{j}\frac{\partial}{\partial x^{j}} \right) e^{W(x,\lambda^{-1})} = 0 . \end{equation} In \cite{Oog} it was shown that the above HAE (\ref{z1}) can be derived from the HAE of the closed theory by means of a suitable change of variables \begin{eqnarray}\label{q} x^{i} \rightarrow x^{i} + \beta\Delta^{i} \nonumber \\ \lambda^{-1} \rightarrow \lambda^{-1} - \beta\Delta \end{eqnarray} with $\overline{\partial}_{\overline{i}}\Delta = \Delta_{\overline{i}}$ and $\overline{\partial}_{\overline{i}}\Delta^{i} = \Delta_{\overline{i}}^{i}$ such that $G_{i\overline{i}}\Delta^{i} = \Delta_{\overline{i}}$, and explicitly \[ \Delta = g^{0\overline{0}}\int_{X}L_{CS}\wedge \overline{\Omega}_{0} \; \; \; \; g^{0\overline{0}} = \left(\int_{X}\Omega_{0} \wedge \overline{\Omega_{0}}\right)^{-1} \] \[ \Delta^{i} =g^{i\overline{j}} \left(\int_{X}L_{CS}\wedge d_{\overline{j}}\overline{\Omega}\right)_{x = 0} \; \; \; \; g^{i\overline{j}} = \left(\int_{X}d_{i}\Omega \wedge d_{\overline{j}}\overline{\Omega}\right)^{-1}_{x = 0} \] where all the fields are on shell and $x = 0$.
Notice also that $\Delta$ and $\Delta^{i}$ have been computed starting from the antitopological theory. Finally, the closed field does not appear because on shell it goes as $O(x^{2})$. The shift (\ref{q}) allows us to rewrite (\ref{z1}) in the same form as the master equation for purely closed strings \begin{equation}\label{z2} \left(-\overline{\partial}_{\overline{i}} + \frac{1}{2}C^{jk}_{\overline{i}}\frac{\partial^{2}}{\partial x^{j}\partial x^{k}} + G_{j\overline{i}}x^{j}\frac{\partial}{\partial\lambda^{-1}} \right)e^{W(x^{i} + \beta\Delta^{i},\lambda^{-1} - \beta\Delta)} = 0 \end{equation} as follows from an easy application of the chain rule. Before going on, let us mention that a refined shift was proposed in \cite{Wal2} in order to obtain a detailed matching of the open and closed string amplitudes. The crucial point is that the change of variables proposed in \cite{Wal2} covariantly takes into account the constraint on the amplitudes \[ D_{i_{n}}{\cal F}^{(g,h)}_{i_{1}\dots i_{n-1}} = {\cal F}^{(g,h)}_{i_{1}\dots i_{n}}. \] However, since we are interested in checking that the integration over the open string modes produces a wave function satisfying the shifted closed HAEs, we can restrict ourselves to (\ref{q}). A more refined analysis of the boundary conditions would require the calculation of the normalization factor $\caln$ in (\ref{yy}), corresponding to the rescaling in eq.(3.13) of \cite{Wal2}. It is now possible to postulate that an analogous shift for $x$ and $\lambda^{-1}$ in the path integral with the Kodaira-Spencer action (corresponding to the closed partition function) would allow one to obtain the full path integral with the complete action. In order to reproduce the power expansion of (\ref{y}) from the target space field theory we have to set $x\to \lambda x$, so that any bulk operator insertion carries a weight $\lambda x$ as in (\ref{y}).
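For completeness, let us spell out the chain-rule computation behind (\ref{z2}); we abbreviate $\tilde{x}^{i} = x^{i} + \beta\Delta^{i}$ and $\tilde{\lambda}^{-1} = \lambda^{-1} - \beta\Delta$:

```latex
% The antiholomorphic derivative also acts on \Delta^j and \Delta through
% \bar\partial_{\bar i}\Delta^j = \Delta^j_{\bar i} and \bar\partial_{\bar i}\Delta = \Delta_{\bar i}:
\begin{align*}
-\overline{\partial}_{\overline{i}}\, e^{W(\tilde{x},\tilde{\lambda}^{-1})}
&= \left[-\overline{\partial}_{\overline{i}}
 - \beta\Delta^{j}_{\overline{i}}\frac{\partial}{\partial x^{j}}
 + \beta\Delta_{\overline{i}}\frac{\partial}{\partial\lambda^{-1}}\right]
 e^{W}\Big|_{(\tilde{x},\tilde{\lambda}^{-1})}, \\
G_{j\overline{i}}\, x^{j}\frac{\partial}{\partial\lambda^{-1}}\, e^{W(\tilde{x},\tilde{\lambda}^{-1})}
&= \left[G_{j\overline{i}}\,\tilde{x}^{j}\frac{\partial}{\partial\lambda^{-1}}
 - \beta\Delta_{\overline{i}}\frac{\partial}{\partial\lambda^{-1}}\right]
 e^{W}\Big|_{(\tilde{x},\tilde{\lambda}^{-1})},
\end{align*}
```

where $G_{j\overline{i}}\Delta^{j} = \Delta_{\overline{i}}$ was used in the second line. Adding the $C^{jk}_{\overline{i}}$ term, the two $\beta\Delta_{\overline{i}}\partial_{\lambda^{-1}}$ contributions cancel, and the closed-string operator acting on $e^{W(\tilde{x},\tilde{\lambda}^{-1})}$ reassembles into the open HAE (\ref{z1}) evaluated at the shifted arguments, which vanishes.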
To maintain our setting we translate (\ref{q}) into a shift for the product $\lambda x$ \begin{eqnarray}\label{z3} \lambda x^{i} &\rightarrow & \lambda x^{i} + \lambda\beta\Delta^{i} - \lambda^{2}\beta\Delta x^{i} + o(\lambda^{3},\beta^{2})\nonumber \\ \lambda^{-1} &\rightarrow & \lambda^{-1} - \beta\Delta \end{eqnarray} of which we will keep only the lowest order term for the first line, discarding the $\lambda^{2}$ piece induced by the transformation of $\lambda$. From now on $\lambda x$ will be denoted simply as $x$. We want to check that \begin{equation}\label{v} \int {\cal D}A e^{-S_{KS}(x^{i} + \lambda\beta\Delta^{i} + \dots,\lambda^{-1} - \beta\Delta;t,\overline{t};A)} \simeq \int {\cal D}A{\cal D}B{\cal D}b\dots e^{-S_{tot}(x,B_{0},\lambda^{-1};t,\overline{t};A,B,b,\dots)} \end{equation} Let us consider (\ref{v}) at tree level. Simply applying (\ref{z3}) to the Kodaira-Spencer action gives, at order $\beta$ and $\lambda^{-1}$, after redefining $S_{KS}$ so that the factor $\lambda^{-2}$ is explicit, \[ \frac{1}{\lambda^{2}}S_{KS}(x^{i} + \lambda\beta\Delta^{i} + \dots,\lambda^{-1} - \beta\Delta;t,\overline{t};A) = \frac{1}{\lambda^{2}}S_{KS}(x^{i},\lambda^{-1};t,\overline{t};A) - \] \[ - \frac{\beta}{\lambda}\int_{X}[(A + x)(A + x)]'(\mu_{i})'\Delta^{i} - \frac{2\beta\Delta}{\lambda}S_{KS}(x^{i},\lambda^{-1};t,\overline{t};A) + O(\lambda^0,\beta^{2}) \] At tree level the $O(\lambda^0,\beta^{2})$ terms are not taken into account; in addition, the $A$ field should be taken on shell with respect to the Kodaira-Spencer equation in the shifted background, that is \begin{equation}\label{u} A \rightarrow A(x^{i} + \lambda\beta\Delta^{i} + \dots) = A(x) + \lambda\beta\Delta^{i}\partial_{i}A(x) + O(\lambda^{2},\beta^{2}) \end{equation} Then, at order $\beta$, $\frac{1}{\lambda}$, the left-hand side of (\ref{v}) is the exponential of \[ \frac{1}{\lambda^{2}}S_{KS}(x^{i},\lambda^{-1};t,\overline{t};A(x)) - \frac{\beta}{\lambda}\int_{X}[(A(x) + x)(A(x) + x)]'(\mu_{i})'\Delta^{i} - \] \[ - \frac{2\beta\Delta}{\lambda}S_{KS}(x^{i},\lambda^{-1};t,\overline{t};A(x)) + \] \begin{equation}\label{exp} + \frac{\beta}{\lambda}\int_{X}\Delta^{i}(\partial_{i}A(x))'\frac{1}{\partial}\overline{\partial}A(x) - [(A(x) + x)(A(x) + x)]'(\partial_{i}A(x))'\Delta^{i} \end{equation} where the last line is actually zero because of the equations obeyed by $A(x)$, and the second line reduces to \[ -\frac{\beta\Delta}{3\lambda}[(A(x) + x)(A(x) + x)]'(A(x) + x)' = \] \[ = -\frac{\beta\Delta}{\lambda}[(A(x) + x)(A(x) + x)(A(x) + x)]'\Omega_{0} \] Recalling expression (\ref{x}), we can substitute the value of $\Delta^{i}$ in (\ref{exp}) and get, for the second term in the first line of (\ref{exp}), \begin{eqnarray}\label{s} \frac{\beta}{\lambda}\int_{X}\Omega^{(1,2)}_{A = A(x)}\wedge (d_{i}\Omega)^{(2,1)}_{x = 0}\left(\int_{X}(d_{i}\Omega)^{(2,1)}_{x = 0} \wedge (d_{\overline{j}}\overline{\Omega})^{(1,2)}_{\overline{x} = 0}\right)^{-1}\cdot \nonumber \\ \cdot \int_{X}L_{CS}^{(2,1)}\mid_{B = B(u,x)}\wedge (d_{\overline{j}}\overline{\Omega})^{(1,2)}_{\overline{x} = 0} = \frac{\beta}{\lambda}\int_{X}{\Omega}^{(1,2)}_{A = A(x)}\wedge L_{CS}^{(2,1)}\mid_{B = B(u,x)} \end{eqnarray} The last equality has been obtained using the Riemann bilinear relations: \[ \int_{X}w\wedge\hat{w} = \sum_{a = 0}^{h_{2,1}}\int_{\delta_{a}}w\int_{\delta_{a + h_{2,1}}}\hat{w} - \int_{\delta_{a + h_{2,1}}}w\int_{\delta_{a}}\hat{w} \] where $\delta_{a}$ is a basis of 3-cycles on $X$. First we express in this way the integrals containing $\Omega \wedge d_{i}\Omega$ and $ L_{CS}\wedge d_{\overline{j}}\overline{\Omega}$.
Then we can define $X^{i}$ and $\overline{X}^{j}$ as three-forms such that \[ \left(\int_{X}d_{i}\Omega\wedge d_{\overline{j}}\overline{\Omega}\right)^{-1} \equiv \int_{X}X^{i}\wedge \overline{X}^{j} = \sum_{a = 0}^{h_{2,1}}\int_{\delta_{a}}X^{i}\int_{\delta_{a + h_{2,1}}}\overline{X}^{j} - \int_{\delta_{a + h_{2,1}}}X^{i}\int_{\delta_{a}}\overline{X}^{j} \] is consistent with the defining relation \[ \sum_{\overline{j}}\left(\int_{X}d_{i}\Omega\wedge d_{\overline{j}}\overline{\Omega}\right)^{-1} \int_{X}d_{k}\Omega \wedge d_{\overline{j}}\overline{\Omega} = \delta_{i,k} \] that is, \[ \sum_{i}\int_{\delta_{a}}d_{i}\Omega\int_{\delta_{b}}X^{i} \equiv \delta_{a,b} \;\; \;\;\; \sum_{a = 0}^{2h_{2,1} + 2}\int_{\delta_{a}}d_{i}\Omega\int_{\delta_{a}}X^{j}\equiv \delta_{i,j} \] and similarly for the barred quantities. Substituting these expressions in (\ref{s}) we obtain the result. Similarly, for the term in $\Delta$ in the second line of (\ref{exp}) we get \begin{equation} \frac{\beta}{\lambda}\int_{X}{\Omega}^{(0,3)}_{A = A(x)}\wedge L_{CS}^{(3,0)}\mid_{B = B(u,x)} \end{equation} In order to reconstruct the full integral $\int_{X}{\Omega}_{A = A(x)}\wedge L_{CS}\mid_{B = B(u,x)}$ from the above equation, the $(0,3)$ and $(1,2)$ components of $L_{CS}$ are still missing. Notice, however, that they can be recovered by requiring CPT invariance. In particular, we modify (\ref{s}) as \[ \frac{\beta}{\lambda}\int_{X}\left(\Omega^{(1,2)}_{A = A(x)} + \Omega^{(2,1)}_{A = A(x)}\right) \wedge (d_{i}\Omega)^{(2,1)}_{x = 0}\cdot g^{i\overline{j}}\cdot \] \[ \cdot\int_{X}\left(L_{CS}^{(2,1)}\mid_{B = B(u,x)} + L_{CS}^{(1,2)}\mid_{B = B(u,x)}\right) \wedge (d_{\overline{j}}\overline{\Omega})^{(1,2)}_{\overline{x} = 0} \] where the extra term actually vanishes for form-degree reasons.
This leads to an additional term \[ \frac{\beta}{\lambda}\int_{X}{\Omega}^{(2,1)}_{A = A(x)}\wedge L_{CS}^{(1,2)}\mid_{B = B(u,x)} \] An analogous modification has to be performed in order to obtain the $(0,3)$ component of $L_{CS}$. The geometrical counterpart of the above is as follows. We know from the discussion of \cite{Wal1} that the coupling of the on-shell Chern-Simons action to $\Omega_0$ can be translated in mathematical terms into the pairing with the related normal function, $\nu$, dual to a suitable three-chain, $\Gamma$, such that \[ \int_{X}\Omega_{0}\wedge L_{CS}\mid_{B=B(u,x)} = \int_{\Gamma}\Omega_{0} = \langle \Omega_0, \nu \rangle \] and similarly for a $(2,1)$ form. Then there exists a lift of $\nu$ such that the couplings with $(0,3)$ and $(1,2)$ forms are defined to be obtained by CPT invariance, that is, by complex conjugation of the corresponding $(3,0)$ and $(2,1)$ couplings. Summarizing, we have shown that \begin{eqnarray}\label{r} \frac{1}{\lambda^{2}}S_{KS}(x^{i} + \beta\lambda\Delta^{i},\lambda^{-1} - \beta\Delta;t,\overline{t})\mid_{on \; shell} = \nonumber \\ =\left(\frac{1}{\lambda^{2}}S_{KS}(x^{i},\lambda^{-1};t,\overline{t}) + \frac{\beta}{\lambda}\int_{X}\Omega \wedge L_{CS} - \Omega db \right)\mid_{on \; shell} \end{eqnarray} in the gauge $F_{B_{0}}^{\tilde{2},\tilde{0}}=0$. Notice that the completion of the solution via CPT invariance, obtained by adding the classical solutions of the anti-topological theory, is consistent with the fact that, in our gauge, the gauge fixing $F^{(2,0)}=0$ and the equation of motion $F^{(0,2)}=0$ of the topological theory are the same, up to an exchange of roles, as in the {\it on shell} anti-topological one, which is then manifestly CPT conjugate. \section{Conclusions} In this paper we provided a target space string field theory formulation for the open and closed B-model, by giving a BV quantization of the holomorphic Chern-Simons theory with an off-shell gravity background.
This allowed us to give a target space interpretation of the coefficients in the HAE with open moduli in general. In this paper we applied our formalism to reproduce the results of \cite{Oog} and to interpret them as an open/closed string duality. It would be interesting to study other explicit examples to refine the details of the scheme we have elaborated so far: for the conifold, the on-shell results of \cite{K} could be useful. Moreover, the target space formulas we obtained for the structure coefficients of the HAE should complete the data needed to rephrase the latter as conditions of background independence of the open B-model wave function, extending \cite{witten-ind}\cite{Wal2}. The picture we provided in this paper seems to allow an extension to generalized complex geometries. This should follow from the definition of an extended Chern-Simons functional where the 3-form $\Omega$ gets promoted to the relevant pure spinor, as in \cite{luca}. Once this is done and the $b$ field is promoted to a multiform, this would extend to open strings the proposal of \cite{pestun} for an analogue of the Kodaira-Spencer theory on generalized complex geometries. \vspace{1 cm} {\bf Acknowledgements} We thank Camillo Imbimbo, Sara Pasquetti, Emanuel Scheidegger, Johannes Walcher and Jie Yang for useful discussions.
\section*{Introduction} All fields considered in this paper are of characteristic zero. Let $R$ be a ring. A map $D\colon R \to R$ satisfying $D(a + b) = D(a) + D(b)$ and $D(ab) = aD(b) + D(a)b$ for all $a, b \in R$ is called a \textit{derivation}. We will denote $D(x)$ by $x^{\prime}$ and $D^n(x)$ by $x^{(n)}$. A \textit{differential ring} $R$ is a ring with a specified derivation. A differential ring which is a field will be called a \textit{differential field}. Let $F \subset E$ be a differential field extension and $a \in E$. Let us denote by $F\langle a \rangle$ the differential subfield of $E$ generated by $F$ and $a$. If $F\langle a \rangle = E$, then the element $a$ is said to be \textit{primitive}. An element $a \in R$ of a differential ring $R$ is said to be \textit{constant} if $a^{\prime} = 0$. Kolchin proved (\cite{kolchin}) a differential analogue of the primitive element theorem: \begin{theorem_old} Let $E = F\langle a_1, \ldots, a_n \rangle$ and $\trdeg_F E < \infty$. Assume also that $F$ contains a nonconstant element. Then, there exists $b \in E$ such that $E = F \langle b \rangle$. \end{theorem_old} \begin{corollary_old} Let $E = F\langle a_1, \ldots, a_n \rangle$ and $\trdeg_F E < \infty$. Assume also that $E$ contains a nonconstant element. Then, there exist $b, c \in E$ such that $E = F \langle b, c \rangle$. \end{corollary_old} \begin{remark} In \cite{kolchin} Kolchin considered a more general case, i.e. fields equipped with a set of commuting derivations. We restrict ourselves to the ordinary case. \end{remark} In \cite{bab} Babakhanian constructed primitive elements for several specific extensions $F \subset E$ with $F$ consisting of constant elements. The goal of the present paper is to prove the primitive element theorem for the case $f^{\prime} = 0$ for all $f \in F$. \section*{Main results} \begin{theorem}\label{th:density} Let $E = k \langle a, b \rangle$, $\trdeg_k E < \infty$, and $b^{\prime} \neq 0$.
Then, there exists $p(x) \in k[x]$ such that $\trdeg_k k\langle a + p(b) \rangle = \trdeg_k k\langle a, b \rangle$. \end{theorem} \begin{theorem}\label{th:primitive} Let $E = k\langle a_1, \ldots, a_m \rangle$, $\trdeg_k E < \infty$, and assume that $E$ contains a nonconstant element. Then, there exists $a \in E$ such that $E = k\langle a \rangle$. \end{theorem} \begin{remark} Unlike in Kolchin's proof, it is not sufficient to consider elements of the form $a + \lambda b$ ($\lambda \in k$). For example, let $\mathbb{Q}(x, y)$ be a differential field with the derivation defined by $x^{\prime} = 1$ and $y^{\prime} = 0$. There is no primitive element of the form $y + \lambda x$ ($\lambda \in \mathbb{Q}$), but $\mathbb{Q}(x, y) = \mathbb{Q} \langle x^2 + y \rangle$. Indeed, $(x^{2} + y)^{\prime} = 2x$ and $(2x)^{\prime} = 2$, so $x$, and hence $y$, belongs to $\mathbb{Q} \langle x^{2} + y \rangle$. \end{remark} \begin{proof}[Proof of Theorem \ref{th:density}] We will need the following well-known lemmas: \begin{lemma} If $\trdeg_k k\langle a \rangle = n$, then $k\langle a \rangle = k\left( a, a^{\prime}, \ldots, a^{(n)} \right)$. \end{lemma} \begin{proof} Let $m$ be the minimal integer such that $a, \ldots, a^{(m)}$ are algebraically dependent over $k$. Let $R(a, \ldots, a^{(m)}) = 0$ be the corresponding algebraic relation. Hence $$0 = \left( R(a, \ldots, a^{(m)}) \right)^{\prime} = \sum\limits_{i = 0}^{m} a^{(i + 1)} \frac{\partial}{\partial a^{(i)}} R$$ so that $a^{(m + 1)} \in k(a, \ldots, a^{(m)})$. Similarly we obtain that $a^{(N)} \in k(a, \ldots, a^{(m)})$ for all $N$. Hence, $n = m$ and $k\langle a\rangle = k\left( a, \ldots, a^{(n)} \right)$. \end{proof} \begin{lemma}[\cite{ritt}, p.35]\label{lem:nonzero} Let $q(x, x^{\prime}, \ldots, x^{(n)})$ be a nonzero differential polynomial over a differential field $E$. Let $f \in E$ be a nonconstant element. Then, there exists $p(t) \in \mathbb{Q}[t]$ such that $$\left.q(x, x^{\prime}, \ldots, x^{(n)})\right\rvert_{x = p(f)} \neq 0$$ \end{lemma} Without loss of generality, we can assume that $E = k\langle a, b\rangle$.
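Before using Lemma \ref{lem:nonzero}, let us illustrate on a toy example (added here only for concreteness) why one may need polynomials $p$ of degree greater than one:

```latex
% Take E = \mathbb{Q}(t) with t' = 1, the nonconstant element f = t, and
% q(x, x') = x x' - t. Then p(s) = s fails while p(s) = s^2 works:
\[
\left. q \right\rvert_{x = t} = t \cdot 1 - t = 0,
\qquad
\left. q \right\rvert_{x = t^{2}} = t^{2} \cdot 2t - t = 2t^{3} - t \neq 0 .
\]
```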
Let us introduce algebraically independent variables $\Lambda_0, \Lambda_1, \ldots$. We extend the derivation from $E$ to $E[\Lambda_0, \Lambda_1, \ldots]$ by $\left( \Lambda_i \right)^{\prime} = b^{\prime} \Lambda_{i + 1}$. This construction can be explained by the following observation: let us fix a polynomial $p(x) \in \mathbb{Q}[x]$; the formulas $\varphi_p(\Lambda_i) = p^{(i)}(b)$ define a homomorphism of differential $k$-algebras $\varphi_p\colon E[\Lambda_0, \Lambda_1, \ldots] \to E$ (indeed, $\left( p^{(i)}(b) \right)^{\prime} = b^{\prime}\, p^{(i + 1)}(b)$). Let $c = a + \Lambda_0$ and $K = k(\Lambda_0, \Lambda_1, \ldots) \subset E(\Lambda_0, \Lambda_1, \ldots)$. Since $K\langle c \rangle \subset K\langle a , b \rangle$, $\trdeg_K K\langle c \rangle = n < \infty$. Let nonzero $R(x_0, \ldots, x_n) \in K[x_0, \ldots, x_n]$ satisfy $R(c, c^{\prime}, \ldots, c^{(n)}) = 0$. Notice that it depends on $x_n$, since $c, c^{\prime}, \ldots, c^{(n - 1)}$ are algebraically independent over $K$. Multiplying by a suitable element of $k[\Lambda_0, \Lambda_1, \ldots]$, we obtain a polynomial in both $c, c^{\prime}, \ldots, c^{(n)}$ and $\Lambda_0, \ldots, \Lambda_N$ over $k$. Let us denote it by $Q(c, \ldots, c^{(n)}, \Lambda_0, \ldots, \Lambda_N)$. Moreover, we assume that $Q$ satisfies the following conditions: \begin{enumerate} \item $\deg_{c^{(n)}} Q$ is minimal possible; \item under the above condition, $N$ is minimal possible; \item under the above conditions, $\deg_{\Lambda_N} Q$ is minimal possible. \end{enumerate} \begin{lemma} $N = n$. \end{lemma} \begin{proof} Assume that $N > n$. Let us rewrite $Q$ as a polynomial in $\Lambda_N$: $Q = q_m \Lambda_N^m + \ldots + q_0$, where $q_i$ are polynomials over $k$ in $c, \ldots, c^{(n)}, \Lambda_0, \ldots, \Lambda_{N - 1}$. Since $N > n$, $c, \ldots, c^{(n)} \in E(\Lambda_0, \ldots, \Lambda_{N - 1})$ and $\Lambda_N$ is transcendental over $k(c, \ldots, c^{(n)}, \Lambda_0, \ldots, \Lambda_{N - 1})$. Thus, $Q = 0$ implies $q_i = 0$ for all $i$. This contradicts the minimality of $N$. Assume that $N < n$.
Clearly, $c^{(n)} = (b^{\prime})^n \Lambda_n + c_0$, where $c_0 \in E(\Lambda_0, \ldots, \Lambda_{n - 1})$. Thus, $c^{(n)}$ is transcendental over $k(c, \ldots, c^{(n - 1)}, \Lambda_0, \ldots, \Lambda_N) \subset E(\Lambda_0, \ldots, \Lambda_{n - 1})$. But $Q$ depends on $c^{(n)}$. This contradiction proves the lemma. \end{proof} \begin{lemma}\label{lem:part_dev_nonzero} $\frac{\partial}{\partial\Lambda_n} Q \neq 0$. \end{lemma} \begin{proof} It follows immediately from the minimality conditions for $Q$ and the inequalities $\deg_{c^{(n)}} Q \geqslant \deg_{c^{(n)}} \frac{\partial}{\partial \Lambda_n} Q$ and $\deg_{\Lambda_n} Q > \deg_{\Lambda_n} \frac{\partial}{\partial\Lambda_n} Q > -\infty$. \end{proof} We return to the proof of Theorem~\ref{th:density}. Let $p(x) \in \mathbb{Q}[x]$. Applying $\varphi_p$ to $Q(c, \ldots, c^{(n)}, \Lambda_0, \ldots, \Lambda_n)$, we obtain an algebraic dependence for $\varphi_p(\Lambda_0), \ldots, \varphi_p(\Lambda_n) \in \mathbb{Q}[b]$ over $k\left( \varphi_p(c), \ldots, \varphi_p (c^{(n)})\right)$. This yields an algebraic dependence for $b$ over $k\left( \varphi_p(c), \ldots, \varphi_p(c^{(n)}) \right)$. The goal is to find an appropriate $p(x)$ making this dependence nontrivial. Let us compute its derivative with respect to $b$: \begin{multline} \frac{\partial}{\partial b} Q\left( \varphi_p(c), \ldots, \varphi_p(c^{(n)}), p(b), \ldots, p^{(n)}(b) \right) = \\ = \sum\limits_{i = 0}^{n} \varphi_p \left( \frac{\partial}{\partial \Lambda_i} Q\right) p^{(i + 1)}(b) = \varphi_p \left( \sum\limits_{i = 0}^{n} \Lambda_{i + 1} \frac{\partial}{\partial \Lambda_i} Q \right) \end{multline} By Lemma \ref{lem:part_dev_nonzero} the polynomial $T = \sum\limits_{i = 0}^{n} \Lambda_{i + 1} \frac{\partial}{\partial\Lambda_i} Q$ is nonzero. Since $c = a + \Lambda_0$, we can rewrite $T$ as a nonzero polynomial in $\Lambda_0, \ldots, \Lambda_{n + 1}$ over $k\langle a, b\rangle$. Let us denote the derivation on $E$ by $D$.
Then, $\tilde{D} = \frac{1}{b^{\prime}} D$ is also a derivation on $E$. Obviously, $\tilde{D} \Lambda_i = \Lambda_{i + 1}$. Hence, we can apply Lemma \ref{lem:nonzero} to the differential field $E$ with the derivation $\tilde{D}$, the nonconstant element $b \in E$, and the polynomial $T$ in the variables $\Lambda_0, \ldots, \Lambda_{n + 1}$. Therefore, we obtain $p(x)\in \mathbb{Q}[x]$ such that $\varphi_p (T) \neq 0$. Since $\varphi_p(c) = a + p(b)$ and $b$ are both algebraic over $k\langle \varphi_p(c) \rangle$, $a$ is also algebraic over $k\langle \varphi_p(c) \rangle$. Hence, $\trdeg_k k\langle \varphi_p(c) \rangle = \trdeg_k k\langle a, b \rangle$. \end{proof} The following corollary can be derived using exactly the same argument as above. \begin{corollary}\label{cor:density} Let $E = k\langle a, b \rangle$, $\trdeg_k E < \infty$, $b^{\prime} \neq 0$ and $c\in k\langle a, b\rangle$. Then, there exists $p(x) \in k[x]$ such that $\trdeg_k k\langle a + c \cdot p(b) \rangle = \trdeg_k k\langle a, b \rangle$. \end{corollary} \begin{proof}[Proof of Theorem \ref{th:primitive}.] By Theorem \ref{th:density} there exists $a\in E$ such that $\trdeg_k E = \trdeg_k k\langle a \rangle = n$. Since $\dim_{k\langle a \rangle} E < \infty$, there exists $b\in E$ such that $E = k\langle a, b\rangle$. We are going to find $\lambda_1, \ldots, \lambda_{n + 2} \in k$ such that $E = k\langle b + \lambda_1 a + \lambda_2 a^2 + \ldots + \lambda_{n + 2} a^{n + 2}\rangle$. We will use the method used by Kolchin in \cite[p.729]{kolchin}. Let us recall the necessary definitions. Let $K_1$ be a differential extension field of $L$.
By an \textit{isomorphism of $K_1$ with respect to $L$} we will mean an isomorphic mapping of $K_1$ onto a differential field $K_2$ such that \begin{enumerate} \item $K_2$ is an extension of $L$; \item the isomorphic mapping leaves each element of $L$ invariant; \end{enumerate} \begin{lemma}[Kolchin, (\cite{kolchin}, p.726)] \label{lem:invariant} Let $E$ be an extension of $F$ and $\gamma \in E$. A necessary and sufficient condition for $E = F\langle \gamma \rangle$ is that no isomorphism of $E$ with respect to $F$ other than the identity leaves $\gamma$ invariant. \end{lemma} Let $R(x, x^{\prime}, \ldots, x^{(n)}) \in k\{ x \}$ (briefly, $R(x)$) have a solution $x = a$ and let $Q(x, x^{\prime}, \ldots, x^{(n - 1)}, y)\in k\{x, y\}$ (briefly, $Q(x, y)$) have a solution $x = a$, $y = b$. We will show that there exist elements $\lambda_1, \ldots, \lambda_{n + 2}$ such that $z = y + \lambda_1 x + \ldots + \lambda_{n + 2} x^{n + 2}$ takes different values for different solutions of $\{ R(x), Q(x, y)\}$. Then, certainly $z$ will satisfy the requirements on $\gamma$ from Lemma \ref{lem:invariant}. To prove this statement, let $t_1, \ldots, t_{n + 2}$ be new indeterminates, and, in $E\{x, y, t_1, \ldots, t_{n + 2}\}$, consider the perfect differential ideal (for definitions see \cite[p.2, p.7]{ritt}) $$I = \{ R(x), Q(x, y), t_1^{\prime}, \ldots, t_{n + 2}^{\prime}, b - y + t_1(a - x) + t_2(a^2 - x^2) + \ldots + t_{n + 2}(a^{n + 2} - x^{n + 2}) \}$$ Let $I = \mathfrak{p}_1 \cap \ldots \cap \mathfrak{p}_s$ be the decomposition of $I$ into essential prime differential ideals (see \cite[p.13]{ritt}), and suppose the subscripts have been assigned so that each of $\mathfrak{p}_1$, $\ldots$, $\mathfrak{p}_r$ contains both $a - x$ and $b - y$, whereas $\mathfrak{p}_{r + 1}$, $\ldots$, $\mathfrak{p}_s$ each fails to contain either $a - x$ or $b - y$. Consider $\mathfrak{p}_j$ with $j > r$. If $b - y \notin \mathfrak{p}_j$, then also $a - x \notin \mathfrak{p}_j$.
Thus, $a - x \notin \mathfrak{p}_j$ for all $j > r$. Let $\overline{x}, \overline{y}, \overline{t}_1, \ldots, \overline{t}_{n + 2}$ be a generic solution of $\mathfrak{p}_j$ (see \cite[p.725]{kolchin}). Differentiating the equation $$ b -\overline{y} + \overline{t}_1(a - \overline{x}) + \overline{t}_2(a^2 - \overline{x}^2) + \ldots + \overline{t}_{n + 2}(a^{n + 2} - \overline{x}^{n + 2}) = 0 $$ $n + 1$ times, we obtain a system of linear equations in $\overline{t}_1, \ldots, \overline{t}_{n + 2}$. Let us investigate it. Let us denote by $\wronsk(f_1, \ldots, f_N)$ the Wronskian of $f_1$, $\ldots$, $f_N$ (see \cite[chap. 2]{magid}). Let $W_{k, l}(x, y)$ be given by $\wronsk(x - y, \ldots, \widehat{x^l - y^l}, \ldots, x^{k + 1} - y^{k + 1})$ where $k \geqslant 2$ and $1 \leqslant l \leqslant k + 1$. \begin{lemma}\label{lem:trdeg_estim1} If $W_{k, l}(a, \overline{x}) = 0$ for all $1 \leqslant l \leqslant k + 1$, then $\trdeg_k k\langle a, \overline{x} \rangle \leqslant n + k - 2$. \end{lemma} \begin{proof} Let $x$ and $y$ be differential indeterminates. First of all, we are going to establish several properties of the differential polynomials $W_{k, l}(x, y)$. \begin{lemma} $W_{k, l}(x, y) = A_l(x, y) + x^{(k - 1)}B_l(x, y) + y^{(k - 1)}C_l(x, y)$ where $A_l, B_l, C_l \in \mathbb{Q}[x, \ldots, x^{(k - 2)}, y, \ldots, y^{(k - 2)}]$. Moreover, if $k \geqslant 3$, then $B_l(x, y) = -y^{\prime}D_l(x, y)$ and $C_l(x, y) = x^{\prime}D_l(x, y)$ where $D_l \in \mathbb{Q}[x, \ldots, x^{(k - 2)}, y, \ldots, y^{(k - 2)}]$. \end{lemma} \begin{proof} For the sake of simplicity let us consider $l = k + 1$. The proof for the other cases is analogous. The last row of the corresponding matrix is a sum of three rows: $x^{(k - 1)}(1, 2x, \ldots, kx^{k - 1})$, $-y^{(k - 1)}(1, 2y, \ldots, ky^{k - 1})$ and $(a_1, \ldots, a_k)$ where $a_i \in \mathbb{Q}[x, \ldots, x^{(k - 2)}, y, \ldots, y^{(k - 2)}]$ for all $i$.
Thus, the determinant $W_{k, k + 1}(x, y)$ can be expressed as a sum: \resizebox{\hsize}{!}{$ \begin{vmatrix} x - y & \ldots & x^k - y^k \\ \vdots & \ddots & \vdots \\ (x - y)^{(k - 2)} & \ldots & \left(x^k - y^k\right)^{(k - 2)} \\ a_1 & \ldots & a_k \end{vmatrix} + x^{(k - 1)} \begin{vmatrix} x - y & \ldots & x^k - y^k \\ \vdots & \ddots & \vdots \\ (x - y)^{(k - 2)} & \ldots & \left(x^k - y^k\right)^{(k - 2)} \\ 1 & \ldots & kx^{k - 1} \end{vmatrix} - y^{(k - 1)} \begin{vmatrix} x - y & \ldots & x^k - y^k \\ \vdots & \ddots & \vdots \\ (x - y)^{(k - 2)} & \ldots & \left(x^k - y^k\right)^{(k - 2)} \\ 1 & \ldots & ky^{k - 1} \end{vmatrix} $} The above equality proves the first statement of the lemma. Now let $k \geqslant 3$. The second row of the corresponding matrix is a sum of $x^{\prime}(1, 2x, \ldots, kx^{k - 1})$ and $-y^{\prime}(1, 2y, \ldots, ky^{k - 1})$. Hence, subtracting the last row multiplied by $x^{\prime}$ from the second, we obtain: $$ B_l = \begin{vmatrix} x - y & \ldots & x^k - y^k \\ x^{\prime} - y^{\prime} & \ldots & x^{\prime}kx^{k - 1} - y^{\prime}ky^{k - 1} \\ \vdots & \ddots & \vdots \\ 1 & \ldots & kx^{k - 1} \end{vmatrix} = -y^{\prime} \begin{vmatrix} x - y & \ldots & x^k - y^k \\ 1 & \ldots & ky^{k - 1} \\ \vdots & \ddots & \vdots \\ 1 & \ldots & kx^{k - 1} \end{vmatrix} $$ Let us denote the latter determinant by $D_l$. Then, $B_l = -y^{\prime}D_l$. Likewise, $C_l = x^{\prime}D_l$, so we are done. \end{proof} \begin{lemma} \label{lem:cramer} At least one of $\frac{W_{k, 1}(x, y)}{W_{k, k + 1}(x, y)}$, $\ldots$, $\frac{W_{k, k}(x, y)}{W_{k, k + 1}(x, y)}$ depends on $x^{(k - 1)}$ and $y^{(k - 1)}$. \end{lemma} \begin{proof} Since all these differential polynomials are symmetric in $x$ and $y$, it suffices to prove that at least one of them depends on either $x^{(k - 1)}$ or $y^{(k - 1)}$. Assume the contrary, that $(-1)^{k - l}\frac{W_{k, l}(x, y)}{W_{k, k + 1}(x, y)} \in \mathbb{Q}(x, \ldots, x^{(k - 2)}, y, \ldots, y^{(k - 2)})$ for all $1 \leqslant l \leqslant k$.
By Cramer's rule, these fractions are solutions of the following system of linear equations in $\alpha_1, \ldots, \alpha_k$: \begin{equation} \label{eq:system} \tag{*} \begin{pmatrix} x - y & \ldots & x^k - y^k \\ \vdots & \ddots & \vdots \\ (x - y)^{(k - 1)} & \ldots & \left( x^k - y^k \right)^{(k - 1)} \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_k \end{pmatrix} = \begin{pmatrix} x^{k + 1} - y^{k + 1} \\ \vdots \\ \left( x^{k + 1} - y^{k + 1} \right)^{(k - 1)} \end{pmatrix} \end{equation} Since $\alpha_1, \ldots, \alpha_k \in \mathbb{Q}(x, \ldots, x^{(k - 2)}, y, \ldots, y^{(k - 2)})$ and both $x^{(k - 1)}$ and $y^{(k - 1)}$ are transcendental over this field, the last equation also implies the following two equalities: \begin{equation} \label{eq:cons} \tag{**} \begin{cases} \alpha_1 + 2x\alpha_2 + \ldots + kx^{k - 1}\alpha_k = (k + 1)x^{k} \\ \alpha_1 + 2y\alpha_2 + \ldots + ky^{k - 1}\alpha_k = (k + 1)y^{k} \end{cases} \end{equation} We are going to assign values from a particular differential field to $x$ and $y$. Precisely, let $\mathbb{C}(t)$ be the field of rational functions equipped with the standard derivation ($t^{\prime} = 1$) and let $\xi$ be a primitive $(k + 1)$-th root of unity. Let $x = t$ and $y = \xi t$. The matrix of the system (\ref{eq:system}) is nondegenerate, because its determinant equals $\wronsk\left( (1 - \xi)t, \ldots, (1 - \xi^k)t^k \right)$, which is nonzero since $(1 - \xi)t$, $\ldots$, $(1 - \xi^k)t^k$ are linearly independent over constants (\cite[prop. 2.8]{magid}). Clearly, since $\xi^{k + 1} = 1$, the right-hand side of (\ref{eq:system}) vanishes, so the unique solution of the system in this case is $\alpha_1 = \ldots = \alpha_k = 0$. But the equalities (\ref{eq:cons}) do not hold. This contradiction proves the lemma. \end{proof} \begin{corollary} \label{cor:alg_relation} If $k \geqslant 3$, then there exists $1 \leqslant l \leqslant k$ such that $W_{k, l}(x, y)D_{k + 1}(x, y) - W_{k, k + 1}(x, y)D_l(x, y) = A_l(x, y)D_{k + 1}(x, y) - A_{k + 1}(x, y)D_l(x, y) \neq 0$.
\end{corollary} \begin{proof} By Lemma \ref{lem:cramer} there exists $1 \leqslant l \leqslant k$ such that $\frac{W_{k, l}(x, y)}{W_{k, k + 1}(x, y)}$ depends on $x^{(k - 1)}$ and $y^{(k - 1)}$. This means that the vectors $(A_l, B_l, C_l)$ and $(A_{k + 1}, B_{k + 1}, C_{k + 1})$ are not proportional. Thus, $D_{k + 1}(A_l, B_l, C_l) - D_l(A_{k + 1}, B_{k + 1}, C_{k + 1}) = (A_lD_{k + 1} - A_{k + 1}D_l, 0, 0) \neq 0$. \end{proof} We return to the proof of Lemma~\ref{lem:trdeg_estim1}. Let us consider two cases: \begin{enumerate} \item $k \geqslant 3$. Let $l$ be the index from Corollary \ref{cor:alg_relation}. Since $W_{k, l}(a, \overline{x}) = W_{k, k + 1}(a, \overline{x}) = 0$, the nonzero polynomial $A_lD_{k + 1} - A_{k + 1}D_l$ vanishes at $(a, \overline{x})$ and thus provides an algebraic relation between $a$, $\ldots$, $a^{(k - 2)}$, $\overline{x}$, $\ldots$, $\overline{x}^{(k - 2)}$. Hence, $\overline{x}^{(j)}$ is algebraic over $k(a, \ldots, a^{(n - 1)}, \overline{x}, \ldots, \overline{x}^{(k - 3)})$ for all $j$. Thus, $k\langle a, \overline{x} \rangle$ is algebraic over $k(a, \ldots, a^{(n - 1)}, \overline{x}, \ldots, \overline{x}^{(k - 3)})$. Since $\trdeg_k k(a, \ldots, a^{(n - 1)}, \overline{x}, \ldots, \overline{x}^{(k - 3)}) \leqslant n + k - 2$, we are done. \item $k = 2$. In this case both $W_{2, 2}(x, y)$ and $W_{2, 3}(x, y)$ can be computed directly: $$W_{2, 3}(x, y) = (x - y)^2(x^{\prime} + y^{\prime})$$ $$W_{2, 2}(x, y) = (x - y)( x^{\prime}(2x^2 - xy - y^2) + y^{\prime}(x^2 + xy - 2y^2) )$$ If both $W_{2, 3}(a, \overline{x})$ and $W_{2, 2}(a, \overline{x})$ vanish, either $a^{\prime} = \overline{x}^{\prime} = 0$ or the determinant of the system of linear equations $W_{2, 3}(a, \overline{x}) = W_{2, 2}(a, \overline{x}) = 0$ in variables $a^{\prime}$ and $\overline{x}^{\prime}$ vanishes, i.e. $-(a - \overline{x})^5 = 0$. Both cases are impossible since $a \neq \overline{x}$.
\end{enumerate} \end{proof} \begin{lemma}\label{lem:trdeg_estim2} $\trdeg_k k\langle a, b, \overline{x}, \overline{y}, \overline{t}_1, \ldots, \overline{t}_{n + 2}\rangle \leqslant 2n + 1$. \end{lemma} \begin{proof} Differentiating the equation $\overline{y} - b = \overline{t}_1(a - \overline{x}) + \ldots + \overline{t}_{n + 2}(a^{n + 2} - \overline{x}^{n + 2})$, we obtain the following matrix equality: $$ \begin{pmatrix} \overline{y} - b \\ \overline{y}^{\prime} - b^{\prime} \\ \vdots \\ \overline{y}^{(n + 1)} - b^{(n + 1)} \end{pmatrix} = \begin{pmatrix} a - \overline{x} & \ldots & a^{n + 2} - \overline{x}^{n + 2} \\ a^{\prime} - \overline{x}^{\prime} & \ldots & \left(a^{n + 2} - \overline{x}^{n + 2} \right)^{\prime} \\ \vdots & \ddots & \vdots \\ a^{(n + 1)} - \overline{x}^{(n + 1)} & \ldots & \left(a^{n + 2} - \overline{x}^{n + 2} \right)^{(n + 1)} \end{pmatrix} \begin{pmatrix} \overline{t}_1 \\ \overline{t}_2 \\ \vdots \\ \overline{t}_{n + 2} \end{pmatrix} $$ Let $k$ be the minimal number such that for all $1 \leqslant l \leqslant k + 1$ the equality $W_{k, l}(a, \overline{x}) = 0$ holds. Let us consider two cases: \begin{enumerate} \item $k < n + 2$. Thus, at least one of the $(k - 1) \times (k - 1)$ minors of the matrix: $$ \begin{pmatrix} a - \overline{x} & \ldots & a^{n + 2} - \overline{x}^{n + 2} \\ \vdots & \ddots & \vdots \\ a^{(k - 2)} - \overline{x}^{(k - 2)} & \ldots & \left( a^{n + 2} - \overline{x}^{n + 2} \right)^{(k - 2)} \end{pmatrix} $$ is nonzero. Let $W_{k - 1, l}(a, \overline{x}) \neq 0$. Multiplying by the inverse matrix, we obtain the formulas which express $\overline{t}_j$ as a rational function in $a$, $b$, $\overline{x}$, $\overline{y}$ and their derivatives, $\overline{t}_l$, $\overline{t}_{k + 1}, \ldots, \overline{t}_{n + 2}$ for all $1 \leqslant j \leqslant k$ and $j \neq l$.
Hence, $\overline{t}_1, \ldots, \overline{t}_{l - 1}, \overline{t}_{l + 1}, \ldots, \overline{t}_k \in k\langle a, b, \overline{x}, \overline{y} \rangle(\overline{t}_l, \overline{t}_{k + 1},\ldots, \overline{t}_{n + 2})$. By Lemma \ref{lem:trdeg_estim1}, $\trdeg_k k\langle a, b, \overline{x}, \overline{y} \rangle \leqslant n + k - 2$ (note that $b$ and $\overline{y}$ contribute nothing to the transcendence degree, since $y$ enters $Q$ without derivatives). Thus $$ \trdeg_k k\langle a, b, \overline{x}, \overline{y}, \overline{t}_1, \ldots, \overline{t}_{n + 2} \rangle \leqslant \trdeg_k k\langle a, b, \overline{x}, \overline{y} \rangle + n - k + 3 \leqslant (n + k - 2) + (n - k + 3) = 2n + 1 $$ \item $k \geqslant n + 2$. In this case $\trdeg_k k\langle a, b, \overline{x}, \overline{y} \rangle \leqslant 2n$. There exists $1 \leqslant l \leqslant n + 2$ such that $W_{n + 1, l}(a, \overline{x}) \neq 0$. By the same argument as above, $\overline{t}_1, \ldots, \overline{t}_{l - 1}, \overline{t}_{l + 1}, \ldots, \overline{t}_{n + 2} \in k\langle a, b, \overline{x}, \overline{y} \rangle(\overline{t}_l)$. The desired inequality is now obvious. \end{enumerate} \end{proof} Lemma \ref{lem:trdeg_estim2} implies that $\overline{t}_1$, $\ldots$, $\overline{t}_{n + 2}$ are algebraically dependent over $k\langle a, b\rangle$. For the generic solution of each $\mathfrak{p}_j$ with $j > r$, let $P_j(t_1, \ldots, t_{n + 2}) \in E[t_1, \ldots, t_{n + 2}]$ denote such a dependence. Consider the polynomial $P = P_{r + 1} \cdot \ldots \cdot P_s$. Let $\lambda_1, \ldots, \lambda_{n + 2} \in k$ satisfy $P(\lambda_1, \ldots, \lambda_{n + 2}) \neq 0$. Then, $b - y + \lambda_1(a - x) + \ldots + \lambda_{n + 2}(a^{n + 2} - x^{n + 2}) \neq 0$ for any solution of $\{ R(x), Q(x, y) \}$ other than $x = a$, $y = b$. Therefore, the proof of the theorem is complete. \end{proof} The author is grateful to Dmitry Trushin and Yu.P. Razmyslow for useful discussions.
\section{Introduction} It is of vital importance when developing computer software to have testing procedures that ensure it functions as required. Testing aims to provide objective information about the quality of the software and its risks of failure. In general, testing may include checking functionality given valid and invalid inputs, potential concurrency issues, security vulnerabilities and so on. Consequently, testing during the software development and test cycles seeks to improve functionality and fix bugs. Moreover, tests from the previous and current development cycles may be of use in helping to reduce the risk of malfunctions in future development cycles. However, despite the extensive literature on software testing, e.g. \cite{Tarlinder2016, Black2013, Myers2012}, it is too often given inadequate attention, leading to avoidable errors \cite{Merali2010}. Testing frameworks are not only needed for assessing the functionality of software packages but are also vital for benchmarking numerical software. The latter requires an adequate amount of dynamic testing \cite{Fairley1978}. Floating-point issues aside, it may be a significant challenge to find a suitable mathematical regime under which to perform sufficiently rigorous tests to verify the numerical solutions of mathematical models. As such, benchmarking numerical solutions is often applied to very specific models motivated by a particular application, for example: Proposal for numerical benchmarking of fluid-structure interaction between an elastic object and laminar incompressible flow \cite{Turek2006}, Benchmarking of numerical integration methods for ODE models of biological systems \cite{Stadter2020}, Benchmarking and developing numerical finite element models of volcanic deformation \cite{Hickey2014} and Benchmarking five numerical simulation techniques for computing resonance wavelengths and quality factors in photonic crystal membrane line defect cavities \cite{lasson2018}.
This is by no means an exhaustive list since in general a given class of equations or models has a corresponding set of solutions; therefore, with regard to benchmarking numerical solutions, each class will need to be considered on an individual basis. This paper introduces a procedure for numerically testing the solutions from systems of ordinary differential equations (ODEs) which are solved numerically using stochastic methods. This is of particular interest when the system of ODEs cannot be solved analytically. Although the basic procedure proposed in this paper may be easily adapted to systems of ODEs which are solved numerically by a purely deterministic method, such as the Euler method, the primary interest here is the arguably harder problem of testing the validity of numerical solutions produced using stochastic methods. The purpose of this paper is to introduce a novel testing/benchmarking procedure for stochastic compartmental models. Given such a model, the Reed-Frost chain Binomial algorithm \cite{Abbey1952, Fine1977} and/or Gillespie algorithm \cite{Gillespie1976, Gillespie1977} are used to simulate realisations of the solution at discrete time points. The distribution of realisations is compared using statistical techniques with an exact solution derived analytically from the underlying system of ODEs. The entire time evolution of the system of ODEs is included in this novel testing procedure, hence there is no requirement for the model to be in any particular state, e.g. a thermodynamic limit or a late time steady state. Additionally, this testing procedure does not rely upon setting a random seed or using a prespecified system architecture. The new methodology presented here therefore checks that the simulated realisations adhere to the underlying system of ODEs. Importantly, this allows the solutions produced by computer code to be benchmarked, and in a more formal testing setting this procedure could form the basis of a unit test.
To demonstrate this novel testing approach the stochastic compartmental model for a susceptible-infected-recovered (SIR) epidemic process is used as an exemplar: this class of models is well documented; for example, see the textbook \cite{Brauer2019}. The Lotka–Volterra, predator-prey, model \cite{Lotka1925, Volterra1926} is used both to demonstrate the generality of the testing approach and to highlight subtle differences in its application to both models, e.g. differences in the form of the numerical solutions. \iffalse First, given the SIR model system of ODEs we derive a partial solution analytically using standard integration techniques. Secondly, the Reed-Frost chain Binomial algorithm and Gillespie algorithm are both explicitly defined for clarity and then used to generate simulations of the SIR epidemic process. Finally, the resulting distribution of realisations is compared with the partial solution from the system of ODEs. This model is treated as a stochastic compartmental model so that the Gillespie algorithm can be used to simulate realisations that represent this biological process. The realisations estimated for both the SIR and Lotka–Volterra models are shown to be consistent with their respective underlying system of ODEs. As a result the procedure presented here is confirmed to be useful for checking that the computer code is producing plausible estimates of the realisations. It follows that the code producing the numerical solutions for these two stochastic compartmental models can be numerically benchmarked: \fi The extent to which the approach presented in this paper can be applied to arbitrary compartmental models is explored in the Discussion section.
\section{Outline of testing approach} \label{section:outlineMeth} The following benchmarking procedure is proposed given the realisations (solution) computed numerically from a stochastic compartmental model: \begin{enumerate} \item For a given set of parameters, simulate realisations from the epidemic or biological process using, for example, a Reed-Frost chain Binomial or Gillespie algorithm. \item Fully or partially integrate analytically the system of ODEs describing the compartmental model. Rearrange the solution into an \emph{expression} in terms of one, or a combination of, parameters (i.e. parameters from the system of ODEs). \item Compute estimates of the parameter (or combination of parameters) by substituting the aforementioned realisations (from part 1.) into the \emph{expression} (from part 2.). \item Compute the mode of the distribution of parameter estimates (from part 3.) and check it equals, within an acceptable degree of uncertainty, the actual parameter value. \item Where viable repeat this procedure as required for other parameters, or combination of parameters, in the system of ODEs. \end{enumerate} In general, given the system of ODEs for a stochastic compartmental model, any applicable method can be used to fully or partially integrate it analytically. In this paper it is convenient to use the separation of variables technique. \section{SIR model} \subsection{System of ODEs} The system of ODEs for the continuous SIR model \cite{Brauer2019, Kermack1927, Hethcote2000} with $S(t)$ susceptible, $I(t)$ infected and $R(t)$ recovered individuals at time $t$ is \begin{equation} \label{eq:ODE1} \frac{dS(t)}{dt} = \frac{-\beta I(t)}{\mathcal{N}} S(t), \end{equation} \begin{equation} \label{eq:ODE2} \frac{dI(t)}{dt} = \frac{\beta I(t)}{\mathcal{N}} S(t) - \gamma I(t), \end{equation} \begin{equation} \label{eq:ODE3} \frac{dR(t)}{dt} =\gamma I(t).
\end{equation} Constant $\mathcal{N}$ represents the total number of individuals in the population while real constants $\beta>0$ and $\gamma>0$ determine the rate at which individuals move between states $S \rightarrow I$ and $I \rightarrow R$ respectively. Clearly $dS(t)/dt+dI(t)/dt+dR(t)/dt=0$, from which it follows that $\mathcal{N}$ is conserved, i.e. $S(t)+I(t)+R(t)=\mathcal{N}$. Consequently this SIR model represents a closed system; in other words, individuals do not flow in or out of the system via birth, death, migration, or any other means. The number of individuals flowing between states $S \rightarrow I$ and $I \rightarrow R$ can respectively be expressed in terms of the number of events $N_{SI}(t)$ and $N_{IR}(t)$ as follows \begin{equation} \label{eq:ODEeventsSI} \frac{dN_{SI}(t)}{dt} = \frac{\beta I(t)}{\mathcal{N}} S(t), \end{equation} \begin{equation} \label{eq:ODEeventsIR} \frac{dN_{IR}(t)}{dt} = \gamma I(t). \end{equation} \bigskip Given the chain rule $dS/dt=(dS/dR) \, (dR/dt)$, it follows from Equations \ref{eq:ODE1} and \ref{eq:ODE3} that \begin{equation} \frac{dS(t)}{dR(t)} = \frac{-\beta S(t)}{\gamma \mathcal{N}}. \end{equation} Separation of variables may be used to solve this differential equation, hence \begin{equation} \int_{S(0)}^{S(t)} \frac{1}{S} \,dS = \frac{-\beta}{\gamma \mathcal{N}} \int_{R(0)}^{R(t)} \,dR. \end{equation} The corresponding solution to this integral equation is \begin{equation} \label{eq:transc} \ln \frac{S(t)}{S(0)} = \frac{\beta}{\gamma \mathcal{N}} (R(0) - R(t)). \end{equation} This solution is a transcendental equation which cannot be solved in terms of elementary functions. However, rearranging Equation \ref{eq:transc} in terms of $R(t)$ and noting the number of individuals is conserved leads to \begin{equation} \label{eq:transcExp} R(t) = \mathcal{N}-I(t)-S(0) \exp\left({\frac{\beta}{\gamma \mathcal{N}} R(0)}\right) \exp\left({\frac{-\beta}{\gamma \mathcal{N}} R(t)}\right).
\end{equation} Given the Lambert $W_k$ function, the solution to $x=a+be^{cx}$ is $x=a-c^{-1}W_k(-bce^{ac})$ where $a$, $b$, and $c$ are complex constants, $b$ and $c$ are not equal to zero, and $k$ is an integer. Therefore the solution to Equation \ref{eq:transcExp} in terms of the Lambert $W_k$ function is \begin{equation} \label{eq:lambertW0} R(t) = \mathcal{N} - I(t) - \left(\frac{-\beta}{\gamma\mathcal{N}}\right)^{-1} W_0\left( \frac{-\beta}{\gamma\mathcal{N}} S(0) \exp \left( \frac{-\beta}{\gamma\mathcal{N}}(\mathcal{N}-I(t)-R(0)) \right) \right) \end{equation} where $k=0$ (principal branch) and $W_0(\cdot)$ has a single real value provided $R(t) \geqslant 0 \,\,\, \forall \, t$. Note that the above equation could be written in terms of $S(t)$ since $\mathcal{N}=S(t)+I(t)+R(t)$. A computer algorithm which approximates Equations \ref{eq:ODE1}~to~\ref{eq:ODE3} using a stochastic compartmental model produces a distribution of realisations which are expected to adhere, at least in a stochastic sense, to the underlying system of ODEs. Due to the inherent randomness of such a method, none of Equations \ref{eq:transc}, \ref{eq:transcExp} or \ref{eq:lambertW0} is in a convenient form to check that the realisations, at each time step, follow the underlying system of ODEs. However, rearranging Equation \ref{eq:transc} in terms of the reproduction number $\beta/\gamma$ yields \begin{equation} \label{eq:boverg} \frac{\beta}{\gamma} = \frac{\mathcal{N}}{R(0) - R(t)} \ln \frac{S(t)}{S(0)}. \end{equation} All time dependent factors are on the right side; since the left side is a constant, the right side must be time invariant. It is therefore reasonable to expect that any computer algorithm which approximates Equations \ref{eq:ODE1}~to~\ref{eq:ODE3} using a stochastic compartmental model will, given Equation \ref{eq:boverg}, produce reproduction number estimates at every time step provided $S(t) \neq S(0)$, $R(0) \neq R(t)$, $S(t) \neq 0$ and $S(0) \neq 0$.
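Equation \ref{eq:boverg} can be checked numerically without running any simulation at all: choose a hypothetical value of $I(t)$, solve the transcendental relation underlying Equation \ref{eq:transcExp} for $R(t)$ (here by simple bisection rather than the Lambert $W_0$ function, to stay within the Python standard library), and confirm that Equation \ref{eq:boverg} returns $\beta/\gamma$. The following is a minimal sketch with illustrative parameter values; it is not part of any released package:

```python
import math

# Illustrative parameters: N = 1000, S(0) = 990, R(0) = 0 and
# beta/gamma = 0.28/0.14 = 2 (the regime used in the results section).
N, S0, R0 = 1000, 990, 0
beta, gamma = 0.28, 0.14
c = beta / (gamma * N)

def R_of_I(I_t, lo=0.0, hi=500.0):
    """Solve S(0)*exp(-c*(R - R(0))) + I_t + R = N for R(t) by bisection.

    The bracket [lo, hi] is chosen to enclose the smaller of the two
    roots of this relation (the rising phase of the epidemic)."""
    f = lambda R: S0 * math.exp(-c * (R - R0)) + I_t + R - N
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

I_t = 100.0                  # hypothetical number infected at time t
R_t = R_of_I(I_t)
S_t = N - I_t - R_t          # conservation of N
# Equation (boverg): the reproduction number recovered from (S, R) alone.
r0 = N * math.log(S_t / S0) / (R0 - R_t)
assert abs(r0 - beta / gamma) < 1e-9
```

By construction the root satisfies $S(t) = S(0)\exp(-c(R(t)-R(0)))$, so the recovered ratio is exact up to bisection error, independently of which time point $I(t)$ is chosen.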
In practice, this will give rise to a distribution of reproduction number estimates where the mode equals the parameter ratio $\beta/\gamma$. Note it is not the objective here to seek solutions to this system of ODEs. \subsection{Stochastic compartmental model} Numerical solutions to Equations \ref{eq:ODE1}~to~\ref{eq:ODE3} are sought for the number of individuals in each state. For the SIR stochastic compartmental model, let the state variables be denoted $\tilde{S}$, $\tilde{I}$ and $\tilde{R}$. The events, the number of individuals moving between states during a given time interval, will be denoted $\tilde{N}_{SI}$ and $\tilde{N}_{IR}$ for the state transitions $S \rightarrow I$ and $I \rightarrow R$ respectively. Let $\Delta$ denote an integer increment in a process over a finite time interval $[t,t+\delta t)$. For example, the incremental change in the number of $S \rightarrow I$ events is $\Delta \tilde{N}_{SI}(t) = \tilde{N}_{SI}(t+\delta t) - \tilde{N}_{SI}(t)$. The discrete analogue of Equations \ref{eq:ODE1}~to~\ref{eq:ODE3} is \begin{equation} \Delta \tilde{S} = - \Delta \tilde{N}_{SI}(t), \end{equation} \begin{equation} \Delta \tilde{I} = \Delta \tilde{N}_{SI}(t) - \Delta \tilde{N}_{IR}(t), \end{equation} \begin{equation} \Delta \tilde{R} = \Delta \tilde{N}_{IR}(t). \end{equation} With reference to Equations \ref{eq:ODEeventsSI} and \ref{eq:ODEeventsIR}, let small increments in the number of events be denoted as $\delta \tilde{N}_{SI} = \tilde{\mu}_{SI} \tilde{S}(t) \delta t$ and $\delta \tilde{N}_{IR} = \tilde{\mu}_{IR} \tilde{I}(t) \delta t$ where for convenience $\tilde{\mu}_{SI}(t) = \beta I(t)/\mathcal{N}$ and $\tilde{\mu}_{IR}=\gamma$. In the following, two commonly used algorithms, the Reed–Frost chain Binomial and the Gillespie algorithm, are used in the context of the SIR model. Given that there are various versions of these algorithms, they will be explicitly defined below for clarity.
\subsubsection{Reed–Frost chain Binomial algorithm} There are several stochastic Euler schemes based on the Reed–Frost chain Binomial method \cite{Abbey1952, Fine1977} which could be used to determine the number of events $\Delta \tilde{N}_{SI}(t)$ and $\Delta \tilde{N}_{IR}(t)$. For example a Poisson distribution where at each time increment $\Delta \tilde{N}_{SI}(t) \sim \mathrm{Poi}(\tilde{\mu}_{SI}(t) \tilde{S}(t) \delta t)$ and $\Delta \tilde{N}_{IR}(t) \sim \mathrm{Poi}(\tilde{\mu}_{IR} \tilde{I}(t) \delta t)$. However, here the focus is on the Binomial distribution where the realisations of events are given by \begin{equation} \Delta \tilde{N}_{SI}(t) \sim \mathrm{Bin}(\tilde{S}(t), \,\, 1-\exp(-\tilde{\mu}_{SI}(t) \delta t)), \end{equation} \begin{equation} \Delta \tilde{N}_{IR}(t) \sim \mathrm{Bin}(\tilde{I}(t), \,\, 1- \exp(-\tilde{\mu}_{IR} \delta t)). \end{equation} Hence the per-individual probability, conditional on all states, of an event in the time period $\delta t$ is given by the cumulative distribution function of the Exponential distribution. It follows that the number of individuals in each state at time $t$ is \begin{equation} \label{eq:solA} \tilde{S}(t) = \tilde{S}(0) - \sum_{\tau=1}^{t} \Delta \tilde{N}_{SI}(\tau \delta t), \end{equation} \begin{equation} \label{eq:solB} \tilde{I}(t) = \tilde{I}(0) + \sum_{\tau=1}^{t} \left(\Delta \tilde{N}_{SI}(\tau \delta t) - \Delta \tilde{N}_{IR}(\tau \delta t) \right), \end{equation} \begin{equation} \label{eq:solC} \tilde{R}(t) = \tilde{R}(0) + \sum_{\tau=1}^{t} \Delta \tilde{N}_{IR}(\tau \delta t). \end{equation} The Reed–Frost chain Binomial algorithm uses the Euler–Maruyama approximation to compute the states of this system at each time step: see Algorithm~\ref{al:cb}. In this algorithm the number of individuals arriving in each state at a given time step depends on the number in the corresponding state at the previous time step. Consequently, the realisations from each state form a first-order Markov process.
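For concreteness, this chain Binomial update can be transcribed directly into Python. The sketch below uses only the standard library (a Bernoulli sum stands in for a binomial sampler) and is illustrative only; it is not the interface of any particular package:

```python
import math
import random

def binom(n, p):
    """Draw from Bin(n, p) as a sum of n Bernoulli trials (stdlib only)."""
    return sum(random.random() < p for _ in range(n))

def chain_binomial_sir(S, I, R, beta, gamma, dt, steps):
    """Reed-Frost chain Binomial (Euler-Maruyama) scheme for the SIR model."""
    N = S + I + R
    path = [(S, I, R)]
    for _ in range(steps):
        if I == 0:                       # epidemic has died out
            break
        dN_SI = binom(S, 1.0 - math.exp(-beta * I / N * dt))
        dN_IR = binom(I, 1.0 - math.exp(-gamma * dt))
        S, I, R = S - dN_SI, I + dN_SI - dN_IR, R + dN_IR
        path.append((S, I, R))
    return path

random.seed(1)
path = chain_binomial_sir(990, 10, 0, beta=0.28, gamma=0.14, dt=0.25, steps=400)
# The system is closed, so S + I + R = N at every step.
assert all(s + i + r == 1000 for s, i, r in path)
```

Because each draw depends only on the current state, the generated trajectory is a first-order Markov chain, exactly as noted above.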
\bigskip \begin{algorithm}[H] \SetAlgoLined $\delta t>0$, $t=0,1,...,T$\\ $\beta>0$, $\gamma>0$\\ $S_0>0$, $I_0>0$, $R_0\geqslant0$\\ $\mathcal{N}=S_0+I_0+R_0$\\ \While{$(t<T$ $\mathrm{and}$ $I_t \neq0)$}{ Draw:\\ \hspace{1em}$\Delta N_{SI} \sim \mathrm{Bin}(S_t, \, 1-\exp(-\beta I_t \mathcal{N}^{-1} \delta t))$\\ \hspace{1em}$\Delta N_{IR} \sim \mathrm{Bin}(I_t, \, 1-\exp(-\gamma \delta t))$\\ Let:\\ \hspace{1em}$S_{t+1} = S_t - \Delta N_{SI} $\\ \hspace{1em}$I_{t+1} = I_t + \Delta N_{SI} - \Delta N_{IR} $\\ \hspace{1em}$R_{t+1} = R_t + \Delta N_{IR}$\\ $t = t + 1$ } \caption{Reed–Frost chain Binomial algorithm for SIR model} \label{al:cb} \end{algorithm} \bigskip \subsubsection{Gillespie algorithm} The Gillespie scheme \cite{Gillespie1976, Gillespie1977, Doob1942, Doob1945} is a stochastic Euler scheme which differs from the Reed-Frost chain Binomial scheme in that the time increment length varies stochastically, and the transition size per time increment is fixed such that either $\Delta \tilde{N}_{SI}(t) = 1$ or $\Delta \tilde{N}_{IR}(t) = 1$, i.e. exactly one event occurs per increment. Given this scheme for an SIR model: \begin{itemize} \item $S \rightarrow I$: time to next infection, conditional on $\tilde{S}(t)$ and $\tilde{I}(t)$, is drawn from $\mathrm{Exp}(\tilde{\mu}_{SI}(t) \tilde{S}(t))$. \item $I \rightarrow R$: time to next removal, conditional on $\tilde{I}(t)$, is drawn from $\mathrm{Exp}(\tilde{\mu}_{IR} \tilde{I}(t))$. \end{itemize} With $\tilde{\mu}_{SIR}(t) = \tilde{\mu}_{SI}(t) \tilde{S}(t) + \tilde{\mu}_{IR}(t) \tilde{I}(t)$ it follows that the time to the next event, conditional on $\tilde{S}(t)$ and $\tilde{I}(t)$, is drawn from $\mathrm{Exp}(\tilde{\mu}_{SIR}(t))$. Therefore, the probabilities of infection ($S \rightarrow I$) and removal ($I \rightarrow R$) are: \begin{itemize} \item $\mathrm{Pr}(\mathrm{infection}|\tilde{S}(t), \tilde{I}(t), \tilde{R}(t) ) = \tilde{\mu}_{SI}(t) \tilde{S}(t) \, / \, \tilde{\mu}_{SIR}(t)$.
\item $\mathrm{Pr}(\mathrm{removal}|\tilde{S}(t), \tilde{I}(t), \tilde{R}(t) ) = 1 - \mathrm{Pr}(\mathrm{infection}|\tilde{S}(t), \tilde{I}(t), \tilde{R}(t) ) = \tilde{\mu}_{IR}(t) \tilde{I}(t) \, / \, \tilde{\mu}_{SIR}(t)$. \end{itemize} A Gillespie algorithm for the SIR model is given in Algorithm \ref{al:gl}. \bigskip \begin{algorithm}[H] \SetAlgoLined $t=0,1,...,T$\\ $\beta>0$, $\gamma>0$\\ $S_0>0$, $I_0>0$, $R_0\geqslant0$\\ $\mathcal{N}=S_0+I_0+R_0$\\ \While{$(t<T$ $\mathrm{and}$ $I_t \neq0)$}{ Draw:\\ \hspace{1em}$\tau \sim \mathrm{Exp}(\mu_{SIR})$\\ Choose index $i$ from list $[.\,,\,.]$:\\ \hspace{1em}$i \sim \mathrm{Discrete}([\mu_{SI} S_t \, / \, \mu_{SIR}, \,\, \mu_{IR} I_t \, / \, \mu_{SIR}])$\\ \If{$i=0$}{ $S_{t+1} = S_{t} - 1$\\ $I_{t+1} = I_{t} + 1$\\ }\Else{ $I_{t+1} = I_{t} - 1$\\ $R_{t+1} = R_{t} + 1$\\ } $t = t + \tau$\\ } \caption{Gillespie algorithm for SIR model} \label{al:gl} \end{algorithm} \bigskip \bigskip Note that, similarly to the Reed-Frost chain Binomial algorithm, the number of individuals in each state at a given time is given by Equations~\ref{eq:solA}~to~\ref{eq:solC}. \subsection{Verifying solutions and testing software} The realisations from the stochastic compartmental model are expected to be consistent with the solution obtained analytically, Equation \ref{eq:boverg}. Rewriting Equation \ref{eq:boverg} in terms of the numerically computed states $\tilde{S}(t)$ and $\tilde{R}(t)$, given by Equations \ref{eq:solA} and \ref{eq:solC}, leads to \begin{equation} \label{eq:tildebovergA} \frac{\tilde{\beta}}{\tilde{\gamma}} = \frac{\mathcal{N}}{\tilde{R}(0) - \tilde{R}(t)} \ln \frac{\tilde{S}(t)}{\tilde{S}(0)} \quad \forall t. \end{equation} Estimates of the reproduction number are denoted $\tilde{\beta}/\tilde{\gamma}$. Given realisations of the states, e.g. computed by Algorithm~\ref{al:cb} or \ref{al:gl}, it is expected that $\beta / \gamma$ will equal the mode of the distribution of $\tilde{\beta} / \tilde{\gamma}$ estimates.
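For illustration, the event loop of Algorithm~\ref{al:gl} can also be written as a compact stand-alone Python sketch. It uses only the standard library, the parameter values are illustrative, and the function is not the interface of any particular package:

```python
import random

def gillespie_sir(S, I, R, beta, gamma, t_max):
    """Direct-method Gillespie simulation of the stochastic SIR model."""
    N = S + I + R
    t = 0.0
    events = [(t, S, I, R)]
    while I > 0 and t < t_max:
        rate_SI = beta * I / N * S         # infection propensity
        rate_IR = gamma * I                # removal propensity
        total = rate_SI + rate_IR
        t += random.expovariate(total)     # exponential waiting time
        if random.random() < rate_SI / total:
            S, I = S - 1, I + 1            # infection event, S -> I
        else:
            I, R = I - 1, R + 1            # removal event, I -> R
        events.append((t, S, I, R))
    return events

random.seed(1)
events = gillespie_sir(990, 10, 0, beta=0.28, gamma=0.14, t_max=100.0)
assert all(s + i + r == 1000 for _, s, i, r in events)   # N is conserved
```

Exactly one transition occurs per event, so each record changes a single pair of states by one individual, and the waiting times are exponential with the combined propensity, as described above.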
This is checked using Algorithm \ref{al:r0}, where multiple simulations of the epidemic process are generated from either Algorithm~\ref{al:cb} or \ref{al:gl}. Note in Algorithm \ref{al:r0} that $\tilde{\beta}/\tilde{\gamma}$ is denoted as $r_0$. \bigskip \begin{algorithm}[H] \SetAlgoLined $T$ total number of time steps\\ $J$ total number of simulations\\ \For{$j=0$ \KwTo $J$ }{ Simulate using Algorithm~\ref{al:cb} or \ref{al:gl}:\\ \hspace{1em}$S[j,0...T]$; \hspace{1em}$I[j,0...T]$; \hspace{1em}$R[j,0...T]$;\\ } $N = S[0,0]+I[0,0]+R[0,0]$\\ \For{$j=0$ \KwTo $J$ }{ \For{$t=0$ \KwTo $T$ }{ \If{($S[j,t] \neq S[j,0]$ $\mathrm{and}$ $R[j,t] \neq R[j,0]$ $\mathrm{and}$ $S[j,t] \neq 0$ $\mathrm{and}$ $S[j,0] \neq 0$)}{ $r_0[j,t] = N \ln( S[j,t] \, / \, S[j,0]) \, / \, (R[j,0] - R[j,t])$ }\Else{ $r_0[j,t]=\mathrm{NaN}$ } } $\overline{r}_0[j] = \mathrm{mean}(r_0[j,0...T], \mathrm{excluding \, NaNs})$ \,\,\, \# useful for Normal distributions } Estimate the mode of the $r_0$ distribution \caption{Estimate ${r}_0$ for SIR model} \label{al:r0} \end{algorithm} \bigskip There is a tacit assumption in Algorithm~\ref{al:r0} that most of the simulated processes have a good proportion of the population flowing from state $\tilde{S}(t)$ to $\tilde{R}(t)$ during the course of the epidemic: under these conditions the mode of the distribution of $\tilde{\beta}/\tilde{\gamma}$ estimates (i.e. $r_0$ estimates) will approximately equal $\beta / \gamma$. If most of the epidemics die out very quickly with only one or a few infections in total, then it is very likely that there will be a significant discrepancy between the mode of the $\tilde{\beta}/\tilde{\gamma}$ estimates and $\beta / \gamma$ due to insufficient realisations. However, this is not an issue: as Algorithm~\ref{al:r0} is intended to be used for testing/benchmarking purposes, suitable model parameters can be chosen.
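Putting the pieces together, Algorithm~\ref{al:r0} can be exercised end-to-end in a few lines of standard-library Python. The simulator below is a minimal chain Binomial step (illustrative only, not the interface of any particular package); with $\beta/\gamma = 2$ the mean of the $r_0$ estimates should land close to $2$, up to the Euler bias at finite $\delta t$ discussed in the results section:

```python
import math
import random

def binom(n, p):
    # stdlib-only binomial draw: sum of n Bernoulli trials
    return sum(random.random() < p for _ in range(n))

def simulate(S, I, R, beta, gamma, dt, steps):
    # minimal chain Binomial SIR simulator (in the spirit of Algorithm al:cb)
    N = S + I + R
    path = [(S, I, R)]
    for _ in range(steps):
        if I == 0:
            break
        dSI = binom(S, 1.0 - math.exp(-beta * I / N * dt))
        dIR = binom(I, 1.0 - math.exp(-gamma * dt))
        S, I, R = S - dSI, I + dSI - dIR, R + dIR
        path.append((S, I, R))
    return path

def r0_estimates(path):
    # apply the time-invariant expression at every admissible time step
    S0, I0, R0 = path[0]
    N = S0 + I0 + R0
    return [N * math.log(S / S0) / (R0 - R)
            for S, I, R in path[1:]
            if S != S0 and R != R0 and S > 0 and S0 > 0]

random.seed(7)
beta, gamma = 0.28, 0.14           # beta/gamma = 2
means = []
for _ in range(30):                # 30 simulated epidemics
    est = r0_estimates(simulate(990, 10, 0, beta, gamma, dt=0.25, steps=400))
    if est:                        # skip epidemics with no admissible steps
        means.append(sum(est) / len(est))
overall = sum(means) / len(means)
# loose tolerance: a small positive Euler bias remains at dt = 0.25
assert abs(overall - beta / gamma) < 0.2
```

Turning this script into a formal unit test is then a matter of wrapping the final assertion in the test framework of choice and fixing the tolerance to the accepted Euler bias for the chosen $\delta t$.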
\subsection{Results for Reed-Frost chain Binomial algorithm} The Python package \texttt{scm}~\cite{scm}, which accompanies this paper, implements a general Reed-Frost chain Binomial algorithm using the principles of Algorithm~\ref{al:cb}. This package contains a script, \texttt{examples/sir\_binomial.py}, which is used here to generate $500$ simulations of a stochastic SIR epidemic process over $400$ time steps with $\delta t = 0.25$. The initial conditions for the states are $(\tilde{S}(0), \tilde{I}(0), \tilde{R}(0)) = (990, 10, 0)$ and the transition rates are defined such that $\beta/\gamma=0.28/0.14=2$. Algorithm~\ref{al:r0}, also included in \texttt{scm}, is used to generate a distribution of $\tilde{\beta}/\tilde{\gamma}$ estimates i.e. $r_0$ estimates. An instance of running this code with these parameters is given in Figure~\ref{fig:Figure_1a_cb_ts}, which shows the timeseries of the states $\tilde{S}(t)$, $\tilde{I}(t)$ and $\tilde{R}(t)$ for each simulated process. Figures~\ref{fig:Figure_1b_cb_MeanPerSim} and \ref{fig:Figure_1c_cb_MeanAll} depict histograms of $\tilde{\beta}/\tilde{\gamma}$ estimates with quantiles $0.25$, $0.5$ and $0.75$ shown by the dashed lines. As expected, the mode of the distribution of $\tilde{\beta}/\tilde{\gamma}$ estimates is very close to the model value $\beta/\gamma=2$. Given the mean $\overline{r}_0[j]$, i.e. the mean $\tilde{\beta}/\tilde{\gamma}$ from each simulation $j$, the mean over all $500$ simulations is $2.025$, which is within $4$ standard errors ($n=500$) of the model value. This result is given in Table \ref{tab:binomialStepSize} along with other step sizes; all other simulation parameters are unchanged. As can be seen, reducing $\delta t$ brings the mean of the $\tilde{\beta}/\tilde{\gamma}$ distribution closer to the model value of $\beta/\gamma$; this is reasonable under an Euler–Maruyama approximation \cite{Gerald1994}.
Hence, when benchmarking this code/algorithm, consideration should be given to the size of $\delta t$, since it influences the variability of $\tilde{\beta}/\tilde{\gamma}$ around the model value $\beta/\gamma$. \iffalse In the following unless otherwise stated all simulation parameters are the same as above. Given the mean $\overline{r}_0[j]$, then the mean over all $j$ simulations yields a value of $1.99796$, when $\delta t=0.25$ is reduced by a factor of $10$ (i.e. $\delta t=0.025$) and the number of time steps is correspondingly increased to $4000$. In this instance the standard error is $0.00728$ ($n=500$) hence $1.99796$ is within one standard error of the model value $\beta/\gamma=2$. Conversely, let $\delta t= 0.25$ be increased by a factor of $4$ (i.e. $\delta t=1$) and correspondingly let the number of time steps be reduced to $100$: given the mean $\overline{r}_0[j]$, then the mean over all $j$ simulations yields a value of $2.15$ with a standard error of $0.00632$ ($n=500$). Hence $2.15$ is within $25$ standard errors of the model value $\beta/\gamma=2$. \fi \begin{table}[htb] \centering \begin{tabular}{l|l|l|l|l} $\delta t$ & time steps & mean & SE & number of SE from $\beta / \gamma$ \\ \hline 1.0 & 100 & 2.153 & 0.00632 & 25 \\ 0.25 & 400 & 2.025 & 0.00736 & 4 \\ 0.025 & 4000 & 1.99796 & 0.00728 & 1 \end{tabular} \caption{Estimates of the mean reproduction number ($1/500 \sum_{500} \overline{r}_0[j]$) for a selection of time step sizes $\delta t$. The standard error SE is computed with $n=500$ and the total simulation time period equals $\delta t \, \, \times$ (time steps) $= 100$.
As $\delta t$ decreases the mean approaches $\beta / \gamma = 2$.} \label{tab:binomialStepSize} \end{table} \begin{figure}[htb] \centering \begin{subfigure}[t]{0.49\textwidth} \caption{Timeseries} \label{fig:Figure_1a_cb_ts} \vspace{-0.2cm} \centering \includegraphics[width=\linewidth]{Figure_1a_cb_ts.png} \end{subfigure} \vspace{1.0cm} \begin{subfigure}[t]{0.49\textwidth} \caption{Histogram: mean $\tilde{\beta}/\tilde{\gamma}$} \label{fig:Figure_1b_cb_MeanPerSim} \vspace{-0.2cm} \centering \includegraphics[width=\linewidth]{Figure_1b_cb_MeanPerSim.png} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\textwidth} \caption{Histogram: all $\tilde{\beta}/\tilde{\gamma}$} \label{fig:Figure_1c_cb_MeanAll} \vspace{-0.2cm} \centering \includegraphics[width=\linewidth]{Figure_1c_cb_MeanAll.png} \end{subfigure} \caption{500 simulations of the SIR epidemic process using the Reed-Frost Chain Binomial algorithm. The top panel (\subref{fig:Figure_1a_cb_ts}) shows the timeseries for states $\tilde{S}(t)$, $\tilde{I}(t)$ and $\tilde{R}(t)$ for each simulated process. Panel (\subref{fig:Figure_1b_cb_MeanPerSim}) gives a histogram of the mean $\tilde{\beta}/\tilde{\gamma}$ from each simulation i.e. $\Bar{r}_0$. Lastly, panel (\subref{fig:Figure_1c_cb_MeanAll}) depicts a histogram of $\tilde{\beta}/\tilde{\gamma}$ at every time step, i.e. ${r}_0$, across all simulations. Note that ${r}_0$ and $\Bar{r}_0$ are defined in Algorithm~\ref{al:r0}.} \end{figure} \subsection{Results for Gillespie algorithm} A general Gillespie algorithm drawing on the principles of Algorithm~\ref{al:gl} is included in the aforementioned package \texttt{scm}~\cite{scm}. Using this package, $500$ simulations of a stochastic SIR epidemic process were computed with the script \texttt{examples/sir\_gillespie.py}, with $1700$ time steps per simulation. As above, $(\tilde{S}(0), \tilde{I}(0), \tilde{R}(0)) = (990, 10, 0)$ and $\beta/\gamma=0.28/0.14=2$.
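A minimal event-driven Gillespie simulator for the same SIR process might look like the following sketch (event rates $\beta S I/\mathcal{N}$ and $\gamma I$ with exponential waiting times; function and variable names are illustrative rather than the \texttt{scm} API):

```python
import numpy as np

def sir_gillespie(S0, I0, R0, beta, gamma, max_events, rng=None):
    """Event-driven stochastic SIR: after an exponentially distributed
    waiting time, either one infection (rate beta*S*I/N) or one
    recovery (rate gamma*I) occurs."""
    if rng is None:
        rng = np.random.default_rng()
    N = S0 + I0 + R0
    t, S, I, R = 0.0, S0, I0, R0
    times, traj = [t], [(S, I, R)]
    for _ in range(max_events):
        rate_inf = beta * S * I / N
        rate_rec = gamma * I
        total = rate_inf + rate_rec
        if total == 0.0:
            break                      # no infectives left: epidemic over
        t += rng.exponential(1.0 / total)
        if rng.random() < rate_inf / total:
            S, I = S - 1, I + 1        # infection event
        else:
            I, R = I - 1, R + 1        # recovery event
        times.append(t)
        traj.append((S, I, R))
    return np.array(times), np.array(traj)

times, traj = sir_gillespie(990, 10, 0, 0.28, 0.14, 1700, rng=np.random.default_rng(1))
assert (traj.sum(axis=1) == 1000).all()  # population conserved
```

Unlike the chain Binomial scheme, the step size here is random, so there is no $\delta t$ to tune; each event changes exactly one pair of compartments.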
The timeseries for $\tilde{S}(t)$, $\tilde{I}(t)$ and $\tilde{R}(t)$ are given in Figure~\ref{fig:Figure_2a_gl_ts}; as expected, the dynamics are very similar to those shown in Figure~\ref{fig:Figure_1a_cb_ts}. Figures~\ref{fig:Figure_2b_gl_MeanPerSim} and \ref{fig:Figure_2c_gl_MeanAll} depict histograms of $\tilde{\beta}/\tilde{\gamma}$ estimates with quantiles $0.25$, $0.5$ and $0.75$ shown by the dashed lines. The mode of the $\tilde{\beta}/\tilde{\gamma}$ estimates is, as expected, close to the model value $\beta/\gamma=2$. Taking the mean $\overline{r}_0[j]$, the mean over all $j$ simulations was $2.013$, which is within $3$ standard errors of $\beta/\gamma=2$ (in this instance $SE=0.0062$ with $n=500$). \begin{figure}[htb] \centering \begin{subfigure}[t]{0.49\textwidth} \caption{Timeseries} \label{fig:Figure_2a_gl_ts} \vspace{-0.2cm} \centering \includegraphics[width=\linewidth]{Figure_2a_gl_ts.png} \end{subfigure} \vspace{1.0cm} \begin{subfigure}[t]{0.49\textwidth} \caption{Histogram: mean $\tilde{\beta}/\tilde{\gamma}$} \label{fig:Figure_2b_gl_MeanPerSim} \vspace{-0.2cm} \centering \includegraphics[width=\linewidth]{Figure_2b_gl_MeanPerSim.png} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\textwidth} \caption{Histogram: all $\tilde{\beta}/\tilde{\gamma}$} \label{fig:Figure_2c_gl_MeanAll} \vspace{-0.2cm} \centering \includegraphics[width=\linewidth]{Figure_2c_gl_MeanAll.png} \end{subfigure} \caption{500 simulations of the SIR epidemic process using the Gillespie algorithm. The top panel (\subref{fig:Figure_2a_gl_ts}) shows the timeseries for states $\tilde{S}(t)$, $\tilde{I}(t)$ and $\tilde{R}(t)$ for each simulated process. Panel (\subref{fig:Figure_2b_gl_MeanPerSim}) gives a histogram of the mean $\tilde{\beta}/\tilde{\gamma}$ from each simulation i.e. $\Bar{r}_0$. Lastly, panel (\subref{fig:Figure_2c_gl_MeanAll}) depicts a histogram of $\tilde{\beta}/\tilde{\gamma}$ at every time step, i.e.
${r}_0$, given all simulations.} \end{figure} \subsection{Summary regarding SIR model tests} Given the results above, it is concluded that, when tested using Algorithm~\ref{al:r0}, the computer code for the SIR model used in \texttt{scm}, and hence Algorithms~\ref{al:cb} and \ref{al:gl}, adheres to the underlying system of ODEs (Equations~\ref{eq:ODE1}~to~\ref{eq:ODE3}). Consequently it is confirmed that Algorithm~\ref{al:r0} could be used both to benchmark numerical results and to write a unit test. For this model the distribution of $\tilde{\beta}/\tilde{\gamma}$ estimates is approximately Normal, hence the mean can be used to estimate the mode. However, if the same principles were applied to a different system of ODEs, Normality could not be assumed. In addition to the aforementioned tests it would be prudent to check at every time step that $\tilde{S}(t)+\tilde{I}(t)+\tilde{R}(t)=\mathcal{N}$ or, equivalently, that $d\tilde{S}(t)/dt+d\tilde{I}(t)/dt+d\tilde{R}(t)/dt=0$. \section{Lotka–Volterra model} The Lotka–Volterra, or predator–prey, model \cite{Lotka1925, Volterra1926, Lotka1920, Volterra1928} is used as a second example to demonstrate how the general approach given in Section \ref{section:outlineMeth} can be applied beyond the SIR model. The Lotka–Volterra model may be used to describe the dynamics of a biological system in which two species interact: one population consists of predators and the other of prey. This model can be viewed as a graph having two nodes (compartments) connected by a bidirectional edge. The system of ODEs in terms of time $t$ is \begin{equation} \label{eq:ppODE1} \frac{dx(t)}{dt} = \alpha x(t) - \beta x(t)y(t), \end{equation} \begin{equation} \label{eq:ppODE2} \frac{dy(t)}{dt} = \delta x(t)y(t) - \gamma y(t). \end{equation} The number of prey is denoted by $x(t)$ and the number of predators by $y(t)$. Real constants $\alpha>0$, $\beta>0$, $\gamma>0$ and $\delta>0$ define the interaction between the two populations.
The rate at which prey reproduce is represented by $\alpha x(t)$. The rate of predation upon these prey is proportional to the rate at which prey and predators meet; this is described by $\beta x(t)y(t)$. The rate of predator population growth is represented by $\delta x(t)y(t)$, and the loss rate of predators due to death or emigration is described by $\gamma y(t)$. It follows from Equations \ref{eq:ppODE1} and \ref{eq:ppODE2} that \begin{equation} \frac{dx(t)}{dy(t)} = \frac{x(t)(\alpha - \beta y(t))}{y(t)(\delta x(t) - \gamma)}. \end{equation} Separation of variables may be used to rewrite this differential equation in integral form \begin{equation} \int_{x(0)}^{x(t)} \frac{\delta x - \gamma}{x} \,dx = \int_{y(0)}^{y(t)} \frac{\alpha - \beta y}{y} \,dy. \end{equation} Solving this integral equation and rearranging leads to \iffalse \begin{equation} \label{eq:ag} \delta(x(t)-x(0)) - \gamma \ln \frac{x(t)}{x(0)} = \alpha \ln \frac{y(t)}{y(0)} - \beta (y(t)-y(0)) \end{equation} This equation can be rearranged in terms of each of the parameters e.g. \fi \begin{equation} \label{eq:ppalpha} \alpha = \frac{\beta (y(t)-y(0)) + \delta(x(t)-x(0)) - \gamma \ln \left(x(t) \middle/ x(0) \right) }{\ln \left( y(t) \middle/ y(0) \right) }. \end{equation} The corresponding expression in terms of $\gamma$ is: \begin{equation} \label{eq:ppgamma} \gamma = \frac{\beta (y(t)-y(0)) + \delta(x(t)-x(0)) - \alpha \ln \left( y(t) \middle/ y(0) \right) }{\ln \left( x(t) \middle/ x(0) \right)}. \end{equation} It follows that Equations \ref{eq:ppalpha} and \ref{eq:ppgamma} are not defined if the magnitude of a logarithm is infinite, or if either the numerator or the denominator is zero. A predator-prey process can be simulated from Equations \ref{eq:ppODE1} and \ref{eq:ppODE2} using an algorithm suitable for stochastic compartmental models, such as the Reed-Frost chain Binomial or Gillespie algorithm.
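A Gillespie-style simulation of the predator-prey process can be sketched as follows, with one event type per term in Equations \ref{eq:ppODE1} and \ref{eq:ppODE2}. This is a minimal illustration under those assumed event rates; names are hypothetical, not the \texttt{scm} API:

```python
import numpy as np

def lotka_volterra_gillespie(x0, y0, alpha, beta, delta, gamma,
                             max_events, rng=None):
    """Gillespie simulation of the stochastic Lotka-Volterra process
    with four event types: prey birth (rate alpha*x), prey death by
    predation (beta*x*y), predator birth (delta*x*y) and predator
    death (gamma*y)."""
    if rng is None:
        rng = np.random.default_rng()
    t, x, y = 0.0, x0, y0
    traj = [(t, x, y)]
    for _ in range(max_events):
        rates = np.array([alpha * x, beta * x * y, delta * x * y, gamma * y])
        total = rates.sum()
        if total == 0.0:
            break                          # both populations extinct
        t += rng.exponential(1.0 / total)
        event = rng.choice(4, p=rates / total)
        if event == 0:
            x += 1                         # prey birth
        elif event == 1:
            x -= 1                         # prey eaten
        elif event == 2:
            y += 1                         # predator birth
        else:
            y -= 1                         # predator death
        traj.append((t, x, y))
    return np.array(traj)                  # columns: t, x, y

# Parameters from the text
traj = lotka_volterra_gillespie(500, 500, 0.2, 0.0005, 0.0005, 0.1, 30000)
```

Note that, unlike the SIR model, total population is not conserved here; only non-negativity of $x$ and $y$ is guaranteed by the rate structure.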
Given realisations from such a simulation, the distribution of each parameter estimate is computed using Equations \ref{eq:ppalpha} and \ref{eq:ppgamma}: a given parameter estimate is the analogue of $r_0$ in Algorithm \ref{al:r0}. The mode of the distribution of these estimates is expected to equal the corresponding model parameter value. As an example, let $(\alpha, \beta, \delta, \gamma)=(0.2, 0.0005, 0.0005, 0.1)$ and $(x(0), y(0))=(500, 500)$. With these parameters and initial conditions the solution to Equations \ref{eq:ppODE1} and \ref{eq:ppODE2} is a limit cycle oscillator. Figure \ref{fig:Figure_3a_lv_ts} depicts the timeseries of $500$ simulations of the predator-prey process using the Gillespie algorithm, with $30000$ time steps per simulation. Given these realisations, Equations \ref{eq:ppalpha} and \ref{eq:ppgamma} yield distributions of estimates for $\alpha$ and $\gamma$ respectively, shown in Figures~\ref{fig:Figure_3b_lv_alpha} and \ref{fig:Figure_3c_lv_gamma}. As required, Figure~\ref{fig:Figure_3b_lv_alpha} shows the mode of the distribution at $\alpha \approx 0.2$; similarly, the mode in Figure~\ref{fig:Figure_3c_lv_gamma} is at $\gamma \approx 0.1$. These figures show that the parameter estimate distributions are not Normal, so the mode cannot be estimated from the mean. Nontrivial methods for estimating the mode of an arbitrary distribution are beyond the scope of this paper, hence the use of histograms.
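Equations \ref{eq:ppalpha} and \ref{eq:ppgamma} translate directly into vectorised per-time-step estimators. The sketch below is illustrative (function names are not from \texttt{scm}); undefined points are set to NaN, mirroring Algorithm~\ref{al:r0}:

```python
import numpy as np

def estimate_alpha(x, y, beta, delta, gamma):
    """Equation for alpha: per-time-step estimates; NaN wherever a
    logarithm or the denominator is undefined."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        num = beta * (y - y[0]) + delta * (x - x[0]) - gamma * np.log(x / x[0])
        est = num / np.log(y / y[0])
    est[~np.isfinite(est)] = np.nan
    return est

def estimate_gamma(x, y, alpha, beta, delta):
    """Equation for gamma: per-time-step estimates."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        num = beta * (y - y[0]) + delta * (x - x[0]) - alpha * np.log(y / y[0])
        est = num / np.log(x / x[0])
    est[~np.isfinite(est)] = np.nan
    return est

# Self-consistency: an alpha estimate fed back into the gamma formula
# must recover the gamma used to produce it (an algebraic identity)
a = estimate_alpha([500, 600], [500, 450], beta=0.0005, delta=0.0005, gamma=0.1)[1]
g = estimate_gamma([500, 600], [500, 450], alpha=a, beta=0.0005, delta=0.0005)[1]
assert abs(g - 0.1) < 1e-9
```

Pooling such estimates across all time steps and simulations, then histogramming them, yields the distributions whose modes are compared with the model parameters.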
\begin{figure}[htb] \centering \begin{subfigure}[t]{0.49\textwidth} \caption{Timeseries} \label{fig:Figure_3a_lv_ts} \vspace{-0.2cm} \centering \includegraphics[width=\linewidth]{Figure_3a_lv_ts.png} \end{subfigure} \vspace{1.0cm} \begin{subfigure}[t]{0.49\textwidth} \caption{Histogram: $\alpha$ estimates} \label{fig:Figure_3b_lv_alpha} \vspace{-0.2cm} \centering \includegraphics[width=\linewidth]{Figure_3b_lv_alpha.png} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\textwidth} \caption{Histogram: $\gamma$ estimates} \label{fig:Figure_3c_lv_gamma} \vspace{-0.2cm} \centering \includegraphics[width=\linewidth]{Figure_3c_lv_gamma.png} \end{subfigure} \caption{500 simulations of the Lotka–Volterra biological process using the Gillespie algorithm. The top panel (\subref{fig:Figure_3a_lv_ts}) shows the prey $x(t)$ [blue] and predator $y(t)$ [red] timeseries for each simulated process. Panels (\subref{fig:Figure_3b_lv_alpha}) and (\subref{fig:Figure_3c_lv_gamma}) respectively depict the histograms of the $\alpha$ and $\gamma$ parameter estimates over all times and simulations.} \end{figure} The code used to simulate these results for the Lotka–Volterra process is available in \texttt{scm}~\cite{scm}; specifically, see \texttt{examples/lotka\_volterra\_gillespie.py}. In this instance the core Gillespie algorithm code has already been tested using the SIR model, therefore in practice this code does not necessarily need to be tested a second time using a different model, e.g. the Lotka–Volterra model. Note that the aim in this section was to demonstrate that the techniques outlined above in relation to the SIR model are also applicable to other similar classes of ODE systems. In summary, the Lotka–Volterra model has been considered in the context of the stochastic compartmental model framework.
In terms of the testing approach given in Section \ref{section:outlineMeth}, this model differs from the SIR model in that the exact solution can be expressed in terms of any given parameter and the distribution of parameter estimates is not Normal. The computer code which estimated $x(t)$ and $y(t)$ has been shown to produce numerical results that are consistent with the Lotka–Volterra system of ODEs. This testing approach could therefore be used to benchmark numerical results, and as such it could form the basis of a unit test. \section{Discussion} In a real world setting it is likely that there will be discrepancies between the data collected from an actual epidemic or biological process and the realisations of a stochastic compartmental model computed using either the Reed-Frost chain Binomial or the Gillespie algorithm. In spite of this, the computer code used to generate realisations from a stochastic compartmental model should not fail to adhere to the underlying system of ODEs. It is of critical importance to test that such computer code generates plausible numerical output. Consequently it is vital that the numerical output is benchmarked in a controlled manner within a testing framework with robust test procedures, such as those explored above. Software for stochastic compartmental models is often more general than the three-compartment SIR model given in Algorithm~\ref{al:cb} or \ref{al:gl}. For instance, it may accommodate models with any number of compartments, and the directed graph could include branching and feedback: it is straightforward to extend Algorithms~\ref{al:cb} and \ref{al:gl} to such cases. In this regard the computer code in \texttt{scm}~\cite{scm} for the Reed-Frost chain Binomial and Gillespie algorithms is quite general, although this particular implementation of the Gillespie algorithm does not include branching.
As a consequence, \texttt{scm} has the advantage of being applicable to any suitable compartmental model of arbitrary size and complexity. The main strength of the novel testing approach described in this paper is that it can be used to check that the numerical realisations adhere to the underlying system of ODEs independently of the algorithms used to generate those realisations. However, this approach is limited in that, depending on the system of ODEs, it may not be possible to find analytical solutions in terms of every model parameter, or in terms of combinations of model parameters. By way of example, let the SIR model be extended with an additional compartment $E$ between $S$ and $I$: this is the so-called SEIR model. To the author's knowledge it is not possible to solve this system of ODEs analytically in such a way that a solution can be written in terms of the transition parameter relating to $E \rightarrow I$. Specifically, it is not possible to write an expression analogous to Equation \ref{eq:boverg} that involves the $E \rightarrow I$ transition parameter. However, in terms of the underlying system of ODEs, the first $S \rightarrow E$ and last $I \rightarrow R$ transition parameters of the SEIR model are equivalent to the $S \rightarrow I$ and $I \rightarrow R$ parameters in the SIR model. Consequently one part of the SEIR model solution has the same form as Equation \ref{eq:boverg}, therefore the novel testing approach developed in this paper could be applied directly. Any error in the code used to compute the realisations from an SEIR model would almost surely be exposed under such a test (using Equation \ref{eq:boverg}) due to the linear and sequential nature of the graph connecting the compartments. For stochastic compartmental models with more complicated graphs, the testing methods explored in this paper may not be sufficient on their own, or perhaps not even applicable.
In future work it would be of interest to explore the case where the system of ODEs for a stochastic compartmental model does not admit either a full or partial analytical solution. In this case it would be plausible to perform a test as follows: \begin{enumerate} \item compute the realisations of the process using, for example, the Gillespie algorithm; \item find the derivatives of these computed realisations, for example by the finite difference method (see caveat below); \item rearrange one ODE in terms of a particular parameter and then compute estimates of that parameter using the aforementioned computed realisations and their derivatives; \item compute the mode of the resulting distribution of parameter estimates and check that it matches the actual model parameter value; \item repeat this procedure, where possible/required, for every parameter of interest in the system of ODEs. \end{enumerate} For example, although trivial, Equation \ref{eq:ODE1} could be written as $\beta = -\mathcal{N} I^{-1} S^{-1} \, dS/dt$, in which case substituting numerically computed realisations into the right side would result in a distribution of $\beta$ estimates whose mode is the model parameter value of $\beta$. The advantage of this method is that it is more general than the approach explored in this paper, since it does not rely on being able to integrate all, or a subset of, the system of ODEs analytically. However, the caveat is that it relies on being able to compute derivatives of realisations from a stochastic process; this may require a finite difference method of order higher than first and/or a data smoothing method (e.g. weighted moving average) applied to the realisations prior to numerical differentiation. Although such a test has its place, it is also potentially problematic in that the logic is circular, insomuch as the numerical output (i.e.
realisations of the process) from the code is fed directly back into the original system of ODEs. Hence, depending on how the test code is written, there is the possible danger that the numerical output (realisations), regardless of whether it is correct or erroneous, leads to its own confirmation! Often there is no completely foolproof way to test computer code; in the main, the more general the code, the harder it is to test thoroughly. For software which uses numerical methods it is prudent to benchmark the numerical output using a method which is as far removed as possible from the algorithm/code that was used to compute the original output. This was achieved in this paper by comparing solutions obtained analytically with numerical solutions. For general code designed for a class of models, e.g. stochastic compartmental models, `sufficiently' good test cases need to be designed; here this could arguably be achieved by testing the software in \texttt{scm} using the SIR model as the test case. Ultimately, during software development a judgement call will need to be made as to what constitutes a sufficient degree, and appropriate type, of testing. \bigskip \bigskip \textbf{Summary:} The novel key idea for benchmarking the solutions of stochastic compartmental models is to derive an exact (partial or full) solution analytically from the system of ODEs, such that an expression can be written for the time dependent quantities in terms of a time independent quantity, e.g. a model parameter. Given simulated realisations of an epidemic or biological process, this expression then yields a distribution of estimates of the time independent quantity. The mode of this distribution should equal the actual value of the time independent quantity. This procedure uses realisations from the entire simulation time interval without needing constraints such as the thermodynamic limit and/or the long-time steady state limit.
Furthermore, these techniques could be applied to suitable systems of ODEs other than the SIR and Lotka–Volterra models. The novel techniques presented in this paper can therefore be used to create a robust test of the numerical solutions produced by computer code used to generate realisations from stochastic compartmental models. \section{Acknowledgements} The author would like to thank the Wellcome Trust for funding this research: grant `GEM: translational software for outbreak analysis'. This research was funded in whole, or in part, by the Wellcome Trust [Grant number]. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
\section{Experimental Evaluation}\label{sec:evaluation} We use two Mini-PCs with Intel 5300 wireless NICs as the transmitter and receiver, respectively. The frequency is set at 5GHz and the packet transmission rate is set to 500Hz. As illustrated in Fig.~\ref{fig:system}~(leftmost), the transmitter and receiver are placed on two cartons, 1.2$m$ high and 2.4$m$ apart. We recruit 30 subjects for our experiments; subjects are required to stand still at the mid-perpendicular of the Tx-Rx link. \begin{figure} \centering \includegraphics[width=0.24\textwidth]{figs/valid.pdf} \includegraphics[width=0.24\textwidth]{figs/invalid.pdf} \caption{\textbf{Left.} When user volume increases from 2 to 30, identification accuracy gradually decreases from 100\% to 92\%. \textbf{Right.} Evaluation of the Threshold Learning Algorithm. We randomly select k (2 to 29) subjects as valid users to learn thresholds, while the remaining 30-k (28 to 1) users attack WiPIN. With more legal users and fewer attackers, balanced accuracy increases.} \label{fig:acc30} \vspace{-15pt} \end{figure} \begin{figure*}[tb] \hspace{-0.1in} \centering \hspace{-0.2in} \begin{minipage}[t]{0.24\linewidth} \centering \includegraphics[width=0.93\textwidth]{figs/time_acc1.pdf} \caption{Accuracy vs. Time.} \label{fig:time_acc} \end{minipage} \begin{minipage}[t]{0.246\linewidth} \centering \includegraphics[width=0.97\textwidth]{figs/cloth_acc1.pdf} \caption{Accuracy vs. Clothing.} \label{fig:cloth_acc} \end{minipage} \begin{minipage}[t]{0.246\linewidth} \centering \includegraphics[width=0.975\textwidth]{figs/room_acc1.pdf} \caption{Accuracy vs. Rooms.} \label{fig:room_acc} \end{minipage} \begin{minipage}[t]{0.245\linewidth} \centering \includegraphics[width=0.9\textwidth]{figs/sample_acc1.pdf} \caption{Accuracy vs.
Sampling Time.} \label{fig:accu_samplingTime} \end{minipage} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=.9\columnwidth]{figs/clothes_1.pdf} \caption{Three Apparel Categories.}\label{fig:clothes} \end{figure} \subsection{Quantitative Results}\label{sec:accuracy} For each subject, we record CSI time-serial data 30 times, each recording lasting five seconds. We randomly select 20 out of 30 recordings for training classifiers and learning defending thresholds, and use the remaining 10 for evaluation purposes. We perform the above data collection and segmentation for all 30 subjects. \subsubsection{Performance of Identity Classifiers} We evaluate the accuracy of the identity classifiers~(the ratio of correctly identified test samples to all test samples) as the user volume increases. In particular, to evaluate a user volume of $k$, we randomly select $k$ subjects as users, apply their training data to train ID classifiers, and compute the test accuracy with their test data. We repeat this random user selection, classifier training, and accuracy computation 100 times. Thus for each $k$ we have 100 accuracies, whose averages and quartiles are plotted in Fig.~\ref{fig:acc30}~(left). With this approach, we obtain an accuracy curve for the ID classifiers. We see that WiPIN works well at relatively small user volumes, with accuracy gradually decreasing to 92\% when all subjects are considered as users. \subsubsection{Performance of Threshold Learning} WiPIN can also reject illegal users that were not seen before via threshold learning, as described in Section~\ref{sec:id}. To test this, we select $k$ subjects as authenticated users to learn the threshold, and apply the threshold to all subjects, where the $k$ subjects are valid and should be classified as legal users~(true positive, TP), whereas the remaining $30-k$ users are illegal users that should be correctly rejected~(true negative, TN).
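This accept/reject evaluation is scored by balanced accuracy, the mean of the two class-conditional rates; a minimal sketch with hypothetical counts:

```python
def balanced_accuracy(tp, fn, tn, fp):
    """BA = 0.5 * TP-Rate + 0.5 * TN-Rate: the mean of the rate at
    which legal users are accepted and illegal users are rejected."""
    tp_rate = tp / (tp + fn)  # legal attempts correctly accepted
    tn_rate = tn / (tn + fp)  # attacks correctly rejected
    return 0.5 * tp_rate + 0.5 * tn_rate

# e.g. 18 of 20 legal attempts accepted, 9 of 10 attacks rejected
print(balanced_accuracy(tp=18, fn=2, tn=9, fp=1))  # -> 0.9
```

Balanced accuracy is preferred over raw accuracy here because the legal and illegal groups have unequal sizes ($k$ vs. $30-k$).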
We use the balanced accuracy~(BA = 0.5$\times$TP-Rate + 0.5$\times$TN-Rate) to evaluate the performance of the threshold learning algorithm. Following a similar scheme to that above, we compute BAs and quartiles of threshold learning as the legal user volume increases (and the illegal user volume decreases), and show them in Fig.~\ref{fig:acc30}~(right). We see that the balanced accuracy increases from 0.87~(k=2) to 0.94~(k=29), remaining at a high level. We ascribe the increase in BA to the learning algorithm gaining more knowledge of the CSI of legal users, and hence rejecting illegal attacks better (higher TN-Rate), when trained with data corresponding to more legal users (increasing $k$). \subsection{Evaluation on Robustness} \subsubsection{CSI Stability vs. Time} We recruit 10 subjects, record CSI, and prepare data as in Section~\ref{sec:accuracy} over 15 consecutive days. We apply 2 strategies to train ID classifiers to evaluate the stability of WiPIN. Strategy 1: using the CSI data of the first day as the training set, and the data of the other days as the test set. This strategy simulates the scenario with no updates to WiPIN. Strategy 2: using all CSI data from the first $j$ days as the training set, and the data after the $j$th day as the test set. This strategy simulates the scenario with WiPIN updating enabled. We plot the results in Fig.~\ref{fig:time_acc}. We find that when using Strategy 1~(blue line), the accuracy of WiPIN decreases gradually. This result shows that temporal variation does impact feature stability, with accuracy decreasing to 90\% after 10 days, implying the necessity of updating users' features. On the other hand, if we update WiPIN~(red line), it maintains high accuracy over a certain period. Based on the above analysis, we suggest that a proper updating period is about 10 days in this situation. \subsubsection{Impact of Apparel Changing} It is common for users to change their apparel.
In this evaluation, we roughly divide apparel into three categories, i.e., summer apparel (e.g., T-shirt), autumn/spring apparel (e.g., windbreaker), and winter apparel (e.g., down jacket). We recruit 10 subjects and ask them to wear the three categories of apparel, as shown in Fig.~\ref{fig:clothes}. When a subject wears one outfit, we record the corresponding CSI 15 times, with 10 out of 15 used for training and the remaining 5 for testing. Other evaluation settings are kept the same as in Section~\ref{sec:accuracy}. We perform 10 cases of the evaluation process. Cases 1-3: we alternately select the CSI data collected when subjects wear one category of apparel as the training set, train WiPIN, and evaluate it with the data of all three categories. We call this One for All. Cases 4-6: we select the data of two categories of apparel as the training set, train WiPIN, and evaluate it with the data of all categories. We call this Two for All. Case 7: we select all training data, train WiPIN, and evaluate it with the whole test set. We call this All for All. Cases 8-10: we select the data of one category of apparel as the training set, train WiPIN, and predict using the same category of apparel as the test set. We call this One for Itself. We plot the average classification accuracy for each case in Fig.~\ref{fig:cloth_acc}. The first six bars indicate that changing to a different apparel category has a certain impact on WiPIN. However, WiPIN can achieve at least 77\% accuracy when utilizing only one apparel category as the training set. On the other hand, bars 7-10 in Fig.~\ref{fig:cloth_acc} demonstrate that WiPIN achieves an average accuracy of 94\% if the apparel category remains the same during training and testing. The above results mean that within a certain period, e.g., in summer or in winter, when people generally do not change their apparel drastically, high performance can be guaranteed.
\subsubsection{Impact of Environment Noise} We choose five different places in our office building: an empty laboratory \#1, a crowded laboratory \#2, a crowded seminar room, an empty meeting room, and a narrow corridor. All these places are diverse in terms of surroundings, i.e., environment noise. We involve 10 subjects in this experiment. At each place, we collect 15 CSI series for every subject, of which 10 are used as training data and the rest as test data. Other settings are kept the same as in Section~\ref{sec:accuracy}. The results are shown in Fig.~\ref{fig:room_acc}. We see that the average accuracy is about 94\%. Specifically, in the seminar room and corridor, WiPIN does not work as well as in laboratory \#1 and the meeting room. This is because the multi-path effect is much stronger in the former places, making the CSI components more complex and reducing the weight of the components reflected from the human body. \begin{figure}[t] \centering \begin{minipage}[tb]{0.47\linewidth} \centering \includegraphics[width=0.95\textwidth]{figs/computing_time.pdf} \caption{Computation Overhead.} \label{fig:overhead} \end{minipage} \begin{minipage}[tb]{0.47\linewidth} \centering \includegraphics[width=1\textwidth]{figs/compare_acc1.pdf} \caption{Comparison to Prior Work.} \label{fig:compare_acc} \end{minipage} \vspace{-10pt} \end{figure} \subsection{Computation Overhead} We evaluate computation overhead using the data collected in Section~\ref{sec:accuracy}. We adjust the sampling time from 0 to 5 seconds to determine the CSI sampling requirement~(user standing overhead) for good identity matching performance. Fig.~\ref{fig:accu_samplingTime} shows the results, where we see that WiPIN can achieve 92\% mean accuracy when the user stands for more than 200$ms$. Besides the user standing overhead, the overheads of signal preprocessing, feature extraction, and identity mapping comprise the computation time cost of WiPIN.
According to Fig.~\ref{fig:accu_samplingTime}, we use CSI samples collected in 200$ms$ to calculate the overhead. The computation is done on a desktop PC~(with a 2.7 GHz Intel Core i5 CPU and 8GB DDR3 memory) via Matlab R2015b. The cumulative computation overhead plus the user standing overhead is shown in Fig.~\ref{fig:overhead}, indicating that WiPIN requires about 230$ms$ to identify a person, which is highly time efficient. The low overhead shows that WiPIN is applicable to real-time person identification applications. \subsection{Comparison with Prior Work} We compare WiPIN with three previous works, Wi-Who\cite{zeng2016wiwho}, WiFi-ID\cite{zhang2016wifi}, and FreeSense\cite{xin2016freesense}, in Fig.~\ref{fig:compare_acc}. As Fig.~\ref{fig:compare_acc} shows, WiPIN greatly outperforms these three approaches in terms of accuracy. In addition, these works are all operation-based approaches, requiring the user to walk 2$m$-6$m$, which is inconvenient and raises barriers for real-world applications. \section{Introduction}\label{sec:introduction} Recently, channel state information~(CSI) of Wi-Fi signals has been increasingly exploited for person identification~\cite{xin2016freesense,zeng2016wiwho,zhang2016wifi,wang2016gait,shi2017smart,lv2017device,pokkunuru2018neuralwave} due to its pervasiveness and low deployment cost. Besides, Wi-Fi based person identification enables passive identification, i.e., it is highly user-friendly. More importantly, unlike popular face recognition systems, which are vulnerable to replay attacks \cite{smith2015face}, or fingerprint recognition, which struggles against spoofing attacks from 3D printed models \cite{foolphone}, it is harder to fool a Wi-Fi based person identification system because an attack requires extremely vivid imitation of user behaviors.
Among previous work, CSI monitoring during user ambulation is the most prevailing CSI-based identification approach~\cite{xin2016freesense,zeng2016wiwho,zhang2016wifi,wang2016gait,lv2017device,pokkunuru2018neuralwave}; such methods demand that users walk along pre-defined paths while the corresponding Wi-Fi CSI series are recorded. The recorded CSI is then used either to extract a specific gait metric, such as walking speed, for identification~\cite{wang2016gait}, or directly to learn identity classifiers with machine learning algorithms, e.g., support vector machines~\cite{xin2016freesense,zeng2016wiwho,zhang2016wifi,lv2017device} or deep neural networks~\cite{pokkunuru2018neuralwave}. However, to have one's gait recognized, one must again walk along the pre-defined paths for several meters, e.g., 2-3m in \cite{zeng2016wiwho} and 5m in \cite{xin2016freesense}, which is labor-intensive and time-consuming, and thus largely limits the use scenarios. To overcome the limitations of gait-based approaches, Shi \textit{et al.}~\cite{shi2017smart} propose an activity-based person identification, built on the finding that the CSI series corresponding to daily user activities, such as opening a microwave oven, carry identity-related patterns. Though the approach in \cite{shi2017smart} requires less labor, daily activity patterns are inferior in robustness compared to gait. \begin{figure}[t] \centering \includegraphics[width=0.47\linewidth]{figs/fig1_final.pdf} \includegraphics[width=0.5\linewidth]{figs/simple_mean.pdf} \caption{WiPIN rationale.
When one stands in a Wi-Fi environment~(left), the multi-path effect on the person's body leads to discriminative distortions in the Wi-Fi amplitude (right), which carry body information and can be exploited for person identification.} \label{fig:first} \vspace{-10pt} \end{figure} Retracing previous work, we conclude that Wi-Fi distortions caused by user behaviors, such as walking and other daily activities that embed behavior patterns, can be used for person identification. Moreover, we wondered how the CSI varies if a person performs no operation and just stands in a Wi-Fi environment. Fig.~\ref{fig:first} illustrates our preliminary experimental results on this question. We recruited 10 subjects to stand, one by one, at the same position near the Wi-Fi transmitter and receiver~(left), while the corresponding CSI series were recorded. We average the CSI amplitudes over all recorded samples and plot them in Fig.~\ref{fig:first}~(right). The majority of the amplitudes are discriminative, which demonstrates the potential for person identification. \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{figs/wipin_system.pdf} \caption{System framework. WiPIN is comprised of hardware and algorithms. The algorithms contain modules for signal pre-processing, data preparation, and identity matching. After deployment, WiPIN can identify legal users while rejecting illegal users.
} \label{fig:system} \end{figure*} \begin{figure}[b] \centering \includegraphics[width=0.48\linewidth]{figs/fat.pdf} \includegraphics[width=0.48\linewidth]{figs/muscle.pdf} \caption{Information related to the human body, such as the fat rate and the muscle rate, is embedded in the Wi-Fi distortion and can be well predicted from Wi-Fi signals with support vector regression~\cite{smola2004tutorial}.} \label{fig:fat} \vspace{-5pt} \end{figure} We ascribe the results shown in Fig.~\ref{fig:first} to the fact that, during propagation, Wi-Fi signals must be embedded with certain information related to the human body, such as body shape, body fat rate, and body muscle rate, because of the multi-path effect on the body and the absorption and reflection effects in the body. If this conjecture is correct, it is possible to perform person identification using Wi-Fi signals without any user activity like walking for several meters; standing for a second suffices. To further confirm the conjecture, we measured the body fat rate and muscle rate of these subjects with a Mi$^\circledR$ body fat scale, and then trained a mapping function from CSI to the aforesaid rates with support vector regression (SVR)~\cite{smola2004tutorial} (we do not infer body shape because it is hard to obtain ground truth for it). Fig.~\ref{fig:fat} demonstrates that the CSI distortions caused by standing persons are highly relevant to body fat rate and body muscle rate. With the above analysis, we propose an operation-free passive person identification system, namely WiPIN. It extracts human body information from the Wi-Fi distortion series caused by the user's presence, and then identifies people with well-designed signal pre-processing and identity matching algorithms. One challenge in realizing the system is the mixture of body-relevant signals and interference signals from multiple Wi-Fi propagation paths; we tackle this problem by implementing a multi-path effect mitigation.
Experimental results show that WiPIN achieves 92\% identification accuracy over 30 subjects, with robustness to various experimental settings and a low identification time overhead, i.e., less than 300 ms. We summarize the main contributions of this paper as follows. \begin{itemize} \item We propose WiPIN, a novel Wi-Fi signal-based passive person identification system, which does not require the proactive user engagement of traditional identification systems, such as facing a camera or scanning a finger/iris. In addition, WiPIN is user-friendly and time-efficient in practice. \item We quantitatively study the rationale behind WiPIN, and conclude that, during propagation, Wi-Fi signals are embedded with information related to the human body. This intrinsic body information, which we further use for person identification, is more robust than behavior patterns. \item WiPIN can classify authenticated users, as well as reject illegal users not seen before. We prototype WiPIN using commodity off-the-shelf Wi-Fi devices, and conduct extensive experiments to validate the advantages of WiPIN in various aspects, including accuracy, robustness, and time consumption. \end{itemize} \section{Related Work} Passive person identification that mainly utilizes personal behavior patterns, such as typing~\cite{de2012touch, shahzad2013secure} and breathing~\cite{chauhan2017breathprint}, has proven popular. It is hard to fool these systems because an attack requires an extremely vivid imitation of user behaviors. Among past work, applying Wi-Fi signals to detect a person's walking pattern (aka gait) is one leading schema in the wireless security and privacy community, e.g., WiFiU~\cite{wang2016gait}, WiWho~\cite{zeng2016wiwho}, and WiFi-ID~\cite{zhang2016wifi}. In gait-based work, to get authenticated, users must walk several meters along a pre-defined path, which is labor-intensive and time-consuming, thus making it impractical for use.
In addition, an attacker can record a video of users walking, and then practice walking with a similar gait to impersonate them. Compared to gait-based work, WiPIN requires the user to perform no operation but to stand for more than 200$ms$, which is user-friendly, time-efficient, and meets the requirements of real-time use scenarios. Moreover, WiPIN captures whole-body information, which is highly resilient to attacks. \section{System Design}\label{sec:sys} WiPIN can identify authenticated users while rejecting illegal users. Fig.~\ref{fig:system} illustrates the WiPIN framework, which is comprised of CSI-generation hardware and person identification algorithms. On the hardware end, the transmitter~(Tx) broadcasts Wi-Fi signals, and the receiver~(Rx) records the signals. Using the Linux 802.11n CSI tool~\cite{halperin2011tool}, we can parse the CSI of 30 orthogonal frequency division multiplexing (OFDM) subcarriers at a 5GHz central frequency from the recorded signals. Formally, denote the CSI by $H$ and suppose we record $t$ Wi-Fi samples; then $H \in C^{t\times 30}$, where $C$ stands for the complex numbers, so $H$ is essentially time-series data. In this paper, we only use the amplitudes of the CSI, leading to $H \in R^{t\times 30}$~($R$ for the real numbers). Next we detail the algorithm end of WiPIN. \subsection{Signal Processing} \subsubsection{Noise Removal} Due to hardware imperfections~\cite{xie2015precise}, the sampled CSI series, i.e., $H$, contain considerable noise. As an example, in Fig.~\ref{fig:filter} we plot the CSI series recorded within 1 second while a subject stands as shown in Fig.~\ref{fig:system}. In the figure, each line stands for the amplitude series of one subcarrier (30 lines in all). To eliminate the high-frequency jitters, we design a low-pass Butterworth filter~\cite{selesnick1998generalized}.
In particular, we experimentally set the parameters of the Butterworth filter to 5th order with a cut-off frequency of 10Hz. The filtering results are illustrated in Fig.~\ref{fig:filter}~(b), which demonstrates that the Butterworth filter with the above settings can significantly reduce the noise in the CSI series. \begin{figure}[t] \subfigure[Raw CSI.]{ \centering \hspace{-0.1in} \includegraphics[width=0.48\columnwidth]{figs/raw_csi2.pdf} } \subfigure[Filtered CSI.]{ \centering \hspace{-0.1in} \includegraphics[width=0.48\columnwidth]{figs/filter_csi2.pdf} } \caption{Noise removal via Butterworth filter.}\label{fig:filter} \vspace{-10pt} \end{figure} \subsubsection{Multi-path Effect Mitigation} WiPIN uses omnidirectional antennas to broadcast and receive Wi-Fi signals, which makes the CSI a mixture of signals from multiple propagation paths, including the line-of-sight path, the paths reflected from the human body, and other reflection paths. This phenomenon is called the multi-path effect~\cite{rappaport1996wireless} and can be expressed by the following formula, \begin{equation}\label{eq:multipath} \textup{H}=\sum_{k=1}^{n}a_{k}e^{-j2{\pi}f{\tau}_{k} }, \end{equation} where $k$ is the index of the paths, and $a_{k}$ and ${\tau}_{k}$ are the power decay and time delay of the $k$-th path, respectively. To make the CSI more relevant to the person's body, we aim to mitigate the signals received from the other paths. The bandwidth~($B$) in WiPIN is 40MHz. Correspondingly, the time resolution is $\Delta t=1/B=25ns$, which yields a distance resolution of $\Delta d=c\Delta t=c/B=7.5m$, where $c$ is the speed of electromagnetic waves in the air, approximately $3\times 10^8 m/s$. In the settings of this paper~(see one example in Fig.~\ref{fig:system}, leftmost), the major paths reflected from the person's body are within $7.5m$, indicating that the CSI components from body paths should have almost the least time delays (within $25ns$).
Thus, we first apply the inverse fast Fourier transform~(IFFT) on each sample of $H$, i.e., $h_f \in R^{1\times 30}$, to convert $h_f$ to the time domain~($h_t$); we keep the item with the least time delay while suppressing the subsequent items~(dividing them by a large number, i.e., 1000), and then convert $h_t$ back to the frequency domain by the fast Fourier transform~(FFT). We apply these IFFT \& FFT operations on every sample of $H$, greatly mitigating the multi-path effect on the CSI and making the CSI much more relevant to the human body. \begin{figure}[b]\label{fig:multipath} \subfigure[CSI Time Domain.]{ \centering \hspace{-0.1in} \includegraphics[width=0.465\columnwidth]{figs/ifft_2.pdf} } \subfigure[CSI Frequency Domain.]{ \centering \hspace{-0.1in} \includegraphics[width=0.465\columnwidth]{figs/fft2.pdf} } \caption{Partial multipath effect mitigation. \textbf{Left:} in the CSI time domain, we keep the item with the least time delay~(red box), and suppress~(by dividing by 1000) the remaining items~(blue box). \textbf{Right:} we then convert the CSI back to the frequency domain. The above operations largely mitigate the CSI components reflected from other paths, and focus on the person's body.}\label{fig:fft} \end{figure} \subsection{Feature Extraction} We retain the CSI of all subcarriers, since the absorption and reflection at different frequencies need to be involved in the final features. Specifically, the CSI at all subcarriers captures the frequency-selective decline that Wi-Fi signals present \cite{franceschetti2006wireless} over a unique person's body. After mitigating the impact of the multi-path effect, we average the CSI time series to obtain 30 distinct features. Besides these, we leverage another 9 features to depict the CSI frequency-domain profile; one profile example is shown in Fig.~\ref{fig:fft}~(right). The features are the (1) mean, (2) standard deviation, (3) median absolute deviation, (4) mean absolute deviation, (5) interquartile range, (6) root mean square, (7) skewness, (8) kurtosis, and (9) entropy.
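To make the pre-processing pipeline above concrete before discussing the individual statistics, the following Python sketch implements the two steps of this section: Butterworth denoising and keep-first-tap multipath suppression on a CSI amplitude matrix. This is a minimal sketch under our own assumptions: the $(t,30)$ array layout, the sampling rate \texttt{fs}, and the use of zero-phase filtering are illustrative choices; the paper itself only fixes the 5th-order/10Hz filter and the divide-by-1000 suppression.

```python
# Sketch of WiPIN-style CSI pre-processing. H is assumed to be a (t, 30)
# array of CSI amplitudes (t samples, 30 OFDM subcarriers).
import numpy as np
from scipy.signal import butter, filtfilt


def denoise(H, fs=1000.0, order=5, cutoff=10.0):
    """Low-pass each subcarrier's amplitude series.

    The sampling rate fs is an assumption; the paper specifies a 5th-order
    Butterworth filter with a 10 Hz cut-off. filtfilt (zero-phase) is our
    choice, not stated in the paper.
    """
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, H, axis=0)


def mitigate_multipath(H, suppress=1000.0):
    """Per sample: IFFT across subcarriers to the time domain, keep the
    first (least-delay) tap, divide the later taps by `suppress`, then
    FFT back to the frequency domain and take amplitudes."""
    h_t = np.fft.ifft(H, axis=1)   # time-domain taps for each sample
    h_t[:, 1:] /= suppress         # suppress longer-delay (non-body) paths
    return np.abs(np.fft.fft(h_t, axis=1))
```

A flat spectrum (all subcarriers equal) has all its energy in the least-delay tap, so `mitigate_multipath` leaves it unchanged, which is a quick way to sanity-check the implementation.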
The first 8 statistics are common in time-series mining~\cite{thomaz2015practical,reyes2016transition}, so we only explain the entropy we propose in WiPIN. The entropy describes the degree of dispersion of the CSI profile. Assume that the maximum and minimum values of the CSI are $M$ and $m$, respectively. To calculate the entropy, we equally divide $[m, M]$ into 10 bins, and count the number of CSI values that fall in the $i$-th bin, i.e., $n_i$. Then we take $\frac{n_i}{30}$ as the probability that a CSI value falls in the $i$-th bin, denoted by $p_i$. The entropy of the CSI profile is computed via Equation~\ref{eq:entropy}: \begin{equation}\label{eq:entropy} \boldsymbol{E} =- \sum_{i=1}^{10}p_i \log p_i, \end{equation} where we define $p_i\log p_i=0$ if $p_i=0$. After obtaining the features, we split the collected CSI into a training set and a test set (the split details will be described in Section~\ref{sec:evaluation}). Because the scales of these 39 features are not identical, we normalize them into $[-1, +1]$ via the following equation, \begin{equation}\label{eq:scale} x_i'= \frac{(2x_i -\max - \min)}{(\max - \min)}, \end{equation} where $\max$ and $\min$ are the maximum and minimum of the $i$-th feature in the training set, respectively; $x_i$ is the $i$-th feature of one training/test instance $x$, and $x_i'$ is the corresponding normalized feature. \subsection{Identity Matching}\label{sec:id} \begin{figure}[tbp]\label{fig:recflow} \centering \includegraphics[width=0.9\linewidth]{figs/threshold_1.pdf} \caption{Threshold learning process. We learn the identification threshold from the distribution of prediction scores. Then, those whose maximal prediction score exceeds the threshold are considered authenticated users. Once authenticated, the index of the maximal prediction score is the predicted user ID.}\label{fig:threshold} \end{figure} \subsubsection{Classifier Training} Person ID classifiers are trained to identify the user ID.
Using the training set, we train the person ID classifiers via a support vector machine (SVM) toolbox, LIBLINEAR~\cite{fan2008liblinear}. The classifiers are trained with the L2-regularized, L2-loss, primal formulation, a radial basis function~(RBF) kernel, and the default hyper-parameters. Besides, we use a one-against-all training strategy, which learns $N$ classifiers if the number of users is $N$. The $N$ classifiers produce $N$ prediction scores for one instance. In this setting, the predicted ID is the index of the classifier that outputs the highest score, formally as in Equation~\ref{eq:prediction}, \begin{equation}\label{eq:prediction} i^{*} = \arg\max_{i \in \{1,2,\ldots,N\}} y_i, \end{equation} where $y_i$ is the score of the classifier for the $i$-th person. \subsubsection{Threshold Learning} According to Equation~\ref{eq:prediction}, the classifiers always predict one of the trained IDs~(aka authenticated IDs), even for illegal users, which makes WiPIN vulnerable. Thus we propose a threshold learning algorithm to enable WiPIN to reject illegal users. Recall that the $N$ classifiers produce $N$ scores over the trained users. With the SoftMax function below, we normalize these $N$ scores into $[0, 1]$, \begin{equation} \label{eq:softmax} y_i' = \frac{e^{y_i}} {\sum_{i=1}^{N} e^{y_i}}, \end{equation} where $y_i$ is the same as in Equation~\ref{eq:prediction} and $y_i'$ is the normalized score. The normalized score can be interpreted as the prediction confidence, e.g., $y_4'=0.6$ means that the classifiers regard this person as the 4th user with 60\% confidence. The threshold to identify illegal users is learned from the distribution of the normalized scores. More precisely, for one training instance, we compute its maximal normalized prediction score, denoted by $s$. We then have the set $S$ of maximal prediction scores computed from the whole training set. After removing misclassified instances, we take the 5th percentile of these scores as the threshold.
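The identity-matching stage described above (one-against-all scores, SoftMax normalization, and a 5th-percentile rejection threshold) can be sketched as follows. This is a minimal sketch: the score matrix, the function names, and the numerically stable SoftMax variant are our assumptions, and the underlying classifiers (trained with LIBLINEAR in the paper) are abstracted away as raw score vectors.

```python
import numpy as np


def softmax(scores):
    """SoftMax over the last axis; subtracting the max is a standard
    numerical-stability trick and does not change the result."""
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


def learn_threshold(train_scores, train_labels, percentile=5):
    """train_scores: (n, N) raw one-against-all scores for n training
    instances and N users. Keep only correctly classified instances,
    then take the 5th percentile of their maximal normalized scores."""
    probs = softmax(train_scores)
    pred = probs.argmax(axis=1)
    correct_max = probs[pred == train_labels].max(axis=1)
    return np.percentile(correct_max, percentile)


def identify(scores, threshold):
    """Return the predicted user ID, or -1 for a rejected (illegal) user."""
    probs = softmax(scores)
    return int(probs.argmax()) if probs.max() >= threshold else -1
```

With four users, a score vector with no clearly dominant entry yields near-uniform confidences and is rejected, while a dominant entry is mapped to that user's ID.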
For example, we plot the 100 maximal scores of 100 training instances in Fig.~\ref{fig:scores}. The 5th percentile is 0.5278; thus, in this situation, WiPIN regards a person as an illegal user if her maximal normalized prediction score is lower than 0.5278. Otherwise, the trained classifiers output an ID prediction via Equation~\ref{eq:prediction}. \begin{figure}[tbp] \centering \includegraphics[width=0.85\linewidth]{figs/normalized_score.pdf} \caption{100 examples of maximal normalized classification scores. The value at the 5th percentile point~(red circle) serves as the rejection threshold.}\label{fig:scores} \vspace{-10pt} \end{figure} \section{Conclusion} In this paper, we propose WiPIN, an operation-free person identification system using Wi-Fi signals. Compared to previous work, it is much more user-friendly, time-efficient, and robust. To realize WiPIN, we carefully design algorithms for signal processing, feature extraction, and identity matching. Besides, we prototype WiPIN on commodity off-the-shelf Wi-Fi devices and extensively evaluate it on a group of 30 subjects from various aspects. Experimental results show that WiPIN achieves competitive performance in identifying authenticated users as well as rejecting illegal users. {\small \bibliographystyle{IEEEtran}
\section{Introduction} Let $\mathbb N$ denote the set of nonnegative integers. A \emph{numerical semigroup} $\Gamma$ is a submonoid of $\mathbb N$ with finite complement in $\mathbb N$ (this condition is equivalent to $\gcd(\Gamma)=1$). If $\Gamma$ is a numerical semigroup, the elements in $\mathbb N\setminus \Gamma$ are the \emph{gaps} of $\Gamma$. The cardinality of $\mathbb N\setminus \Gamma$ is the \emph{genus} of $\Gamma$, $\mathrm g(\Gamma)$. The largest integer not in $\Gamma$ is called the \emph{Frobenius number} of $\Gamma$, and will be denoted by $\mathrm F(\Gamma)$. Clearly, $\mathrm F(\Gamma)+1 + \mathbb N\subseteq \Gamma$, and this is why $\mathrm c(\Gamma)=\mathrm F(\Gamma)+1$ is known as the \emph{conductor} of $\Gamma$. Since for every $x\in \Gamma$, $\mathrm F(\Gamma)-x$ cannot be in $\Gamma$, we deduce that $\mathrm g(\Gamma)\ge \frac{\mathrm c(\Gamma)}2$. We say that $\Gamma$ is \emph{symmetric} when the equality holds, or equivalently, for every integer $x$, $x\not\in \Gamma$ implies $\mathrm F(\Gamma)-x\in \Gamma$. In this setting, $\mathrm c(\Gamma)$ is an even integer, and thus $\mathrm F(\Gamma)$ is odd. It can be easily proved that any numerical semigroup admits a unique \emph{minimal generating system} (every element is a linear combination of elements in this set with nonnegative integer coefficients and none of its proper subsets fulfills this condition; see for instance \cite[Chapter 1]{ns-book}). If $A=\{r_0,\ldots,r_h\}$ is the minimal generating set of $\Gamma$, then its elements are called \emph{minimal generators}, and its cardinality is the \emph{embedding dimension} of $\Gamma$, $\mathrm e(\Gamma)$. The smallest minimal generator is the smallest positive integer belonging to the semigroup, and it is known as the \emph{multiplicity} of $\Gamma$, denoted by $\mathrm m(\Gamma)$. The map \[ \mathbb N^{\mathrm e(\Gamma)}\to \Gamma,\ \varphi(a_0,\ldots,a_h)=a_0r_0+\cdots +a_hr_h\] is a monoid epimorphism. 
Hence $\Gamma$ is isomorphic to $\mathbb N^{\mathrm e(\Gamma)}/\ker \varphi$, where $\ker \varphi=\{ (a,b)\in \mathbb N^{\mathrm e(\Gamma)}\times \mathbb N^{\mathrm e(\Gamma)} ~|~ \varphi(a)=\varphi(b)\}$ ($\ker \varphi$ is a congruence on $\mathbb N^{\mathrm e(\Gamma)}$). A \emph{presentation} for $\Gamma$ is a set of generators of the congruence $\varphi$, and a \emph{minimal presentation} is a set of generators minimal with respect to set inclusion (actually, in our setting also with respect to cardinality; see \cite[Corollary 8.13]{ns-book}). It can be shown that the cardinality of any minimal presentation is greater than or equal to $\mathrm e(\Gamma)-1$, \cite[Theorem 9.6]{ns-book}. A numerical semigroup is a \emph{complete intersection} if this equality holds. Given $A$ a set positive integers, and $A=A_1\cup A_2$ a non trivial partition of $A$, we say that $A$ is the \emph{gluing} of $A_1$ and $A_2$ if $\mathrm{lcm}(d_1,d_2)\in \langle A_1\rangle \cap \langle A_2\rangle$, where $d_i=\gcd(A_i)$ and $\langle A_i\rangle$ denotes the monoid generated by $A_i$, $i=1,2$. If $A$ is the minimal system of generators of $\Gamma$, and $\Gamma_i$ is the numerical semigroup generated by $A_i/d_i$, $i=1,2$, we also say that $\Gamma$ is the \emph{gluing} of $\Gamma_1$ and $\Gamma_2$. It turns out that $d_1\in\Gamma_2$, $d_2\in\Gamma_1$, $\gcd(d_1,d_2)=1$, and neither $d_1$ is a minimal generator of $\Gamma_2$ nor $d_2$ is a minimal generator of $\Gamma_1$ (\cite[Section 8.3]{ns-book}). Delorme proved in \cite[Proposition 9]{delorme} that a numerical semigroup is a complete intersection if and only if it is a gluing of two complete intersection numerical semigroups (though with a different notation; the concept of gluing was introduced in \cite{gluing}). The gluing of symmetric numerical semigroups is symmetric (\cite[Proposition 10 (iii)]{delorme}), and as a consequence of this, complete intersections are symmetric. 
In \cite{fundamental-gaps} there is a procedure to construct the set of all numerical semigroups with a given Frobenius number. We show in this manuscript how we can use the concept of gluing to compute the set of all complete intersection numerical semigroups with a given Frobenius number (or, equivalently, with fixed genus). Recently there have been experimental results pointing to the possibility that the number of numerical semigroups with a fixed genus has a Fibonacci-like behaviour (\cite{bras}). Indeed, it is known that asymptotically the number of numerical semigroups with given genus grows as the Fibonacci sequence (\cite{zhai}). However, there is no proof of this for every genus, and we do not even have a proof that there are more numerical semigroups with genus $g+1$ than with genus $g$. This is not the case for complete intersection numerical semigroups, as we see in the last section. We also show how to calculate the set of all free (in the sense of \cite{bertin}) numerical semigroups, which form a special subclass of complete intersections, the set of all telescopic numerical semigroups (contained in the set of free numerical semigroups), and that of numerical semigroups associated to an irreducible plane curve singularity (these are a particular case of telescopic numerical semigroups). The recursive nature of gluing also allows us to give bounds for the generators and the embedding dimension of these families of semigroups when we fix the Frobenius number. The deeper we go in the chain of inclusions given in the preceding paragraph, the smaller the bounds become. \section{The Frobenius number and multiplicity of a complete intersection} Let $\Gamma$ be a numerical semigroup. We know that $\Gamma$ is a complete intersection if and only if it is the gluing of two complete intersections.
Delorme (though with a different notation) highlighted in \cite[Section 11]{delorme} that this fact can be used to determine whether a numerical semigroup is a complete intersection (this idea has already been exploited in \cite{bermejo}; and in \cite{free} one can find a procedure to determine whether an affine semigroup is the gluing of two affine semigroups), and also to compute the set of all complete intersections. In order to construct the set of all complete intersection numerical semigroups with given Frobenius number, we can proceed recursively by using the following formula for the Frobenius number of the gluing of two numerical semigroups, which is just a reformulation of Delorme's description of the conductor of a gluing. \begin{proposition}\label{frob-gluing} Assume that $\Gamma$ is a numerical semigroup minimally generated by $A=A_1\cup A_2$, and that $A$ is the gluing of $A_1$ and $A_2$. Let $d_1=\gcd(A_1)$ and $d_2=\gcd(A_2)$. Define $\Gamma_1=\langle A_1/d_1\rangle$ and $\Gamma_2=\langle A_2/d_2\rangle$. Then \[ \mathrm F(\Gamma)= d_1\mathrm F(\Gamma_1)+d_2\mathrm F(\Gamma_2)+ d_1d_2. \] \end{proposition} \begin{proof} Observe that $\Gamma = d_1 \Gamma_1+ d_2\Gamma_2$. By \cite[Proposition 10 (i)]{delorme}, \begin{equation}\label{formula-c} \mathrm c(\Gamma)=d_1 \mathrm c(\Gamma_1) + d_2 \mathrm c(\Gamma_2)+(d_1-1)(d_2-1). \end{equation} Having in mind the relationship between the Frobenius number and the conductor, the formula follows easily. \end{proof} In Proposition \ref{frob-gluing}, $\Gamma= d_1\Gamma_1+d_2\Gamma_2$ and $d=d_1d_2\in d_1\Gamma_1\cap d_2\Gamma_2$. The integer $d$ is the element where the gluing takes place. If we repeat the process with $d_1\Gamma_1$ and $d_2\Gamma_2$ in this result, we construct a decomposition tree of $\Gamma$, whose leaves are semigroups isomorphic to $\mathbb N$ (this was the idea followed in \cite{b-g-r-v}). Assume that $d^{(1)},\ldots,d^{(h)}$ are the elements where the gluings take place in this splitting.
The Frobenius number of $\Gamma$ is precisely $\sum_{i=1}^h d^{(i)} -\sum_{a\in A}a$ (see \cite[Section 11]{delorme}, where it is highlighted that this formula is a particular case of a result given in \cite{herzog-kunz}). \begin{example} Let $\Gamma=\langle 10,14,15,21\rangle$. Then $\Gamma= \langle 10, 15\rangle + \langle 14,21\rangle$ and $35=5\times 7\in \langle 10,15\rangle\cap \langle 14,21\rangle$. We repeat the process for $\langle 10,15\rangle=\langle 10\rangle + \langle 15\rangle$ and $\langle 14,21\rangle= \langle 14\rangle +\langle 21\rangle$. We get $30\in \langle 10\rangle \cap \langle 15\rangle$ and $42\in \langle 14\rangle \cap \langle 21\rangle$. Hence the gluings take place at $35$, $30$ and $42$. Thus $\mathrm F(\Gamma)= (35+30+42)-(10+14+15+21)=47$. \end{example} \begin{example} We construct a complete intersection numerical semigroup with four generators by gluing two embedding dimension two numerical semigroups.
\begin{verbatim}
gap> s:=NumericalSemigroup(10,11);;
gap> t:=NumericalSemigroup(7,9);;
gap> g:=NumericalSemigroup(16*10,16*11,21*7,21*9);;
gap> FrobeniusNumber(g);
2747
gap> 16*FrobeniusNumber(s)+21*FrobeniusNumber(t)+16*21;
2747
\end{verbatim}
\end{example} \begin{remark} For $\Gamma=\mathbb N$, we have \[\mathrm c(\mathbb N)=0,\quad \mathrm F(\mathbb N)=-1,\quad \mathrm g(\mathbb N)=0,\quad \mathrm m(\mathbb N)=1, \quad \mathrm e(\mathbb N)=1. \] \end{remark} \begin{proposition}\label{lower-bound-m-complete-intersection} If $\Gamma$ is a complete intersection, then \[\mathrm m(\Gamma)\ge 2^{\mathrm e(\Gamma)-1}.\] \end{proposition} \begin{proof} Let $h=\mathrm e(\Gamma)-1$. We use induction on $h$. For $h\le 1$, the statement follows trivially. As $\Gamma$ is a complete intersection, if $A$ is its minimal set of generators, we can find a partition $A=A_1\cup A_2$ such that $A$ is the gluing of $A_1$ and $A_2$. Set as above $d_i=\gcd(A_i)$, and $\Gamma_i=\langle A_i/d_i\rangle$. Let $h_i=\mathrm e(\Gamma_i)-1$. Hence $h=h_1+h_2+1$.
By induction hypothesis $\mathrm m(\Gamma_i)\ge 2^{h_i}$. Recall that $d_1\in \Gamma_2$ and $d_2\in \Gamma_1$, and they are not minimal generators. Thus $d_1\ge 2\mathrm m(\Gamma_2)\ge 2^{h_2+1}$, and analogously $d_2\ge 2^{h_1+1}$. For every $a\in A_1$, $a/d_1$ is a minimal generator of $\Gamma_1$, whence $a/d_1\ge 2^{h_1}$. Therefore $a\ge 2^{h_1+h_2+1}=2^h$. The same argument shows that any element in $A_2$ is greater than or equal to $2^h$. \end{proof} \begin{example}\label{example-recursive-family} We construct recursively a family $\{\Gamma^{(n)}\}_{n\in \mathbb N}$ of complete intersection numerical semigroups reaching the bound of Proposition \ref{lower-bound-m-complete-intersection}. We start with $\Gamma^{(1)}=\langle 2,3\rangle$, and the general element in the sequence is defined as $\Gamma^{(n+1)}= 2\Gamma^{(n)}+(2^{n+1}+1)\mathbb N$. For instance, $\Gamma^{(2)}=2\langle 2,3\rangle+ 5\mathbb N=\langle 4,5,6\rangle$, $\Gamma^{(3)}=2\langle 4,5,6\rangle + 9\mathbb N= \langle 8,9,10,12\rangle$, and so on. It is not hard to prove that \begin{multline*} \Gamma^{(n+1)}=\langle 2^{n+1}, 2^{n+1}+1, 2^{n+1}+2, 2^{n+1}+2^2,\ldots, 2^{n+1}+2^{n}\rangle\\= 2\langle 2^n, 2^n+1,\ldots, 2^n+2^{n-1}\rangle +(2^{n+1}+1)\mathbb N. \end{multline*} Notice that $\Gamma^{(n+1)}$ is a gluing of $\Gamma^{(n)}$ and $\mathbb N$, since \begin{itemize} \item $2\in \mathbb N$ and $2$ is not a minimal generator of $\mathbb N$, \item $2^{n+1}+1$ is the sum of the two smallest minimal generators of $\Gamma^{(n)}$; thus $2^{n+1}+1$ belongs to $\Gamma^{(n)}$ and it is not a minimal generator of $\Gamma^{(n)}$, \item $\gcd(2,2^{n+1}+1)=1$. \end{itemize} It follows that $\mathrm m(\Gamma^{(n)})=2^n$ and $\mathrm e(\Gamma^{(n)})=n+1$. Thus the bound in Proposition \ref{lower-bound-m-complete-intersection} is attained. 
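These invariants can be double checked mechanically for small $n$. The following is a minimal Python sketch (all helper names are ours, not part of any library): it builds the generating set of $\Gamma^{(n)}$ and verifies that the least generator is $2^n$ and that no generator lies in the semigroup generated by the remaining ones, so that the embedding dimension is indeed $n+1$.

```python
def family(n):
    """Minimal generating set of Gamma^(n): {2^n} U {2^n + 2^k : 0 <= k < n}."""
    return [2**n] + [2**n + 2**k for k in range(n)]

def contains(gens, x):
    """Membership in the additive monoid generated by gens (simple sieve)."""
    ok = [True] + [False] * x
    for m in range(1, x + 1):
        ok[m] = any(m >= g and ok[m - g] for g in gens)
    return ok[x]

for n in range(1, 6):
    gens = family(n)
    assert min(gens) == 2**n      # multiplicity m(Gamma^(n)) = 2^n
    assert len(gens) == n + 1     # embedding dimension e(Gamma^(n)) = n + 1
    # minimality: no generator is a sum of the other ones
    assert all(not contains([g for g in gens if g != r], r) for r in gens)
```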
\end{example} \begin{corollary}\label{upper-bound-ed-complete-intersections} If $\Gamma$ is a complete intersection numerical semigroup other than $\mathbb N$, then \[\mathrm e(\Gamma) \le \log_2(\mathrm c(\Gamma))+1.\] \end{corollary} \begin{proof} By Proposition \ref{lower-bound-m-complete-intersection}, $2^{\mathrm e(\Gamma)-1}\le \mathrm m(\Gamma)$. Since $\Gamma\neq \mathbb N$, we have that $\mathrm m(\Gamma)\le \mathrm c(\Gamma)$, and the bound follows. \end{proof} \begin{remark} Notice that in the proof of Corollary \ref{upper-bound-ed-complete-intersections} we use $\mathrm m(\Gamma)\le \mathrm c(\Gamma)$. For $\Gamma=\langle 2,3\rangle$, we get an equality and also the bound given in this corollary is reached. If $\mathrm m(\Gamma)=\mathrm c(\Gamma)$, then $\Gamma=\langle m, m+1,\ldots, 2m-1\rangle$, with $m=\mathrm m(\Gamma)$. Hence $\mathrm e(\Gamma)=m$, that is, $\Gamma$ has maximal embedding dimension (it is easy to see that the embedding dimension of a numerical semigroup is always less than or equal to its multiplicity; see for instance \cite[Chapter 1]{ns-book}). It is well known that the cardinality of a minimal presentation of a maximal embedding dimension numerical semigroup with multiplicity $m$ is $\frac{m(m-1)}2$ (see for instance \cite[Corollary 8.27]{ns-book}). Hence a maximal embedding dimension numerical semigroup with multiplicity $m$ is a complete intersection if and only if $\frac{m(m-1)}2=m-1$, or equivalently, either the numerical semigroup is $\mathbb N$ or $m=2$. If in addition we impose that the conductor and the multiplicity agree, then the only two possibilities are $\mathbb N$ and $\langle 2,3\rangle$. From the definitions of multiplicity and conductor, it is easy to see that there is no numerical semigroup $\Gamma$ such that $\mathrm c(\Gamma)=1+\mathrm m(\Gamma)$. If $\mathrm c(\Gamma)=2+\mathrm m(\Gamma)$, then $\Gamma = \langle m, m+2,m+3,\ldots, 2m-1,2m+1\rangle$, which is a maximal embedding dimension numerical semigroup. 
So the only complete intersection with $\mathrm c(\Gamma)=2+\mathrm m(\Gamma)$ is $\langle 2,5\rangle$. The case $\mathrm c(\Gamma)=3+\mathrm m(\Gamma)$ requires more effort. In this setting $m=\mathrm m(\Gamma)>2$. We have two possibilities. \begin{itemize} \item $\Gamma= \langle m, m+3,m+4,\ldots, 2m-1,2m+1,2m+2\rangle$, which has maximal embedding dimension, and so it cannot be a complete intersection numerical semigroup, because $m>2$. \item $\Gamma= \langle m, m+1, m+3,m+4,\ldots, 2m-1\rangle$. Here $\mathrm e(\Gamma)= m-1$ and the minimum element in $\Gamma$ congruent to $2$ modulo $m$ is $2m+2=(m+1)+(m+1)$. Thus in view of \cite[Theorem 1(2)]{high-ed}, the cardinality of a minimal presentation for $\Gamma$ is $\frac{(m-1)(m-2)}{2}$. We conclude that $\Gamma$ is a complete intersection if and only if $\frac{(m-1)(m-2)}2=m-2$, and as $m>2$, this is equivalent to $m=3$. Hence $\Gamma=\langle 3,4\rangle$. \end{itemize} Therefore, if we assume that $\Gamma\not\in \{\mathbb N, \langle 2,3\rangle, \langle 2,5\rangle, \langle 3,4\rangle\}$, and $\Gamma$ is a complete intersection numerical semigroup, then we can assert that $\mathrm c(\Gamma)\ge \mathrm m(\Gamma)+4$, and the bound in Corollary \ref{upper-bound-ed-complete-intersections} can be slightly improved to \[ \mathrm e(\Gamma) \le \log_2(\mathrm c(\Gamma)-4)+1. \] This bound is attained for instance by $ \langle 2, 7 \rangle$, $\langle 4, 5, 6 \rangle$ and $\langle 4, 6, 7 \rangle$. By using \cite[Section 1.2]{high-ed}, we can determine those complete intersections with $\mathrm c(\Gamma)=\mathrm m(\Gamma)+4$, and thus obtain another small improvement of the above bound. \end{remark} We can improve this bound by using a different strategy. \begin{proposition}\label{lower-bound-c-ci} Let $\Gamma$ be a complete intersection numerical semigroup. Then \[(\mathrm e(\Gamma)-1) 2^{\mathrm e(\Gamma)-1}\le \mathrm c(\Gamma).\] \end{proposition} \begin{proof} We use induction on the embedding dimension of $\Gamma$.
If the embedding dimension of $\Gamma$ is either one or two, then the result holds trivially. So assume that $\mathrm e(\Gamma)\ge 3$. As $\Gamma$ is a complete intersection, we know that there exist two complete intersection numerical semigroups $\Gamma_1$ and $\Gamma_2$ such that $\Gamma$ is the gluing of $\Gamma_1$ and $\Gamma_2$. Thus there exist $d_1\in \Gamma_2$ and $d_2\in \Gamma_1$, which are not minimal generators, such that $\Gamma=d_1\Gamma_1+d_2\Gamma_2$. For the sake of simplicity write $c=\mathrm c(\Gamma)$, $e=\mathrm e(\Gamma)$, $c_i=\mathrm c(\Gamma_i)$ and $e_i=\mathrm e(\Gamma_i)$, $i=1,2$. Then from the definition of gluing we already know that $e=e_1+e_2$. Since $e\ge 3$, we may assume without loss of generality that $e_1\ge 2$. As $d_1$ is not a minimal generator of $\Gamma_2$, $d_1\ge 2\mathrm m(\Gamma_2)$, and as $e_1\ge 2$ and $d_2$ is not a minimal generator of $\Gamma_1$, $d_2\ge 2\mathrm m(\Gamma_1)+1$. In view of Proposition \ref{lower-bound-m-complete-intersection}, we deduce $d_1\ge 2^{e_2}$ and $d_2\ge 2^{e_1}+1$. Now from \eqref{formula-c}, we have $c=d_1c_1+d_2c_2+(d_1-1)(d_2-1)$. By induction hypothesis and the preceding paragraph, we get $c\ge 2^{e_2}(e_1-1)2^{e_1-1}+(2^{e_1}+1)(e_2-1)2^{e_2-1}+(2^{e_2}-1)2^{e_1} = (e-2)2^{e-1}+ (e_2-1)2^{e_2-1}+ 2^e-2^{e_1} \ge (e-1)2^{e-1}-2^{e-1}+2^e-2^{e_1}= (e-1)2^{e-1}+2^{e-1}-2^{e_1}\ge (e-1)2^{e-1}$. \end{proof} \begin{example}\label{recursive-family-ii} Let $\{\Gamma^{(n)}\}_{n\in \mathbb N}$ be the family of numerical semigroups presented in Example \ref{example-recursive-family}. By using \eqref{formula-c}, it is not hard to check inductively that $\mathrm c(\Gamma^{(n)})= n2^n$, and thus the bound of Proposition \ref{lower-bound-c-ci} is attained.
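For small $n$, the equality $\mathrm c(\Gamma^{(n)})=n2^n$ can also be confirmed by brute force. The following is a minimal Python sketch (the function name and the crude sieve bound are ours):

```python
def frobenius(gens):
    """Frobenius number of the numerical semigroup generated by gens,
    computed by a sieve; min(gens)*max(gens) is a crude upper bound,
    sufficient for the semigroups tested here."""
    bound = min(gens) * max(gens)
    in_s = [True] + [False] * bound
    for x in range(1, bound + 1):
        in_s[x] = any(x >= g and in_s[x - g] for g in gens)
    return max(x for x in range(bound + 1) if not in_s[x])

for n in range(1, 6):
    gens = [2**n] + [2**n + 2**k for k in range(n)]   # Gamma^(n)
    assert frobenius(gens) + 1 == n * 2**n            # c(Gamma^(n)) = n*2^n
```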
If we have a closer look at the proof of Proposition \ref{lower-bound-c-ci}, then we easily deduce that for the bound to be attained, the following must hold in all induction steps with $e\ge 3$: \begin{itemize} \item $(e_2-1)2^{e_2-1}=0$ and thus $e_2=1$, that is, $\Gamma_2$ is $\mathbb N$ (we will study these semigroups in the next section); \item from $e_2=1$ it follows that $e_1=e-1$ and $2^{e-1}-2^{e_1}=0$; \item $\mathrm m(\Gamma_1)=2^{e_1-1}$ and $d_2=2\mathrm m(\Gamma_1)+1=2^{e_1}+1$, whence $\mathrm m(\Gamma_1)+1\in \Gamma_1$; \item $c_1=(e_1-1)2^{e_1-1}=(e-2)2^{e-2}$; \item $d_1=2$. \end{itemize} Also the only embedding dimension two numerical semigroup for which the equality holds is $\langle 2,3\rangle$. It follows that the family given in Example \ref{example-recursive-family} contains all possible complete intersection numerical semigroups with the property that the bound in Proposition \ref{lower-bound-c-ci} becomes an equality. \end{example} \begin{proposition}\label{upper-bound-gen-complete-intersections} Let $\Gamma$ be a complete intersection numerical semigroup other than $\mathbb N$, minimally generated by $\{r_0,\ldots, r_h\}$. If $\mathrm m(\Gamma)\neq 2$, then $r_k<\mathrm F(\Gamma)$ for all $k$. \end{proposition} \begin{proof} Assume without loss of generality that $r_0=\mathrm m(\Gamma)$. The numerical semigroup $\Gamma$ is symmetric and thus for every $i>0$, $\mathrm F(\Gamma)+r_0- r_i\in \Gamma$. If $r_k>\mathrm F(\Gamma)$, for some $k>0$, then $\mathrm F(\Gamma)+r_0-r_k<r_0$, which forces $\mathrm F(\Gamma)+r_0=r_k$. If $h>1$, choose $0<i\neq k$. Then $r_k-r_i=\mathrm F(\Gamma)+r_0-r_i\in \Gamma$, contradicting that $r_k$ is a minimal generator. This proves $r_k<\mathrm F(\Gamma)$ whenever $h>1$. For $h=1$, $\mathrm F(\Gamma)=(r_0-1)(r_1-1)-1$. In this setting, the case $\mathrm m(\Gamma)=2$ has to be excluded, since $\Gamma=\langle 2,f+2\rangle$ has $\mathrm F(\Gamma)=f<f+2=r_1$. For $\mathrm m(\Gamma)>2$, we get $\mathrm F(\Gamma)=(r_0-1)(r_1-1)-1\ge 2(r_1-1)-1 =(r_1-1)+(r_1-2)\ge r_1$.
\end{proof} \begin{remark}\label{some-facts-complete-intersections} If we want to compute the set of all complete intersection numerical semigroups with Frobenius number $f$, then we can use the formula given in Proposition \ref{frob-gluing}. Hence $f=d_1 f_1+d_2 f_2+ d_1 d_2$, and we recursively construct all possible numerical semigroups with Frobenius number $f_1$, and then the set with Frobenius number $f_2$. We next give some useful bounds and facts to perform this task. Denote $f+1$ by $c$. \begin{enumerate}[i)] \item $d_1\neq 1\neq d_2$. This is because $d_1\in \Gamma_2$ and it is not a minimal generator of $\Gamma_2$. The only possibility to have $d_1=1\in \Gamma_2$ would be $\Gamma_2=\mathbb N=\langle 1\rangle$. But then $d_1$ would be a minimal generator. The same argument is valid for $d_2$. \item Since $\gcd(d_1,d_2)=1$, we can assume without loss of generality that $2\le d_2 < d_1$. \item Since $f_1,f_2\ge -1$, $f\ge -d_1-d_2+d_1d_2= (d_1-1)(d_2-1)-1$. Hence $d_2\le \frac{c}{d_1-1}+1$; and consequently, $d_2\le \min\{ d_1-1, \frac{c}{d_1-1}+1\}$. \item $f-d_jf_j\equiv 0\bmod d_i$, $\{i,j\}=\{1,2\}$. In particular, if $f_j=-1$, then $f+d_j\equiv 0 \bmod d_i$. \item $d_1<f$, except in the case $\Gamma= \langle 2=d_2, f+2=d_1\rangle$. \begin{enumerate}[a)] \item If $f_1=f_2=-1$, then $\Gamma_1=\Gamma_2=\mathbb N$, and $\Gamma$ is $\langle d_2, d_1\rangle$. If $d_2\neq 2$, then Proposition \ref{upper-bound-gen-complete-intersections}, asserts that $d_1<f$. \item If $f_2>0$, then $f\ge -d_1+d_2+d_1d_2=(d_1+1)(d_2-1)+1\ge d_1+2$. Hence $d_1\le f-2$. \item If $f_1>0$, then $f\ge d_1-d_2+d_1d_2=(d_1-1)(d_2+1)+1> 3(d_1-1)\ge d_1+ 2(d_1-1)-1> d_1$. \end{enumerate} \item If $f_1\neq -1\neq f_2$, then $f-d_1d_2\in \langle d_1,d_2\rangle$. We are only interested in factorizations $f-d_1d_2=a_1d_1+a_2d_2$, $a_1,a_2\in \mathbb N$, with $a_1\equiv a_2\equiv 1 \bmod 2$, since the Frobenius number of a complete intersection is an odd integer. 
\end{enumerate} \end{remark} \begin{example} We compute the set of all complete intersection numerical semigroups with Frobenius number 11. First note that $\langle 2,13\rangle$ is in this set. The possible $d_1$ belong to $\{3,\ldots,10\}$. \renewcommand{\labelitemii}{$\star$} \begin{itemize} \item $d_1=10$. Then $2\le d_2 \le \min \{9,\lfloor \frac{12}9\rfloor+1\}=2$. Hence $d_2$ must be 2, but then $\gcd(d_1,d_2)\neq 1$, and we have no complete intersections under these conditions. \item $d_1=9$. Then $2\le d_2 \le \min \{8,\lfloor \frac{12}8\rfloor+1\}=2$. This forces $d_2=2$, which in addition is coprime with 9. \begin{itemize} \item $11+9\equiv 0\bmod 2$, and thus $f_1=-1$ ($\Gamma_1=\mathbb N$) is a possible choice. In this setting $f_2=(11-18+9)/2=1$, whence $\Gamma_2=\langle 2,3\rangle$. We obtain a new complete intersection $\Gamma=9\mathbb N+2\langle 2,3\rangle= \langle 4,6,9\rangle$, because $9\in \langle 2,3\rangle$ is not a minimal generator. \item $11+2\not\equiv 0\bmod 9$, so $f_2$ cannot be $-1$. \item $11-18\not \in\langle 2,9\rangle$, so we have no more complete intersections with this data. \end{itemize} \item For $d_1=8$, we have $2\le d_2 \le \min \{7,\lfloor \frac{12}7\rfloor+1\}=2$. However $\gcd(d_1,d_2)\neq 1$. \item If $d_1=7$, then $2\le d_2 \le \min \{6,\lfloor \frac{12}6\rfloor+1\}=3$. \begin{itemize} \item $d_2=2$. \begin{itemize} \item $11+7\equiv 0\bmod 2$, and thus $\Gamma_1$ can be $\mathbb N$. But then $f_2=(11-14+7)/2=2$, which is even. So this case cannot occur. \item $11+2\not\equiv 0\bmod 7$, and so $\Gamma_2$ will not be $\mathbb N$. \item Finally, $11-14\not \in \langle 2,7\rangle$, so no complete intersections can be found with these properties. \end{itemize} \item $d_2=3$. \begin{itemize} \item $11+7\equiv 0\bmod 3$, and thus $\Gamma_1$ could be $\mathbb N$. In this setting $f_2=(11-21+7)/3=-1$, and so $\Gamma_2$ is also $\mathbb N$.
We get a new complete intersection $\Gamma=7\mathbb N+3\mathbb N=\langle 3,7\rangle$ with Frobenius number 11. \item $11-21\not\in\langle 3,7\rangle$, so no more complete intersections are obtained for this choice of $d_1$ and $d_2$. \end{itemize} \end{itemize} \item For $d_1=6$, $2\le d_2 \le \min \{5,\lfloor \frac{12}5\rfloor+1\}=3$, but neither 2 nor 3 is coprime with 6. \item $d_1=5$. Then $d_2\in \{2,3,4\}$. \begin{itemize} \item $d_2=2$. \begin{itemize} \item $11+5\equiv 0\bmod 2$, and so $\Gamma_1$ can possibly be $\mathbb N$. Hence $f_2=(11-10+5)/2=3$. The only possible complete intersection numerical semigroup with Frobenius number $3$ is $\langle 2,5\rangle$. But $5$ is a minimal generator of this semigroup. \item $11+2\not\equiv 0\bmod 5$. \item $11-10\not\in\langle 2,5\rangle$. \end{itemize} \item $d_2=3$. In this case $11+5\not\equiv 0\bmod 3$, $11+3\not\equiv 0\bmod 5$, and $11-15\not \in\langle 3,5\rangle$. \item $d_2=4$. \begin{itemize} \item $11+5\equiv 0\bmod 4$, and $f_2=(11-20+5)/4=-1$. So $\Gamma=5\mathbb N+4\mathbb N=\langle 4,5\rangle$ is another complete intersection with Frobenius number 11. \item $11-20\not\in \langle 4,5\rangle$. \end{itemize} \end{itemize} \item $d_1=4$, $2\le d_2 \le \min \{3,\lfloor \frac{12}3\rfloor+1\}=3$, and as $\gcd(2,4)\neq 1$, we get $d_2=3$. \begin{itemize} \item $11+4\equiv 0 \bmod 3$. So $\Gamma_1$ could be $\mathbb N$. If this is the case, $f_2=(11-12+4)/3=1$, which forces $\Gamma_2$ to be $\langle 2,3\rangle$, and $4\in \Gamma_2$ is not a minimal generator. So we obtain $\Gamma=4\mathbb N+3\langle 2,3\rangle= \langle 4,6,9\rangle$, which was already computed before. \item $11+3\not\equiv 0\bmod 4$. \item $11-12\not\in \langle 3,4\rangle$. \end{itemize} \item $d_1=3$ and $d_2=2$. \begin{itemize} \item $11+3\equiv 0\bmod 2$, and $\Gamma_1=\mathbb N$ can be a possibility. Then $f_2=(11-6+3)/2=4$, which is even, and hence cannot be the Frobenius number of a complete intersection. So no semigroup arises from this choice. \item $11+2\not\equiv 0\bmod 3$. \item $11-6=5\in \langle 2,3\rangle$, and $5=1\cdot 2+1\cdot 3$ is the only factorization. So the only possible choice for $f_1$ and $f_2$ is 1. This means that $\Gamma_1$ and $\Gamma_2$ must be $\langle 2,3\rangle$. Again we obtain no new semigroups, since $2$ and $3$ are minimal generators of $\langle 2,3\rangle$. \end{itemize} \end{itemize} Thus the set of complete intersection numerical semigroups with Frobenius number 11 is \[\{\langle 2,13\rangle, \langle 4,6,9\rangle, \langle 3,7\rangle, \langle 4,5\rangle \}.\] \end{example} \section{Free numerical semigroups} Throughout this section, let $\Gamma$ be a numerical semigroup minimally generated by $\{r_0,\ldots, r_h\}$. For $k\in\{1,\ldots,h+1\}$, set $d_k=\gcd(\{r_0,\ldots,r_{k-1}\})$ ($d_1=r_0$). Write $\Gamma_k=\left\langle {\frac{r_0}{d_{k+1}}},\ldots,{\frac{r_k}{d_{k+1}}}\right\rangle$, and $c_k=\mathrm c(\Gamma_k)$ for all $k\in \{1,\ldots,h\}$. Set $c=c_h=\mathrm c(\Gamma)$. We say that $\Gamma$ is \emph{free} if either $h=0$ (and thus $r_0=1$) or $\Gamma$ is the gluing of the free numerical semigroup $\Gamma_{h-1}$ and $\mathbb N$. Free numerical semigroups were introduced in \cite{bertin}. For other characterizations and properties of free numerical semigroups see \cite[Section 8.4]{ns-book}. \begin{example} Notice that the order in which the generators are given is crucial. For instance, $S=\langle 8,10,9\rangle$ is free for the arrangement $(8,10,9)$ but it is not free for $(8,9,10)$. And a numerical semigroup can be free for different arrangements; for example, $S=\langle 4,6,9\rangle$ has this property.
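Both statements can be checked mechanically. The following minimal Python sketch (helper names are ours) tests freeness of a fixed arrangement $(r_0,\ldots,r_h)$ of the minimal generators through the condition $e_kr_k\in\langle r_0,\ldots,r_{k-1}\rangle$ of Lemma \ref{some-facts-free}, which for a minimal generating system characterizes freeness (see \cite[Section 8.4]{ns-book}):

```python
from math import gcd

def contains(gens, x):
    """Membership in the monoid generated by gens (simple sieve)."""
    ok = [True] + [False] * x
    for n in range(1, x + 1):
        ok[n] = any(n >= g and ok[n - g] for g in gens)
    return ok[x]

def is_free(arrangement):
    """Freeness of the arrangement (r_0, ..., r_h): with d_k = gcd(r_0, ..., r_{k-1})
    and e_k = d_k/d_{k+1}, we need e_k >= 2 and e_k*r_k in <r_0, ..., r_{k-1}>."""
    r = list(arrangement)
    d = [0] * (len(r) + 1)
    d[1] = r[0]
    for k in range(1, len(r)):
        d[k + 1] = gcd(d[k], r[k])
        e_k = d[k] // d[k + 1]
        if e_k < 2 or not contains(r[:k], e_k * r[k]):
            return False
    return d[len(r)] == 1   # the gcd of all the generators must be 1

print(is_free((8, 10, 9)))                                         # True
print(is_free((8, 9, 10)))                                         # False
print(all(is_free(a) for a in ((4, 6, 9), (6, 4, 9), (9, 6, 4))))  # True
```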
If we take $c_0,\ldots, c_h$ pairwise coprime integers greater than one, and $r_i= \prod_{j=0, j\neq i}^h c_j$, $i=0,\ldots, h$, then the numerical semigroup generated by $\{r_0,\ldots, r_h\}$ is free for any arrangement of its minimal generating set (see \cite{single-betti}). \end{example} According to Proposition \ref{frob-gluing}, with $A_2=\{r_h\}$, we obtain the following consequence. \begin{corollary}\label{frob-free} If $\Gamma$ is free, then \[\mathrm F(\Gamma)= d_h \mathrm F(\Gamma_{h-1})+r_h(d_h-1).\] \end{corollary} In this way we retrieve Johnson's formula (\cite{johnson}). Notice also that $\Gamma_{h-1}$ is again free, so if we expand this formula recursively we obtain the formula given by Bertin and Carbonne for free numerical semigroups (see \cite{bertin}, where this class of semigroups was introduced and named). This equation can be reformulated in terms of the conductor as \begin{equation}\label{conductor-free-gluing} \mathrm c(\Gamma)=c_h=d_h c_{h-1}+(d_h-1)(r_h-1). \end{equation} \begin{lemma} \label{some-facts-free} If $\Gamma$ is free, then \begin{enumerate} \item $\gcd(d_h,r_h)=1$; \item $d_h\mid \mathrm F(\Gamma)+r_h$ (consequently, $d_h \not\mid \mathrm F(\Gamma)$); \item if we define $e_k=\frac{d_k}{d_{k+1}}$, $k=1,\ldots,h$, then $e_k r_k\in \langle r_0,\ldots,r_{k-1}\rangle$ for all $k=1,\ldots,h$; in particular, $e_k\ge 2$; \item $d_1>d_2>\cdots>d_{h+1}=1$; \item $d_h\le \frac{\mathrm c(\Gamma)}{r_h-1}+1$; \item for $h\ge 1$, $(d_h-1)(r_h-1)\ge 2^h$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item This follows from the fact that $\Gamma$ is a numerical semigroup, and thus $\gcd(d_h,r_h)=d_{h+1}=1$. \item $\mathrm F(\Gamma_{h-1})=(\mathrm F(\Gamma)+r_h(1-d_h))/d_h=(\mathrm F(\Gamma)+r_h)/d_h-r_h$. \item As $\Gamma_k$ is the gluing of $\Gamma_{k-1}$ and $\mathbb N$, we have that $\frac{r_k}{d_{k+1}}\in \Gamma_{k-1}$. Hence $\frac{d_k}{d_{k+1}}r_k\in \langle r_0,\ldots,r_{k-1}\rangle$.
If $e_k=1$, then $r_k\in \langle r_0,\ldots,r_{k-1}\rangle$, contradicting that $r_k$ is a minimal generator. \item By definition, $d_k\ge d_{k+1}$. As $e_k=\frac{d_k}{d_{k+1}}\ge 2$, we get $d_k>d_{k+1}$. \item Notice that $\mathrm F(\Gamma)\ge (r_h-1)(d_h-1) -1$, since $\mathrm F(\Gamma_{h-1})\ge -1$. \item If $d_h=2$, then we show that $r_h>\mathrm m(\Gamma)$. Assume to the contrary that $r_h=\mathrm m(\Gamma)$. Then we already proved above that $e_h r_h\in \langle r_0,\ldots,r_{h-1}\rangle$. Since $e_h=d_h$ and $r_i$ is a minimal generator of $\Gamma$ for all $i$, we deduce that $2 r_h = \sum_{i=0}^{h-1}a_i r_i$, with $\sum_{i=0}^{h-1}a_i\ge 2$. As $r_i>r_h$ for every $i=0,\ldots,h-1$, we get $2 r_h > r_h\sum_{i=0}^{h-1} a_i$, and thus $\sum_{i=0}^{h-1}a_i<2$, a contradiction. Thus in view of Proposition \ref{lower-bound-m-complete-intersection}, we have that $r_h\ge 2^h$, and if $d_h=2$, then $r_h\ge 2^h+1$. Hence for $d_h=2$ the proof follows easily, and for $d_h>2$ we get $(d_h-1)(r_h-1)\ge 2(r_h-1)\ge 2(2^h-1)\ge 2^h$ (we are assuming $h\ge 1$). \end{enumerate} \end{proof} In view of Example \ref{recursive-family-ii}, the bound proposed in Proposition \ref{lower-bound-c-ci} cannot be improved for free numerical semigroups, since the family introduced in Example \ref{example-recursive-family} consists of free numerical semigroups. However, we can use Proposition \ref{lower-bound-c-ci} to find an upper bound for $r_h$, as we show next. For all $h\geq 2$, $c_{h-1}=\frac{c-(d_h-1)(r_h-1)}{d_h}$ is an even integer, and Proposition \ref{lower-bound-c-ci} applied to $\Gamma_{h-1}$ yields $c_{h-1}\geq (h-1)2^{h-1}$. In particular, $$ -c_{h-1}d_h \leq -(h-1) 2^{h-1}d_h. $$ Hence $$ (r_h-1)(d_h-1)\leq c-(h-1)2^{h-1}d_h. $$ This gives us the following upper bound for $r_h$: $$ r_h\leq \frac{c}{d_h-1}-(h-1)2^{h-1}\frac{d_h}{d_h-1}+1. $$ \begin{corollary}\label{bound-rh-free} For all $h\geq 2$, $$ 2^{h}+1\leq r_h \leq \frac{c}{d_h-1}-(h-1)2^{h-1}\frac{d_h}{d_h-1}+1 \le c -(h-1)2^{h-1}+1.
$$ \end{corollary} \begin{remark} In order to compute the set of all free numerical semigroups with a given Frobenius number, we make use of the formula given in Corollary \ref{frob-free}, by taking into account the restrictions given in this section for $d_h$ and $r_h$. \end{remark} \section{Telescopic numerical semigroups} We keep using the same notation as in the preceding section. We say that the numerical semigroup $\Gamma$ minimally generated by $\{r_0,\ldots, r_h\}$ is \emph{telescopic} if it is free for the arrangement of the generators $r_0<\cdots<r_h$ (see for instance \cite{telescopic}). This motivates the notation $\{r_0<\cdots<r_h\}$, which means that the elements in the set $\{r_0,\ldots,r_h\}$ fulfill the extra condition $r_0<\cdots <r_h$. We will also write $\Gamma=\langle r_0<\cdots <r_h\rangle$ when $\{r_0,\ldots,r_h\}$ is a generating system for $\Gamma$ and $r_0<\cdots <r_h$. Notice that in addition to the properties we had for free numerical semigroups, if $\Gamma$ is telescopic, then \begin{enumerate} \item $d_h<r_h$, because $d_h\mid r_{h-1}$ and $r_{h-1}<r_h$; \item $\mathrm F(\Gamma)\ge (r_h-1)(d_h-1) -1>(d_h-1)^2-1$, whence $d_h\le \min\left\{r_h-1,\frac{\mathrm c(\Gamma)}{r_h-1}+1,\sqrt{\mathrm c(\Gamma)}+1\right\}$. \end{enumerate} \begin{proposition}\label{lower-bound-rh-telescopic} Let $\Gamma$ be a telescopic numerical semigroup minimally generated by $\{r_0<\cdots < r_h\}$. If $h\geq 2$, then $r_h\geq 2^{h+1}-1$. \end{proposition} \begin{proof} Let $h=2$, and let $\Gamma_1=\left\langle \frac{r_0}{d_2},\frac{r_1}{d_2}\right\rangle$. Since $\frac{r_1}{d_2}\geq 3$ and $d_2\geq 2$, we have $r_1\geq 6$. Besides, $r_2> r_1$, whence $r_2\geq 7$. Note that this bound is attained for $\Gamma_2=\langle 4,6,7\rangle$. Assume that $h\geq 3$, and that the formula is true for $h-1$. We have $r_h \geq r_{h-1}+1$ and $r_h\in \left\langle \frac{r_0}{d_h},\ldots,\frac{r_{h-1}}{d_h}\right\rangle$. By induction hypothesis, we have $\frac{r_{h-1}}{d_h}\geq 2^{h}-1$.
Hence $r_h\geq 2 (2^{h}-1)+1=2^{h+1}-1$. Note that this bound is reached by $\Gamma_h=\langle 2^{h},3\cdot2^{h-1},7\cdot 2^{h-2},\ldots,(2^{k+1}-1)\cdot 2^{h-k},\ldots,2^{h+1}-1\rangle$. \end{proof} As in the free case, we can give a lower bound for the conductor of a telescopic numerical semigroup in terms of its embedding dimension. \begin{proposition} \label{lower-bound-c-telescopic} Let $\Gamma$ be a telescopic numerical semigroup other than $\mathbb N$. Then \[(\mathrm e(\Gamma)-2)2^{\mathrm e(\Gamma)}+2 \le \mathrm c(\Gamma).\] \end{proposition} \begin{proof} Assume that $\Gamma$ is minimally generated by $\{r_0<\cdots< r_h\}$. Denote as usual $\mathrm c(\Gamma)$ by $c$. We use once more induction on $h$. The case $h=1$ is evident. Suppose that $h\geq 2$, and that our inequality is true for $h-1$. By (\ref{conductor-free-gluing}), we have $c=d_h c_{h-1}+(d_h-1)(r_h-1)$. By induction hypothesis, $c_{h-1}\geq (h-2)2^h+2$, and as $d_h\geq 2$, and $r_h\geq 2^{h+1}-1$, we get $c\geq (h-2)2^{h+1}+4+2^{h+1}-2=(h-1)2^{h+1}+2$. \end{proof} Note that for all $h\geq 2$, $c_{h-1}={\displaystyle{\frac{c-(d_h-1)(r_h-1)}{d_h}}}$ is an even integer, and that, by Proposition \ref{lower-bound-c-telescopic} applied to $\Gamma_{h-1}$, $c_{h-1}\geq (h-2)2^h+2$. In particular, $$ -c_{h-1}d_h \leq -\big((h-2)2^h+2\big)d_h. $$ Hence $$ (r_h-1)(d_h-1)\leq c-\big((h-2)2^h+2\big)d_h. $$ This gives us the following upper bound for $r_h$: $$ r_h\leq {\frac{c}{d_h-1}}-\big((h-2)2^h+2\big){\frac{d_h}{d_h-1}}+1\le c-(h-2)2^h-1. $$ \begin{corollary}\label{bounds-rh-telescopic} For all $h\geq 2$, we have $$ 2^{h+1}-1\leq r_h \le {\frac{c}{d_h-1}}-\big((h-2)2^h+2\big)\frac{d_h}{d_h-1}+1 \le c-(h-2)2^h-1. $$ \end{corollary} \begin{remark} For computing the set of all telescopic numerical semigroups with fixed Frobenius number, we proceed as in the free case, ensuring that $r_h$ is larger than the largest generator of $\Gamma_{h-1}$ multiplied by $d_h$. Notice that $d_h$ must now be smaller than $r_h$.
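In a prototype implementation, the telescopic test simply applies the freeness test to the increasing arrangement of the minimal generators. A minimal Python sketch (helper names are ours), which also confirms that the family $\Gamma_h$ of Proposition \ref{lower-bound-rh-telescopic} is telescopic and attains $r_h=2^{h+1}-1$:

```python
from math import gcd

def in_monoid(gens, x):
    """Membership in the monoid generated by gens (simple sieve)."""
    ok = [True] + [False] * x
    for n in range(1, x + 1):
        ok[n] = any(n >= g and ok[n - g] for g in gens)
    return ok[x]

def is_telescopic(gens):
    """Telescopic = free for the increasing arrangement of the minimal
    generators: e_k >= 2 and e_k*r_k in <r_0, ..., r_{k-1}> for every k."""
    r = sorted(gens)
    d_prev = r[0]
    for k in range(1, len(r)):
        d_next = gcd(d_prev, r[k])
        e_k = d_prev // d_next
        if e_k < 2 or not in_monoid(r[:k], e_k * r[k]):
            return False
        d_prev = d_next
    return d_prev == 1

assert is_telescopic([4, 6, 9])
assert not is_telescopic([8, 9, 10])   # free only for the arrangement (8, 10, 9)
for h in range(2, 6):                  # the extremal family Gamma_h above
    gens = [(2**(k + 1) - 1) * 2**(h - k) for k in range(h + 1)]
    assert is_telescopic(gens) and max(gens) == 2**(h + 1) - 1
```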
\end{remark} \section{Plane curve singularities} Let $\Gamma$ be the numerical semigroup minimally generated by $\{r_0 <r_1<\ldots <r_h\}$. Let $d_k$, $\Gamma_k$, $c_k$, and $e_k$ be as in the preceding section. The numerical semigroup $\Gamma$ is associated to an irreducible plane curve singularity if $\Gamma$ is telescopic and $e_kr_k<r_{k+1}$ for all $k=1,\ldots,h-1$ (see \cite{zar}). \begin{proposition}\label{lower-bound-rh-planar} Let $\Gamma$ be the semigroup associated to an irreducible plane curve singularity minimally generated by $\{r_0<\cdots <r_h\}$, with $h\ge 2$. Then $r_h\geq \frac{1}{3}(5\cdot 2^{2h-1}-1)$. \end{proposition} \begin{proof} For $h=2$, as $\Gamma_1=\left\langle \frac{r_0}{d_2}< \frac{r_1}{d_2}\right\rangle$, we obtain $\frac{r_1}{d_2}\geq 3$. Since $d_2\geq 2$, we deduce that $r_1\geq 6$. The plane singularity condition implies $r_2> e_1r_1\geq 12$, because we know that $e_1\ge 2$ (Lemma \ref{some-facts-free}). Hence $r_2\ge 13$. Assume that $h\geq 3$, and that the formula is true for $h-1$. The plane singularity condition for $k=h-1$ implies that $r_h \geq \frac{r_{h-1}}{d_h}d_{h-1}+1$. The quotient $\frac{r_{h-1}}{d_h}$ is the largest generator of $\Gamma_{h-1}$. The induction hypothesis then asserts that $\frac{r_{h-1}}{d_h} \geq \frac{1}{3}(5\cdot 2^{2(h-1)-1}-1)$. By using that $e_k\ge 2$ for all $k$ (Lemma \ref{some-facts-free}), we deduce that $d_{h-1} \geq 4$. By putting all this together, we get $r_h\geq 4\left(\frac{1}{3}(5\cdot 2^{2(h-1)-1}-1)\right)+1=\frac{1}{3}(5\cdot 2^{2h-1}-1)$. \end{proof} \begin{proposition} \label{lower-bound-c-planar} Let $\Gamma\neq \mathbb N$ be the semigroup associated to an irreducible plane curve singularity minimally generated by $\{r_0<\cdots <r_h\}$ and with conductor $c$. Then \[c\geq \frac{5}{3}2^{2h}-3\cdot 2^h+\frac{4}{3}.\] \end{proposition} \begin{proof} The case $h=1$ is evident. Assume that $h\geq 2$ and that our inequality holds for $h-1$.
We have $c=d_h c_{h-1}+(d_h-1)(r_h-1)$. By induction hypothesis $c_{h-1}\geq \frac{5}{3}2^{2h-2}-3\cdot 2^{h-1}+\frac{4}{3}$. Notice that $d_h\geq 2$. Thus our assertion follows from Proposition \ref{lower-bound-rh-planar}. \end{proof} We proceed now as we did in the telescopic case to obtain an upper bound for $r_h$. Note that for all $h\geq 2$, $c_{h-1}={{\frac{c-(d_h-1)(r_h-1)}{d_h}}}$ is an even integer, and that, by Proposition \ref{lower-bound-c-planar} applied to $\Gamma_{h-1}$, $c_{h-1}\geq {\frac{5}{3}}2^{2h-2}-3\cdot 2^{h-1}+ {\frac{4}{3}}$. Thus $$ -c_{h-1}d_h \leq -\left({\frac{5}{3}}2^{2h-2}-3\cdot 2^{h-1}+ {\frac{4}{3}}\right)d_h. $$ Hence $$ (r_h-1)(d_h-1)\leq c-\left({\frac{5}{3}}2^{2h-2}-3\cdot 2^{h-1}+ {\frac{4}{3}}\right)d_h. $$ This gives us the following upper bound for $r_h$: $$ r_h\leq \frac{c}{d_h-1}-\left(\frac{5}{3}2^{2h-2}-3\cdot 2^{h-1}+ {\frac{4}{3}}\right)\frac{d_h}{d_h-1}+1. $$ \begin{corollary}\label{bounds-rh-planar} For all $h\geq 2$, we have $$ \frac{5}{3}2^{2h-1}-\frac{1}{3}\le r_h \le \frac{c}{d_h-1}-\left(\frac{5}{3}2^{2h-2}-3\cdot 2^{h-1}+ \frac{4}{3}\right)\frac{d_h}{d_h-1}+1 \le c- \frac{5}{3}2^{2h-2}+3 \cdot 2^{h-1}- \frac{1}{3}. $$ \end{corollary} A bound for the embedding dimension also follows from the above proposition. \begin{corollary} If $h\ge 2$, then \[h\le \log_2\left(\frac{\sqrt{60\,c+1}+9}{10}\right).\] \end{corollary} \begin{proof} From Proposition \ref{lower-bound-c-planar}, $\frac{5}{3}2^{2h}-3\cdot 2^h+\frac{4}{3}\le c$. Writing $x=2^h$, we get $\frac{5}{3}x^2-3 x+\frac{4}{3}\le c$. By solving $\frac{5}{3}x^2-3 x+\frac{4}{3}- c=0$, we get $x\in \left\{-\frac{\sqrt{60\,c+1}-9}{10},\frac{\sqrt{60\,c+1}+9}{10}\right\}$. As the minimum of $\frac{5}{3}x^2-3 x+\frac{4}{3}- c$ is reached at $x=9/10$, and in our setting $x=2^h>1$, we have that the maximum possible $x>0$ such that $\frac{5}{3}2^{2h}-3\cdot 2^h+\frac{4}{3}\le c$ is $x=\frac{\sqrt{60\,c+1}+9}{10}$.
\end{proof} \begin{remark} The set of all numerical semigroups with fixed Frobenius number associated to an irreducible plane curve singularity is calculated as in the free case, by imposing the condition $e_k r_k<r_{k+1}$. \end{remark} \section{Experimental results} With the ideas given in the preceding sections, we implemented in \texttt{GAP} (\cite{gap}), with the help of the \texttt{numericalsgps} package (\cite{numericalsgps}), functions to compute the set of all complete intersection, free and telescopic numerical semigroups, as well as the set of all numerical semigroups associated to irreducible plane curve singularities, with fixed Frobenius number (these functions will be included in the next release of this package). The following table was computed in 6932 milliseconds on a 2.5GHz desktop computer. It shows, for fixed genus $g$, the number of complete intersection (ci($g$)), free (fr($g$)), telescopic (tl($g$)), and plane curve singularity (pc($g$)) numerical semigroups. Recall that the conductor of a symmetric numerical semigroup is twice its genus. Observe that almost all complete intersections in this table are free. This is due to the fact that the embedding dimension of all numerical semigroups appearing there is small, and for embedding dimension three or less, the concepts of free and complete intersection coincide (among the complete intersection numerical semigroups represented in the table, 158 have embedding dimension 2, 1525 have embedding dimension 3, 1862 have embedding dimension 4, and 205 have embedding dimension 5).
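Before turning to the table, we illustrate how Remark \ref{some-facts-complete-intersections} translates into code. The following Python sketch (not the \texttt{GAP} implementation used below; all names are ours) enumerates the complete intersection numerical semigroups with a given odd Frobenius number by recursing on the gluing formula of Proposition \ref{frob-gluing}:

```python
from math import gcd
from functools import lru_cache

def contains(gens, x):
    """Membership in the numerical semigroup generated by gens (sieve)."""
    if x < 0:
        return False
    ok = [True] + [False] * x
    for n in range(1, x + 1):
        ok[n] = any(n >= g and ok[n - g] for g in gens)
    return ok[x]

def in_but_not_minimal(gens, x):
    """True iff x belongs to <gens> and is not a minimal generator,
    i.e. x is a sum of two nonzero elements of the semigroup."""
    return contains(gens, x) and any(
        contains(gens, s) and contains(gens, x - s) for s in range(1, x))

@lru_cache(maxsize=None)
def complete_intersections(f):
    """Complete intersection numerical semigroups with Frobenius number f,
    as sorted tuples of minimal generators."""
    if f == -1:
        return ((1,),)            # the semigroup N
    if f < 1 or f % 2 == 0:
        return ()                 # Frobenius numbers of complete intersections are odd
    found = {(2, f + 2)}
    for d1 in range(3, f):        # d1 < f outside the case <2, f+2>
        for d2 in range(2, d1):   # without loss of generality 2 <= d2 < d1
            if gcd(d1, d2) != 1:
                continue
            rem = f - d1 * d2     # rem = d1*f1 + d2*f2
            for f1 in [-1] + list(range(1, rem // d1 + 2, 2)):
                num = rem - d1 * f1
                if num % d2 or num // d2 < -1:
                    continue
                f2 = num // d2
                if f2 == 0 or (f2 > 0 and f2 % 2 == 0):
                    continue
                for g1 in complete_intersections(f1):
                    if not in_but_not_minimal(g1, d2):
                        continue       # d2 must be a non-generator of Gamma_1
                    for g2 in complete_intersections(f2):
                        if not in_but_not_minimal(g2, d1):
                            continue   # d1 must be a non-generator of Gamma_2
                        found.add(tuple(sorted(
                            {d1 * a for a in g1} | {d2 * b for b in g2})))
    return tuple(sorted(found))

print(complete_intersections(11))  # ((2, 13), (3, 7), (4, 5), (4, 6, 9))
```

The call for Frobenius number $11$ returns exactly the four semigroups computed by hand in the example above.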
\medskip \begin{tabular}{|l|l|l|l|l||l|l|l|l|l||l|l|l|l|l|}
\hline
g & ci(g) & fr(g) & tl(g) & pc(g) & g & ci(g) & fr(g) & tl(g) & pc(g) & g & ci(g) & fr(g) & tl(g) & pc(g) \\ \hline
0 & 1 & 1 & 1 & 1 & 19 & 24 & 24 & 12 & 5 & 38 & 61 & 61 & 37 & 12 \\
1 & 1 & 1 & 1 & 1 & 20 & 16 & 16 & 11 & 6 & 39 & 100 & 100 & 52 & 16 \\
2 & 1 & 1 & 1 & 1 & 21 & 27 & 27 & 18 & 9 & 40 & 110 & 109 & 54 & 19 \\
3 & 2 & 2 & 2 & 2 & 22 & 31 & 31 & 19 & 8 & 41 & 80 & 79 & 47 & 12 \\
4 & 3 & 3 & 2 & 2 & 23 & 21 & 21 & 13 & 6 & 42 & 122 & 120 & 61 & 20 \\
5 & 2 & 2 & 2 & 1 & 24 & 36 & 35 & 20 & 11 & 43 & 120 & 120 & 60 & 17 \\
6 & 4 & 4 & 4 & 3 & 25 & 38 & 38 & 22 & 9 & 44 & 94 & 94 & 48 & 15 \\
7 & 5 & 5 & 3 & 2 & 26 & 27 & 27 & 16 & 8 & 45 & 143 & 142 & 73 & 22 \\
8 & 3 & 3 & 2 & 2 & 27 & 46 & 46 & 24 & 11 & 46 & 151 & 149 & 72 & 21 \\
9 & 7 & 7 & 5 & 4 & 28 & 45 & 45 & 25 & 10 & 47 & 108 & 106 & 57 & 15 \\
10 & 8 & 8 & 6 & 4 & 29 & 34 & 33 & 20 & 7 & 48 & 158 & 157 & 75 & 24 \\
11 & 5 & 5 & 4 & 2 & 30 & 57 & 57 & 32 & 13 & 49 & 179 & 179 & 84 & 23 \\
12 & 11 & 11 & 8 & 5 & 31 & 62 & 62 & 31 & 9 & 50 & 128 & 128 & 68 & 20 \\
13 & 11 & 11 & 8 & 3 & 32 & 43 & 43 & 25 & 10 & 51 & 197 & 194 & 86 & 26 \\
14 & 9 & 9 & 7 & 4 & 33 & 65 & 65 & 37 & 14 & 52 & 209 & 207 & 89 & 27 \\
15 & 14 & 14 & 10 & 6 & 34 & 77 & 76 & 39 & 13 & 53 & 142 & 142 & 76 & 20 \\
16 & 17 & 17 & 9 & 5 & 35 & 53 & 52 & 29 & 11 & 54 & 229 & 227 & 101 & 30 \\
17 & 12 & 12 & 8 & 3 & 36 & 84 & 83 & 43 & 17 & 55 & 238 & 235 & 104 & 29 \\
18 & 18 & 18 & 12 & 6 & 37 & 90 & 90 & 47 & 13 & 56 & 172 & 169 & 83 & 24 \\ \hline
\end{tabular} \medskip The largest genus for which the set of numerical semigroups with this genus is known is 55, and the number of numerical semigroups with genus 55 is 1142140736859 (\cite{genus}), while there are just 2496 symmetric numerical semigroups with genus 55 (this last amount can be computed by using the \texttt{Irreducible\-Numerical\-Semigroups\-With\-Frobenius\-Number} command of the
\texttt{numericalsgps} package). The proportion of complete intersections among symmetric numerical semigroups is small, and tiny compared with the whole set of numerical semigroups. \medskip \begin{tikzpicture} \pgfplotsset{every axis legend/.append style={ at={(1.02,1)}, anchor=north west}} \begin{axis}[ width=12cm, xlabel=genus, ylabel=\# numerical semigroup ] \addplot[smooth,mark=*,black] plot coordinates { (0,1) (1,1) (2,1) (3,2) (4,3) (5,2) (6,4) (7,5) (8,3) (9,7) (10,8) (11,5) (12,11) (13,11) (14,9) (15,14) (16,17) (17,12) (18,18) (19,24) (20,16) (21,27) (22,31) (23,21) (24,36) (25,38) (26,27) (27,46) (28,45) (29,34) (30,57) (31,62) (32,43) (33,65) (34,77) (35,53) (36,84) (37,90) (38,61) (39,100) (40,110) (41,80) (42,122) (43,120) (44,94) (45,143) (46,151) (47,108) (48,158) (49,179) (50,128) (51,197) (52,209) (53,142) (54,229) (55,238) (56,172) }; \addlegendentry{complete intersections} \addplot[smooth,mark=x,blue] plot coordinates { (0,1) (1,1) (2,1) (3,2) (4,3) (5,2) (6,4) (7,5) (8,3) (9,7) (10,8) (11,5) (12,11) (13,11) (14,9) (15,14) (16,17) (17,12) (18,18) (19,24) (20,16) (21,27) (22,31) (23,21) (24,35) (25,38) (26,27) (27,46) (28,45) (29,33) (30,57) (31,62) (32,43) (33,65) (34,76) (35,52) (36,83) (37,90) (38,61) (39,100) (40,109) (41,79) (42,120) (43,120) (44,94) (45,142) (46,149) (47,106) (48,157) (49,179) (50,128) (51,194) (52,207) (53,142) (54,227) (55,235) (56,169) }; \addlegendentry{free} \addplot[smooth,color=red,mark=x] plot coordinates { (0,1) (1,1) (2,1) (3,2) (4,2) (5,2) (6,4) (7,3) (8,2) (9,5) (10,6) (11,4) (12,8) (13,8) (14,7) (15,10) (16,9) (17,8) (18,12) (19,12) (20,11) (21,18) (22,19) (23,13) (24,20) (25,22) (26,16) (27,24) (28,25) (29,20) (30,32) (31,31) (32,25) (33,37) (34,39) (35,29) (36,43) (37,47) (38,37) (39,52) (40,54) (41,47) (42,61) (43,60) (44,48) (45,73) (46,72) (47,57) (48,75) (49,84) (50,68) (51,86) (52,89) (53,76) (54,101) (55,104) (56,83) }; \addlegendentry{telescopic} \addplot[smooth,mark=o,green] plot coordinates { (0,1) 
(1,1) (2,1) (3,2) (4,2) (5,1) (6,3) (7,2) (8,2) (9,4) (10,4) (11,2) (12,5) (13,3) (14,4) (15,6) (16,5) (17,3) (18,6) (19,5) (20,6) (21,9) (22,8) (23,6) (24,11) (25,9) (26,8) (27,11) (28,10) (29,7) (30,13) (31,9) (32,10) (33,14) (34,13) (35,11) (36,17) (37,13) (38,12) (39,16) (40,19) (41,12) (42,20) (43,17) (44,15) (45,22) (46,21) (47,15) (48,24) (49,23) (50,20) (51,26) (52,27) (53,20) (54,30) (55,29) (56,24) }; \addlegendentry{planar} \end{axis} \end{tikzpicture} \medskip Also, as one of the referees observed, local minima in the graph are attained when the genus is congruent to 2 modulo 3. We do not have a proof of this fact; this behavior may be inherited from the symmetric case. The sequence \[ \begin{array}{l} 1, 1, 1, 2, 3, 3, 6, 8, 7, 15, 20, 18, 36, 44, 45, 83, 109, 101, 174, 246, 227,420, 546, 498, 926, 1182, 1121,\\ 2015, 2496, 2436, 4350, 5602, 5317, 8925, 11971, 11276, \end{array} \] represents the number of symmetric numerical semigroups with genus ranging from 0 to 35. The following table shows that the proportion between complete intersection and free numerical semigroups remains similar even for larger genus. Observe that for genus 310 it takes 70 minutes to compute the set of all complete intersections, while it takes approximately 8 minutes and 30 seconds to determine all free numerical semigroups with this genus. For genus 55, computing the set of all numerical semigroups with this genus might take several months and a few terabytes of storage (this was communicated to us by Manuel Delgado, see \cite{genus}).
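The referee's observation can be tested directly against the ci(g) column of the table above. The following Python snippet (an illustration, independent of our \texttt{GAP} code) collects the strict local minima of the sequence and checks the congruence:

```python
# ci(g) for g = 0..56, transcribed from the table above
ci = [1, 1, 1, 2, 3, 2, 4, 5, 3, 7, 8, 5, 11, 11, 9, 14, 17, 12, 18, 24, 16,
      27, 31, 21, 36, 38, 27, 46, 45, 34, 57, 62, 43, 65, 77, 53, 84, 90, 61,
      100, 110, 80, 122, 120, 94, 143, 151, 108, 158, 179, 128, 197, 209, 142,
      229, 238, 172]

# strict local minima of the sequence ci(g)
minima = [g for g in range(1, len(ci) - 1) if ci[g - 1] > ci[g] < ci[g + 1]]

# the observation: every local minimum occurs at a genus congruent to 2 mod 3
assert all(g % 3 == 2 for g in minima)
```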
\medskip \begin{center} \begin{tabular}{|l|l|l|l|l|l| }\hline g & ci(g) & milliseconds & fr(g) & milliseconds & fr(g)/ci(g) \\ \hline \hline 220 & 18018 & 538213 & 17675 & 94134 & 0.98\\ \hline 230 & 16333 & 660838 & 16026 & 108187 & 0.98 \\ \hline 240 & 24862 & 924409 & 24359 & 153069 & 0.98\\ \hline 250 & 28934 & 1167901 & 28355 & 158706 & 0.98 \\ \hline 260 & 25721 & 1389167 & 25186 & 177691 & 0.98\\ \hline 310 & 66335 & 4206374 & 64959 & 509691 & 0.98\\ \hline \end{tabular} \end{center}
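The last column of the timing table can be reproduced from the ci(g) and fr(g) entries; the following Python check (for illustration only) confirms that the proportion rounds to 0.98 in every row:

```python
# (g, ci(g), fr(g)) triples transcribed from the timing table above
rows = [(220, 18018, 17675), (230, 16333, 16026), (240, 24862, 24359),
        (250, 28934, 28355), (260, 25721, 25186), (310, 66335, 64959)]

# the reported proportion fr(g)/ci(g) rounds to 0.98 in every row
for g, ci_g, fr_g in rows:
    assert round(fr_g / ci_g, 2) == 0.98
```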
\section{Introduction} Research on the intrinsically quantum properties present in physical systems allows us to gain a better understanding of the fundamental characteristics of nature, and has promoted the construction of methods that contribute to the description, transmission, and manipulation of the information contained in its microscopic constituents, leading to developments that have a positive impact on diverse areas of Science \cite{Ac_n_2018,Adesso_LoFranco_Parigi2018}. Such properties are primordial for the execution of certain tasks that are impracticable in the context of classical physics. For instance, entanglement is an indispensable attribute in the composition of quantum communication protocols such as teleportation \cite{Bennett1993}, a promising mechanism for the development of a future quantum internet \cite{Pirandola_Braunstein2016}; in the realization of controlled gates applicable in quantum computing; and in the implementation of quantum simulations, where well-controlled quantum systems can be used to reproduce the behavior of complex uncontrollable systems \cite{Ac_n_2018}, in addition to enabling several other useful applications in the progress of quantum information science technologies \cite{Dowling-Milburn2003,MasoudMohseni2017}. Recently, the resource theory of asymmetry has received attention in the area of Quantum Information Science, favoring applications in various contexts involving both single and composite quantum systems \cite{Girolami2017,Marvian2013,MarvianThesis, Marvian2014, Marvian-Spekkens2014, Piani2016, Marvian-Spekkens-Zanardi2016, D.J.Zhang2017, C.Zhang2017, Bu2018, Marvian2019, Takagi2019}.
For example, in multipartite systems the states which break the dynamic symmetry associated with additive Hamiltonians make it possible to quantify or to witness entanglement by the Wigner-Yanase skew information \cite{Girolami2017,Wigner1963}, a function that has the properties necessary for the characterization of asymmetry measures in the resource theory of asymmetry \cite{Takagi2019}. Actually, the relationships between skew information measures and quantum correlations have been studied for some time now \cite{Luo-Fu-Oh2012}. More recently, in Ref. \cite{Dai_2020}, the behavior of the non-classicality of spin systems was studied using the Wigner-Yanase skew information as a quantifier. Using the contextual re-interpretation of state asymmetry in relation to local unitary groups, which are associated with additive Hamiltonian eigenvectors, it was shown that the difference between the asymmetry of a general global state and the asymmetry of the composition of local states is related to total correlations \cite{Li_2020}. Asymmetry has also been investigated from an open-systems perspective, where it is a useful resource in the processing of quantum information, which in turn suffers deterioration due to interactions between system and environment. The freezing of asymmetry in scenarios where open systems are restricted by superselection rules has been studied, proving useful mainly when the noise is described by invariant operations, which enables asymmetry conservation mechanisms in noisy quantum channels \cite{D.J.Zhang2017}. As with quantum coherence, state asymmetry properties in relation to the generator of a quantum evolution can play important roles in the entanglement produced due to interactions between subsystems \cite{Pinto-Maziero2018}.
Moreover, in a more general context, asymmetry properties have useful characteristics in applications such as: alignment of reference frames \cite{Bartlett2009,Bartlett2007}, quantum speed limits \cite{Marvian-Spekkens-Zanardi2016}, quantum metrology \cite{Takagi2019,C.Zhang2017,Banik2017}, quantum thermodynamics \cite{Marvian2019, Lostaglio-Jennings-Rudolph2015, Lostaglio-Matteo2019}, and quantum communication \cite{Bu2018,Duan-Lukin-Cirac-Zoller2001}. The Wigner-Yanase asymmetry associated with the Hamiltonian measures how far the state of the system is from sharing the same basis of eigenstates with the observable, as established by the commutation relation between the state of the system and the Hamiltonian associated with its dynamics. Therefore, if the quantum system breaks the dynamic symmetry generated by the Hamiltonian, the commutator is different from zero and, consequently, we say that the state of the system presents non-zero asymmetry. The presence of the magnetic dipolar interaction (MDI) in several physical systems that have properties of coherence and quantum correlations, such as spin systems \cite{Furman2011, Furman2012, Neumann2010}, nitrogen-vacancy centers in diamond \cite{Dolde2013, Choi2019}, and rotational states of molecules \cite{Yun2015}, has motivated research in the context of Quantum Information Science in order to analyze the usefulness of the MDI as a resource for the execution of quantum gates for quantum computing \cite{Yun2015, Zhang2020}, quantum simulation \cite{Zhou2015}, and the realization of quantum channels for quantum communication, besides stimulating the investigation of quantum properties of Gibbs thermal states \cite{Kuznetsova2013, Furman2014, Castro2016} and of the dynamics of quantum correlations and entanglement \cite{Furman2008, Mohamed2013, Hu2015, Khan2016, Namitha2018, Pinto-Maziero2018}.
More recently, it has been shown that, for dipole-dipole interaction and two-photon resonance between two qubits and a coherent cavity field, the dipolar interaction can contribute to robustness against intrinsic decoherence and preserve a higher entanglement rate \cite{Mohamed_2020}. In addition, the MDI plays the role of a noise source in several physical systems, leading to the decay of the quantum properties of the system \cite{Klauder1962, Annabestani2018, Ota2007}. It has also been shown that specific configurations of systems interacting via the MDI can preserve their entanglement properties and quantum coherence along the dynamics \cite{Pinto-Maziero2018}. So, it is relevant to investigate the asymmetry properties of the system configurations in relation to the MDI Hamiltonian and to analyze the role that asymmetry plays in the entanglement produced along the MDI unitary dynamics. In this article, we study the asymmetry of bipartite quantum states described by two magnetic dipoles that evolve due to the MDI, whose observable associated with the generator of the unitary dynamics is non-additive (non-local). As we showed recently \cite{Pinto-Maziero2018}, the existence of local quantum coherence in the initial states (before the MDI takes place) is generally sufficient for the production of entanglement during the course of the MDI dynamics. However, we also showed that it is not a necessary condition, since some initial states with null local coherence are also able to produce maximum entanglement. In this particular case, the peculiar property present in the initial global configuration of the system is the anti-alignment of the dipoles, which, in a way, indicates the presence of asymmetry in relation to the Hamiltonian generating the unitary dynamics. The structure of the remainder of this article is the following. In Sec.
\ref{sec:Observables and configurations}, we present the Hamiltonian and the system states that we will evaluate from the point of view of the asymmetry measure associated with the Wigner-Yanase skew information, which is described in Sec. \ref{sec:AWY}. In Sec. \ref{sec:AsymmetryIDM}, we examine the behavior of the asymmetry function in relation to the Hamiltonian of the MDI for pure and mixed product states. In Sec. \ref{sec:local}, we deal with the asymmetry of local states, and, in Sec. \ref{sec:unitary}, we define and study the asymmetry in relation to the unitary evolution operator. We present our conclusions in Sec. \ref{sec:conc}. \section{Observables and states} \label{sec:Observables and configurations} In this section, we describe the dynamics generator, the Hamiltonian, and the states we use to study asymmetry subsequently. Let us start by introducing the general form of the Magnetic Dipolar Interaction (MDI) Hamiltonian, expressed in the same form as in Ref. \cite{Pinto-Maziero2018}: $\mathcal{H}=D[(\vec{\sigma}\otimes\sigma_{0})\cdot(\sigma_{0}\otimes\vec{\sigma})-3\hat{n}\cdot\vec{\sigma}\otimes\hat{n}\cdot\vec{\sigma}]$, where $D=\mu_{0}\gamma_{a}\gamma_{b}\hbar^{2}/16\pi r^{3}$ is the parameter that describes the magnitude of the magnetic dipole interaction, $r$ is the distance between the dipoles, $\mu_{0}$ symbolizes the vacuum permeability, $\gamma_{a}$ and $\gamma_{b}$ represent the gyromagnetic factors of the subsystems $a$ and $b$, respectively, $\vec{\sigma}$ is the vector of Pauli matrices, $\sigma_{0}$ is the identity matrix of dimension $2$, and $\hat{n}\in\mathbb{R}^{3}$ is the unit vector pointing in the direction of the line that connects the dipole centers. Here, we set Planck's constant $\hbar=1$ and fix $D=1$. For simplicity, but without loss of generality, we assume that the dipoles' centers lie along the $z$-axis $\left(\hat{n}=\left(0,0,1\right)\right)$.
Thus \begin{align} \label{equation:Hdip} \mathcal{H} & = 2^{-1}(\sigma_{1}\otimes\sigma_{1}+\sigma_{2}\otimes\sigma_{2}-2\sigma_{3}\otimes\sigma_{3}) \nonumber \\ & = 0|\Psi_{-}\rangle\langle\Psi_{-}|+2|\Psi_{+}\rangle\langle\Psi_{+}|-(|\Phi_{-}\rangle\langle\Phi_{-}|+|\Phi_{+}\rangle\langle\Phi_{+}|), \end{align} where $|\Psi_{\pm}\rangle=2^{-{1}/{2}}\left(|01\rangle\pm|10\rangle\right)$, $|\Phi_{\pm}\rangle=2^{-{1}/{2}}\left(|00\rangle\pm|11\rangle\right)$ form the Bell basis of maximally entangled states, with $\left\{ |0\rangle,|1\rangle\right\}$ being the standard basis, and we use the notation $|xy\rangle\equiv|x\rangle\otimes|y\rangle$ for the tensor product. We will first consider that the dipoles are prepared in the configuration of pure-product states, so that \begin{equation} |\psi_{ab}\rangle\coloneqq|\psi_{a}\rangle\otimes|\psi_{b}\rangle=\left(\alpha_{a}|0\rangle+\beta_{a}|1\rangle\right)\otimes\left(\alpha_{b}|0\rangle+\beta_{b}|1\rangle\right), \end{equation} where $\alpha_{a,b}=\cos(\theta_{a,b}/2)$ and $\beta_{a,b}=\sin(\theta_{a,b}/2)$ are the probability amplitudes associated with each particle, and $\theta_{a,b}\in\left[0,2\pi\right]$, i.e., we consider the dipoles' configurations along coaxial rings in the Bloch sphere representation \cite{Pinto-Maziero2018}. The dynamics of states evolving under the MDI Hamiltonian is given by the unitary operator $U=\exp\left(-i\mathcal{H}t\right)$. For the pure initial states above, ignoring a global phase, the evolved quantum state reads: \begin{align} |\psi_{t}^{ab}\rangle &=e^{-i\mathcal{H}t}|\psi_{ab}\rangle \nonumber \\ & = \left(\alpha_{a}\beta_{b}\cos t-i\beta_{a}\alpha_{b}\sin t\right)|01\rangle + (\beta_{a}\alpha_{b}\cos t-i\alpha_{a}\beta_{b}\sin t)|10\rangle + e^{2it}\left(\alpha_{a}\alpha_{b}|00\rangle+\beta_{a}\beta_{b}|11\rangle\right).
\end{align} In order to evaluate the consequences of increased entropy of the states in the asymmetry relationship with the MDI Hamiltonian, we also consider two classes of mixed states: \begin{equation} \rho_{j}^{ab} := \rho_{ja}\otimes\rho_{jb} = 2^{-1}\left(\sigma_{0}+r_{ja}\sigma_{j}\right)\otimes2^{-1}\left(\sigma_{0}+r_{jb}\sigma_{j}\right), \end{equation} where $r_{js}=tr\left(\rho_{js}\sigma_{j}\right)\in[-1,1]$ for $s=a,b$. These states correspond to local Bloch vectors along the $j$ axis for each subsystem, and we shall take $j=1$ or $j=3$. In these cases, the evolution provided by the MDI is given by $\rho_{j}^{ab}(t)=U\rho_{j}^{ab} U^{\dagger}$, so that \begin{align} 4\rho_{1}^{ab}\left(t\right)= & \left(1+r_{1a}r_{1b}\right)\left[|\Psi_{+}\rangle\langle\Psi_{+}|+|\Phi_{+}\rangle\langle\Phi_{+}|\right]+\left(1-r_{1a}r_{1b}\right)\left[|\Psi_{-}\rangle\langle\Psi_{-}|+|\Phi_{-}\rangle\langle\Phi_{-}|\right] \nonumber \\ & +\left(r_{1b}+r_{1a}\right)\left[e^{3it}|\Phi_{+}\rangle\langle\Psi_{+}|+e^{-3it}|\Psi_{+}\rangle\langle\Phi_{+}|\right]+\left(r_{1b}-r_{1a}\right)\left[e^{it}|\Phi_{-}\rangle\langle\Psi_{-}|+e^{-it}|\Psi_{-}\rangle\langle\Phi_{-}|\right] \end{align} and \begin{align} 4\rho_{3}^{ab}\left(t\right)= & \left(1+r_{3a}\right)\left(1+r_{3b}\right)|00\rangle\langle00|+\left(1-r_{3a}r_{3b}+\left(r_{3a}-r_{3b}\right)\cos\left(2t\right)\right)|01\rangle\langle01| \nonumber \\ & +i\left(r_{3a}-r_{3b}\right)\sin\left(2t\right)|01\rangle\langle10|-i\left(r_{3a}-r_{3b}\right)\sin\left(2t\right)|10\rangle\langle01| \nonumber \\ & +\left(1-r_{3a}r_{3b}-\left(r_{3a}-r_{3b}\right)\cos\left(2t\right)\right)|10\rangle\langle10|+\left(1-r_{3a}\right)\left(1-r_{3b}\right)|11\rangle\langle11|.
\end{align} \begin{comment} Thus, for $j=3$ the states are restricted to the line of incoherent states in each Bloch sphere and \begin{equation} \rho_{3}^{ab}=4^{-1}\left[\sigma_{0}\otimes\sigma_{0}+r_{3b}\sigma_{0}\otimes\sigma_{3}+r_{3a}\sigma_{3}\otimes\sigma_{0}+r_{3a}r_{3b}\sigma_{3}\otimes\sigma_{3}\right] \end{equation} For $j=1$, the states are restricted to the $x$ axis of the Bloch sphere of each system \begin{align} \rho_{1}^{ab}&=4^{-1}\left[\sigma_{0}\otimes\sigma_{0}+r_{1b}\sigma_{0}\otimes\sigma_{1}+r_{1a}\sigma_{1}\otimes\sigma_{0}+r_{1a}r_{1b}\sigma_{1}\otimes\sigma_{1}\right] \\ &= \left(1+r_{1a}\right)\left(1+r_{1b}\right)|++\rangle\langle++| \nonumber+\left(1+r_{1a}\right)\left(1-r_{1b}\right)|+-\rangle\langle+-| \\ &\hspace{1cm}+\left(1-r_{1a}\right)\left(1+r_{1b}\right)|-+\rangle\langle-+|+\left(1-r_{1a}\right)\left(1-r_{1b}\right)|--\rangle\langle--|, \end{align} with $|\pm\rangle=2^{-{1}/{2}}\left(|0\rangle\pm|1\rangle\right).$ In this case the quantum coherence is given by $\mathcal{C}\left(\rho_{1}^{ab}\right)=\left|r_{1a}\right|\left|r_{1b}\right|$. \end{comment} \section{Wigner-Yanase asymmetry measure} \label{sec:AWY} In this section, we present the asymmetry measure based on the skew information of Wigner and Yanase \cite{Wigner1963}. Here we are interested in quantifying the presence of asymmetry of states in relation to the Hamiltonian that generates the dynamics for MDI. The Wigner-Yanase asymmetry of an arbitrary state $\rho$ in relation to $\mathcal{H}$ is given by the following expression \cite{Marvian2014}: \begin{equation} \label{equation:Asymetry} A\left(\rho,\mathcal{H}\right) := -\frac{1}{2}tr\left[\rho^{{1}/{2}},\mathcal{H}\right]^{2} = tr\left(\rho\mathcal{H}^{2}\right)-tr\left(\rho^{{1}/{2}}\mathcal{H}\rho^{{1}/{2}}\mathcal{H}\right), \end{equation} where $\left[.,.\right]$ represents the commutator. 
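For readers who want to experiment numerically, the asymmetry measure above and its invariance along the dynamics generated by $\mathcal{H}$ can be reproduced in a few lines. The sketch below (Python with numpy/scipy; illustrative only, not code from this work) implements Eq. (\ref{equation:Asymetry}) for the MDI Hamiltonian with $D=1$:

```python
import numpy as np
from scipy.linalg import expm, sqrtm

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# MDI Hamiltonian (D = 1, n along z): H = (sx⊗sx + sy⊗sy - 2 sz⊗sz)/2
H = (np.kron(sx, sx) + np.kron(sy, sy) - 2 * np.kron(sz, sz)) / 2

def wy_asymmetry(rho, H):
    """Wigner-Yanase asymmetry A(rho, H) = -1/2 tr([sqrt(rho), H]^2)."""
    r = sqrtm(rho)
    comm = r @ H - H @ r
    return float(np.real(-0.5 * np.trace(comm @ comm)))

# a product state diagonal in the standard basis (r_{3a} = 0.4, r_{3b} = -0.2)
rho = np.kron(np.diag([0.7, 0.3]), np.diag([0.4, 0.6])).astype(complex)

# invariance under the dynamics generated by H: A(U rho U^dag, H) = A(rho, H)
U = expm(-1j * H * 1.23)
rho_t = U @ rho @ U.conj().T
assert abs(wy_asymmetry(rho_t, H) - wy_asymmetry(rho, H)) < 1e-8
```

For this diagonal product state the skew information reduces to $(\sqrt{p_{01}}-\sqrt{p_{10}})^{2}$, which the code can be used to confirm.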
It is easy to verify that the asymmetry of states evolved under a time-independent Hamiltonian is also time-independent, i.e., \begin{equation} A(\rho_{t},\mathcal{H})=A(\rho,\mathcal{H}), \end{equation} where $\rho_{t}=U\rho U^{\dagger}$ and $U=e^{-i\mathcal{H}t}$. Besides, for pure states $\sqrt{\rho}=\sqrt{|\psi\rangle\langle\psi|}=|\psi\rangle\langle\psi|$, and the calculation of the asymmetry reduces to computing the variance of the generator of the dynamics: \begin{equation} \label{equation:AEP} A\left(|\psi\rangle\langle\psi|,\mathcal{H}\right)=\langle\psi|\mathcal{H}^{2}|\psi\rangle-\langle\psi|\mathcal{H}|\psi\rangle^{2}. \end{equation} Besides being associated with the quantum coherence in the eigenbasis of the generator, such expressions, due to the structure of the Hamiltonian considered in this work, also reveal an important role played by its eigenvalues, a fact that will become evident in the next sections. Actually, the Hamiltonian of the MDI has a null eigenvalue, and the contribution of the subspace associated with this eigenvalue is not captured by the asymmetry measure defined above. This fact will be used as an argument to consider a Wigner-Yanase asymmetry defined using the unitary operation, rather than the Hamiltonian, for measuring how much a given initial state changes under the action of a given transformation. \section{Asymmetry of states in relation to the MDI Hamiltonian} \label{sec:AsymmetryIDM} For pure states, using the expression for the asymmetry in Eq. (\ref{equation:AEP}), we have \begin{equation} \label{equation:AsyMDI} A\left(|\psi_{ab}\rangle,\mathcal{H}\right)=\frac{4}{9} \left\{ 2\left(\alpha_{a}\beta_{b}+\beta_{a}\alpha_{b}\right)^{2}+\left(\alpha_{a}^{2}\alpha_{b}^{2}+\beta_{a}^{2}\beta_{b}^{2}\right)\right. \left.-\left[\left(\alpha_{a}\beta_{b}+\beta_{a}\alpha_{b}\right)^{2}-\left(\alpha_{a}^{2}\alpha_{b}^{2}+\beta_{a}^{2}\beta_{b}^{2}\right)\right]^{2}\right\}.
\end{equation} The dependence of the asymmetry function on the parameters $\theta_{a}$ and $\theta_{b}$, which define the possible initial state configurations, is illustrated graphically in Fig. \ref{fig:Asymmetry}. We can observe in this figure that the maximum values of the asymmetry are reached in the regions around the following two states \begin{equation}|++\rangle=\left(|\Phi_{+}\rangle+|\Psi_{+}\rangle\right)/\sqrt{2} \text{ and } |--\rangle=\left(|\Phi_{+}\rangle-|\Psi_{+}\rangle\right)/\sqrt{2}, \end{equation} which are balanced superpositions of a pair of the Hamiltonian eigenstates corresponding to non-zero and distinct eigenvalues. In this case, the maximum asymmetry coincides with the maximum local coherence of each dipole, and corresponds to initial states for which maximum entanglement is generated at some instant of time along the MDI dynamics (see Ref. \cite{Pinto-Maziero2018}). Throughout this article, quantum coherence is measured with relation to the standard basis $\{|0\rangle,|1\rangle\}$. \begin{figure} \centering \includegraphics[width=0.66\textwidth]{AsyPure.pdf} \caption{\label{fig:Asymmetry} (Color online) Behavior of the Wigner-Yanase asymmetry of states described by $|\psi_{ab}\rangle$ as a function of the initial state parameters $\theta_{a}$ and $\theta_{b}$. It can be seen that the maximum value of the asymmetry function, $A=1$, is reached when the initial state is $|++\rangle$ or $|--\rangle$, and its minimum value, $A=0$, is obtained for the states $|00\rangle$ and $|11\rangle$.
Besides, we identify the intermediate values of asymmetry $A=4/9$ and $A=6/9$, which are associated with the initial states $\left\{ |01\rangle,|10\rangle\right\}$ and $\sum_{j=1}^{4}\frac{1}{\sqrt{4}}|E_{j}\rangle$, respectively, with $|E_{j}\rangle$ being the eigenvectors of $\mathcal{H}$.} \end{figure} We observe a similar pattern in the regions of asymmetry values around $6/9$ $(\approx 0.666)$, which are associated with balanced superpositions of all Bell basis states. These states correspond to dipole $a$ having null local coherence and dipole $b$ showing maximum coherence, or vice versa: \begin{align} & |0\rangle\otimes|\pm\rangle=2^{-1}\left(\pm|\Psi_{+}\rangle\pm|\Psi_{-}\rangle+|\Phi_{+}\rangle+|\Phi_{-}\rangle\right)\text{, } |\pm\rangle\otimes|0\rangle=2^{-1}\left(\pm|\Psi_{+}\rangle\mp|\Psi_{-}\rangle+|\Phi_{+}\rangle+|\Phi_{-}\rangle\right), \\ & |1\rangle\otimes|\pm\rangle=2^{-1}\left(|\Psi_{+}\rangle-|\Psi_{-}\rangle\pm|\Phi_{+}\rangle\mp|\Phi_{-}\rangle\right)\text{, } |\pm\rangle\otimes|1\rangle=2^{-1}\left(|\Psi_{+}\rangle+|\Psi_{-}\rangle\pm|\Phi_{+}\rangle\mp|\Phi_{-}\rangle\right), \end{align} and lead to similar intermediate entanglement values ($\approx 0.6$) at specific times ($t=\pi/4$) of the MDI dynamics \cite{Pinto-Maziero2018}. Above, and in what follows, we ignore global phases, since the Wigner-Yanase asymmetry does not depend on them. We notice also the existence of states with asymmetry equal to $4/9$, corresponding to the states \begin{equation} |01\rangle=\frac{1}{\sqrt{2}}\left(|\Psi_{+}\rangle+|\Psi_{-}\rangle\right) \text{ and } |10\rangle=\frac{1}{\sqrt{2}}\left(|\Psi_{+}\rangle-|\Psi_{-}\rangle\right), \end{equation} which lead to the production of maximum entanglement at some moment of the MDI dynamics (e.g. at $t=\pi/4$).
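The particular values quoted above can be checked numerically from the variance formula of Eq. (\ref{equation:AEP}). The following Python sketch does so, under the assumption (inferred from Eq. (\ref{equation:AsyMDI}) and Fig. \ref{fig:Asymmetry}) that the plotted asymmetry includes a $4/9$ normalization factor, so that its maximum value is $1$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = (np.kron(sx, sx) + np.kron(sy, sy) - 2 * np.kron(sz, sz)) / 2

def pure_asymmetry(psi):
    """Variance of H in |psi>, Eq. (AEP), times the assumed 4/9 normalization."""
    psi = psi / np.linalg.norm(psi)
    mean = np.vdot(psi, H @ psi).real
    mean2 = np.vdot(psi, H @ H @ psi).real
    return 4 / 9 * (mean2 - mean**2)

zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
plus = (zero + one) / np.sqrt(2)

assert np.isclose(pure_asymmetry(np.kron(plus, plus)), 1.0)    # |++>, maximum
assert np.isclose(pure_asymmetry(np.kron(zero, one)), 4 / 9)   # |01>
assert np.isclose(pure_asymmetry(np.kron(zero, plus)), 6 / 9)  # |0+>
assert np.isclose(pure_asymmetry(np.kron(zero, zero)), 0.0)    # |00>, eigenstate
```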
This observation leads us to conclude that the amount of initial state asymmetry does not necessarily establish a direct and unambiguous relationship with entanglement creation. There are also states with maximum local coherence, such as \begin{equation} |+\rangle\otimes|-\rangle=\frac{1}{\sqrt{2}}\left(|\Phi_{-}\rangle-|\Psi_{-}\rangle\right) \text{ and } |-\rangle\otimes|+\rangle=\frac{1}{\sqrt{2}}\left(|\Phi_{-}\rangle+|\Psi_{-}\rangle\right), \end{equation} that have asymmetry around $1/9$ and also lead to maximum dynamical entanglement. From the results presented above, we infer that the presence of asymmetry in relation to the Hamiltonian of the MDI does lead to the existence of entanglement at some time. However, in general there is no direct quantitative relationship between $A$ and $E$. This fact is more evident for initial state configurations involving a pair of states in which one of the elements is the singlet state $|\Psi_{-}\rangle$. This is so because the nullity of the corresponding eigenvalue leads to the non-contribution of this state to the calculation of the asymmetry, reducing its magnitude in these cases. On the other hand, we can observe that the lines of states $\theta_{b}=\theta_{a}$ and $\theta_{b}=2\pi-\theta_{a}$ produce null asymmetry in all regions whose configurations are $\theta_{a} = n\pi$ with $n\in\mathbb{Z}$, a behavior analogous to what occurs during the entanglement dynamics \cite{Pinto-Maziero2018}. So, in the next section, we will calculate the asymmetry of local states, which, in addition to helping in the understanding of the role of local states in the evolution of asymmetry along the MDI dynamics, also contains the partial contributions of the subspace associated with the null eigenvalue of the Hamiltonian. Regarding the mixed states configurations defined in Sec.
\ref{sec:Observables and configurations}, the Wigner-Yanase asymmetry for the class of incoherent states is given by \begin{equation} A(\rho_{3}^{ab},\mathcal{H})=2\left(1-r_{3a}r_{3b}-\sqrt{\left(1-r_{3a}^{2}\right)\left(1-r_{3b}^{2}\right)}\right)/9, \end{equation} while for the class of states with coherence the asymmetry reads \begin{equation} A\left(\rho_{1}^{ab},\mathcal{H}\right)=\left(5+4r_{1a}r_{1b}-5\sqrt{\left(1-r_{1a}^{2}\right)\left(1-r_{1b}^{2}\right)}\right)/9. \end{equation} These functions are shown in Fig. \ref{fig:second}. These results reveal the main aspects already obtained for pure states, but also show that the decrease in purity leads to a decrease in the asymmetry of the states. \begin{figure}
\centering
\label{fig:first}%
\includegraphics[height=2.8in]{Asymmrho3.pdf}
\qquad
\label{fig:second}%
\includegraphics[height=2.8in]{Asymmrho1.pdf}
\caption{(Color online) (a) Wigner-Yanase asymmetry of the $\rho_{3}^{ab}$ class of states, whose quantum coherence is null in the standard basis, as a function of the $r_{3a}$ and $r_{3b}$ parameters. For $r_{3a}=-r_{3b}$ we have $A=4r_{3a}^{2}/9$. So, the maximum value of the asymmetry for this class of states is obtained when $r_{3a}=\pm1$ and $r_{3b}=\mp1$, which corresponds to the states $|01\rangle$ and $|10\rangle$, respectively. The line of states where $r_{3a}=r_{3b}$ has asymmetry equal to zero. (b) Asymmetry of the $\rho_{1}^{ab}$ class of states as a function of the parameters $r_{1a}$ and $r_{1b}$, associated with the Bloch vector of each dipole. For $r_{1a}=r_{1b}$ we have $A=r_{1a}^{2}$. Thus, for such a class of states, the maximum values of the asymmetry are obtained for $r_{1a}=\pm1$ and $r_{1b}=\pm1$, which corresponds to the states $|++\rangle$ and $|--\rangle$, respectively.
The line of states with $r_{1a}=-r_{1b}$ has $A=r_{1a}^{2}/9$.} \end{figure} \section{Asymmetry of local states} \label{sec:local} From the study of the asymmetry of the product-state classes developed so far, it was possible to observe that, although positive asymmetry is a necessary condition for the production of entanglement under the MDI dynamics, there is no direct relationship between them. Thus, as the asymmetry is not affected by the unitary dynamics of global states, i.e., $A(\rho_{t}^{ab},\mathcal{H})=A(\rho^{ab},\mathcal{H})$, in order to understand the mechanism of the temporal evolution of the asymmetry and its local contributions, we will evaluate the asymmetry of the local evolved states, i.e., we want to quantify the susceptibility of local states under the action of the dynamics generator $\mathcal{H}$. To define this quantifier, we take the partial trace \cite{ptr} over one of the dipoles, e.g., $$\rho_{t}^{a}=tr_{b}(\rho_{t}^{ab}),$$ and, in order to preserve the correspondence with the Hamiltonian's dimensionality, we compose this reduced state with the maximally mixed state for the other dipole, i.e., \begin{equation} \tilde{\rho}_{t}^{a} := \rho_{t}^{a}\otimes\frac{\sigma_{0}}{2}. \end{equation} Having constructed this state, we define the local Wigner-Yanase asymmetry of subsystem $a$ as \begin{equation} A_{l}(\rho_{t}^{a},\mathcal{H}):=A\left(\tilde{\rho}_{t}^{a},\mathcal{H}\right) :=-\frac{1}{2}tr\left[\tilde{\rho}_{t}^{a},\mathcal{H}\right]^{2}. \end{equation} The local asymmetry $A_{l}(\rho_{t}^{b},\mathcal{H})$ is defined in a similar way. \subsection{Local asymmetry for pure-product initial states} For the evolved pure states shown in Sec.
\ref{sec:Observables and configurations}, using the state \begin{align} \tilde{\rho}_{t}^{a} &= tr_{b}|\psi_{t}^{ab}\rangle\langle\psi_{t}^{ab}|\otimes\frac{\sigma_{0}}{2} \\ & =\left\{ \left(\alpha_{a}^{2}\alpha_{b}^{2}+\alpha_{a}^{2}\beta_{b}^{2}\cos^{2}\left(t\right)+\beta_{a}^{2}\alpha_{b}^{2}\sin^{2}\left(t\right)\right)|0\rangle\langle0|+\left(\beta_{a}^{2}\beta_{b}^{2}+\beta_{a}^{2}\alpha_{b}^{2}\cos^{2}\left(t\right)+\alpha_{a}^{2}\beta_{b}^{2}\sin^{2}\left(t\right)\right)|1\rangle\langle1|\right. \nonumber \\ & +\left[\alpha_{a}\beta_{a}\cos\left(t\right)\left(\alpha_{b}^{2}\exp\left(2it\right)+\beta_{b}^{2}\exp\left(-2it\right)\right)-i\alpha_{b}\beta_{b}\sin\left(t\right)\left(\alpha_{a}^{2}\exp\left(2it\right)+\beta_{a}^{2}\exp\left(-2it\right)\right)\right]|0\rangle\langle1| \nonumber \\ & \left. +\left[\alpha_{a}\beta_{a}\cos\left(t\right)\left(\beta_{b}^{2}\exp\left(2it\right)+\alpha_{b}^{2}\exp\left(-2it\right)\right)+i\alpha_{b}\beta_{b}\sin\left(t\right)\left(\beta_{a}^{2}\exp\left(2it\right)+\alpha_{a}^{2}\exp\left(-2it\right)\right)\right]|1\rangle\langle0|\right\} \otimes\frac{\sigma_{0}}{2} \end{align} we compute the local asymmetry of subsystem $a$. The expression for $A_{l}(\rho_{t}^{a},\mathcal{H})$ is too cumbersome to be shown here, so this local asymmetry is shown graphically in Fig. \ref{fig:LocalAsymmetrythb} as a function of the initial state of dipole $a$ and of time, for some initial states of dipole $b$. From these plots, it is possible to determine the favorable time windows for obtaining the highest values of the local asymmetry, which occur for $t=k\pi$, with $k=0,1,2,\cdots$, and for regions where $\theta_{a}=\pi/2$ or $\theta_{a}=3\pi/2$, regardless of $\theta_{b}$, and for $t=k\pi/3$ in the region where $\theta_{a}=\theta_{b}=\pi/2$. Besides, we can see that $A_{l}(\rho_{t}^{a},\mathcal{H})$ has a period equal to $\pi.$ In Fig.
\ref{fig:LocalAsymmetry}, the local asymmetry is shown as a function of the initial state parameters for some instants of time. The maximum values of $A_{l}(\rho_{t}^{a},\mathcal{H})$ are around $0.55$ and are obtained in the regions of $\theta_{a}=\pi/2$ or $\theta_{a}=3\pi/2$ with $t=k\pi$ for $k = 0,1,2, \cdots.$ In addition, for $t=\pi/3$ or $t=2\pi/3$, in the regions $\theta_{a}=\theta_{b}=\pi/2$ and $\theta_{a}=\theta_{b}=3\pi/2,$ respectively, the value $0.55$ is also obtained for $A_{l}(\rho_{t}^{a},\mathcal{H})$. We observe that the general dependence of the local asymmetry on the initial state and time is intricate and quite similar to the dependence of the local quantum coherence given by the $l_{1}$-norm coherence. Besides, the relationship between local asymmetry and entanglement is even less direct than that observed for the global asymmetry. So, in the next section, we shall study the global asymmetry of the system state with respect to the elements of the group of time transformations, i.e., we shall look at the quantum state susceptibility in relation to the time evolution operator associated with the MDI Hamiltonian. \begin{figure}[H] \centering \includegraphics[width=1.04\textwidth]{Asylocpurethb.pdf} \caption{\label{fig:LocalAsymmetrythb} (Color online) Dynamics of the local asymmetry, $A_{l}(\rho_{t}^{a},\mathcal{H}),$ of dipole $a$ as a function of time and of the pure-product initial state configuration parameters $\theta_{a}$ and $\theta_{b}$.
For these plots, we consider the parameter $\theta_{b}$ fixed and allow the parameters $t$ and $\theta_{a}$ to vary ``continuously'' in their respective intervals.} \end{figure} \begin{figure}[H] \centering \includegraphics[width=1.03\textwidth]{Asylocpure.pdf} \caption{\label{fig:LocalAsymmetry} (Color online) These plots illustrate the behavior of the local asymmetry of subsystem $a$, $A_{l}(\rho_{t}^{a},\mathcal{H}),$ as a function of time and of the parameters of the initial pure-product state configurations represented by $\theta_{a}$ and $\theta_{b}$.} \end{figure} \subsection{Local asymmetry for mixed-product initial states} For the evolved mixed states shown in Sec. \ref{sec:Observables and configurations}, we have the reduced states $\rho_{j}^{a}(t)=tr_{b}\left(\rho_{j}^{ab}(t)\right)$. So, the local asymmetry for dipole $a$ is computed using the density matrices: \begin{align} 4\tilde{\rho}_{1}^{a}(t)= & 4\rho_{1}^{a}(t) \otimes\frac{\sigma_{0}}{2} \\ = & \left(\left(r_{1b}+r_{1a}\right)\cos\left(3t\right)-\left(r_{1b}-r_{1a}\right)\cos\left(t\right)\right)\left(|00\rangle\langle10|+|01\rangle\langle11|+|10\rangle\langle00|+|11\rangle\langle01|\right)/8 \nonumber \\ & + \left(|00\rangle\langle00|+|01\rangle\langle01|+|10\rangle\langle10|+|11\rangle\langle11|\right) \nonumber \end{align} and \begin{align} \tilde{\rho}_{3}^{a}(t)= & \rho_{3}^{a}(t)\otimes\sigma_{0}/2 \\ = & \left(2+\left(r_{3b}+r_{3a}\right)+\left(r_{3a}-r_{3b}\right)\cos\left(2t\right)\right)\left(|00\rangle\langle00|+|01\rangle\langle01|\right)/8 \nonumber \\ & +\left(2-\left(r_{3b}+r_{3a}\right)-\left(r_{3a}-r_{3b}\right)\cos\left(2t\right)\right)\left(|10\rangle\langle10|+|11\rangle\langle11|\right)/8.
\end{align} Thereby, the respective asymmetries of these local states are given by the expressions: \begin{align} 4\sqrt{\frac{A_{l}\left(\rho_{1}^{a}(t),\mathcal{H}\right)}{5}}= & \sqrt{\left(r_{1a}-r_{1b}\right)\cos\left(t\right)+\left(r_{1a}+r_{1b}\right)\cos\left(3t\right)+2} \nonumber \\ &-\sqrt{\left(r_{1b}-r_{1a}\right)\cos\left(t\right)-\left(r_{1a}+r_{1b}\right)\cos\left(3t\right)+2} \end{align} and \begin{equation} 3\sqrt{A_{l}\left(\rho_{3}^{a}\left(t\right),\mathcal{H}\right)}=\sqrt{1-r_{3a}+\left(r_{3a}-r_{3b}\right)\sin^{2}\left(t\right)}-\sqrt{1+r_{3a}+\left(r_{3b}-r_{3a}\right)\sin^{2}\left(t\right)}. \end{equation} These functions are shown graphically in Fig. \ref{fig:rho3LocalAsymmetry}. The same observations made in the last subsection for pure-product initial states also hold here. \begin{figure}[H] \centering \includegraphics[width=1.02\textwidth]{localasymmetry.pdf} \caption{\label{fig:rho3LocalAsymmetry} (Color online) Local asymmetry of dipole $a$, $A_{l}(\rho_{j}^{a}(t),\mathcal{H})$, as a function of time and $r_{ja}$ for some values of $r_{jb}$, for the initial states $\rho_{j}^{ab}=\rho_{j}^{a}\otimes\rho_{j}^{b}$ evolved under the MDI Hamiltonian. We see that the local asymmetry has period equal to $\pi$.} \end{figure} \section{Asymmetry in relation to the unitary operator} \label{sec:unitary} In this section, we study the Wigner-Yanase skew information in relation to the unitary operator generated by the magnetic dipolar interaction Hamiltonian. We do this expecting to obtain a more sensitive measure regarding the temporal evolution of the states and which includes the contribution of the phases corresponding to the eigenvalues of the observable responsible for the dynamics. 
So, we define the unitary asymmetry as the Wigner-Yanase skew information of a state $\rho$ with respect to the non-Hermitian unitary time evolution operator: \begin{align} A_{U}\left(\rho,U_{t}\right)& := \frac{1}{2}tr\left\{ \left[\sqrt{\rho},U_{t}\right]^{\dagger}\left[\sqrt{\rho},U_{t}\right]\right\} \\ & = 1-tr\left\{ \sqrt{\rho}U_{t}\sqrt{\rho}U_{t}^{\dagger}\right\}. \end{align} For pure states, this function can be recast as \begin{align} A_{U}\left(|\psi\rangle,U_{t}\right) & =1-tr\left\{ |\psi\rangle\langle\psi|U_{t}|\psi\rangle\langle\psi|U_{t}^{\dagger}\right\} \\ & =1-|\langle \psi|\psi_{t}\rangle|^{2}, \end{align} where $|\psi_{t}\rangle=U_{t}|\psi\rangle$. Such a function determines the degree of dissimilarity between the initially prepared state and the state obtained from the evolution dictated by the unitary operator. To assess the unitary asymmetry in the context of the MDI, we consider the spectral decomposition of the unitary operator, given by the following Bell-diagonal matrix: \begin{equation} U_{t}=|\Psi_{-}\rangle\langle\Psi_{-}|+\exp\left(-2it\right)|\Psi_{+}\rangle\langle\Psi_{+}|+\exp\left(it\right)\left(|\Phi_{-}\rangle\langle\Phi_{-}|+|\Phi_{+}\rangle\langle\Phi_{+}|\right).
\end{equation} Considering the same class of pure states of the previous sections, $$|\psi\rangle=|\psi_{ab}\rangle\equiv c_{1}|\Psi_{-}\rangle+c_{2}|\Psi_{+}\rangle+c_{3}|\Phi_{-}\rangle+c_{4}|\Phi_{+}\rangle,$$ with \begin{equation} \begin{cases} c_{1}=2^{-1/2}\left[\cos\left(\frac{\theta_{a}}{2}\right)\sin\left(\frac{\theta_{b}}{2}\right)-\sin\left(\frac{\theta_{a}}{2}\right)\cos\left(\frac{\theta_{b}}{2}\right)\right],\\ c_{2}=2^{-1/2}\left[\cos\left(\frac{\theta_{a}}{2}\right)\sin\left(\frac{\theta_{b}}{2}\right)+\sin\left(\frac{\theta_{a}}{2}\right)\cos\left(\frac{\theta_{b}}{2}\right)\right],\\ c_{3}=2^{-1/2}\left[\cos\left(\frac{\theta_{a}}{2}\right)\cos\left(\frac{\theta_{b}}{2}\right)-\sin\left(\frac{\theta_{a}}{2}\right)\sin\left(\frac{\theta_{b}}{2}\right)\right],\\ c_{4}=2^{-1/2}\left[\cos\left(\frac{\theta_{a}}{2}\right)\cos\left(\frac{\theta_{b}}{2}\right)+\sin\left(\frac{\theta_{a}}{2}\right)\sin\left(\frac{\theta_{b}}{2}\right)\right], \end{cases} \end{equation} the unitary asymmetry is given by \begin{equation} A_{U}\left(|\psi_{ab}\rangle,U_{t}\right)=1-\left[c_{1}^{2}+c_{2}^{2}\cos\left(2t\right)+\left(c_{3}^{2}+c_{4}^{2}\right)\cos\left(t\right)\right]^{2}-\left[\left(c_{3}^{2}+c_{4}^{2}\right)\sin\left(t\right)-c_{2}^{2}\sin\left(2t\right)\right]^{2}. \end{equation} Analyzing the behavior of the unitary asymmetry dynamics illustrated in Figs. \ref{fig:UnitSkewInfthb} and \ref{fig:UnitSkewInf}, and comparing with the evolution of entanglement under the MDI \cite{Pinto-Maziero2018}, we emphasize that the regions of maximum entanglement during the dynamics of the MDI are contained in the regions of states of maximum unitary asymmetry $A_{U}$, although they may occur in different periods of their temporal dynamics. In addition, the regions of null entanglement coincide with regions of null unitary asymmetry. On the other hand, there are also regions with maximum unitary asymmetry but with partial entanglement. 
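As a numerical consistency check (an illustrative aside assuming Python with NumPy; the function names are ours and not part of the analysis), the closed-form expression above can be compared with the definition $A_{U}=1-|\langle\psi|\psi_{t}\rangle|^{2}$, with $U_{t}$ assembled from its Bell-diagonal spectral decomposition:

```python
import numpy as np

# Bell basis in the computational ordering |00>, |01>, |10>, |11>.
s2 = np.sqrt(2.0)
psi_m = np.array([0, 1, -1, 0], dtype=complex) / s2  # |Psi_->
psi_p = np.array([0, 1, 1, 0], dtype=complex) / s2   # |Psi_+>
phi_m = np.array([1, 0, 0, -1], dtype=complex) / s2  # |Phi_->
phi_p = np.array([1, 0, 0, 1], dtype=complex) / s2   # |Phi_+>

def U(t):
    """Evolution operator built from its Bell-diagonal spectral decomposition."""
    proj = lambda v: np.outer(v, v.conj())
    return (proj(psi_m) + np.exp(-2j * t) * proj(psi_p)
            + np.exp(1j * t) * (proj(phi_m) + proj(phi_p)))

def A_U_definition(theta_a, theta_b, t):
    """A_U = 1 - |<psi|U_t|psi>|^2 for the pure product state |theta_a>|theta_b>."""
    psi = np.kron([np.cos(theta_a / 2), np.sin(theta_a / 2)],
                  [np.cos(theta_b / 2), np.sin(theta_b / 2)]).astype(complex)
    return 1.0 - abs(psi.conj() @ U(t) @ psi) ** 2

def A_U_closed_form(theta_a, theta_b, t):
    """Closed-form expression in terms of the Bell coefficients c_1,...,c_4."""
    ca, sa = np.cos(theta_a / 2), np.sin(theta_a / 2)
    cb, sb = np.cos(theta_b / 2), np.sin(theta_b / 2)
    c1, c2 = (ca * sb - sa * cb) / s2, (ca * sb + sa * cb) / s2
    c3, c4 = (ca * cb - sa * sb) / s2, (ca * cb + sa * sb) / s2
    re = c1**2 + c2**2 * np.cos(2 * t) + (c3**2 + c4**2) * np.cos(t)
    im = (c3**2 + c4**2) * np.sin(t) - c2**2 * np.sin(2 * t)
    return 1.0 - re**2 - im**2

# The two expressions agree on random configurations (theta_a, theta_b, t).
rng = np.random.default_rng(1)
for theta_a, theta_b, t in rng.uniform(0.0, 2.0 * np.pi, size=(200, 3)):
    assert abs(A_U_definition(theta_a, theta_b, t)
               - A_U_closed_form(theta_a, theta_b, t)) < 1e-12
```

In particular, both expressions give $A_{U}=1$ at $\theta_{a}=\theta_{b}=\pi/2$, $t=\pi/3$, consistent with the maxima discussed above.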
\begin{figure}[H] \centering \includegraphics[width=1.03\textwidth]{AsyUopthb.pdf} \caption{\label{fig:UnitSkewInfthb} (Color online) Behavior of the unitary asymmetry as a function of the dipole $a$ initial state parameter $\theta_{a}$ and of the time parameter $t$, for some fixed values of the dipole $b$ initial state parameter $\theta_{b}$. This sequence of plots allows us to identify the periods of time resulting in a greater unitary asymmetry $A_{U}$. } \end{figure} \begin{figure}[H] \centering \includegraphics[width=1.03\textwidth]{AsyUop.pdf} \caption{\label{fig:UnitSkewInf} (Color online) These graphs illustrate the behavior of the asymmetry of states with respect to the MDI unitary operator as a function of the parameters determining the initial conditions, $\theta_{a}$ and $\theta_{b}$, for some fixed values of the time parameter $t$. The only regions of states that remain constant throughout the MDI dynamics are those where the parameters $\theta_{a}$ and $\theta_{b}$ correspond to the states $|00\rangle$ and $|11\rangle$, whose asymmetry $A_{U}$ is null, and which coincide with the regions where the global asymmetry $A\left(|\psi_{ab}\rangle,\mathcal {H}\right)$ is null. However, in general the state asymmetry $A_{U}$ behaves similarly to the global asymmetry $A\left(|\psi_{ab}\rangle,\mathcal{H}\right)$ when the time parameter is fixed around $\pi/4$. In this figure, we can see that the maximum value of the unitary asymmetry $A_{U}$ is obtained in three periods of its dynamics, with regions of different state configurations. For the period $t=\pi/3$, we have maximum asymmetry $A_{U}$ in the region of states with $\theta_{a}=\theta_{b}=\pi/2$ or $\theta_{a}=\theta_{b}=3\pi/2$. For the case where $t=\pi/2$, the maximum unitary asymmetry $A_{U}$ is reached in regions where $\theta_{a}=\pi$ and $\theta_{b}=0$ or $\theta_{b}=2\pi$, and vice versa.
In addition, for the period $t=\pi$, any line of states where $\theta_{a}=\pi/2$ or $\theta_{a}=3\pi/2$, for any region of $\theta_{b}$, and vice versa, leads to the maximum value of the unitary asymmetry $A_{U}$. } \end{figure} \section{Conclusions} \label{sec:conc} In this work, we analyzed the quantum state Wigner-Yanase asymmetry in relation to the Magnetic Dipolar Interaction (MDI) Hamiltonian as the generator of the temporal evolution. We described the dependence of the asymmetry in terms of the parameters that define the Hamiltonian and in terms of the initial state configurations of the established bipartite system, where we considered classes of separable pure and mixed initial states restricted to real local phases. We obtained analytical expressions for the asymmetry of pure and mixed states, from which it was possible to identify the regions that admit maximum asymmetry and thus to establish relations with the purity and with the entanglement production during the dynamics under the MDI. We also defined the local asymmetry, a quantity that reveals the susceptibility of the local states under the action of the Hamiltonian generating the global dynamics under the MDI. Furthermore, in order to quantify the role of the Hamiltonian eigenvalues in the MDI dynamics, we defined the Wigner-Yanase skew information measure in relation to the MDI unitary operator, thus obtaining a better agreement between the states with greater skew information and the states capable of producing entanglement along the MDI dynamics. \begin{acknowledgments} This work was supported by the Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'ivel Superior (CAPES), process 88882.427913/2019-01, and by the Instituto Nacional de Ci\^encia e Tecnologia de Informa\c{c}\~ao Qu\^antica (INCT-IQ), process 465469/2014-0. \end{acknowledgments}
\section{Introduction}\label{sec:intr} \renewcommand{\theequation}{{\thesection}.\arabic{equation}} \setcounter{equation}{0} \subsection{Motivation}\label{sec:setting} We consider the following first-order integro-differential hyperbolic system in one space variable \begin{equation} \begin{array}{ll} \displaystyle\partial_{t}u_j +a_j(x,t)\partial_{x}u_j +\sum_{k=1}^{n}b_{jk}(x,t)u_k +\sum_{k=1}^{n}\int_{0}^{x}g_{jk}(y,t)u_k(y,t)dy & \\ [2mm] \qquad =\displaystyle \sum_{k=m+1}^{n}h_{jk}(x,t)u_k(0,t) +\sum_{k=1}^{m}h_{jk}(x,t)u_k(1,t) + f_j(x,t), \;\;\; x\in(0,1),\; j\le n,& \end{array} \label{f1} \end{equation} subject to periodic conditions in time \begin{equation}\label{f1*} u_{j}(x,t)=u_{j}(x,t+2\pi), \;\;\; j\le n, \end{equation} and integral boundary conditions in space \begin{equation}\label{f2} \begin{array}{ll} u_{j}(0,t)= \displaystyle\sum_{k=1}^{n}\int_{0}^{1}r_{jk}(x,t)u_{k}(x,t)\, dx, \;\;\; 1\le j\le m, \\ [2mm] u_{j}(1,t)= \displaystyle\sum_{k=1}^{n}\int_{0}^{1}r_{jk}(x,t)u_{k}(x,t)\, dx, \;\;\; m<j\le n, \end{array} \end{equation} where $m$ and $n$ are integers with $0\le m\le n$ and $n\ge1$. Note that the boundary terms $u_{m+1}(0,t), \dots, u_n(0,t)$ and $u_1(1,t), \dots,$ $u_m(1,t)$ enter the differential system \reff{f1}, while the boundary terms $u_{1}(0,t), \dots,$ $u_m(0,t)$ and $u_{m+1}(1,t)$, $\dots,u_n(1,t)$ enter the boundary conditions \reff{f2}. In this form, which is motivated by applications, the problem has been studied in \cite{KrsSm,SanNak}. The Volterra integral terms in \reff{f1} are motivated by the aforementioned applications (see, e.g., \cite{KrsSm,SanNak}). As will be seen from our proof of Theorem \ref{thm:th12}, our analysis also applies to the case when these terms are replaced by Fredholm integral terms. In general, systems of the type \reff{f1}, \reff{f2} model a broad range of physical problems such as traffic flows, chemical reactors and heat exchangers \cite{SanNak}.
They are also used to describe problems of population dynamics (see, e.g., \cite{eft,Keyfitz,mogulru,webb} and references therein) and polymer rheology \cite{engl}. Moreover, they appear in the study of optimal boundary control problems \cite{KrsSm,Naka,SanNak,coron}. Establishing a Fredholm property is a first step in developing a theory of local smooth continuation \cite{KmRe4} and bifurcation \cite{bueft,cush,KmRe3} for Fredholm hyperbolic operators; in particular, it makes available such tools as the Lyapunov-Schmidt reduction. Buono and Eftimie \cite{bueft} consider autonomous $2\times 2$ nonlocal hyperbolic systems in a single space variable, describing the formation and movement of various animal, cell and bacterial aggregations, with some biologically motivated integral terms in the differential equations. One of the main results in \cite{bueft} is a Fredholm alternative for the linearizations at a steady state, which enables performing a bifurcation analysis by means of the Lyapunov-Schmidt reduction. Here we continue this line of research, establishing the Fredholm property for a wide range of non-autonomous nonlocal problems for $(n\times n)$-hyperbolic systems, with nonlocalities both in the differential equations and in the boundary conditions. We show that the problem \reff{f1}--\reff{f2} demonstrates a completely non-resonant behavior (in other words, no {\it small divisors} occur). More precisely, we prove the Fredholm alternative for \reff{f1}--\reff{f2} under the only assumptions that the coefficients in \reff{f1} and \reff{f2} are sufficiently smooth and that a kind of Levy condition is fulfilled. The proof extends the ideas of \cite{KR3,km1} for proving the Fredholm alternative for first-order one-dimensional hyperbolic systems with reflection boundary conditions, and also the ideas of \cite{Kmit} for proving a smoothing property for boundary value hyperbolic problems.
In contrast to \cite{KR3} and \cite{km1}, where conditions excluding a resonant behavior are imposed, the present Fredholmness result is, in this respect, unconditional. \subsection{Our result}\label{sec:contr} By $C_{n,2\pi}$ we denote the vector space of all continuous maps $u:[0,1]\times{\mathbb R}\to{\mathbb R}^{n}$ that are $2\pi$-periodic in $t$, with the norm $$ \|u\|_{\infty}=\max_{j\le n}\max_{x\in [0,1]}\max_{t\in{\mathbb R}}|u_j(x,t)|. $$ Similarly, $C_{n,2\pi}^1$ denotes the Banach space of all $u\in C_{n,2\pi}$ such that $\d_xu,\d_tu\in C_{n,2\pi}$, with the norm $$ \|u\|_{1}=\|u\|_{\infty}+\|\partial_x u\|_{\infty}+\|\partial_t u\|_{\infty}. $$ For simplicity, we skip the subscript $n$ if $n=1$ and write $C_{2\pi}$ and $C^1_{2\pi}$ for $C_{1,2\pi}$ and $C^{1}_{1,2\pi}$, respectively. We make the following natural assumptions on the coefficients of \reff{f1} and \reff{f2}: \begin{eqnarray} && \hskip-10mm a_j\in C^1_{2\pi} \mbox{ and } b_{jk}, \d_tb_{jk}, g_{jk}, h_{jk}, r_{jk}, \d_tr_{jk}\in C_{2\pi} \mbox{ for all } j\le n \mbox{ and } k\le n,\label{f4} \\ [1mm] && \hskip-10mm a_j\neq0 \mbox{ for all } (x,t)\in[0,1]\times{\mathbb R} \mbox{ and } j\le n,\label{f5} \end{eqnarray} and \begin{equation}\label{fz8} \begin{array}{ll} \mbox{for all } 1\le j\neq k\le n \mbox{ there exists } \tilde{b}_{jk}\in C_{2\pi} \mbox{ such that }\\ \d_t\tilde b_{jk} \in C_{2\pi} \mbox{ and } b_{jk}=\tilde{b}_{jk}(a_k-a_j). \end{array} \end{equation} The assumption \reff{f5} is standard and means the non-degeneracy of the hyperbolic system \reff{f1}. The assumption \reff{fz8} is a kind of the well-known Levy condition appearing in various aspects of the hyperbolic theory, for instance, in proving the spectrum-determined growth condition for semiflows generated by initial value problems for hyperbolic systems \cite{Guo,Lichtner,Neves}. It also plays a crucial role in the Fredholm analysis of hyperbolic PDEs (see Example \ref{thm:ex1} below).
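Although it is not used in the proofs, one immediate consequence of \reff{fz8} is easy to test computationally: since $\tilde{b}_{jk}$ is continuous, $b_{jk}$ must vanish wherever the speeds $a_{j}$ and $a_{k}$ coincide. The following Python sketch (the function name, the grid, and the sample coefficients are illustrative assumptions, not part of the paper) checks this necessary condition on a grid; it fails for the equal-speed coefficients of Example \ref{thm:ex1} and holds for a compliant pair.

```python
import numpy as np

def levy_necessary(a_j, a_k, b_jk, n=200, tol=1e-9):
    """Necessary consequence of the Levy condition: with a continuous
    b~_jk, the relation b_jk = b~_jk (a_k - a_j) forces b_jk = 0
    wherever the characteristic speeds a_j and a_k coincide."""
    xs = np.linspace(0.0, 1.0, n)
    ts = np.linspace(0.0, 2.0 * np.pi, n)
    X, T = np.meshgrid(xs, ts)
    coincide = np.abs(a_k(X, T) - a_j(X, T)) < tol
    return bool(np.all(np.abs(b_jk(X, T)[coincide]) < tol))

# Equal speeds a_1 = a_2 = 2/pi but b_12 = -1 (as in the example below):
# no continuous b~_12 can exist, and the check fails.
a = lambda x, t: np.full_like(x, 2.0 / np.pi)
b12_bad = lambda x, t: np.full_like(x, -1.0)
print(levy_necessary(a, a, b12_bad))      # False

# A compliant pair: b_12 = sin(t) (a_2 - a_1) with distinct speeds.
a1 = lambda x, t: 1.0 + 0.0 * x
a2 = lambda x, t: 2.0 + x
b12_good = lambda x, t: np.sin(t) * (a2(x, t) - a1(x, t))
print(levy_necessary(a1, a2, b12_good))   # True
```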
Given $j\le n$, $x\in[0,1]$, and $t\in{\mathbb R}$, the $j$-th characteristic of \reff{f1} is defined as the solution $\xi\in[0,1]\mapsto\omega_j(\xi,x,t)\in{\mathbb R}$ of the initial value problem \begin{equation}\label{f7} \d_{\xi}\omega_{j}(\xi,x,t)=\frac{1}{a_j(\xi,\omega_{j}(\xi,x,t))}, \;\;\; \omega_{j}(x,x,t)=t. \end{equation} To shorten notation, we will write $\omega_j(\xi)=\omega_j(\xi,x,t)$. In what follows we will use the equalities \begin{equation} \label{f22} \partial_x\omega_{j}(\xi)= -\frac{1}{a_j(x,t)}\exp{{\int_{\xi}^{x}\left (\frac{\partial_2a_j}{a_j^2}\right )(\eta,\omega_j(\eta))\, d\eta}} \end{equation} and \begin{equation} \label{f23} \partial_t\omega_{j}(\xi)= \exp{{\int_{\xi}^{x}\left (\frac{\partial_2a_j}{a_j^2}\right )(\eta,\omega_j(\eta))\, d\eta}}, \end{equation} where by $\d_i$ here and below we denote the partial derivative with respect to the $i$-th argument. Set \begin{equation}\label{f8} \begin{array}{ll} \displaystyle c_j(\xi,x,t)= \exp{{\int_{x}^{\xi}\left (\frac{b_{jj}}{a_j}\right )(\eta,\omega_j(\eta))\, d\eta}}, \;\;\;\\ [5mm] \displaystyle d_j(\xi,x,t)= \frac{c_j(\xi,x,t)}{a_j(\xi,\omega_j(\xi))}, \end{array} \end{equation} and $$ x_j=\left\{ \begin{array}{rl} 0 &\mbox{if}\ 1\le j\le m,\\ 1 &\mbox{if}\ m<j\le n. \end{array} \right. 
$$ Integration along the characteristic curves brings the system \reff{f1}--\reff{f2} to the integral form \begin{eqnarray}\label{f10} \displaystyle u_j(x,t) &=&\displaystyle c_j(x_j,x,t)\sum_{k=1}^{n}\int_{0}^{1}r_{jk}(\eta,\omega_j(x_j))u_{k}(\eta,\omega_{j}(x_j))\, d\eta \nonumber \\ &&-\displaystyle\sum_{k\neq j}\int_{x_j}^{x}d_{j}(\xi,x,t)b_{jk}(\xi,\omega_j(\xi))u_{k}(\xi,\omega_j(\xi))\, d\xi \nonumber \\ &&-\displaystyle\sum_{k=1}^{n}\int_{x_j}^{x}d_{j}(\xi,x,t)\int_{0}^{\xi}g_{jk}(y,\omega_j(\xi))u_{k}(y,\omega_j(\xi))\, dyd\xi \\ &&+\displaystyle \sum_{k=1}^{n}\int_{x_j}^{x}d_{j}(\xi,x,t)h_{jk}(\xi,\omega_j(\xi))u_{k}(1-x_k,\omega_j(\xi))\, d\xi \nonumber \\ &&+\displaystyle\int_{x_j}^{x}d_{j}(\xi,x,t)f_{j}(\xi,\omega_j(\xi))\, d\xi, \quad j\le n.\nonumber \end{eqnarray} Indeed, let $u$ be a $C^1$-solution to \reff{f1}--\reff{f2}. Then, using \reff{f1} and \reff{f7}, for all $j\le n$ we have \begin{eqnarray*} \lefteqn{ \frac{d}{d\xi} u_j(\xi,\omega_j(\xi))= \d_1u_j(\xi,\omega_j(\xi))+\frac{\d_2u_j(\xi,\omega_j(\xi))}{a_j(\xi,\omega_j(\xi))}}\\ &&=\frac{1}{a_j(\xi,\omega_j(\xi))}\Biggl(-\sum_{k=1}^n b_{jk}(\xi,\omega_j(\xi))u_k(\xi,\omega_j(\xi))+\displaystyle \sum_{k=m+1}^{n}h_{jk}(\xi,\omega_j(\xi))u_k(0,\omega_j(\xi)) \\ [2mm] &&\displaystyle+\sum_{k=1}^{m}h_{jk}(\xi,\omega_j(\xi))u_k(1,\omega_j(\xi))- \sum_{k=1}^{n}\int_{0}^{\xi}g_{jk}(y,\omega_j(\xi))u_k(y,\omega_j(\xi))\,dy +f_j(\xi,\omega_j(\xi))\Biggr). 
\end{eqnarray*} This is a linear inhomogeneous ordinary differential equation for the function $u_j(\cdot,\omega_j(\cdot,x,t))$, and the variation of constants formula (with initial condition at $x_j$) gives \begin{eqnarray*} \lefteqn{ u_j(x,t)=u_j(x_j,\omega_j(x_j))\exp \int^{x_j}_{x} \left(\frac{b_{jj}}{a_j}\right) (\xi,\omega_j(\xi))\,d\xi -\int^{x_j}_{x}\exp \int_\xi^x\left(\frac{b_{jj}}{a_j}\right)(\eta,\omega_j(\eta))\, d \eta} \\ &&\times \frac{1}{a_j(\xi,\omega_j(\xi))}\Biggl(-\sum_{k\ne j} b_{jk}(\xi,\omega_j(\xi))u_k(\xi,\omega_j(\xi))+\displaystyle \sum_{k=m+1}^{n}h_{jk}(\xi,\omega_j(\xi))u_k(0,\omega_j(\xi)) \\ [2mm] &&\displaystyle+\sum_{k=1}^{m}h_{jk}(\xi,\omega_j(\xi))u_k(1,\omega_j(\xi))- \sum_{k=1}^{n}\int_{0}^{\xi}g_{jk}(y,\omega_j(\xi))u_k(y,\omega_j(\xi))\,dy +f_j(\xi,\omega_j(\xi))\Biggr)\, d\xi. \end{eqnarray*} Inserting the boundary conditions \reff{f2} and using the notation \reff{f8}, we get \reff{f10}, as desired. \begin{defn}\rm A function $u\in C_{n,2\pi}$ is called a {\it continuous solution} to \reff{f1}--\reff{f2} if it satisfies \reff{f10}. \end{defn} Our result states that either the space of nontrivial solutions to \reff{f1}--\reff{f2} with $f=(f_{1},...,f_{n})=0$ is not empty and has finite dimension or the system \reff{f1}--\reff{f2} has a unique solution for any $f$. \begin{thm}\label{thm:th12} Suppose that the conditions \reff{f4}--\reff{fz8} are fulfilled. Let $\mathcal{K}$ denote the vector space of all continuous solutions to \reff{f1}--\reff{f2} with $f\equiv0$. Then $(i)$ $\dim \mathcal{K}<\infty$ and the vector space of all $f\in C_{n,2\pi}$ such that there exists a continuous solution to \reff{f1}--\reff{f2} is a closed subspace of codimension $\dim \mathcal{K}$ in $C_{n,2\pi}$. $(ii)$ If $\dim \mathcal{K}=0$, then for any $f\in C_{n,2\pi}$ there exists a unique continuous solution $u$ to \reff{f1}--\reff{f2}. 
\end{thm} \begin{example}\label{thm:ex1}\rm Consider the following example showing that the condition \reff{fz8} plays a crucial role for our result: \begin{equation} \begin{array}{ll} \displaystyle\partial_{t}u_1 +\frac{2}{\pi}\partial_{x}u_1 -u_2=0 & \\ [3mm] \displaystyle\partial_{t}u_2 +\frac{2}{\pi}\partial_{x}u_2 +u_1=0, & \end{array} \label{f1ex} \end{equation} \begin{equation}\label{f3} u_{1}(x,t)=u_{1}(x,t+2\pi), \;\;\; u_{2}(x,t)=u_{2}(x,t+2\pi), \end{equation} \begin{equation}\label{f2ex} \begin{array}{ll} u_{1}(0,t)=0, \;\;\;\; u_{2}(1,t)=0. \end{array} \end{equation} This problem is a particular case of \reff{f1}--\reff{f2} and satisfies all assumptions of Theorem~\ref{thm:th12} with the exception of \reff{fz8}. It is straightforward to check that $$ u_1=\sin{\frac{\pi}{2}x}\sin{l\left(t-\frac{\pi}{2}x\right)}, \;\;\; u_2=\cos{\frac{\pi}{2}x}\sin{l\left(t-\frac{\pi}{2}x\right)}, \quad l\in{\mathbb N}, $$ are infinitely many linearly independent solutions to the problem \reff{f1ex}--\reff{f2ex} and, therefore, the kernel of the operator of \reff{f1ex}--\reff{f2ex} is infinite dimensional. Thus, the conclusion of Theorem \ref{thm:th12} is not true without \reff{fz8}. 
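These claims can be confirmed symbolically; the short SymPy sketch below (an illustrative aside, assuming SymPy is available) checks both equations of \reff{f1ex}, the boundary conditions \reff{f2ex}, and the $2\pi$-periodicity \reff{f3} for sample integer values of $l$.

```python
import sympy as sp

x, t, l = sp.symbols('x t l')

phase = l * (t - sp.pi * x / 2)
u1 = sp.sin(sp.pi * x / 2) * sp.sin(phase)
u2 = sp.cos(sp.pi * x / 2) * sp.sin(phase)

# Left-hand sides of the two equations of the system, with speeds 2/pi.
eq1 = sp.diff(u1, t) + (2 / sp.pi) * sp.diff(u1, x) - u2
eq2 = sp.diff(u2, t) + (2 / sp.pi) * sp.diff(u2, x) + u1

assert sp.simplify(eq1) == 0   # first equation holds identically
assert sp.simplify(eq2) == 0   # second equation holds identically
assert u1.subs(x, 0) == 0      # boundary condition u_1(0,t) = 0
assert u2.subs(x, 1) == 0      # boundary condition u_2(1,t) = 0
# 2*pi-periodicity in t for sample integer values of l.
for lv in (1, 2, 5):
    assert sp.simplify((u1.subs(t, t + 2 * sp.pi) - u1).subs(l, lv)) == 0
```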
\end{example} \section{Proof of Theorem \ref{thm:th12}}\label{sec:Fredh} \renewcommand{\theequation}{{\thesection}.\arabic{equation}} \setcounter{equation}{0} Define the bounded linear operators $R,B,G,H,F: C_{n,2\pi}\to C_{n,2\pi}$ by \begin{eqnarray} (Ru)_j(x,t)&=& c_j(x_j,x,t)\sum_{k=1}^{n}\int_{0}^{1}r_{jk}(\eta,\omega_j(x_j))u_{k}(\eta,\omega_{j}(x_j))\,d\eta,\quad j\le n, \nonumber \\ \label{f34} (Bu)_j(x,t)&=& -\sum_{k\neq j}\int_{x_j}^{x}d_j(\xi,x,t)b_{jk}(\xi,\omega_j(\xi))u_k(\xi,\omega_j(\xi))\, d\xi, \quad j\le n, \\ \label{f3014} (Gu)_j(x,t)&=& -\sum_{k=1}^{n}\int_{x_j}^{x}\int_{0}^{\xi}d_{j}(\xi,x,t)g_{jk}(y,\omega_j(\xi)) u_{k}(y,\omega_j(\xi))\, dyd\xi, \quad j\le n, \\ \label{f3024} (Hu)_j(x,t)&=& \sum_{k=1}^{n}\int_{x_j}^{x}d_{j}(\xi,x,t)h_{jk}(\xi,\omega_j(\xi))u_{k}(1-x_k,\omega_j(\xi))\, d\xi,\quad j\le n, \end{eqnarray} and $$ (Ff)_j(x,t)= \int_{x_j}^{x}d_j(\xi,x,t)f_j(\xi,\omega_j(\xi))\, d\xi,\quad j\le n. $$ Then the system \reff{f10} can be written as the operator equation $$ u=Ru+Bu+Gu+Hu+Ff. $$ Note that Theorem \ref{thm:th12} says exactly that the operator $I-R-B-G-H: C_{n,2\pi}\to C_{n,2\pi}$ is Fredholm of index zero. Nikolsky's criterion \cite[Theorem XIII.5.2]{KA} says that an operator $I+K$ on a Banach space is Fredholm of index zero whenever $K^2$ is compact. It is interesting to note that the compactness of $K^2$ and the identity $I-K^2=(I+K)(I-K)$ imply that the operator $I-K$ is a parametrix of the operator $I+K$ (see \cite{zeid}). We, therefore, have to show that the operator $K^{2}=(R+B+G+H)^{2}: C_{n,2\pi}\to C_{n,2\pi}$ is compact. Since the operators $R,B,G$, and $H$ are bounded and the composition of a bounded and a compact operator is compact, it is enough to show that \begin{equation} \label{hrrbbrbg} \textrm{the operators } \; H, G, R^2, RB, B^2, BR: C_{n,2\pi}\to C_{n,2\pi} \; \textrm{ are compact.} \end{equation} We start with the compactness of $H$.
By $C_{2\pi}({\mathbb R})$ we denote the space of all continuous and $2\pi$-time-periodic maps $v:{\mathbb R}\to{\mathbb R}$. Fix arbitrary $j\le n$ and $k\le n$ and define the operator $H_{jk}\in\mathcal{L}(C_{2\pi}({\mathbb R}),C_{2\pi})$ by \begin{equation}\label{hj} (H_{jk}v)(x,t) =\int_{x_j}^{x}d_{j}(\xi,x,t)h_{jk}(\xi,\omega_j(\xi))v(\omega_j(\xi))\, d\xi. \end{equation} It suffices to show the compactness of $H_{jk}$. Change the variable $\xi$ to $z=\omega_j(\xi)$ and denote the inverse map by $\xi=\tilde{\omega}_{j}(z)=\tilde{\omega}_{j}(z,x,t)$. Afterwards \reff{hj} reads \begin{equation}\label{frfr} (H_{jk}v)(x,t) =\int_{\omega_j(x_j)}^{t}d_{j}(\tilde{\omega}_{j}(z),x,t)h_{jk}(\tilde{\omega}_{j}(z),z)a_{j}(\tilde{\omega}_{j}(z),z)v(z)\, dz. \end{equation} By the regularity assumption \reff{f4}, the functions $\omega_j(x_j)$, $\tilde{\omega}_{j}(z)$, $d_{j}(\xi,x,t)$, $h_{jk}(x,z)$, and $a_{j}(x,z)$ are continuous in all their arguments and $2\pi$-periodic in $t$ and, hence, are uniformly continuous in $x$ and $t$. Then the equicontinuity property of $(H_{jk}v)(x,t)$ for $v$ over a bounded subset of $C_{2\pi}({\mathbb R})$ straightforwardly follows. Using the Arzela-Ascoli precompactness criterion, we conclude that $H_{jk}$ and, hence, $H$ are compact. Now we consider the operator $G$. Changing the variable $\xi$ to $z=\omega_j(\xi,x,t)$ in \reff{f3014}, we get \begin{equation}\label{frfrfr} (Gu)_j(x,t)= -\sum_{k=1}^{n}\int_{\omega_j(x_j)}^{t}\int_{0}^{\tilde{\omega}_j(z)}d_{j}(\tilde{\omega}_j(z),x,t)g_{jk}(y,z)a_j(\tilde{\omega}_{j}(z),z)u_{k}(y,z)\, dydz. \end{equation} Similarly to the above, the functions $\omega_j(x_j), \tilde{\omega}_j(z), d_j(\tilde{\omega}_j(z),x,t)$, and $a_j(\tilde{\omega}_j(z),z)$ are $2\pi$-periodic in $t$ and uniformly continuous in $x$ and $t$. This entails the equicontinuity property for $(Gu)_j(x,t)$ for $u$ over a bounded subset of $C_{n,2\pi}$. The compactness of $G$ again follows from the Arzela-Ascoli theorem. 
We further proceed with the compactness of $R^2$. For $j\le n$ and $k\le n$ define operators $R_{jk}\in\mathcal{L}(C_{2\pi})$ by $$ (R_{jk}w)(x,t)=c_j(x_j,x,t)\int_{0}^{1}r_{jk}(\eta,\omega_j(x_j))w(\eta,\omega_j(x_j))\,d\eta. $$ Fix arbitrary $j\le n$, $k\le n$, and $i\le n$. We prove the compactness of the operator $R_{jk}R_{ki}$; the compactness of all other operators contributing into the $R^2$ will follow from the same argument. Introduce operators $P_j,Q_{jk} : C_{2\pi}\to C_{2\pi}$ by \begin{eqnarray} (P_jw)(x,t) &=&c_{j}(x_j,x,t)\int_{0}^{1}w(\eta,t)\,d\eta, \\ (Q_{jk}w)(x,t) &=&r_{jk}(x,\omega_j(x_j))w(x,\omega_j(x_j)). \end{eqnarray} Then we have $$ R_{jk}=P_jQ_{jk}, \;\;\; R_{ki}=P_kQ_{ki} $$ and, hence $$ R_{jk}R_{ki}=P_jQ_{jk}P_kQ_{ki}. $$ We aim at showing the compactness of $P_jQ_{jk}P_k$, as this and the boundedness of $Q_{ki}$ will entail the compactness of $R_{jk}R_{ki}$. The operator $P_jQ_{jk}P_k$ reads \begin{equation}\label{ghng} \begin{array}{lc} (P_jQ_{jk}P_{k}w)(x,t) = c_j(x_j,x,t) \\ [2mm] \displaystyle \quad\times\int_{0}^{1} r_{jk}(\xi,\omega_j(x_j,\xi,t))c_k(x_k,\xi,\omega_j(x_j,\xi,t))\int_{0}^{1}w(\eta,\omega_k(x_k,\xi,t))\, d\eta d\xi. & \end{array} \end{equation} Changing the variable $\xi$ to $z=\omega_k(x_k,\xi,t)$, we get \begin{equation}\label{ghng1} \begin{array}{lr} (P_jQ_{jk}P_{k}w)(x,t) = c_j(x_j,x,t) \\ [2mm] \displaystyle\quad\times\int_{\omega_k(x_k,0,t)}^{\omega_k(x_k,1,t)}\hskip-5mm r_{jk}(\tilde{\omega}_k(t,x_k,z),z)c_k(x_k,\tilde{\omega}_k(t,x_k,z),z)\int_{0}^{1}\d_{3}\tilde{\omega}_k(t,x_k,z)w(\eta,z)\, d\eta dz, & \end{array} \end{equation} where \begin{equation} \label{2star} \partial_3\tilde{\omega}_{k}(\tau,x,t)= a_k(x,t)\exp{\int_{\tau}^{t}\partial_1a_k(\tilde{\omega}_k(\rho,x,t),\rho)\, d\rho}. \end{equation} Similarly to the above, the compactness of $P_jQ_{jk}P_k$ now immediately follows from the regularity assumption \reff{f4} and the Arzela-Ascoli theorem. 
Now we treat the operator \begin{eqnarray*} &(RBu)_{j}(x,t) =\displaystyle-c_{j}(x_j,x,t)\sum_{k\neq l}\int_{0}^{1}\int_{x_k}^{\eta}r_{jk}(\eta,\omega_j(x_j)) d_{k}(\xi,\eta,\omega_j(x_j)) \nonumber& \\ [4mm] &\displaystyle \qquad\times b_{kl}(\xi,\omega_k(\xi,\eta,\omega_j(x_j)))u_{l}(\xi,\omega_k(\xi,\eta,\omega_j(x_j)))\,d\xi d\eta \nonumber& \end{eqnarray*} for an arbitrary fixed $j\le n$. After changing the order of integration we get the equality \begin{eqnarray*} &(RBu)_{j}(x,t) =\displaystyle-c_{j}(x_j,x,t)\sum_{k\neq l}\int_{0}^{1}\int_{\xi}^{1-x_k}r_{jk}(\eta,\omega_j(x_j))d_{k}(\xi,\eta,\omega_j(x_j)) & \\ [2mm] &\displaystyle\times b_{kl}(\xi,\omega_k(\xi,\eta,\omega_j(x_j)))u_{l}(\xi,\omega_k(\xi,\eta,\omega_j(x_j)))\,d\eta d\xi. & \end{eqnarray*} Then we change the variable $\eta$ to $z=\omega_k(\xi,\eta,\omega_j(x_j))$. Since the inverse is given by $\eta=\tilde{\omega}_k(\omega_j(x_j),\xi,z)$, we get \begin{eqnarray} \lefteqn{ (RBu)_{j}(x,t) =}\nonumber\\ &&-\displaystyle c_{j}(x_j,x,t)\sum_{k\neq l} \displaystyle\int_{0}^{1}\int_{\omega_j(x_j)}^{\omega_k(\xi,1-x_k,\omega_j(x_j))}r_{jk}(\tilde{\omega}_k(\omega_j(x_j),\xi,z),\omega_j(x_j)) \label{rdoc} \\ [2mm] &&\times\displaystyle d_{k}(\xi,\tilde{\omega}_k(\omega_j(x_j),\xi,z),\omega_j(x_j))b_{kl}(\xi,z) \displaystyle \d_{3}\tilde{\omega}_k(\omega_j(x_j),\xi,z)u_{l}(\xi,z)\,dz d\xi,\nonumber \end{eqnarray} where $\d_{3}\tilde{\omega}_k(\omega_j(x_j),\xi,z)$ is given by \reff{2star}. The functions $\omega_j(\xi,x,t)$ and the kernels of the integral operators in \reff{rdoc} are continuous and $2\pi$-periodic in $t$ and, hence, uniformly continuous in $x$ and $t$. This means that we are again in the conditions of the Arzela-Ascoli theorem, as desired. We proceed to show that $B^2:C_{n,2\pi}\to C_{n,2\pi}$ is compact. By the Arzela-Ascoli theorem, $C_{n,2\pi}^1$ is compactly embedded into $C_{n,2\pi}$.
Then the desired compactness property will follow if we show that \begin{equation} \label{f309} B^2 \;\textrm{maps continuously}\; C_{n,2\pi}\; \textrm{into} \; C_{n,2\pi}^1. \end{equation} By using the equalities \reff{f22}, \reff{f23}, and \reff{f34}, the partial derivatives $\partial_xB^2u$, $\partial_{t}B^2u$ exist and are continuous for each $u\in C_{n,2\pi}^1$. Since $C^1_{n,2\pi}$ is dense in $C_{n,2\pi}$, the desired condition \reff{f309} will follow from the bound \begin{equation} \label{bbboun} \left\|B^2u\right\|_{1} = O\left(\|u\|_{\infty}\right)\; \textrm{for all} \; u\in C^1_{n,2\pi}. \end{equation} To prove \reff{bbboun}, for given $j\le n$ and $u\in C_{n,2\pi}^1$, let us consider the following representation for $(B^2u)_j(x,t)$, obtained by an application of Fubini's theorem: \begin{equation} \label{f311} (B^2u)_j(x,t) =\sum_{k\neq j}\sum_{l\neq k}\int_{x_j}^x\int_{\eta}^{x} d_{jkl}(\xi,\eta,x,t)b_{jk}(\xi,\omega_j(\xi))u_l(\eta,\omega_k(\eta,\xi,\omega_j(\xi)))\, d\xi d\eta, \end{equation} where \begin{equation} \label{f311d} d_{jkl}(\xi,\eta,x,t)=d_j(\xi,x,t)d_{k}(\eta,\xi,\omega_{j}(\xi))b_{kl}(\eta,\omega_{k}(\eta,\xi,\omega_j(\xi))). \end{equation} The estimate $\left\|B^2u\right\|_{\infty}=O\left(\|u\|_{\infty}\right)$ is obvious. Since $$ (\d_t+a_j(x,t)\d_x)\varphi(\omega_j(\xi,x,t))=0 $$ for all $j\le n, \varphi\in C^1({\mathbb R}), x,\xi\in[0,1]$, and $t\in{\mathbb R}$, one can easily check that $$ \|(\d_t+a_j(x,t)\d_x)(B^2u)_j\|_{\infty} = O\left(\|u\|_{\infty}\right) \mbox{ for all } j\le n \mbox{ and } u\in C^1_{n,2\pi}. $$ Hence the estimate $\left\|\d_xB^2u\right\|_{\infty}=O\left(\|u\|_{\infty}\right)$ will follow from the estimate \begin{equation} \label{f31jhr1} \|\d_tB^2u\|_{\infty}= O\left(\|u\|_{\infty}\right). \end{equation} In order to prove \reff{bbboun}, it therefore remains to prove \reff{f31jhr1}.
To this end, we start with the following consequence of \reff{f311}: \begin{eqnarray} \lefteqn{ \d_t[(B^2u)_j(x,t)]} \nonumber\\ && =\displaystyle\sum_{k\neq j}\sum_{l\neq k}\int_{x_j}^x\int_{\eta}^{x} \frac{d}{dt}\Bigl[ d_{jkl}(\xi,\eta,x,t)b_{jk}(\xi,\omega_j(\xi))\Bigr] u_l(\eta,\omega_k(\eta,\xi,\omega_j(\xi)))\, d\xi d\eta \nonumber\\ && +\displaystyle\sum_{k\neq j}\sum_{l\neq k}\int_{x_j}^x\int_{\eta}^{x} d_{jkl}(\xi,\eta,x,t) b_{jk}(\xi,\omega_j(\xi)) \nonumber\\ && \times \d_t\omega_k(\eta,\xi,\omega_j(\xi))\d_t\omega_j(\xi)\d_2u_l(\eta,\omega_k(\eta,\xi,\omega_j(\xi))) \,d\xi d\eta. \nonumber \end{eqnarray} Let us transform the second summand. Using \reff{f7}, \reff{f22}, and \reff{f23}, we get \begin{eqnarray} \lefteqn{ \frac{d}{d\xi} u_l(\eta,\omega_k(\eta,\xi,\omega_j(\xi)))} \nonumber \\ && =\Bigl[\d_x\omega_k(\eta,\xi,\omega_j(\xi))+\d_t\omega_k(\eta,\xi,\omega_j(\xi))\d_{\xi}\omega_j(\xi)\Bigr] \d_2u_l(\eta,\omega_k(\eta,\xi,\omega_j(\xi))) \label{eqwn} \\ && =\left ( \frac{1}{a_j(\xi,\omega_j(\xi))}-\frac{1}{a_k(\xi,\omega_j(\xi))}\right ) \d_t\omega_k(\eta,\xi,\omega_j(\xi))\d_2u_l(\eta,\omega_k(\eta,\xi,\omega_j(\xi))) \nonumber. \end{eqnarray} Therefore, \begin{eqnarray} \lefteqn{ b_{jk}(\xi,\omega_j(\xi))\d_t\omega_k(\eta,\xi,\omega_j(\xi))\d_2u_l(\eta,\omega_k(\eta,\xi,\omega_j(\xi)))} \nonumber \\ && =\displaystyle a_j(\xi,\omega_j(\xi))a_k(\xi,\omega_j(\xi))\tilde{b}_{jk}(\xi,\omega_j(\xi))\frac{d}{d\xi} u_l(\eta,\omega_k(\eta,\xi,\omega_j(\xi))), \label{f313} \end{eqnarray} where the functions $\tilde{b}_{jk}\in C_{2\pi}$ are fixed to satisfy \reff{fz8}. Note that $\tilde{b}_{jk}$ are not uniquely defined by \reff{fz8} for $(x,t)$ with $a_{j}(x,t)=a_{k}(x,t)$. Nevertheless, as follows from \reff{eqwn}, the right-hand side (and, hence, the left-hand side) of \reff{f313} does not depend on the choice of $\tilde{b}_{jk}$, since $\frac{d}{d\xi}u_{l}(\eta,\omega_{k}(\eta,\xi,\omega_{j}(\xi)))=0$ if $a_{j}(x,t)=a_{k}(x,t)$.
Write $$ \tilde{d}_{jkl}(\xi,\eta,x,t) =d_{jkl}(\xi,\eta,x,t)\d_t\omega_j(\xi)a_k(\xi,\omega_j(\xi))a_{j}(\xi,\omega_j(\xi))\tilde{b}_{jk}(\xi,\omega_j(\xi)), $$ where $d_{jkl}$ are introduced by \reff{f311d} and \reff{f8}. Using \reff{f7} and \reff{f22}, we see that the function $\tilde{d}_{jkl}(\xi,\eta,x,t)$ is $C^1$-regular in $\xi$ due to the regularity assumptions \reff{f4} and \reff{fz8}. Similarly, using \reff{f23}, we see that the functions $d_{jkl}(\xi,\eta,x,t)$ and $b_{jk}(\xi,\omega_j(\xi))$ are $C^1$-smooth in $t$. By \reff{f313} we have \begin{eqnarray*} \lefteqn{ (\d_tB^2u)_j(x,t)}\\ && = \displaystyle \sum_{k\neq j}\sum_{l\neq k}\int_{x_j}^x\int_{\eta}^{x} \frac{d}{dt}[ d_{jkl}(\xi,\eta,x,t)b_{jk}(\xi,\omega_j(\xi))] u_l(\eta,\omega_k(\eta,\xi,\omega_j(\xi)))\, d\xi d\eta \\ && +\displaystyle \sum_{k\neq j}\sum_{l\neq k}\int_{x_j}^x\int_{\eta}^{x}\tilde{d}_{jkl}(\xi,\eta,x,t)\frac{d}{d\xi} u_l(\eta,\omega_k(\eta,\xi,\omega_j(\xi)))\, d\xi d\eta \\ && =\displaystyle \sum_{k\neq j}\sum_{l\neq k}\int_{x_j}^x\int_{\eta}^{x} \frac{d}{dt} [d_{jkl}(\xi,\eta,x,t)b_{jk}(\xi,\omega_j(\xi))] u_l(\eta,\omega_k(\eta,\xi,\omega_j(\xi)))\, d\xi d\eta \\ && -\displaystyle \sum_{k\neq j}\sum_{l\neq k}\int_{x_j}^x\int_{\eta}^{x}\d_{\xi}\tilde{d}_{jkl}(\xi,\eta,x,t)u_l(\eta,\omega_k(\eta,\xi,\omega_j(\xi)))\, d\xi d\eta \\ && +\displaystyle \sum_{k\neq j}\sum_{l\neq k}\int_{x_j}^x\left [\tilde{d}_{jkl}(\xi,\eta,x,t) u_l(\eta,\omega_k(\eta,\xi,\omega_j(\xi)))\right ]_{\xi=\eta}^{\xi=x}\, d\eta. \end{eqnarray*} The desired estimate \reff{f31jhr1} now easily follows from the assumptions \reff{f4}--\reff{fz8}. Returning to \reff{hrrbbrbg}, it remains to prove that the operator $BR:C_{n,2\pi}\to C_{n,2\pi}$ is compact.
By the definitions of $B$ and $R$, \begin{equation}\label{dro} \begin{array}{cc} (BRu)_{j}(x,t) =-\displaystyle \sum_{k\neq j}\sum_{l=1}^{n}\int_{0}^{1}\int_{x_j}^{x}d_{j}(\xi,x,t)b_{jk}(\xi,\omega_j(\xi))c_{k}(x_k,\xi,\omega_j(\xi)) & \\ \displaystyle \times r_{kl}(\eta,\omega_k(x_k,\xi,\omega_j(\xi)))u_{l}(\eta,\omega_k(x_k,\xi,\omega_j(\xi)))\,d\xi d\eta, \;\;\; j\le n. & \end{array} \end{equation} The integral operators in \reff{dro} are similar to those in \reff{f311} and, therefore, the proof of the compactness of $BR$ follows along the same lines as that of the compactness of $B^2$. The proof of Theorem \ref{thm:th12} is complete. \section*{Acknowledgments} The second author was supported by the BMU-MID Erasmus Mundus Action 2 grant. He expresses his gratitude to the Applied Analysis group at the Humboldt University of Berlin for its kind hospitality.
\section*{Introduction} The theory of multizeta values began in \cite{Za} with the construction of families of relations among these numbers, based in part on the link observed by Kontsevich between multizetas and iterated integrals over the moduli spaces of rational curves with marked points. In parallel, Drinfeld established relations of geometric origin satisfied by a noncommutative series, the KZ associator (\cite{Dr}); Le and Murakami identified the KZ associator with a generating series of the multizetas (\cite{LM}), which made it possible to regard the relations of the KZ associator as a second system of relations among multizetas. The link between the two systems of relations was studied by Furusho (\cite{Fu}). An elliptic analogue of the theory of associators was constructed in \cite{En}, starting from an elliptic analogue of the Knizhnik--Zamolodchikov connection (\cite{CEE}; see also \cite{LR}). The role of the KZ associator is played there by a pair of functions $(A(\tau),B(\tau))$, where $\tau\in\HH$ is an elliptic parameter, with values in a group $\on{exp}(\hat{\mathfrak{f}}_{2})$ of noncommutative series in two variables. The main results of \cite{En} on this subject are: the behaviour of $(A(\tau),B(\tau))$ under modular transformations; a family of algebraic relations (of geometric origin) satisfied by $(A(\tau),B(\tau))$; a differential equation satisfied by the same object; and its behaviour as $\tau\to\on{i}\infty$. A corollary of this study is a family of algebraic relations between iterated integrals of Eisenstein series and multizetas.
An important role is played in this theory, and also in the related theory of universal elliptic motives (\cite{HM,Pk}), by a Lie algebra $\langle \delta_{2n},n\geq -1\rangle \subset \on{Der}({\mathfrak{f}}_{2})$. The present article studies the coefficients of the series $A(\tau),B(\tau)$. These are functions $$ I_{\underline{d}}(\tau), J_{\underline{d}}(\tau), \quad \underline d = (d_{1},\ldots,d_{n})\in \{-1,0,1,\ldots\}^{n} $$ of the elliptic parameter, which are elliptic analogues of the multizeta values. We obtain integral expressions for them (relations (\ref{def:I}), (\ref{def:J}), (\ref{form:A}), (\ref{form:B})). The results of \cite{En} can be translated in terms of these functions: in particular, the differential equation satisfied by $(A(\tau),B(\tau))$ translates into a differential system satisfied by the $I_{\underline{d}}(\tau), J_{\underline{d}}(\tau)$. We construct such a differential system directly from their integral expression (Theorem \ref{thm:ode}); this system presents analogies with the differential systems of \cite{BL}. We construct a functional realization of the Lie algebra $\langle \delta_{2n},n\geq -1\rangle$ (Section \ref{sect:real:fonct}), which allows us to establish a direct link between the differential system of \cite{En} and the one obtained in Theorem \ref{thm:ode}. This sheds new light on this differential system and, moreover, makes it possible to give an asymptotic expansion of the $I_{\underline{d}}(\tau), J_{\underline{d}}(\tau)$ as $\tau\to\on{i}\infty$ in terms of multizeta values (Section \ref{section:DA}). Finally, let us point out the possible links between the present work and \cite{BL}.
In that work, which is set within the framework of the theory of motives, Brown and Levin introduce functions called ``multiple elliptic zeta values'', which should be related to the functions introduced here. \section{Preliminaries: elliptic associators} In this section, we recall the construction and the properties of the function $\tau\mapsto (A(\tau),B(\tau))$ (\cite{En}). \subsection{Definition of $(A(\tau),B(\tau))$} For $n\geq 2$, let $\bar\t_{1,n}$ be the Lie algebra presented by the generators $x_{i},y_{i}$, $i\in\{1,\ldots,n\}$, and the relations $\sum_{i}x_{i} = \sum_{i}y_{i}=0$, $[x_{i},x_{j}] = [y_{i},y_{j}] = 0$, $[x_{i},y_{j}] = [x_{j},y_{i}]=:t_{ij}$ for $i\neq j$, and $[x_{k},t_{ij}] = [y_{k},t_{ij}]=0$ for $i,j,k$ distinct. In particular, the Lie algebra $\bar\t_{1,2}$ identifies with the Lie algebra ${\mathfrak{f}}_{2}$ freely generated by the two generators $x:= x_{1}$ and $y:= y_{1}$. Let $\HH := \{\tau\in{\mathbb{C}} |\Im(\tau)>0\}$ be the Poincar\'e half-plane and let $\tau\in\HH$. We denote by $z\mapsto\theta_{\tau}(z)$ the holomorphic function defined on ${\mathbb{C}}$ such that\footnote{We set $\on{i}:= \sqrt{-1}$.} $\theta_{\tau}(z+1) = -\theta_{\tau}(z)$, $\theta_{\tau}(z+\tau) = -e^{-\on{i}\pi\tau}e^{-2\pi\on{i}z}\theta_{\tau}(z)$, $\theta_{\tau}'(0)=1$, and $\theta_{\tau}^{-1}(0) = {\mathbb{Z}}+\tau{\mathbb{Z}}$; the function $\theta_{\tau}$ is odd.
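For later use, let us record how $\theta_{\tau}$ transforms under a general lattice translation; this identity is not stated in this form above, but it follows by a routine iteration of the two functional equations just given: $$ \theta_{\tau}(z+m+n\tau) = (-1)^{m+n}\, e^{-\on{i}\pi n^{2}\tau}e^{-2\pi\on{i}nz}\,\theta_{\tau}(z), \qquad m,n\in{\mathbb{Z}}. $$ For $(m,n)=(0,1)$ this reduces to the second functional equation, and applying it twice recovers the case $n=2$, which serves as a consistency check; in particular, the zero set ${\mathbb{Z}}+\tau{\mathbb{Z}}$ is invariant under all these translations.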
$A(\tau),B(\tau)$ are the renormalized holonomies of the differential equation \begin{equation} \label{eq:dept} G'(z) = -{{\theta_{\tau}(z+\on{ad}x)\on{ad}x}\over{\theta_{\tau}(z) \theta_{\tau}(\on{ad}x)}}(y)\cdot G(z), \end{equation} with values in the group $\on{exp}(\hat{\bar\t}_{1,2})$, along the paths $[0,1]$ and $[0,\tau]$ (the Lie algebra being completed with respect to the degree in $x,y$). More precisely, this equation admits a solution $G(z)$ defined on $\{a+b\tau | a$ or $b\in]0,1[\}$ such that $G(z)\simeq (-2\pi\on{i}z)^{-[x,y]}$ as $z\to 0$. Then $$ A(\tau):= G(z)^{-1}G(z+1), \quad B(\tau):= G(z)^{-1}e^{2\pi\on{i}x}G(z+\tau). $$ These are elements of the group $\on{exp}(\hat{\bar\t}_{1,2})$. \subsection{Properties of $A(\tau),B(\tau)$} \subsubsection{Modular properties} By \cite{En}, Proposition 66, we have \begin{equation} \label{id:mod:A:B} A({{-1}\over{\tau}}) = \on{Ad}\big({{-1}\over\tau}\big)^{-t} \circ \alpha_{\tau}(B(\tau)^{-1}), \quad B({{-1}\over{\tau}}) = \on{Ad}\big({{-1}\over{\tau}}\big)^{-1} \circ\alpha_{\tau}(BAB^{-1}(\tau)) , \end{equation} where $\alpha_{\tau}\in\on{Aut}(\on{exp}(\hat{\mathfrak{f}}_{2}))$ is given by $x\mapsto -\tau x$, $y\mapsto - 2\pi\on{i}x - \tau^{-1}y$, $$t:= -[x,y],$$ and $(-\tau^{-1})^{-t}:= \on{exp}(-\on{log}(-\tau^{-1})t)$, the determination of the logarithm having imaginary part between $0$ and $\pi$.
The compatibility of these relations is ensured by the identity (\ref{comm:A:B}) together with the identities $$ (-I_{2})(A(\tau)) = e^{-\on{i}\pi t} A(\tau)^{-1} e^{-\on{i}\pi t}, \quad (-I_{2})(B(\tau)) = e^{\on{i}\pi t}B(\tau)^{-1}e^{\on{i}\pi t}, $$ where $(-I_{2})$ is the automorphism of $\on{exp}(\hat{\mathfrak{f}}_{2})$ induced by $x\mapsto -x$, $y\mapsto -y$; these identities come from $(-I_{2})(A(\tau)) = A(\tau)^{2,1}$, $(-I_{2})(B(\tau)) = B(\tau)^{2,1}$ and from the identities (\ref{A:A:B:B}) (see Section \ref{rels:alg}). \subsubsection{Algebraic relations} \label{rels:alg} Let $\Phi(a,b)$ be the KZ associator, defined by $\Phi(a,b) = H_{1}^{-1}H_{0}$, where the $H_{i}$ are the solutions of $H'(z) = (a/z+b/(z-1))H(z)$ on $]0,1[$ such that $H_{0}(z)\simeq z^{a}$ as $z\to 0$ and $H_{1}(z)\simeq (1-z)^{b}$ as $z\to 1$, and where $a,b$ are noncommutative formal variables. We set $$ \alpha_{+}:= e^{\on{i}\pi(t_{12}+t_{13})}A(\tau)^{1,23}\Phi(t_{12},t_{23}), \quad \alpha_{-}:= e^{-\on{i}\pi(t_{12}+t_{13})}B(\tau)^{1,23}\Phi(t_{12},t_{23}), $$ where $a\mapsto a^{1,23}$ is the Lie algebra morphism $\bar\t_{1,2}\to\bar\t_{1,3}$ such that $x_{1}\mapsto x_{1}$, $x_{2}\mapsto x_{2}+x_{3}$, $y_{1}\mapsto y_{1}$, $y_{2}\mapsto y_{2}+y_{3}$. The first family of relations satisfied by $(A(\tau),B(\tau))$ is \begin{equation} \label{first:rel} \alpha_{\pm}^{3,1,2}\alpha_{\pm}^{2,3,1}\alpha_{\pm} = 1 \quad \text{(in } \on{exp}(\hat{\bar\t}_{1,3})), \end{equation} where $a\mapsto a^{2,3,1}$ is the automorphism of $\bar\t_{1,3}$ such that $x_{i}\mapsto x_{i+1\text{ mod }3}$, $y_{i}\mapsto y_{i+1\text{ mod }3}$, and $a\mapsto a^{3,1,2}$ is the square of this automorphism.
The pair $(A(\tau),B(\tau))$ moreover satisfies the relation \begin{equation} \label{second:rel} (\Phi(t_{12},t_{23})^{-1} * B(\tau)^{1,23}, (e^{-\on{i}\pi t_{12}}\Phi(t_{21},t_{13})) * (A(\tau)^{2,13})^{-1}) = e^{2\pi\on{i}t_{12}} \quad \text{(in } \on{exp}(\hat{\bar\t}_{1,3})), \end{equation} where $(x,y):= xyx^{-1}y^{-1}$, $x*y:= xyx^{-1}$, $t_{12}:= [x_{1},y_{2}]$, and $a\mapsto a^{2,13}$ is the morphism $\bar\t_{1,2}\to\bar\t_{1,3}$ given by $x_{1}\mapsto x_{2}$, $x_{2}\mapsto x_{1}+x_{3}$, $y_{1}\mapsto y_{2}$, $y_{2}\mapsto y_{1}+y_{3}$. The relations (\ref{first:rel}) then imply \begin{equation} \label{A:A:B:B} e^{\on{i}\pi t}A(\tau)e^{\on{i}\pi t}A(\tau)^{2,1} = e^{-\on{i}\pi t}B(\tau)e^{-\on{i}\pi t}B(\tau)^{2,1} = 1 \quad \text{(in }\on{exp}(\hat{\bar\t}_{1,2})), \end{equation} where $a\mapsto a^{2,1}$ is the involutive automorphism of $\overline\t_{1,2}$ given by $x_{1}\leftrightarrow x_{2}$, $y_{1}\leftrightarrow y_{2}$, and the relation (\ref{second:rel}) implies \begin{equation} \label{comm:A:B} (A(\tau),B(\tau)) = e^{-2\pi\on{i}t} \quad \text{(in } \on{exp}(\hat{\bar\t}_{1,2})). \end{equation} \subsubsection{Differential equations} For each $n\geq -1$, there exists a unique derivation of ${\mathfrak{f}}_{2}$ that is homogeneous for the bidegree in $(x,y)$ and such that $\delta_{2n}(x) = \on{ad}(x)^{2n+1}(y) =:[x^{2n+1}y]$. The functions $A(\tau),B(\tau)$ then satisfy the differential equations \begin{equation} \label{ED:A:B} 2\pi\on{i}\partial_{\tau}A(\tau) = -(\sum_{n\geq -1}(2n+1)G_{2n+2}(\tau)\delta_{2n})(A(\tau)), \quad 2\pi\on{i}\partial_{\tau}B(\tau) = -(\sum_{n\geq -1} (2n+1)G_{2n+2}(\tau)\delta_{2n})(B(\tau)) \end{equation} (cf.
\cite{En}, Proposition 67), where the Eisenstein series are defined by $$ G_{k}(\tau) = \sum_{a\in {\mathbb{Z}}+\tau{\mathbb{Z}} - \{0\}} {1\over {a^{k}}} \quad \text{if\ } k\text{\ is\ even\ }\geq 4, \quad G_{2}(\tau) = \sum_{m\in{\mathbb{Z}}}(\sum_{n}{}'{1\over{(n+m\tau)^{2}}}), \quad G_{0}(\tau) := -1, $$ and $\sum'$ means $\sum_{n\in{\mathbb{Z}}}$ if $m\neq 0$ and $\sum_{n\in{\mathbb{Z}} - \{0\}}$ if $m=0$. \subsubsection{Behaviour at infinity} We have \begin{equation} \label{comp:A} A(\tau) = \Phi(\tilde y,t)e^{2\pi\on{i}\tilde y} \Phi(\tilde y,t)^{-1} + O(e^{2\pi\on{i}\tau}), \end{equation} \begin{equation} \label{comp:B} B(\tau)=e^{\on{i}\pi t}\Phi(-\tilde y-t,t)e^{2\pi\on{i}x}e^{2\pi\on{i} \tilde y\tau} \Phi(\tilde y,t)^{-1} + O(e^{2\pi\on{i}(1-\epsilon)\tau}) \end{equation} for every $\epsilon>0$, as $\tau\to\on{i}\infty$ (\cite{CEE}, proof of Proposition 4.7, then Lemma 4.14), where $$ \tilde y:= -{{\on{ad}x}\over{e^{2\pi\on{i}\on{ad}x}-1}}(y) $$ and we recall that $t = -[x,y]$ and that $\Phi(a,b)$ is the KZ associator defined in Section \ref{rels:alg}. \section{The elliptic analogues of the multizeta values} \label{sec:dev:A} In this section, we introduce certain regularizations of iterated integrals (Section \ref{subsect:IIreg}). We use them in Section \ref{subsect:multizeta} to define the functions of the elliptic parameter that are the analogues of the multizeta values. We show that the functions $A(\tau),B(\tau)$ can be interpreted as generating series for these functions (Section \ref{sect:corr}). The properties of $A(\tau),B(\tau)$ can therefore be translated into functional terms; this is carried out explicitly in Section \ref{subsect:proprietes} for some of these properties.
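For orientation, let us recall the standard $q$-expansions of the Eisenstein series appearing here (a classical fact, not proved in this article): with $q = e^{2\pi\on{i}\tau}$ and $\sigma_{2k-1}(n) = \sum_{d|n}d^{2k-1}$, $$ G_{2k}(\tau) = 2\zeta(2k) + {{2(2\pi\on{i})^{2k}}\over{(2k-1)!}}\sum_{n\geq 1}\sigma_{2k-1}(n)q^{n}, \qquad k\geq 1, $$ where for $k=1$ the left-hand side is the series $G_{2}$ with the order of summation specified above. In particular, $G_{2k}(\tau)\to 2\zeta(2k)$ as $\tau\to\on{i}\infty$.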
\subsection{Regularized iterated integrals} \label{subsect:IIreg} We assume that $(A,A_{+})$ is one of the following pairs: $\bullet$ $A$ is an augmented algebra, complete for the topology of the powers of the augmentation ideal $A_{+}$; $\bullet$ $A=A_{+}={\mathbb{C}}$. Let $M$ be a smooth manifold, $U\subset M$ an open subset, and $\gamma : I = [0,1]\to M$ a smooth path such that $\gamma(\mathring I) \subset U$. We set $$ \Omega^{1}(U,A_{+}):= \{\text{smooth 1-forms on }U \text{ with values in }A_{+}\}. $$ Let $\omega_{1},\ldots,\omega_{n}\in \Omega^{1}(U,A_{+})$ be such that: $\bullet$ if $i\notin \{1,n\}$, then $\gamma^{*}(\omega_{i})\in O(d\on{log}(t-c))$ as $t\to c$, for each $c\in\{0,1\}$; $\bullet$ if $i\in \{1,n\}$, then there exists $N\geq 0$ such that $\gamma^{*}(\omega_{i}) = O((\on{log}(t-c))^{N})dt$ as $t\to c$, for each $c\in\{0,1\}$. We then set $$ (\int_{\gamma}\omega_{1}\circ\cdots\circ\omega_{n})^{\text{op}} := m^{\text{op}(n)}(\int_{\Delta_{n}} (\gamma^{n})^{*}(\omega_{1}\otimes\cdots\otimes\omega_{n})), $$ where $\Delta_{n}$ is the simplex $\{(t_{1},\ldots,t_{n})\in{\mathbb{R}}^{n}| 0\leq t_{1}\leq\ldots\leq t_{n}\leq 1\}$ and $m^{\text{op}(n)} : A^{\otimes n} \to A$ is the $n$-fold product of the algebra opposite to $A$. Let $\alpha\in A$ and set $$ \Omega:= \{\omega\in\Omega^{1}(U,A_{+}) | \gamma^{*}(\omega) = d(\alpha \on{log}(t-c))+O(1)dt \text{ as }t\to c\text{ for each }c\in\{0,1\}\}$$ and $$ {\mathcal L}:= \{\ell\in C^{\infty}(U,A_{+}) | \gamma^{*}(\ell) - \alpha\on{log}(t(1-t)) \in C^{\infty}(I,A_{+})\}.
$$ For $\omega_{1},\ldots,\omega_{n}\in\Omega$ and $\ell\in{\mathcal L}$, we set $$I^{\ell}_{\gamma}(\omega_{1},\ldots,\omega_{n}):= \sum_{a,b\geq 0, \atop a+b\leq n-1} \Big({{(-1)^{b}}\over{a!b!}} \int_{\gamma} (\alpha \ell)^{a}(\omega_{a+1} - \alpha d\ell) \circ \omega_{a+2}\circ \cdots \circ \omega_{n-b-1} \circ (\omega_{n-b} - \alpha d\ell)(\alpha \ell)^{b} \Big)^{\text{op}}. $$ Let $\omega\in\Omega^{1}(]0,1[,A)$ with $\omega = \alpha d\on{log}t(1-t) + O(1)$ at $t =0,1$. The renormalized holonomy from $0$ to $1$ of the differential equation $$ dy = \omega y $$ is $M:= Y_{1}^{-1}Y_{0}$, where the $Y_{i}$ are solutions such that $Y_{i}\simeq \on{exp}(\alpha \ell)$ as $z\to i$ ($i=0,1$). One then checks that \begin{equation} \label{eq:monodr} M = \sum_{n\geq 0} I^{\ell}_{[0,1]}(\underbrace{\omega,\ldots,\omega}_{n}). \end{equation} \subsection{The elliptic analogues of the multizeta values} \label{subsect:multizeta} Fix $\tau\in\HH$ and set $\theta:= \theta_{\tau}$. For $x\in{\mathbb{C}}$, we set $$ \sigma^{\tau}_{x}(z):= {\theta(z+x)\over{\theta(z)\theta(x)}}. $$ Viewing $x$ as a formal variable close to $0$, we regard $\sigma_{x}$ as an element of $x^{-1}\on{Mer}({\mathbb{C}})[[x]]$, where $\on{Mer}({\mathbb{C}}) = \{$meromorphic functions defined on ${\mathbb{C}}\}$. More precisely: \begin{proposition} $\sigma_{x}^{\tau}$ admits the expansion $$ \sigma_{x}^{\tau}(z) = {1\over x} + \sum_{n\geq 0}k_{n}^{\tau}(z)x^{n}, $$ with $k^{\tau}_{0}(z) = (\theta'_{\tau}/\theta_{\tau})(z)$ and $k_{n}^{\tau}$ finite at $0$ and $1$ for $n>0$. \end{proposition} {\em Proof.} Set $\theta:=\theta_{\tau}$. We have $x\sigma^{\tau}_{x}(z)_{|x=0}=1$. Moreover, $$ (\sigma^{\tau}_{x}(z) - {1\over x})_{|x=0} = {1\over x} ({\theta(z+x)\over{\theta(z)}}{x\over{\theta(x)}}-1)_{|x=0} = {\theta'\over\theta}(z).
$$ Finally, the expansion of $\sigma_{x}^{\tau}(z)$ at $z=0$ is, taking into account that $\theta$ is odd, $$ \sigma_{x}^{\tau}(z) = {\theta(x+z)\over{\theta(x)}}{1\over{\theta(z)}} = (1+z{\theta'\over\theta}(x)+O(z^{2}))({1\over z}+O(z)) = {1\over z}+{\theta'\over\theta}(x)+O(z). $$ Hence $$ \sigma_{x}^{\tau}(z) - ({1\over x}+k_{0}^{\tau}(z)) = ({\theta'\over\theta}(x)-{1\over x}) + ({1\over z} - {\theta'\over\theta}(z))+O(z),$$ which is finite at $z=0$. It follows that the $k_{n}^{\tau}$ are finite at $0$. By symmetry, they are also finite at $1$. \hfill \qed\medskip Since $\theta'(0)=1$ and $\theta$ does not vanish on $\{a+b\tau| a,b\in [0,1]\} -\{0,1,\tau,1+\tau\}$, there exists a determination of $\log(-2\pi\on{i}\theta)$ on this open set such that $\on{log}(-2\pi\on{i}\theta(z)) = \on{log}(-2\pi\on{i}z) +o(1)$ as $z\to 0^{+}$ (the logarithm having imaginary part $-\pi/2$). \begin{definition} For $\tau\in\HH$, we set \begin{equation} \label{def:I} I_{x_{1},\ldots,x_{n}}(\tau):= I^{\on{log}(-2\pi\on{i}\theta_{\tau})}_{[0,1]}(\sigma_{x_{1}}^{\tau}dz, \ldots,\sigma_{x_{n}}^{\tau}dz) \end{equation} \begin{equation} \label{def:J} J_{x_{1},\ldots,x_{n}}(\tau):= I_{[0,\tau]}^{\on{log} (-2\pi\on{i} e^{{{\on{i}\pi z^{2}}\over{\tau}}}\theta_{\tau}(z))} (e^{2\pi\on{i}{{x_{1}z}\over\tau}} \sigma_{x_{1}}^{\tau}(z)dz,\ldots, e^{2\pi\on{i}{{x_{n}z}\over\tau}}\sigma_{x_{n}}^{\tau}(z)dz) ; \end{equation} these are series in $(x_{1}\cdots x_{n})^{-1}{\mathbb{C}}[[x_{1},\ldots,x_{n}]]$.
\end{definition} We have $$ I_{x_{1},\ldots,x_{n}}(\tau) = \sum_{d_{1},\ldots,d_{n}\geq -1} I_{d_{1},\ldots,d_{n}}(\tau)x_{1}^{d_{1}}\cdots x_{n}^{d_{n}}, \quad J_{x_{1},\ldots,x_{n}}(\tau) = \sum_{d_{1},\ldots,d_{n}\geq -1} J_{d_{1},\ldots,d_{n}}(\tau)x_{1}^{d_{1}}\cdots x_{n}^{d_{n}}, $$ where, for $\underline{d}:= (d_{1},\ldots,d_{n})\in {\mathbb{Z}}_{\geq -1}^{n}$, $$ I_{0,\ldots,0}(\tau):=0, \quad I_{\underline{d}}(\tau) := \int_{0}^{1} {{(\on{log}\theta)^{\alpha-1}}\over{(\alpha-1)!}} k_{d_{\alpha}}dz\circ k_{d_{\alpha+1}}dz \circ \cdots\circ k_{d_{\beta-1}}dz \circ k_{d_{\beta}}dz{{(-\on{log}\theta)^{n-\beta}} \over{(n-\beta)!}} \text{ if }\underline{d}\neq 0, $$ where $\alpha = \on{min}\{k|d_{k}\neq 0\}$ and $\beta = \on{max}\{k|d_{k}\neq 0\}$, $k_{-1}=1$, and the $J_{\underline{d}}(\tau)$ have a definition similar to that of the $I_{\underline{d}}(\tau)$. We have $I_{\underbrace{\scriptstyle{-1,\ldots,-1}}_{n}}(\tau) = 1/n!$, $J_{\underbrace{\scriptstyle{-1,\ldots,-1}}_{n}}(\tau) = \tau^{n}/n!$. We call the functions $I_{\underline{d}}(\tau)$, $J_{\underline{d}}(\tau)$ the {\it elliptic analogues of the multizeta values}; the $I_{x_{1},\ldots,x_{n}}(\tau)$, $J_{x_{1},\ldots,x_{n}}(\tau)$ are generating series for them. This terminology is justified by the results of the following section. \subsection{Link with the expansion of $A(\tau)$, $B(\tau)$} \label{sect:corr} We have: \begin{equation} \label{form:A} e^{\on{i}\pi t}A(\tau) = \sum_{n\geq 0} (-1)^{n}\sum_{d_{1},\ldots,d_{n}\geq -1} I_{d_{1},\ldots,d_{n}}(\tau)[x^{d_{n}+1}y]\cdots[x^{d_{1}+1}y], \end{equation} where $$ [x^{n}y]:= \on{ad}(x)^{n}(y),\quad n\geq 0.
$$ Indeed, if $I(\tau)$ denotes the right-hand side of (\ref{form:A}), then by (\ref{eq:monodr}) we have the equality $I(\tau) = G_{1}(z)^{-1}G(z)$, where $G_{1}(z)$ is the solution of (\ref{eq:dept}) in $\{z\in{\mathbb{C}}|0<\Im(z)<\Im(\tau)\}$ such that $G_{1}(z)\sim (-2\pi\on{i}(1-z))^{t}$ as $z\to 1$, so that $A(\tau) = G(z-1)^{-1}G(z) = G(z-1)^{-1}G_{1}(z) I(\tau) = e^{-\on{i}\pi t} I(\tau)$. Let $$ F:= \oplus_{n\geq 0} F_{n} := \bigoplus_{n\geq 0} (x_{1}\cdots x_{n})^{-1} {\mathbb{C}}[[x_{1},\ldots,x_{n}]] ; $$ equipped with the product $f*g:= h$, where $h(x_{1},\ldots,x_{n+m}):= f(x_{1},\ldots,x_{n})g(x_{n+1},\ldots,x_{n+m})$ for $f\in F_{n}$, $g\in F_{m}$, it is a graded algebra. Let ${\mathfrak{f}}_{2}\ominus {\mathbb{C}} x\subset {\mathfrak{f}}_{2}$ be the direct sum of the components of degree $>0$ in $x$. By Lazard elimination, ${\mathfrak{f}}_{2}\ominus {\mathbb{C}} x$ is the Lie algebra freely generated by the $[x^{n}y]$, $n\geq 0$. Hence $U({\mathfrak{f}}_{2} \ominus {\mathbb{C}} x)$ is the free associative algebra on these generators. We therefore have an isomorphism between $F$ and the completion of $U({\mathfrak{f}}_{2} \ominus {\mathbb{C}} x)$ for the degree in $x,y$ via $$ [x^{d_{1}}y]\cdots [x^{d_{n}}y] \leftrightarrow x_{1}^{d_{1}-1}\cdots x_{n}^{d_{n}-1}\in F_{n}. $$ Then $$ U({\mathfrak{f}}_{2}\ominus {\mathbb{C}} x)_{\text{deg. in }x,y} \ni e^{\on{i}\pi t}A(\tau) \leftrightarrow ((-1)^{n}I_{x_{n},\ldots,x_{1}}(\tau))_{n\geq 0}\in F. $$ Moreover, $\on{log}B(\tau) = 2\pi\on{i}x-\tau y + \text{terms of degree }\geq 2 \text{ in }x,y$.
It follows that if $e_{+}\in\on{Der}({\mathfrak{f}}_{2})$ is the derivation $(x,y)\mapsto(0,x)$, then $\on{exp}((2\pi\on{i}/\tau)e_{+})(B(\tau))$ lies in the completion in $x,y$ of $U({\mathfrak{f}}_{2}\ominus{\mathbb{C}} x)$. In fact we have \begin{equation} \label{form:B} \on{exp}({{2\pi\on{i}}\over\tau}e_{+})(e^{-\on{i}\pi t}B(\tau)) = \sum_{n\geq 0}(-1)^{n}\sum_{d_{1},\ldots,d_{n}\geq -1} J_{d_{1},\ldots,d_{n}}(\tau)[x^{d_{n}+1}y]\cdots[x^{d_{1}+1}y], \end{equation} in other words $$ U({\mathfrak{f}}_{2}\ominus {\mathbb{C}} x)_{\text{deg. in }x,y} \ni \on{exp}({{2\pi\on{i}}\over\tau}e_{+})(e^{-\on{i}\pi t}B(\tau)) \leftrightarrow ((-1)^{n}J_{x_{n},\ldots,x_{1}}(\tau))_{n\geq 0}\in F. $$ This result rests on an argument analogous to the proof of (\ref{form:A}) and on the following observations: $B(\tau) = H(z)^{-1}H(z+\tau)$, where $H(z)$ is the solution of $$ H'(z) = \big({{2\pi\on{i}x}\over\tau} - e^{2\pi\on{i}{z\over\tau} \on{ad}x}{{\theta(z+\on{ad}x)\on{ad}x}\over{\theta(z)\theta(\on{ad}x)}} (y)\big)H(z)$$ such that $H(z)\sim (-2\pi\on{i}z)^{-[x,y]}$ as $z\to 0$; the image of this system under $\on{exp}((2\pi\on{i}/\tau)e_{+})$ is the analogous system in which the term $(2\pi\on{i}x)/\tau$ has disappeared. \subsection{Algebraic and modular properties} \label{subsect:proprietes} The ``group-like'' character of $A(\tau),B(\tau)$ translates into the identities \begin{equation} \label{rels:Id} I_{d_{1},\ldots,d_{n}}(\tau) I_{d_{n+1},\ldots,d_{n+m}}(\tau) = \sum_{\sigma\in S_{n,m}} I_{d_{\sigma(1)},\ldots,d_{\sigma(n+m)}}(\tau), \end{equation} together with the similar identities for the $J_{\underline{d}}(\tau)$, where $S_{n,m} = \{\sigma\in S_{n+m}|\sigma(i)<\sigma(j)$ if $i<j\leq n$ or $n+1\leq i<j\}$.
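The simplest instance of (\ref{rels:Id}) makes the shuffle structure explicit: for $n=m=1$, the set $S_{1,1}$ consists of the identity and the transposition, so that $$ I_{d_{1}}(\tau)\, I_{d_{2}}(\tau) = I_{d_{1},d_{2}}(\tau) + I_{d_{2},d_{1}}(\tau), \qquad d_{1},d_{2}\geq -1. $$ For example, taking $d_{2}=-1$ and using $I_{-1}(\tau)=1$, this gives $I_{d_{1}}(\tau) = I_{d_{1},-1}(\tau)+I_{-1,d_{1}}(\tau)$.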
The identities (\ref{A:A:B:B}) translate into $$ \sum_{k=0}^{n} (-1)^{d_{1}+\cdots+d_{k}} I_{d_{1},\ldots,d_{k}}(\tau) I_{d_{k+1},\ldots,d_{n}}(\tau)=0 \quad\text{for}\quad n\geq 1, \ d_{1},\ldots,d_{n}\geq -1,$$ together with the analogous identities obtained by replacing each $I_{\underline{d}}$ by $J_{\underline{d}}$. The elliptic multizetas are related by the modular identity \begin{equation} \label{id:mod} J_{x_{1},\ldots,x_{n}}(\tau) = \sum_{a,b\geq 0, \atop{a+b\leq n-1}} {{(-1)^{b}}\over{a!b!}}(\on{log}\tau)^{a+b}I_{{x_{a+1}\over\tau},\ldots, {x_{n-b}\over\tau}}({{-1} \over\tau}), \end{equation} which rests on the identities $$ \theta_{-1/\tau}(z) = {1\over\tau}e^{\on{i}\pi\tau z^{2}}\theta_{\tau}(\tau z), \quad \sigma^{-1/\tau}_{x}(z)dz = e^{2\pi\on{i}x\tau z} \sigma^{\tau}_{\tau x}(\tau z)d(\tau z). $$ The identity (\ref{id:mod}) is the translation of the identity (\ref{id:mod:A:B}) relating $A(\tau)$ and $B(\tau)$. \section{Differential system for the elliptic analogues of the multizeta values} \label{sect:diff} In this section, we derive a differential system satisfied by the elliptic analogues of the multizeta values from their integral expression. We place ourselves in the setting of Section \ref{subsect:IIreg}, with $A = A_{+} = {\mathbb{C}}$. Note that ${\mathcal L}$ is an affine space under $$ {\mathcal L}_{0}:= \{\ell\in C^{\infty}(U) | \gamma^{*}(\ell)\in C^{\infty}(I)\}. $$ We then show that the dependence of $I^{\ell}_{\gamma}(\omega_{1},\ldots,\omega_{n})$ on $\ell$ is as follows: \begin{proposition} \label{var:int} For $\varphi\in {\mathcal L}_{0}$, we have $$ I^{\ell+\varphi}_{\gamma}(\omega_{1},\ldots,\omega_{n}) = \sum_{a,b\geq 0, \atop a+b\leq n-1} {{(-1)^{b}}\over{a!b!}} \varphi(\alpha)^{a}\varphi(\beta)^{b} I^{\ell}_{\gamma} (\omega_{a+1},\ldots,\omega_{n-b}), $$ with $\alpha:= \gamma(0)$, $\beta:= \gamma(1)$.
\end{proposition} The following proposition rests on integration by parts. \begin{proposition} \label{prop:analyse} Let $\ell\in{\mathcal L}$, $\omega_{1},\ldots,\omega_{n}\in \Omega$, $g_{1},\ldots,g_{n}\in {\mathcal L}_{0}$, and $\psi_{12},\ldots,\psi_{n-1,n}\in\Omega$ be such that: $$ \forall i\in \{1,\ldots,n-1\}, \quad (g_{i}-g_{i+1})(\alpha) = (g_{i}-g_{i+1})(\beta), \quad \text{and} \quad g_{i}\omega_{i+1}-g_{i+1}\omega_{i} = (g_{i}-g_{i+1})(\alpha)\psi_{i,i+1}. $$ Then \begin{align} \label{id:prop:analyse} & (d/d\epsilon)_{|\epsilon=0}I^{\ell}_{\gamma}(\omega_{1}+\epsilon dg_{1}, \ldots,\omega_{n}+\epsilon dg_{n}) \\ & \nonumber = - g_{1}(\alpha)I^{\ell}_{\gamma}(\omega_{2},\ldots,\omega_{n}) +g_{n}(\alpha)I^{\ell}_{\gamma}(\omega_{1},\ldots,\omega_{n-1}) + \sum_{i=1}^{n-1}(g_{i}-g_{i+1})(\alpha)I^{\ell}_{\gamma}(\omega_{1}, \ldots,\psi_{i,i+1},\ldots,\omega_{n}) . \end{align} \end{proposition} The Weierstrass function is defined by $\wp(z) = \sum'_{a\in {\mathbb{Z}}+\tau{\mathbb{Z}}} ((z+a)^{-2}- a^{-2})$, where $\sum'$ means that the term $a^{-2}$ is omitted when $a=0$. We then set $$\tilde\wp(z):= \wp(z)+G_{2}(\tau) = \sum_{m\in{\mathbb{Z}}} (\sum_{n}{}^{'}(z+n+m\tau)^{-2}).$$ \begin{theorem} \label{thm:ode} \begin{equation} \label{I:ode} (2\pi\on{i})\partial_{\tau} I_{x_{1},\ldots,x_{n}}(\tau) = \tilde\wp(x_{1})I_{x_{2},\ldots,x_{n}}(\tau) - \tilde\wp(x_{n})I_{x_{1},\ldots,x_{n-1}} (\tau) + \sum_{i=1}^{n-1}(\wp(x_{i+1})-\wp(x_{i}))I_{x_{1},\ldots, x_{i}+x_{i+1},\ldots,x_{n}}(\tau).
\end{equation} and \begin{align*} (2\pi\on{i})\partial_{\tau} J_{x_{1},\ldots,x_{n}}(\tau) & = \tilde\wp(x_{1})J_{x_{2},\ldots,x_{n}}(\tau) - \tilde\wp(x_{n})J_{x_{1},\ldots,x_{n-1}} (\tau) + \sum_{i=1}^{n-1}(\wp(x_{i+1})-\wp(x_{i}))J_{x_{1},\ldots, x_{i}+x_{i+1},\ldots,x_{n}}(\tau) \\ & -{{2\pi\on{i}}\over\tau}(x_{1}\partial_{x_{1}}+\cdots + x_{n} \partial_{x_{n}})J^{\tau}_{x_{1},\ldots,x_{n}}. \end{align*} \end{theorem} {\em Proof.} Let us prove the first identity. Since $I_{x_{1},\ldots,x_{n}}(\tau)= I^{\on{log}(-2\pi\on{i}\theta_{\tau})}_{[0,1]}(\sigma_{x_{1}}^{\tau}dz,\ldots, \sigma_{x_{n}}^{\tau}dz)$, we have $$ \partial_{\tau}I_{x_{1},\ldots,x_{n}}(\tau) = (d/dt)_{|t=0}I^{\on{log}(-2\pi\on{i}\theta_{\tau+t})}_{[0,1]} (\sigma^{\tau}_{x_{1}}dz,\ldots,\sigma^{\tau}_{x_{n}}dz) + (d/dt)_{|t=0}I^{\on{log}(-2\pi\on{i}\theta_{\tau})}_{[0,1]} (\sigma^{\tau+t}_{x_{1}}dz,\ldots,\sigma^{\tau+t}_{x_{n}}dz). $$ The desired equality is a consequence of the following two lemmas. \begin{lemma} $(d/dt)_{|t=0}I^{\on{log}(-2\pi\on{i}\theta_{\tau+t})}_{[0,1]} (\sigma^{\tau}_{x_{1}}dz,\ldots,\sigma^{\tau}_{x_{n}}dz) =0$. \end{lemma} {\em Proof.} Since $\theta_{\tau}$ is odd, we have $\theta_{\tau}(z) = z+O(z^{3})$ at $z=0$, hence $\on{log}(-2\pi\on{i}\theta_{\tau}(z)) = \on{log}(-2\pi\on{i}z)+O(z^{2})$. Therefore, if we set $\varphi(z) = (d/dt)_{|t=0}\on{log}(-2\pi\on{i}\theta_{\tau+t}(z))$, then $\varphi(z) = O(z^{2})$ at $z=0$, whence $\varphi(0)=0$; similarly, $\varphi(1)=0$. The result is then a consequence of the identity $$ (d/dt)_{|t=0}I^{\ell+t\varphi}_{[0,1]}(\omega_{1},\ldots,\omega_{n}) = \varphi(0)I^{\ell}_{[0,1]}(\omega_{2},\ldots,\omega_{n}) - \varphi(1) I^{\ell}_{[0,1]}(\omega_{1},\ldots,\omega_{n-1})$$ (cf. Proposition \ref{var:int}).
\hfill\qed \medskip \begin{lemma} \begin{align} \label{big:id} & 2\pi\on{i} (d/dt)_{|t=0}I^{\on{log}(-2\pi\on{i}\theta_{\tau})}_{[0,1]} (\sigma_{x_{1}}^{\tau+t}dz,\ldots,\sigma_{x_{n}}^{\tau+t}dz) \\ & \nonumber = \tilde\wp(x_{1})I^{\on{log}(-2\pi\on{i}\theta_{\tau})}_{[0,1]} (\sigma_{x_{2}}^{\tau}dz,\ldots,\sigma_{x_{n}}^{\tau}dz) -\tilde\wp(x_{n})I^{\on{log}(-2\pi\on{i}\theta_{\tau})}_{[0,1]} (\sigma_{x_{1}}^{\tau}dz, \ldots,\sigma_{x_{n-1}}^{\tau}dz) \\ & \nonumber +\sum_{i=1}^{n-1} (\wp(x_{i+1})-\wp(x_{i}))I^{\on{log}(-2\pi\on{i}\theta_{\tau})}_{[0,1]} (\sigma_{x_{1}}^{\tau}dz, \ldots,\sigma_{x_{i}+x_{i+1}}^{\tau}dz,\ldots,\sigma_{x_{n}}^{\tau}dz). \end{align} \end{lemma} {\em Proof.} Set $\ell := \on{log}(-2\pi\on{i}\theta_{\tau})$, $\omega_{i} := \sigma_{x_{i}}^{\tau}dz$, $g_{i}:= {1\over{2\pi\on{i}}} \partial_{x_{i}}\sigma_{x_{i}}^{\tau}$, $\psi_{i,i+1}:= \sigma_{x_{i}+x_{i+1}}^{\tau}dz$. The hypotheses of Proposition \ref{prop:analyse} are satisfied because: $\bullet$ $g_{i}(z) = {1\over{2\pi\on{i}}}{{\theta(x_{i})\theta'(x_{i}+z) -\theta'(x_{i})\theta(x_{i}+z)}\over{\theta(z)\theta(x_{i})^{2}}}$ is $C^{\infty}$ at $0$ and $1$, being the quotient of two $C^{\infty}$ functions vanishing there, with the denominator having nonzero derivative; we then have $$ g_{i}(0) = g_{i}(1) = {1\over{2\pi\on{i}}}{{\theta\theta''-(\theta')^{2}} \over{\theta^{2}}}(x_{i}) = {1\over {2\pi\on{i}}}({\theta'\over\theta})'(x_{i}) = - {1\over {2\pi\on{i}}} \tilde\wp(x_{i}) $$ (the last equality comes from Lemma \ref{lemme:weier}); in particular, $(g_{i}-g_{i+1})(0) = (g_{i}-g_{i+1})(1)$; $\bullet$ like each $\sigma_{x}^{\tau}dz$, $\psi_{i,i+1}$ lies in $\Omega$; $\bullet$ we have the identity \begin{equation} \label{id:surv} \forall x,y\in{\mathbb{C}}, \quad (\partial_{x}\sigma_{x}) \sigma_{y} - \sigma_{x}(\partial_{y}\sigma_{y}) = \sigma_{x+y}(\wp(y)-\wp(x)) , \end{equation} which is proved as follows: the left-hand side behaves in the same way as the right-hand side under the transformations of the dummy variable $z\mapsto z+1$, $z\mapsto z+\tau$; to study its behavior at $z=0$, we rewrite it as $(\partial_{x}\sigma_{x}) \sigma_{y}(z) - \sigma_{x}(\partial_{y}\sigma_{y})(z) = \sigma_{x}\sigma_{y}({{\partial_{x}\sigma_{x}} \over{\sigma_{x}}} - {{\partial_{y}\sigma_{y}} \over{\sigma_{y}}})(z) = \sigma_{x}\sigma_{y}({\theta'\over\theta}(z+x) - {\theta'\over\theta}(x) - {\theta'\over\theta}(z+y) + {\theta'\over\theta}(y))$, whose expansion is ${1\over {z^{2}}}\times z\times (({\theta'\over\theta})'(x) - ({\theta'\over\theta})'(y)) + O(1) = {1\over z}(\wp(y)-\wp(x)) + O(1)$. We therefore have a simple pole at $0$, which implies that the left-hand side is proportional to $\sigma_{x+y}$; the expansion at $0$ also allows one to compute the proportionality coefficient; $\bullet$ the equality $g_{i}\omega_{i+1} - g_{i+1}\omega_{i} = (g_{i}-g_{i+1})(0) \psi_{i,i+1}$ is a direct consequence of the identity (\ref{id:surv}). Proposition \ref{prop:analyse} then says that $(d/dt)_{|t=0} I^{\on{log}(-2\pi\on{i}\theta_{\tau})}_{[0,1]} (\sigma_{x_{1}}^{\tau}dz+t dg_{1},\ldots,\sigma_{x_{n}}^{\tau}dz+t dg_{n}) = {1\over{2\pi\on{i}}}\times ($right-hand side of $(\ref{big:id}))$. It remains to show that $(d/dt)_{|t=0}\sigma_{x_{i}}^{\tau+t} = dg_{i}$. This is an immediate consequence of the equality $$ \partial_{\tau}({{\theta(z+x)}\over{\theta(z)\theta(x)}}) = {1\over{2\pi\on{i}}}\partial_{z}\partial_{x}( {{\theta(z+x)}\over{\theta(z)\theta(x)}}) $$ (cf. \cite{CEE}, three lines after equation (14)). \hfill \qed The second identity is proved like the first.
It is also a consequence of the first identity and of the modular identity (\ref{id:mod}), taking into account the modular relation $\tilde\wp_{-1/\tau}(x) = \tau^{2}\tilde\wp_{\tau}(\tau x) - 2\pi\on{i}\tau$, which comes from $\wp_{-1/\tau}(x) = \tau^{2} \wp_{\tau}(\tau x)$ and from $G_{2}(-1/\tau) = \tau^{2}G_{2}(\tau) - 2\pi\on{i} \tau$ (\cite{Se}, equation (45), p. 156). \qed\medskip We end this section with the proof of an identity used above. \begin{lemma} \label{lemme:weier} We have the following Laurent expansions at $x=0$ $$ {\theta'_{\tau}\over\theta_{\tau}}(x) = {1\over x} - G_{2}(\tau)x - G_{4}(\tau)x^{3}- \cdots, \quad \wp_{\tau}(x) = {1\over x^{2}} + 3G_{4}(\tau)x^{2}+5G_{6}(\tau)x^{4} +\cdots. $$ \end{lemma} {\em Proof.} The second expansion comes from $(x+a)^{-2} = a^{-2}-2xa^{-3}+\cdots$. By \cite{Po}, Thm. 3.9, $\wp = - (\sigma'/\sigma)'$ where $\sigma(z) = e^{{1\over 2}G_{2}z^{2}}\theta(z)$. Hence $(\theta'/\theta)'(z) = -\wp(z)-G_{2}(\tau)$, which determines the expansion of $\theta'/\theta$ up to an additive constant. This constant is determined by the fact that $\theta'/\theta$ is an odd function. \hfill \qed\medskip We deduce $$ \tilde\wp(x) = \sum_{n\geq -1} (2n+1)G_{2n+2}(\tau)x^{2n} = -({\theta'\over\theta})'(x), $$ where we have set $G_{0}:= -1$.
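The first terms of this series can be checked directly against the expansions of the lemma:

```latex
% with G_0 := -1, the n = -1 and n = 0 terms of the series give
\sum_{n\geq -1}(2n+1)G_{2n+2}(\tau)x^{2n}
  = \underbrace{(-1)G_{0}(\tau)}_{=1}\,x^{-2} + G_{2}(\tau)
  + 3G_{4}(\tau)x^{2} + \cdots
  = {1\over x^{2}} + G_{2}(\tau) + 3G_{4}(\tau)x^{2} + \cdots ,
% which agrees with \tilde\wp = \wp + G_2, while differentiating the
% expansion of \theta'/\theta term by term yields
\Big({\theta'\over\theta}\Big)'(x)
  = -{1\over x^{2}} - G_{2}(\tau) - 3G_{4}(\tau)x^{2} - \cdots
  = -\tilde\wp(x).
```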
\section{Realization of $\langle\delta_{2n},n\geq -1\rangle$ and comparison of differential systems} \label{sect:real:fonct} Since $F = U({\mathfrak{f}}_{2}\ominus{\mathbb{C}} x)\subset U({\mathfrak{f}}_{2})$ is a sub-${\mathfrak{f}}_{2}$-module of $U({\mathfrak{f}}_{2})$, we have a subspace $\on{Der}_{t}({\mathfrak{f}}_{2},F) \subset \on{Der}_{t}({\mathfrak{f}}_{2},U({\mathfrak{f}}_{2})) = \on{Der}_{t}(U({\mathfrak{f}}_{2}))$, which is in fact a Lie subalgebra. ($\on{Der}_{t}$ is the set of derivations sending $t= -[x,y]\in{\mathfrak{f}}_{2}$ to $0$.) On the other hand, the degree in $y$ induces a grading of these Lie algebras; we denote by $\on{Der}({\mathfrak{f}}_{2})_{+}\subset \on{Der}({\mathfrak{f}}_{2})$ the part of $y$-degree $>0$. We therefore have a sequence of inclusions of Lie algebras $$ \langle\delta_{2n},n\geq -1\rangle \subset \on{Der}_{t}({\mathfrak{f}}_{2})_{+} \subset \on{Der}_{t}({\mathfrak{f}}_{2},F) \subset \on{Der}_{t}(F). $$ The aim of this section is to establish an isomorphism $$ \on{Der}_{t}({\mathfrak{f}}_{2},F)\simeq {\mathcal G}_{0} $$ between $\on{Der}_{t}({\mathfrak{f}}_{2},F)$ and an explicit ``functional'' Lie algebra ${\mathcal G}_{0}$, and then to deduce the link between the differential equations of Theorem \ref{thm:ode} and those satisfied by $A(\tau)$, $B(\tau)$ (equations (\ref{ED:A:B})). The following statement is immediate.
\begin{proposition} Let $$ {\mathcal G}:= \oplus_{n\geq 1}{\mathcal G}[n] = \oplus_{n\geq 1} {\mathbb{C}}(x_{1},\ldots,x_{n+1}) ; $$ a graded Lie algebra structure is defined on ${\mathcal G}$ by $$ [\varphi^{1,\ldots,n},\psi^{1,\ldots,m}]:= \sum_{i=1}\varphi^{i,i+1,\ldots,i+n-1} \psi^{1,\ldots,i-1,ii+1\ldots i+n-1,i+n,\ldots,n+m-1} - ((\varphi,n)\leftrightarrow(\psi,m)) ; $$ we write $\varphi^{1,\ldots,n}:= \varphi(x_{1},\ldots,x_{n})$, $\varphi^{12,3}:= \varphi(x_{1}+x_{2},x_{3})$, etc. \end{proposition} The space $F_{\infty}:= {\mathbb{C}}(x_{i},i\in{\mathbb{Z}})$ of rational functions in infinitely many variables is a ${\mathcal G}$-module via $\varphi * f:= \sum_{i\in{\mathbb{Z}}} \varphi^{i,i+1,\ldots,i+n-1} f^{\ldots,i-1,ii+1\ldots i+n-1,i+n,\ldots}$. Through the identification of ${\mathbb{C}}[x_{1},\ldots,x_{n}]$ with ${\mathbb{C}}[x_{1},\ldots,x_{n+1}]/ (x_{1}+\cdots+x_{n+1})$, we obtain an action of the symmetric group $S_{n+1}$ on the former algebra (in other words, we are dealing with the symmetric algebra of the quotient ${\mathbb{C}}^{n+1}/{\mathbb{C}}$ of the natural representation by the trivial one). We denote by $C_{n+1}\subset S_{n+1}$ the cyclic subgroup. \begin{proposition} Let $$ {\mathcal G}_{0}:= \oplus_{n\geq 1}{\mathcal G}_{0}[n]= \oplus_{n\geq 1}\big({{{\mathbb{C}}[x_{1},\ldots,x_{n}]}\over {x_{1}\cdots x_{n}(x_{1}+\cdots+x_{n})}}\big)^{C_{n+1}}.
$$ A graded Lie algebra structure is defined on ${\mathcal G}_{0}$ by \begin{align*} [\varphi^{1,\ldots,n},\psi^{1,\ldots,m}]_{0} & := \sum_{i=1}^{n}(\varphi^{i,i+1,\ldots,i+n-1} - \varphi^{i+1,\ldots,i+n}) \psi^{1,\ldots,i-1,ii+1\ldots i+n,i+n+1,\ldots,n+m} \\ & - \sum_{j=1}^{m}(\psi^{j,j+1,\ldots,j+m-1} - \psi^{j+1,\ldots,j+m}) \varphi^{1,\ldots,j-1,jj+1\ldots j+m,j+m+1,\ldots,n+m} \\ & - \varphi^{1,\ldots,n}\psi^{n+1,\ldots,n+m} +\varphi^{n+1,\ldots,n+m}\psi^{1,\ldots,n}. \end{align*} $F$ has a graded ${\mathcal G}_{0}$-module structure given by \begin{align} \label{act:F} \varphi^{1,\ldots,n} \bullet f^{1,\ldots,m} & \nonumber := \sum_{i=1}^{m}(\varphi^{i,i+1,\ldots,i+n-1} - \varphi^{i+1,\ldots,i+n})f^{1,\ldots,i-1,ii+1\ldots i+n,i+n+1,\ldots,n+m} \\ & - \varphi^{1,\ldots,n}f^{n+1,\ldots,n+m} + \varphi^{n+1,\ldots,n+m} f^{1,\ldots,m}. \end{align} An isomorphism of graded Lie algebras $\on{Der}_{t}({\mathfrak{f}}_{2},F)\simeq {\mathcal G}_{0}$ is given by $$ \on{Der}_{t}({\mathfrak{f}}_{2},F)[n]\ni D = (u,v)\leftrightarrow \varphi(x_{1},\ldots,x_{n}) \in{\mathcal G}_{0}[n], $$ where $(u,v)$ is the derivation given by $x\mapsto u$, $y\mapsto v$, and the correspondence is given by $$ u(x_{1},\ldots,x_{n}) = (x_{1}+\cdots+x_{n})\varphi(x_{1},\ldots,x_{n}), $$ $$ v(x_{1},\ldots,x_{n+1}) = ({1\over {x_{1}}} - {1\over{x_{1}+\cdots +x_{n+1}}})\varphi(x_{2},\ldots,x_{n+1}) + ({1\over{x_{1}+\cdots +x_{n+1}}} - {1\over {x_{n+1}}})\varphi(x_{1},\ldots,x_{n}). $$ \end{proposition} A Lie algebra morphism ${\mathcal G}_{0}\to{\mathcal G}$ is moreover given by $$ {\mathcal G}_{0}[n]\ni \varphi(x_{1},\ldots,x_{n})\mapsto \varphi(x_{1},\ldots,x_{n}) - \varphi(x_{2},\ldots,x_{n+1}) \in{\mathcal G}[n]. $$ {\em Proof.} Let $D \in\on{Der}_{t}({\mathfrak{f}}_{2},F)[n]$.
$D$ is determined by $(u,v):= (D(x),D(y))\in F_{n}\times F_{n+1}$. The condition on $(u,v)$ is $$ (x_{1}+\cdots+x_{n+1})v(x_{1},\ldots,x_{n+1}) = x_{1}^{-1}u(x_{2},\ldots, x_{n+1}) - x_{n+1}^{-1}u(x_{1},\ldots,x_{n}) $$ (an identity in $(x_{1}\cdots x_{n+1})^{-1}{\mathbb{C}}[x_{1},\ldots,x_{n+1}]$). We have a reduction map modulo $x_{1}+\cdots+x_{n+1}$ from this space to $(x_{1}\cdots x_{n}(x_{1}+\cdots+x_{n}))^{-1}{\mathbb{C}}[x_{1}, \ldots,x_{n}]$. The image of this identity then expresses the $C_{n+1}$-invariance of $\varphi(x_{1},\ldots,x_{n}):= u(x_{1},\ldots,x_{n})/(x_{1}+\cdots+x_{n})$. We therefore have a linear map \begin{equation} \label{appli} \on{Der}_{t}({\mathfrak{f}}_{2},F)[n] \to (x_{1}\cdots x_{n}(x_{1}+\cdots+x_{n}))^{-1}{\mathbb{C}}[x_{1},\ldots,x_{n}]^{C_{n+1}}, \end{equation} $D\mapsto \varphi$. This map is injective because the vanishing of $u$ implies that of $v$. The last two formulas of the proposition define a map $$ (x_{1}\cdots x_{n}(x_{1}+\cdots+x_{n}))^{-1}{\mathbb{C}}[x_{1},\ldots,x_{n}]^{C_{n+1}} \to F_{n}\times F_{n+1} $$ (the pole at $x_{1}+\cdots+x_{n+1}$ disappearing by $C_{n+1}$-invariance), whose image in fact lies in $\on{Der}_{t}({\mathfrak{f}}_{2},F)[n]$ and which is inverse to (\ref{appli}). One then checks that the transport to ${\mathcal G}_{0}$ of the Lie algebra structure on $\on{Der}_{t}({\mathfrak{f}}_{2},F)$ and of the module structure of $F$ over this Lie algebra is given by the formulas of the statement. \hfill \qed\medskip We have $$ {\mathcal G}_{0}[1] = x_{1}^{-2}{\mathbb{C}}[x_{1}^{2}].
$$ \begin{lemma} The isomorphism ${\mathcal G}_{0}\simeq \on{Der}_{t}({\mathfrak{f}}_{2},F)$ induces the correspondence $$ \delta_{2n}\in \on{Der}_{t}({\mathfrak{f}}_{2})_{+}[1] \subset \on{Der}_{t}({\mathfrak{f}}_{2},F)[1] \leftrightarrow {\mathcal G}_{0}[1] \ni x_{1}^{2n}. $$ \end{lemma} {\em Proof.} The derivation corresponding to $x_{1}^{2n}$ is a derivation in $\on{Der}_{t}({\mathfrak{f}}_{2},F)$ such that $x\mapsto u = x_{1}^{2n+1} \leftrightarrow [x^{2n+1}y]$. Since the map $\on{Der}_{t}({\mathfrak{f}}_{2},F)\to F$, $D\mapsto D(x)$ is injective, this derivation coincides with $\delta_{2n}$. \hfill \qed\medskip Let us also recall the correspondence $$ e^{\on{i}\pi t}A(\tau)\in U({\mathfrak{f}}_{2}\ominus {\mathbb{C}} x)\leftrightarrow F\ni ((-1)^{n}I_{x_{n},\ldots,x_{1}}(\tau))_{n\geq 0} =: \tilde I(\tau). $$ (section \ref{sect:corr}). By (\ref{ED:A:B}) and the invariance of $t$ under the $\delta_{2n}$, $n\geq -1$, $e^{\on{i}\pi t}A(\tau)$ satisfies the differential equation $$ 2\pi\on{i}\partial_{\tau}(e^{\on{i}\pi t}A(\tau)) = -(\sum_{n\geq -1} (2n+1) G_{2n+2}(\tau) \delta_{2n}) (e^{\on{i}\pi t}A(\tau)).
$$ The image of this differential equation under the isomorphism $U({\mathfrak{f}}_{2}\ominus {\mathbb{C}} x)\simeq F$ gives $$ 2\pi\on{i}\partial_{\tau}\tilde I(\tau) = -(\sum_{n\geq -1} (2n+1) G_{2n+2}(\tau) x_{1}^{2n}) \bullet \tilde I(\tau) = -\tilde\wp_{\tau}(x_{1}) \bullet \tilde I(\tau), $$ hence if $I(\tau):= (I_{x_{1},\ldots,x_{n}}(\tau))_{n\geq 0}$, then $2\pi\on{i}\partial_{\tau}I(\tau) = -\tilde\wp_{\tau}(x_{1})\bullet I(\tau)$, that is, for each $n$ $$ 2\pi\on{i}\partial_{\tau}I_{x_{1},\ldots,x_{n}}(\tau) = -\tilde\wp_{\tau}(x_{1}) \bullet I_{x_{1},\ldots,x_{n-1}}(\tau). $$ Taking into account the formula (\ref{act:F}) for the action of ${\mathcal G}_{0}$ on $F$, we thus recover the first differential equation of Theorem \ref{thm:ode}. Likewise, $e^{-\on{i}\pi t}B(\tau)$ satisfies the differential equation $$ 2\pi\on{i}\partial_{\tau}(e^{-\on{i}\pi t}B(\tau)) = -(\sum_{n\geq 1} (2n+1)G_{2n+2}(\tau)\delta_{2n})(e^{-\on{i}\pi t}B(\tau)). $$ Setting $\tilde B(\tau):= \on{exp}({2\pi\on{i}\over\tau}e_{+})(e^{-\on{i}\pi t} B(\tau))$ (see section \ref{sect:corr}), we deduce $$ 2\pi\on{i}\partial_{\tau}\tilde B(\tau) = -({2\pi\on{i}\over\tau}h + \sum_{n\geq -1}(2n+1)G_{2n+2}(\tau)\delta_{2n}) (\tilde B(\tau)), $$ where $h := [e_{+},\delta]$ is the derivation of ${\mathfrak{f}}_{2}$ given by $(x,y)\mapsto (x,-y)$, taking into account $[e_{+},\delta_{2n}]=0$ for $n\geq 0$ and ${1\over 2}[e_{+},[e_{+},\delta_{-2}]] + e_{+}=0$.
We have the correspondence $$ U({\mathfrak{f}}_{2} \ominus {\mathbb{C}} x)\ni \tilde B(\tau)\leftrightarrow ((-1)^{n}J_{x_{n},\ldots,x_{1}}(\tau))=:\tilde J(\tau)\in F, $$ and moreover the derivation $h$ is transported under this correspondence to the degree-zero derivation of $F = \oplus_{n\geq 0}F_{n}$, acting on $F_{n}$ as $\xi:= \sum_{i=1}^{n} x_{i}\partial_{x_{i}}$. We deduce $$ 2\pi\on{i}\partial_{\tau}\tilde J(\tau) = -({2\pi\on{i}\over\tau}\xi +\tilde\wp_{\tau}(x_{1})\bullet)\tilde J(\tau), $$ hence $J(\tau):= (J_{x_{1},\ldots,x_{n}}(\tau))_{n\geq 0}$ satisfies the same differential equation, so $$ 2\pi\on{i}\partial_{\tau} J_{x_{1},\ldots,x_{n}}(\tau) = - {2\pi\on{i}\over\tau}(\sum_{i=1}^{n}x_{i}\partial_{x_{i}}) J_{x_{1},\ldots,x_{n}}(\tau) -\tilde\wp_{\tau}(x_{1})\bullet J_{x_{1},\ldots,x_{n-1}}(\tau), $$ which allows us to recover the second differential equation of Theorem \ref{thm:ode}. \section{Asymptotic expansion of the elliptic analogues of multizeta values} \label{section:DA} In this section, we use the differential equations satisfied by the functions $A(\tau)$ and $B(\tau)$ (equations (\ref{ED:A:B})) and their behavior at infinity ((\ref{comp:A}), (\ref{comp:B})) to obtain an asymptotic expansion of them as $\tau\to\on{i}\infty$. We deduce the form of the asymptotic expansion of the functions $I_{\underline{d}}(\tau)$, $J_{\underline{d}}(\tau)$ in this region. \subsection{Expansion of $g(\tau)$} \label{DA:g} Let ${\mathfrak G}$ be the completion of the Lie algebra $\langle\delta_{2n},n\geq -1\rangle \subset \on{Der}_{t}({\mathfrak{f}}_{2})$ with respect to the bidegree in $(x,y)$; we have $|\delta_{2n}| = (2n+1,1)$. Let $G:= \on{exp}({\mathfrak G}) \subset \on{Aut}_{t}(\hat{\mathfrak{f}}_{2})$ be the corresponding Lie group.
\begin{proposition} \label{prop:h} There exists a unique function $g(\tau) : \HH\to G$ such that $$ 2\pi\on{i}\partial_{\tau}g(\tau) = -(\sum_{n\geq -1} (2n+1)G_{2n+2}(\tau)\delta_{2n})g(\tau)$$ and $g(\tau)\simeq e^{{-1\over{2\pi\on{i}}}(\delta_{-2}+\sum_{n\geq 0} (2n+1)\cdot 2\zeta(2n+2)\delta_{2n})\tau}= e^{D_{0}\tau}$ as $\tau\to \on{i}\infty$. There exists a collection $(h_{k})_{k\geq 0}$, with $h_{0}=1$, such that $g(\tau)$ has the asymptotic expansion $$ g(\tau)\simeq \sum_{k,n\geq 0}{1\over{n!}}h_{k}D_{0}^{n}\tau^{n}e^{2\pi\on{i} k\tau} $$ as $\tau\to\on{i}\infty$. \end{proposition} {\em Proof.} Set $D(\tau):= {{-1}\over{2\pi\on{i}}} \sum_{n\geq -1}(2n+1)G_{2n+2}(\tau)\delta_{2n}$. For $m\geq 1$, set $g_{2m}(n):= {{2(2\pi\on{i})^{2m}}\over{(2m-1)!}}\sigma_{2m-1}(n)$ (where $\sigma_{k}(n) = \sum_{d|n}d^{k}$) for $n>0$, and $g_{2m}(0):= 2\zeta(2m)$; and set $g_{0}(n)=0$ for $n>0$, and $g_{0}(0)=-1$. Then $G_{2m}(\tau) = \sum_{n\geq 0} g_{2m}(n)e^{2\pi\on{i}n\tau}$ and $$D(\tau) = \sum_{m\geq 0}D_{m}e^{2\pi\on{i}m\tau}, \text{\ where\ }D_{m}:= {{-1}\over{2\pi\on{i}}} \sum_{n\geq -1} (2n+1)g_{2n+2}(m)\delta_{2n}. $$ Since $D_{0}$ is of $y$-degree $1$, $2\pi\on{i}m-\on{ad}D_{0}$ is invertible in $\on{End}(U{\mathfrak G})$ for $m>0$. Define $(h_{m})_{m\geq 0}$ by $h_{0}:=1$, $$ h_{m}:= (2\pi\on{i}m - \on{ad}D_{0})^{-1}(\sum_{m'+m''=m\atop{m'>0}} D_{m'}h_{m''})\text{\ for\ }m>0. $$ Then $h(\tau):= \sum_{m\geq 0}h_{m}e^{2\pi\on{i}m\tau}$ is a formal solution of $\partial_{\tau}h(\tau) = D(\tau)h(\tau) - h(\tau)D_{0}$, which is the differential equation satisfied by $g(\tau)e^{-D_{0}\tau}$; this function therefore admits $h(\tau)$ as its asymptotic expansion.
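The recursion defining $(h_{m})_{m\geq 0}$ is obtained by matching Fourier coefficients: writing $q = e^{2\pi\on{i}\tau}$ and substituting $h(\tau)=\sum_{m\geq 0}h_{m}q^{m}$ into the equation $\partial_{\tau}h(\tau) = D(\tau)h(\tau) - h(\tau)D_{0}$, the coefficient of $q^{m}$ gives

```latex
2\pi\on{i}\,m\,h_{m}
 = \sum_{m'+m''=m} D_{m'}h_{m''} - h_{m}D_{0}
 = D_{0}h_{m} - h_{m}D_{0}
   + \sum_{m'+m''=m\atop{m'>0}} D_{m'}h_{m''},
\quad\text{i.e.}\quad
(2\pi\on{i}\,m - \on{ad}D_{0})\,h_{m}
 = \sum_{m'+m''=m\atop{m'>0}} D_{m'}h_{m''},
```

which is solvable for each $m>0$ precisely because $2\pi\on{i}m-\on{ad}D_{0}$ is invertible.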
\hfill \qed\medskip \subsection{Expansions of $A(\tau)$, $B(\tau)$} \label{sect:DA:A:B} Set $$ A_{\infty}:= \Phi(\tilde y,t)e^{2\pi\on{i}\tilde y}\Phi(\tilde y,t)^{-1}, \quad \underline B(\tau):= e^{\on{i}\pi t}\Phi(-\tilde y-t,t)e^{2\pi\on{i}x} e^{2\pi\on{i}\tilde y\tau}\Phi(\tilde y,t)^{-1}, \quad B_{\infty}:= B[0]. $$ By \cite{CEE}, we have $$ A_{\infty} = e^{\tau D_{0}}(A_{\infty}), \quad \underline B(\tau) = e^{\tau D_{0}}(B_{\infty}). $$ Set $h(\tau):= g(\tau)e^{-\tau D_{0}}$; then by Proposition \ref{prop:h}, $h(\tau)$ admits the asymptotic expansion $h(\tau)\simeq 1+\sum_{m>0}h_{m}e^{2\pi\on{i}m\tau}$. We then have $A(\tau) = g(\tau)(A_{\infty}) = h(\tau)(A_{\infty})$, so $A(\tau)$ admits the asymptotic expansion $$ \label{dev:A} A(\tau)\simeq \sum_{m\geq 0}e^{2\pi\on{i}m\tau} h_{m}(A_{\infty}), $$ and $B(\tau) = g(\tau)(B_{\infty}) = h(\tau)(\underline B(\tau))$, so $\tilde B(\tau)$ admits the asymptotic expansion $$ \label{dev:B} \tilde B(\tau) \simeq \on{exp}(-{{2\pi\on{i}}\over\tau}e_{+})h(\tau) (\underline B(\tau)). $$ \subsection{Expansions of $I_{\underline{d}}(\tau)$, $J_{\underline{d}}(\tau)$} \label{sect:DA:I:J} Let ${\bold k}_{MZV}\subset {\mathbb{C}}$ be the ${\mathbb{Q}}$-subring generated by the multizeta values. Since $\Phi$ has coefficients in ${\bold k}_{MZV}$, we deduce from (\ref{dev:A}) and (\ref{dev:B}): \begin{proposition} The functions $I_{\underline{d}}(\tau)$, $J_{\underline{d}}(\tau)$ admit the asymptotic expansions $$ I_{\underline{d}}(\tau)\simeq \sum_{n\geq 0} I_{\underline{d},n}e^{2\pi\on{i}n\tau}, \quad J_{\underline{d}}(\tau) \simeq \sum_{n\geq 0}\sum_{s\in{\mathbb{Z}}}J_{\underline d,n,s} \tau^{s}e^{2\pi\on{i}n\tau} , $$ in which the coefficients lie in ${\bold k}_{MZV}[2\pi\on{i}]$. In the second series, the second sum $\sum_{s}$ is finite for every $n\geq 0$.
\end{proposition} \defBibliographie{Bibliography}
\section{Introduction} Two-dimensional models have been widely used in the context of two-dimensional gravity ($e.g.$ see \cite{1,2,3,4} and references therein) and string theory. From the 2d-gravity point of view, higher-dimensional gravity models reduce, upon dimensional reduction, to 2d-gravity \cite{1,2,3}. From the string theory point of view, (1+1)-dimensional actions are fundamental tools of the theory. Moreover, 2d-gravity and 2d-string theory are closely related to each other. The known sigma models for the string, in the presence of the dilaton field $\Phi (X)$, contain the two-dimensional scalar curvature $R(h_{ab})$, \begin{eqnarray} S_{\Phi} = \frac{1}{4\pi}\int d^2 \sigma \sqrt{h}R \Phi (X). \end{eqnarray} In two dimensions the combination $\sqrt{h}R$ is a total derivative. Thus, in the absence of the dilaton field, this action is a topological invariant that gives no dynamics to the worldsheet metric $h_{ab}$. In fact, in the action (1), the dilaton is not the only choice. For example, replacing the dilaton field with the scalar curvature $R$ leads to $R^2$-gravity \cite{1,4,5}. In that case the Polyakov action is replaced by a special combination of the worldsheet fields, which includes an overall factor $R^{-1}$. Removing the dilaton and replacing it with other quantities motivates us to study a class of two-dimensional actions. They are useful in the context of non-critical strings with curved worldsheets and two-dimensional gravity. Instead of the dilaton field, we introduce some combinations of $h_{ab}$, $R$ and the induced metric on the worldsheet, $i.e.$ $\gamma_{ab}$, which give dynamics to $h_{ab}$. These non-linear combinations can contain an arbitrary function $f(R)$ of the scalar curvature $R$. We observe that these dynamics lead to the constraint equation for $h_{ab}$ extracted from the Polyakov action. For flat spacetime, these models have the Poincar\'e symmetry.
In addition, they are reparametrization invariant. However, for any function $f(R)$, they do not have the Weyl symmetry. Therefore, the string worldsheet is at most conformally flat. By introducing an extra scalar field into these actions, they also acquire the Weyl symmetry. Note that a Weyl non-invariant string theory has a noncritical dimension, $e.g.$ see \cite{6}. This paper is organized as follows. In section 2, we introduce a new action for the string, for which the corresponding worldsheet is always curved. In section 3, the Poincar\'e symmetry of this string model will be studied. In section 4, the generalized form of the above action will be introduced and analyzed. \section{Curved worldsheet in the curved spacetime} We consider the following action for a string propagating in a curved spacetime, \begin{eqnarray} S= -T \int d^2 \sigma \sqrt{h}R \bigg{(}R - \frac{1}{2\pi \alpha'} h^{ab}\gamma_{ab}\bigg{)}, \end{eqnarray} where $h=-\det h_{ab}$, and $T$ is a dimensionless constant. In addition, $R$ denotes the two-dimensional scalar curvature constructed from $h_{ab}$. The string coordinates are $\{X^\mu (\sigma , \tau)\}$. The induced metric on the worldsheet, $i.e.$ $\gamma_{ab}$, is given by \begin{eqnarray} \gamma_{ab}= g_{\mu \nu}(X)\partial_a X^\mu (\sigma , \tau) \partial_b X^\nu (\sigma , \tau) , \end{eqnarray} where $g_{\mu \nu}(X)$ is the spacetime metric. In two dimensions, the symmetries of the curvature tensor imply the identity \begin{eqnarray} R_{ab} - \frac{1}{2}h_{ab}R =0. \end{eqnarray} Therefore, the variation of the action (2) leads to the following equation of motion for $h^{ab}$, \begin{eqnarray} R_{ab} - \frac{1}{2\pi \alpha'} \gamma_{ab}=0. \end{eqnarray} This implies that the energy-momentum tensor extracted from the action (2) vanishes. Contracting this equation with $h^{ab}$ gives $R = \frac{1}{2\pi \alpha'} h^{ab}\gamma_{ab}$.
Inserting this relation and equation (5) into (4) leads to \begin{eqnarray} T_{ab}^{({\rm Polyakov})} \equiv \gamma_{ab} - \frac{1}{2}h_{ab}(h^{a'b'}\gamma_{a'b'})=0. \end{eqnarray} This is the constraint equation extracted from the Polyakov action. Note that the energy-momentum tensor of the action (2) is proportional to the left-hand side of equation (5). Thus, it is different from (6). The equation of motion of the string coordinates $X^\mu (\sigma , \tau)$ is \begin{eqnarray} \partial_a (\sqrt{h}R h^{ab}\partial_b X^\mu) +\sqrt{h}R h^{ab} \Gamma^\mu_{\nu \lambda}\partial_a X^\nu\partial_b X^\lambda =0. \end{eqnarray} The presence of the scalar curvature $R$ distinguishes this equation from its analogue extracted from the Polyakov action. Now consider those solutions of the equations of motion (5) and (7) which admit a constant scalar curvature $R$. For these solutions, equation (7) reduces to the equation of motion of the string coordinates extracted from the Polyakov action in a curved background. However, for general solutions the scalar curvature $R$ depends on the worldsheet coordinates $\sigma$ and $\tau$, and hence this coincidence does not occur. \subsection{The model in the conformal gauge} The action (2) is invariant under reparametrizations of $\sigma$ and $\tau$. That is, in two dimensions the general coordinate transformations $\sigma \rightarrow \sigma'(\sigma , \tau)$ and $\tau \rightarrow \tau'(\sigma , \tau)$ depend on two free functions, namely the new coordinates $\sigma'$ and $\tau'$. By means of such transformations, any two of the three independent components of $h_{ab}$ can be eliminated. A standard choice is a parametrization of the worldsheet such that \begin{eqnarray} h_{ab}=e^{\phi (\sigma , \tau)} \eta_{ab} , \end{eqnarray} where $\eta_{ab}=diag(-1 , 1)$, and $e^{\phi (\sigma , \tau)}$ is an unknown conformal factor. The choice (8) is called the conformal gauge.
Since the action (2) does not have the Weyl symmetry (a local rescaling of the worldsheet metric $h_{ab}$), we cannot choose the gauge $h_{ab}=\eta_{ab}$. The scalar curvature corresponding to the metric (8) is \begin{eqnarray} R = -e^{-\phi}\partial^2 \phi , \end{eqnarray} where $\partial^2 = \eta^{ab}\partial_a \partial_b$. Thus, the action (2) reduces to \begin{eqnarray} S' = -T \int d^2 \sigma e^{-\phi} \partial^2 \phi \bigg{(} \partial^2 \phi + \frac{1}{2\pi \alpha'}\eta^{ab}\gamma_{ab}\bigg{)}. \end{eqnarray} According to the gauge (8), this action describes a conformally flat worldsheet. \section{Poincar\'e symmetry of the model} In this section we consider flat Minkowski space, $i.e.$ $g_{\mu\nu}(X)=\eta_{\mu\nu}$. Therefore, the equations of motion simplify to \begin{eqnarray} R_{ab}-\frac{1}{2\pi \alpha'}\eta_{\mu\nu} \partial_a X^\mu \partial_b X^\nu =0, \end{eqnarray} \begin{eqnarray} \partial_a (\sqrt{h}R h^{ab}\partial_b X^\mu)=0. \end{eqnarray} The Poincar\'e symmetry reflects the symmetry of the background in which the string propagates. It is described by the transformations \begin{eqnarray} &~& \delta X^\mu = a^\mu_{\;\;\nu} X^\nu + b^\mu, \nonumber\\ &~& \delta h^{ab} =0, \end{eqnarray} where $a^\mu_{\;\;\nu}$ and $b^\mu$ are independent of the worldsheet coordinates $\sigma$ and $\tau$, and $a_{\mu\nu}= \eta_{\mu \lambda}a^\lambda_{\;\;\nu}$ is antisymmetric. Thus, from the worldsheet point of view, these transformations are global symmetries. The action (2) is invariant under these transformations.
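The invariance can be verified in one line: $\delta h^{ab}=0$, so the scalar curvature $R$ is unchanged, and since $b^\mu$ is constant, the variation of the induced metric is

```latex
\delta\gamma_{ab}
 = \eta_{\mu\nu}\big(\partial_a \delta X^{\mu}\,\partial_b X^{\nu}
                    +\partial_a X^{\mu}\,\partial_b \delta X^{\nu}\big)
 = a_{\mu\nu}\big(\partial_a X^{\nu}\,\partial_b X^{\mu}
                 +\partial_a X^{\mu}\,\partial_b X^{\nu}\big)=0,
```

because $a_{\mu\nu}$ is antisymmetric while the bracket is symmetric under $\mu\leftrightarrow\nu$. Hence every ingredient of the action (2) is unchanged.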
\subsection{The conserved currents} The Poincar\'e invariance of the action (2) is associated with the following Noether currents, \begin{eqnarray} &~& {\cal{J}}^{\mu\nu a} = \frac{T}{2\pi \alpha'}\sqrt{h}R h^{ab} (X^\mu \partial_b X^\nu-X^\nu \partial_b X^\mu), \nonumber\\ &~& {\cal{P}}^{\mu a} =\frac{T}{2\pi \alpha'}\sqrt{h}R h^{ab}\partial_b X^\mu , \end{eqnarray} where the current ${\cal{P}}^{\mu a}$ corresponds to translation invariance and ${\cal{J}}^{\mu\nu a}$ is the current associated with the Lorentz symmetry. According to the equation of motion (12), these currents are conserved, \begin{eqnarray} &~& \partial_a {\cal{J}}^{\mu\nu a} =0, \nonumber\\ &~& \partial_a {\cal{P}}^{\mu a} =0. \end{eqnarray} \subsection{The covariantly conserved currents} It is possible to construct from (14) two other currents which are covariantly conserved. For this purpose, there is the useful formula \begin{eqnarray} \nabla_a K^a = \frac{1}{\sqrt{h}}\partial_a (\sqrt{h}K^a), \end{eqnarray} where $K^a$ is a worldsheet vector. Therefore, we define the currents $J^{\mu \nu a}$ and $P^{\mu a}$ as follows, \begin{eqnarray} &~& J^{\mu\nu a}=\frac{1}{\sqrt{h}}{\cal{J}}^{\mu\nu a}, \nonumber\\ &~& P^{\mu a}=\frac{1}{\sqrt{h}}{\cal{P}}^{\mu a}. \end{eqnarray} According to the equations (15) and (16), these are covariantly conserved currents, $i.e.$, \begin{eqnarray} \nabla_a J^{\mu\nu a}= \nabla_a P^{\mu a}=0. \end{eqnarray} The currents (17) can also be written as \begin{eqnarray} &~& J^{\mu\nu}_a = \frac{T}{2\pi \alpha'}R (X^\mu \partial_a X^\nu-X^\nu \partial_a X^\mu), \nonumber\\ &~& P^{\mu}_a =\frac{T}{2\pi \alpha'}R\partial_a X^\mu . \end{eqnarray} Since $\nabla_a h_{bc}=0$, the conservation laws (18) also imply the covariant conservation of the currents (19).
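The conservation of the Lorentz current in (15) follows from the equation of motion (12) together with the symmetry of $h^{ab}$:

```latex
\partial_a {\cal{J}}^{\mu\nu a}
 = \frac{T}{2\pi \alpha'}\Big[
     \partial_a\big(\sqrt{h}R h^{ab}\partial_b X^{\nu}\big)X^{\mu}
   - \partial_a\big(\sqrt{h}R h^{ab}\partial_b X^{\mu}\big)X^{\nu}
   + \sqrt{h}R h^{ab}\big(\partial_a X^{\mu}\partial_b X^{\nu}
                         -\partial_a X^{\nu}\partial_b X^{\mu}\big)\Big];
```

the first two terms vanish by (12), and the last bracket, being antisymmetric in $\mu\leftrightarrow\nu$ but contracted with the symmetric $h^{ab}$, vanishes as well. The conservation of ${\cal{P}}^{\mu a}$ is just the equation of motion (12) itself.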
\section{Generalization of the model} The generalized form of the action (2) is \begin{eqnarray} I= -T \int d^2 \sigma \sqrt{h}R \bigg{(}f(R) - \frac{1}{2\pi \alpha'} h^{ab}\gamma_{ab}\bigg{)}, \end{eqnarray} where $f(R)$ is an arbitrary differentiable function of the scalar curvature $R$. The set $\{X^\mu(\sigma , \tau)\}$ describes a string worldsheet in the spacetime. These string coordinates appear in the induced metric $\gamma_{ab}$ through the equation (3). Thus, (20) is a model for the string action. The equation of motion of $X^\mu$ is as before, $i.e.$ (7). Setting the variation of this action with respect to the worldsheet metric $h^{ab}$ to zero gives the equation of motion of $h^{ab}$, \begin{eqnarray} R_{ab}\frac{df(R)}{dR} - \frac{1}{2\pi \alpha'} \gamma_{ab}=0. \end{eqnarray} The trace of this equation is \begin{eqnarray} R\frac{df(R)}{dR} - \frac{1}{2\pi \alpha'}h^{ab} \gamma_{ab}=0. \end{eqnarray} Combining the equations (4), (21) and (22) again leads to the equation (6). As an example, consider the function $f(R) = \alpha \ln R + \beta$. In this case $df/dR = \alpha/R$, so that, using the identity (4), the field equation (21) reduces to $\frac{\alpha}{2}h_{ab} = \frac{1}{2\pi \alpha'}\gamma_{ab}$; that is, the intrinsic metric $h_{ab}$ becomes proportional to the induced metric $\gamma_{ab}$, namely $h_{ab} = \frac{1}{\pi \alpha \alpha'}\gamma_{ab}$. Since the Poincar\'e transformations contain $\delta h^{ab}=0$, the generalized action (20) for the flat background metric $g_{\mu\nu}=\eta_{\mu\nu}$ also has the Poincar\'e invariance. This leads to the previous conserved currents, $i.e.$ (14) and (19). \subsection{Weyl invariance in the presence of a new scalar field} The action (20) is invariant under the reparametrization transformations. The Weyl transformation is also defined by \begin{eqnarray} h_{ab} \longrightarrow h'_{ab}=e^{\rho (\sigma , \tau)}h_{ab}.
\end{eqnarray} Thus, the scalar curvature transforms as \begin{eqnarray} R \longrightarrow R' = e^{-\rho}(R -\nabla^2 \rho), \end{eqnarray} where $\nabla^2 \rho = \frac{1}{\sqrt{h}}\partial_a (\sqrt{h}h^{ab}\partial_b\rho)$. The equations (23) and (24) imply that the action (20) is not Weyl invariant for any function $f(R)$. Introducing (23) and (24) into the action (20) gives a new action which contains the field $\rho (\sigma , \tau)$, \begin{eqnarray} I' = -T \int d^2 \sigma \sqrt{h}(R- \nabla^2 \rho) \bigg{(}f[e^{-\rho}(R- \nabla^2 \rho)] -\frac{1}{2\pi \alpha'}e^{-\rho}h^{ab} \gamma_{ab} \bigg{)}. \end{eqnarray} The origin of this action can be ignored; in other words, it can be regarded as another string model. Under the Weyl transformations \begin{eqnarray} &~& h_{ab} \longrightarrow e^{u (\sigma , \tau)}h_{ab}, \nonumber\\ &~& \rho \longrightarrow \rho - u , \end{eqnarray} the action $I'$ is symmetric for any function $f$. Note that, according to the definition of $\nabla^2$, it transforms as $\nabla^2 \rightarrow e^{-u}\nabla^2$. \section{Conclusions} We considered some string actions which give dynamics to the worldsheet metric $h_{ab}$. Due to the absence of Weyl invariance, these models admit at most a conformally flat (but not flat) worldsheet. We observed that the constraint equation on the metric, extracted from the Polyakov action, is a special result of the field equations of our string models. Recovering this constraint equation allows us to introduce an arbitrary function of the scalar curvature into the action. For the case $f(R) = \alpha \ln R + \beta$, the metric $h_{ab}$ becomes proportional to the induced metric of the worldsheet. By introducing a new degree of freedom we obtained a string action which is Weyl invariant for any function $f$. Our string models with arbitrary $f(R)$ possess Poincar\'e symmetry in a flat background. The associated conserved currents are proportional to the scalar curvature $R$.
We also constructed the covariantly conserved currents from the Poincar\'e currents.
\section{Introduction} Dielectric barrier discharges (DBDs) are plasma discharges incorporating at least one layer of dielectric material separating the two electrodes. The dielectric barrier limits the charge transfer and thus the current flow, typically producing a non thermal plasma at atmospheric conditions. This non thermal nature allows for the efficient generation of reactive species, thereby providing multiple possibilities in biomedical, surface, and industrial applications \cite{Brandenburg2017,HHKim2004}. DBDs are classifiable into two main categorical descriptors: volumetric and surface DBDs. Volume dielectric barrier discharges (VDBDs) are characterized by having both a gas gap and a dielectric barrier present between the two electrodes, producing either homogeneous or filamentary plasmas depending on the conditions \cite{Kogelschatz2010}. Surface dielectric barrier discharges (SDBDs), on the other hand, have only the dielectric layer directly separating the two electrodes; a plasma is thereby only able to ignite along the surface of the dielectric. Due to the possibility of having a thin structure, SDBDs may have particularly low flow resistance and are therefore commonly researched for gas treatment or flow control purposes \cite{Brandenburg2017,Moreau2007,Mueller2007,Corke2010,HHKim2004}. SDBDs have the capability of being built in many unique geometrical configurations ranging in symmetry, providing either a single axis or multiple axes for plasma propagation. They may also allow for either a single phase (anodic or cathodic) plasma or a dual phase ignition process. Throughout the 1990s SDBDs have been well investigated as potential actuators for gas flow control \cite{Brandenburg2017,HHKim2004,Moreau2007,Corke2010}.
For such purposes an asymmetric geometry, where one electrode is offset from the opposite electrode and possibly completely submerged by the dielectric, is typically used \cite{Corke2010,Akishev2012,Audier2014,Biganzoli2012,Debien2012,GAO2017,Peng2019,Xiahua2016,Soloviev2017,Starikovskii2009,Unfer2010,Che2012,Hu2018,Shao2013,Soloviev2018,Opaits2008,Sato2019}. Much effort has been put into controlling the plasma behaviors, such as densities and surface charge deposition, and their corresponding aerodynamic effects from said SDBD configurations \cite{Opaits2008,Corke2010,Opaits2012,Audier2014,Sato2019}. It has also been shown that AC and pulsed waveforms can significantly modulate the plasma profiles (at positive and negative voltage phases) \cite{Akishev2012,Audier2014,Biganzoli2012,Che2012,Debien2012,Hu2018,Soloviev2017,Soloviev2018,Starikovskii2009,Unfer2010}. In recent years, SDBDs have undergone extensive investigation for gas purification for industrial and environmental protection applications \cite{Brandenburg2017,Mueller2007,HHKim2004}. Absolutely calibrated two wavelength emission spectroscopy has been used in order to characterize a symmetric SDBD under tailored voltage waveforms \cite{Offerhaus2017,Offerhaus2018,Offerhaus2019}.
The waveform under experimental investigation is a damped sine wave with a period of several $\mu$s and an adjustable peak to peak voltage, pulsed in the kHz regime. Additional emission spectroscopy, absorption spectroscopy, and Fourier transform infrared (FTIR) spectroscopy methods have also been used to measure various species densities and chemical modifications of cystine. Furthermore, flame ionization detectors, gas chromatography-mass spectroscopy, and ion energy analyzer quadrupole mass spectroscopy are all being used to investigate and characterize the conversion of volatile organic compounds into non-harmful and non-toxic compounds \cite{Schuecke2020}. Moreover, the inclusion of pre gas heating and catalyst coatings is being investigated for higher conversion efficiencies \cite{Schuecke2020,Peters2021}. In many applications, like chemical processing and gas purification, the interaction between a plasma and a catalyst yields synergistic effects resulting in enhanced performances \cite{HHKim2004,HHKim1999}. As such, various structures of catalytic material are often inserted into traditional DBD reactors including, but not limited to: spheres, honeycombs, 3D fibre deposition structures and coatings of the dielectric barrier itself \cite{Zhang2018,HHKim1999}. The synergistic effect is obtained via two primary methods. Firstly, the altered geometry along with tailored voltage waveforms influences the discharge characteristics \cite{Brandenburg2017,HHKim2004,Zhang2018,HHKim2016,Zhang2015}. Secondly, the plasma distribution determines the effective contact area of the catalyst, thereby altering the morphology and work function of the catalyst \cite{Neyts2014,Zhang2017}. This places great importance on generating a controllable plasma density and spatial distribution \cite{Brandenburg2017,HHKim2004,Zhang2018,HHKim2016,Shang2019}.
The above studies, although very interesting, were mostly based on experiments of submerged \DIFdelbegin \DIFdel{DBDs }\DIFdelend \DIFaddbegin \DIFadd{SDBDs }\DIFaddend where the plasma discharge is confined to one side of the dielectric plate providing investigations only into a single phase ignition process \cite{Akishev2012,Audier2014,Biganzoli2012,Corke2010,Debien2012,GAO2017,Moreau2007,Opaits2012,Peng2019,Xiahua2016,Shang2019,Shang2019,Soloviev2017,Starikovskii2009}. That is to say that only either an anodic or cathodic phase plasma is present, but never both simultaneously. This single phase nature limits the effective volume and surface area of the plasma which \DIFdelbegin \DIFdel{may greatly reduce the performance of the application. Additionally}\DIFdelend \DIFaddbegin \DIFadd{defines the effective catalytic surface area exposed to the plasma species in plasma enhanced catalysis. As such, the catalyst performance is potentially limited to a great extent in a single phase SDBD. In gas treatment conditions, an SDBD electrode system is very likely to be placed along the central plane parallel to gas flow in order to minimize flow resistance and increase the treatment volume. Under these conditions, it is very clear that utilizing an SDBD electrode system which ignites on both sides of the dielectric plate will improve the treatment volume, and as such efficiency of the process. 
} \DIFadd{Unfortunately}\DIFaddend , most theoretical investigations utilizing circuit models \cite{Pipa2012,Peeters2014,Pipa2020_PowerDBDEQC}\DIFaddbegin \DIFadd{, }\DIFaddend global models, molecular dynamic models \cite{Neyts2014}, fluid models \cite{Che2012,Peng2019,Soloviev2018}, and even particle-in-cell/Monte Carlo collision (PIC/MCC) models \cite{Zhang2015,Zhang2017,Zhang2018} of \DIFdelbegin \DIFdel{SDBDs }\DIFdelend \DIFaddbegin \DIFadd{(S)DBDs }\DIFaddend and packed bed reactors provide limited insights into the underlying mechanisms of the plasma propagation \cite{Mujahid2018,Mujahid2020,mujahid2020Propagation}. \DIFaddbegin \DIFadd{No contributions on the theoretical investigation of a dual phase symmetric SDBD could be found by the authors, pointing to a significant lacking of knowledge of such configurations is present. }\DIFaddend The inherent mechanisms behind the evolution of the plasma discharge in asymmetric and even more so symmetric SDBDs is still not fully understood. \DIFdelbegin \DIFdel{This demands a more detailed simulation for the dynamic behavior of the plasma during the ignition process. Lastly, no contributions on the theoretical investigation of a dual phase SDBD could be found by the authors, meaning a significant lacking of knowledge of such configurations is present.}\DIFdelend \DIFaddbegin \DIFadd{It is not yet clear how a simultaneous positive and negative surface streamer (above and below the dielectric) can interact with each other, and to what extent, if any, do they enhance one another. It is not clear how the streamers respond to tailored voltage waveforms, nor what the optimized conditions are for generating large treatment volumes. It is unknown to what extent the surface streamers interact with an active surface such as a catalyst. These are crucial pieces of information to ensure good plasma enhanced catalysis performance. 
Additionally, many experiments, such as optical emission spectroscopy, still have open questions as to whether the results are more representative of the streamer bulk or the highly dynamic streamer head. These concerns demand a more detailed simulation for the dynamic behavior of the positive and negative streamers in a dual phase symmetric SDBD during the ignition process.}\DIFaddend \DIFaddbegin \begin{figure}[t] \centering \includegraphics[width=0.4425\textwidth]{NegativStreamer_Initial.eps} \caption{\DIFaddFL{Schematic detailing the negative streamer formed via an anode oriented electron avalanche.}} \label{fig:NegativeStreamer} \end{figure} \DIFaddbegin \begin{figure}[t] \centering \includegraphics[width=0.4425\textwidth]{PositivStreamer_Initial.eps} \caption{\DIFaddFL{Schematic detailing the positive streamer, which forms via a cathode oriented propagation front.}} \label{fig:PositiveStreamer} \end{figure} \DIFaddend Therefore, in the present work we computationally investigate the plasma propagation of a symmetric, dual phase SDBD, hereby referred to as the twin SDBD\DIFaddbegin \DIFadd{, under various voltage waveform conditions}\DIFaddend . The particular geometry of the twin SDBD ensures that both an anodic and cathodic phase plasma are simultaneously ignited, separated by the dielectric barrier, and are physically symmetric about the metallic electrodes. The symmetric geometry does not only give rise to a higher plasma surface coverage, but also enables a direct comparison between the positive streamers on the anode side versus the negative streamers on the cathode side as well as the interaction between the two. The \DIFdelbegin \DIFdel{theoretical }\DIFdelend \DIFaddbegin \DIFadd{numerical }\DIFaddend investigations are carried out by means of a 2D PIC/MCC simulation software known as \DIFdelbegin \DIFdel{VSIM \mbox \cite{NIETER2004}}\hspace{0pt . 
\DIFaddbegin \DIFadd{VSim, a multi-physics simulation tool, which combines the Finite-Difference Time-Domain (FDTD), PIC, and Charged Fluid (Finite Volume) methods for simulating electrical gas discharges. \mbox \cite{NIETER2004}}\hspace{0pt . The insights provided by this work are not only applicable to the twin SDBD and similar geometries, but also to other SDBD geometries, asymmetric ones included via a deeper understanding of the streamer propagation and form.}\DIFaddend \DIFaddbegin \DIFadd{To provide a basis of understanding the streamer dynamics in a twin SDBD, that will be revealed in this work, we briefly recall the fundamentals of positive and negative streamer dynamics in a DBD. A negative streamer, see \mbox \cref{fig:NegativeStreamer}}\hspace{0pt , ignites through an anode oriented electron avalanche: electrons, which are accelerated against the direction of the electric field, collide with the background gas. Ionization takes place causing an exponential growth of electrons and ions, creating a quasineutral bulk plasma that propagates from the cathode to the anode. A positive streamer, see \mbox \cref{fig:PositiveStreamer}}\hspace{0pt , is also created via electron collisions, but is somewhat more complex. The cathode oriented positively charged streamer head attracts the electrons which cause ionization in front of the streamer head, resulting in an ionization wave. This ionization wave propagates from the anode to the cathode, leaving behind a quasineutral bulk plasma. Branches may form from the streamer head creating additional ionization waves; branching is more readily observed in gas mixtures that are susceptible to self induced photo ionization. Under short timescales, a few nanoseconds and less, a feature very similar to a low pressure sheath forms. The positive streamer head floats above the cathode due to an absence of available electrons, thus creating a region with a very strong electric field. 
Given an appropriate amount of time, the positive ions do reach the cathode due to their own velocities. At the dielectric(s), any charges that reach the surface adhere to it and charge it. These surface charges repel incoming like charges along the surface, causing both positive and negative streamers to spread out. Due to the lightweight electrons, this effect is more prominent in negative streamers; however, the floating nature of positive streamers can also facilitate a similar effect. For a deeper understanding we refer the reader to Nijdam \textit{et. al.} and to Zhang \textit{et.al.} \mbox \cite{Nijdam2020,Zhang2021} }\hspace{0pt where the dynamics of positive and negative streamers of a VDBD via PIC/MCC simulations are detailed.}\DIFaddend \DIFaddbegin \DIFadd{This paper is structured as follows: First in \mbox \cref{Model} }\hspace{0pt the computational model and geometry are described. Following this, in \mbox \cref{Results} }\hspace{0pt the results of the various simulations are presented: the DC results in sub-\mbox \cref{SingleStreamers,DualStreamers}}\hspace{0pt , and the AC results in sub-\mbox \cref{ACStreamers}}\hspace{0pt . Finally, in \mbox \cref{Conclusion} }\hspace{0pt our closing remarks and conclusions are discussed. }\DIFaddend \DIFdel{This paper is structured as follows: First in sub-\mbox \cref{Ignition} }\hspace{0pt a detailed summary of how an atmospheric pressure plasma ignites in a parallel plate electrode with a single dielectric layer on one electrode is presented; this provides the basis of understanding from a simplified perspective. Following in \mbox \cref{Model} }\hspace{0pt the physical system and simulated model are described. In sub-\mbox \cref{Electrode Geometry} }\hspace{0pt typical experimental conditions of the device under investigation are presented. 
Sequentially in sub-\mbox \cref{Sim Model,Sim Geometry,Waveform} }\hspace{0pt the simulation method, approximated geometry of the physical system, and investigated voltage waveforms are explained. Following this, in \mbox \cref{Results} }\hspace{0pt the results of the various simulations are presented. Initially in sub-\mbox \cref{SingleStreamers}}\hspace{0pt , a simulated DC voltage is used to investigate the positive and negative streamers each ignited individually. In sub-\mbox \cref{DualStreamers} }\hspace{0pt the same DC voltage is used to simulate the ignition of both streamers simultaneously. Lastly, in sub-\mbox \cref{ACStreamers} }\hspace{0pt an AC voltage profile is used to simulate and investigate the phase switching mechanisms of the discharge. Finally in \mbox \cref{Conclusion} }\hspace{0pt our closing remarks and conclusions are discussed. The insights provided by this work are not only applicable to the twin SDBD and similar geometries, but also to other SDBD geometries, asymmetric ones included via a deeper understanding of the streamer propagation and form.}\DIFdelend \DIFdelbegin \subsection{\DIFdel{Parallel plate atmospheric pressure DBD ignition}} \addtocounter{subsection}{-1 \DIFdel{For simplicity of explanation, in order to describe the ignition mechanics of an atmospheric pressure DBD in air, we will consider a simplified parallel plate electrode setup, with one electrode covered by a dielectric barrier. Typical VDBD setups tend to have both electrodes covered by a dielectric; however, many have only one covered. Furthermore, nearly all SDBDsetups have only one electrode covered by the dielectric. It should be noted, only the initial ignition and propagation mechanics are explained, and the self extinguishing effect that DBDs exhibit is not explained. 
\DIFdel{For the most part, similar features of a single streamer may be found whether the metallic electrode acts as the anode or the cathode, or whether the direction of propagation is oriented towards the cathode or the anode. That is to say, that a quasi neutral bulk plasma known as the streamer, and a spatial region at the front of the streamer forms, known as the streamer head, which is not quasi neutral. The direction of the propagation dictates which charge the streamer head has, be it positive or negative. Additionally, the asymmetry of the dielectric barrier produces a slight difference in the overall behavior. This rises from the natural ability of the dielectric barrier being able to hold onto charges for long periods of time, on the multiple seconds timescale \mbox \cite{Brandenburg2013}}\hspace{0pt . When the dielectric barrier is uncharged, this effect only changes the ability of the streamer to reach high densities, }\textit{\DIFdel{i.e.}} \DIFdel{prevents an arc from forming. When the dielectric barrier is charged however, for example from a previous discharge, this additionally produces local effects which promote streamer ignition at the locations of the charges. The lateral components of the electric fields of multiple streamers will cause themselves to organize into a minimal energy state, analogous to efficient packaging problems of a given geometrical shape. Often times, this results in a self patterning of the filaments. \DIFdelbegin \subsubsection{\DIFdel{Positive streamer}} \addtocounter{subsubsection}{-1 \DIFdel{Consider a positive voltage being applied to the uncovered electrode (anode), and the covered electrode being grounded, as shown in \mbox \cref{fig:PositiveStreamer}}\hspace{0pt (a ). Under these conditions, a straight and parallel electric field is formed oriented from the anode to the dielectric surface (cathode). Now consider a locally distributed collection of free electrons located within the gas volume near the anode. 
These electrons are accelerated in the opposite direction of the electric field, towards the anode. As the electrons migrate towards the anode, they rapidly gain speed and thus kinetic energy. Along the way, some collisions take place with either other electrons or the air molecules causing an energy exchange. If enough energy is imparted upon an N$_2$ or O$_2$ molecule via these collisions with the kinetically excited electrons , or a multitude of collisions, then ionization events may occur. These ionization events typically create additional electrons which are also accelerated towards the anode, thus the number of electrons increases in an exponential manner, which is the so called avalanche effect. These events also typically create positive ions which are accelerated towards the cathode . However, due to the extreme mass difference, over 1000 times different, the positive ions are approximately at rest when compared to the electrons. \DIFdel{When the electrons reach the metallic anode, they are freely absorbed. However, the positive ions are left behind resulting in a net positive charge near the anode surface. This net positive charge results in the potential that is the same polarity as that which is applied at the anode, thus a virtual anode is formed. However, the distance between the virtual anode and the cathode is smaller; therefore, a strong electric field is formed. Constantly, cosmic radiation and other random events are creating new free electrons within the gas volume. Self induced photo ionization also creates new electrons in a stochastic manner. Similar to the initial electrons, these new electrons are attracted towards the positive ions which function as the virtual anode. More ionization events occur as the electrons approach the virtual anode, creating more positive ions even further away from the physical electrode. 
The electrons behind this new net positive charge quickly orient themselves with the physical anode, and the virtual anode, and the remaining positive ions into a quasi neutral bulk plasma, known as the streamer. \DIFdel{The continuation of this process leads to a growing high density plasma bulk with a positively charged streamer head that propagates from the anode towards the cathode. However, due to the locality of the initial electrons, the initial produced positive streamer headis also local to the anode. Thus, the streamer head is initially small, but as it propagates towards the cathode it expands in the lateral direction as well. It is important to note, that the ions within the positive streamer head do not themselves significantly move, but it is the formation of new charges and the growing quasi neutral bulk that creates the propagation front. Because background radiation and self induced photo ionization are random events, it is very common to see multiple branches form along the sides of the primary streamer . Each of these branches behaves as positive streamer itself. This branching effect is more prominent in gases that are more susceptible to photo ionization.}\DIFdelend \DIFdelbegin \DIFdel{As the plasma, the positively charged streamer head, and any branches approach the cathode, the space between the streamer head, which is the virtual anode, and the cathode naturally reduces. Even though the electric field strength increases due to the decreased distance, this results in a smaller space for newly created electrons to gain enough energy to cause ionization. Therefore, the positive streamer head eventually stops propagating and "floats " at some distance above the cathode. The floating head is only able to expand further in the lateral direction, where newly created electrons have enough space to gain enough energy to cause new ionization events. 
This floating nature is analogous to a low pressure sheath formation, where there is a spatial region of electron depletion and extremely high electric fields; given enough time the positive charges within the streamer head and bulk plasma would be accelerated towards to the cathode in this sheath like region. If negative surface charges were present due to a previous discharge, the sheath like region would be expected to have a decreased thickness due to the higher potential difference between the streamer head and the charged dielectric surface.}\DIFdelend \DIFdelbegin \DIFdel{Assuming now that the location of the dielectric surface is reversed, meaning that the anode is now covered and the cathode uncovered, as shown in \mbox \cref{fig:PositiveStreamer}}\hspace{0pt (b), much of what has been described still occurs. The electrons are accelerated towards the anode and ionization events begin to take place. A local positively charged streamer head forms. Electrons produced via the background radiation are subsequently attracted towards this streamer head. Thus, the initially small streamer head also expands in the lateral direction as it propagates towards the cathode. Branching of the streamer may also occur. Dissimilar to the previous scenario, the electrons are not freely absorbed by the now covered anode. Instead, electrons are first absorbed by the dielectric surface, such that a negative charge builds up. Additionally, electrons within the bulk streamer that are attracted to the anode are also repelled by the negative surface charges, such that the charging of the dielectric spreads outacross the surface the dielectric in a small localized region. This charging effect reduces the effective electric field within the gas volume, resulting in a lower density streamer. This effect gives rise to the self canceling nature of DBDs. 
\DIFdelbegin \subsubsection{\DIFdel{Negative streamer}} \addtocounter{subsubsection}{-1 \DIFdel{Consider now a Negative voltage being applied to the uncovered electrode (cathode), and the covered electrode being grounded, as shown in \mbox \cref{fig:NegativeStreamer}}\hspace{0pt (a). Under these conditions, a straight and parallel electric field is formed oriented from the dielectric surface (anode) to the cathode. Now consider a locally distributed collection of free electrons located within the gas volume near the cathode. Just like the positive streamer case, electrons are accelerated towards the anode and ionization events begin to take place. The number of electrons begins to exponentially increase. This electron avalanche is continuously accelerated towards the anode, creating more and more electrons and leaving behind a quasi neutral bulk streamer. Due to this, at the front of the plasma streamer there will be a small region primarily consisting of negative charges. This is the negatively charged streamer head. \DIFdel{Dissimilar to the positive streamer, the electrons are not freely absorbed by the anode, which is in this scenario is the dielectric barrier. Instead, the electrons collect on the surface of the dielectric, as previously described. Due to large electron avalanche, the electrons are able to significantly spread across the surface of the dielectric barrier. This is often times referred to as a surface streamer. As the electrons spread across the surface, strong electric fields between the surface charges and the dielectric surface form thereby promoting the propagation of the surface streamer. In this manner, the electric field between the anode and the cathode is significantly reduced, which also leads towards the self canceling nature of DBDs. Additionally, this leads to larger volumes and as such lower plasma densities. 
However, identical to the positive streamer, a positively charged spatial region is able to form near the cathode, as electrons are repelled away from the cathode. This creates a feature very similar to a low pressure sheath. \DIFdel{Alternatively, the cathode could be covered by the dielectric barrier, as shown in \mbox \cref{fig:NegativeStreamer}}\hspace{0pt (b). In this manner, an initial collection of electrons near the cathode behaves near identically as before. However, due to the anode being uncovered, the electrons within the bulk streamer are able to be absorbed by the anode. In this scenario, the uncharged dielectric surface on the cathode has a minimal effect on the streamer, as a positively charged spatial region still forms regardless, }\textit{\DIFdel{i.e.}} \DIFdel{no electrons are absorbed by the dielectric surface. Under previously charged conditions, it should be expected that the floating region of this positively charged spatial region is reduced in thickness due to a stronger potential. \DIFdel{For such a geometrical setup, one uncovered metallic electrode parallel to an electrode covered by a dielectric barrier, the breakdown can be described by a positive or negative streamer . The annotative difference is determined by the location of the initial charges responsible for streamer ignition. The positive streamer forms with the charges being located near the anode, and the negative streamer forms with the charges being near the cathode. The physical differences lie within the propagation method and different densities, as the negative streamers tend to have smaller plasma densities, and the additional surface plasma structure. Negative streamers start at the cathode and propagate via an electron avalanche. Positive streamers start at and are anchored to the anode and propagate via a positively charged streamer head which floats above the cathode. 
When the dielectric barrier is covering the anode, both the anchored electrons from the positive streamer and the electron avalanche from the negative streamer charge the dielectric surface. This surface charging spreads the plasma out along the surface of the dielectric, albeit more prominently for the negative streamer case. Both the positive and negative streamer create a cathode directed positively charged spatial regime and a quasi neutral bulk plasma. When considering an initial uniform distribution of seed electrons, a combination of both a positive and negative streamer effect could be observed.}\DIFdelend \DIFdelbegin \section{\DIFdel{Experimental setup and computational model}} \DIFdelbegin \subsection{\DIFdel{Experiments under investigation}} \DIFdelbegin \DIFdel{The geometry to be simulated is chosen to resemble that of the twin SDBD electrode intended for use in gas treatment applications and was first experimentally presented in \mbox \cite{Offerhaus2017} }\hspace{0pt and subsequently in \mbox \cite{Offerhaus2018,Offerhaus2019,Kogelheide2019,Schuecke2020}}\hspace{0pt . Under experimental operation, the twin SDBD electrode is placed inside of a sealed chamber with synthetic quartz windows for optical observations. The chamber is regulated at $1\,atm$ of pressure with a controllable feed gas mixture. The twin SDBD may be ignited by various tailored high voltage pulsed waveforms. The pulsed nature of the waveforms allow for higher instantaneous plasma powers which are responsible for chemical activity, but lower averaged powers which are responsible for material failure. \DIFdel{The novel electrode configuration utilizes two nickle coated metallic grids printed onto an alpha alumina oxide ($\alpha$-Al$_2$O$_3$) ceramic plate acting as the dielectric barrier. The grid structures are symmetrically printed on both of the normal faces of the ceramic plate where one side is powered by the voltage waveform of choice, while the other is grounded. 
Due to this symmetry, all edges of the metallic traces on both top and bottom are susceptible to surface plasma ignition. Thus, positive and negative streamer are simultaneously ignited, which thereby warrants the name "twin SDBD". These two streamer phases are physically separated by the dielectric barrier; however, under bipolar voltage conditions the phases periodically switch. A computer generated graphical example of the electrode when ignited is shown in \mbox \cref{fig:Electrode}}\hspace{0pt . This unique structure is different from VDBDs or submerged SDBDs which typically only allow for a single phase plasma ignition. Furthermore, this physical and electrical symmetry allows for a larger plasma surface coverage of the dielectric barrier, thereby ensuring a higher treatment efficiency \mbox \cite{Offerhaus2017,Offerhaus2018,Offerhaus2019,Kogelheide2019,Schuecke2020}}\hspace{0pt . Lastly, the structure allows for the simultaneous experimental measurement of various plasma properties of both the positive and negative streamer phases. Most intriguingly, the two opposite phased plasma streamers are able to affect one another during ignition, potentially altering the spatial size and form of the streamers.}\DIFdelend \DIFdel{The nickle coated metallic electrodes of the twin SDBD form a $10\,mm$ square lattice totalling in size to $150\,mm$ x $50\,mm$. The metallic traces are $0.450\,mm$ wide and extend approximately $0.020\,mm$ above the surface of the dielectric. The dielectric thickness is $0.635\,mm$. The shape and size of the lattice were chosen based on the ease of manufacturing as well as the reproducibility of measurements. The metallic trace itself has a cross sectional profile similar to a re-curve bow. 
A cross sectional photo of the electrode profile has been taken with a scanning electron microscope and is shown in \mbox \cref{fig:SEM}}\hspace{0pt .} \addtocounter{section}{-1} \DIFdelend \DIFaddbegin \section{\DIFadd{Computational model}} \DIFaddend \label{Model} \begin{figure}[t] \centering \includegraphics[width=0.4425\textwidth]{SDBD_Elektrode.png} \caption{\DIFdelFL{3D} Computer generated graphic showing the physical structure of the SDBD electrode under consideration. A metallic lattice (\DIFaddFL{dark grey} structure) is printed symmetrically on both the top (visible) and bottom (hidden) faces of the Al$_2$O$_3$ dielectric barrier (\DIFdelFL{white} \DIFaddFL{light grey} material). Due to the strong curvature of the electric field lines when under operation, the plasma (purple structure) ignites along the edges of the metallic lattice.} \label{fig:Electrode} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.4425\textwidth]{NeuElektrode_GitterLinie0001.jpg} \caption{SEM image of electrode cross section. The bulk, homogeneous material is the Al$_2$O$_3$ dielectric. The hump-like structure with larger grains is the metallic electrode trace. \DIFdelFL{A thin nickel coating is also slightly visible along the outside of the trace.}} \label{fig:SEM} \end{figure} \DIFaddbegin \DIFadd{The geometry to be simulated is chosen to resemble that of the twin SDBD electrode intended for use in gas treatment applications and was first experimentally presented in \mbox \cite{Offerhaus2017} }\hspace{0pt and subsequently in \mbox \cite{Offerhaus2018,Offerhaus2019,Kogelheide2019,Schuecke2020}}\hspace{0pt . The authors refer the reader to these references for a detailed description of the twin SDBD system in question. It is important to reiterate that this device consists of a dielectric plate, with metallic grids placed on the surface of the dielectric on both sides.
A computer rendered sketch of the system can be seen in \mbox \cref{fig:Electrode}}\hspace{0pt . These grids serve as electrodes. The system is built with both a geometric and electrical symmetry, such that both a positive and negative streamer are simultaneously ignited on either side of the dielectric under any given sufficiently high voltage conditions, which thereby warrants the name "twin SDBD". The metallic traces of the electrode system have been imaged with a scanning electron microscope for a more accurate depiction of the electrodes within the simulations. An example image of the cross sectional view of the metallic traces can be seen in \mbox \cref{fig:SEM}}\hspace{0pt , which shows the curved nature of the metallic traces located on the dielectric, which is included in the simulation.} \DIFaddend \subsection{Particle in Cell/Monte Carlo Collision model} A 2D PIC/MCC model is used to study the plasma propagation of the twin SDBD based on the \DIFdelbegin \DIFdel{VSIM }\DIFdelend \DIFaddbegin \DIFadd{VSim }\DIFaddend simulation software \cite{NIETER2004}. \DIFdelbegin \DIFdel{VSIM }\DIFdelend \DIFaddbegin \DIFadd{VSim }\DIFaddend is widely used and has been validated \cite{NIETER2004,Zhang2015,Zhang2017}. \DIFaddbegin \DIFadd{As these investigations took place under conditions similar to those presented here (atmospheric pressure DBDs, nanosecond timescales and micrometer length scales), we operate under the assumption that our model is also valid. Additionally, the usage of PIC/MCC simulations to investigate the COST-Jet at atmospheric pressure yields realistic results that agree well with experiments \mbox \cite{Bischoff2018,Korolov2019,Korolov2020}}\hspace{0pt , proving that PIC/MCC models can indeed be used under atmospheric conditions.
}\DIFaddend The PIC/MCC simulations performed in \DIFdelbegin \DIFdel{VSIM }\DIFdelend \DIFaddbegin \DIFadd{VSim }\DIFaddend are based on an explicit solver and \DIFdel{an electrostatic method} \DIFadd{the electrostatic approximation of Maxwell's equations}, which were described in detail in \cite{Birdsall1991}. The PIC/MCC model has the advantage of accounting for the detailed kinetic behavior of charged particles which \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{may be }\DIFaddend important for the evolution of \DIFdelbegin \DIFdel{plasma streamers }\DIFdelend \DIFaddbegin \DIFadd{electron avalanches and branching mechanisms, and therefore, the plasma streamer profiles}\DIFaddend . Air at atmospheric pressure is used as the discharge gas, with a constant density of background molecules, $80\,\%$ N$_2$ and $20\,\%$ O$_2$, at 300$\,$K. Free electrons, N$_2^+$, O$_2^+$ and O$_2^-$ ions are traced throughout the simulation, which are represented as super-particles, i.e. one super-particle corresponds to a certain number of real particles defined by their numerical weighting, initially starting at $20\cdot10^3$ real particles per super-particle \cite{Birdsall1991}. \DIFdelbegin \DIFdel{Elastic, excitation, ionization, and attachment collisions of electrons with O$_2$ and N$_2$ gas molecules make up the considered reaction mechanisms as explained in more detail by \mbox \cite{Zhang2017}}\hspace{0pt . The corresponding cross sections and threshold energies are adopted from the LXCat database and literature \mbox \cite{LiebermannAndLichtenberg,Furman2002,A_V_Phelps1999,PANCHESHNYI2012,LXCATdatabase}}\hspace{0pt . At the surface of the dielectric barrier, only electron absorption is considered, }\textit{\DIFdel{i.e.}} \DIFdel{no electron reflection or surface electron emission is considered.}\DIFdelend \begin{figure}[t] \centering \includegraphics[width=0.4425\textwidth]{MCC_PIC_Flowdens.eps} \caption{Logic flow diagram of the PIC/MCC algorithm.
One complete loop of the flow diagram represents one time step of the PIC/MCC code. During each time step of the simulation, all sub-algorithms are performed. Particles are pushed, merged, collided, and generated; the densities are then determined and used to compute the electric forces.} \label{fig:ModelFlow} \end{figure} In order to numerically initiate the plasma discharge, a uniform distribution of seed electrons is placed within the free space of the simulated geometry. These seed electron super-particles have a density corresponding to $1\cdot10^{15}\,$m$^{-3}$. Realistically, seed electrons are present due to cosmic radiation and environmental photo-ionization producing background electrons, as well as remaining charges from previous plasma discharges. \DIFaddbegin \DIFadd{The initial electron density was chosen as such in order to increase the initial weighting of the super-particles, and thereby the simulation speed. The high initial density increases the speed of the initial electron avalanches and streamer breakdown. As seen later on, maximum achieved densities are on the order of $1\cdot10^{22}\,$m$^{-3}$, which is much higher than the initial density; therefore, the final profiles and mechanisms would not change if a lower initial density were chosen. Thus, the high initial density serves to increase the simulation speed while not altering the results of the simulations. }\DIFaddend It should be noted that the usage of uniform seed electrons does not consider local effects of previous discharges. As the plasma streamers evolve, the particle number of each considered species will rapidly increase due to the ionization avalanches. To account for this and to reduce the computation time, the weight of each super-particle is adaptive.
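The adaptive weighting relies on merging same-species super-particles. A minimal 1D sketch of one momentum- and energy-conserving merge scheme is given below; this is an assumed textbook-style variant (all particles of a cell collapsed into two equal-weight particles), not necessarily the algorithm actually implemented in VSim:

```python
import math

def merge_cell_1d(particles):
    """Combine all same-species super-particles of one mesh cell into two
    equal-weight particles while exactly conserving total weight, momentum,
    and kinetic energy (1D velocities; weights are real-particle counts).
    Assumed illustrative scheme, not VSim's actual merger."""
    W = sum(w for w, v in particles)                 # total weight
    P = sum(w * v for w, v in particles)             # total momentum / mass
    E = sum(0.5 * w * v * v for w, v in particles)   # total kinetic energy / mass
    u = P / W                                        # mean (drift) velocity
    # velocity spread required to restore the original kinetic energy
    d = math.sqrt(max(2.0 * E / W - u * u, 0.0))
    return [(0.5 * W, u + d), (0.5 * W, u - d)]
```

In a full code, such a merge would be invoked per cell once the stated threshold of 10 super-particles is exceeded, which keeps the particle count bounded as the avalanches grow.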
A merger algorithm conserving both momentum and energy will combine same-species super-particles when the number of said super-particles exceeds a threshold value of 10 super-particles per cell of the simulation mesh. As the particle numbers only increase within the considered simulated time, no de-merger algorithm is implemented. This adaptive weight and merger algorithm is described in more detail in \cite{Zhang2017}. \DIFaddbegin \DIFadd{Elastic, excitation, ionization, and attachment collisions of electrons with O$_2$ and N$_2$ gas molecules make up the considered reaction mechanisms as explained in more detail by \mbox \cite{Zhang2017}}\hspace{0pt . The corresponding cross sections and threshold energies are adopted from the LXCat database and literature \mbox \cite{LiebermannAndLichtenberg,Furman2002,A_V_Phelps1999,PANCHESHNYI2012,LXCATdatabase}}\hspace{0pt . At the surface of the dielectric barrier, only electron absorption is considered, }\textit{\DIFadd{i.e.}} \DIFadd{no electron reflection or surface electron emission is considered. As reported in \mbox \cite{Zhang2015,Zhang2017}}\hspace{0pt , the inclusion of secondary electron emission (SEE) surface coefficients does not significantly alter the form of the simulated positive streamers, due to the floating nature of the streamer head. The negative streamer, however, propagates along the surface of the dielectric barrier, and as such, SEE coefficients would be more critical. The inclusion of SEE coefficients would theoretically increase the number of "background" electrons available for streamer propagation, and as such the streamers would propagate faster; however, their forms should not strongly change.
Additionally, due to the lower electric fields of the negative streamer and the very short considered timescales, the effect of ion-induced SEE would be very limited within this investigation.}\DIFaddend \begin{figure*}[t] \centering \subfloat{ \label{fig:Geom(a)} \includegraphics[width=0.885\textwidth]{GeoLarge.eps}} \\ \subfloat{ \label{fig:Geom(b)} \includegraphics[width=0.885\textwidth]{GeoSmall.eps}} \caption{Schematic of the simulation regimes. Subfigures (a) and (b) correspond to the DC and AC simulated geometries, respectively. The color scale corresponds to the different materials as follows: I) air ($80\,\%$ N$_2$ and $20\,\%$ O$_2$), II) Al$_2$O$_3$ dielectric, III) grounded electrode, IV) powered electrode. The boxed-in regions denoted with (i) correspond to the regions that are presented in greater detail for the rest of the publication.} \label{fig:Geometry} \end{figure*} With each successive time step of the model, the particle pusher, particle merger, and Monte Carlo collision algorithms are executed in succession for all particle species. After the collisions, a new electron super-particle is added to the simulation regime, the density of each cell is calculated, and Poisson's equation is solved in order to get the electric forces being applied to each particle, after which the cycle repeats. A diagram of the general flow is shown in \cref{fig:ModelFlow}. \subsection{Simulated geometry} \label{Sim Geometry} The geometry to be simulated is a cross section of the twin SDBD described \DIFdelbegin \DIFdel{above in \mbox \cref{Electrode Geometry}}\hspace{0pt }\DIFdelend \DIFaddbegin \DIFadd{ in \mbox \cite{Offerhaus2017,Offerhaus2018,Offerhaus2019,Kogelheide2019,Schuecke2020}}\hspace{0pt , and shown in \mbox \cref{fig:Electrode}}\hspace{0pt }\DIFaddend .
The twin SDBD simultaneously produces positive- and negative-phased plasma streamers along the edges of the metallic traces; however, the two phases are separated by the \DIFaddbegin \DIFadd{Al$_2$O$_3$ }\DIFaddend dielectric barrier. On either side of the dielectric barrier, ignition on opposite edges of the respective metallic trace can be considered as two individual but same-phased streamers. Two different simulation geometries, referred to as geometry(a) and geometry(b), are considered in order to appropriately resolve the interaction of both the same-phased and respectively opposite-phased plasma streamers. Simulation geometry(a) and simulation geometry(b) are presented in \cref{fig:Geometry}. In total, geometry(a) contains a 2D plane that is $9.6\,$mm x $1.2\,$mm in Cartesian X and Y coordinates. The plane is uniformly divided into square cells with unit length of $2.4\,\mu$m resulting in a square lattice of 4000 x 500 cells. \DIFaddbegin \DIFadd{The grid size was chosen based on the Courant limit, $c\cdot dt<dx$, where $c$ is the speed of light and $dx$ is the grid size. }\DIFaddend Geometry(b) utilizes the same grid cell size, but uses only 1000 x 500 cells resulting in a \DIFaddbegin \DIFadd{total }\DIFaddend width of $2.4\,$mm. \DIFdelbegin \DIFdel{However, for }\DIFdelend \DIFaddbegin \DIFadd{For }\DIFaddend ease of comparison, results from a zoomed-in region of size 500 x 500 cells from both simulated geometries are presented for the rest of the paper. The respective regions are outlined by a dashed \DIFdelbegin \DIFdel{lined }\DIFdelend \DIFaddbegin \DIFadd{line }\DIFaddend and annotated with $(i)$ in \cref{fig:Geometry}.
\DIFdelbeginFL \DIFdelFL{Absolute }\DIFdelendFL \DIFaddbeginFL \DIFaddFL{The }\DIFaddendFL magnitude of the electric field is plotted on a linear intensity color scale\DIFdelbeginFL \DIFdelFL{. The }\DIFdelendFL \DIFaddbeginFL \DIFaddFL{, where the }\DIFaddendFL threshold value for the minimum intensity \DIFdelbeginFL \DIFdelFL{scale }\DIFdelendFL is chosen to be $1\cdot10^{6}\,$V/m. The normalized direction of the electric field is shown via the vector \DIFdelbeginFL \DIFdelFL{arrows}\DIFdelendFL \DIFaddbeginFL \DIFaddFL{field}\DIFaddendFL .} \label{fig:EField} \includegraphics[width=0.885\textwidth]{Potential_arrows.eps} \caption{Electric potential distribution of the simulated electrode geometries for an applied $+8\,$kV and $-8\,$kV potential in (a) and (b) respectively. The \DIFdelbeginFL \DIFdelFL{magnitude of the }\DIFdelendFL electric potential is plotted on a linear intensity color scale. \DIFdelbeginFL \DIFdelFL{The }\DIFdelendFL \DIFaddbeginFL \DIFaddFL{Additionally, the }\DIFaddendFL normalized direction of the electric field is shown via the vector \DIFdelbeginFL \DIFdelFL{arrows}\DIFdelendFL \DIFaddbeginFL \DIFaddFL{field}\DIFaddendFL .} \label{fig:EPotential} \end{figure*} Firstly, to investigate the interactivity of two same-phase streamers, positive-positive or negative-negative, two anodes and two cathodes are included in simulation geometry(a). The two same-phase electrodes are simulated with the same potential under DC \DIFdelbegin \DIFdel{condtions }\DIFdelend \DIFaddbegin \DIFadd{conditions }\DIFaddend and are separated in the X-direction by $9.5\,$mm, corresponding to the distance between the edges of two neighboring and parallel metallic traces of the physical electrode. In order to minimize the computational time, the X boundaries of geometry(a) correspond to the vertical center lines of the metallic traces. Simulation geometry(a) may be seen in \cref{fig:Geom(a)}. 
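As a quick consistency check of the quoted discretization, the cell counts and the uniform cell size reproduce the stated domain dimensions of both geometries (all numbers taken from the text):

```python
# Grid bookkeeping for the two simulated geometries (values from the text).
dx = 2.4e-6                    # uniform square cell edge [m]
cells_a = (4000, 500)          # geometry (a)
cells_b = (1000, 500)          # geometry (b)
size_a = (cells_a[0] * dx, cells_a[1] * dx)   # expected 9.6 mm x 1.2 mm
size_b = (cells_b[0] * dx, cells_b[1] * dx)   # expected 2.4 mm x 1.2 mm
zoom = 500 * dx                # detail region (i): expected 1.2 mm square
```

The $9.5\,$mm edge-to-edge trace separation of geometry(a) thus fits inside the $9.6\,$mm domain with the trace center lines on the X boundaries.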
Later, in \DIFdelbegin \DIFdel{\mbox \cref{SingleStreamers} }\hspace{0pt }\DIFdelend \DIFaddbegin \DIFadd{\mbox \cref{SingleStreamers,DualStreamers} }\hspace{0pt }\DIFaddend it is deduced that minimal interactivity is observed between two same-phase streamers. This is due to the limited spatial propagation of the plasma streamers on the considered timescales. Therefore, it is appropriate to simulate a section centered about just one metallic trace on the same timescales; thus, a second simulation geometry is investigated. In simulation geometry(b) only one set of electrodes \DIFdelbegin \DIFdel{are considered, are }\DIFdelend \DIFaddbegin \DIFadd{is considered, is }\DIFaddend only simulated under AC conditions, and \DIFdelbegin \DIFdel{are }\DIFdelend \DIFaddbegin \DIFadd{is }\DIFaddend centered about the X-axis with the walls being $1.2\,$mm away from either side of their center line. \DIFaddbegin \DIFadd{Concerns about the reduced simulation domain affecting the calculated electric field strengths are mitigated by the rapid decay of the field strength with distance from the electrodes. The usage of Neumann boundary conditions additionally improves the accuracy, as the simulation walls are not forced to a specific potential. }\DIFaddend Simulation geometry(b) may be seen in \cref{fig:Geom(b)}. Both considered geometries of the 2D PIC/MCC model represent a cross-sectional view of the electrode structure, where the anodes and cathodes are separated along the Y-axis by the dielectric barrier. The dielectric is located in the middle of the Y-axis, was chosen to be $0.500\,$mm thick and extends across the whole X-direction\DIFdelbegin \DIFdel{. }\DIFdelend \DIFaddbegin \DIFadd{, and is simulated with a dielectric constant of 9. }\DIFaddend In this representation, the Z-direction would equate to the length (or width) of the physical electrode setup but is mathematically treated as constant/homogeneous.
This results in a simulation regime that is most valid for a planar section in the middle of any grid structure. In both geometries, the electrode structure itself is a geometrical composition of multiple tangent arcs resulting in a "hump"-like structure. This electrode structure is used to approximate the real geometric structure of the metallic traces which can be seen in \cref{fig:SEM}. It should be noted that the simulated aspect ratios of the electrode thickness and width to the dielectric thickness are significantly different from reality; however, this was chosen as such in order to avoid numerical issues which would arise from using an appropriately sized simulation grid for realistic aspect ratios. Furthermore, the reduced dielectric thickness of the simulations versus the actual electrode configuration should not lead to any major differences in the interpretations of this paper, as it is the surface of the dielectric that plays a much more important role. By using a reduced dielectric thickness, we are able to increase the number of computational cells available for the plasma propagation, without increasing the entire simulation domain. Particle densities and electric fields are resolved using a cutting-cell technique in order to handle the irregular geometry\DIFdelbegin \DIFdel{. }\DIFdelend \DIFaddbegin \DIFadd{, through contributions of neighboring cells. The authors refer the reader to references \mbox \cite{Smithe2008,Meierbachtol2015,loverich2010} }\hspace{0pt for more information. }\DIFaddend Neumann boundary conditions are used in all directions to ensure a smooth electric potential distribution at the boundaries of the simulation walls. The time steps are non-adaptive and fixed at $2\cdot10^{-13}\,$s.
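The effect of the dielectric on the electrostatic field solve can be illustrated with a minimal 1D finite-difference sketch: solving $\mathrm{d}/\mathrm{d}y\,(\varepsilon_r\,\mathrm{d}\Phi/\mathrm{d}y)=0$ across a $0.5\,$mm slab with $\varepsilon_r=9$ between two air gaps held at $8\,$kV and $0\,$V. This is an assumed standard conservative discretization for illustration only, not VSim's actual cut-cell field solver:

```python
import numpy as np

# 1D sketch: air | 0.5 mm dielectric (eps_r = 9) | air, 8 kV across 1.5 mm.
n, L = 300, 1.5e-3
dy = L / (n - 1)
y = np.linspace(0.0, L, n)
eps = np.where((y > 0.5e-3) & (y < 1.0e-3), 9.0, 1.0)  # relative permittivity
eps_h = 0.5 * (eps[:-1] + eps[1:])       # permittivity on half-grid points

A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0                # Dirichlet electrode potentials
b[0], b[-1] = 8e3, 0.0
for i in range(1, n - 1):                # conservative flux-balance stencil
    A[i, i - 1] = eps_h[i - 1]
    A[i, i] = -(eps_h[i - 1] + eps_h[i])
    A[i, i + 1] = eps_h[i]

phi = np.linalg.solve(A, b)              # electric potential [V]
E = -np.gradient(phi, dy)                # field along y [V/m]
```

Continuity of the normal displacement field forces the field inside the dielectric to be $\varepsilon_r$ times weaker than in the adjacent air, which the sketch reproduces.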
\DIFdelbegin \DIFdelend Similar to \cite{Likhanskii2010}, a single new electron super-particle is randomly added to the simulation domain at each time step in order to account for random events such as cosmic radiation, photo-ionization, \textit{etc.} as described in \cite{Ebert2006,E_M_van_Veldhuizen2002,Qiu2017}. These random events are beyond the scope of the available \DIFdelbegin \DIFdel{VSIM }\DIFdelend \DIFaddbegin \DIFadd{VSim }\DIFaddend functions. The seed electrons, both background and newly loaded, are sufficient in number to support streamer propagation, yet remain far fewer than the generated plasma and therefore do not interfere with the plasma bulk. The generated plasma density profile is also much smaller than the simulation domain in both considered geometries. \subsection{Waveform variation} \label{Waveform} In all considered simulations and both geometries, the electrode(s) above the dielectric barrier are treated as the powered electrode(s) while the bottom electrode(s) are held constant at $0\,$V. This choice is arbitrary and due to the physical symmetry of the system would provide only mirrored results if the \DIFdelbegin \DIFdel{the }\DIFdelend opposite choice, either inverse polarity and/or choice of powered electrode\DIFaddbegin \DIFadd{, }\DIFaddend were made. Initially, a constant positive $8\,$kV potential is applied to geometry(a), thus the two powered electrodes take the role of the anodes while the bottom two are the cathodes. The initial electric field distribution can be seen in \cref{fig:EField}(a) and the initial potential distribution can be seen in \cref{fig:EPotential}(a). \DIFaddbegin \DIFadd{Within both figures, the magnitudes of the presented quantity are shown via the color scale, and the normalized direction of the electric field is additionally presented for further clarity.
The normalized direction is presented as a vector field, where the X and Y directions of the vectors are the normalized X and Y values of the electric field at that grid cell. Naturally, the magnitude of the electric field is obtained from the square root of the sum of the X and Y components squared: $E_{mag} = \sqrt{E_X^2 + E_Y^2}$. }\DIFaddend First, in order to investigate solely the role of the positive streamers, only the top half of the simulation area is seeded with the initial electrons. Likewise, the bottom half is subsequently seeded in a second simulation in order to solely investigate the negative streamers. Third, both halves are identically seeded, thereby investigating the interplay and differences of both discharges igniting simultaneously under the DC voltage conditions. These three conditions are applied to geometry(a) only. Lastly, a varying voltage waveform is investigated. \begin{figure}[t]\centering \includegraphics[width=0.4425\textwidth]{ACVoltage.eps} \caption{Applied voltage waveform of the AC simulations. Dashed lines labeled a through f at 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns respectively represent the times at which results are presented in \DIFdelFL{ \mbox \cref{DualStreamers}}\hspace{0pt} \DIFaddFL{ \mbox \cref{ACStreamers}}\hspace{0pt} .} \label{fig:ACPulseform} \end{figure} Geometry(b) is only investigated under the AC conditions shown in \cref{fig:ACPulseform}. Under these conditions, the role of the anode and cathode switches twice, thereby giving insights into the streamer dynamics under fast voltage switching. Initially, the applied voltage potential sharply rises within $0.1\,$ns to the $8\,$kV maximum which is then held constant for $0.7\,$ns. During this time, the anode is located on the top side of the dielectric barrier. At $0.8\,$ns, the voltage is decreased at the same rate, $80\,$kV/ns, reaching the minimum applied voltage of $-8\,$kV at $1\,$ns making the top side of the dielectric barrier the cathode.
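The piecewise-linear pulse of \cref{fig:ACPulseform} can be written down explicitly; the timing of the final return ramp (from $1.7\,$ns back to $+8\,$kV at the same $80\,$kV/ns slope) is inferred here from the stated $0.7\,$ns negative plateau and the $2\,$ns window:

```python
def applied_voltage(t_ns):
    """Applied AC pulse [V] vs. time [ns]; breakpoints from the text,
    return-ramp timing inferred (sketch, not the exact source waveform)."""
    V = 8e3
    if t_ns < 0.1:                     # initial rise to +8 kV (80 kV/ns)
        return V * t_ns / 0.1
    if t_ns < 0.8:                     # positive plateau
        return V
    if t_ns < 1.0:                     # fall to -8 kV at 80 kV/ns
        return V - 80e3 * (t_ns - 0.8)
    if t_ns < 1.7:                     # negative plateau
        return -V
    if t_ns < 1.9:                     # rise back to +8 kV at 80 kV/ns
        return -V + 80e3 * (t_ns - 1.7)
    return V                           # final plateau up to 2 ns
```

Note that the waveform crosses zero at $0.9\,$ns and $1.8\,$ns, two of the dashed marker times in \cref{fig:ACPulseform}.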
Again, this minimum value is held constant for $0.7\,$ns until switching back to the positive $8\,$kV potential, again switching the location of the anode and cathode. Without considering any plasma propagation, the base electric field distribution for both a positive and negative applied potential are shown in \cref{fig:EField} and the equivalent potential distribution can be seen in \cref{fig:EPotential}. \DIFaddbegin \DIFadd{All conditions are simulated for up to a maximum of $2\,$ns, thereby only revealing the inception phase of the streamers. The insights revealed within the \mbox \cref{Results} }\hspace{0pt are consistent with other PIC/MCC models investigating DBD streamers in structured and porous catalytic surfaces \mbox \cite{Zhang2018,Zhang2018Porous}}\hspace{0pt , which also are simulated in the ns timescales. Additionally, the phenomenon of a floating positive surface discharge is also observed in various fluid models \mbox \cite{Babaeva2016,Yan2014}}\hspace{0pt . Therefore, the authors believe the results presented throughout this paper, even given the short time scales, are reasonable. The results reported below are meant for a qualitative understanding of the streamer dynamics in a twin SDBD. The general conclusions for more natural voltage waveforms, such as continuous sine waves, can be drawn, and could warrant further studies considering a real RF source. However, the results obtained in this work are particularly relevant for tailored voltage waveforms, which is a hot topic of current research and is trending towards shorter pulses and steeper rise times.}\DIFaddend \section{Results and Discussion} \label{Results} \subsection{Single Streamer Dynamics} \label{SingleStreamers} \begin{figure*}[t] \centering \includegraphics[width=0.885\textwidth]{ne_up_log.eps} \caption{Spatial profiles of the electron density plotted on a logarithmic intensity scale of the positive streamer simulations with constant voltage. 
Sub figures (a) and (b) correspond to the timestamps of 0.2 and $1.0\,$ns, respectively. Features of importance are labeled, where the annotations are as follows: I) positively charged streamer head leading to streamer propagation, II) shaded region showing location of electron depletion\DIFaddFL{, \textit{i.e.} sheath like feature}, III) potential/failed positive streamer branch.} \label{fig:DC PositiveStreamer} \end{figure*} Under the 8$\,$kV DC conditions with seed electrons present only on the anodic side of geometry(a), the propagation of an anodic phased plasma streamer, also known as positive streamer is simulated and presented in \cref{fig:DC PositiveStreamer}. The initial electric field distribution is shown in \cref{fig:EField}(a) and the initial electric potential distribution is shown in \cref{fig:EPotential}(a). Under these conditions, a cathode oriented positively charged streamer head that is able to freely move from the metallic anode to the dielectric surface is able to form. \begin{figure*}[t] \centering \includegraphics[width=0.885\textwidth]{ne_down_log.eps} \caption{Spatial profiles of the electron density plotted on a logarithmic intensity scale of the negative streamer simulations with constant voltage. Sub figures (a) and (b) correspond to the timestamps of 0.2 and $1.0\,$ns, respectively. Features of importance are labeled, where the annotations are as follows: I) positively charged region leading to \DIFaddFL{positive } streamer \DIFaddFL{like } propagation, II) shaded region showing location of electron depletion\DIFaddFL{, \textit{i.e.} sheath like feature.}} \label{fig:DC NegativeStreamer} \end{figure*} The streamer structure is anchored to the anode just above where the highest electric fields are located. It would be expected that the anchoring would take place at the location of the highest electric field; however, under these conditions this is located at the intersection of the electrode and the dielectric surface. 
At this point, and immediately next to it, due to the strong curvature of the electric field, electrons do not have enough space to gain sufficient energy for ionization. Multiple executions of the simulation produce anchored positions at the same location; furthermore, the anchor position is also at a symmetrical position on the opposite anode, which is not presented in \cref{fig:DC PositiveStreamer}. This suggests that the anchor is positioning itself based on the strong curvature of the anode, and not through the randomness of the ionization events. Indeed, inspection of the simulated electrode shows that the plasma anchors adjacent to the point of strongest curvature. \DIFaddbegin \DIFadd{Under no conditions did the simulated positive streamers extend a significant amount into the X-direction, such that interactions between the two positive streamers do not need to be considered. }\DIFaddend At $0.2\,$ns the positive streamer has advanced $0.12\,$mm, corresponding to a propagation speed of $0.62\,$mm/ns. By the end of the simulated time, $1.0\,$ns, the streamer had largely stopped propagating. The positive streamer had reached a propagation distance of $0.31\,$mm resulting in an averaged speed of $0.31\,$mm/ns. The actual instantaneous speed of the streamer would be significantly slower at this time, as the average includes the faster propagation of the early streamer. It was observed via multiple test executions that these propagation speeds and distances were highly dependent on the initial background electron density. With lower initial densities, the simulated streamer propagates a shorter distance. Likewise, larger background densities would result in faster speeds and longer propagation distances. Initially, the positive streamer began to propagate along the electric field lines at an angle offset from the surface of the dielectric barrier.
The positive streamer head, which is not directly visible in \cref{fig:DC PositiveStreamer}, forms in front of the streamer and along the bottom side between the bulk plasma and the dielectric barrier. The streamer head is annotated in \cref{fig:DC PositiveStreamer} with an arrow labeled (I). Between the dielectric barrier and the positively charged streamer head is located a sheath-like region\DIFaddbegin \DIFadd{, annotated via (II), }\DIFaddend where free electrons are attracted to the streamer head; however, they do not have enough space in order to promote further propagation towards the dielectric. Therefore, the only direction possible is outwards along the X- and positive Y-directions, towards the center of the simulated area. As the streamer continues to propagate along this direction, the \DIFdelbegin \DIFdel{applied }\DIFdelend electric field gets weaker proportional to \DIFdel{$1/r^2$} \DIFadd{$1/r$ (in 2D) or $1/r^2$ (in 3D)}, where $r$ is the distance from the electrode. Thus, the positive streamer is able to advance in a somewhat straight line, parallel to the initial trajectory, which is at some angle to the dielectric surface; under these presented conditions this trajectory angle was determined to be $20.6^\circ$. The further the streamer propagates, the more space is available for propagation into the negative Y-direction, towards the dielectric surface. Therefore, in \cref{fig:DC PositiveStreamer}(b), \DIFdelbegin \DIFdel{two potential branches }\DIFdelend \DIFaddbegin \DIFadd{a potential branch }\DIFaddend had begun to take shape\DIFdelbegin \DIFdel{; however they are }\DIFdelend \DIFaddbegin \DIFadd{, annotated with (III); however, it is }\DIFaddend not able to fully develop. As the cathode is located underneath the positive streamer, that is the only location of the streamer head; therefore, no branching occurs above the streamer bulk.
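The different geometric decay of the field follows directly from Gauss's law: in the 2D model the electrode edge acts effectively as an infinite line charge, whereas a localized charge in 3D acts as a point charge,

```latex
E_{\mathrm{2D}}(r) \;=\; \frac{\lambda}{2\pi\varepsilon_0\, r} \;\propto\; \frac{1}{r}\,,
\qquad
E_{\mathrm{3D}}(r) \;=\; \frac{q}{4\pi\varepsilon_0\, r^2} \;\propto\; \frac{1}{r^2}\,,
```

with $\lambda$ the charge per unit length along the (homogeneous) Z-direction and $q$ a point charge; hence the slower $1/r$ weakening in the 2D representation.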
\DIFaddbegin \DIFadd{Due to the location of the failed branch in \mbox \cref{fig:DC PositiveStreamer}}\hspace{0pt (b)(III), it would be extremely difficult to experimentally observe, and is noticeable within these simulations because of the kinetic nature of PIC/MCC models. Naturally, without experimental evidence, the reader might question the reality of whether branching forms or not at these orientations. The authors believe that the simulations are indeed accurate in predicting these features.}\DIFaddend In \cref{fig:DC NegativeStreamer} the same simulation conditions are presented, except the initial seed electrons are on the cathode side of the dielectric barrier, thus the negative streamer is simulated. The seed electrons are still accelerated in the opposite direction of the electric field lines shown in \cref{fig:EField}(a). \DIFdelbegin \DIFdel{As described in \mbox \cref{NegativeStreamer}}\hspace{0pt , an }\DIFdelend \DIFaddbegin \DIFadd{An }\DIFaddend electron avalanche directed towards the anode initiates the discharge. Under these conditions the electrons are pushed towards the dielectric, where they begin to collect on and charge the surface of the dielectric. A positively charged spatial region forms next to the cathode, but is unable to anchor to the cathode, as it must float at some distance away from the cathode. \begin{figure*}[t] \centering \includegraphics[width=0.885\textwidth]{ne_all_log.eps} \caption{Spatial profiles of the electron density plotted on a logarithmic intensity scale of the dual streamer simulations with constant voltage. Sub figures (a) and (b) correspond to the timestamps of 0.2 and $1.0\,$ns, respectively. 
Features of importance are labeled, where the annotations are as follows: I) positively charged region/streamer head, II) location of electron depletion, \DIFaddFL{\textit{i.e.} sheath-like feature}, III) potential/failed positive streamer branch.} \label{fig:DC BothStreamers} \end{figure*} \DIFaddend Newly created background electrons are pushed away from the cathode. Simultaneously, the electrons are attracted towards the positively charged region. Outside of the sheath region between the two, marked via an arrow labeled (II) in \cref{fig:DC NegativeStreamer}, these two directions are opposite one another. Only a very small number of electrons are accelerated towards the positive charges with enough energy to cause ionization. Therefore, minimal propagation of the negative streamer parallel to the cathode surface takes place\DIFaddbegin \DIFadd{, as depicted via (I)}\DIFaddend . Newly created background and avalanche electrons that reach the dielectric surface, instead of the positively charged spatial region, help to promote the propagation of the negative streamer along the surface of the dielectric in the X-direction away from the cathode and towards the center of the simulation area. However, no distinctly visible negatively charged streamer head is directly observable. At $0.2\,$ns the negative streamer has advanced $0.077\,$mm, corresponding to a propagation speed of $0.39\,$mm/ns. By the end of the simulated time, $1.0\,$ns, the streamer had largely stopped propagating. The negative streamer had reached a propagation distance of $0.25\,$mm resulting in an averaged speed of $0.25\,$mm/ns. The actual instantaneous speed of the streamer would be significantly slower at this time, as the average includes the faster propagation of the early streamer. As with the positive streamer, lower and higher initial electron densities result in a shorter and longer propagation distance, respectively.
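The averaged speeds quoted for both streamer polarities (and collected in \cref{tab:SizeAndSpeed}) are simply the propagation distance divided by the elapsed time:

```python
def averaged_speed(advance_um, elapsed_ns):
    """Averaged streamer propagation speed [um/ns] from the advance [um]
    over the elapsed simulated time [ns]."""
    return advance_um / elapsed_ns

# single-streamer values quoted in the text/table at 0.2 ns:
pos_02 = averaged_speed(123.4, 0.2)   # positive streamer
neg_02 = averaged_speed(77.7, 0.2)    # negative streamer
```

For example, the positive streamer's $123.4\,\mu$m advance over $0.2\,$ns gives the tabulated $617.1\,\mu$m/ns (the instantaneous speed at $1.0\,$ns is lower, since the average is dominated by the fast early phase).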
Furthermore, under no conditions did the two simulated negative streamers next to the two cathodes extend a significant amount in the X-direction, such that interactions between the two negative streamers do not need to be considered. \subsection{Dual Streamer Dynamics - DC} \label{DualStreamers} \begin{figure*}[t] \centering \includegraphics[width=0.885\textwidth]{Charge_all.eps} \caption{Spatial profiles of the charge disparity plotted on a diverging intensity scale of the dual streamer simulations with constant voltage. Sub figures (a) and (b) correspond to the timestamps of $0.2$ and $1.0\,$ns, respectively. Features of importance are labeled with arrows, where the annotations are as follows: I) positively charged region/streamer head, II) surface charges which are visibly hidden by the mask of the dielectric barrier, III) potential/failed positive streamer branch.} \label{fig:DC BothStreamersCharge} \end{figure*} Presented in \cref{fig:DC BothStreamers,fig:DC BothStreamersCharge} is the complete DC scenario, where seed electrons are present on both the anodic and cathodic sides of the dielectric barrier. The same positive $8\,$kV DC voltage is used. Comparing \cref{fig:DC PositiveStreamer}(a), \cref{fig:DC NegativeStreamer}(a), and \cref{fig:DC BothStreamers}(a), a small difference is observed at $0.2\,$ns. Primarily, the sizes and overall densities of both the positive and negative streamers have increased. The positive streamer has advanced $0.15\,$mm while the negative streamer has advanced $0.088\,$mm away from the anodes and cathodes, respectively. By $1.0\,$ns both streamers have significantly increased in size and average density compared to \cref{fig:DC PositiveStreamer}(b) and \cref{fig:DC NegativeStreamer}(b). Failed branches on the positive streamer are still present. The positive streamer has advanced a total of $0.41\,$mm while the negative streamer has advanced a total of $0.27\,$mm.
\Cref{tab:SizeAndSpeed} summarizes the streamer thickness, length, propagation angle, and propagation speed for the positive and negative streamers under all three simulation conditions. The propagation angle is determined as the angle at which the positive streamer propagates away from the dielectric surface, and is treated as $0\,^\circ$ for the negative streamer. The streamer length and thickness are respectively the size of the streamers with respect to the parallel and perpendicular axes about the streamer propagation angle. \begin{table*}[t] \centering \begin{tabular}{|c|l|c c|c c|c|c c|} \hline \multirow{2}{*}{Time} & \multirow{2}{*}{DC Streamer} & \multicolumn{2}{c|}{Thickness [$\mu$m]} & \multicolumn{2}{c|}{Length [$\mu$m]} & {Angle }[{$^\circ$}] & \multicolumn{2}{c|}{Speed [$\frac{\mu\mathrm{m}}{\mathrm{ns}}$]}\\ & & {Average }& {Maximum }& {Average }& {Maximum }& & {Propagation }& {Lateral }\\ \hline \hline \rowcolor{gray!25} \cellcolor{white} & {Positive }& {38.39 }& {49.20 }& {123.4 }& {170.4 }& {20.60 }& {617.1 }& {577.7}\\ \rowcolor{white} \cellcolor{white} & {Negative }& {63.66 }& {133.2 }& {77.70 }& {158.4 }& {-- }& {-- }& {388.5}\\ \rowcolor{gray!25} \cellcolor{white} & {Full (+) }& {40.27 }& {54.00 }& {149.72 }& {206.4 }& {14.80 }& {748.61 }& {723.77}\\ \rowcolor{white} \cellcolor{white}\multirow{-4}{*}{$0.2\,$ns} & {Full (-) }& {64.63 }& {128.4 }& {88.20 }& {166.8 }& {-- }& {-- }& {441.0}\\ \hline \rowcolor{gray!25} \cellcolor{white} & {Positive }& {60.07 }& {76.80 }& {305.5 }& {410.4 }& {13.30 }& {305.5 }& {297.3}\\ \rowcolor{white} \cellcolor{white} & {Negative }& {79.10 }& {115.2 }& {247.9 }& {372.0 }& {-- }& {-- }& {247.9}\\ \rowcolor{gray!25} \cellcolor{white} & {Full (+) }& {65.92 }& {80.40 }& {409.17 }& {516.0 }& {10.50 }& {409.17 }& {402.31}\\ \rowcolor{white} \cellcolor{white}\multirow{-4}{*}{$1.0\,$ns} & {Full (-) }& {97.66 }& {157.2 }& {271.0 }& {429.6 }& {-- }& {-- }& {271.0}\\ \hline \end{tabular} \caption{Extracted
average and maximum streamer thicknesses and lengths of the DC streamer simulations at both output timestamps of $0.2\,$ns and $1.0\,$ns. The thickness and length are treated as the sum of cells perpendicular and parallel to the streamer propagation direction, respectively. The direction of the negative streamers is treated as parallel to the dielectric surface, while the angle of incidence of the positive streamers is determined in post analysis. The propagation speed is determined as the length of the streamer divided by the elapsed time. The lateral speed is the X-component of the propagation speed.} \label{tab:SizeAndSpeed} \end{table*} On the anodic side of the dielectric, the positively charged streamer head of the positive streamer faces the dielectric surface, which can be seen as the red charges in \cref{fig:DC BothStreamersCharge}. This positively charged area acts as a virtual anode that leads to an enhanced electric field in both the X- and Y-directions below the dielectric surface on the cathodic side. Additionally, the positive streamer has a high charge density. The enhanced field and high density promote the expansion of the negative streamer along the surface of the dielectric in the X-direction. The negative streamer thus charges the surface of the dielectric even more. These negative surface charges along the dielectric barrier on the cathodic side act as a virtual cathode, enhancing the electric field in both the X- and Y-directions above the dielectric. Thus, the negative streamer also facilitates an easier expansion of the positive streamer in the X-direction. Here it is clear that both streamers work together in a unison that increases the effective plasma surface coverage and volume of both streamers. Naturally, the electric field reduces proportionally to the square of the distance from the electrodes, such that the positive and negative streamers are eventually no longer able to expand any further, even with their cooperative effect being considered. Therefore, as with the single phase streamer simulations, interactions with the neighboring discharges on the right hand side of the simulation domain do not need to be taken into consideration. In essence, the positive streamer and the negative streamer work together to promote propagation. Both streamers are acting against the potential energy barrier of ionization and the ever decreasing electric field strength. Therefore, with the simultaneous ignition of both positive and negative streamers in a twin SDBD system, the surface coverage and plasma volume are significantly increased when compared to a submerged symmetric SDBD system. When comparing the average lengths of the single and dual streamers in \cref{tab:SizeAndSpeed}, the positive streamer sees an increase of the propagation length by $17.6 - 25.3\,\%$ and the negative streamer sees an increase of $8.5 - 11.9\,\%$ when both streamers are simultaneously ignited.
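As a consistency check, the speed columns of \cref{tab:SizeAndSpeed} can be recomputed from the tabulated average lengths, output times, and propagation angles; a minimal sketch (values transcribed from the table, with the lateral speed taken as the X-component of the propagation speed via the cosine of the angle):

```python
import math

# (label, avg. length [um], time [ns], angle [deg],
#  tabulated propagation speed and lateral speed [um/ns])
rows = [
    ("Positive, 0.2 ns", 123.4,  0.2, 20.6, 617.1,  577.7),
    ("Full (+), 0.2 ns", 149.72, 0.2, 14.8, 748.61, 723.77),
    ("Positive, 1.0 ns", 305.5,  1.0, 13.3, 305.5,  297.3),
    ("Full (+), 1.0 ns", 409.17, 1.0, 10.5, 409.17, 402.31),
]

for label, length, t, angle, v_prop, v_lat in rows:
    # Propagation speed = average length / elapsed time.
    assert abs(length / t - v_prop) < 0.5, label
    # Lateral speed = propagation speed projected onto the X-axis.
    assert abs((length / t) * math.cos(math.radians(angle)) - v_lat) < 0.5, label
print("speed columns consistent with lengths, times, and angles")
```

The tolerance of $0.5\,\mu$m/ns accounts for rounding of the tabulated values.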
\subsection{Dual Streamer Dynamics - AC} \label{ACStreamers} Due to the minimal extension of the plasma into the free space above and below the dielectric surface in the simulations discussed in \cref{SingleStreamers,DualStreamers}, the simulated area was shifted horizontally to be centered about a single electrode pair, and reduced in width. Under this geometry, geometry (b), a bipolar AC square voltage profile with fast rise and short pulse times is simulated, shown in \cref{fig:ACPulseform}. Seed electrons are placed both above and below the dielectric barrier. Under such conditions, during the first positive pulse the plasma propagates nearly identically to the DC case discussed in \cref{DualStreamers,fig:DC BothStreamers,fig:DC BothStreamersCharge}. However, here it is observed that two near-mirror discharges simultaneously propagate about the horizontal center axis of both the anode and cathode. For reasons of consistency, only the right half of the simulated area is shown, as seen in \cref{fig:Geom(b)}. If shown, minimal differences between the left and right discharges would be seen, which may be attributed to the stochastic nature of the PIC/MCC code and the random seed electrons implemented at each time step. Additionally, the implemented rising time of the voltage waveform from $0\,$V to $+8\,$kV at $0.1\,$ns does not introduce many differences, except perhaps a slightly reduced overall density and propagation distance.
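For reference, the bipolar pulse used here and in the following subsections can be sketched as a piecewise-linear function; this is an approximation with ramp timings assumed from the timestamps quoted in this section (a $0.1\,$ns initial rise, polarity switches crossing zero at $0.9$ and $1.8\,$ns), not an exact description of \cref{fig:ACPulseform}:

```python
# Piecewise-linear approximation of the applied bipolar square waveform
# (pulse timings assumed from the timestamps quoted in this section).
def applied_voltage_kv(t_ns):
    """Applied voltage [kV] at time t [ns] for the assumed pulse timing."""
    if t_ns < 0.1:   # initial rise to +8 kV over 0.1 ns
        return 8.0 * t_ns / 0.1
    if t_ns < 0.8:   # positive plateau
        return 8.0
    if t_ns < 1.0:   # first polarity switch, zero crossing at 0.9 ns
        return 8.0 - 16.0 * (t_ns - 0.8) / 0.2
    if t_ns < 1.7:   # negative plateau
        return -8.0
    if t_ns < 1.9:   # second polarity switch, zero crossing at 1.8 ns
        return -8.0 + 16.0 * (t_ns - 1.7) / 0.2
    return 8.0       # end of the simulated window

for t in (0.5, 0.9, 1.0, 1.8, 2.0):
    print(t, applied_voltage_kv(t))
```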
The electron density distribution, positive ion density distribution, \textit{i.e.} the summation of N$_2^+$ and O$_2^+$ ions, charge disparity distribution, and electric field magnitude and direction are shown in \cref{fig:AC_Dens,fig:AC_ionDens,fig:AC_Charge,fig:AC_EField}, respectively. Sub-figures (a) through (f) of each correspond to identical timestamps of interest, shown with respect to the voltage waveform in \cref{fig:ACPulseform}. \begin{figure*}[p] \centering \includegraphics[width=0.885\textwidth]{ne3x2.eps} \caption{Spatial profiles of the electron density plotted on a logarithmic intensity scale at six chosen time stamps of the multi streamer simulations with switching voltage. Sub figures (a) through (f) correspond to the timestamps of 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns, respectively. The applied voltages are respectively written within the electrode profiles. Features of importance are labeled with arrows, where the annotations are as follows: I) positively charged region leading to streamer propagation, II) shaded region showing location of electron depletion, \textit{i.e.} a sheath-like feature, III) potential/failed/completed positive streamer branch.} \label{fig:AC_Dens} \end{figure*} \begin{figure*}[p] \centering \includegraphics[width=0.885\textwidth]{N2O2ions.eps} \caption{Spatial profiles of the positive ion density, \textit{i.e.} the summation of N$_2^+$ and O$_2^+$ ions, plotted on a logarithmic intensity scale at six chosen time stamps of the multi streamer simulations with switching voltage. Sub figures (a) through (f) correspond to the timestamps of 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns, respectively. The applied voltages are respectively written within the electrode profiles.} \label{fig:AC_ionDens} \end{figure*} \begin{figure*}[p] \centering \includegraphics[width=0.885\textwidth]{charge3x2.eps} \caption{Spatial profiles of the charge disparity plotted on a diverging intensity scale at six chosen time stamps of the multi streamer simulations with switching voltage. Sub figures (a) through (f) correspond to the timestamps of 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns, respectively. The applied voltages are respectively written within the electrode profiles. Features of importance are labeled with arrows, where the annotations are as follows: I) positively charged region leading to streamer propagation, II) surface charges which are visually hidden by the mask of the dielectric barrier, III) potential/failed/completed positive streamer branch.} \label{fig:AC_Charge} \end{figure*} \begin{figure*}[p] \centering \includegraphics[width=0.885\textwidth]{Efield3x2.eps} \caption{Spatial profiles of the absolute value of the electric field plotted on a linear intensity scale, as well as directional arrows, at six chosen time stamps of the multi streamer simulations with switching voltage. Sub figures (a) through (f) correspond to the timestamps of 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns, respectively. The applied voltages are respectively written within the electrode profiles. The cut-off value for the minimum intensity scale (white) is chosen as $1\cdot10^{6}\,$V/m. The direction of the electric field is shown via the normalized vector field as discussed in \cref{Waveform}.} \label{fig:AC_EField} \end{figure*} Between $0.8\,$ns and $1.0\,$ns the applied voltage is reduced; at $0.9\,$ns the applied voltage is $0\,$V, after which the roles of the anode and cathode switch.
Due to the polarity switch the electric field is reversed, and thus the electrons move in the opposite direction. Free electrons present in the streamer above the dielectric move away from the metallic electrode, which is now the cathode. Likewise, electrons from the bulk of the streamer below the dielectric move towards the new anode. Electrons along the surface of the dielectric remain attached and do not move. At $1.0\,$ns the voltage on the cathode has reached its minimum value of $-8\,$kV, where it stays constant for a further $0.7\,$ns. After this, a second polarity switch takes place. All the while, the positive ion densities very closely follow the electron density profiles. \subsubsection{$1^{st}$ Polarity Shift - Positive to Negative Streamer} \hfill\\ Paying attention to the top half of the simulation regime focuses on the shift from a positive streamer to a negative streamer. As the voltage on the top electrode drops from $+8\,$kV to $0\,$V between $0.8\,$ns and $0.9\,$ns, sub-figures (a) and (b) respectively of \cref{fig:AC_Dens,fig:AC_ionDens,fig:AC_Charge,fig:AC_EField}, the electrons are not accelerated as strongly as before. The electrons relax and shift slightly inwards towards the streamer bulk and the positively charged streamer head. The plasma volume slightly shrinks, and the overall electron density becomes more refined and increases in number. The positively charged streamer head reduces in thickness and disparity, \textit{i.e.} becomes more quasi-neutral. As the electrons are less strongly, and eventually no longer, attracted to the metallic anode, a positive space charge builds up at the streamer anchor on the anode.
These two effects respectively lead to a reduction of the electric field strength between the streamer and the dielectric surface, and to a very strong electric field between the anode and the streamer anchor. At $0.9\,$ns the quasi-neutral streamer has a slight net positive charge, and thus takes on the role of a virtual anode along the boundaries of the streamer, meaning that the electric field above the streamer has reversed in the X-direction, but not in the Y-direction. Between $0.9\,$ns and $1.0\,$ns, sub-figures (b) and (c) respectively, the applied voltage is negative; thus the metallic electrode is now the cathode and the dielectric surface is the anode. Due to the reversed electric field, electrons within the streamer begin falling towards the dielectric surface. Along the way, the remaining positive charges in the streamer head are flooded with electrons such that no charge disparity is noticeable, as it can be seen that the positive ion densities between the positive streamer and the dielectric do not change between these time steps. During this process, the electric field between the streamer and the dielectric surface completely reverses in both the X- and Y-directions. Naturally, falling electrons starting at locations where the streamer began to branch off but could not expand would reach the dielectric surface first. As the electrons are accelerated towards the dielectric surface, further ionization events take place, creating new ions and electron avalanches. The electrons that first reach the dielectric charge the surface and repel other electrons in the X-direction away from the cathode, increasing the plasma propagation length. The original streamer now acts like a negative streamer.
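The electron response to the reversed field described above can be caricatured with a toy one-dimensional velocity push. This is an illustrative sketch only: the field magnitude, time step, and explicit Euler update are assumptions for demonstration, not the PIC/MCC solver used in this work.

```python
# Toy 1D electron velocity push: when the applied field reverses sign at the
# polarity switch, the electron acceleration (and hence drift) reverses too.
QE = -1.602e-19   # electron charge [C]
ME = 9.109e-31    # electron mass [kg]

def push(v, E, dt):
    """One explicit Euler velocity step: dv = (q/m) * E * dt."""
    return v + QE / ME * E * dt

dt = 1e-12        # assumed 1 ps step
E0 = 1e6          # assumed field magnitude of 1 MV/m towards +Y

v = 0.0
for _ in range(100):       # 0.1 ns before the switch
    v = push(v, E0, dt)
v_before = v               # electrons drift against the field (negative v)

for _ in range(300):       # field reversed at the polarity switch
    v = push(v, -E0, dt)
v_after = v                # the drift direction has reversed (positive v)
print(v_before, v_after)
```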
By $1.7\,$ns, sub-figure (d), the negative streamer has charged the top surface of the dielectric and almost doubled the lateral length of the original positive streamer. Between $0.9\,$ns and $1.0\,$ns, the electrons near the streamer anchor/tail completely break away from the metallic cathode as the electrons are pushed away from it; however, the positive ions do not move. This results in a net positive charge being left behind. Thus a new positively charged streamer head forms between the cathode and the streamer bulk, both above and below the streamer. Along with this new streamer head, an extremely high electric field forms in the local proximity, oriented away from the positive charges towards the cathode. Newly created electrons above the cathode and the streamer bulk, which is acting as an anode, are attracted to the streamer head and a small branch begins to form. This branch is shown in \cref{fig:AC_Dens,fig:AC_Charge}(c) with arrows labeled (III), and is also visible in \cref{fig:AC_ionDens}(c). As the simulation progresses in time, new electrons are continuously attracted towards this branch, gain energy, and eventually cause ionization. A cathode directed positively charged streamer head propagates along and floats above the cathode. A near-mirror branch simultaneously forms on the other side of the metallic grid, which is not shown. Due to the positively charged streamer heads leading both of these branches, they repel one another.
Therefore, neither branch is able to reach the other. By $1.7\,$ns, sub-figure (d), the branch has completely developed. Through multiple executions, it has been observed that this branching does not take place if the applied voltage, and thus the electric field between the streamer head and the cathode, is too low. It should be noted that the initial branching has a very similar structure to the positively charged spatial region of the negative streamers in \cref{fig:DC NegativeStreamer,fig:AC_Dens,fig:AC_ionDens,fig:AC_Charge}(a), and \cref{fig:DC BothStreamers}(b). One could expect that, given a high enough voltage, the positive space charges would continue to wrap around the cathode in the same manner as the branching in \cref{fig:AC_Dens,fig:AC_Charge}(c) and (d). Therefore, the branching should not be considered as solely limited to the polarity switches, but rather as being encouraged by them. As with the positive streamer in the DC case, discussed in \cref{SingleStreamers}, the authors believe these simulated branching mechanisms are accurate, even given the difficulty of experimentally observing them. \subsubsection{$1^{st}$ Polarity Shift - Negative to Positive Streamer} \hfill\\ Focusing now on the bottom half of the simulation regime tracks the shift of the negative streamer to a positive streamer between $0.8\,$ns and $1.0\,$ns. During this time, the applied voltage is switched from $+8\,$kV to $-8\,$kV; however, the bottom electrode is held at a constant $0\,$V. As the applied voltage changes polarity, the bottom electrode also switches roles, now becoming the anode. Unlike the top half, the relaxation of the electric field causes a small shift in the bulk electrons which leads to a large increase in the streamer size, as the electrons are pushed away from the dielectric and towards the metallic electrode. Similar to the top half, the average electron density slightly increases, and the charge disparity in the positively charged streamer head near the now-anode reduces. This eventually leads to the streamer attaching to the anode, as seen in sub-figure (c), as electrons are freely absorbed by it.
This motion also leads to the creation of a strong positive ion density at the anchor position, as seen in \cref{fig:AC_ionDens}(c). Furthermore, a small positive space charge forms between the negatively charged dielectric surface and the bulk plasma as the electrons are pushed away from the dielectric surface, but the positive ions do not move. However, the electrons that had attached to the surface do not desorb within the simulation; neither are electrons emitted due to surface field emission or ion induced secondary electron emission, nor are electrons reflected. The newly formed positively charged head and the negative surface charges form a very high electric field in a very thin sheath-like structure between the streamer and the dielectric surface by $1.0\,$ns. The positively charged streamer head is floating above the surface, which is acting as the cathode; however, due to the original proximity of the bulk plasma to the surface and the surface charges, the streamer head remains very close to the surface. The proximity of the streamer head limits the ability of the streamer to propagate in the X-direction. As electrons are continuously pushed away from the dielectric surface, the thickness of the streamer head, and consequently of the sheath-like region, increases. Eventually, near the ``tip'' of this region along the X-direction, newly generated electrons outside of the plasma bulk are sufficiently attracted towards the positive charges. This leads to the streamer head curling around the ``tip'' of the streamer bulk, providing a virtual anode for further newly created electrons to be attracted to. Sufficiently energetic electrons will promote propagation further in the X-direction, extending the plasma. This propagation also extends significantly in the Y-direction away from the dielectric surface, as electrons created near the surface will not gain enough energy for ionization. This causes the streamer to properly float above the dielectric surface, which can be seen at $1.7\,$ns in sub-figure (d), as expected of a cathode directed positively charged streamer head. The increased propagation length is not as significant as for the streamer on the top half of the dielectric, due to the limiting effect that the surface streamer exhibited. If the surface of the dielectric were not considered as a pure absorber, then the emission and reflection features would provide an additional electron source that would promote the expansion and propagation of the streamer after the voltage had switched. \subsubsection{$2^{nd}$ Polarity Shift} \hfill\\ Between $1.7\,$ns and $1.9\,$ns, the applied voltage potential begins to switch again, this time rising from $-8\,$kV to $+8\,$kV. At $1.8\,$ns, the second polarity change occurs. Due to limited computational resources, the simulation was not executed for a second full positive cycle, and was instead ended at $2.0\,$ns. During this polarity switch, the same changes in the positive and negative streamers are observed. On the bottom half of the simulation, the shift from a positive streamer at $1.7\,$ns, sub-figure (d), to a negative streamer is observed. When the applied voltage is $0\,$V at $1.8\,$ns, sub-figure (e), it can be seen that the floating positively charged streamer head is beginning to be flooded, while a new positively charged streamer head is forming near the metallic cathode. By $2.0\,$ns, sub-figure (f), the streamer bulk has mostly reached the dielectric surface again, has expanded further in the X-direction, and a new positive streamer branch forms near the cathode. It is well expected that this branch would behave as the one discussed above. On the top half of the simulation, not only is the shift from a negative to a positive streamer observed, but also the beginning of the collapse of the positive streamer branch. As already explained and expected, at $1.8\,$ns the positively charged streamer heads of both the negative streamer and the streamer branch near the metallic electrode are flooded by electrons moving towards the new anode.
At $2.0\,$ns the anchor of the main streamer on the anode is fully formed; however, the electrons within the branch have a further distance to travel and have not yet reached the anode. As the polarity switches at $1.8\,$ns, electrons near the dielectric surface are repelled away and a floating positively charged streamer head forms. At $2.0\,$ns this streamer head is beginning to wrap around the large streamer bulk to promote further expansion in both the X- and Y-directions, away from the metallic anode and the dielectric surface respectively. \subsubsection{Summary of Polarity Shifts} \hfill\\ During both polarity shifts, similar and important events take place on the respective positive and negative streamers. The negative streamer is initially attached to the anodic dielectric surface, and floats away from the metallic cathode. As the polarity changes, the electrons reverse in direction, attaching to the metallic anode and forming a positively charged streamer head near the dielectric surface. Newly created electrons are quickly attracted to the streamer head and thus allow the now positive streamer to propagate further in the X- and Y-directions, thereby increasing the volume and overall density of the streamer. The positive streamer is initially floating away from the cathodic dielectric surface, and attached to the metallic anode. As the polarity changes, electron avalanches are instigated and rush towards the dielectric surface, thereby drastically increasing the plasma density, volume, propagation length, and surface coverage. Additionally, as a positively charged streamer head and sheath-like region form near the metallic cathode, newly created electrons are able to instigate an additional positive streamer branch that floats above the metallic cathode. This branching feature also drastically increases the electron density and volume. Given a high enough initial voltage, it is expected that this positive streamer branch could form on the negative streamer before any polarity switching occurs. The increases in plasma density, volume, and surface coverage are expected to be directly beneficial to various applications such as plasma enhanced catalysis and gas treatment. In plasma enhanced catalysis, the dielectric surface will typically be coated with a catalyst, such that any increase in surface coverage directly increases the active area of the catalyst. Additionally, any increase in plasma volume and density will naturally increase the radical densities which are available to react with either the catalytic surface and/or the treatment gas in which the plasma is ignited, thus directly affecting the efficiency of the process. \section{Conclusions and Outlook} \label{Conclusion} In this work, the plasma streamer propagation of a twin SDBD setup was modeled by means of a PIC/MCC code in dry air under DC and AC voltage operation.
The AC driving voltage waveform corresponded to a nanosecond square waveform with sub-nanosecond risetimes. The twin SDBD geometry, being fully exposed and symmetric about the dielectric layer, promotes both positive and negative streamer discharges to ignite simultaneously along the edges of both the anode and cathode. This symmetry has not been theoretically investigated extensively, leaving open the question of, among others, how the streamers affect one another. In order to provide insight into this question, multiple scenarios were simulated. First, the propagation of a positive streamer and of a negative streamer was simulated individually under identical DC conditions. Second, both streamers were allowed to propagate using the same DC conditions, thereby providing insight into the interplay of the two streamers. However, the main focus of the paper is on how the streamers interact and change under AC conditions; therefore, a short multi-nanosecond duration bipolar square pulse is used to approximate said conditions. It was first shown that both the positive and negative streamers behave as expected under DC conditions. Both streamers form a quasi-neutral bulk. The positive streamer forms and propagates via a floating, cathode directed positive streamer head, while the negative streamer propagates via an electron avalanche along the surface of the dielectric barrier. The negative streamer also forms a positive space charge that floats above the metallic cathode. The floating positive space charges of both the positive and negative streamer must float, as new electrons which are introduced between the cathode and said space charges are not able to gain enough energy for new ionization events.
It was then shown that the interaction of both streamers under DC conditions does not significantly alter the propagation methods, but that the positive streamer ``pulls'' the negative streamer while simultaneously being ``pushed'' by the negative streamer, effectively increasing the surface coverage and the densities of the plasma streamers. The speed of propagation of both streamers differs when individually simulated versus when simultaneously simulated. The positively charged streamer head of the positive streamer propagates away from the anode, providing enhanced electric fields that the negative charges of the avalanche of the negative streamer then follow. Likewise, the negative streamer charges the dielectric surface, which then helps to push the positively charged streamer head of the positive streamer further away from the anode. Next, the interactions of the two streamers under switching voltage conditions were investigated. The fast polarity switching of the applied voltage causes significant changes in the streamers. The switches from a positive streamer to a negative streamer, and vice versa, were observed to cause a significant increase in both plasma size and density due to effects similar to those that take place during the DC scenario. It was also observed that additional positive streamer branches are able to form between the negative streamer and cathode under the given conditions. The initial branching structure is very similar to structures that formed on the negative streamer during DC conditions and under AC conditions before any voltage switches. Therefore, it is hypothesized that the voltage switching allows a branch to form more easily, but that branching is still subject to some minimal necessary applied voltage for a given set of geometrical conditions.
Overall, an electrode geometry allowing for two oppositely-phased plasmas to simultaneously ignite is beneficial with respect to plasma size and density. The two fully exposed electrodes create strongly curled electric fields that promote the ignition of plasma streamers near the surface of the dielectric. The simultaneous ignition of the streamers enhances the lateral electric fields, causing the streamers to propagate further away from the metallic electrodes than they would if one electrode was submerged. This effect is even further enhanced if the applied voltage is able to quickly switch polarities before the streamers have a chance to self-extinguish; however, such a fast voltage profile is difficult to achieve experimentally, which should be kept in mind when comparing any numerical information from this paper to measurements. Nonetheless, the enhanced electric fields also allow the plasma to achieve higher densities, which is desirable in many applications. In plasma enhanced catalysis applications, one might want to coat the dielectric surface with a catalyst. Having an enhanced plasma propagation length would directly correlate to an increased surface area of the catalyst that is directly affected by the plasma, leading to a potentially enhanced efficiency. In gas treatment applications, an increased plasma density is typically desirable in order to increase the rate of molecular fragmentation and/or purification. Future experimental measurements and theoretical and/or numerical investigations on the electrode geometry could optimize an electrode system for a given set of applications. Additional simulations of a porous catalytic coating attached to the dielectric surface would provide further insight into plasma enhanced catalysis applications.
\section{Acknowledgements} This work is supported by the German Research Foundation (DFG) within the Collaborative Research Centre CRC1316 projects A4 and A5, the Scientific Research Foundation of Dalian University of Technology, DUT19RC(3)045, and the National Science Foundation of China Grant No. 12020101005. \section*{ORCID iDs} \begin{table}[h] \begin{tabular}{l p{4cm}} Q. Z. Zhang: & \url{https://orcid.org/0000-0002-5726-0829} \\ R. T. Nguyen-Smith: & \url{https://orcid.org/0000-0002-5755-4595}\\ F. Beckfeld: & \url{https://orcid.org/0000-0001-8605-2634}\\ Y. Liu: & \url{https://orcid.org/0000-0002-2680-1338}\\ T. Mussenbrock: & \url{http://orcid.org/0000-0001-6445-4990} \\ J. Schulze: & \url{https://orcid.org/0000-0001-7929-5734}\\ \end{tabular} \end{table} \printbibliography \end{document} \section{Introduction} Dielectric barrier discharges (DBDs) are plasma discharges incorporating at least one layer of dielectric material separating the two electrodes. The dielectric barrier limits the charge transfer and thus the current flow, typically producing a non-thermal plasma at atmospheric conditions. This non-thermal nature allows for the efficient generation of reactive species, thereby providing multiple possibilities in biomedical, surface, and industrial applications \cite{Brandenburg2017,HHKim2004}. DBDs are classifiable into two main categories: volume and surface DBDs. Volume dielectric barrier discharges (VDBDs) are distinguished by having both a gas gap and a dielectric barrier present between the two electrodes, producing either homogeneous or filamentary-like plasmas depending on the conditions \cite{Kogelschatz2010}.
Surface dielectric barrier discharges (SDBDs), on the other hand, have only the dielectric layer directly separating the two electrodes; a plasma is thereby only able to ignite along the surface of the dielectric. Due to the possibility of having a thin structure, SDBDs may have particularly low flow resistance and are therefore commonly researched for gas treatment or flow control purposes \cite{Brandenburg2017,Moreau2007,Mueller2007,Corke2010,HHKim2004}. SDBDs can be built in many unique geometrical configurations of varying symmetry, providing either a single axis or multiple axes for plasma propagation. They may also allow for either a single phase (anodic or cathodic) plasma, or a dual phase ignition process. Throughout the 1990s, SDBDs were extensively investigated as potential actuators for gas flow control \cite{Brandenburg2017,HHKim2004,Moreau2007,Corke2010}. For such purposes an asymmetric geometry, where one electrode is offset from the opposite electrode and possibly completely submerged in the dielectric, is typically used \cite{Corke2010,Akishev2012,Audier2014,Biganzoli2012,Debien2012,GAO2017,Peng2019,Xiahua2016,Soloviev2017,Starikovskii2009,Unfer2010,Che2012,Hu2018,Shao2013,Soloviev2018,Opaits2008,Sato2019}. Much effort has been put into controlling the plasma behavior, such as densities and surface charge deposition, and the corresponding aerodynamic effects of said SDBD configurations \cite{Opaits2008,Corke2010,Opaits2012,Audier2014,Sato2019}. It has also been shown that AC and pulsed waveforms can significantly modulate the plasma profiles (at positive and negative voltage phases) \cite{Akishev2012,Audier2014,Biganzoli2012,Che2012,Debien2012,Hu2018,Soloviev2017,Soloviev2018,Starikovskii2009,Unfer2010}. In recent years, SDBDs have undergone extensive investigation for gas purification for industrial and environmental protection applications \cite{Brandenburg2017,Mueller2007,HHKim2004}.
Absolutely calibrated two-wavelength emission spectroscopy has been used to characterize a symmetric SDBD under tailored voltage waveforms \cite{Offerhaus2017,Offerhaus2018,Offerhaus2019}. The waveform under experimental investigation is a damped sine wave with a period of multiple $\mu$s and adjustable peak-to-peak voltage, pulsed in the kHz regime. Additional emission spectroscopy, absorption spectroscopy, and Fourier transform infrared (FTIR) spectroscopy methods have also been used to measure various species densities and chemical modifications of cystine. Furthermore, flame ionization detectors, gas chromatography-mass spectroscopy, and ion energy analyzer quadrupole mass spectroscopy are all being used to investigate and characterize the conversion of volatile organic compounds into non-harmful and non-toxic compounds \cite{Schuecke2020}. Additionally, the inclusion of gas pre-heating and catalyst coatings is being investigated for higher conversion efficiencies \cite{Schuecke2020,Peters2021}. In many applications, like chemical processing and gas purification, the interaction between a plasma and a catalyst yields synergistic effects resulting in enhanced performance \cite{HHKim2004,HHKim1999}. As such, various structures of catalytic material are often inserted into traditional DBD reactors including, but not limited to: spheres, honeycombs, 3D fibre deposition structures, and coatings of the dielectric barrier itself \cite{Zhang2018,HHKim1999}. The synergistic effect is obtained via two primary mechanisms. Firstly, the altered geometry along with tailored voltage waveforms influences the discharge characteristics \cite{Brandenburg2017,HHKim2004,Zhang2018,HHKim2016,Zhang2015}. Secondly, the plasma distribution determines the effective contact area of the catalyst, thereby altering the morphology and work function of the catalyst \cite{Neyts2014,Zhang2017}.
This places great importance on generating a controllable plasma density and spatial distribution \cite{Brandenburg2017,HHKim2004,Zhang2018,HHKim2016,Shang2019}. The above studies, although very interesting, were mostly based on experiments with submerged SDBDs where the plasma discharge is confined to one side of the dielectric plate, providing investigations only into a single phase ignition process \cite{Akishev2012,Audier2014,Biganzoli2012,Corke2010,Debien2012,GAO2017,Moreau2007,Opaits2012,Peng2019,Xiahua2016,Shang2019,Soloviev2017,Starikovskii2009}. That is to say, only either an anodic or a cathodic phase plasma is present, but never both simultaneously. This single phase nature limits the effective volume and surface area of the plasma, which defines the effective catalytic surface area exposed to the plasma species in plasma enhanced catalysis. As such, the catalyst performance is potentially limited to a great extent in a single phase SDBD. In gas treatment conditions, an SDBD electrode system is very likely to be placed along the central plane parallel to the gas flow in order to minimize flow resistance and increase the treatment volume. Under these conditions, it is very clear that utilizing an SDBD electrode system which ignites on both sides of the dielectric plate will improve the treatment volume, and as such the efficiency of the process. Unfortunately, most theoretical investigations utilizing circuit models \cite{Pipa2012,Peeters2014,Pipa2020_PowerDBDEQC}, global models, molecular dynamics models \cite{Neyts2014}, fluid models \cite{Che2012,Peng2019,Soloviev2018}, and even particle-in-cell/Monte Carlo collision (PIC/MCC) models \cite{Zhang2015,Zhang2017,Zhang2018} of (S)DBDs and packed bed reactors provide limited insights into the underlying mechanisms of the plasma propagation \cite{Mujahid2018,Mujahid2020,mujahid2020Propagation}.
No contributions on the theoretical investigation of a dual phase symmetric SDBD could be found by the authors, pointing to a significant lack of knowledge of such configurations. The inherent mechanisms behind the evolution of the plasma discharge in asymmetric, and even more so in symmetric, SDBDs are still not fully understood. It is not yet clear how simultaneous positive and negative surface streamers (above and below the dielectric) can interact with each other, and to what extent, if any, they enhance one another. It is not clear how the streamers respond to tailored voltage waveforms, nor what the optimized conditions are for generating large treatment volumes. It is unknown to what extent the surface streamers interact with an active surface such as a catalyst. These are crucial pieces of information to ensure good plasma enhanced catalysis performance. Additionally, many experiments, such as optical emission spectroscopy, still have open questions as to whether the results are more representative of the streamer bulk or of the highly dynamic streamer head. These concerns demand a more detailed simulation of the dynamic behavior of the positive and negative streamers in a dual phase symmetric SDBD during the ignition process. \begin{figure}[t] \centering \includegraphics[width=0.4425\textwidth]{NegativStreamer_Initial-eps-converted-to.pdf} \caption{Schematic detailing the negative streamer formed via an anode oriented electron avalanche.} \label{fig:NegativeStreamer} \end{figure} Therefore, in the present work we computationally investigate the plasma propagation of a symmetric, dual phase SDBD, hereby referred to as the twin SDBD, under various voltage waveform conditions. The particular geometry of the twin SDBD ensures that both an anodic and a cathodic phase plasma are simultaneously ignited, separated by the dielectric barrier, and physically symmetric about the metallic electrodes.
The symmetric geometry not only gives rise to a higher plasma surface coverage, but also enables a direct comparison between the positive streamers on the anode side and the negative streamers on the cathode side, as well as the interaction between the two. The numerical investigations are carried out by means of the 2D PIC/MCC simulation software VSim, a multi-physics simulation tool which combines the Finite-Difference Time-Domain (FDTD), PIC, and Charged Fluid (Finite Volume) methods for simulating electrical gas discharges \cite{NIETER2004}. The insights provided by this work are applicable not only to the twin SDBD and similar geometries, but also to other SDBD geometries, including asymmetric ones, via a deeper understanding of the streamer propagation and form. \begin{figure}[t] \centering \includegraphics[width=0.4425\textwidth]{PositivStreamer_Initial-eps-converted-to.pdf} \caption{Schematic detailing the positive streamer, which forms via a cathode oriented propagation front.} \label{fig:PositiveStreamer} \end{figure} To provide a basis for understanding the streamer dynamics in a twin SDBD that will be revealed in this work, we briefly recall the fundamentals of positive and negative streamer dynamics in a DBD. A negative streamer, see \cref{fig:NegativeStreamer}, ignites through an anode oriented electron avalanche: electrons, which are accelerated against the direction of the electric field, collide with the background gas. Ionization takes place, causing an exponential growth of electrons and ions, creating a quasineutral bulk plasma that propagates from the cathode to the anode. A positive streamer, see \cref{fig:PositiveStreamer}, is also created via electron collisions, but is somewhat more complex. The cathode oriented positively charged streamer head attracts the electrons, which cause ionization in front of the streamer head, resulting in an ionization wave.
This ionization wave propagates from the anode to the cathode, leaving behind a quasineutral bulk plasma. Branches may form from the streamer head, creating additional ionization waves; branching is more readily observed in gas mixtures that are susceptible to self-induced photo-ionization. On short timescales, a few nanoseconds and less, a feature very similar to a low pressure sheath forms. The positive streamer head floats above the cathode due to an absence of available electrons, thus creating a region with a very strong electric field. Given an appropriate amount of time, the positive ions do reach the cathode due to their own velocities. At the dielectric(s), any charges that reach the surface adhere to it and charge it. These surface charges repel incoming like charges along the surface, causing both positive and negative streamers to spread out. Due to the low mass of electrons, this effect is more prominent in negative streamers; however, the floating nature of positive streamers can also facilitate a similar effect. For a deeper understanding we refer the reader to Nijdam \textit{et al.} and to Zhang \textit{et al.} \cite{Nijdam2020,Zhang2021}, where the dynamics of positive and negative streamers of a VDBD are detailed via PIC/MCC simulations. \begin{figure}[t] \centering \includegraphics[width=0.4425\textwidth]{SDBD_Elektrode.png} \caption{Computer generated graphic showing the physical structure of the SDBD electrode under consideration. A metallic lattice (dark grey structure) is printed symmetrically on both the top (visible) and bottom (hidden) faces of the Al$_2$O$_3$ dielectric barrier (light grey material). Due to the strong curvature of the electric field lines when under operation, the plasma (purple structure) ignites along the edges of the metallic lattice.} \label{fig:Electrode} \end{figure} This paper is structured as follows: First, in \cref{Model} the computational model and geometry are described.
Following this, in \cref{Results} the results of the various simulations are presented: the DC results in sub-\cref{SingleStreamers,DualStreamers}, and the AC results in sub-\cref{ACStreamers}. Finally, in \cref{Conclusion} our closing remarks and conclusions are discussed. \section{Computational model} \label{Model} \begin{figure}[t] \centering \includegraphics[width=0.4425\textwidth]{NeuElektrode_GitterLinie0001.jpg} \caption{SEM image of electrode cross section. The bulk, homologous material is the Al$_2$O$_3$ dielectric. The hump like structure with larger grains is the metallic electrode trace.} \label{fig:SEM} \end{figure} The geometry to be simulated is chosen to resemble that of the twin SDBD electrode intended for use in gas treatment applications, which was first experimentally presented in \cite{Offerhaus2017} and subsequently in \cite{Offerhaus2018,Offerhaus2019,Kogelheide2019,Schuecke2020}. The authors refer the reader to these references for a detailed description of the twin SDBD system in question. It is important to reiterate that this device consists of a dielectric plate, with metallic grids placed on the surface of the dielectric on both sides. A computer rendered sketch of the system can be seen in \cref{fig:Electrode}. These grids serve as electrodes. The system is built with both geometric and electrical symmetry, such that both a positive and a negative streamer are simultaneously ignited on either side of the dielectric under any given sufficiently high voltage conditions, which thereby warrants the name ``twin SDBD''. The metallic traces of the electrode system have been imaged with a scanning electron microscope for a more accurate depiction of the electrodes within the simulations. An example image of the cross sectional view of the metallic traces can be seen in \cref{fig:SEM}, which shows the curved nature of the metallic traces located on the dielectric; this curvature is included in the simulation.
\subsection{Particle in Cell/Monte Carlo Collision model} \label{Sim Model} \begin{figure}[t] \centering \includegraphics[width=0.4425\textwidth]{MCC_PIC_Flowdens-eps-converted-to.pdf} \caption{Logic flow diagram of the PIC/MCC algorithm. One complete loop of the flow diagram represents one timestep of the PIC/MCC code. During each timestep of the simulation, all sub-algorithms are performed: particles are pushed, merged, collided, and generated, the densities are determined, and the electric forces are calculated.} \label{fig:ModelFlow} \end{figure} A 2D PIC/MCC model is used to study the plasma propagation of the twin SDBD based on the VSim simulation software \cite{NIETER2004}. VSim is widely used and has been validated \cite{NIETER2004,Zhang2015,Zhang2017}. As these investigations took place under conditions similar to those presented here (atmospheric pressure DBDs, nanosecond timescales, and micrometer length scales), we operate under the assumption that our model is also valid. Additionally, PIC/MCC simulations of the COST-Jet at atmospheric pressure yield realistic results that agree well with experiments \cite{Bischoff2018,Korolov2019,Korolov2020}, showing that PIC/MCC models can indeed be used at atmospheric conditions. The PIC/MCC simulations performed in VSim are based on an explicit solver and the electrostatic approximation of Maxwell's equations, which were described in detail in \cite{Birdsall1991}. The PIC/MCC model has the advantage of accounting for the detailed kinetic behavior of charged particles, which may be important for the evolution of electron avalanches and branching mechanisms, and therefore for the plasma streamer profiles. Air at atmospheric pressure is used as the discharge gas, with a constant density of background molecules, $80\,\%$ N$_2$ and $20\,\%$ O$_2$, at $300\,$K.
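The per-timestep PIC/MCC cycle outlined above can be illustrated with a minimal, self-contained Python sketch. All names and the drastically simplified physics (1D geometry, frozen electric field, toy collision probability, periodic boundary) are illustrative placeholders and not VSim's API; only the cell size and timestep values are taken from this work.

```python
import random

# Schematic 1D electrostatic PIC/MCC cycle (illustration only, not VSim's API).
# Each super-particle carries a position x [m], velocity v [m/s], and weight w.

DX = 2.4e-6    # cell size [m] (value from this work)
DT = 2e-13     # fixed timestep [s] (value from this work)
NCELLS = 16    # tiny toy grid
QM = -1.76e11  # electron charge-to-mass ratio [C/kg]

def push(particles, efield):
    """Accelerate each particle in its local field, then move it."""
    for p in particles:
        cell = min(int(p["x"] / DX), NCELLS - 1)
        p["v"] += QM * efield[cell] * DT
        p["x"] = (p["x"] + p["v"] * DT) % (NCELLS * DX)  # periodic for brevity

def deposit(particles):
    """Nearest-grid-point density deposition onto the mesh."""
    dens = [0.0] * NCELLS
    for p in particles:
        cell = min(int(p["x"] / DX), NCELLS - 1)
        dens[cell] += p["w"] / DX
    return dens

def collide(particles, p_ionize=0.01):
    """Toy MCC step: each particle may trigger one 'ionization' event."""
    new = []
    for p in particles:
        if random.random() < p_ionize:
            new.append({"x": p["x"], "v": 0.0, "w": p["w"]})
    particles.extend(new)

def step(particles, efield):
    push(particles, efield)
    collide(particles)
    return deposit(particles)  # a real code would now solve Poisson's equation

random.seed(1)
parts = [{"x": random.uniform(0, NCELLS * DX), "v": 0.0, "w": 2e4}
         for _ in range(100)]
efield = [1e6] * NCELLS  # frozen field for the sketch; VSim solves for it
dens = step(parts, efield)
```

In the actual simulation, the deposited densities feed a field solve each cycle (see \cref{fig:ModelFlow}); the frozen field here only keeps the sketch short.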
Free electrons, N$_2^+$, O$_2^+$ and O$_2^-$ ions are traced throughout the simulation and are represented as super-particles, i.e. one super-particle corresponds to a certain number of real particles defined by its numerical weighting, initially starting at $20\cdot10^3$ real particles per super-particle \cite{Birdsall1991}. In order to numerically initiate the plasma discharge, a uniform distribution of seed electrons is placed within the free space of the simulated geometry. These seed electron super-particles have a density corresponding to $1\cdot10^{15}\,$m$^{-3}$. Realistically, seed electrons are present due to cosmic radiation and environmental photo-ionization producing background electrons, as well as remaining charges from previous plasma discharges. The initial electron density was chosen as such in order to increase the initial weighting of the super-particles, and thereby the simulation speed. The high initial density increases the speed of the initial electron avalanches and streamer breakdown. As seen later on, the maximum achieved densities are on the order of $1\cdot10^{22}\,$m$^{-3}$, which is much higher than the initial density; therefore, the final profiles and mechanisms would not change if a lower initial density were chosen. Thus, the high initial density serves to increase the simulation speed while not altering the results of the simulations. It should be noted that the usage of uniform seed electrons does not consider local effects of previous discharges. As the plasma streamers evolve, the particle number of each considered species rapidly increases due to the ionization avalanches. To account for this and to reduce the computation time, the weight of each super-particle is adaptive. A merger algorithm conserving both momentum and energy combines same-species super-particles when the number of said super-particles exceeds a threshold value of 10 super-particles in any cell of the simulation mesh.
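To make the conservation constraint concrete, the sketch below shows one simple way to merge a group of equal-mass super-particles into two while exactly conserving total weight, momentum, and kinetic energy (1D velocities for brevity). This is a hypothetical illustrative scheme, not necessarily the algorithm implemented in VSim or described in the cited reference.

```python
import math

def merge_1d(particles):
    """Merge same-species super-particles (x, v, w) into two particles that
    conserve total weight, momentum, and kinetic energy (equal-mass species;
    mass is absorbed into the per-mass momentum/energy sums).
    Illustrative scheme only, not VSim's algorithm."""
    W = sum(w for _, _, w in particles)                 # total weight
    P = sum(w * v for _, v, w in particles)             # momentum / mass
    E = sum(0.5 * w * v * v for _, v, w in particles)   # energy / mass
    x_cm = sum(w * x for x, _, w in particles) / W      # weight-centre position
    u = P / W
    # Velocity spread needed to restore the original kinetic energy;
    # 2E/W - u^2 >= 0 holds by the Cauchy-Schwarz inequality.
    d = math.sqrt(max(2.0 * E / W - u * u, 0.0))
    return [(x_cm, u + d, W / 2), (x_cm, u - d, W / 2)]

group = [(0.0, 1.0, 2e4), (1e-6, -2.0, 2e4), (2e-6, 3.0, 2e4)]
merged = merge_1d(group)
```

Collapsing all positions onto the weight centre is the crudest possible spatial choice; schemes that also preserve the in-cell density profile are more involved.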
As the particle numbers only increase within the considered simulated time, no de-merger algorithm is implemented. This adaptive weight and merger algorithm is described in more detail in \cite{Zhang2017}. Elastic, excitation, ionization, and attachment collisions of electrons with O$_2$ and N$_2$ gas molecules make up the considered reaction mechanisms, as explained in more detail by \cite{Zhang2017}. The corresponding cross sections and threshold energies are adopted from the LXCat database and literature \cite{LiebermannAndLichtenberg,Furman2002,A_V_Phelps1999,PANCHESHNYI2012,LXCATdatabase}. At the surface of the dielectric barrier, only electron absorption is considered, \textit{i.e.} no electron reflection or surface electron emission is considered. As reported in \cite{Zhang2015,Zhang2017}, the inclusion of secondary electron emission (SEE) surface coefficients does not significantly alter the form of the simulated positive streamers, due to the floating nature of the streamer head. The negative streamer, however, propagates along the surface of the dielectric barrier, and as such, SEE coefficients would be more critical. The inclusion of SEE coefficients would theoretically increase the number of ``background'' electrons available for streamer propagation, such that the streamers would propagate faster; however, their forms should not strongly change. Additionally, due to the lower electric fields of the negative streamer and the very short considered timescales, the effect of ion induced SEE would be very limited within this investigation. \begin{figure*}[t] \centering \subfloat{ \label{fig:Geom(a)} \includegraphics[width=0.885\textwidth]{GeoLarge-eps-converted-to.pdf}} \\ \subfloat{ \label{fig:Geom(b)} \includegraphics[width=0.885\textwidth]{GeoSmall-eps-converted-to.pdf}} \caption{Schematic of the simulation regimes. Subfigures (a) and (b) correspond to the DC and AC simulated geometries respectively.
The color scale corresponds to the different materials as follows: I) air ($80\,\%$ N$_2$ and $20\,\%$ O$_2$), II) Al$_2$O$_3$ dielectric, III) grounded electrode, IV) powered electrode. The boxed-in regions denoted with (i) correspond to the regions that are presented in greater detail for the rest of the publication.} \label{fig:Geometry} \end{figure*} With each successive timestep of the model, the particle pusher, particle merger, and Monte Carlo collision algorithms for all particle species follow in succession. After the collisions, a new electron super-particle is added to the simulation regime, the density of each cell is calculated, and Poisson's equation is solved in order to obtain the electric forces applied to each particle, after which the cycle repeats. A diagram of the general flow is shown in \cref{fig:ModelFlow}. \subsection{Simulated geometry} \label{Sim Geometry} The geometry to be simulated is a cross section of the twin SDBD described in \cite{Offerhaus2017,Offerhaus2018,Offerhaus2019,Kogelheide2019,Schuecke2020}, and shown in \cref{fig:Electrode}. The twin SDBD simultaneously produces positive and negative phased plasma streamers along the edges of the metallic traces; however, the two phases are separated by the Al$_2$O$_3$ dielectric barrier. On either side of the dielectric barrier, ignition on opposite edges of the respective metallic trace can be considered as two individual but same-phased streamers. Two different simulation geometries, referred to as geometry(a) and geometry(b), are considered in order to appropriately resolve the interaction of both the same-phased and the opposite-phased plasma streamers. Simulation geometry(a) and simulation geometry(b) are presented in \cref{fig:Geometry}. In total, geometry(a) contains a 2D plane that is $9.6\,$mm $\times$ $1.2\,$mm in Cartesian X and Y coordinates.
The plane is uniformly divided into square cells with a unit length of $2.4\,\mu$m, resulting in a square lattice of 4000 $\times$ 500 cells. The grid size was chosen based on the Courant limit, $c\cdot dt<dx$, where $c$ is the speed of light, $dt$ is the timestep, and $dx$ is the grid size. Geometry(b) utilizes the same size grid cell, but uses only 1000 $\times$ 500 cells, resulting in a total width of $2.4\,$mm. For ease of comparison, results from a zoomed-in region of size 500 $\times$ 500 cells from both simulated geometries are presented for the rest of the paper. The respective regions are outlined by a dashed line and annotated with $(i)$ in \cref{fig:Geometry}. \begin{figure*}[t] \centering \includegraphics[width=0.885\textwidth]{Efield_arrows-eps-converted-to.pdf} \caption{Electric field distribution of the simulated electrode geometries for an applied $+8\,$kV and $-8\,$kV potential in (a) and (b) respectively. The magnitude of the electric field is plotted on a linear intensity color scale, where the threshold value for the minimum intensity is chosen to be $1\cdot10^{6}\,$V/m. The normalized direction of the electric field is shown via the vector field.} \label{fig:EField} \includegraphics[width=0.885\textwidth]{Potential_arrows-eps-converted-to.pdf} \caption{Electric potential distribution of the simulated electrode geometries for an applied $+8\,$kV and $-8\,$kV potential in (a) and (b) respectively. The electric potential is plotted on a linear intensity color scale. Additionally, the normalized direction of the electric field is shown via the vector field.} \label{fig:EPotential} \end{figure*} Firstly, to investigate the interactivity of two same-phase streamers, positive-positive or negative-negative, two anodes and two cathodes are included in simulation geometry(a).
The two same-phase electrodes are simulated with the same potential under DC conditions and are separated in the X-direction by $9.5\,$mm, corresponding to the distance between the edges of two neighboring and parallel metallic traces of the physical electrode. In order to minimize the computational time, the X boundaries of geometry(a) correspond to the vertical center lines of the metallic traces. Simulation geometry(a) may be seen in \cref{fig:Geom(a)}. Later, in \cref{SingleStreamers,DualStreamers}, it is deduced that minimal interactivity is observed between two same-phase streamers. This is due to the limited spatial propagation of the plasma streamers on the considered timescales. Therefore, it is appropriate to simulate a section centered about just one metallic trace under the same timescales; thus, a second simulation geometry is investigated. In simulation geometry(b), only one set of electrodes is considered; it is simulated only under AC conditions and is centered about the X-axis, with the walls $1.2\,$mm away from either side of its center line. Concerns about the reduced simulation domain having an effect on the calculated electric field strengths are mitigated by the field strength naturally decreasing rapidly with the square of the distance from the electrodes. The usage of Neumann boundary conditions additionally improves the accuracy, as the simulation walls are not forced to a specific potential. Simulation geometry(b) may be seen in \cref{fig:Geom(b)}. Both considered geometries of the 2D PIC/MCC model represent a cross sectional view of the electrode structure, where the anodes and cathodes are separated along the Y-axis by the dielectric barrier. The dielectric is located in the middle of the Y-axis, is $0.500\,$mm thick, extends across the whole X-direction, and is simulated with a dielectric constant of 9.
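To make the field-solve step with zero-gradient walls concrete, the sketch below shows a schematic 2D Jacobi relaxation of Poisson's equation with Neumann boundaries. It is a didactic stand-in (uniform grid, vacuum only, no cut cells, no dielectric, no electrodes) and not the solver used in VSim; the charge-blob value is an arbitrary placeholder.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def solve_poisson_neumann(rho, dx, n_iter=500):
    """Jacobi relaxation of  laplacian(phi) = -rho/eps0  on a uniform grid
    with zero-gradient (Neumann) walls. Schematic stand-in for the field
    solve of the PIC cycle; not VSim's solver."""
    phi = np.zeros_like(rho, dtype=float)
    src = rho * dx * dx / EPS0
    for _ in range(n_iter):
        # Five-point stencil update on the interior cells.
        phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                                  + phi[1:-1, 2:] + phi[1:-1, :-2]
                                  + src[1:-1, 1:-1])
        # dphi/dn = 0 at the walls: mirror the neighbouring interior value,
        # so the boundary potential floats instead of being forced.
        phi[0, :], phi[-1, :] = phi[1, :], phi[-2, :]
        phi[:, 0], phi[:, -1] = phi[:, 1], phi[:, -2]
    return phi

rho = np.zeros((33, 33))
rho[16, 16] = 1e-3  # small positive charge blob [C/m^3] (placeholder value)
phi = solve_poisson_neumann(rho, dx=2.4e-6)
```

With all-Neumann walls the potential is only defined up to an additive constant, which is why fixing at least one electrode potential (as done via the powered and grounded electrodes in the actual geometry) is what pins the solution.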
In this representation, the Z-direction would equate to the length (or width) of the physical electrode setup but is mathematically treated as constant/homogeneous. This results in a simulation regime that is most valid for a planar section in the middle of any grid structure. In both geometries, the electrode structure itself is a geometrical composition of multiple tangent arcs resulting in a ``hump''-like structure. This electrode structure is used to approximate the real geometric structure of the metallic traces, which can be seen in \cref{fig:SEM}. It should be noted that the simulated aspect ratios of the electrode thickness and width to the dielectric thickness are significantly different from reality; however, this choice was made in order to avoid numerical issues which would arise from using an appropriately sized simulation grid for realistic aspect ratios. Furthermore, the reduced dielectric thickness of the simulations versus the actual electrode configuration should not lead to any major differences in the interpretations of this paper, as it is the surface of the dielectric that plays a much more important role. By using a reduced dielectric thickness, we are able to increase the number of computational cells available for the plasma propagation, without increasing the entire simulation domain. Particle densities and electric fields are resolved using a cutting-cell technique in order to handle the irregular geometry, through contributions of neighboring cells. The authors refer the reader to references \cite{Smithe2008,Meierbachtol2015,loverich2010} for more information. Neumann boundary conditions are used in all directions to ensure a smooth electric potential distribution at the boundaries of the simulation walls. The timesteps are non-adaptive and fixed at $2\cdot10^{-13}\,$s.
Similar to \cite{Likhanskii2010}, a single new electron super-particle is randomly added to the simulation domain at each timestep in order to account for random events such as cosmic radiation, photo-ionization, \textit{etc.}, as described in \cite{Ebert2006,E_M_van_Veldhuizen2002,Qiu2017}. These random events are beyond the scope of the available VSim functions. The seed electrons, both the background and the newly loaded electrons, are sufficient to support streamer propagation in the simulation region, yet do not interfere with the plasma bulk, as they are far fewer in number than the generated plasma. The generated plasma density profile is also much smaller than the simulation domain in both considered geometries. \subsection{Waveform variation} \label{Waveform} In all considered simulations and both geometries, the electrode(s) above the dielectric barrier are treated as the powered electrode(s) while the bottom electrode(s) are held constant at $0\,$V. This choice is arbitrary; due to the physical symmetry of the system, the opposite choice, either of inverse polarity and/or of powered electrode, would only produce mirrored results. Initially, a constant positive $8\,$kV potential is applied to geometry(a), thus the two powered electrodes take the role of the anodes while the bottom two are the cathodes. The initial electric field distribution can be seen in \cref{fig:EField}(a) and the initial potential distribution can be seen in \cref{fig:EPotential}(a). Within both figures, the magnitude of the presented quantity is shown via the color scale, and the normalized direction of the electric field is additionally presented for further clarity. The normalized direction is presented as a vector field, where the X and Y directions of the vectors are the normalized X and Y values of the electric field at that grid cell.
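The magnitude and normalized direction used for these vector-field renderings can be sketched as follows; this is a minimal illustration of the normalization described above, not the actual VSim plotting code, and the function names are hypothetical:

```python
import math

def field_magnitude(ex, ey):
    """Magnitude of a 2D electric field vector: sqrt(Ex^2 + Ey^2)."""
    return math.hypot(ex, ey)

def normalized_direction(ex, ey):
    """Unit vector used for the arrows; a zero field is left as (0, 0)."""
    mag = field_magnitude(ex, ey)
    if mag == 0.0:
        return 0.0, 0.0
    return ex / mag, ey / mag
```

For example, a field of $(3, 4)\,$V/m has magnitude $5\,$V/m and normalized direction $(0.6, 0.8)$.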
Naturally, the magnitude of the electric field is obtained from the square root of the sum of the squared X and Y components: $E_{mag} = \sqrt{E_X^2 + E_Y^2}$. First, in order to investigate solely the role of the positive streamers, only the top half of the simulation area is seeded with the initial electrons. Likewise, the bottom half is subsequently seeded in a second simulation in order to solely investigate the negative streamers. Third, both halves are identically seeded, thereby investigating the interplay and differences of both discharges igniting simultaneously under the DC voltage conditions. These three conditions are applied to geometry(a) only. Lastly, a varying voltage waveform is investigated. \begin{figure}[t] \centering \includegraphics[width=0.4425\textwidth]{ACVoltage-eps-converted-to.pdf} \caption{Applied voltage waveform of the AC simulations. Dashed lines labeled a through f at 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns respectively represent the timestamps at which results are presented in \cref{ACStreamers}.} \label{fig:ACPulseform} \end{figure} Geometry(b) is only investigated under the AC conditions shown in \cref{fig:ACPulseform}. Under these conditions, the role of the anode and cathode switches twice, giving insights into the dynamics of fast polarity switching. Initially, the applied voltage potential sharply rises within $0.1\,$ns to the $8\,$kV maximum, which is then held constant for $0.7\,$ns. During this time, the anode is located on the top side of the dielectric barrier. At $0.8\,$ns, the voltage is decreased at the same rate, $80\,$kV/ns, reaching the minimum applied voltage of $-8\,$kV at $1\,$ns, making the top side of the dielectric barrier the cathode. Again, this minimum value is held constant for $0.7\,$ns until switching back to the positive $8\,$kV potential, again switching the location of the anode and cathode.
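The applied waveform described above can be written as a simple piecewise-linear function of time. The sketch below uses the levels and switching times from the text; the completion of the final rise at $1.9\,$ns is an assumption that mirrors the $0.2\,$ns fall between $0.8\,$ns and $1.0\,$ns:

```python
# Sketch of the bipolar AC waveform (times in ns, voltage in V).
# Breakpoints: ramp to +8 kV by 0.1 ns, hold, fall to -8 kV over 0.8-1.0 ns,
# hold, then rise back; the 1.9 ns completion of the final rise is an
# assumption mirroring the 0.2 ns fall time.
TIMES = [0.0, 0.1, 0.8, 1.0, 1.7, 1.9, 2.0]
VOLTS = [0.0, 8e3, 8e3, -8e3, -8e3, 8e3, 8e3]

def applied_voltage(t_ns):
    """Piecewise-linear interpolation of the applied potential at time t_ns."""
    if t_ns <= TIMES[0]:
        return VOLTS[0]
    for (t0, v0), (t1, v1) in zip(zip(TIMES, VOLTS), zip(TIMES[1:], VOLTS[1:])):
        if t_ns <= t1:
            return v0 + (v1 - v0) * (t_ns - t0) / (t1 - t0)
    return VOLTS[-1]
```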
Without considering any plasma propagation, the base electric field distributions for both a positive and a negative applied potential are shown in \cref{fig:EField} and the equivalent potential distributions can be seen in \cref{fig:EPotential}. All conditions are simulated for up to a maximum of $2\,$ns, thereby only revealing the inception phase of the streamers. The insights revealed within \cref{Results} are consistent with other PIC/MCC models investigating DBD streamers on structured and porous catalytic surfaces \cite{Zhang2018,Zhang2018Porous}, which are also simulated on ns timescales. Additionally, the phenomenon of a floating positive surface discharge is also observed in various fluid models \cite{Babaeva2016,Yan2014}. Therefore, the authors believe the results presented throughout this paper, even given the short time scales, are reasonable. The results reported below are meant for a qualitative understanding of the streamer dynamics in a twin SDBD. General conclusions for more natural voltage waveforms, such as continuous sine waves, can be drawn and could warrant further studies considering a real RF source. However, the results obtained in this work are particularly relevant for tailored voltage waveforms, which are a hot topic of current research and are trending towards shorter pulses and steeper rise times. \section{Results and Discussion} \label{Results} \subsection{Single Streamer Dynamics} \label{SingleStreamers} \begin{figure*}[t] \centering \includegraphics[width=0.885\textwidth]{ne_up_log-eps-converted-to.pdf} \caption{Spatial profiles of the electron density plotted on a logarithmic intensity scale of the positive streamer simulations with constant voltage. Sub figures (a) and (b) correspond to the timestamps of 0.2 and $1.0\,$ns, respectively.
Features of importance are labeled, where the annotations are as follows: I) positively charged streamer head leading to streamer propagation, II) shaded region showing location of electron depletion, \textit{i.e.} sheath like feature, III) potential/failed positive streamer branch.} \label{fig:DC PositiveStreamer} \end{figure*} Under the $8\,$kV DC conditions with seed electrons present only on the anodic side of geometry(a), the propagation of an anodic-phase plasma streamer, also known as a positive streamer, is simulated and presented in \cref{fig:DC PositiveStreamer}. The initial electric field distribution is shown in \cref{fig:EField}(a) and the initial electric potential distribution is shown in \cref{fig:EPotential}(a). Under these conditions, a cathode-directed, positively charged streamer head forms that is able to move freely from the metallic anode to the dielectric surface. \begin{figure*}[t] \centering \includegraphics[width=0.885\textwidth]{ne_down_log-eps-converted-to.pdf} \caption{Spatial profiles of the electron density plotted on a logarithmic intensity scale of the negative streamer simulations with constant voltage. Sub figures (a) and (b) correspond to the timestamps of 0.2 and $1.0\,$ns, respectively. Features of importance are labeled, where the annotations are as follows: I) positively charged region leading to positive streamer like propagation, II) shaded region showing location of electron depletion, \textit{i.e.} sheath like feature.} \label{fig:DC NegativeStreamer} \end{figure*} The streamer structure is anchored to the anode just above where the highest electric fields are located. It would be expected that the anchoring would take place at the location of the highest electric field; however, under these conditions this is located at the intersection of the electrode and the dielectric surface.
At this point, and immediately next to it, due to the strong curvature of the electric field, electrons do not have enough space to gain sufficient energy for ionization. Multiple executions of the simulation produce anchor positions at the same location; furthermore, the anchor position is also at a symmetrical position on the opposite anode, which is not presented in \cref{fig:DC PositiveStreamer}. This suggests that the anchor is positioning itself based on the strong curvature of the anode, and not through the randomness of the ionization events. Indeed, when looking at the curvature of the simulated electrode, it appears that the plasma sits next to the region of strongest curvature. Under no conditions did the simulated positive streamers extend significantly in the X-direction, such that interactions between the two positive streamers do not need to be considered. At $0.2\,$ns the positive streamer has advanced $0.12\,$mm, corresponding to a propagation speed of $0.62\,$mm/ns. By the end of the simulated time, $1.0\,$ns, the streamer had largely stopped propagating. The positive streamer had reached a propagation distance of $0.31\,$mm, resulting in an average speed of $0.31\,$mm/ns. The actual instantaneous speed of the streamer would be significantly slower at this timestamp, as the average includes the faster propagation of the early streamer. It was observed via multiple test executions that these propagation speeds and distances were highly dependent on the initial background electron density. With lower initial densities, the simulated streamer propagates a shorter distance. Likewise, larger background densities would result in faster speeds and longer propagation distances. Initially, the positive streamer began to propagate along the electric field lines at an angle offset from the surface of the dielectric barrier.
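The quoted speeds are averages, \textit{i.e.} the propagation length divided by the elapsed time. A minimal sketch using the single positive streamer lengths from \cref{tab:SizeAndSpeed} reproduces the tabulated values:

```python
# Sketch: average propagation speed = streamer length / elapsed time.
# Lengths (um) and times (ns) for the single positive streamer, taken from
# the table of DC streamer results.
def average_speed(length_um, t_ns):
    return length_um / t_ns  # um/ns

v_02 = average_speed(123.4, 0.2)  # ~617 um/ns, i.e. ~0.62 mm/ns at 0.2 ns
v_10 = average_speed(305.5, 1.0)  # ~306 um/ns, i.e. ~0.31 mm/ns over 1.0 ns
```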
The positive streamer head, which is not directly visible in \cref{fig:DC PositiveStreamer}, forms in front of the streamer and along the bottom side between the bulk plasma and the dielectric barrier. The streamer head is annotated in \cref{fig:DC PositiveStreamer} with an arrow labeled (I). Between the dielectric barrier and the positively charged streamer head lies a sheath-like region, annotated via (II), where free electrons are attracted to the streamer head; however, they do not have enough space to promote further propagation towards the dielectric. Therefore, the only direction possible is outwards along the X- and positive Y-directions, towards the center of the simulated area. As the streamer continues to propagate along this direction, the electric field gets weaker in proportion to $1/r$ (in 2D) or $1/r^2$ (in 3D), where $r$ is the distance from the electrode. Thus the positive streamer is able to advance in a somewhat straight line, parallel to the initial trajectory, which is at some angle to the dielectric surface; under the presented conditions this trajectory angle was determined to be $20.6^\circ$. The further the streamer propagates, the more space is available for propagation in the negative Y-direction, towards the dielectric surface. Therefore, in \cref{fig:DC PositiveStreamer}(b), a potential branch had begun to take shape, annotated with (III); however, it is not able to fully develop. As the cathode is located underneath the positive streamer, that is the only location of the streamer head; therefore, no branching occurs above the streamer bulk. Due to the location of the failed branch in \cref{fig:DC PositiveStreamer}(b)(III), it would be extremely difficult to observe experimentally; it is noticeable within these simulations because of the kinetic nature of PIC/MCC models. Naturally, without experimental evidence, the reader might question whether branching truly forms at these orientations.
The authors believe that the simulations are indeed accurate in predicting these features. In \cref{fig:DC NegativeStreamer} the same simulation conditions are presented, except that the initial seed electrons are on the cathode side of the dielectric barrier, thus the negative streamer is simulated. The seed electrons are still accelerated in the opposite direction of the electric field lines shown in \cref{fig:EField}(a). An electron avalanche directed towards the anode initiates the discharge. Under these conditions the electrons are pushed towards the dielectric, where they begin to collect on and charge the surface of the dielectric. A positively charged spatial region forms next to the cathode but is unable to anchor to it, instead floating at some distance away from the cathode. \begin{figure*}[t] \centering \includegraphics[width=0.885\textwidth]{ne_all_log-eps-converted-to.pdf} \caption{Spatial profiles of the electron density plotted on a logarithmic intensity scale of the dual streamer simulations with constant voltage. Sub figures (a) and (b) correspond to the timestamps of 0.2 and $1.0\,$ns, respectively. Features of importance are labeled, where the annotations are as follows: I) positively charged region/streamer head, II) location of electron depletion, \textit{i.e.} sheath like feature, III) potential/failed positive streamer branch.} \label{fig:DC BothStreamers} \end{figure*} Newly created background electrons are pushed away from the cathode. Simultaneously, the electrons are attracted towards the positively charged region. Outside of the sheath region between the two, marked via an arrow labeled (II) in \cref{fig:DC NegativeStreamer}, these two directions are opposite one another. Only a very small number of electrons are accelerated towards the positive charges with enough energy to cause ionization. Therefore, minimal propagation of the negative streamer parallel to the cathode surface takes place, as depicted via (I).
Newly created background and avalanche electrons that reach the dielectric surface, instead of the positively charged spatial region, help to promote the propagation of the negative streamer along the surface of the dielectric in the X-direction, away from the cathode and towards the center of the simulation area. However, no distinctly visible negatively charged streamer head is directly observable. At $0.2\,$ns the negative streamer has advanced $0.077\,$mm, corresponding to a propagation speed of $0.39\,$mm/ns. By the end of the simulated time, $1.0\,$ns, the streamer had largely stopped propagating. The negative streamer had reached a propagation distance of $0.25\,$mm, resulting in an average speed of $0.25\,$mm/ns. The actual instantaneous speed of the streamer would be significantly slower at this timestamp, as the average includes the faster propagation of the early streamer. As with the positive streamer, lower and higher initial electron densities result in shorter and longer propagation distances, respectively. Furthermore, under no conditions did the two simulated negative streamers next to both cathodes extend significantly in the X-direction, such that interactions between the two negative streamers do not need to be considered. \subsection{Dual Streamer Dynamics - DC} \label{DualStreamers} \begin{figure*}[t] \centering \includegraphics[width=0.885\textwidth]{Charge_all-eps-converted-to.pdf} \caption{Spatial profiles of the charge disparity plotted on a diverging intensity scale of the dual streamer simulations with constant voltage. Sub figures (a) and (b) correspond to the timestamps of 0.2 and $1.0\,$ns, respectively.
Features of importance are labeled with arrows, where the annotations are as follows: I) positively charged region/streamer head, II) surface charges which are visually hidden by the mask of the dielectric barrier, III) potential/failed positive streamer branch.} \label{fig:DC BothStreamersCharge} \end{figure*} Presented in \cref{fig:DC BothStreamers,fig:DC BothStreamersCharge} is the complete DC scenario, where seed electrons are present on both the anodic and cathodic sides of the dielectric barrier. The same positive $8\,$kV DC voltage is used. Comparing \cref{fig:DC PositiveStreamer}(a), \cref{fig:DC NegativeStreamer}(a), and \cref{fig:DC BothStreamers}(a), small differences are observed at $0.2\,$ns. Primarily, the sizes and overall density of both the positive and negative streamers have increased. The positive streamer has advanced $0.15\,$mm while the negative streamer has advanced $0.088\,$mm away from the anodes and cathodes, respectively. By $1.0\,$ns both streamers have significantly increased in size and average density compared to \cref{fig:DC PositiveStreamer}(b) and \cref{fig:DC NegativeStreamer}(b). Failed branches on the positive streamer are still present. The positive streamer has advanced a total of $0.41\,$mm while the negative streamer advanced a total of $0.27\,$mm. \Cref{tab:SizeAndSpeed} summarizes the streamer thickness, length, propagation angle, and propagation speed for the positive and negative streamers under all three simulation conditions. The propagation angle is determined as the angle at which the positive streamer propagates away from the dielectric surface, and is treated as $0^\circ$ for the negative streamer. The streamer length and thickness are the extents of the streamer along the axes parallel and perpendicular to the streamer propagation direction, respectively.
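As a consistency check, the lateral speed reported in \cref{tab:SizeAndSpeed} follows from projecting the propagation speed onto the X-axis via the propagation angle. A sketch, with values taken from the $0.2\,$ns rows of the table:

```python
import math

# Sketch: the lateral (X) speed is the projection of the propagation speed
# onto the dielectric surface, v_lat = v_prop * cos(angle).
def lateral_speed(v_prop, angle_deg):
    return v_prop * math.cos(math.radians(angle_deg))

# Values from the 0.2 ns rows of the DC streamer table (um/ns, degrees):
v_single = lateral_speed(617.1, 20.60)  # ~577.7 um/ns, matching the table
v_full = lateral_speed(748.61, 14.80)   # ~723.8 um/ns, matching the table
```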
On the anodic side of the dielectric, the positively charged streamer head of the positive streamer is facing the dielectric surface, which can be seen as the red charges in \cref{fig:DC BothStreamersCharge}. This positively charged area acts as a virtual anode that leads to an enhanced electric field in both the X- and Y-directions below the dielectric surface on the cathodic side. Additionally, the positive streamer has a high charge density. The enhanced field and high density promote the expansion of the negative streamer along the surface of the dielectric in the X-direction. The negative streamer thus charges the surface of the dielectric even more. These negative surface charges along the dielectric barrier on the cathodic side act as a virtual cathode, enhancing the electric field in both the X- and Y-directions above the dielectric. Thus, the negative streamer also facilitates an easier expansion of the positive streamer in the X-direction. Here it is clear that both streamers work in unison, increasing the effective plasma surface coverage and the volume of both streamers. Naturally, the electric field reduces in proportion to the square of the distance from the electrodes, such that the positive and negative streamers are eventually no longer able to expand any further, even with their cooperative effect being considered. Therefore, as with the single-phase streamer simulations, interactions with the neighboring discharges on the right-hand side of the simulation domain do not need to be taken into consideration.
\begin{table*}[t] \centering \begin{tabular}{|c|l|c c|c c|c|c c|} \hline \multirow{2}{*}{Time} & \multirow{2}{*}{DC Streamer} & \multicolumn{2}{c|}{Thickness [$\mu$m]} & \multicolumn{2}{c|}{Length [$\mu$m]} & Angle [$^\circ$] & \multicolumn{2}{c|}{Speed [$\frac{\mu\mathrm{m}}{\mathrm{ns}}$]}\\ & & Average & Maximum & Average & Maximum & & Propagation & Lateral \\ \hline \hline \rowcolor{gray!25} \cellcolor{white} & Positive & 38.39 & 49.20 & 123.4 & 170.4 & 20.60 & 617.1 & 577.7\\ \rowcolor{white} \cellcolor{white} & Negative & 63.66 & 133.2 & 77.70 & 158.4 & -- & -- & 388.5\\ \rowcolor{gray!25} \cellcolor{white} & Full (+) & 40.27 & 54.00 & 149.72 & 206.4 & 14.80 & 748.61 & 723.77\\ \rowcolor{white} \cellcolor{white}\multirow{-4}{*}{$0.2\,$ns} & Full (-) & 64.63 & 128.4 & 88.20 & 166.8 & -- & -- & 441.0\\ \hline \rowcolor{gray!25} \cellcolor{white} & Positive & 60.07 & 76.80 & 305.5 & 410.4 & 13.30 & 305.5 & 297.3\\ \rowcolor{white} \cellcolor{white} & Negative & 79.10 & 115.2 & 247.9 & 372.0 & -- & -- & 247.9\\ \rowcolor{gray!25} \cellcolor{white} & Full (+) & 65.92 & 80.40 & 409.17 & 516.0 & 10.50 & 409.17 & 402.31\\ \rowcolor{white} \cellcolor{white}\multirow{-4}{*}{$1.0\,$ns} & Full (-) & 97.66 & 157.2 & 271.0 & 429.6 & -- & -- & 271.0\\ \hline \end{tabular} \caption{Extracted average and maximum streamer thicknesses and lengths of the DC streamer simulations at both output timestamps of $0.2\,$ns and $1.0\,$ns. The thickness and length are treated as the sum of cells perpendicular and parallel to the streamer propagation direction. The direction of the negative streamers is treated as parallel to the dielectric surface, while the angle of incidence of the positive streamers is determined in post analysis. The propagation speed is determined as the length of the streamer divided by the elapsed time.
The lateral speed is the X-component of the propagation speed.} \label{tab:SizeAndSpeed} \end{table*} In essence, the positive streamer and the negative streamer work together to promote propagation. Both of the streamers are acting against the potential energy barrier of ionization and the ever decreasing electric field strength. Therefore, with the simultaneous ignition of both positive and negative streamers in a twin SDBD system, the surface coverage and plasma volume are significantly increased when compared to a submerged symmetric SDBD system. When comparing the average lengths of the single and dual streamers in \cref{tab:SizeAndSpeed}, the positive streamer sees an increase in propagation length of $17.6 - 25.3\,\%$ and the negative streamer sees an increase of $8.5 - 11.9\,\%$ when both streamers are simultaneously ignited. \subsection{Dual Streamer Dynamics - AC} \label{ACStreamers} Due to the minimal extension of the plasma into the free space above and below the dielectric surface in the simulations discussed in \cref{SingleStreamers,DualStreamers}, the simulated area was shifted horizontally to be centered about a single electrode pair, and reduced in width. Under this geometry, geometry(b), a bipolar AC square voltage profile with fast rise and short pulse times is simulated, shown in \cref{fig:ACPulseform}. Seed electrons are placed both above and below the dielectric barrier. Under such conditions, during the first positive pulse the plasma propagates nearly identically to the DC case discussed in \cref{DualStreamers,fig:DC BothStreamers,fig:DC BothStreamersCharge}. However, here it is observed that two near-mirror discharges simultaneously propagate to either side of the horizontal center of both the anode and cathode. For reasons of consistency, only the right half of the simulated area is shown, as seen in \cref{fig:Geom(b)}.
If shown, minimal differences between the left and right discharges would be seen, which may be attributed to the stochastic nature of the PIC/MCC code and the random seed electrons injected at each timestep. Additionally, the implemented rise time of the voltage waveform from $0\,$V to $+8\,$kV within $0.1\,$ns introduces few differences, except perhaps a slightly reduced overall density and propagation distance. The electron density distribution, positive ion density distribution, \textit{i.e.} the summation of N$_2^+$ and O$_2^+$ ions, charge disparity distribution, and electric field magnitude and direction are shown in \cref{fig:AC_Dens,fig:AC_ionDens,fig:AC_Charge,fig:AC_EField}, respectively. Sub-figures (a) through (f) of each correspond to identical timestamps of interest, shown with respect to the voltage waveform in \cref{fig:ACPulseform}. \begin{figure*}[p] \centering \includegraphics[width=0.885\textwidth]{ne3x2-eps-converted-to.pdf} \caption{Spatial profiles of the electron density plotted on a logarithmic intensity scale at six chosen time stamps of the multi streamer simulations with switching voltage. Sub figures (a) through (f) correspond to the timestamps of 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns, respectively. The applied voltages are respectively written within the electrode profiles.
Features of importance are labeled with arrows, where the annotations are as follows: I) positively charged region leading to streamer propagation, II) shaded region showing location of electron depletion, \textit{i.e.} sheath like feature, III) potential/failed/completed positive streamer branch.} \label{fig:AC_Dens} \end{figure*} \begin{figure*}[p] \centering \includegraphics[width=0.885\textwidth]{N2O2ions-eps-converted-to.pdf} \caption{Spatial profiles of the positive ion density, \textit{i.e.} the summation of N$_2^+$ and O$_2^+$ ions, plotted on a logarithmic intensity scale at six chosen time stamps of the multi streamer simulations with switching voltage. Sub figures (a) through (f) correspond to the timestamps of 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns, respectively. The applied voltages are respectively written within the electrode profiles.} \label{fig:AC_ionDens} \end{figure*} \begin{figure*}[p] \centering \includegraphics[width=0.885\textwidth]{charge3x2-eps-converted-to.pdf} \caption{Spatial profiles of the charge disparity plotted on a diverging intensity scale at six chosen time stamps of the multi streamer simulations with switching voltage. Sub figures (a) through (f) correspond to the timestamps of 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns, respectively. The applied voltages are respectively written within the electrode profiles.
Features of importance are labeled with arrows, where the annotations are as follows: I) positively charged region leading to streamer propagation, II) surface charges which are visually hidden by the mask of the dielectric barrier, III) potential/failed/completed positive streamer branch.} \label{fig:AC_Charge} \end{figure*} \begin{figure*}[p] \centering \includegraphics[width=0.885\textwidth]{Efield3x2-eps-converted-to.pdf} \caption{Spatial profiles of the absolute value of the electric field plotted on a linear intensity scale, together with directional arrows, at six chosen time stamps of the multi streamer simulations with switching voltage. Sub figures (a) through (f) correspond to the timestamps of 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns, respectively. The applied voltages are respectively written within the electrode profiles. The cut-off value for the minimum intensity scale (white) is chosen as $1\cdot10^{6}\,$V/m. The direction of the electric field is shown via the normalized vector field as discussed in \cref{Waveform}.} \label{fig:AC_EField} \end{figure*} Between $0.8\,$ns and $1.0\,$ns the applied voltage is reduced; at $0.9\,$ns the applied voltage is $0\,$V, after which the roles of the anode and cathode switch. Due to the polarity switch the electric field is reversed, thus the electrons move in the opposite direction. Free electrons present in the streamer above the dielectric move away from the metallic electrode, which is now the cathode. Likewise, electrons from the bulk of the streamer below the dielectric move towards the electrode that is now the anode. Electrons along the surface of the dielectric remain attached and do not move. At $1.0\,$ns the voltage on the cathode has reached its minimum value of $-8\,$kV, where it stays constant for a further $0.7\,$ns, after which a second polarity switch takes place. All the while, the positive ion densities very closely follow the electron density profiles.
\subsubsection{$1^{st}$ Polarity Shift - Positive to Negative Streamer} \hfill\\ Focusing on the top half of the simulation domain highlights the shift from a positive streamer to a negative streamer. As the voltage on the top electrode drops from $+8\,$kV to $0\,$V between $0.8\,$ns and $0.9\,$ns, sub-figures (a) and (b) respectively of \cref{fig:AC_Dens,fig:AC_ionDens,fig:AC_Charge,fig:AC_EField}, the electrons are not accelerated as strongly as before. The electrons relax and shift slightly inwards towards the streamer bulk and the positively charged streamer head. The plasma volume slightly shrinks, and the electron distribution becomes more compact while the overall density increases. The positively charged streamer head reduces in thickness and disparity, \textit{i.e.} becomes more quasi-neutral. As the electrons are only weakly, and eventually no longer, attracted to the metallic anode, a positive space charge builds up at the streamer anchor on the anode. These two effects lead, respectively, to a reduced electric field strength between the streamer and the dielectric surface, and to a very strong electric field between the anode and the streamer anchor. At $0.9\,$ns the quasi-neutral streamer has a slight net positive charge, and thus takes on the role of a virtual anode along the boundaries of the streamer, meaning that the electric field above the streamer has reversed in the X-direction, but not the Y-direction. Between $0.9\,$ns and $1.0\,$ns, sub-figures (b) and (c) respectively, the applied voltage is negative, thus the metallic electrode is now the cathode and the dielectric surface is the anode. Due to the reversed electric field, electrons within the streamer begin falling to the dielectric surface.
Along the way, the remaining positive charges in the streamer head are flooded with electrons such that no charge disparity is noticeable, as it can be seen that the positive ion densities between the positive streamer and the dielectric do not change between these time steps. During this process, the electric field between the streamer and the dielectric surface completely reverses in both the X- and Y-directions. Naturally, falling electrons starting at locations where the streamer began to branch off but could not expand reach the dielectric surface first. As the electrons are accelerated towards the dielectric surface, further ionization events take place, creating new ions and electron avalanches. The electrons that first reach the dielectric charge the surface and repel other electrons in the X-direction away from the cathode, increasing the plasma propagation length. The original streamer is now acting like a negative streamer. By $1.7\,$ns, sub-figure (d), the negative streamer has charged the top surface of the dielectric and almost doubled the lateral length of the original positive streamer. Between $0.9\,$ns and $1.0\,$ns, the electrons near the streamer anchor/tail completely break away from the metallic cathode as they are pushed away from it; the positive ions, however, do not move. This results in a net positive charge being left behind. Thus a new positively charged streamer head forms between the cathode and the streamer bulk, both above and below the streamer. Along with this new streamer head forms an extremely high electric field in the local proximity, oriented away from the positive charges towards the cathode. Newly created electrons above the cathode and the streamer bulk, which is acting as an anode, are attracted to the streamer head and a small branch begins to form. This branch is shown in \cref{fig:AC_Dens,fig:AC_Charge}(c) with arrows labeled (III), and is also visible in \cref{fig:AC_ionDens}(c).
As the simulation progresses in time, new electrons are continuously attracted towards this branch, gain energy, and eventually cause ionization. A cathode directed positively charged streamer head propagates along and floats above the cathode. A near mirror branch simultaneously forms on the other side of the metallic grid, which is not shown. Due to the positively charged streamer heads leading both of these branches, they repel one another; therefore, neither branch is able to reach the other. By $1.7\,$ns, sub-figure (d), the branch has completely developed. Through multiple executions, it has been observed that this branching does not take place if the applied voltage, and thus the electric field between the streamer head and cathode, is too low. It should be noted that the initial branching has a very similar structure to the positively charged spatial region of the negative streamers in \cref{fig:DC NegativeStreamer,fig:AC_Dens,fig:AC_ionDens,fig:AC_Charge}(a), and \cref{fig:DC BothStreamers}(b). One could expect that, given a high enough voltage, the positive space charges would continue to wrap around the cathode in the same manner as the branching in \cref{fig:AC_Dens,fig:AC_Charge}(c) and (d). Therefore, the branching should not be considered as solely limited to the polarity switches, but rather as being encouraged by them. As with the positive streamer in the DC case, discussed in \cref{SingleStreamers}, the authors believe these simulated branching mechanisms are accurate, even given the difficulty of observing them experimentally. \subsubsection{$1^{st}$ Polarity Shift - Negative to Positive Streamer} \hfill\\ Attention now turns to the bottom half of the simulation regime, which tracks the shift of the negative streamer to a positive streamer between $0.8\,$ns and $1.0\,$ns. During this time, the applied voltage is switched from $+8\,$kV to $-8\,$kV; however, the bottom electrode is held at a constant $0\,$V.
As the applied voltage changes polarity, the bottom electrode also switches roles, now becoming the anode. Unlike the top half, the relaxation of the electric field causes a small shift in the bulk electrons, which leads to a large increase in the streamer size as the electrons are pushed away from the dielectric and towards the metallic electrode. Similar to the top half, the average electron density slightly increases and the charge disparity in the positively charged streamer head near the now anode reduces. This eventually leads to the streamer attaching to the anode, seen in sub-figure (c), as electrons are freely absorbed by it. This motion also leads to the creation of a strong positive ion density at the anchor position, as seen in \cref{fig:AC_ionDens}(c). Furthermore, a small positive space charge forms between the negatively charged dielectric surface and the bulk plasma as the electrons are pushed away from the dielectric surface, but the positive ions do not move. However, the electrons that had attached to the surface do not desorb within the simulation, nor are electrons emitted due to surface field emission or ion induced secondary electron emission, nor are electrons reflected. The newly formed positively charged head and the negative surface charges form a very high electric field in a very thin sheath like structure between the streamer and the dielectric surface by $1.0\,$ns. The positively charged streamer head is floating above the surface, which is acting as the cathode; however, due to the original proximity of the bulk plasma to the surface and the surface charges, the streamer head remains very close to the surface. The proximity of the streamer head limits the ability of the streamer to propagate in the X-direction. As electrons are continuously pushed away from the dielectric surface, the thickness of the streamer head and consequently the sheath like region increase.
Eventually, near the ``tip'' of this region along the X-direction, newly generated electrons outside of the plasma bulk are sufficiently attracted towards the positive charges. This leads to the streamer head curling around the ``tip'' of the streamer bulk, providing a virtual anode for further newly created electrons to be attracted to. Sufficiently energetic electrons promote propagation further in the X-direction, extending the plasma. This propagation also significantly extends in the Y-direction away from the dielectric surface, as electrons created near the surface do not gain enough energy for ionization. This causes the streamer to properly float above the dielectric surface, which can be seen at $1.7\,$ns in sub-figure (d), as expected of a cathode directed positively charged streamer head. The increased propagation length is not as significant as for the streamer on the top half of the dielectric, due to the limiting effect that the surface streamer exhibited. If the surface of the dielectric were not considered a pure absorber, then the emission and reflection features would provide an additional electron source that would promote the expansion and propagation of the streamer after the voltage had switched. \subsubsection{$2^{nd}$ Polarity Shift} \hfill\\ Between $1.7\,$ns and $1.9\,$ns, the applied voltage begins to switch again, this time rising from $-8\,$kV to $+8\,$kV. At $1.8\,$ns, the second polarity change occurs. Due to limited computational resources, the simulation was not executed for a second full positive cycle, and was instead ended at $2.0\,$ns. During this polarity switch, the same changes in the positive and negative streamers are observed. On the bottom half of the simulation, the shift from a positive streamer at $1.7\,$ns, sub-figure (d), to a negative streamer is observed.
When the applied voltage is $0\,$V at $1.8\,$ns, sub-figure (e), it can be seen that the floating positively charged streamer head is beginning to be flooded, while a new positively charged streamer head is forming near the metallic cathode. By $2.0\,$ns, sub-figure (f), the streamer bulk has mostly reached the dielectric surface again, has expanded further in the X-direction, and a new positive streamer branch forms near the cathode. This branch is expected to behave like the one discussed above. On the top half of the simulation, not only the shift from a negative to a positive streamer is observed, but also the beginning of the collapse of the positive streamer branch. As already explained and expected, at $1.8\,$ns the positively charged streamer heads of both the negative streamer and the streamer branch near the metallic electrode are flooded by electrons moving towards the new anode. At $2.0\,$ns the anchor of the main streamer on the anode is fully formed; however, the electrons within the branch have a further distance to travel and have not yet reached the anode. As the polarity switches at $1.8\,$ns, electrons near the dielectric surface are repelled away and a floating positively charged streamer head forms. At $2.0\,$ns this streamer head is beginning to wrap around the large streamer bulk to promote further expansion in the X- and Y-directions, away from the metallic anode and the dielectric surface respectively. \subsubsection{Summary of Polarity Shifts} \hfill\\ During both polarity shifts, similar and important events take place in the respective positive and negative streamers. The negative streamer is initially attached to the anodic dielectric surface, and floats away from the metallic cathode. As the polarity changes, the electrons reverse direction, attaching to the metallic anode and forming a positively charged streamer head near the dielectric surface.
Newly created electrons are quickly attracted to the streamer head and thus allow the now positive streamer to propagate further in the X- and Y-directions, thereby increasing the volume and overall density of the streamer. The positive streamer is initially floating away from the cathodic dielectric surface, and attached to the metallic anode. As the polarity changes, electron avalanches are instigated and rush towards the dielectric surface, thereby drastically increasing the plasma density, volume, propagation length, and surface coverage. Additionally, as a positively charged streamer head and sheath like region form near the metallic cathode, newly created electrons are able to instigate an additional positive streamer branch that floats above the metallic cathode. This branching feature also drastically increases the electron density and volume. Given a high enough initial voltage, it is expected that this positive streamer branch could form on the negative streamer before any polarity switching occurs. The increase in plasma densities, volume, and surface coverage is expected to be directly beneficial to various applications such as plasma enhanced catalysis and gas treatment. In plasma enhanced catalysis, the dielectric surface will typically be coated with a catalyst, such that any increase in surface coverage directly increases the active area of the catalyst. Additionally, any increase in plasma volume and density will naturally increase the radical densities which are available to react with either the catalytic surface or the treatment gas that the plasma is ignited in, thus directly affecting the efficiency of the process. \section{Conclusions and Outlook} \label{Conclusion} In this work, the plasma streamer propagation of a twin SDBD setup was modeled by means of a PIC/MCC code in dry air under DC and AC voltage operation. The AC driving voltage waveform corresponded to a nanosecond square waveform with sub-nanosecond risetimes.
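The timing of this waveform, as quoted throughout the results (polarity swings between $0.8$ and $1.0\,$ns and between $1.7$ and $1.9\,$ns), can be sketched as a simple piecewise-linear function. This is only an illustrative reconstruction: the linear ramps stand in for the sub-nanosecond risetimes, and the function name is not taken from the simulation code.

```python
def applied_voltage_kv(t_ns):
    """Illustrative bipolar square-pulse waveform (kV) versus time (ns).

    Reconstructed from the times quoted in the text: +8 kV until 0.8 ns,
    a swing to -8 kV between 0.8 and 1.0 ns (passing 0 V at 0.9 ns), a
    hold at -8 kV, and a swing back to +8 kV between 1.7 and 1.9 ns.
    The linear ramps are an assumption standing in for the
    sub-nanosecond risetimes.
    """
    if t_ns < 0.8:
        return 8.0
    if t_ns < 1.0:
        return 8.0 - 80.0 * (t_ns - 0.8)   # +8 kV -> -8 kV over 0.2 ns
    if t_ns < 1.7:
        return -8.0
    if t_ns < 1.9:
        return -8.0 + 80.0 * (t_ns - 1.7)  # -8 kV -> +8 kV over 0.2 ns
    return 8.0

# Sample the waveform at the snapshot times discussed in the text.
for t in (0.5, 0.9, 1.4, 1.8, 2.0):
    print(t, applied_voltage_kv(t))
```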
The twin SDBD geometry, being fully exposed and symmetric about the dielectric layer, promotes both positive and negative streamer discharges igniting simultaneously along the edges of both the anode and the cathode. This symmetry has not been theoretically investigated extensively, leaving open the question of, among others, how the streamers affect one another. In order to provide insight into this question, multiple scenarios were simulated. First, the propagation of a positive streamer and a negative streamer were simulated individually under identical DC conditions. Second, both streamers were allowed to propagate using the same DC conditions, thereby providing insight into the interplay of the two streamers. However, the main focus of the paper is on how the streamers interact and change under AC conditions; therefore, a short multi-nanosecond bipolar square pulse is used to approximate said conditions. It was first shown that both the positive and negative streamers behave as expected under DC conditions. Both streamers form a quasi neutral bulk. The positive streamer forms and propagates via a floating cathode directed positive streamer head, while the negative streamer propagates via an electron avalanche along the surface of the dielectric barrier. The negative streamer also forms a positive space charge that floats above the metallic cathode. The floating positive space charges of both the positive and negative streamer must float, as new electrons introduced between the cathode and said space charges are not able to gain enough energy for new ionization events. It was then shown that the interaction of both streamers under DC conditions does not significantly alter the propagation mechanisms, but that the positive streamer ``pulls'' the negative streamer while simultaneously being ``pushed'' by the negative streamer, effectively increasing the surface coverage and the densities of the plasma streamers.
The speed of propagation of both streamers differs when they are simulated individually versus simultaneously. The positively charged streamer head of the positive streamer propagates away from the anode, providing an enhanced electric field that the negative charges of the avalanche of the negative streamer then follow. Likewise, the negative streamer charges the dielectric surface, which then helps to push the positively charged streamer head of the positive streamer further away from the anode. Next, the interactions of the two streamers under switching voltage conditions were investigated. The fast polarity switching of the applied voltage causes significant changes in the streamers. The switches from a positive streamer to a negative streamer, and vice versa, were observed to cause a significant increase in both plasma size and density due to effects similar to those that take place in the DC scenario. It was also observed that additional positive streamer branches are able to form between the negative streamer and the cathode under the given conditions. The initial branching structure is very similar to structures that formed on the negative streamer during DC conditions and during the AC conditions before any voltage switches. It is therefore hypothesized that the voltage switching allows a branch to form more easily, but that branching is still subject to some minimum applied voltage for a given set of geometrical conditions. Overall, an electrode geometry allowing two oppositely-phased plasmas to ignite simultaneously is beneficial with respect to plasma size and density. The two fully exposed electrodes create strongly curled electric fields that promote the ignition of plasma streamers near the surface of the dielectric. The simultaneous ignition of the streamers enhances the lateral electric fields, causing the streamers to propagate further away from the metallic electrodes than they would if one electrode were submerged.
This effect is even further enhanced if the applied voltage is able to switch polarities quickly before the streamers have a chance to self extinguish; however, such a fast voltage profile is difficult to achieve experimentally, which should be kept in mind when comparing any numerical information from this paper with experiments. Nonetheless, the enhanced electric fields also allow the plasma to achieve higher densities, which is desirable in many applications. In plasma enhanced catalysis applications, one might want to coat the dielectric surface with a catalyst. Having an enhanced plasma propagation length would directly correlate to an increased surface area of the catalyst that is directly affected by the plasma, leading to a potentially enhanced efficiency. In gas treatment applications, an increased plasma density is typically desirable in order to increase the rate of molecular fragmentation and/or purification. Future experimental measurements and theoretical or numerical investigations on the electrode geometry could optimize an electrode system for a given set of applications. Additional simulations of a porous catalytic coating attached to the dielectric surface would provide additional insight into plasma enhanced catalysis applications. \section{Acknowledgements} This work is supported by the German Research Foundation (DFG) with the Collaborative Research Centre CRC1316 projects A4 and A5 and the Scientific Research Foundation from Dalian University of Technology, DUT19RC(3)045, and the National Science Foundation of China Grant No. 12020101005. \color{black} \newpage \section*{ORCID iDs} \begin{table}[h] \begin{tabular}{l p{4cm}} Q. Z. Zhang: & \url{https://orcid.org/0000-0002-5726-0829} \\ R. T. Nguyen-Smith: & \url{https://orcid.org/0000-0002-5755-4595}\\ F. Beckfeld: & \url{https://orcid.org/0000-0001-8605-2634}\\ Y. Liu: & \url{https://orcid.org/0000-0002-2680-1338}\\ T.
Mussenbrock: & \url{http://orcid.org/0000-0001-6445-4990} \\ J. Schulze: & \url{https://orcid.org/0000-0001-7929-5734}\\ \end{tabular} \label{tab:my_label} \end{table} \printbibliography \end{document}
\section{Introduction} \subsection{Plasma Density Irregularities} The Earth's plasma environment abounds with density irregularities, on scales ranging from those of neutral atmosphere waves (100--1000\,km) down to ion and electron gyroradii (0.1--1\,m) \citep{Yeh1974, Booker1979, Fung2000, Akmaev2011, Wang2011, Nicolls2014}. They tend to be elongated along the geomagnetic field lines owing to large electron mobilities in this direction, and are broadly referred to as field-aligned irregularities (FAIs) \citep[e.g.][]{Sonwalkar2006, Makela2012}. Acoustic-gravity waves (AGWs), which couple to charged species via collisions, are believed to be the source of energy driving the formation of smaller-scale density structures, for example via the spatial resonance mechanism \citep{Whitehead1971, Klostermeyer1978} or upon passage through a critical layer \citep{Booker1967, Staquet2002}, with further cascades possible through various plasma instabilities and nonlinear processes \citep{Perkins1973, Fejer1980, Maruyama1990, Nicolls2005}. Observational evidence for the association between AGWs and FAIs was recently presented by \citet{Sun2015} and \citet{Loi2016}. FAIs of suitable dimensions and density contrasts are capable of ducting and guiding VLF to HF radio waves, a property that enabled their initial discovery \citep{Storey1953} and subsequent investigations \citep{Calvert1969, Angerami1970, Singh1998, Darrouzet2009}. Studies of their morphology have mostly relied on theoretical knowledge about electromagnetic (EM) wave trapping and guidance by plasma density structures \citep{Smith1960, Calvert1995}, aided by ray tracing calculations to interpret received signal patterns \citep{Haselgrove1954, Smith1961, Dyson1967, Hayakawa1978, Fung2005, Kulkarni2008}. 
Sheet-like structures extended in the east-west direction have been posited as one interpretation of topside sounder data \citep{Muldrew1963, Dyson1967}, possibly occurring in uniformly spaced onion-like layers \citep{Gross1984}. Numerical studies suggest that tubular structures can also be effective waveguides \citep{Smith1961, Platt1989}. The existence of tubular structures with inferred widths of 10--100\,km has been established through satellite \textit{in situ} measurements \citep{Angerami1970, Sonwalkar1994, Decreau2005}, satellite remote sensing \citep{Fung2003, Darrouzet2009, Woodroffe2013}, ground-based whistler spectrograms \citep{Hayakawa1978, Singh1998, Altaf2013}, and ground-based radio interferometric arrays \citep{Jacobson1993, Hoogeveen1997, Helmboldt2012b}. Recent interferometric observations support the idea that tubular FAIs may in some instances be confined to a narrow sheet-like layer on a single magnetic shell \citep{Loi2015_mn2e, Loi2015a_mn2e}. Subtler details such as field-aligned density profiles have been measured through VLF whistler observations \citep{Carpenter1964, Lichtenberger2009, Lichtenberger2013}, ULF field-line resonances \citep{Menk1999}, and topside sounding \citep{Dyson1978, Tu2006}. The growth and decay rates of FAIs depend on many factors, including the electrical conductivity, electric and magnetic fields, ambient densities and density gradients, collisional frequencies and drift velocities \citep{FarleyJr1963, Fejer1980, Ossakow1981, Sojka1998, Fejer1999}. Numerical and experimental results suggest that growth due to various instabilities may occur on timescales of minutes to tens of minutes \citep{Park1971a, Sojka1998, Carter2014}, and decay by cross-field diffusion on timescales of hours to days \citep{Thomson1978, Singh1999, Singh2013}. 
\begin{figure*} \centering \includegraphics[width=0.65\textwidth]{radio_propagation_effects} \caption{A cartoon illustration of the effects of phase distortions induced by the ionosphere. In the case of a smooth ionosphere (a), there is no change in the differential phase. In the linearly varying case (b), a simple angular shift results. Higher order spatial variations (c) cause shape distortion. For sufficiently small irregularities (d) a non-linear diffractive regime is reached, resulting in scintillation.} \label{fig:propagation_effects} \end{figure*} \subsection{Radio Interferometry}\label{sec:interferometry} Many radio telescopes are interferometric arrays that measure the phase difference of incoming EM waves between pairs of receivers \citep{Thompson2001}. Since the refractive index of a plasma is electron density-dependent, interferometers can detect density gradients through phase fluctuations. In an interferometric image, these manifest as shifts in the angular position of a radio source and/or broadening of the angular extent of a source, and may also be accompanied by intensity fluctuations \citep{Hamaker1978, Bougeret1981, Spoelstra1984, Jacobson1992a, Cotton2004, Cohen2009}. It is in the interest of those using radio telescopes for astronomical observations to model and subtract these distortions from (i.e.~calibrate) the data. The development of ionospheric calibration strategies for newly emergent widefield interferometers operating in the VHF band is an ongoing challenge, with a variety of approaches being trialled \citep{Cotton2004, Intema2009, Arora2015_mn2e, vanWeeren2016}. Bulk angular shifts are caused by density gradients whose scale lengths greatly exceed the physical size of the interferometric array, forming a wedge that collectively tilts the wavefront normals arriving at all baselines \citep{Thompson2001}. 
The associated displacement $\Delta\theta$ (in radians) is directly proportional to the transverse TEC gradient $\nabla_\perp$TEC (in el\,m$^{-3}$) and inversely proportional to the observing frequency $\nu$ (in Hz) according to \begin{equation} \Delta\theta \approx -\frac{40.3}{\nu^2} \nabla_\perp \text{TEC} \:. \label{eq:displacement} \end{equation} Angular broadening can be conceived as the result of superposing multiple wavefront normals, which might occur if irregularity scale sizes fall below the size of the array (different baselines experience different phase deviations), or if integration times exceed the timescale of variation in the plasma along the line of sight \citep{Spoelstra1984, Lonsdale2005, Kassim2007}. Intensity variations can arise from focusing and defocusing of the incoming rays, becoming extreme (modulation indices of order unity) in the diffraction-dominated limit \citep{Meyer-Vernet1980, Booker1981, Spoelstra1985}. These effects are illustrated in the cartoon in Figure \ref{fig:propagation_effects}. Note that a perfectly uniform TEC screen cannot be detected by an interferometer, since it is only the phase difference and not the absolute phase that is measured on a baseline (it is in principle possible to measure the absolute TEC through Faraday rotation, but the precision of this is lower). To perform the phase measurements the telescope must have a source of back-illuminating radio waves. These can be naturally occurring, e.g.~solar active regions \citep{Bougeret1981, Mercier1986} or cosmic radio sources \citep{Hamaker1978, Jacobson1993, Spoelstra1997, Helmboldt2014a, Loi2015a_mn2e}, or of man-made origin such as satellite beacons \citep{VanVelthoven1990_phd, Jacobson1996, Hoogeveen1997a}. 
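The displacement relation above can be evaluated directly to gauge the size of the effect. The following is a minimal sketch; the function name and the example gradient value are illustrative, not taken from the paper.

```python
import math

def angular_displacement_rad(grad_tec, freq_hz):
    """Apparent angular shift (radians) of a radio source induced by a
    transverse TEC gradient: Delta_theta = -(40.3 / nu^2) * grad_perp TEC.

    grad_tec : transverse TEC gradient in el m^-3
    freq_hz  : observing frequency in Hz
    """
    return -40.3 / freq_hz**2 * grad_tec

# Illustrative (hypothetical) numbers: a gradient of 1e12 el m^-3
# observed at 150 MHz, within the VHF band.
dtheta = angular_displacement_rad(1e12, 150e6)
print(math.degrees(dtheta))  # about -0.10 degrees
```

The inverse-square frequency dependence means the same gradient produces a four times larger shift at half the frequency, which is why these refractive effects dominate at VHF.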
Celestial sources of radio emission, though faint in comparison to solar or man-made sources, are stable in output and detectable at a rate of about one per square degree on the sky, for a confusion-limited interferometer with 2\,arcmin angular resolution operating in the VHF band. The exploitation of their abundance to construct spatially detailed, regional-scale TEC gradient maps has been made possible by the recent development of sensitive, widefield radio telescopes \citep{Loi2015a_mn2e, Loi2015b_mn2e}. Detections of tubular FAIs by interferometric arrays were previously made using the Westerbork Radio Synthesis Telescope \citep{VanVelthoven1990_phd}, the Very Large Array (VLA) \citep{Jacobson1992, Jacobson1993, Hoogeveen1997, Helmboldt2012c}, and the Los Alamos interferometer \citep{Jacobson1996, Hoogeveen1997a}. Increasingly sophisticated analysis methods for the VLA, designed to cope with its sparse spatial sampling, have been under recent development \citep{Cohen2009, Coker2009, Helmboldt2012, Helmboldt2012b, Helmboldt2014a}. The sparseness with which traditional interferometers sample the TEC distribution is contrasted by the breadth and detail achieved by new-generation instruments such as the Murchison Widefield Array (MWA) \citep{Lonsdale2009_mn2e, Bowman2013_mn2e, Tingay2013_mn2e}, which permit rich visualisations of FAIs and travelling disturbances over wide fields of view \citep{Loi2015_mn2e, Loi2016}. Tubular FAIs have been observed in half or more of nighttime MWA data, to varying degrees of prominence \citep{Loi2015a_mn2e}. Their high occurrence rate over the site of the MWA suggests that they must be accounted for if a thorough calibration of ionospheric effects is desired. However, their steep inclination with respect to the ground implies that constant-altitude screen models might be unsuitable and that a more flexible model is needed to capture their properties. 
\subsection{This Work} We present a new method for visualising and studying FAIs, designed for widefield interferometers, that involves mapping TEC gradients measured from radio synthesis images onto a plane tangent to a magnetic shell. This transformation removes the perspective distortion associated with inclined magnetic field lines, providing a physically meaningful reference frame in which to inspect their properties. We apply this technique to the dataset previously analysed by \citet{Loi2015_mn2e}, known to exhibit prominent FAIs. We also extend the analysis to sub-array scales, something which has not been previously done with MWA data, by examining angular broadening effects. The paper is structured as follows. In Section \ref{sec:model} we introduce the model. In Section \ref{sec:methods} we introduce the MWA observations, explain the analysis approaches for large- and small-scale irregularities, and demonstrate how the best-fit inclination may be obtained. Our results are presented in Sections \ref{sec:superMWA} and \ref{sec:subMWA} for large- and small-scale irregularities, respectively. We conclude in Section \ref{sec:conclusion}. \section{The Inclined Plane Model}\label{sec:model} The quantity measured by an interferometer through the angular displacement of an unresolved (point-like) radio source is $\nabla_\perp$TEC, the gradient of the TEC transverse to the line of sight to the source. Because it involves an absolute reference length scale (the spacing between receivers), this gradient is independent of the distance to the irregularities causing the phase perturbation. It is a two-dimensional vector, lying in the plane perpendicular to the line of sight. In addition, the angular position of the origin of this vector is known, given by the angular position of the source, also a two-component vector. 
For a celestial source, this is usually specified in terms of the right ascension $\alpha$ and declination $\delta$ (not to be confused with magnetic declination), which are coordinates fixed with respect to the celestial sphere. There are in total thus four independent input quantities: $\alpha$, $\delta$, $\partial_\alpha$TEC and $\partial_\delta$TEC, where the partial derivative shorthand $\partial_X$ denotes the gradient along direction $X$. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{geometry} \caption{Diagram showing the geometry of the model and how the $(\beta,\gamma)$ and $(\xi,\eta)$ coordinate systems are defined with respect to the ground observer. Cardinal directions are as marked. The $(\beta,\gamma)$ coordinates, shown in red, are angular coordinates for an imaginary sphere centred on the observer (note that the finite radius of the sphere as drawn in this diagram is solely for visualisation purposes, and otherwise has no meaning). These are the same $(\beta,\gamma)$ coordinates as described in \citet{Loi2015a_mn2e}. The $(\xi,\eta)$ coordinates, shown in blue, are Cartesian coordinates describing position on the inclined plane. The origin of this system is defined to be the point on the plane directly overhead of the observer. The two parameters describing the plane, which are the inclination angle $i$ and zenith altitude $h$, are defined as shown. In the current model the normal vector to the plane is assumed to have zero (cardinal) east-west component.} \label{fig:geometry} \end{figure} Geometric arguments presented by \citet{Loi2015_mn2e} and \citet{Loi2015a_mn2e} indicate that FAIs observed over the MWA site are often confined to a narrow layer, i.e.~small $L$-value \citep{McIlwain1961} range, though this value of $L$ may differ between observations. This motivates the introduction of a model that maps the four observables onto such a surface. 
In this work, we consider the lowest-order approximation to achieve this, namely an inclined 2D plane (i.e.~a flat screen) tangential to the $L$-shell on which the irregularities reside. We assume that all phase perturbations are taken up by this screen, whose thickness is negligible. Clearly this is only valid locally since the curvature of field lines is neglected. Given that the scale of curvature is of order the radius of the Earth (several thousand kilometres) and that the measurements probe a region several hundred kilometres across, we expect this approximation to be reasonable. Although a general 2D plane in 3D space requires three parameters for a full description, one of these can be eliminated by enforcing the east-west component of the normal vector to be zero (N.B.~at the site of the MWA, geomagnetic and geographic east-west coincide to within a degree and so their difference can be conveniently neglected; see Appendix \ref{sec:trafo} for how to handle the more general case). This is equivalent to assuming axisymmetry of the Earth's magnetic field, a good approximation for the inner magnetosphere where the field is close to being dipolar. This leaves two independent parameters, which we define to be $h$, the altitude of the plane at the point directly above the MWA, and $i$, the inclination angle (measured positive for a plane sloping upwards towards the north). These two quantities are illustrated in Figure \ref{fig:geometry}. Suitable values for $h$ and $i$ can in fact be derived solely from the data without any knowledge of the geomagnetic field: $h$ can be estimated via a parallax technique \citep{Loi2015_mn2e} and $i$ can be optimised by demanding that it maximises the ``parallelness'' of resultant features (quantitative details in Section \ref{sec:inclination}). 
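As a concrete illustration of the two-parameter screen, the sketch below computes where a given line of sight pierces the inclined plane. It assumes local Cartesian axes (x east, y north, z up) with the observer at the origin, so the plane satisfies $z = h + y\tan i$; this is an independent geometric sketch, not the full transformation given in the appendix, and the function name is illustrative.

```python
import math

def pierce_point(az_deg, el_deg, h_km, i_deg):
    """Intersection of a line of sight with the inclined plane screen.

    Assumed geometry: x east, y north, z up, observer at the origin;
    the plane passes through (0, 0, h) and slopes upward toward the
    north with inclination i, i.e. z = h + y * tan(i), so its normal
    has no east-west component. Returns (x, y, z) in km, or None if
    the ray never meets the plane.
    """
    az, el, i = (math.radians(v) for v in (az_deg, el_deg, i_deg))
    dx = math.cos(el) * math.sin(az)   # east component of the ray
    dy = math.cos(el) * math.cos(az)   # north component
    dz = math.sin(el)                  # vertical component
    denom = dz - dy * math.tan(i)
    if denom <= 0.0:
        return None
    t = h_km / denom
    return (t * dx, t * dy, t * dz)

# With the snapshot parameters h = 570 km and i = 59 deg, a zenith ray
# pierces the screen at the zenith altitude, while a low-elevation ray
# toward the north never meets the upward-sloping plane.
print(pierce_point(0.0, 90.0, 570.0, 59.0))
print(pierce_point(0.0, 30.0, 570.0, 59.0))
```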
\begin{figure*} \centering \includegraphics[width=0.9\textwidth]{arrow_betagamma_xieta} \caption{A comparison of the electron density gradient vector field from a particular snapshot computed with respect to (a) a Sanson-Flamsteed projection of the $(\beta,\gamma)$ coordinates, and (b) the new $(\xi,\eta)$ coordinates. The parameters used are $h = 570$\,km and $i = 59^\circ$. Note that in both cases the sky is into the page, so that the vertical axis points north and the horizontal axis points west. Horizontal and vertical dotted lines in (a) correspond to lines of constant $\beta$ and $\gamma$, while those in (b) correspond to lines of constant $\eta$ and $\xi$, respectively. In (a), the relationship between $(x,y)$ and $(\beta,\gamma)$ is given by equation (3) of \citet{Loi2015a_mn2e}. It is to be noted that the $(x,y)$ and $(\beta,\gamma)$ systems nearly coincide (the dotted lines are very close to being $x$ and $y$ gridlines). Arrow lengths are directly proportional to the magnitude of the gradient vector and scaled for clarity, such that a $1^\circ$-long arrow in (a) represents a gradient of $2.9 \times 10^{14}$\,el\,m$^{-2}$\,km$^{-1}$ while a 10-km long arrow in (b) represents a gradient of $1.4 \times 10^{14}$\,el\,m$^{-2}$\,km$^{-1}$. Blue and red colours are an aid to visualisation and denote arrows with positive and negative horizontal components, respectively.} \label{fig:arrow} \end{figure*} To describe position on the plane itself, we define an orthogonal basis $(\hat{\boldsymbol{\xi}}, \hat{\boldsymbol{\eta}})$ where the $\hat{\boldsymbol{\xi}}$-axis points west and the $\hat{\boldsymbol{\eta}}$-axis points north (see Figure \ref{fig:geometry}). The equations for how to map celestial coordinates $(\alpha, \delta)$ to screen coordinates $(\xi,\eta)$, which make use of the intermediary $(\beta,\gamma)$ system introduced by \citet{Loi2015a_mn2e}, are given in Appendix \ref{sec:trafo}.
Note that the $(\xi,\eta)$ coordinates are not geographically absolute but are defined with respect to the observing location, with $\xi = \eta = 0$ corresponding to the observer's zenith. As explained earlier in Section \ref{sec:interferometry}, an interferometer is insensitive to the zeroth-order (i.e.~constant offset) component of the TEC distribution. This implies that if density irregularities are confined to a thin layer bounded above and below by smooth plasma, then this is observationally equivalent to a thin plasma screen bounded above and below by vacuum. In the context of our model, we thus find it natural to associate a surface density rather than a column density with points on the screen (although given that the two have equivalent units, the difference is mainly conceptual). We denote this surface density by $\Sigma$, a scalar function of $\xi$ and $\eta$. The gradient of $\Sigma$ with respect to the screen can be determined by a suitable transformation of the $\nabla_\perp$TEC vector. For our simple model, it is possible to obtain explicit expressions for $\partial_\xi\Sigma$ and $\partial_\eta\Sigma$ in terms of the four input quantities. These and further details are contained in Appendix \ref{sec:trafo}. The effect of the transformation from angular to screen coordinates for an arbitrary snapshot from the dataset of \citet{Loi2015_mn2e} is demonstrated in Figure \ref{fig:arrow}. The coordinate system in Figure \ref{fig:arrow}a is the one which has been used for visualisation in previous MWA work, and corresponds to the view one might have looking up at the structures from the ground (see \citet{Loi2015a_mn2e} for a qualitative overview and Appendix \ref{sec:trafo} of the current work for quantitative details). On the other hand, Figure \ref{fig:arrow}b is the view one might have from a vantage point looking ``face-on'' at the surface on which the irregularities reside.
We can visually identify two consequences of the transformation: (i) there is an ``unskewing'' of apparently convergent features into a much more parallel configuration, and (ii) the gradient vectors turn out to be nearly parallel to the $\xi$ axis. While the first property is to be expected from the way in which $i$ was chosen (to maximise the ``parallelness'' of features; details in Section \ref{sec:inclination}), the second property is an independent verification that the model is physically reasonable (larger density gradients can be sustained across than along field lines). \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{broad_vs_time} \caption{(a) The broadening width (quadrature difference of major and minor axes) as a function of time, for sources whose position angles are aligned with the $\xi$ and $\eta$ axes. The alignment criterion has been taken to be a position angle within $15^\circ$ of the respective axis. (b) The fractions of sources selected by these restrictions on orientation, as a function of time. This analysis makes use of the 4-s cadence data to minimise the spatial extent of the region sampled by the drifting sources. Only sources of signal-to-noise ratio greater than 10 have been included.} \label{fig:broad_vs_time} \end{figure*} \section{Methods}\label{sec:methods} \subsection{Instrument, Observations and Data} The MWA telescope is an interferometric array sited at a mid-latitude location in Western Australia, at a geographic latitude of $26.7^\circ$S and a geomagnetic latitude of $38.6^\circ$S ($L = 1.6$). It operates at frequencies between 80 and 300\,MHz (in the VHF band). Its 128 receiving elements (``tiles'') are spread over a total region about 3\,km wide on the ground but are centrally condensed, with most (112 out of 128) tiles lying within 0.75\,km of the core. The MWA therefore behaves as a point detector for structures greatly exceeding 1\,km, and a distributed array for structures on much smaller scales. 
The instantaneous angular field of view (FOV) of the MWA is $30^\circ$, subtending a physical distance of about 300\,km at 600\,km altitude. The dataset considered in this paper is the same as the one discussed by \citet{Loi2015_mn2e}. It was obtained during mildly disturbed geomagnetic conditions ($K_p = 2$) within the recovery phase of a moderate (minimum $Dst = -45$\,nT, 24-hr maximum $K_p = 4$) geomagnetic storm on 15 October 2013. The data were recorded at 183\,MHz (30.72\,MHz instantaneous bandwidth) over a 1.5-hr long interval between 1346 and 1517 UTC (2146--2317 Australian Western Standard Time). Prominent FAIs are present over the whole FOV during this period. Their characteristic altitude was established by \citet{Loi2015_mn2e} to be $570 \pm 40$\,km. The interval comprises 46 blocks of data spaced by 2\,min and recorded over 112\,s each. However, the interferometric visibilities (i.e.~complex voltages) were in fact captured at 0.5\,s time resolution. While most of the analysis here uses images formed by integrating over the 112\,s of data in each 2-min block, a portion of this work (Section \ref{sec:subMWA}) makes use of images formed at a much higher cadence of 4\,s. This allows short-term dynamics to be elucidated at the expense of diminished image sensitivity. \subsection{Probing Large-scale Structure} As discussed earlier, the size of the array determines how phase fluctuations of various scales manifest in the data. Hereafter we define ``large-scale'' to mean structures larger than the diameter of the MWA (``super-MWA'' scales). Practically speaking, these are the structures that can be probed through the displacements they induce in the positions of unresolved radio sources. The method for extracting these angular displacements in the current work is identical to that described in previous MWA work \citep{Loi2015_mn2e, Loi2015a_mn2e, Loi2015b_mn2e, Loi2016}, to which we refer the reader for further details.
Briefly, this involved first identifying candidate radio sources by searching for intensity maxima in the images above a given noise threshold, cross-matching them with a published astronomical database (here the National Radio Astronomy Observatory VLA Sky Survey, NVSS; \citealt{Condon1998}) and retaining only those with counterparts. The displacement vector $\Delta\theta$ associated with an NVSS source appearing in a certain snapshot was taken to be the difference between its position measured in that snapshot and its time-averaged position. The $\Delta\theta(\alpha,\delta)$ vectors were then mapped to $\nabla\Sigma(\xi,\eta)$ vectors by the transformation steps described in Section \ref{sec:model} and Appendix \ref{sec:trafo}. Results pertaining to large-scale structure are discussed in Section \ref{sec:superMWA}. \subsection{Probing Small-scale Structure} We define ``small-scale'' here to refer to structures on sub-MWA scales. These fluctuations are responsible for the apparent broadening of otherwise unresolved radio sources; one can envisage this as arising from the simultaneous arrival of wavefronts with different orientations over different parts of the array. The spread of angular offset vectors causes the source to appear blurred. The amount and orientation of the broadening relate to the spread in wavefront orientations, which in turn arises from the spread in $\nabla_\perp$TEC on sub-array scales. The ``broadening vector'' (defined in more detail below) associated with this process therefore obeys the same transformation rules as the $\nabla_\perp$TEC vectors measured through bulk angular displacements. To probe structures in the sub-MWA regime we made use of images formed at 4-s rather than 2-min cadence, to minimise the size of the region sampled towards each source.
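The displacement-vector step described at the start of this subsection reduces to a mean subtraction per source; the following schematic (hypothetical array shapes, not the actual pipeline) sketches it:

```python
import numpy as np

def displacement_vectors(positions):
    """positions: array of shape (n_snapshots, n_sources, 2) holding the
    fitted (RA, Dec) of each cross-matched source in each snapshot (deg).
    Returns offsets of the same shape: the position in each snapshot
    minus the source's time-averaged position (the Delta-theta of the
    text); NaNs mark snapshots where a source was not detected."""
    mean_pos = np.nanmean(positions, axis=0)  # time average per source
    return positions - mean_pos

# Toy example: one source wobbling in RA about a fixed mean position
pos = np.zeros((4, 1, 2))
pos[:, 0, 0] = [0.01, -0.01, 0.02, -0.02]  # RA wobble (deg)
dtheta = displacement_vectors(pos)
print(dtheta[:, 0, 0])  # the wobble itself, since its mean is zero
```

By construction the offsets average to zero over time, which is why any fluctuation stationary on the celestial sphere is subtracted away (a point revisited in Section \ref{sec:superMWA}).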
The drift of celestial sight lines through the screen occurs at a speed of $\sim$40\,m\,s$^{-1}$ at 600\,km altitude, meaning that each patch of sight lines sweeps out an area roughly three times larger in a 2-min compared to a 4-s image, assuming an effective MWA diameter of 1.5\,km ($4.8 \times 1.5$\,km$^2$, versus $1.7 \times 1.5$\,km$^2$). The software package used for automated source extraction (\textsc{Aegean}; \citealt{Hancock2012}) operates by fitting 2D elliptical Gaussians to intensity maxima. Besides the best-fit coordinates of the centre of the Gaussian, \textsc{Aegean} also reports the major axis $b_\text{maj}$, minor axis $b_\text{min}$, and position angle $p$ (measured east of north). In the absence of any scatter broadening, the width of a source in an image is given by the width of the synthesised beam (response function), which for the MWA at 183\,MHz is about 2\,arcmin. The MWA synthesised beam is approximately a circular Gaussian, but the exact shape and size can vary over time if certain tiles/baselines are temporarily excised due to interference or hardware malfunctions (largely an automated process). The broadening due to finite angular resolution is distinct from scatter broadening. Our approach to isolating the scatter broadening contribution was to take the quadrature difference between the major and minor axes fitted for the Gaussian, which we call the \textit{broadening width}. This leads to our definition of the \textit{broadening vector} $\mathbf{b}$ as the vector oriented parallel to the major axis whose associated magnitude is the broadening width: \begin{equation} \mathbf{b} = \pm \sqrt{b_\text{maj}^2 - b_\text{min}^2} \left( \frac{\sin p}{\cos \delta} \:, \cos p \right) \:, \label{eq:broadening} \end{equation} given with respect to the $(\alpha,\delta)$ basis. The $\pm$ sign reflects the fact that $\mathbf{b}$ is a spin-2 vector (since the orientation of the major axis is only defined modulo 180$^\circ$).
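Equation (\ref{eq:broadening}) translates directly into code; the sketch below (function name ours) evaluates $\mathbf{b}$ from the four Gaussian-fit outputs, up to the overall $\pm$ sign:

```python
import numpy as np

def broadening_vector(b_maj, b_min, p_deg, dec_deg):
    """Broadening vector of Equation (eq:broadening): magnitude is the
    quadrature difference of the fitted major and minor axes, direction
    is the position angle p (east of north), expressed in the
    (alpha, delta) basis. Returned up to an overall sign, since the
    orientation of the major axis is only defined modulo 180 deg."""
    width = np.sqrt(b_maj**2 - b_min**2)          # broadening width
    p, dec = np.radians(p_deg), np.radians(dec_deg)
    return width * np.array([np.sin(p) / np.cos(dec), np.cos(p)])

# A source broadened purely north-south (p = 0) at the MWA's latitude:
b = broadening_vector(3.0, 2.0, 0.0, -26.7)
print(b)  # zero alpha component; delta component sqrt(3^2 - 2^2)
```

The $1/\cos\delta$ factor simply converts a physical angular offset on the sky into a coordinate difference in right ascension.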
The broadening vectors take the role of the angular displacement vectors $\Delta\theta$ described in Section \ref{sec:model} and Appendix \ref{sec:trafo} as the measure of TEC fluctuation. We subjected the $\mathbf{b}$ vectors to an identical sequence of transformations to obtain the sub-MWA version of ``$\nabla\Sigma$'', which is more closely related to the second spatial derivative of $\Sigma$. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{keta0pow_vs_inc} \caption{The degree of ``$\xi$-parallelness'' of features, measured through a spatial Fourier transform (details in text), as a function of the plane inclination $i$. This maximises at around $i = 59^\circ$, which is consistent with the known inclination of geomagnetic field lines within the MWA FOV (see text for discussion). A value of $i = 59^\circ$ has been adopted for all analyses in this work.} \label{fig:optimal_inc} \end{figure} Note that the manner in which we have quantified the scatter broadening contribution assumes that the broadening occurs primarily along a selected direction, and is therefore a conservative lower bound on the actual amount of broadening. Physically we expect density irregularities below $\sim$1\,km scales to be highly anisotropic, because these scales are less than the mean free path of neutrals in the thermosphere (several kilometres) \citep{Jacchia1977} and shaped by magnetic forces rather than collisions. We thus expect small-scale irregularities to be strongly field-aligned, producing broadening in the direction perpendicular to their elongation (cf.~a diffraction grating). Indeed, the data exhibit both a larger fraction of sources broadened along $\xi$ and a larger average broadening width in this direction, compared to $\eta$ (Figure \ref{fig:broad_vs_time}). 
Referring to the average broadening width for sources whose $\mathbf{b}$ vectors lie within $15^\circ$ of the $\xi$ and $\eta$ axes as $w_\xi$ and $w_\eta$ respectively, we also see from Figure \ref{fig:broad_vs_time} that $w_\xi$ grows with time. This is closely reminiscent of the growth of the larger-scale structures apparent in figure 3c of \citet{Loi2015_mn2e}. In contrast, $w_\eta$ exhibits no such growth. Noting that $w_\eta$ turns out to be comparable to the error on $b_\text{maj}$ and $b_\text{min}$ quoted by \textsc{Aegean}, we can regard this as a measure of the noise floor. Subtracting this in quadrature from $w_\xi$, we thus arrive at the following conservative estimate of the spread in $\nabla\Sigma$ due to small-scale FAIs: \begin{equation} \sigma(\nabla\Sigma) = \frac{1}{2} \sqrt{w_\xi^2 - w_\eta^2} \:. \label{eq:broadwid} \end{equation} The factor of 1/2 reflects the fact that $w_\xi$ and $w_\eta$ are major axis-related quantities while $\sigma$ is a root-mean-square (RMS) quantity, which is related to the semimajor axis of a Gaussian distribution. Unit conversion factors have been omitted for clarity. The results for small-scale structure are presented in Section \ref{sec:subMWA}. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{shading_xieta} \caption{The (a) $\xi$ and (b) $\eta$ components of the $\nabla\Sigma$ vector field, for a particular snapshot. Red and blue correspond to positive and negative gradients, and white corresponds either to near-zero values or regions outside the FOV (i.e.~unmeasured values). These have been generated by first forming a natural-neighbour interpolant over the data, resampling over a 5-km uniform grid and then smoothing the result with a 3$\times$3-box median filter. The shorthand $\partial_X \equiv \partial/\partial X$ has been used to denote a partial derivative. 
See Movie S1 for an animation of the full dataset.} \label{fig:shading_xieta} \end{figure*} \subsection{Choosing the Height and Inclination}\label{sec:inclination} In Section \ref{sec:model} we introduced the two free parameters $h$ and $i$, which are inputs to the model and illustrated in Figure \ref{fig:geometry}. The best-fit average height of the irregularities in the 15 October 2013 dataset was determined previously by \citet{Loi2015_mn2e} to be $570 \pm 40$\,km, and so the value of $h$ we adopt in this work is $h = 570$\,km. Fortunately, the uncertainty in this value only affects the physical scale inferred for the irregularities, and not the orientation or the magnitude of $\nabla\Sigma$. This can be seen directly from Equations (\ref{eq:trafo2a}) and (\ref{eq:trafo2b}), which are independent of $h$. The value of $i$ can be set to be the magnetic inclination, which is known from geomagnetic reference models \citep[e.g.][]{AGRF2010}. However, this value can be directly fitted using MWA data without any prior knowledge of the magnetic field, serving as a useful quantitative cross-check of the assumption that the irregularities are field aligned. We did this by finding the $i$ that maximised the ``parallelness'' of features in the resulting $\nabla\Sigma$ distribution. In more quantitative terms, we made use of the power spectrum technique developed by \citet{Loi2015a_mn2e} applied to the new $\nabla\Sigma(\xi,\eta)$ vector field to obtain the total power of fluctuations with $k_\eta = 0$, where $k_\eta$ denotes spatial frequency along the $\eta$ direction. These are the fluctuation modes whose phase fronts are parallel to the $\eta$ axis. 
In the notation of \citet{Loi2015a_mn2e} (sections 2.3 and 2.4), but replacing $x \to \xi$, $y \to \eta$, $\Delta x \to \partial_\xi\Sigma$ and $\Delta y \to \partial_\eta\Sigma$, the optimisation problem can be stated quantitatively as finding the $i$ that maximises \begin{equation} F(i) = \frac{V}{N_\xi N_t} \sum_{\ell=1}^{N_\xi} \sum_{n=1}^{N_t} \big[ \mathcal{P}_{\ell 1 n}(0^\circ) + \mathcal{P}_{\ell 1 n}(90^\circ) \big] \:. \label{eq:optimise_i} \end{equation} This is the discrete version of the sum of the integrals of the power spectral density over $k_\xi$ and $\omega$ for the two scalar fields $\partial_\xi\Sigma$ and $\partial_\eta\Sigma$ (corresponding to the arguments of $0^\circ$ and $90^\circ$ in the above expression, respectively). The units of $\mathcal{P}$ in the current work are el$^2$\,m$^{-4}$\,s. We gridded the data at a resolution of 2\,km. The quantity $F$ is plotted versus $i$ in Figure \ref{fig:optimal_inc}. This exhibits a maximum near $i = (59 \pm 1)^\circ$. Although this seems to differ slightly from the zenith value of the magnetic inclination at a height of 570\,km, which is $60.3^\circ$ \citep{AGRF2010}, this can be explained by noticing that the north-south physical span of the plane within the FOV is in fact biased towards the north (by $\sim$100\,km, as can be seen in Figure \ref{fig:shading_xieta}). Since the magnetic inclination is shallower towards the north, the average inclination over the FOV should be slightly smaller than the zenith value: at 50 ($= 100 \cos 60^\circ$)\,km north of the MWA it is $59.7^\circ$, and incorporating in addition the curvature of the Earth away from an observer by $\sim 0.5^\circ$ every 50\,km, it is unsurprising that the best-fit value of $i$ lies closer to $59^\circ$ than $60^\circ$. In all subsequent analyses we adopted a value of $i = 59^\circ$. Our results change very little with variations of this value by $\sim 1^\circ$.
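The principle behind Equation (\ref{eq:optimise_i}) — choose the geometry that concentrates power into the $k_\eta = 0$ row — can be demonstrated on synthetic data. In this toy stand-in (names and parameters ours), shearing a striped field mimics the skew introduced by a wrong choice of inclination:

```python
import numpy as np

def keta0_power(field):
    """Sum of 2D FFT power along the k_eta = 0 row: the total power of
    modes whose phase fronts are parallel to the eta axis, i.e. of
    structure varying purely along xi (the analogue of F(i))."""
    spec = np.fft.fft2(field)          # axes: (eta, xi)
    return np.sum(np.abs(spec[0, :])**2)

def striped_field(n, tilt):
    """Synthetic corrugations along xi, sheared by `tilt` (a stand-in
    for the skew produced by an incorrect inclination)."""
    xi = np.arange(n)[None, :] + tilt * np.arange(n)[:, None]
    return np.sin(2 * np.pi * xi / 16.0)

scores = {t: keta0_power(striped_field(64, t)) for t in (0.0, 0.2, 0.5)}
print(scores)  # power concentrates at k_eta = 0 only for tilt = 0
```

Scanning the tilt (here a free parameter, in the real analysis the inclination $i$) and keeping the value that maximises this score is exactly the "parallelness" optimisation described above.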
As a final comment in this section, we note that the near-constancy of the magnetic inclination along the line through zenith precludes the possibility of inferring $h$ through a fit to $i$. This has previously been done for similar structures (using other interferometers) when lines of sight have been suitably oblique \citep{Jacobson1996, Hoogeveen1997, Hoogeveen1997a}. However, the advantage here is that we have been able to quantitatively verify that the irregularities are indeed consistent with field alignment (to within $\sim 1^\circ$), whereas the previous works held this as a model assumption. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{av_etagradient_vs_xi} \caption{The average (over time and $\eta$) $\eta$-gradient, as a function of $\xi$. Physically, this is the field-aligned density gradient of each flux tube as a function of its longitudinal position. Points have been binned in 5-km intervals along the $\xi$-direction, and a minimum of 10 points in a bin is required for the bin to be shown on the plot. Error bars represent the standard error (standard deviation divided by the square root of the number of points) for each bin.} \label{fig:av_etagradient_vs_xi} \end{figure} \section{Super-MWA Structures}\label{sec:superMWA} \subsection{Morphology} Figure \ref{fig:shading_xieta} shows the spatial distributions of $\partial_\xi\Sigma$ and $\partial_\eta\Sigma$, the two orthogonal components of the $\nabla\Sigma$ vector field, for a representative snapshot from the \citet{Loi2015_mn2e} dataset. Movie S1 (see supplementary material online) shows the time evolution for the whole dataset. For the purposes of visualisation, the discretely sampled data have been interpolated onto a uniform 5\,km grid. This spacing is about half the average pierce-point separation. A representative distribution of the actual pierce points is shown in Figure \ref{fig:arrow}b, where one arrow corresponds to one pierce point. 
The sampling becomes sparser towards the north (higher altitudes) because the same solid angle subtends a greater physical area at a larger distance. The left and right panels correspond to the field-transverse (i.e.~direction in the plane perpendicular to the magnetic field) and field-aligned density gradients, respectively. We see that vertical (i.e.~$\eta$-parallel) striations are prominent in the field-transverse component of $\nabla\Sigma$, but not the field-aligned component. Typical length scales are about an order of magnitude smaller in the field-transverse than field-aligned directions. This is unsurprising because the Lorentz force acts perpendicular to the magnetic field lines and can only sustain density gradients against diffusion in this direction. Gradients along the $\eta$ direction would be caused by other effects, such as modulations by hydrodynamic or MHD waves \citep{Poole1988, Kazimirovsky2002, Menk2007, Fritts2008a, Waters2009}. This illustrates the usefulness of the $(\xi,\eta)$ basis in allowing us to decompose structures shaped by different physical processes. Also apparent from Figure \ref{fig:shading_xieta} is the difference in magnitudes of field-transverse and field-aligned gradients. Field-transverse gradients are about an order of magnitude steeper than field-aligned gradients, which is similarly consistent with physical expectations. In Movie S1 it can be seen that both components of $\nabla\Sigma$ increase in magnitude over time, indicating that the observation interval captured a period of growth. We discuss this further in Section \ref{sec:growth}. \subsection{Field-aligned Density Gradients} Under diffusive (i.e.~hydrostatic) equilibrium, one expects $\Sigma$ to decrease with altitude ($\partial_\eta\Sigma < 0$) along each flux tube.
However, we can see from Figure \ref{fig:shading_xieta} and Movie S1 that $\partial_\eta\Sigma$ exhibits variations along $\eta$ on scales of several hundred kilometres, and that this can apparently take both positive and negative values (red and blue hues, respectively). Recall, though, that we have measured angular offsets not with respect to an absolute reference but rather the time-averaged position, which subtracts away fluctuations that are stationary with respect to the celestial sphere. (The choice to use the time-averaged position as the reference, as has been done in all previous MWA works, stems from the inability to obtain an accurate absolute reference for this analysis; see \citealt{Loi2015b_mn2e}, section 2.3, for details.) This implies that the $\partial_\eta\Sigma$ values are not absolute, but should rather be interpreted as fluctuations about some unmeasured global mean. Despite this limitation, there are still useful conclusions that can be drawn from the data. The rotation of the Earth causes each pierce point to drift roughly east-west (i.e.~in the $\xi$-direction) through the screen, and so a large fraction, particularly in the central $\xi$-range of the area scanned, eventually drift across (``sample'') each flux tube. Subtracting the mean thus retains fluctuations on scales smaller than the east-west extent of the FOV, implying that relative differences in $\partial_\eta\Sigma$ between flux tubes can be measured reliably. We investigated the flux tube dependence of the field-aligned gradient by binning the data points gathered from all snapshots by $\xi$-value and then computing the average $\partial_\eta\Sigma$ value for each bin. The result is shown in Figure \ref{fig:av_etagradient_vs_xi}, with the characteristic spread within each bin represented by the error bars. There is evidence for systematic variations in $\partial_\eta\Sigma$ on east-west scales of order several tens of kilometres.
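The binning procedure just described (bin width, minimum count and standard-error definition taken from the caption of Figure \ref{fig:av_etagradient_vs_xi}; the implementation is our own sketch) can be written as:

```python
import numpy as np

def binned_mean(xi, grad_eta, bin_km=5.0, min_count=10):
    """Average grad_eta in `bin_km`-wide bins of xi; bins with fewer
    than `min_count` points are dropped. Returns bin centres, means and
    standard errors (std / sqrt(N)) per bin."""
    idx = np.floor(xi / bin_km).astype(int)
    centres, means, errs = [], [], []
    for b in np.unique(idx):
        vals = grad_eta[idx == b]
        if len(vals) < min_count:
            continue
        centres.append((b + 0.5) * bin_km)
        means.append(vals.mean())
        errs.append(vals.std(ddof=1) / np.sqrt(len(vals)))
    return np.array(centres), np.array(means), np.array(errs)

# Demo on synthetic data: a gradient that flips sign at xi = 10 km
rng = np.random.default_rng(0)
xi = rng.uniform(0.0, 20.0, 1000)
grad = np.where(xi < 10.0, 1.0, -1.0)
centres, means, errs = binned_mean(xi, grad)
print(centres, means)
```

The flooring-based bin index handles negative $\xi$ (west of zenith) without special casing.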
This is comparable to the observed widths of the FAIs, and indicates that plasma scale heights can differ measurably between flux tubes that are several tens of kilometres apart. To explain the lack of diminished density contrast at high zenith angles, \citet{Loi2015_mn2e} argued that the thickness of the sheet could not be more than a factor of $\sim$3 larger than the spacing of the structures. If the thickness is $\sim$100\,km and the background electron density is $\sim 10^4$\,el\,cm$^{-3}$ (characteristic of the nighttime midlatitude topside ionosphere; \citealt{Schunk2009}), then the model assumption that all density variations are confined to the screen implies a relative variation in the scale height of $\sim$100\,km between flux tubes within the FOV. Given that absolute scale heights of several hundred kilometres have been measured at similar latitudes, altitudes and times of day \citep{Thomas1966, Watt1966}, this appears to be a substantial variation. However, the actual thickness of the screen is largely uncertain, and a thinner screen would yield a proportionally smaller estimate. The physical origin for the local variations in scale height could be temperature differences between flux tubes, perhaps a result of variations in the conductivity tensor combined with storm-related field-aligned currents that produce uneven heating of the plasma, or longitudinally varying rates of impact ionisation. \begin{figure*} \centering \includegraphics[clip=true, trim=2cm 0cm 2cm 0cm, width=\textwidth]{powerspec_gradient} \caption{Logarithm of the power spectral density for the $\xi$-component (top row) and $\eta$-component (bottom row) of the density gradient field $\nabla\Sigma$, collapsed from three dimensions down to two by integrating out $\omega$ (left), $k_\eta$ (middle) and $k_\xi$ (right).
Note that these have been zoomed in to the central regions; the absolute Nyquist frequencies are 0.25\,km$^{-1}$ for both the $k_\xi$ and $k_\eta$ axes and 4.2\,mHz for the $\omega$ axis. The corresponding response function is shown in Figure \ref{fig:powerspec_response}.} \label{fig:powerspec} \end{figure*} \begin{figure*} \includegraphics[clip=true, trim=2cm 0cm 2cm 0cm, width=\textwidth]{powerspec_response} \caption{The response function for the power spectra shown in Figure \ref{fig:powerspec}. The three panels correspond to integrating out $\omega$ (left), $k_\eta$ (middle) and $k_\xi$ (right). The diagonal ringing feature in the middle panel results from the rotation of the Earth and consequent drift motion of measurement points through the screen (see \citet{Loi2015a_mn2e} for a more detailed explanation of its origin).} \label{fig:powerspec_response} \end{figure*} \subsection{Power Spectra and Wavelike Behaviour}\label{sec:powerspec} This and the following subsection present quantitative measurements of the fluctuation properties (wavelengths, periods and phase velocities) of the $\nabla\Sigma$ vector field using the power spectrum technique developed by \citet{Loi2015a_mn2e}. Spatial coordinates $x$ and $y$ are replaced by $\xi$ and $\eta$, and the transformed quantity is $\nabla\Sigma$ (rather than $\nabla_\perp$TEC). We chose a grid size of 2\,km, which given a mean sampling density of approximately one pierce point per 100\,km$^2$ on the screen, corresponds to spatially oversampling by a factor of about two. We computed two power spectra, one for $\partial_\xi\Sigma$ and another for $\partial_\eta\Sigma$. Each is a three-dimensional data cube with two axes for the $\xi$ and $\eta$ spatial frequencies (denoted by $k_\xi$ and $k_\eta$) and one axis for the temporal frequency $\omega$. Figure \ref{fig:powerspec} shows the reduced (from 3D to 2D) power spectral density distributions obtained by integrating along each of the axes in turn. 
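Schematically, the cube construction and its three reductions look like the following (bare DFT power only; the physical normalisation to el$^2$\,m$^{-4}$\,s and the response-function treatment of \citet{Loi2015a_mn2e} are omitted, and names are ours):

```python
import numpy as np

def power_cube_reductions(cube):
    """cube: gridded gradient field with axes (t, eta, xi). Returns the
    3D power cube |FFT|^2 together with its three 2D reductions, formed
    by summing over the omega, k_eta and k_xi axes in turn (cf. the
    three columns of the power-spectrum figure)."""
    P = np.abs(np.fft.fftn(cube))**2
    return P, P.sum(axis=0), P.sum(axis=1), P.sum(axis=2)

# Sanity check on random data: the discrete Parseval relation says the
# total (unnormalised) DFT power equals N * sum(|f|^2) for N points.
rng = np.random.default_rng(1)
f = rng.standard_normal((8, 16, 16))
P, P_sky, P_omega_kxi, P_omega_keta = power_cube_reductions(f)
print(np.isclose(P.sum(), f.size * np.sum(f**2)))  # True
```

Each reduction necessarily conserves the total power, so the three panels in a given row of the figure integrate to the same value.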
The response function for these power spectra, visualised in an identical manner, is shown in Figure \ref{fig:powerspec_response}. Its compactness assures us that most of the structure seen in Figure \ref{fig:powerspec}, with the exception of the diffuse diagonal band running from top-left to bottom-right in the middle column, is genuine. The band is a typical feature in MWA power spectra and we refer the reader to \citet{Loi2015a_mn2e}, section 3.1 for details of its origin. It can be seen that the greatest concentration of power is in the cells for which $k_\eta = 0$ (left column of Figure \ref{fig:powerspec}). This tells us that fluctuations in $\partial_\xi\Sigma$ take the form of corrugations varying almost purely along the $\xi$-direction, which is consistent with Figure \ref{fig:shading_xieta}. As mentioned earlier in Section \ref{sec:inclination}, the value of $i$ was chosen according to Equation (\ref{eq:optimise_i}). This corresponds to choosing the $i$ that concentrates the most signal power into the $k_\eta = 0$ cells. In other words, it is the act of choosing the basis in which as much information is captured in as small a subset of basis vectors as possible. The fact that the value of $i$ which achieves this coincides with the magnetic inclination provides the physical interpretation for the structures as FAIs. A separate subsection (Section \ref{sec:modes}) is devoted to an analysis of the structure within the $k_\eta = 0$ row of cells. Comparing the top and bottom rows of Figure \ref{fig:powerspec} reveals that peaks tend to be 1--2 orders of magnitude more intense for $\partial_\xi\Sigma$ than $\partial_\eta\Sigma$. It can also be seen from the left and middle columns that the power spectral density for $\partial_\eta\Sigma$ is confined to smaller $k_\xi$ values (larger east-west scales) than $\partial_\xi\Sigma$. This is consistent with Figure \ref{fig:shading_xieta} and Movie S1.
Phase velocities can be measured from the slopes of features in the $\omega$-$k_\xi$ and $\omega$-$k_\eta$ planes (middle and right columns). A north-eastward propagating fluctuation in $\partial_\eta\Sigma$ can be identified, with a wavelength of $\sim$700\,km and a phase speed of about 200\,m\,s$^{-1}$. This may be an AGW whose presence is revealed only in the $\partial_\eta\Sigma$ component because this component is not overwhelmed by the FAI signal. In contrast, the modes with smaller wavelengths seen in $\partial_\xi\Sigma$ (top row, middle column) are associated with smaller slopes and therefore phase speeds. If one inspects the slopes of lines joining pairs of conjugate peaks, one sees that there is a small scatter about $\omega/k_\xi = 0$\,m\,s$^{-1}$, with some slopes being positive and others negative. This suggests that different modes move with slightly different speeds (typical magnitudes are several metres per second), some drifting east and others west. Since the associated FAIs are likely to be \textit{in situ} stationary structures, this points to east-west shear in the plasma at a rate of $\sim$10\,m\,s$^{-1}$, which could be a potential source of free energy for growth. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{linlog_keta0pow_vs_kxi} \caption{The power spectral density in the $k_\eta = 0$ row of cells from the top-left panel of Figure \ref{fig:powerspec}, plotted as a function of $k_\xi$ on (a) linear and (b) logarithmic axes. Since the power spectrum is inversion-symmetric, it is only necessary to display one half of the $k_\xi$ axis (we have chosen to plot positive $k_\xi$). Strong peaks at discrete spatial frequencies can be seen, indicating the presence of distinct modes.
A fit to the power spectrum, corresponding to the red line in panel (b), suggests that it can globally be described by a power law of index $-1.2 \pm 0.1$.} \label{fig:keta0pow_vs_kxi} \end{figure*} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{keta0pow_response} \caption{The response function corresponding to the one-dimensional power spectrum shown in Figure \ref{fig:keta0pow_vs_kxi}. Noisy fluctuations have been made visible by the use of a logarithmic scale on the vertical axis. The central peak is about three orders of magnitude above the noise and has a width of 3--4 frequency increments.} \label{fig:keta0pow_response} \end{figure} \subsection{Spatial Periodicities}\label{sec:modes} The concentration of power into the modes with $k_\eta = 0$ through use of the $(\xi,\eta)$ basis allows us to isolate this region of the $\partial_\xi\Sigma$ power spectrum for further analysis. The $\omega$-integrated power spectral density distribution as a function of $k_\xi$ for the $k_\eta = 0$ modes is shown in Figure \ref{fig:keta0pow_vs_kxi}. One can identify prominent peaks at distinct spatial frequencies, whose heights roughly decrease with increasing wavenumber. The corresponding response function, shown in Figure \ref{fig:keta0pow_response}, is highly compact and would not be responsible for generating this complex structure. The most prominent peak is at $k_\xi = 0.018$\,km$^{-1}$, corresponding to a wavelength of 56\,km (this periodicity can be visually identified in Figure \ref{fig:shading_xieta}). Additional peaks appear near $k_\xi$ = 0.036, 0.047, 0.081 and possibly 0.092\,km$^{-1}$, corresponding to wavelengths of 28, 21, 12 and 11\,km respectively. Note that although the wavenumber itself can be measured to a precision of 10$^{-3}$\,km$^{-1}$ (set by the spatial extent of the observed region), the overall error in these values is dominated by the systematic error arising from the uncertainty in $h$. 
This is large, of order 10\%, and enters through the role of $h$ in setting the linear scale of the $(\xi,\eta)$ coordinates with respect to the data, which can be seen from Equation (\ref{eq:trafo1}). Although the $k_\eta = 0$ region of the $\partial_\xi\Sigma$ power spectrum has a complicated structure containing multiple peaks, the envelope appears to decay with increasing $k_\xi$. If we assume a power-law functional form, the best-fit index turns out to be $-1.2 \pm 0.1$. Note that this applies to the power spectrum of the $\xi$-gradient of $\Sigma$. Because the Fourier transform of the $\xi$-gradient of a function is $-ik_\xi$ times the Fourier transform of the function itself, the power spectrum of $\partial_\xi\Sigma$ is $k_\xi^2$ times the power spectrum of $\Sigma$ and so the corresponding index of the \textit{density} power spectrum is $-3.2$. This may be consistent with the 3D Kolmogorov index of $-11/3$ \citep{Kolmogorov1941} to within experimental uncertainty if one acknowledges the possibility of additional sources of error such as spectral leakage, which tends to systematically flatten the spectrum (see \citet{Loi2015a_mn2e}, Appendix A). Moreover, it is uncertain as to where in wavenumber space a turbulent cascade might begin. It may be that the peaks seen at $k_\xi < 0.1$\,km$^{-1}$ reflect unstable modes where energy is being injected, and that the cascade only develops at higher wavenumbers. Indeed, the region where $k_\xi > 0.1$\,km$^{-1}$ appears to possess a slightly steeper index of $-1.4 \pm 0.2$. Given the evidence for velocity shear (Section \ref{sec:powerspec}), it is possible that the structures have grown through a Kelvin-Helmholtz instability. The dominant wavelength in this case is around eight times the width of the shear layer, so if the primary mode is the one at $\sim$60\,km then one infers a thickness for the shearing region of order 10\,km. 
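The factor of $k_\xi^2$ relating the gradient and density power spectra can be illustrated with a short numerical check. This is a sketch of ours using arbitrary synthetic values, not the paper's data: differentiating a field multiplies its power spectrum by $k^2$, so a density index of $-3.2$ corresponds to a gradient index of $-1.2$.

```python
import numpy as np

# Illustrative check (arbitrary synthetic values, not the observational data):
# the power spectrum of d(field)/dx equals k^2 times the power spectrum of the
# field, so a power-law index p for the density maps to p + 2 for its gradient.
N, L = 4096, 1000.0                              # samples, domain length
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.rfftfreq(N, d=L / N)    # angular wavenumbers

rng = np.random.default_rng(1)
modes = range(1, 20)
# Periodic field built from a handful of Fourier modes with random phases
field = sum(np.sin(2.0 * np.pi * m * x / L + rng.uniform(0.0, 2.0 * np.pi))
            for m in modes)
grad = np.gradient(field, x)                     # finite-difference derivative

P_field = np.abs(np.fft.rfft(field))**2
P_grad = np.abs(np.fft.rfft(grad))**2

# At each excited mode the spectral ratio is k^2, up to the small
# finite-difference attenuation.
for m in modes:
    assert abs(P_grad[m] / P_field[m] / k[m]**2 - 1.0) < 0.01
```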
The origin of this shear may be in the neutral atmosphere, or alternatively, the periodicities may reflect those of seed disturbances (e.g.~AGWs), where the associated density and conductivity perturbations have mapped upwards along field lines to generate the structures observed. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{superMWA_totgrad_vs_time} \caption{The RMS of density gradients measured through source position offsets, and therefore indicating the characteristic amplitude of fluctuations on scales exceeding the size of the MWA ($\sim$1\,km), as a function of time. Black asterisks correspond to the RMS value calculated over the whole FOV, whereas the upwards-pointing (blue) and downwards-pointing (red) triangles correspond to restrictions to the regions northward ($\eta > 0$) and southward ($\eta < 0$) of zenith, respectively. The slope of the plane towards the north means that this also reflects a split in altitude, with blue triangles being for higher altitudes. The black solid line is an exponential fit ($\propto \mathrm{e}^{\gamma t}$) through the asterisks. The best-fit value for the growth rate $\gamma$ corresponds to a timescale of about 2\,hr.} \label{fig:superMWA_totgrad_vs_time} \end{figure} \subsection{Growth over Observing Interval}\label{sec:growth} As previously noted, the density gradients associated with the FAIs grow over time. This can be quantified by plotting the characteristic $\nabla\Sigma$ value over the FOV as a function of time, shown using black asterisks in Figure \ref{fig:superMWA_totgrad_vs_time}. To enable a quantitative comparison with the small-scale gradients defined in Equation (\ref{eq:broadwid}) (see later in Section \ref{sec:broadening}), it is the RMS value of $\nabla\Sigma$ that has been plotted. We obtained the characteristic growth rate $\gamma$ by fitting to the data an exponential curve (black solid line) of the form $\propto \mathrm{e}^{\gamma t}$, where $t$ is time. 
The best-fit growth rate is $\gamma = 0.48 \pm 0.03$\,hr$^{-1}$, corresponding to a growth timescale of about 2\,hr. We also examined the separate trends for high ($\eta > 0$) and low ($\eta < 0$) altitudes, plotted as the blue and red triangles in Figure \ref{fig:superMWA_totgrad_vs_time}. While growth on a similar timescale is evident in both regimes, the two curves are noticeably offset, with characteristic gradients being about 20\% larger at lower altitudes. This could arise if each flux tube were associated with a fixed-percentage fluctuation of the background density; a global decrease in background density with altitude would then explain the separation between the curves. \section{Sub-MWA Structures}\label{sec:subMWA} \subsection{Scintillation}\label{sec:scintillation} Because an interferometer combines signals coherently, it is susceptible to destructive interference and decorrelation if the phase variation of an incoming signal over the array deviates significantly (by $>$1\,radian) from a linear ramp. This effect is similar to the constructive and destructive interference that results if ray paths are perturbed during propagation to the point where they cross before reaching the detector (cf.~Fraunhofer diffraction). The critical size scale of irregularities for which the associated $\sim$1\,rad phase perturbations cause this to occur (i.e.~the Fresnel scale) is around 400\,m for the parameters relevant here. These effects give rise to scintillation, where the apparent brightness of the source fluctuates erratically by a significant fraction of the true brightness. The timescale of this fluctuation is given by the spatial scale of the irregularities divided by the drift speed of the sight lines.
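The quoted Fresnel scale can be reproduced with a one-line estimate. The observing wavelength and screen height below are assumed illustrative values ($\lambda \approx 2$\,m, $h \approx 600$\,km), not figures taken from this section, and we use the convention $r_\mathrm{F} = \sqrt{\lambda h / 2\pi}$:

```python
import math

# Back-of-the-envelope Fresnel scale r_F = sqrt(lambda * h / (2*pi)).
# Both inputs are assumed illustrative values (a wavelength of ~2 m, i.e. an
# observing frequency near 150 MHz, and a screen height of ~600 km); neither
# number is specified in this section.
wavelength = 2.0     # m
height = 6.0e5       # m
r_fresnel = math.sqrt(wavelength * height / (2.0 * math.pi))
print(f"Fresnel scale: {r_fresnel:.0f} m")
```

With these inputs $r_\mathrm{F} \approx 440$\,m, of the same order as the $\sim$400\,m quoted above.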
\begin{figure*} \centering \includegraphics[width=0.9\textwidth]{lightcurve_example} \caption{A representative radio light curve (flux density as a function of time) of a randomly-selected source, exhibiting a modulation index of order unity on short (tens of seconds) timescales. This is characteristic of diffractive scintillation, evidencing significant structure on sub-array scales. Note that this plot was obtained using images formed at a cadence of 4\,s. Similar rapidly-varying light curves are observed for all other sources in the dataset. The error on each flux measurement, given by the local pixel-to-pixel RMS noise, is 0.2\,Jy.} \label{fig:lightcurve_example} \end{figure*} Figure \ref{fig:lightcurve_example} shows the radio light curve for a moderately bright (S/N ratio of 15) source in the data. Large modulations, of up to 40\% of the mean brightness of the source and significantly larger than the uncertainty of the measurement ($\sim$0.2\,Jy, given by the local pixel-to-pixel noise), can be observed to occur on timescales of tens of seconds. Rapid brightness variations are similarly seen for many other sources in the field, and can be interpreted as evidence for substantial phase structure on sub-MWA scales. Although it is in principle possible to attempt a temporal power spectrum analysis of these light curves, this is problematic for several reasons. These include the presence of 8-s data gaps for every 2-min block, abrupt beamformer changes at 30-min intervals, and systematic uncertainties in the direction-dependent instrument gain, which contaminate the power spectrum across a wide range of frequencies. Suffice it to say that no obvious peaks appear in the temporal power spectra (not shown) that might point to a particular diffractive scale. It is possible that this varies over the FOV, but not much more can be said in this regard.
\begin{figure*} \centering \includegraphics[width=0.9\textwidth]{combandsplit_subMWA_totgrad_vs_time} \caption{The RMS of density gradients measured through the angular broadening of otherwise unresolved sources, and therefore on scales below the size of the MWA ($\sim$1\,km), as a function of time. Panel (a) shows the trend for sources over the whole field of view, whereas (b) shows the trend for two sets of sources separated according to whether they lie north (blue) or south (red) of zenith. There appears to be an increase in growth rate near time $t = 60$\,min in panel (a), which coincides with the time when the two growth curves in panel (b), initially separate, converge. To quantify this apparent change in growth rate, the data before and after $t = 60$\,min have been fitted separately with exponential functions ($\propto \mathrm{e}^{\gamma t}$). These are the dashed and solid lines in (a), with corresponding best-fit growth rates $\gamma$ indicated in the figure legend.} \label{fig:subMWA_totgrad_vs_time} \end{figure*} \subsection{Angular Broadening}\label{sec:broadening} The quantity $\sigma(\nabla\Sigma)$, defined in Equation (\ref{eq:broadwid}) and representing the characteristic spread in density gradients for FAIs on sub-MWA scales, is plotted as a function of time in Figure \ref{fig:subMWA_totgrad_vs_time}. We observe that this increases with time, and at a similar average rate of growth as for the super-MWA structures. However, the growth curve appears (at least visually) to be segmented into two portions associated with different growth rates, the transition occurring near the 60-min mark. We have chosen to fit these two portions separately, and find that the growth rate (given by the slope of the trend) is consistent with undergoing an increase at $t = 60$\,min by a factor of 4--5. The corresponding timescales of growth are 7--8\,hr before $t = 60$\,min, subsequently dropping to 1--2\,hr. 
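The exponential fits used for the growth curves here and in Section~\ref{sec:growth} amount to straight-line fits in log space. A minimal sketch with synthetic data (the amplitude, noise level and sampling are invented for illustration; the growth rate is set to the super-MWA best-fit value of $0.48$\,hr$^{-1}$):

```python
import numpy as np

# Minimal sketch (not the paper's code) of an exponential fit y ~ exp(gamma*t):
# fit a straight line to log(y) versus t. The synthetic data are invented for
# illustration; the true growth rate is set to 0.48 per hour, the super-MWA
# best-fit value quoted in the text.
rng = np.random.default_rng(0)
gamma_true = 0.48                                # hr^-1
t = np.linspace(0.0, 2.0, 30)                    # observing times in hours
y = 0.05 * np.exp(gamma_true * t) * (1.0 + 0.01 * rng.standard_normal(t.size))

gamma_fit, log_amplitude = np.polyfit(t, np.log(y), 1)
timescale = 1.0 / gamma_fit                      # e-folding timescale in hours
print(f"gamma = {gamma_fit:.2f} hr^-1, timescale = {timescale:.1f} hr")
```

A piecewise change in growth rate, as in Figure \ref{fig:subMWA_totgrad_vs_time}, corresponds to fitting the two time segments separately in the same way.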
Splitting the data into high ($\eta > 0$) and low ($\eta < 0$) altitude regions reveals another property of the apparent transition at $t = 60$\,min. The two growth curves are initially separate, with $\sigma(\nabla\Sigma)$ larger at higher altitudes, but then converge at the transition point (see Figure \ref{fig:subMWA_totgrad_vs_time}b). The initial difference in $\sigma(\nabla\Sigma)$ between the two regimes could be explained by a larger dissipation rate at lower altitudes, where the environment is more collisional. The subsequent convergence of the two curves might then be attributed to the increase in growth rate, with the new growth rate being much larger than the difference in dissipation rates between high and low altitudes. Although the situation appears to be reversed compared to super-MWA structures in that higher altitudes are associated with larger gradients, this could be explained by the greater importance of dissipative effects on smaller scales. It is possible that these small-scale structures derived their energy for formation from the same source as for the larger FAIs. However, large-scale FAIs with density gradients comparable to or greater than those in this dataset have previously been observed by the MWA without accompanying scintillation, which is uncommon at these observing frequencies \citep{Loi2015b_mn2e, Loi2016}. A number of studies that simultaneously monitored the occurrence of VLF whistlers and VHF scintillation have found the two to be uncorrelated, which suggests that large- and small-scale structures may not share the same physical origin \citep{Singh1994, Patel2010}. It may be that the simultaneous presence of both large- and small-scale irregularities in this dataset is coincidental.
However, the observation here that the two share similar growth trends supports the idea that they share a common energy source, so perhaps the small-scale irregularities belong to the high-frequency end of the cascade seen on larger scales (Section \ref{sec:modes}). Ascertaining the conditions necessary for simultaneous or independent formation of FAIs of different sizes will require a broader study that is beyond the scope of this work. It is curious to notice that the value of $\sigma(\nabla\Sigma)$, measuring the power on sub-MWA scales, is very similar to the RMS density gradient for super-MWA scales (compare the vertical axes of Figures \ref{fig:superMWA_totgrad_vs_time} and \ref{fig:subMWA_totgrad_vs_time}), which may be a coincidence. If we can assume that the power-law index of $-1.2$ measured for large-scale density gradient fluctuations can be extrapolated down to small scales (i.e.~that these are part of the same spectrum), this gives us a way of estimating the inner scale. Taking the wavenumbers associated with the outer scale and MWA to be $k_\text{out} = 10^{-4}$\,m$^{-1}$ and $k_\text{MWA} = 10^{-3}$\,m$^{-1}$, and equating the total power (proportional to $\int k^{-1.2} \,\mathrm{d}k$) on super- and sub-MWA scales, i.e.~solving $$\int_{k_\text{out}}^{k_\text{MWA}} k^{-1.2} \,\mathrm{d}k = \int_{k_\text{MWA}}^{k_\text{in}} k^{-1.2} \,\mathrm{d}k \implies k_\text{in}^{-0.2} = 2k_\text{MWA}^{-0.2} - k_\text{out}^{-0.2},$$ we obtain $k_\text{in} \sim 0.1$\,m$^{-1}$ for the inner scale. The corresponding length is $\sim$10\,m, comparable to ion gyroradii. \section{Summary and Outlook}\label{sec:conclusion} We have introduced a new approach for studying FAIs that transforms interferometric TEC gradient data into a physically meaningful reference frame, corresponding to the magnetic shell tangent plane. Applying the transformation to the 15 October 2013 dataset of \citet{Loi2015_mn2e}, we demonstrated its utility for decomposing behaviour along and across magnetic field lines through the construction of a suitable set of coordinates $(\xi,\eta)$, where the $\xi$ and $\eta$ directions are perpendicular and parallel to the field, respectively.
We also devised a method for computing the best value for the inclination of the plane from the data, and verified that it is in good agreement with the magnetic inclination. Through our analyses we found that: \begin{itemize} \item The transformation to $(\xi,\eta)$ confirms with much greater precision than previous work that the striations seen by the MWA on 15 October 2013 were oriented along the geomagnetic field; \item $\xi$-gradients were around an order of magnitude steeper than $\eta$-gradients and showed structuring on spatial scales of several tens of kilometres, while $\eta$-gradients were structured on much larger scales of several hundred kilometres; \item The plasma scale height, quantified through the $\eta$-gradients, varied by up to $\sim$100\,km between flux tubes within the FOV, possibly reflecting spatial variations in plasma temperature; \item A differential drift of $\sim$10\,m\,s$^{-1}$ in the east-west direction could be identified through power spectrum analysis, possibly providing the energy for growth of the FAIs through a shear-driven instability; \item The spatial power spectrum showed multiple periodicities on scales between 10--60\,km, which may correspond to those of seed fluctuations (e.g.~atmospheric waves) or perhaps the favoured wavelengths of a plasma instability; \item Fitting the power spectrum of $\xi$-gradients with a power law yielded an index of $-1.2 \pm 0.1$, slightly flatter than the Kolmogorov value of $-5/3$; \item Growth of the FAIs on both large and small spatial scales was observed over the interval, and we measured the characteristic timescales to be several hours; \item Many background radio sources exhibited strong scintillations, implying significant structure on small (sub-array) scales; \item Angular broadening analyses revealed small-scale irregularities to be highly anisotropic and aligned along the magnetic field; \item Comparable amounts of fluctuation power were present on large and small scales 
(relative to the size of the array), allowing us to estimate an inner scale of $\sim$10\,m by extrapolating the power law obtained for large-scale structure; \item The strengths of density gradients exhibited an altitude dependence, being greater at lower altitudes for large-scale structure and greater at higher altitudes for small-scale structure, which might be explained through a size-dependent importance of dissipative effects. \end{itemize} The main weakness of our model is the assumption that irregularities are confined to a thin layer. This may be a fair approximation for the interval analysed here, but there may be periods in which irregularities reside on multiple layers, a situation that would escape detection by the MWA. Because of its small size, the MWA can only perform a crude height localisation through parallax analysis to obtain a single characteristic value for the altitude \citep{Loi2015_mn2e}. Interferometers that are more spatially extended, such as the Low Frequency Array \citep{vanHaarlem2013} and the future Square Kilometre Array (SKA) \citep{Dewdney2009}, would be capable of conducting a more detailed three-dimensional mapping of the ionosphere through e.g.~tomographic inversion \citep{GaussiranII2004, Koopmans2010}. A convenient property of our model is that it is self-contained: the two input parameters (height and inclination) can be fitted using the data alone. The concepts presented here may pave the way for improvements to ionospheric calibration procedures for next-generation radio telescopes. The two-parameter inclined screen model enables most of the FAI fluctuation power to be captured into unidirectional plane-wave modes aligned with the Earth's magnetic field. Such a model may be a viable starting point for the development of compact calibration strategies for the SKA, since a relatively small number of basis elements would be needed to represent FAI-related distortions in the data. 
The ability of interferometers like the MWA to probe ionospheric density structures at great breadth and detail underscores them as rich sources of geophysical information; we hope for their continued application to this end in time to come.
\section{Introduction and Preliminaries} \label{sec:Intro} Many combinatorial objects that are counted by the Catalan numbers have $k$-ary analogues. Heubach, Li and Mansour list several such examples in \cite{Heubach2008Staircase}, among them $k$-ary trees, different families of lattice paths, nonintersecting arc sequences, and certain types of Young diagrams. \medskip The family of $k$-plane trees, which was first considered in \cite{GuProdingerWagner2010Bijections}, is another example that leads to $k$-ary analogues of the Catalan numbers. It is the family of all labelled plane trees (rooted trees where the order of branches matters) with vertex labels in the set $[k] = \{1,2,\ldots,k\}$ and the restriction that the sum of the labels along any edge is never greater than $k+1$. Figure~\ref{fig:4plane} shows an example of a $4$-plane tree. \medskip Note that $1$-plane trees are simply plane trees where every vertex is labelled $1$, which are counted by the Catalan numbers. Moreover, we note that a plane tree with vertex labels in $\{1,2\}$ is a $2$-plane tree if and only if the vertices labelled $2$ form an independent set. Therefore, the total number of $2$-plane trees is the same as the total number of independent sets in all plane trees, which was determined in~\cite{Klazar1997Twelve}. \medskip The number of $k$-plane trees with $n$ vertices is the generalised Catalan number $$\frac{1}{n-1} \binom{(k+1)(n-1)}{n} = \frac{k}{n} \binom{(k+1)(n-1)}{n-1},$$ and there is a similar formula for the number of $k$-plane trees with $n$ vertices whose root is labelled $h$: $$\frac{k+1-h}{kn-h+1} \binom{(k+1)n-h-1}{n-1}.$$ In particular, we obtain the number of $(k+1)$-ary trees (trees where every internal vertex has precisely $k+1$ children) with $n-1$ internal vertices when $h = k$. An explicit bijection is provided in~\cite{GuProdingerWagner2010Bijections}. 
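These counting formulas are easy to check numerically. The following sketch (ours, using exact rational arithmetic) verifies that the two expressions for the total count agree, that the counts are integers, and that the case $k = 1$ recovers the Catalan numbers:

```python
from fractions import Fraction
from math import comb

def total_k_plane(k, n):
    """Number of k-plane trees with n vertices (first form in the text)."""
    return Fraction(comb((k + 1) * (n - 1), n), n - 1)

def total_k_plane_alt(k, n):
    """Equivalent second form, (k/n) * C((k+1)(n-1), n-1)."""
    return Fraction(k * comb((k + 1) * (n - 1), n - 1), n)

def catalan(m):
    return comb(2 * m, m) // (m + 1)

for k in range(1, 6):
    for n in range(2, 10):
        assert total_k_plane(k, n) == total_k_plane_alt(k, n)
        assert total_k_plane(k, n).denominator == 1   # counts are integers

# k = 1: plane trees (every label equal to 1), counted by Catalan numbers
for n in range(2, 10):
    assert total_k_plane(1, n) == catalan(n - 1)
```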
\begin{figure}[htbp] \centering \begin{tikzpicture} \node[draw,circle,inner sep=2pt] (u1) at (0,0) {2}; \node[draw,circle,inner sep=2pt] (u2) at (-2,-1) {3}; \node[draw,circle,inner sep=2pt] (u3) at (0,-1) {1}; \node[draw,circle,inner sep=2pt] (u4) at (2,-1) {2}; \node[draw,circle,inner sep=2pt] (u5) at (-2,-2) {1}; \node[draw,circle,inner sep=2pt] (u6) at (1.5,-2) {3}; \node[draw,circle,inner sep=2pt] (u7) at (2.5,-2) {2}; \node[draw,circle,inner sep=2pt] (u8) at (-3.5,-3) {4}; \node[draw,circle,inner sep=2pt] (u9) at (-2.5,-3) {2}; \node[draw,circle,inner sep=2pt] (u10) at (-1.5,-3) {4}; \node[draw,circle,inner sep=2pt] (u11) at (-0.5,-3) {3}; \node[draw,circle,inner sep=2pt] (u12) at (2.5,-3) {1}; \node[draw,circle,inner sep=2pt] (u13) at (-2.5,-4) {3}; \node[draw,circle,inner sep=2pt] (u14) at (-0.5,-4) {1}; \node[draw,circle,inner sep=2pt] (u15) at (2,-4) {4}; \node[draw,circle,inner sep=2pt] (u16) at (3,-4) {3}; \node[draw,circle,inner sep=2pt] (u17) at (-1,-5) {2}; \node[draw,circle,inner sep=2pt] (u18) at (0,-5) {4}; \node[draw,circle,inner sep=2pt] (u19) at (2,-5) {1}; \draw (u1)--(u2); \draw (u1)--(u3); \draw (u1)--(u4); \draw (u2)--(u5); \draw (u4)--(u6); \draw (u4)--(u7); \draw (u5)--(u8); \draw (u5)--(u9); \draw (u5)--(u10); \draw (u5)--(u11); \draw (u7)--(u12); \draw (u9)--(u13); \draw (u11)--(u14); \draw (u12)--(u15); \draw (u12)--(u16); \draw (u14)--(u17); \draw (u14)--(u18); \draw (u15)--(u19); \end{tikzpicture} \caption{An example of a $4$-plane tree.}\label{fig:4plane} \end{figure} \medskip The aim of this paper is to provide refined counting formulas for $k$-plane trees based on the number of occurrences of each label. Perhaps surprisingly, there is an explicit product formula for the number of $k$-plane trees with prescribed multiplicities of all labels. The main theorem reads as follows: \begin{thm}\label{thm:main1} Let $n > 1$. 
The number of $k$-plane trees with root label $h$, $\ell_i$ vertices labelled $i$ ($i \in [k]$) and $n = \ell_1+\ell_2 + \cdots + \ell_k$ vertices in total is given by \begin{multline*} \frac{\ell_h}{n(2n-1)} \prod_{r=1}^{\lceil k/2 \rceil} \binom{2n-1-\sum_{j=1}^{r-1} \ell_j - \sum_{j=k+2-r}^k \ell_j}{\ell_r} \\ \prod_{r=1}^{h-1} \binom{\sum_{j=1}^{r} \ell_j + \sum_{j=k+1-r}^k \ell_j - 1}{\ell_{k+1-r}} \prod_{r=h}^{\lfloor k/2 \rfloor} \binom{\sum_{j=1}^{r} \ell_j + \sum_{j=k+1-r}^k \ell_j}{\ell_{k+1-r}} \end{multline*} if $h \leq \lceil k/2 \rceil$, and by \begin{multline*} \frac{\ell_h}{(n-1)(2n-1)} \prod_{r=1}^{k+1-h} \binom{2n-1-\sum_{j=1}^{r-1} \ell_j - \sum_{j=k+2-r}^k \ell_j}{\ell_r} \\ \prod_{r=k+2-h}^{\lceil k/2 \rceil} \binom{2n-2-\sum_{j=1}^{r-1} \ell_j - \sum_{j=k+2-r}^k \ell_j}{\ell_r} \prod_{r=1}^{\lfloor k/2 \rfloor} \binom{\sum_{j=1}^{r} \ell_j + \sum_{j=k+1-r}^k \ell_j - 1}{\ell_{k+1-r}} \end{multline*} otherwise. The total number of $k$-plane trees with $\ell_i$ vertices labelled $i$ ($i \in [k]$) and $n = \ell_1+\ell_2 + \cdots + \ell_k$ vertices in total is $$\frac{1}{n-1} \prod_{r=1}^{\lceil k/2 \rceil} \binom{2n-2-\sum_{j=1}^{r-1} \ell_j - \sum_{j=k+2-r}^k \ell_j}{\ell_r} \prod_{r=1}^{\lfloor k/2 \rfloor} \binom{\sum_{j=1}^{r} \ell_j + \sum_{j=k+1-r}^k \ell_j - 1}{\ell_{k+1-r}}.$$ \end{thm} Let us remark here that empty products are always considered to be $1$, and empty sums are considered to be $0$. 
To illustrate the result in a special case, let us give the formula for the total number of $4$-plane trees with $\ell_1,\ell_2,\ell_3,\ell_4$ vertices labelled $1,2,3,4$ respectively and $n = \ell_1 + \ell_2 + \ell_3 + \ell_4$ vertices in total: \begin{multline*} \frac{1}{n-1} \binom{2n-2}{\ell_1} \binom{2n-2-\ell_1-\ell_4}{\ell_2} \binom{\ell_1 + \ell_2 + \ell_3 + \ell_4 - 1}{\ell_3} \binom{\ell_1 + \ell_4 - 1}{\ell_4} \\ = \frac{1}{n-1} \binom{2n-2}{\ell_1} \binom{2n-2-\ell_1-\ell_4}{\ell_2} \binom{n-1}{\ell_3} \binom{\ell_1 + \ell_4 - 1}{\ell_4}. \end{multline*} Let us also mention again that $2$-plane trees are precisely $\{1,2\}$-labelled plane trees where the vertices labelled $2$ form an independent set. Therefore, for $k=2$, we obtain formulas due to Kirschenhofer, Prodinger and Tichy \cite{Kirschenhofer1986Fibonacci} for the total number of independent sets of a given size (containing the root, not containing the root, and total) in plane trees with $n$ vertices as special cases. \medskip Theorem~\ref{thm:main1} will be proven in Section~\ref{sec:plane} by first establishing a system of functional equations, which can be solved explicitly by means of a suitable substitution. The formula finally follows by an application of the Lagrange-B\"urmann formula. We will derive a number of corollaries from the formula in Theorem~\ref{thm:main1}, in particular on the average number of occurrences of a specific label. \medskip The family of $k$-plane trees can also be bijectively related to lattice paths with up-steps of the form $(1,1)$ and down-steps of the form $(1,-k)$, see \cite{GuProdingerWagner2010Bijections}. In \cite{Heuberger2022Enumeration}, such paths are enumerated by the $y$-coordinates of the down-steps modulo $k$. Interestingly, the number of paths with exactly $a_i$ down-steps at level $i$ modulo $k$ for every $i$ turns out to be given by a product formula similar to, albeit somewhat different from, those in Theorem~\ref{thm:main1}.
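As an independent numerical sanity check (ours, not part of the paper), summing the $k=4$ formula above over all label multiplicities with $\ell_1+\ell_2+\ell_3+\ell_4 = n$ recovers the total number of $4$-plane trees, $\frac{1}{n-1}\binom{5(n-1)}{n}$; the convention $\binom{m}{0} = 1$ for every $m$ (empty products) must be implemented explicitly:

```python
from fractions import Fraction
from math import comb

def binom(a, b):
    # Binomial coefficient with the paper's empty-product convention:
    # binom(a, 0) = 1 for every a (including negative a); zero when 0 < b
    # and a < b or a < 0.
    if b == 0:
        return 1
    if a < 0 or a < b:
        return 0
    return comb(a, b)

def count_4plane(l1, l2, l3, l4):
    """The k = 4 special case of the total count in Theorem 1."""
    n = l1 + l2 + l3 + l4
    return Fraction(binom(2 * n - 2, l1) * binom(2 * n - 2 - l1 - l4, l2)
                    * binom(n - 1, l3) * binom(l1 + l4 - 1, l4), n - 1)

for n in range(2, 7):
    total = sum(count_4plane(l1, l2, l3, n - l1 - l2 - l3)
                for l1 in range(n + 1)
                for l2 in range(n + 1 - l1)
                for l3 in range(n + 1 - l1 - l2))
    # Summing over all multiplicities gives the generalised Catalan number
    assert total == Fraction(comb(5 * (n - 1), n), n - 1)
```

For instance, with $n = 3$ the twenty compositions sum to $60 = \frac{1}{2}\binom{10}{3}$.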
\medskip In Section~\ref{sec:noncrossing}, we consider a similar family of trees called $k$-noncrossing trees: recall that a noncrossing tree is a tree whose vertices $v_1,v_2,\ldots,v_n$ can be arranged as points on a circle (in this order) with the edges represented by line segments between these points that do not intersect at interior points. In analogy to $k$-plane trees, one defines $k$-noncrossing trees as noncrossing trees whose vertices are labelled with labels in $[k]$ in such a way that the labels of two adjacent vertices $v_i, v_j$ with $i < j$ cannot add up to a sum greater than $k+1$ if the path from the root $v_1$ to $v_j$ contains $v_i$ (this includes the case that $i = 1$). Figure~\ref{fig:3noncr} shows an example of a $3$-noncrossing tree. Note that it contains two edges between vertices labelled $2$ and $3$ respectively that would not be allowed in a $k$-plane tree, but are allowed here because the path from the root moves from the vertex with higher index to the vertex with lower index. \medskip The special case $k = 2$ was considered in \cite{YanLiu2009Noncrossing}, where a bijection between $2$-noncrossing trees with a root labelled $2$ and $5$-ary trees was constructed. The more general case was studied in \cite{Pang2010kNoncrossing}, see also \cite{Okoth2022Enumeration}. 
\begin{figure}[htbp] \centering \begin{tikzpicture} \node[draw,circle,inner sep=2pt] (u1) at (0,3) {2}; \node[draw,circle,inner sep=2pt] (u2) at (1.5,2.6) {3}; \node[draw,circle,inner sep=2pt] (u3) at (2.6,1.5) {1}; \node[draw,circle,inner sep=2pt] (u4) at (3,0) {2}; \node[draw,circle,inner sep=2pt] (u5) at (2.6,-1.5) {2}; \node[draw,circle,inner sep=2pt] (u6) at (1.5,-2.6) {1}; \node[draw,circle,inner sep=2pt] (u7) at (0,-3) {3}; \node[draw,circle,inner sep=2pt] (u8) at (-1.5,-2.6) {2}; \node[draw,circle,inner sep=2pt] (u9) at (-2.6,-1.5) {1}; \node[draw,circle,inner sep=2pt] (u10) at (-3,0) {3}; \node[draw,circle,inner sep=2pt] (u11) at (-2.6,1.5) {1}; \node[draw,circle,inner sep=2pt] (u12) at (-1.5,2.6) {3}; \draw (u1)--(u4); \draw (u1)--(u8); \draw (u1)--(u11); \draw (u2)--(u3); \draw (u2)--(u4); \draw (u5)--(u6); \draw (u5)--(u8); \draw (u7)--(u8); \draw (u8)--(u9); \draw (u9)--(u10); \draw (u11)--(u12); \node at (0,3.6) {$v_1$}; \node at (1.8,3.1) {$v_2$}; \node at (3.1,1.8) {$v_3$}; \node at (3.6,0) {$v_4$}; \node at (3.1,-1.8) {$v_5$}; \node at (1.8,-3.1) {$v_6$}; \node at (0,-3.6) {$v_7$}; \node at (-1.8,-3.1) {$v_8$}; \node at (-3.1,-1.8) {$v_9$}; \node at (-3.6,0) {$v_{10}$}; \node at (-3.1,1.8) {$v_{11}$}; \node at (-1.8,3.1) {$v_{12}$}; \end{tikzpicture} \caption{An example of a $3$-noncrossing tree.}\label{fig:3noncr} \end{figure} \medskip In analogy to Theorem~\ref{thm:main1}, we will also be counting $k$-noncrossing trees by the number of vertices of each label. The resulting formulas are quite similar and again surprisingly explicit. \begin{thm}\label{thm:main2} Let $n > 1$. 
The number of $k$-noncrossing trees with root label $h$, $\ell_i$ vertices labelled $i$ ($i \in [k]$) and $n = \ell_1+\ell_2 + \cdots + \ell_k$ vertices in total is given by \begin{align*} &\frac{2\ell_h}{(2n-1)(4n-3)} \prod_{r=1}^{\lceil k/2 \rceil} \binom{3n-2-\sum_{j=1}^{r-1} \ell_j - \sum_{j=k+2-r}^k \ell_j}{\ell_r} \\ &\qquad \prod_{r=1}^{h-1} \binom{n - 2 + \sum_{j=1}^{r} \ell_j + \sum_{j=k+1-r}^k \ell_j}{\ell_{k+1-r}} \prod_{r=h}^{\lfloor k/2 \rfloor} \binom{n - 1 + \sum_{j=1}^{r} \ell_j + \sum_{j=k+1-r}^k \ell_j}{\ell_{k+1-r}} \\ &\quad - \frac{\ell_h }{(4n-3)(3n-2-\sum_{j=1}^{h-1} \ell_j - \sum_{j=k+2-h}^k \ell_j)} \prod_{r=1}^{\lfloor k/2 \rfloor} \binom{n - 1 + \sum_{j=1}^{r} \ell_j + \sum_{j=k+1-r}^k \ell_j}{\ell_{k+1-r}} \\ &\qquad \prod_{r=1}^{h-1} \binom{3n-3-\sum_{j=1}^{r-1} \ell_j - \sum_{j=k+2-r}^k \ell_j}{\ell_r} \prod_{r=h}^{\lceil k/2 \rceil} \binom{3n-2-\sum_{j=1}^{r-1} \ell_j - \sum_{j=k+2-r}^k \ell_j}{\ell_r} \end{align*} if $h \leq \lceil k/2 \rceil$, and by \begin{align*} &\frac{\ell_h}{(n-1)(4n-3)} \prod_{r=1}^{\lfloor k/2 \rfloor} \binom{n - 2 + \sum_{j=1}^{r} \ell_j + \sum_{j=k+1-r}^k \ell_j}{\ell_{k+1-r}} \\ &\qquad \prod_{r=1}^{k+1-h} \binom{3n-2-\sum_{j=1}^{r-1} \ell_j - \sum_{j=k+2-r}^k \ell_j}{\ell_r} \prod_{r=k+2-h}^{\lceil k/2 \rceil} \binom{3n-3-\sum_{j=1}^{r-1} \ell_j - \sum_{j=k+2-r}^k \ell_j}{\ell_r} \\ &\quad - \frac{\ell_h }{(4n-3)(n - 1 + \sum_{j=1}^{k+1-h} \ell_j + \sum_{j=h}^k \ell_j)} \prod_{r=1}^{\lceil k/2 \rceil} \binom{3n-3-\sum_{j=1}^{r-1} \ell_j - \sum_{j=k+2-r}^k \ell_j}{\ell_r} \\ &\qquad \prod_{r=1}^{k+1-h} \binom{n - 1 + \sum_{j=1}^{r} \ell_j + \sum_{j=k+1-r}^k \ell_j}{\ell_{k+1-r}} \prod_{r=k+2-h}^{\lfloor k/2 \rfloor} \binom{n - 2 + \sum_{j=1}^{r} \ell_j + \sum_{j=k+1-r}^k \ell_j}{\ell_{k+1-r}} \end{align*} otherwise. 
The total number of $k$-noncrossing trees with $\ell_i$ vertices labelled $i$ ($i \in [k]$) and $n = \ell_1+\ell_2 + \cdots + \ell_k$ vertices in total is \begin{multline*} \frac{1}{n-1} \prod_{r=1}^{\lceil k/2 \rceil} \binom{3n-3-\sum_{j=1}^{r-1} \ell_j - \sum_{j=k+2-r}^k \ell_j}{\ell_r} \prod_{r=1}^{\lfloor k/2 \rfloor} \binom{n - 2 + \sum_{j=1}^{r} \ell_j + \sum_{j=k+1-r}^k \ell_j}{\ell_{k+1-r}} \\ - \frac{1}{2n-1} \prod_{r=1}^{\lceil k/2 \rceil} \binom{3n-2-\sum_{j=1}^{r-1} \ell_j - \sum_{j=k+2-r}^k \ell_j}{\ell_r} \prod_{r=1}^{\lfloor k/2 \rfloor} \binom{n - 1 + \sum_{j=1}^{r} \ell_j + \sum_{j=k+1-r}^k \ell_j}{\ell_{k+1-r}}. \end{multline*} \end{thm} This theorem will be proven in Section~\ref{sec:noncrossing}, using similar techniques as in the proof of Theorem~\ref{thm:main1}, and several corollaries will follow as well. Let us again illustrate the formula in the special case $k=4$, where the formula for the total number of $4$-noncrossing trees with $\ell_1,\ell_2,\ell_3,\ell_4$ vertices labeled $1,2,3,4$ respectively and $n$ vertices in total is \begin{multline*} \frac{1}{n-1} \binom{3n-3}{\ell_1} \binom{3n-3-\ell_1 -\ell_4}{\ell_2} \binom{2n - 2}{\ell_3} \binom{n - 2 + \ell_1 + \ell_4}{\ell_4} \\ - \frac{1}{2n-1} \binom{3n-2}{\ell_1} \binom{3n-2-\ell_1 - \ell_4}{\ell_2} \binom{2n - 1}{\ell_3} \binom{n - 1 + \ell_1 + \ell_4}{\ell_4}. \end{multline*} \section{Plane trees} \label{sec:plane} The key to proving Theorem~\ref{thm:main1} is a system of equations for the multivariate generating functions of $k$-plane trees with a given root label. We fix $k$ and let $\mathcal{P}_r$ denote the set of $k$-plane trees whose root is labelled $r$. Moreover, we let $h_i(T)$ be the number of vertices labelled $i$ in a tree $T$, and $|T|$ the total number of vertices of $T$. 
Define $$P_r = P_r(z,x_1,x_2,\ldots,x_k) = \sum_{T \in \mathcal{P}_r} z^{|T|} \prod_{i=1}^k x_i^{h_i(T)}.$$ Now note that a tree $T \in \mathcal{P}_r$ can be decomposed into the root (labelled $r$) and a (possibly empty) sequence of branches that are again $k$-plane trees, with root labels in $[k+1-r] = \{1,2,\ldots,k+1-r\}$. Thus we have \begin{equation}\label{eq:mainequation} P_r = x_r z \sum_{j \geq 0} (P_1 + P_2 + \cdots + P_{k+1-r})^j = \frac{x_r z}{1-P_1-P_2 - \cdots - P_{k+1-r}} \end{equation} for all $r \in [k]$. \medskip Now let us set $$F_{k,i}(t) = 1 + \Big( \sum_{j=i}^{\lceil k/2\rceil} x_j - \sum_{j=i}^{\lfloor k/2\rfloor} x_{k+1-j} \Big) t$$ and $$G_{k,i}(t) = 1 + \Big( \sum_{j=i+1}^{\lceil k/2\rceil} x_j - \sum_{j=i}^{\lfloor k/2\rfloor} x_{k+1-j} \Big) t$$ for $1 \leq i \leq \lceil k/2\rceil$. These expressions satisfy the recursions \begin{equation}\label{eq:rec1} F_{k,i+1}(t) = G_{k,i}(t) + x_{k+1-i}t \qquad \text{and} \qquad G_{k,i}(t) = F_{k,i}(t) - x_i t, \end{equation} as well as \begin{equation}\label{eq:rec2} F_{k,i+1}(t) = F_{k,i}(t) + (x_{k+1-i} - x_i)t \qquad \text{and} \qquad G_{k,i+1}(t) = G_{k,i}(t) + (x_{k+1-i} - x_{i+1})t. \end{equation} We can use these recursions to continue the definition of $F_{k,i}(t)$ and $G_{k,i}(t)$ to greater values of $i$: generally, we set $$F_{k,i}(t) = F_{k,1}(t) + \sum_{j=1}^{i-1} (x_{k+1-j} - x_j)t$$ and $$G_{k,i}(t) = G_{k,1}(t) + \sum_{j=1}^{i-1} (x_{k+1-j} - x_{j+1})t.$$ Both~\eqref{eq:rec1} and~\eqref{eq:rec2} remain satisfied. Since $$\sum_{j=i}^{k+1-i} (x_{k+1-j} - x_j) = 0 \qquad \text{and} \qquad \sum_{j=i}^{k-i} (x_{k+1-j} - x_{j+1}) = 0,$$ we see that the following symmetry properties hold: \begin{equation}\label{eq:sym} F_{k,i}(t) = F_{k,k+2-i}(t)\qquad \text{and} \qquad G_{k,i}(t) = G_{k,k+1-i}(t). \end{equation} Moreover, it is important to observe that \begin{equation}\label{eq:midpoint} F_{k,k/2+1} = 1\quad (k \text{ even})\qquad \text{and} \qquad G_{k,(k+1)/2} = 1\quad (k \text{ odd}). 
\end{equation} The key to the proof of Theorem~\ref{thm:main1} is the substitution \begin{equation}\label{eq:subst} P_1 = \frac{x_1 A}{F_{k,1}(A)} \end{equation} for a suitable power series $A$. One can easily solve the equation for $A$ to show that this power series actually exists (and that it is unique). \medskip As it turns out, we can express $P_1,P_2,\ldots,P_{k-1}$ in terms of $A$ as well and also set up a functional equation for $A$ that is amenable to an application of the Lagrange inversion formula. \begin{pro}\label{pro:formulas_for_Pi} The power series $P_1,P_2,\ldots,P_k$ can be expressed in terms of $A$ and the variables $x_1,x_2,\ldots,x_k$ and $z$ in the following way: for $1 \leq h \leq k$, \begin{align*} P_h &= x_h A \prod_{i=1}^{h} F_{k,i}(A)^{-1} \prod_{i=1}^{h-1} G_{k,i}(A), \\ P_{k+1-h} &= x_{k+1-h} z \prod_{i=1}^{h} F_{k,i}(A) \prod_{i=1}^h G_{k,i}(A)^{-1}. \end{align*} \end{pro} \begin{proof} We use induction on $h$. For $h = 1$, the first equation is exactly our substitution~\eqref{eq:subst}, while the second equation follows from~\eqref{eq:mainequation} for $r = k$ and an application of~\eqref{eq:rec1}: $$P_k = \frac{x_k z}{1-P_1} = \frac{x_k z}{1 - \frac{x_1 A}{F_{k,1}(A)}} = \frac{x_k z F_{k,1}(A)}{F_{k,1}(A) - x_1 A} = \frac{x_k z F_{k,1}(A)}{G_{k,1}(A)}.$$ For the induction step, use~\eqref{eq:mainequation} with $r = h$ and $r = h+1$ respectively, which yields \begin{align*} 1 - P_1 - P_2 - \cdots - P_{k+1-h} = \frac{x_h z}{P_h}, \\ 1 - P_1 - P_2 - \cdots - P_{k-h} = \frac{x_{h+1} z}{P_{h+1}}. \end{align*} Now take the difference: \begin{equation}\label{eq:after_elimination} P_{k+1-h} = \frac{x_{h+1}z}{P_{h+1}} - \frac{x_hz}{P_h}. \end{equation} After some simple manipulations, this gives us \begin{equation}\label{eq:induction_step} P_{h+1} = \frac{x_{h+1}z}{P_{k+1-h} + \frac{x_h z}{P_h}}. 
\end{equation} Now it only remains to apply the induction hypothesis and simplify: \begin{align*} P_{h+1} &= \frac{x_{h+1}z}{x_{k+1-h} z \prod_{i=1}^h F_{k,i}(A) \prod_{i=1}^h G_{k,i}(A)^{-1} + \frac{z}{A} \prod_{i=1}^h F_{k,i}(A) \prod_{i=1}^{h-1} G_{k,i}(A)^{-1}} \\ &= \frac{x_{h+1}A}{x_{k+1-h}A + G_{k,h}(A)} \prod_{i=1}^h F_{k,i}(A)^{-1} \prod_{i=1}^h G_{k,i}(A) \\ &= \frac{x_{h+1}A}{F_{k,h+1}(A)} \prod_{i=1}^h F_{k,i}(A)^{-1} \prod_{i=1}^h G_{k,i}(A) \\ &= x_{h+1}A \prod_{i=1}^{h+1} F_{k,i}(A)^{-1} \prod_{i=1}^h G_{k,i}(A). \end{align*} Likewise, replacing $h$ by $k-h$ in~\eqref{eq:after_elimination} gives us $$P_{h+1} = \frac{x_{k+1-h}z}{P_{k+1-h}} - \frac{x_{k-h}z}{P_{k-h}}.$$ Thus $$P_{k-h} = \frac{x_{k-h}z}{\frac{x_{k+1-h}z}{P_{k+1-h}} - P_{h+1}}.$$ Now plug in~\eqref{eq:induction_step} and apply the induction hypothesis. Again, we obtain the desired formula after some further manipulations. \end{proof} \begin{cor}\label{cor:identity_for_A} The power series $A$ satisfies the equation $$A = z \prod_{i=1}^{\lceil k/2 \rceil} F_{k,i}(A)^2 \prod_{i=1}^{\lfloor k/2 \rfloor} G_{k,i}(A)^{-2}.$$ \end{cor} \begin{proof} Replace $h$ by $k+1-h$ in the second equation of Proposition~\ref{pro:formulas_for_Pi} to obtain another representation for $P_h$: $$P_h = x_h z \prod_{i=1}^{k+1-h} F_{k,i}(A) \prod_{i=1}^{k+1-h} G_{k,i}(A)^{-1}.$$ Equating the two expressions for $P_h$ yields $$A = z \prod_{i=1}^h F_{k,i}(A) \prod_{i=1}^{k+1-h} F_{k,i}(A) \prod_{i=1}^{h-1} G_{k,i}(A)^{-1} \prod_{i=1}^{k+1-h} G_{k,i}(A)^{-1}$$ Now we apply the symmetry properties~\eqref{eq:sym} to the second and fourth product: \begin{align*} A &= z \prod_{i=1}^h F_{k,i}(A) \prod_{i=h+1}^{k+1} F_{k,i}(A) \prod_{i=1}^{h-1} G_{k,i}(A)^{-1} \prod_{i=h}^{k} G_{k,i}(A)^{-1} \\ &= z \prod_{i=1}^{k+1} F_{k,i}(A) \prod_{i=1}^{k} G_{k,i}(A)^{-1}. 
\end{align*} Applying the symmetry properties once again, and noting that $F_{k,k/2+1}$ can be left out if $k$ is even, while $G_{k,(k+1)/2}$ can be left out if $k$ is odd (by~\eqref{eq:midpoint}), we end up with $$A = z \prod_{i=1}^{\lceil k/2 \rceil} F_{k,i}(A)^2 \prod_{i=1}^{\lfloor k/2 \rfloor} G_{k,i}(A)^{-2},$$ completing the proof. Note that $h$ could have been chosen arbitrarily for this purpose. \end{proof} We can now proceed with the proof of our first main theorem. \begin{proof}[Proof of Theorem~\ref{thm:main1}] We are now ready to apply the Lagrange-B\"urmann formula \cite[Corollary 5.4.3]{Stanley2}, based on Proposition~\ref{pro:formulas_for_Pi} and Corollary~\ref{cor:identity_for_A}. Let us first recall this formula: if $A$ satisfies an implicit equation of the form $A = z \Phi(A)$, then \begin{equation}\label{eq:lag-bur} [z^n] f(A) = \frac{1}{n} [t^{n-1}] f'(t) \Phi(t)^n. \end{equation} Now suppose first that $h \leq \lceil k/2 \rceil$. In view of Proposition~\ref{pro:formulas_for_Pi} and Corollary~\ref{cor:identity_for_A}, we can apply~\eqref{eq:lag-bur} with $$\Phi(t) = \prod_{i=1}^{\lceil k/2 \rceil} F_{k,i}(t)^2 \prod_{i=1}^{\lfloor k/2 \rfloor} G_{k,i}(t)^{-2}$$ and $$f(t) = x_h t \prod_{i=1}^{h} F_{k,i}(t)^{-1} \prod_{i=1}^{h-1} G_{k,i}(t)$$ in order to compute the coefficients of $P_h$. The derivative $f'$ is determined by means of logarithmic differentiation. 
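As a brief aside (no part of the argument depends on it), both \eqref{eq:lag-bur} and the functional equation $A = z\Phi(A)$ are easy to test numerically. The following Python sketch, an illustration only (the list-based series representation and the truncation order $N$ are ad hoc choices), solves $A = z(1-A)^{-2}$, which is Corollary~\ref{cor:identity_for_A} specialised to $k = 2$ and $x_1 = x_2 = 1$, by fixed-point iteration on truncated power series, and compares the coefficients with the prediction $[z^n] A = \frac{1}{n} [t^{n-1}] (1-t)^{-2n} = \frac{1}{n}\binom{3n-2}{n-1}$ obtained from \eqref{eq:lag-bur} with $f(t) = t$.

```python
from math import comb

N = 10  # truncation order (ad hoc)

def mul(a, b):
    # product of two power series, truncated at order N
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def geom(s):
    # 1/(1 - s) for a series s with zero constant term
    r = [1] + [0] * (N - 1)
    for _ in range(N):
        t = mul(s, r)
        r = [1 + t[0]] + t[1:]
    return r

# Solve A = z * (1 - A)^(-2) by fixed-point iteration;
# each sweep fixes one further coefficient.
A = [0] * N
for _ in range(N):
    g = geom(A)                  # 1/(1 - A)
    A = [0] + mul(g, g)[:N - 1]  # multiply (1 - A)^(-2) by z

# Lagrange-Buermann with f(t) = t, Phi(t) = (1 - t)^(-2):
# [z^n] A = (1/n) * C(3n-2, n-1)
for n in range(1, N):
    assert n * A[n] == comb(3 * n - 2, n - 1)
```

The first coefficients produced are $1, 2, 7, 30, 143, \ldots$, in agreement with the prediction.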
It is also important to observe that $$\frac{F'_{k,i}(t)}{F_{k,i}(t)} = \frac{1}{t} \Big( 1 - \frac{1}{F_{k,i}(t)} \Big)\qquad \text{and} \qquad \frac{G'_{k,i}(t)}{G_{k,i}(t)} = \frac{1}{t} \Big( 1 - \frac{1}{G_{k,i}(t)} \Big),$$ which yields \begin{align*} f'(t) &= f(t) \Big( \frac1{t} - \sum_{i=1}^h \frac{F'_{k,i}(t)}{F_{k,i}(t)} + \sum_{i=1}^{h-1} \frac{G'_{k,i}(t)}{G_{k,i}(t)} \Big) \\ &= f(t) \Big( \frac1{t} - \frac1{t} \sum_{i=1}^h \Big( 1 - \frac{1}{F_{k,i}(t)} \Big) + \frac1{t} \sum_{i=1}^{h-1} \Big( 1 - \frac{1}{G_{k,i}(t)} \Big) \Big) \\ &= \frac{f(t)}{t} \Big( \sum_{i=1}^h \frac{1}{F_{k,i}(t)} - \sum_{i=1}^{h-1} \frac{1}{G_{k,i}(t)} \Big). \end{align*} So we have \begin{align*} [z^n x_1^{\ell_1} \cdots x_k^{\ell_k}] P_h &= \frac{1}{n} [t^{n-1} x_1^{\ell_1} \cdots x_k^{\ell_k}] f'(t) \Phi(t)^n \\ &= \frac{1}{n} [t^{n-1} x_1^{\ell_1} \cdots x_k^{\ell_k}] x_h \prod_{i=1}^{h} F_{k,i}(t)^{-1} \prod_{i=1}^{h-1} G_{k,i}(t) \Big( \sum_{i=1}^h \frac{1}{F_{k,i}(t)} - \sum_{i=1}^{h-1} \frac{1}{G_{k,i}(t)} \Big) \\ &\qquad \prod_{i=1}^{\lceil k/2 \rceil} F_{k,i}(t)^{2n} \prod_{i=1}^{\lfloor k/2 \rfloor} G_{k,i}(t)^{-2n} \\ &= \frac{1}{n} [t^{n-1} x_1^{\ell_1} \cdots x_h^{\ell_h-1} \cdots x_k^{\ell_k}] \Big( \prod_{i=h+1}^{\lceil k/2 \rceil} F_{k,i}(t) \prod_{i=h}^{\lfloor k/2 \rfloor} G_{k,i}(t)^{-1} \Big)^{2n} \\ &\qquad \Big( \prod_{i=1}^{h} F_{k,i}(t) \prod_{i=1}^{h-1} G_{k,i}(t)^{-1} \Big)^{2n-1} \Big( \sum_{i=1}^h \frac{1}{F_{k,i}(t)} - \sum_{i=1}^{h-1} \frac{1}{G_{k,i}(t)} \Big). \end{align*} At this point, we can drop the variable $t$ (equivalently, set $t=1$), since the coefficient of $t^{n-1} x_1^{\ell_1} \cdots x_h^{\ell_h-1} \cdots x_k^{\ell_k}$ is only nonzero if $\ell_1 + \cdots + \ell_k = n$. Thus we will also only write $F_{k,i}$ and $G_{k,i}$ instead of $F_{k,i}(1)$ and $G_{k,i}(1)$ respectively. \medskip Next we note that $F_{k,1},F_{k,2},\ldots,F_{k,h}$ and $G_{k,1},G_{k,2},\ldots,G_{k,h-1}$ are the only factors that contain the variable $x_h$. 
Moreover, using logarithmic differentiation once again, one finds that $$ \frac{\partial}{\partial x_h} \Big( \prod_{i=1}^{h} F_{k,i} \prod_{i=1}^{h-1} G_{k,i}^{-1} \Big)^{2n-1} = (2n-1) \Big( \prod_{i=1}^{h} F_{k,i} \prod_{i=1}^{h-1} G_{k,i}^{-1} \Big)^{2n-1}\Big( \sum_{i=1}^h \frac{1}{F_{k,i}} - \sum_{i=1}^{h-1} \frac{1}{G_{k,i}} \Big). $$ For any power series $X$, the coefficient of $x_h^{\ell_h-1}$ in $\frac{\partial}{\partial x_h} X$ is precisely $\ell_h$ times the coefficient of $x_h^{\ell_h}$ in $X$. Thus we get $$ [z^n x_1^{\ell_1} \cdots x_k^{\ell_k}] P_h = \frac{\ell_h}{n(2n-1)} [x_1^{\ell_1} \cdots x_k^{\ell_k}] \Big( \prod_{i=1}^{h} F_{k,i} \prod_{i=1}^{h-1} G_{k,i}^{-1} \Big)^{2n-1} \Big( \prod_{i=h+1}^{\lceil k/2 \rceil} F_{k,i} \prod_{i=h}^{\lfloor k/2 \rfloor} G_{k,i}^{-1} \Big)^{2n}. $$ Now we finally start extracting coefficients. First of all, we observe that $x_1$ only occurs in the factor $F_{k,1}$. Moreover, we have $$F_{k,1}^{2n-1} = (x_1 + G_{k,1})^{2n-1},$$ so the coefficient of $x_1^{\ell_1}$ is $\binom{2n-1}{\ell_1} G_{k,1}^{2n-1-\ell_1}$, giving us \begin{multline*} [z^n x_1^{\ell_1} \cdots x_k^{\ell_k}] P_h = \frac{\ell_h}{n(2n-1)} \binom{2n-1}{\ell_1} [x_2^{\ell_2} \cdots x_k^{\ell_k}] G_{k,1}^{-\ell_1} \\ \Big( \prod_{i=2}^{h} F_{k,i} \prod_{i=2}^{h-1} G_{k,i}^{-1} \Big)^{2n-1} \Big( \prod_{i=h+1}^{\lceil k/2 \rceil} F_{k,i} \prod_{i=h}^{\lfloor k/2 \rfloor} G_{k,i}^{-1} \Big)^{2n}. \end{multline*} Among the remaining factors, $G_{k,1}$ is the only one that contains $x_k$, and we have \begin{align*} [x_k^{\ell_k}] G_{k,1}^{-\ell_1} &= [x_k^{\ell_k}] (F_{k,2} - x_k)^{-\ell_1} = [x_k^{\ell_k}] F_{k,2}^{-\ell_1} \Big( 1 - \frac{x_k}{F_{k,2}} \Big)^{-\ell_1} \\ &= F_{k,2}^{-\ell_1} \binom{-\ell_1}{\ell_k} \Big( - \frac{1}{F_{k,2}}\Big)^{\ell_k} = \binom{\ell_1+\ell_k-1}{\ell_k} F_{k,2}^{-\ell_1-\ell_k}. 
\end{align*} It follows that \begin{multline*} [z^n x_1^{\ell_1} \cdots x_k^{\ell_k}] P_h = \frac{\ell_h}{n(2n-1)} \binom{2n-1}{\ell_1} \binom{\ell_1+\ell_k-1}{\ell_k} [x_2^{\ell_2} \cdots x_{k-1}^{\ell_{k-1}}] F_{k,2}^{2n-\ell_1-\ell_k-1} \\ \Big( \prod_{i=3}^{h} F_{k,i} \prod_{i=2}^{h-1} G_{k,i}^{-1} \Big)^{2n-1} \Big( \prod_{i=h+1}^{\lceil k/2 \rceil} F_{k,i} \prod_{i=h}^{\lfloor k/2 \rfloor} G_{k,i}^{-1} \Big)^{2n}. \end{multline*} We can now continue in this way, considering the variables $x_1,x_k,x_2,x_{k-1},x_3,\ldots$ in this order. At the end, we have precisely the first formula in Theorem~\ref{thm:main1}. \medskip The case that $h > \lceil k/2 \rceil$ is treated in a similar way: since $$P_{h} = x_{h} z \prod_{i=1}^{k+1-h} F_{k,i}(A) \prod_{i=1}^{k+1-h} G_{k,i}(A)^{-1}$$ in this case by the second equation of Proposition~\ref{pro:formulas_for_Pi}, we apply the Lagrange-B\"urmann formula with $$f(t) = x_h \prod_{i=1}^{k+1-h} F_{k,i}(t) \prod_{i=1}^{k+1-h} G_{k,i}(t)^{-1}$$ and the same function $\Phi$ as before. This yields \begin{align*} [z^n x_1^{\ell_1} \cdots x_k^{\ell_k}] P_h &= [z^{n-1} x_1^{\ell_1} \cdots x_k^{\ell_k}] f(A) \\ &= \frac{1}{n-1} [t^{n-2} x_1^{\ell_1} \cdots x_k^{\ell_k}] f'(t) \Phi(t)^{n-1}. \end{align*} The remaining steps are completely analogous to the first case. \medskip Finally, we consider the total number of $k$-plane trees, without taking the root label into account. Here, we first observe that the generating function is $$P_1+P_2 + \cdots + P_k = 1 - \frac{x_1z}{P_1}$$ in view of~\eqref{eq:mainequation} (for $r = 1$). In terms of $A$, this becomes \begin{equation}\label{eq:total_gf} P_1+P_2 + \cdots + P_k = 1 - \frac{z F_{k,1}(A)}{A} \end{equation} by \eqref{eq:subst}. Thus $$[z^n x_1^{\ell_1} \cdots x_k^{\ell_k}] (P_1+P_2+\cdots+P_k) = - [z^{n-1} x_1^{\ell_1} \cdots x_k^{\ell_k}] \frac{F_{k,1}(A)}{A}.$$ Now we apply the Lagrange-B\"urmann formula once again.
Noting that $$\frac{\partial}{\partial t} \frac{F_{k,1}(t)}{t} = - \frac{1}{t^2},$$ we obtain \begin{align*} [z^n x_1^{\ell_1} \cdots x_k^{\ell_k}] (P_1+P_2+\cdots+P_k) &= \frac{1}{n-1} [t^{n-2} x_1^{\ell_1} \cdots x_k^{\ell_k}] t^{-2} \Phi(t)^{n-1} \\ &= \frac{1}{n-1} [t^{n} x_1^{\ell_1} \cdots x_k^{\ell_k}] \Phi(t)^{n-1}, \end{align*} with the same function $\Phi$ as before. Again, the remaining steps are completely analogous. \end{proof} We conclude the section with corollaries of Theorem~\ref{thm:main1} that follow by specialisation. First of all, the following formulas from \cite{GuProdingerWagner2010Bijections} follow easily by ignoring the variables $x_1,x_2,\ldots,x_k$. \begin{cor}\label{cor:total} For every positive integer $n$, the total number of $k$-plane trees with $n$ vertices is $$\frac{k}{n} \binom{(k+1)(n-1)}{n-1}.$$ The number of $k$-plane trees with $n$ vertices whose root is labelled $h$ is $$\frac{k+1-h}{kn-h+1} \binom{(k+1)n-h-1}{n-1}.$$ \end{cor} \begin{proof} Since the number of labels of each kind is no longer relevant, we can set $x_1 = x_2 = \cdots = x_k = 1$. We get $$F_{k,i}(t) = \begin{cases} 1 & k \text{ even,} \\ 1 + t & k \text{ odd,} \end{cases}$$ as well as $$G_{k,i}(t) = \begin{cases} 1 - t & k \text{ even,} \\ 1 & k \text{ odd,} \end{cases}$$ for all values of $i$. Consider the case that $k$ is even, the other being similar. Corollary~\ref{cor:identity_for_A} gives us $$A = z (1-A)^{-k}.$$ So by the Lagrange-B\"urmann formula, we have \begin{align*} [z^n] (P_1+P_2 + \cdots + P_k) &= [z^n] \Big( 1 - \frac{z F_{k,1}(A)}{A} \Big) = - [z^{n-1}] A^{-1} \\ &= \frac{1}{n-1} [t^{n-2}] t^{-2} (1-t)^{-k(n-1)} = \frac{1}{n-1} [t^{n}] (1-t)^{-k(n-1)} \\ &= \frac{1}{n-1} \binom{(k+1)(n-1)}{n} = \frac{k}{n} \binom{(k+1)(n-1)}{n-1}. \end{align*} This is exactly the first formula. Considering the coefficients of $P_h$, which count trees whose root is labelled $h$, we have $$P_h = A (1-A)^{h-1}$$ by Proposition~\ref{pro:formulas_for_Pi}. 
Thus \begin{align*} [z^n] P_h & = \frac{1}{n} [t^{n-1}] \big((1-t)^{h-1} - t(h-1)(1-t)^{h-2}\big) (1-t)^{-k n} \\ &= \frac{1}{n} [t^{n-1}] (1-ht) (1-t)^{-k n + h - 2} \\ &= \frac{1}{n} \Big( \binom{(k+1) n - h}{n-1} - h \binom{(k+1) n - h - 1}{n-2} \Big) \\ &= \frac{1}{n} \binom{(k+1) n - h - 1}{n-1} \Big( \frac{(k+1) n - h}{k n - h + 1} - \frac{h(n-1)}{k n - h + 1} \Big) \\ &= \frac{k - h + 1}{k n - h + 1} \binom{(k+1) n - h - 1}{n-1}. \end{align*} \end{proof} Next, we count $k$-plane trees by occurrences of a single label. \begin{cor}\label{cor:single_label} For every $n > 1$, the total number of $k$-plane trees with $n$ vertices of which $\ell$ are labelled $h$ is equal to $$\frac{1}{n-1} \sum_{r=0}^{n-\ell} \binom{2(h-1)(n-1)+r-1}{r} \binom{2(n-1)-r}{\ell} \binom{(k+1-2h)(n-1)}{n-r-\ell}$$ if $h \leq \lceil k/2 \rceil$, and equal to $$\frac{1}{n-1} \sum_{r=0}^{n-\ell} \binom{2(k+1-h)(n-1)}{r} \binom{r+\ell-1}{\ell} \binom{(2h-k-1)(n-1)-r-\ell}{n-r-\ell}$$ otherwise. \end{cor} \begin{proof} Let us give the proof in the case that $k$ is odd and $h \leq \lceil k/2 \rceil = \frac{k+1}{2}$, the other cases being similar. Since we are only interested in vertices labelled $h$, we set $x_i = 1$ for all $i$ except $h$. Then we get $$F_{k,i}(t) = \begin{cases} 1+x_ht & i \leq h, \\ 1+t & i > h,\end{cases} \qquad \text{and} \qquad G_{k,i}(t) = \begin{cases} 1 + (x_h-1)t & i < h, \\ 1 & i \geq h. \end{cases}$$ Now we have to determine $$[z^n x_h^{\ell}] (P_1 + P_2 + \cdots + P_k).$$ Using~\eqref{eq:total_gf} and the Lagrange-B\"urmann formula once again, we find that this equals \begin{multline*} [z^n x_h^{\ell}] (P_1 + P_2 + \cdots + P_k) \\ = \frac{1}{n-1} [t^n x_h^{\ell}] (1+x_h t)^{2h(n-1)} (1+t)^{(k+1-2h)(n-1)} (1+(x_h-1)t)^{-2(h-1)(n-1)}.
\end{multline*} Now we extract the coefficient as follows: \begin{align*} [z^n x_h^{\ell}] &(P_1 + P_2 + \cdots + P_k) \\ &= \frac{1}{n-1} [t^n x_h^{\ell}] \Big( 1 - \frac{t}{1+x_h t}\Big)^{-2(h-1)(n-1)} (1+x_h t)^{2(n-1)} (1+t)^{(k+1-2h)(n-1)} \\ &= \frac{1}{n-1} [t^n x_h^{\ell}] \sum_{r \geq 0} \binom{2(h-1)(n-1)+r-1}{r} t^r (1+x_ht)^{2(n-1)-r} (1+t)^{(k+1-2h)(n-1)} \\ &= \frac{1}{n-1} \sum_{r \geq 0} \binom{2(h-1)(n-1)+r-1}{r} [t^{n-r} x_h^{\ell}] (1+x_ht)^{2(n-1)-r} (1+t)^{(k+1-2h)(n-1)} \\ &= \frac{1}{n-1} \sum_{r \geq 0} \binom{2(h-1)(n-1)+r-1}{r} \binom{2(n-1)-r}{\ell} [t^{n-r-\ell}] (1+t)^{(k+1-2h)(n-1)} \\ &= \frac{1}{n-1} \sum_{r=0}^{n-\ell} \binom{2(h-1)(n-1)+r-1}{r} \binom{2(n-1)-r}{\ell} \binom{(k+1-2h)(n-1)}{n-r-\ell}. \end{align*} \end{proof} \begin{cor}\label{cor:average} For every $n > 1$, the average number of vertices labelled $h$ in $k$-plane trees with $n$ vertices is $$\frac{2(k+1-h)n}{k(k+1)}.$$ \end{cor} \begin{proof} As in the previous proof, we only consider the case that $k$ is odd and $h \leq \lceil k/2 \rceil = \frac{k+1}{2}$. Instead of extracting coefficients, we take the derivative with respect to $x_h$ and plug in $x_h=1$ in order to determine the total number of vertices labelled $h$ in all $k$-plane trees. All other variables $x_i$ are immediately taken to be $1$. This gives us \begin{align*} [z^n] &\frac{\partial}{\partial x_h} (P_1+\cdots + P_k) \Big|_{x_1=\cdots=x_k = 1} \\ &= \frac{1}{n-1} [t^n] \frac{\partial}{\partial x_h} (1+x_h t)^{2h(n-1)} (1+t)^{(k+1-2h)(n-1)} (1+(x_h-1)t)^{-2(h-1)(n-1)} \Big|_{x_h=1} \\ &= [t^n] 2t \big(1-(h-1)t \big) (1+t)^{(k+1)(n-1)-1} \\ &= 2[t^{n-1}] \big(1-(h-1)t \big) (1+t)^{(k+1)(n-1)-1} \\ &= 2 \binom{(k+1)(n-1)-1}{n-1} - 2(h-1) \binom{(k+1)(n-1)-1}{n-2}. \end{align*} Dividing by the total number of $k$-plane trees (as given in Corollary~\ref{cor:total}), we obtain the stated formula. 
\end{proof} It is also possible to derive formulas for the variance of the number of vertices labelled $h$, as well as covariances of two different label counts. However, the formulas are somewhat unwieldy. Moreover, one could also take the root label into account in Corollary~\ref{cor:single_label} and Corollary~\ref{cor:average}. Instead of stating the most general result (which would be rather lengthy), we illustrate this in the special case $k=3$. \begin{cor}\label{variance_plane} Let $n > 1$. Variances and covariances of the number of vertices labelled $1,2,3$ respectively in $3$-plane trees with $n$ vertices are given in the following table: \begin{center} \begin{tabular}{c|ccc} & $1$ & $2$ & $3$ \\ \hline $1$ & $\frac{n(3n-4)}{4(4n-5)}$ & $-\frac{n}{6}$ & $- \frac{n(n-2)}{12(4n-5)}$ \\ $2$ & $-\frac{n}{6}$ & $\frac{2n(4n-3)}{9(3n-2)}$ & $- \frac{n(7n-6)}{18(3n-2)}$ \\ $3$ & $- \frac{n(n-2)}{12(4n-5)}$ & $- \frac{n(7n-6)}{18(3n-2)}$ & $\frac{n(5n-4)(13n-18)}{36(3n-2)(4n-5)}$ \\ \end{tabular} \end{center} \end{cor} \begin{proof} We recall from the proof of Theorem~\ref{thm:main1} that $$[z^nx_1^{\ell_1}x_2^{\ell_2}x_3^{\ell_3}] (P_1+P_2+P_3) = \frac{1}{n-1} [t^nx_1^{\ell_1}x_2^{\ell_2}x_3^{\ell_3}] \Phi(t)^{n-1},$$ where $$\Phi(t) = (1+(x_1+x_2-x_3)t)^2(1+x_2t)^2(1+(x_2-x_3)t)^{-2}$$ in the special case $k=3$. For the variance of the number of vertices labelled $h$, we need to compute the second moment, which is \begin{equation}\label{eq:second_moment} \frac{[z^n] \frac{\partial^2}{\partial x_h^2} (P_1+P_2+P_3) |_{x_1=x_2=x_3 = 1} + [z^n] \frac{\partial}{\partial x_h} (P_1+P_2+P_3) |_{x_1=x_2=x_3 = 1}}{[z^n] (P_1+P_2+P_3) |_{x_1=x_2=x_3 = 1}}, \end{equation} and then subtract the square of the mean. 
Likewise, the mixed moment of the number of vertices labelled $h$ and the number of vertices labelled $i$ is $$\frac{[z^n] \frac{\partial^2}{\partial x_h \partial x_i} (P_1+P_2+P_3) |_{x_1=x_2=x_3 = 1}}{[z^n] (P_1+P_2+P_3) |_{x_1=x_2=x_3 = 1}},$$ from which we subtract the product of the means to obtain the covariance. \medskip Let us only show the calculations for the variance of the number of vertices labelled $1$ explicitly. Here, we obtain \begin{align*} [z^n] \frac{\partial^2}{\partial x_1^2} (P_1+P_2+P_3) |_{x_1=x_2=x_3 = 1} &= \frac{1}{n-1} [t^n] 2(n-1)(2n-3)t^2 (1+t)^{4n-6} \\ &= 2(2n-3) [t^{n-2}](1+t)^{4n-6} = 2(2n-3) \binom{4n-6}{n-2}. \end{align*} We already found earlier that $$[z^n] \frac{\partial}{\partial x_1} (P_1+P_2+P_3) |_{x_1=x_2=x_3 = 1} = 2 \binom{4n-5}{n-1}$$ and $$[z^n](P_1+P_2+P_3) |_{x_1=x_2=x_3 = 1} = \frac{1}{n-1} \binom{4n-4}{n}.$$ Plugging everything into~\eqref{eq:second_moment} and simplifying, we find a formula for the second moment and thus in turn for the variance. \end{proof} \begin{cor} Let $n > 1$. 
The following table gives the average number of vertices labelled $1,2,3$ (columns) in $3$-plane trees with $n$ vertices, according to the root label (rows): \begin{center} \begin{tabular}{c|ccc} & $1$ & $2$ & $3$ \\ \hline root label $1$ & $\frac{n^2}{2 n-1}$ & $\frac{n-1}{3}$ & $\frac{(n-1) (n+1)}{3 (2 n-1)}$ \\ root label $2$ & $\frac{n-1}{2}$ & $\frac{n^2+3 n-1}{3 n}$ & $\frac{(n-2) (n-1)}{6 n}$ \\ root label $3$ & $\frac{n}{2}$ & $\frac{(n-2) (n-1)}{3 n-1}$ & $\frac{n^2+5 n-4}{2 (3 n-1)}$ \end{tabular} \end{center} \end{cor} \begin{proof} We recall from Proposition~\ref{pro:formulas_for_Pi} and Corollary~\ref{cor:identity_for_A} (specialised to $k = 3$) that $$P_1 = \frac{x_1 A}{1 + (x_1+x_2-x_3)A},\ P_2 = \frac{x_2 A(1 + (x_2-x_3)A)}{(1 + x_2 A) (1 + (x_1+x_2-x_3)A)},$$ $$P_3 = \frac{x_3 A (1 + (x_2-x_3) A )}{(1 + x_2A)^2 (1 + (x_1+x_2-x_3)A)},$$ with $$A = z (1+(x_1+x_2-x_3)A)^2(1+x_2A)^2(1+(x_2-x_3)A)^{-2}.$$ In order to determine the desired mean values, we need the coefficients of the partial derivatives $\frac{\partial}{\partial x_h} P_i |_{x_1=x_2=x_3 = 1}$. We will show the details of the calculations in one of the cases again: the number of vertices labelled $1$ in $3$-plane trees whose root label is $1$. Since $$\frac{\partial}{\partial t} \frac{x_1 t}{1 + (x_1+x_2-x_3)t} = \frac{x_1}{(1+(x_1+x_2-x_3)t)^2},$$ we have $$[z^n] P_1 = \frac{1}{n} [t^{n-1}] \frac{x_1}{(1+(x_1+x_2-x_3)t)^2} (1+(x_1+x_2-x_3)t)^{2n}(1+x_2t)^{2n}(1+(x_2-x_3)t)^{-2n}.$$ Now differentiate with respect to $x_1$ and set $x_1 = x_2 = x_3 = 1$: \begin{align*} \frac{\partial}{\partial x_1} [z^n] P_1 \Big|_{x_1 = x_2 = x_3 = 1} &= \frac{1}{n} [t^{n-1}] (1+(2n-1)t) (1+t)^{4n-3} \\ &= \frac{1}{n} \Big( \binom{4n-3}{n-1} + (2n-1) \binom{4n-3}{n-2} \Big).
\end{align*} Dividing by the total number of $3$-plane trees with $n$ vertices and root label $1$, which is $\frac{1}{n} \binom{4n-2}{n-1}$, we obtain the mean number of vertices labelled $1$, namely $$\frac{\frac{1}{n}( \binom{4n-3}{n-1} + (2n-1) \binom{4n-3}{n-2})}{\frac{1}{n} \binom{4n-2}{n-1}} = \frac{n^2}{2n-1}.$$ \end{proof} \section{Noncrossing trees} \label{sec:noncrossing} Our aim in this section is to obtain analogous results for noncrossing trees. In particular, we will prove Theorem~\ref{thm:main2}. To this end, we set up a system of functional equations once again. We fix $k$ and let $\mathcal{N}_r$ denote the set of $k$-noncrossing trees whose root is labelled $r$. As before, we let $h_i(T)$ be the number of vertices labelled $i$ in a tree $T$, and $|T|$ the total number of vertices of $T$. Finally, we define the following generating functions in analogy to the generating functions $P_r$ of the previous section: $$N_r = N_r(z,x_1,x_2,\ldots,x_k) = \sum_{T \in \mathcal{N}_r} z^{|T|} \prod_{i=1}^k x_i^{h_i(T)}.$$ The decomposition of $k$-noncrossing trees is slightly more subtle than that of plane trees. Every noncrossing tree can be decomposed into the root and a sequence of so-called \emph{butterflies}, which are pairs of noncrossing trees joined at a common root. The roots of these butterflies are the children $v_{i_1},v_{i_2},\ldots,v_{i_r}$ of the root $v_1$. One part of the butterfly rooted at $v_{i_j}$ contains all those vertices whose indices are less than or equal to $i_j$ (i.e., vertices $v_s$ with $s \leq i_j$), the other contains all those vertices whose indices are greater than or equal to $i_j$ (i.e., vertices $v_s$ with $s \geq i_j$). We refer to them as the \emph{lower} and \emph{upper part} of the butterfly; they only have the vertex $v_{i_j}$ in common. See Figure~\ref{fig:butter} for an illustration.
\begin{figure}[htbp] \centering \begin{tikzpicture} \node[draw,circle,inner sep=2pt] (u1) at (0,3) {\phantom{2}}; \node[draw,circle,inner sep=2pt] (u2) at (3,0) {\phantom{2}}; \node[draw,circle,inner sep=2pt] (u3) at (0,-3) {\phantom{2}}; \node[draw,circle,inner sep=2pt] (u4) at (-3,0) {\phantom{2}}; \draw (u1)--(u2); \draw (u1)--(u3); \draw (u1)--(u4); \node at (0,3.6) {$v_1$}; \node at (3.6,0) {$v_{i_1}$}; \node at (0,-3.6) {$v_{i_2}$}; \node at (-3.6,0) {$v_{i_3}$}; \draw (u2)--(2.5,2)--(3.5,2)--(u2); \draw (u2)--(2.5,-2)--(3.5,-2)--(u2); \draw (u3)--(-2,-2.5)--(-2,-3.5)--(u3); \draw (u3)--(2,-2.5)--(2,-3.5)--(u3); \draw (u4)--(-2.5,2)--(-3.5,2)--(u4); \draw (u4)--(-2.5,-2)--(-3.5,-2)--(u4); \node at (4.5,1) {lower part}; \node at (4.5,-1) {upper part}; \end{tikzpicture} \caption{The butterfly decomposition.}\label{fig:butter} \end{figure} \medskip Each of the two parts of a butterfly can be seen as a noncrossing tree. However, because of the definition of $k$-noncrossing trees, which involves the order of the vertices on the circle, the two parts are slightly different. The upper part containing vertices $v_s$ with $s \geq i_j$ forms a proper $k$-noncrossing tree. However, the lower part is only almost a $k$-noncrossing tree: the rule on labels not adding up to values greater than $k+1$ does not apply to the root edges (for all other edges, the rule is exactly as it is in a proper $k$-noncrossing tree). Thus the lower part is not necessarily a proper $k$-noncrossing tree, but it always becomes one by changing the root label to $1$ (if it is not already $1$). This is because a label $1$ can always be paired with any other label along an edge. Thus we find that a butterfly with root label $r$ has generating function $$N_r \cdot \frac{N_1}{x_1 z}.$$ The first factor represents the upper part of the butterfly, the second factor the lower part, but excluding the root. This is achieved by the denominator. 
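The butterfly decomposition can be sanity-checked numerically in the simplest case $k = 1$, where every vertex is labelled $1$: a butterfly then has generating function $N_1^2/(x_1 z)$, and the decomposition of a tree into a root followed by a sequence of butterflies must reproduce the classical count $\frac{1}{2n-1}\binom{3n-3}{n-1}$ of noncrossing trees with $n$ vertices. The following Python sketch (an illustration only; the series representation and the truncation order are ad hoc choices) performs this check with $x_1 = 1$.

```python
from math import comb

ORD = 9  # truncation order (ad hoc)

def mul(a, b):
    # product of two power series, truncated at order ORD
    c = [0] * ORD
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < ORD:
                    c[i + j] += ai * bj
    return c

def geom(s):
    # 1/(1 - s) for a series s with zero constant term
    r = [1] + [0] * (ORD - 1)
    for _ in range(ORD):
        t = mul(s, r)
        r = [1 + t[0]] + t[1:]
    return r

# k = 1, x_1 = 1: a butterfly has generating function N^2/z, and a tree
# is a root (weight z) followed by a sequence of butterflies.
N = [0] * ORD
for _ in range(ORD):
    butterflies = mul(N, N)[1:] + [0]      # N^2 / z
    N = [0] + geom(butterflies)[:ORD - 1]  # z / (1 - N^2/z)

# compare with the classical count of noncrossing trees with n vertices
for n in range(1, ORD):
    assert (2 * n - 1) * N[n] == comb(3 * n - 3, n - 1)
```

The computed coefficients $1, 1, 3, 12, 55, \ldots$ agree with the classical formula.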
\medskip Arguing as in the previous section, we see that a $k$-noncrossing tree with root label $r$ has branches that are butterflies with root labels in $[k+1-r]$. Thus \begin{equation}\label{eq:mainequation_nc} N_r = x_r z \sum_{j \geq 0} \Big( \frac{N_1}{x_1 z} (N_1 + N_2 + \cdots + N_{k+1-r}) \Big)^j = \frac{x_r z}{1-\frac{N_1}{x_1 z}(N_1 + N_2 + \cdots + N_{k+1-r})} \end{equation} for all $r \in [k]$. \medskip We use the same expressions $F_{k,i}(t)$ and $G_{k,i}(t)$ as in the previous section. The substitution, however, will be slightly different. In analogy to~\eqref{eq:subst}, we set \begin{equation}\label{eq:subst_nc} N_1 = x_1 \sqrt{\frac{z B}{F_{k,1}(B)}}. \end{equation} Again, it is not hard to verify that there exists a suitable power series $B$ that satisfies this equation, and that it is unique. Next, we prove an analogue of Proposition~\ref{pro:formulas_for_Pi}. \begin{pro}\label{pro:formulas_for_Ni} The power series $N_1,N_2,\ldots,N_k$ can be expressed in terms of $B$ and the variables $x_1,x_2,\ldots,x_k$ and $z$ in the following way: for $1 \leq h \leq k$, \begin{align*} N_h &= x_h \sqrt{\frac{z B}{F_{k,1}(B)}} \prod_{i=2}^{h} F_{k,i}(B)^{-1} \prod_{i=1}^{h-1} G_{k,i}(B), \\ N_{k+1-h} &= x_{k+1-h} z \prod_{i=1}^{h} F_{k,i}(B) \prod_{i=1}^h G_{k,i}(B)^{-1}. \end{align*} \end{pro} \begin{proof} The proof is analogous to that of Proposition~\ref{pro:formulas_for_Pi} by induction on $h$. 
For $h = 1$, the first equation is exactly our substitution~\eqref{eq:subst_nc}, while the second equation follows from~\eqref{eq:mainequation_nc} for $r = k$ and an application of~\eqref{eq:rec1}: $$N_k = \frac{x_k z}{1-\frac{N_1^2}{x_1 z}} = \frac{x_k z}{1 - \frac{x_1^2 z B}{x_1 z F_{k,1}(B)}} = \frac{x_k z F_{k,1}(B)}{F_{k,1}(B) - x_1 B} = \frac{x_k z F_{k,1}(B)}{G_{k,1}(B)}.$$ For the induction step, use~\eqref{eq:mainequation_nc} with $r = h$ and $r = h+1$ respectively, which yields \begin{align*} 1 - \frac{N_1}{x_1 z}(N_1 + N_2 + \cdots + N_{k+1-h}) = \frac{x_h z}{N_h}, \\ 1 - \frac{N_1}{x_1 z}(N_1 + N_2 + \cdots + N_{k-h}) = \frac{x_{h+1} z}{N_{h+1}}. \end{align*} Now take the difference: \begin{equation}\label{eq:after_elimination_nc} \frac{N_1 N_{k+1-h}}{x_1 z} = \frac{x_{h+1}z}{N_{h+1}} - \frac{x_hz}{N_h}. \end{equation} After some manipulations, this gives us \begin{equation}\label{eq:induction_step_nc} N_{h+1} = \frac{x_{h+1}z}{\frac{N_1 N_{k+1-h}}{x_1 z} + \frac{x_h z}{N_h}}. \end{equation} Now it only remains to apply the induction hypothesis and simplify: \begin{align*} N_{h+1} &= \frac{x_{h+1}z}{\sqrt{\frac{z B}{F_{k,1}(B)}} x_{k+1-h} \prod_{i=1}^h F_{k,i}(B) \prod_{i=1}^h G_{k,i}(B)^{-1} + \sqrt{\frac{zF_{k,1}(B)}{B}} \prod_{i=2}^h F_{k,i}(B) \prod_{i=1}^{h-1} G_{k,i}(B)^{-1}} \\ &= \frac{x_{h+1}}{x_{k+1-h}B + G_{k,h}(B)} \sqrt{\frac{z B}{F_{k,1}(B)}} \prod_{i=2}^h F_{k,i}(B)^{-1} \prod_{i=1}^h G_{k,i}(B) \\ &= \frac{x_{h+1}}{F_{k,h+1}(B)} \sqrt{\frac{z B}{F_{k,1}(B)}} \prod_{i=2}^h F_{k,i}(B)^{-1} \prod_{i=1}^h G_{k,i}(B) \\ &= x_{h+1} \sqrt{\frac{z B}{F_{k,1}(B)}} \prod_{i=2}^{h+1} F_{k,i}(B)^{-1} \prod_{i=1}^h G_{k,i}(B).
\end{align*} Likewise, replacing $h$ by $k-h$ in~\eqref{eq:after_elimination_nc} gives us $$\frac{N_1 N_{h+1}}{x_1 z} = \frac{x_{k+1-h}z}{N_{k+1-h}} - \frac{x_{k-h}z}{N_{k-h}}.$$ Thus $$N_{k-h} = \frac{x_{k-h}z}{\frac{x_{k+1-h}z}{N_{k+1-h}} - \frac{N_1 N_{h+1}}{x_1 z}}.$$ Now plug in~\eqref{eq:induction_step_nc} and apply the induction hypothesis. Again, we obtain the desired formula after some further manipulations. \end{proof} \begin{cor}\label{cor:identity_for_B_nc} The power series $B$ satisfies the equation $$B = z F_{k,1}(B)^3 \prod_{i=2}^{\lceil k/2 \rceil} F_{k,i}(B)^4 \prod_{i=1}^{\lfloor k/2 \rfloor} G_{k,i}(B)^{-4}.$$ \end{cor} \begin{proof} In analogy to Corollary~\ref{cor:identity_for_A}, we use the two representations for $N_h$ provided by Proposition~\ref{pro:formulas_for_Ni}: $$N_h = x_h \sqrt{\frac{z B}{F_{k,1}(B)}} \prod_{i=2}^{h} F_{k,i}(B)^{-1} \prod_{i=1}^{h-1} G_{k,i}(B) = x_{h} z \prod_{i=1}^{k+1-h} F_{k,i}(B) \prod_{i=1}^{k+1-h} G_{k,i}(B)^{-1}.$$ Applying the symmetry relations~\eqref{eq:sym}, we get $$\sqrt{\frac{B}{F_{k,1}(B)}} \prod_{i=2}^{h} F_{k,i}(B)^{-1} \prod_{i=1}^{h-1} G_{k,i}(B) = \sqrt{z} \prod_{i=h+1}^{k+1} F_{k,i}(B) \prod_{i=h}^{k} G_{k,i}(B)^{-1}.$$ Squaring and simplifying yields $$B = z F_{k,1}(B)\prod_{i=2}^{k+1} F_{k,i}(B)^2 \prod_{i=1}^{k} G_{k,i}(B)^{-2}.$$ Applying the symmetry properties as well as~\eqref{eq:midpoint} once again, we arrive at the stated formula: $$B = z F_{k,1}(B)^3 \prod_{i=2}^{\lceil k/2 \rceil} F_{k,i}(B)^4 \prod_{i=1}^{\lfloor k/2 \rfloor} G_{k,i}(B)^{-4}.$$ As in the proof of Corollary~\ref{cor:identity_for_A}, $h$ was arbitrary in these calculations. \end{proof} We are now ready to prove the second main theorem of this paper. It is very similar to the proof of Theorem~\ref{thm:main1}, with some small modifications. 
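Before turning to the proof, we record a quick numerical consistency check (an illustration only, not part of the argument). For $k = 2$ and $x_1 = x_2 = 1$ we have $F_{2,1}(t) = 1$ and $G_{2,1}(t) = 1 - t$, so Corollary~\ref{cor:identity_for_B_nc} reads $B = z(1-B)^{-4}$, while Proposition~\ref{pro:formulas_for_Ni} predicts $N_1^2 = zB$ and $N_2 = z/(1-B)$. The Python sketch below (the series names and the truncation order $M$ are ad hoc choices) iterates the system~\eqref{eq:mainequation_nc} directly and verifies both predictions coefficient-wise.

```python
M = 9  # truncation order (ad hoc)

def mul(a, b):
    # product of two power series, truncated at order M
    c = [0] * M
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < M:
                    c[i + j] += ai * bj
    return c

def geom(s):
    # 1/(1 - s) for a series s with zero constant term
    r = [1] + [0] * (M - 1)
    for _ in range(M):
        t = mul(s, r)
        r = [1 + t[0]] + t[1:]
    return r

# System for k = 2 with x_1 = x_2 = 1:
# N_1 = z/(1 - (N_1/z)(N_1 + N_2)),  N_2 = z/(1 - N_1^2/z)
N1, N2 = [0] * M, [0] * M
for _ in range(M):
    butterflies = mul(N1, [u + v for u, v in zip(N1, N2)])[1:] + [0]
    N1 = [0] + geom(butterflies)[:M - 1]
    N2 = [0] + geom(mul(N1, N1)[1:] + [0])[:M - 1]

# B = z(1 - B)^(-4): the k = 2 case of the identity for B
B = [0] * M
for _ in range(M):
    g = geom(B)
    B = [0] + mul(mul(g, g), mul(g, g))[:M - 1]

assert mul(N1, N1) == [0] + B[:M - 1]  # N_1^2 = z * B
assert N2 == [0] + geom(B)[:M - 1]     # N_2 = z/(1 - B)
```

Both assertions hold; for instance $N_1 = z + 2z^2 + 11z^3 + \cdots$, $B = z + 4z^2 + 26z^3 + \cdots$, and indeed $N_1^2 = z^2 + 4z^3 + 26z^4 + \cdots = zB$.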
\begin{proof}[Proof of Theorem~\ref{thm:main2}] As before, we apply the Lagrange-B\"urmann formula, based on Proposition~\ref{pro:formulas_for_Ni} and Corollary~\ref{cor:identity_for_B_nc}. We start with the case that $h \leq \lceil k/2 \rceil$. In view of Proposition~\ref{pro:formulas_for_Ni} and Corollary~\ref{cor:identity_for_B_nc}, we can apply~\eqref{eq:lag-bur} with $$\Phi(t) = F_{k,1}(t)^3 \prod_{i=2}^{\lceil k/2 \rceil} F_{k,i}(t)^4 \prod_{i=1}^{\lfloor k/2 \rfloor} G_{k,i}(t)^{-4}$$ and $$f(t) = x_h \sqrt{\frac{t}{F_{k,1}(t)}} \prod_{i=2}^{h} F_{k,i}(t)^{-1} \prod_{i=1}^{h-1} G_{k,i}(t),$$ but we have to take the coefficient of $z^{n-1/2}$ in view of the factor $\sqrt{z}$ in the formula for $N_h$. Once again, we apply logarithmic differentiation to determine the derivative of $f(t)$. We find that \begin{align*} f'(t) &= f(t) \Big( \frac1{2t} - \frac{F'_{k,1}(t)}{2F_{k,1}(t)} - \sum_{i=2}^h \frac{F'_{k,i}(t)}{F_{k,i}(t)} + \sum_{i=1}^{h-1} \frac{G'_{k,i}(t)}{G_{k,i}(t)} \Big) \\ &= f(t) \Big( \frac1{2t} - \frac{1}{2t} \Big( 1 - \frac{1}{F_{k,1}(t)} \Big) - \frac1{t} \sum_{i=2}^h \Big( 1 - \frac{1}{F_{k,i}(t)} \Big) + \frac1{t} \sum_{i=1}^{h-1} \Big( 1 - \frac{1}{G_{k,i}(t)} \Big) \Big) \\ &= \frac{f(t)}{t} \Big( \frac{1}{2F_{k,1}(t)} + \sum_{i=2}^h \frac{1}{F_{k,i}(t)} - \sum_{i=1}^{h-1} \frac{1}{G_{k,i}(t)} \Big). 
\end{align*} So we have \begin{align} [z^n x_1^{\ell_1} \cdots x_k^{\ell_k}] N_h &= \frac{1}{n-\frac12} [t^{n-3/2} x_1^{\ell_1} \cdots x_k^{\ell_k}] f'(t) \Phi(t)^{n-1/2} \nonumber \\ &= \frac{2}{2n-1} [t^{n-3/2} x_1^{\ell_1} \cdots x_k^{\ell_k}] x_h \sqrt{\frac{1}{tF_{k,1}(t)}} \prod_{i=2}^{h} F_{k,i}(t)^{-1} \prod_{i=1}^{h-1} G_{k,i}(t) \nonumber \\ &\qquad \Big( \frac{1}{2F_{k,1}(t)} + \sum_{i=2}^h \frac{1}{F_{k,i}(t)} - \sum_{i=1}^{h-1} \frac{1}{G_{k,i}(t)} \Big) \nonumber \\ &\qquad F_{k,1}(t)^{3n-3/2} \prod_{i=2}^{\lceil k/2 \rceil} F_{k,i}(t)^{4n-2} \prod_{i=1}^{\lfloor k/2 \rfloor} G_{k,i}(t)^{2-4n} \nonumber \\ &= \frac{2}{2n-1} [t^{n-1} x_1^{\ell_1} \cdots x_h^{\ell_h-1} \cdots x_k^{\ell_k}] \Big( \prod_{i=h+1}^{\lceil k/2 \rceil} F_{k,i}(t) \prod_{i=h}^{\lfloor k/2 \rfloor} G_{k,i}(t)^{-1} \Big)^{4n-2}\nonumber \\ &\qquad F_{k,1}(t)^{3n-2} \Big( \prod_{i=2}^{h} F_{k,i}(t) \prod_{i=1}^{h-1} G_{k,i}(t)^{-1} \Big)^{4n-3} \label{eq:after_lagrange_nc} \\ &\qquad \Big( \frac{1}{2F_{k,1}(t)} + \sum_{i=2}^h \frac{1}{F_{k,i}(t)} - \sum_{i=1}^{h-1} \frac{1}{G_{k,i}(t)} \Big). \nonumber \end{align} We remark that we are applying the Lagrange-B\"urmann formula, somewhat unusually, in a situation where we have half-integer exponents in our power series, but it is not difficult to verify that it works equally well. At this point, we drop the variable $t$ again (by setting $t=1$), since we know that the coefficient of $t^{n-1} x_1^{\ell_1} \cdots x_h^{\ell_h-1} \cdots x_k^{\ell_k}$ is only nonzero when $\ell_1 + \cdots + \ell_k = n$. We will also write $F_{k,i}$ and $G_{k,i}$ instead of $F_{k,i}(1)$ and $G_{k,i}(1)$ again. \medskip As in the proof of Theorem~\ref{thm:main1}, we observe that $F_{k,1},F_{k,2},\ldots,F_{k,h}$ and $G_{k,1},G_{k,2},\ldots,G_{k,h-1}$ are the only factors that contain the variable $x_h$. 
Moreover, using logarithmic differentiation once again, one finds that \begin{multline*} \frac{\partial}{\partial x_h} F_{k,1}^{3n-2} \Big( \prod_{i=2}^{h} F_{k,i} \prod_{i=1}^{h-1} G_{k,i}^{-1} \Big)^{4n-3} \\ = (4n-3) F_{k,1}^{3n-2} \Big( \prod_{i=2}^{h} F_{k,i} \prod_{i=1}^{h-1} G_{k,i}^{-1} \Big)^{4n-3} \Big( \frac{3n-2}{(4n-3)F_{k,1}} + \sum_{i=2}^h \frac{1}{F_{k,i}} - \sum_{i=1}^{h-1} \frac{1}{G_{k,i}} \Big). \end{multline*} Now we split the expression in~\eqref{eq:after_lagrange_nc} into two parts, one of which can be seen as a derivative with respect to $x_h$ in the same way as in the proof of Theorem~\ref{thm:main1}: \begin{align*} [z^n x_1^{\ell_1} \cdots x_k^{\ell_k}] N_h &= \frac{2}{2n-1} [x_1^{\ell_1} \cdots x_h^{\ell_h-1} \cdots x_k^{\ell_k}] \Big( \prod_{i=h+1}^{\lceil k/2 \rceil} F_{k,i} \prod_{i=h}^{\lfloor k/2 \rfloor} G_{k,i}^{-1} \Big)^{4n-2} \\ &\qquad F_{k,1}^{3n-2} \Big( \prod_{i=2}^{h} F_{k,i} \prod_{i=1}^{h-1} G_{k,i}^{-1} \Big)^{4n-3} \Big( \frac{3n-2}{(4n-3)F_{k,1}} + \sum_{i=2}^h \frac{1}{F_{k,i}} - \sum_{i=1}^{h-1} \frac{1}{G_{k,i}} \Big) \\ &\quad -\frac{2}{2n-1} [x_1^{\ell_1} \cdots x_h^{\ell_h-1} \cdots x_k^{\ell_k}] \Big( \prod_{i=h+1}^{\lceil k/2 \rceil} F_{k,i} \prod_{i=h}^{\lfloor k/2 \rfloor} G_{k,i}^{-1} \Big)^{4n-2} \\ &\qquad F_{k,1}^{3n-2} \Big( \prod_{i=2}^{h} F_{k,i} \prod_{i=1}^{h-1} G_{k,i}^{-1} \Big)^{4n-3} \cdot \frac{2n-1}{2(4n-3)F_{k,1}} \\ &= \frac{2\ell_h}{(2n-1)(4n-3)} [x_1^{\ell_1} \cdots x_k^{\ell_k}] \Big( \prod_{i=h+1}^{\lceil k/2 \rceil} F_{k,i} \prod_{i=h}^{\lfloor k/2 \rfloor} G_{k,i}^{-1} \Big)^{4n-2} \\ &\qquad F_{k,1}^{3n-2} \Big( \prod_{i=2}^{h} F_{k,i} \prod_{i=1}^{h-1} G_{k,i}^{-1} \Big)^{4n-3} \\ &\quad - \frac{1}{4n-3} [x_1^{\ell_1} \cdots x_h^{\ell_h-1} \cdots x_k^{\ell_k}] \Big( \prod_{i=h+1}^{\lceil k/2 \rceil} F_{k,i} \prod_{i=h}^{\lfloor k/2 \rfloor} G_{k,i}^{-1} \Big)^{4n-2} \\ &\qquad F_{k,1}^{3n-3} \Big( \prod_{i=2}^{h} F_{k,i} \prod_{i=1}^{h-1} G_{k,i}^{-1} \Big)^{4n-3}. 
\end{align*} Now we can extract coefficients from both products in the same way as in the proof of Theorem~\ref{thm:main2}, i.e. by considering the variables in the order $x_1,x_k,x_2,x_{k-1},\ldots$. \medskip The derivation of the formula in the case that $h > \lceil k/2 \rceil$ is similar: we start from the representation $$N_h = x_h z \prod_{i=1}^{k+1-h} F_{k,i}(B) \prod_{i=1}^{k+1-h} G_{k,i}(B)^{-1},$$ which means that we can apply the Lagrange-B\"urmann formula with $$f(t) = x_h \prod_{i=1}^{k+1-h} F_{k,i}(t) \prod_{i=1}^{k+1-h} G_{k,i}(t)^{-1}$$ and the same function $\Phi$ as before. In view of the factor $z$ in the expression for $N_h$, we have to extract the coefficient of $z^{n-1}$. Thus \begin{align*} [z^n x_1^{\ell_1} \cdots x_k^{\ell_k}] N_h &= \frac{1}{n-1} [t^{n-2} x_1^{\ell_1} \cdots x_k^{\ell_k}] \frac{x_h}{t} \prod_{i=1}^{k+1-h} F_{k,i}(t) \prod_{i=1}^{k+1-h} G_{k,i}(t)^{-1} \\ &\qquad \Big( \sum_{i=1}^{k+1-h} \frac{1}{G_{k,i}(t)} - \sum_{i=1}^{k+1-h} \frac{1}{F_{k,i}(t)} \Big) \\ &\qquad F_{k,1}(t)^{3n-3} \Big( \prod_{i=2}^{\lceil k/2 \rceil} F_{k,i}(t) \prod_{i=1}^{\lfloor k/2 \rfloor} G_{k,i}(t)^{-1} \Big)^{4n-4} \\ &= \frac{1}{n-1} [t^{n-1} x_1^{\ell_1} \cdots x_h^{\ell_h-1} \cdots x_k^{\ell_k}] \Big( \prod_{i=k+2-h}^{\lceil k/2 \rceil} F_{k,i}(t) \prod_{i=k+2-h}^{\lfloor k/2 \rfloor} G_{k,i}(t)^{-1} \Big)^{4n-4} \\ &\qquad F_{k,1}(t)^{3n-2} \Big( \prod_{i=2}^{k+1-h} F_{k,i}(t) \prod_{i=1}^{k+1-h} G_{k,i}(t)^{-1} \Big)^{4n-3} \\ &\qquad \Big( \sum_{i=1}^{k+1-h} \frac{1}{G_{k,i}(t)} - \sum_{i=1}^{k+1-h} \frac{1}{F_{k,i}(t)} \Big). \end{align*} As before, we drop the variable $t$ now and write $F_{k,i}$ and $G_{k,i}$ instead of $F_{k,i}(1)$ and $G_{k,i}(1)$. 
The appropriate split in this case is \begin{align*} [z^n x_1^{\ell_1} \cdots x_k^{\ell_k}] N_h &= \frac{1}{n-1} [x_1^{\ell_1} \cdots x_h^{\ell_h-1} \cdots x_k^{\ell_k}] \Big( \prod_{i=k+2-h}^{\lceil k/2 \rceil} F_{k,i} \prod_{i=k+2-h}^{\lfloor k/2 \rfloor} G_{k,i}^{-1} \Big)^{4n-4} \\ &\qquad F_{k,1}^{3n-2} \Big( \prod_{i=2}^{k+1-h} F_{k,i} \prod_{i=1}^{k+1-h} G_{k,i}^{-1} \Big)^{4n-3} \\ &\qquad \Big( \sum_{i=1}^{k+1-h} \frac{1}{G_{k,i}} - \sum_{i=2}^{k+1-h} \frac{1}{F_{k,i}} - \frac{3n-2}{(4n-3)F_{k,1}} \Big) \\ &\quad- \frac{1}{n-1} [x_1^{\ell_1} \cdots x_h^{\ell_h-1} \cdots x_k^{\ell_k}] \Big( \prod_{i=k+2-h}^{\lceil k/2 \rceil} F_{k,i} \prod_{i=k+2-h}^{\lfloor k/2 \rfloor} G_{k,i}^{-1} \Big)^{4n-4} \\ &\qquad F_{k,1}^{3n-2} \Big( \prod_{i=2}^{k+1-h} F_{k,i} \prod_{i=1}^{k+1-h} G_{k,i}^{-1} \Big)^{4n-3} \cdot \frac{n-1}{(4n-3)F_{k,1}}\\ &= \frac{\ell_h}{(n-1)(4n-3)} [x_1^{\ell_1} \cdots x_k^{\ell_k}] \Big( \prod_{i=k+2-h}^{\lceil k/2 \rceil} F_{k,i} \prod_{i=k+2-h}^{\lfloor k/2 \rfloor} G_{k,i}^{-1} \Big)^{4n-4} \\ &\qquad F_{k,1}^{3n-2} \Big( \prod_{i=2}^{k+1-h} F_{k,i} \prod_{i=1}^{k+1-h} G_{k,i}^{-1} \Big)^{4n-3} \\ &\quad- \frac{1}{4n-3} [x_1^{\ell_1} \cdots x_h^{\ell_h-1} \cdots x_k^{\ell_k}] \Big( \prod_{i=k+2-h}^{\lceil k/2 \rceil} F_{k,i} \prod_{i=k+2-h}^{\lfloor k/2 \rfloor} G_{k,i}^{-1} \Big)^{4n-4} \\ &\qquad F_{k,1}^{3n-3} \Big( \prod_{i=2}^{k+1-h} F_{k,i} \prod_{i=1}^{k+1-h} G_{k,i}^{-1} \Big)^{4n-3}. \end{align*} Once again, we can now extract coefficients from both products following the order of variables $x_1,x_k,x_2,x_{k-1},\ldots$. \medskip Finally, we consider the generating function for all $k$-noncrossing trees, which is $$N_1+N_2+ \cdots + N_k = \frac{x_1 z}{N_1} \Big(1 - \frac{x_1 z}{N_1}\Big) = \sqrt{\frac{z F_{k,1}(B)}{B}} - \frac{z F_{k,1}(B)}{B}$$ in view of~\eqref{eq:mainequation_nc} (for $r=1$) and~\eqref{eq:subst_nc}. 
So we can now apply the Lagrange-B\"urmann formula to the functions $f_1(t) = \sqrt{\frac{F_{k,1}(t)}{t}}$ (extracting the coefficient of $z^{n-1/2}$) and $f_2(t) = \frac{F_{k,1}(t)}{t}$ (extracting the coefficient of $z^{n-1}$), again with the same function $\Phi$ as before. \end{proof} As in the previous section, we can now derive a number of corollaries. \begin{cor}\label{cor:total_nc} For every integer $n > 1$, the total number of $k$-noncrossing trees with $n$ vertices is $$\frac{1}{n-1} \binom{(2k+1)(n-1)}{n} - \frac{1}{2n-1} \binom{(2k+1)n-k-1}{n}.$$ The number of $k$-noncrossing trees with $n$ vertices whose root is labelled $h$ is $$\frac{k+1-h}{2kn-k-h+1} \binom{(2k+1)n-k-h-1}{n-1}.$$ \end{cor} \begin{proof} We follow the lines of the proof of Corollary~\ref{cor:total}. Setting $x_1=x_2=\cdots = x_k = 1$, recall that we have $$F_{k,i}(t) = \begin{cases} 1 & k \text{ even,} \\ 1 + t & k \text{ odd,} \end{cases}\quad \text{and} \quad G_{k,i}(t) = \begin{cases} 1 - t & k \text{ even,} \\ 1 & k \text{ odd.} \end{cases}$$ We show the calculations in the case that $k$ is even (the other case is similar once again), where we obtain $$N_h = \sqrt{zB} (1-B)^{h-1}$$ and $$N_1 + N_2 + \cdots + N_k = \frac{z}{N_1}\Big(1 - \frac{z}{N_1}\Big) = \sqrt{\frac{z}{B}} - \frac{z}{B},$$ where $B$ satisfies the implicit equation $$B = z (1-B)^{-2k}.$$ We apply the Lagrange-B\"urmann formula to find that \begin{align*} [z^n] N_h &= [z^{n-1/2}] \sqrt{B} (1-B)^{h-1} \\ &= \frac{1}{n-\frac12} [t^{n-3/2}] \frac{1}{2\sqrt{t}} (1-(2h-1)t) (1-t)^{h-2} (1-t)^{-2k(n-1/2)}\\ &= \frac{1}{2n-1} [t^{n-1}] (1-(2h-1)t) (1-t)^{-2kn+k+h-2} \\ &= \frac{1}{2n-1} \binom{(2k+1)n-k-h}{n-1} - \frac{2h-1}{2n-1} \binom{(2k+1)n-k-h-1}{n-2} \\ &= \frac{k+1-h}{2kn-k-h+1} \binom{(2k+1)n-k-h-1}{n-1} \end{align*} and similarly \begin{align*} [z^n] (N_1 + N_2 + \cdots + N_k) &= [z^{n-1/2}] B^{-1/2} - [z^{n-1}] B^{-1} \\ &= \frac{1}{n-\frac12} [t^{n-3/2}] \Big( - \frac12 t^{-3/2} \Big) (1-t)^{-2k(n-1/2)} \\ 
&\quad- \frac{1}{n-1} [t^{n-2}] (-t^{-2}) (1-t)^{-2k(n-1)} \\ &= \frac{1}{n-1} [t^n] (1-t)^{-2kn+2k} - \frac{1}{2n-1} [t^n] (1-t)^{-2kn+k} \\ &= \frac{1}{n-1} \binom{(2k+1)(n-1)}{n} - \frac{1}{2n-1} \binom{(2k+1)n-k-1}{n}. \end{align*} \end{proof} Next, we count $k$-noncrossing trees by the number of occurrences of a single label. \begin{cor}\label{cor:single_label_nc} For every $n > 1$, the total number of $k$-noncrossing trees with $n$ vertices of which $\ell$ are labelled $h$ is equal to \begin{align*} \frac{1}{n-1} &\sum_{r=0}^{n-\ell} \binom{4(h-1)(n-1)+r-1}{r} \binom{3n-3-r}{\ell} \binom{2(k+1-2h)(n-1)}{n-r-\ell}\\&-\frac{1}{2n-1} \sum_{r=0}^{n-\ell} \binom{2(h-1)(2n-1)+r-1}{r} \binom{3n-2-r}{\ell} \binom{(k+1-2h)(2n-1)}{n-r-\ell} \end{align*} if $h \leq \lceil k/2 \rceil$, and equal to \begin{align*} \frac{1}{n-1} &\sum_{r=0}^{n-\ell} \binom{(4k-4h+3)(n-1)}{r} \binom{n+r+\ell-2}{\ell} \binom{(4h-2k-3)(n-1)-r-\ell}{n-r-\ell}\\&-\frac{1}{2n-1} \sum_{r=0}^{n-\ell} \binom{2(k+1-h)(2n-1)-n}{r} \binom{n+r+\ell-1}{\ell} \binom{(2h-k-2)(2n-1)+n-r-\ell}{n-r-\ell} \end{align*} otherwise. \end{cor} \begin{proof} Let us give the proof in the case that $k$ is odd and $h \leq \lceil k/2 \rceil = \frac{k+1}{2}$. The other cases are similar. Set all $x_i$ except $x_h$ equal to $1$. As noted in the proof of Corollary~\ref{cor:single_label}, we have $$F_{k,i}(t) = \begin{cases} 1+x_ht & i \leq h, \\ 1+t & i > h,\end{cases} \qquad \text{and} \qquad G_{k,i}(t) = \begin{cases} 1 + (x_h-1)t & i < h, \\ 1 & i \geq h. \end{cases}$$ Now we have to determine \begin{equation}\label{eq:xh_extraction} [z^n x_h^{\ell}] (N_1 + N_2 + \cdots + N_k)=[z^n x_h^{\ell}]\left(\sqrt{\frac{zF_{k,1}(B)}{B}}-\frac{zF_{k,1}(B)}{B}\right).
\end{equation} We note that \[\frac{\partial}{\partial t}\frac{F_{k,1}(t)}{t}=-\frac{1}{t^2}\] and \[\frac{\partial}{\partial t}\sqrt{\frac{F_{k,1}(t)}{t}}=\frac{1}{2}\Big(\frac{F_{k,1}(t)}{t}\Big)^{-1/2}\frac{\partial}{\partial t}\frac{F_{k,1}(t)}{t}=-\frac{1}{2t^{3/2}}F_{k,1}(t)^{-1/2}.\] Using the Lagrange-B\"urmann formula once again, we find that \begin{align*} [z^n x_h^{\ell}] &\sqrt{\frac{zF_{k,1}(B)}{B}} \\ &=[z^{n-1/2} x_h^{\ell}] \sqrt{\frac{F_{k,1}(B)}{B}} \\ &= \frac{1}{n-\frac12} [t^{n-3/2}x_h^{\ell}] \Big( -\frac{1}{2t^{3/2}}F_{k,1}(t)^{-1/2} \Big) \Big( F_{k,1}(t)^3 \prod_{i=2}^{\lceil k/2 \rceil} F_{k,i}(t)^4 \prod_{i=1}^{\lfloor k/2 \rfloor} G_{k,i}(t)^{-4} \Big)^{n-1/2} \\ &= -\frac{1}{2n-1} [t^n x_h^{\ell}] (1+x_h t)^{-1/2}(1+x_h t)^{3(n-1/2)}(1+x_h t)^{4(h-1)(n-1/2)} \\ &\qquad(1+t)^{2(k+1-2h)(n-1/2)} (1+(x_h-1)t)^{-4(h-1)(n-1/2)}\\ &=-\frac{1}{2n-1}[t^n x_h^{\ell}] (1+x_h t)^{3n-2}(1+x_h t)^{2(h-1)(2n-1)}(1+t)^{(k+1-2h)(2n-1)}\\ &\qquad(1+(x_h-1)t)^{-2(h-1)(2n-1)}. \end{align*} Now we extract the coefficient as follows: \begin{align*} [z^n x_h^{\ell}] &\sqrt{\frac{zF_{k,1}(B)}{B}}\\ &= -\frac{1}{2n-1} [t^n x_h^{\ell}] \Big( 1 - \frac{t}{1+x_h t}\Big)^{-2(h-1)(2n-1)} (1+x_h t)^{3n-2} (1+t)^{(k+1-2h)(2n-1)} \\ &= -\frac{1}{2n-1} [t^n x_h^{\ell}] \sum_{r \geq 0} \binom{2(h-1)(2n-1)+r-1}{r} t^r (1+x_ht)^{3n-2-r} (1+t)^{(k+1-2h)(2n-1)} \\ &= -\frac{1}{2n-1} \sum_{r \geq 0} \binom{2(h-1)(2n-1)+r-1}{r} [t^{n-r} x_h^{\ell}] (1+x_ht)^{3n-2-r} (1+t)^{(k+1-2h)(2n-1)} \\ &= -\frac{1}{2n-1} \sum_{r \geq 0} \binom{2(h-1)(2n-1)+r-1}{r} \binom{3n-2-r}{\ell} [t^{n-r-\ell}] (1+t)^{(k+1-2h)(2n-1)} \\ &= -\frac{1}{2n-1} \sum_{r=0}^{n-\ell} \binom{2(h-1)(2n-1)+r-1}{r} \binom{3n-2-r}{\ell} \binom{(k+1-2h)(2n-1)}{n-r-\ell}. 
\end{align*} Similarly, we obtain $[z^n x_h^{\ell}] \frac{zF_{k,1}(B)}{B}.$ We have \begin{align*} [z^n x_h^{\ell}] &\frac{zF_{k,1}(B)}{B} =[z^{n-1} x_h^{\ell}] \frac{F_{k,1}(B)}{B} \\ &= \frac{1}{n-1} [t^{n-2} x_h^{\ell}] \Big( - \frac{1}{t^2} \Big) \Big( F_{k,1}(t)^3 \prod_{i=2}^{\lceil k/2 \rceil} F_{k,i}(t)^4 \prod_{i=1}^{\lfloor k/2 \rfloor} G_{k,i}(t)^{-4} \Big)^{n-1} \\ &=- \frac{1}{n-1}[t^n x_h^{\ell}] (1+x_h t)^{3n-3}(1+x_h t)^{4(h-1)(n-1)} \\ &\qquad(1+t)^{2(k+1-2h)(n-1)} (1+(x_h-1)t)^{-4(h-1)(n-1)}\\ &=- \frac{1}{n-1} [t^n x_h^{\ell}] \Big( 1 - \frac{t}{1+x_h t}\Big)^{-4(h-1)(n-1)} (1+x_h t)^{3n-3} (1+t)^{2(k+1-2h)(n-1)} \\ &=- \frac{1}{n-1} [t^n x_h^{\ell}] \sum_{r \geq 0} \binom{4(h-1)(n-1)+r-1}{r} t^r (1+x_ht)^{3n-3-r} (1+t)^{2(k+1-2h)(n-1)} \\ &=- \frac{1}{n-1} \sum_{r \geq 0} \binom{4(h-1)(n-1)+r-1}{r} [t^{n-r} x_h^{\ell}] (1+x_ht)^{3n-3-r} (1+t)^{2(k+1-2h)(n-1)} \\ &=- \frac{1}{n-1} \sum_{r \geq 0} \binom{4(h-1)(n-1)+r-1}{r} \binom{3n-3-r}{\ell} [t^{n-r-\ell}] (1+t)^{2(k+1-2h)(n-1)} \\ &=- \frac{1}{n-1} \sum_{r=0}^{n-\ell} \binom{4(h-1)(n-1)+r-1}{r} \binom{3n-3-r}{\ell} \binom{2(k+1-2h)(n-1)}{n-r-\ell}. \end{align*} Now, we combine the two by means of~\eqref{eq:xh_extraction}, and the result follows. \end{proof} \begin{cor}\label{cor:average_nc} For every $n > 1$, the average number of vertices labelled $h$ in $k$-noncrossing trees with $n$ vertices is $$\frac{n}{(2k+1)n-(k+1)} \Big( 3n-2 - \frac{2(h-1)(n-1)}{k} + \frac{2(k+1-2h)}{(2k+1)\big(2 - \frac{(2kn+n-2k)^{\overline{k}}}{(2kn+1-2k)^{\overline{k}}}\big)} \Big),$$ where $m^{\overline{k}}=m(m+1)\cdots(m+k-1)$ is the rising factorial. Asymptotically, this is equal to $\frac{3k+2-2h}{k(2k+1)} n + \frac{k+1-2h}{(2 k + 1)^2( 2 (\frac{2k}{2k+1})^{k}-1)} + O(1/n)$. \end{cor} \begin{proof} We only consider the case that $k$ is odd and $h \leq \lceil k/2 \rceil = \frac{k+1}{2}$, as in the previous proof. 
As in the proof of Corollary~\ref{cor:average}, instead of extracting coefficients, we take the derivative with respect to $x_h$ and plug in $x_h=1$ in order to determine the total number of vertices labelled $h$ in all $k$-noncrossing trees. All other variables $x_i$ are immediately taken to be $1$. This results in \begin{align}\label{cor_single_nc_proof1} \nonumber &[z^n] \frac{\partial}{\partial x_h} (N_1+N_2+\cdots + N_k) \Big|_{x_1=\cdots=x_k = 1} \\ \nonumber&= \frac{1}{n-1} [t^n] \frac{\partial}{\partial x_h}(1+x_h t)^{3n-3+4(h-1)(n-1)} (1+t)^{2(k+1-2h)(n-1)} (1+(x_h-1)t)^{-4(h-1)(n-1)} \Big|_{x_h=1}\\ \nonumber&\quad -\frac{1}{2n-1} [t^n] \frac{\partial}{\partial x_h}(1+x_h t)^{3n-2+2(h-1)(2n-1)} (1+t)^{(k+1-2h)(2n-1)} (1+(x_h-1)t)^{-2(h-1)(2n-1)} \Big|_{x_h=1} \\ \nonumber&= [t^n] t\big(3-4(h-1)t \big) (1+t)^{(2k+1)(n-1)-1}-[t^n]t\left(\frac{3n-2}{2n-1}-2(h-1)t \right) (1+t)^{(2k+1)n-k-2} \\ \nonumber&= 3 \binom{(2k+1)(n-1)-1}{n-1} - 4(h-1) \binom{(2k+1)(n-1)-1}{n-2}- \frac{3n-2}{2n-1} \binom{(2k+1)n-k-2}{n-1}\\ \nonumber&\quad + 2(h-1) \binom{(2k+1)n-k-2}{n-2}\\ \nonumber&=2(3k-2h+2)\binom{(2k+1)n-2k-2}{n-2}-\frac{(3nk-2k-2(h-1)(n-1))}{n-1}\binom{(2k+1)n-k-2}{n-2}. \end{align} Dividing by the total number of $k$-noncrossing trees, we obtain the stated formula after a number of simplifications. \end{proof} As noted for $k$-plane trees, it is also possible to derive formulas for the variance of the number of vertices labelled $h$, as well as covariances of two different label counts. Moreover, one could also take the root label into account in Corollary~\ref{cor:single_label_nc} and Corollary~\ref{cor:average_nc}. However, the formulas for $k$-noncrossing trees are even more complicated than for $k$-plane trees. Instead of stating the most general result (which would be rather lengthy), we only present the special case $k=2$. \begin{cor} Let $n > 1$. 
Variances and covariances of the number of vertices labelled $1,2$ respectively in $2$-noncrossing trees with $n$ vertices are given in the following table: \begin{center} \begin{tabular}{c|cc} & $1$ & $2$ \\ \hline $1$ & $\frac{3 (2 n-1) (4 n-3)(49 n^2-100 n+44)}{25(5n-6)(7n-5)^2}$ & $-\frac{3 (2 n-1) (4 n-3)(49 n^2-100 n+44)}{25(5n-6)(7n-5)^2}$ \\ $2$ & $-\frac{3 (2 n-1) (4 n-3)(49 n^2-100 n+44)}{25(5n-6)(7n-5)^2}$ & $\frac{3 (2 n-1) (4 n-3)(49 n^2-100 n+44)}{25(5n-6)(7n-5)^2}$ \\ \end{tabular} \end{center} \end{cor} \begin{proof} We recall from the proof of Theorem~\ref{thm:main2} that $$[z^nx_1^{\ell_1}x_2^{\ell_2}] (N_1+N_2) = \frac{1}{n-1} [t^nx_1^{\ell_1}x_2^{\ell_2}] \Phi(t)^{n-1}-\frac{1}{2n-1} [t^nx_1^{\ell_1}x_2^{\ell_2}] F_{2,1}(t)^{-1/2}\Phi(t)^{n-1/2},$$ where $$F_{2,1}(t) = 1+(x_1-x_2)t$$ and $$\Phi(t) = (1+(x_1-x_2)t)^3(1-x_2t)^{-4}$$ in the special case $k=2$. As in Corollary \ref{variance_plane}, to compute the variance of the number of vertices labelled $h$ we first need to compute the second moment, which is \begin{equation}\label{eq:second_moment_nc} \frac{[z^n] \frac{\partial^2}{\partial x_h^2} (N_1+N_2) |_{x_1=x_2= 1} + [z^n] \frac{\partial}{\partial x_h} (N_1+N_2) |_{x_1=x_2= 1}}{[z^n] (N_1+N_2) |_{x_1=x_2 = 1}}, \end{equation} and then subtract the square of the mean. Likewise, the mixed moment of the number of vertices labelled $h$ and the number of vertices labelled $i$ is $$\frac{[z^n] \frac{\partial^2}{\partial x_h \partial x_i} (N_1+N_2) |_{x_1=x_2= 1}}{[z^n] (N_1+N_2) |_{x_1=x_2 = 1}},$$ from which we subtract the product of the means to obtain the covariance. \medskip Again, we only show the calculations for the variance of the number of vertices labelled $1$ explicitly. The other entries follow automatically in this case, since the sum of the number of vertices labelled $1$ and the number of vertices labelled $2$ is deterministically equal to $n$.
We get \begin{align*} [z^n] &\frac{\partial^2}{\partial x_1^2} (N_1+N_2) |_{x_1=x_2 = 1}\\ &= \frac{1}{n-1} [t^n] 3(n-1)(3n-4)t^2 (1-t)^{-4(n-1)}- \frac{1}{2n-1} [t^n] (3n-2)(3n-3)t^2 (1-t)^{-(4n-2)}\\ &= 3(3n-4) [t^{n-2}](1-t)^{-4(n-1)}- \frac{(3n-2)(3n-3)}{2n-1} [t^{n-2}] (1-t)^{-(4n-2)}\\ &= 3(3n-4) \binom{5n-7}{n-2}-\frac{(3n-2)(3n-3)}{2n-1} \binom{5n-5}{n-2}. \end{align*} We already found earlier that $$[z^n] \frac{\partial}{\partial x_1} (N_1+N_2) |_{x_1=x_2 = 1} = 3 \binom{5n-6}{n-1}-\frac{3n-2}{2n-1}\binom{5n-4}{n-1}$$ and $$[z^n](N_1+N_2) |_{x_1=x_2 = 1} = \frac{1}{n-1} \binom{5n-5}{n}-\frac{1}{2n-1} \binom{5n-3}{n}.$$ Plugging everything into~\eqref{eq:second_moment_nc} and simplifying, we find a formula for the second moment and thus in turn for the variance. \end{proof} \begin{cor} Let $n > 1$. The average number of vertices labelled $1,2$ respectively in $2$-noncrossing trees with $n$ vertices whose root is labelled $1,2$ respectively is given in the following table: \begin{center} \begin{tabular}{c|cc} & $1$ & $2$ \\ \hline root label $1$ & $\frac{3 n^{2} -n - 1}{5n-4}$ & $\frac{2n^2-3n+1}{5n-4}$ \\ root label $2$ & $\frac{3n-1}{5}$ & $\frac{2 n+1}{5}$ \end{tabular} \end{center} \end{cor} \begin{proof} We recall from the proof of Theorem~\ref{thm:main2} that $$N_1 = \frac{x_1 B(1-x_2B)^2}{(1 + (x_1-x_2)B)^2} ,\ N_2 = \frac{x_2 B(1-x_2B)^3}{(1 + (x_1-x_2)B)^2},$$ with $$B = z (1+(x_1-x_2)B)^3(1-x_2B)^{-4}.$$ In order to determine the desired mean values, we need the coefficients of the partial derivatives $\frac{\partial}{\partial x_h} N_i |_{x_1=x_2 = 1}$. We will show the details of the calculations in one of the cases again: the number of vertices labelled $1$ in $2$-noncrossing trees whose root label is $1$. 
Since $$\frac{\partial}{ \partial t} \frac{x_1 t(1-x_2t)^2}{(1 + (x_1-x_2)t)^2} = \frac{x_1 (1-x_2t) (1-x_1t-2x_2t-x_1x_2 t^2+x_2^2 t^2)}{(1 + (x_1-x_2)t)^3},$$ we have $$[z^n] N_1 = \frac{1}{n} [t^{n-1}] \frac{x_1 (1-x_2t) (1-x_1t-2x_2t-x_1x_2 t^2+x_2^2 t^2)}{(1 + (x_1-x_2)t)^3} (1+(x_1-x_2)t)^{3n}(1-x_2t)^{-4n}.$$ Now differentiate with respect to $x_1$ and set $x_1 = x_2 = 1$: \begin{align*} \frac{\partial}{\partial x_1} [z^n] N_1 \Big|_{x_1 = x_2= 1} &= \frac{1}{n} [t^{n-1}] \left(1+(3n-7)t-(9n-8)t^2\right) (1-t)^{1-4n} \\ &= \frac{1}{n} \left[ \binom{5n-3}{n-1} +(3n-7) \binom{5n-4}{n-2}-(9n-8)\binom{5n-5}{n-3}\right] \\ &= \frac{2(3n^2-n-1)}{(n-1)(n-2)} \binom{5n-5}{n-3}. \end{align*} Dividing by the total number of $2$-noncrossing trees with $n$ vertices and root label $1$, which is $\frac{1}{2n-1} \binom{5n-4}{n-1}$, we obtain the mean number of vertices labelled $1$. \end{proof}
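The closed formulas of Corollaries~\ref{cor:total_nc}, \ref{cor:single_label_nc} and \ref{cor:average_nc} lend themselves to exact numerical spot checks. The following Python sketch (an illustration only; the helper names are ad hoc, and the single-label function covers just the case $h \leq \lceil k/2 \rceil$) evaluates the formulas in exact rational arithmetic, so that for small $k$ and $n$ one can verify that the root-label counts sum to the total count, that the label-count distribution has the stated mean, and that for $k=2$ it reproduces the variance table.

```python
from fractions import Fraction
from math import comb, factorial, prod

def gbinom(m, j):
    # generalized binomial coefficient C(m, j); m may be negative, C(m, j) = 0 for j < 0
    if j < 0:
        return 0
    return prod(m - i for i in range(j)) // factorial(j)

def total_nc(k, n):
    # Corollary (total count): number of k-noncrossing trees with n vertices
    return (Fraction(comb((2*k + 1)*(n - 1), n), n - 1)
            - Fraction(comb((2*k + 1)*n - k - 1, n), 2*n - 1))

def root_count(k, n, h):
    # Corollary (total count): trees whose root is labelled h
    return Fraction(k + 1 - h, 2*k*n - k - h + 1) * comb((2*k + 1)*n - k - h - 1, n - 1)

def label_count(k, n, h, l):
    # Corollary (single label), case h <= ceil(k/2): trees with l vertices labelled h
    s1 = sum(gbinom(4*(h - 1)*(n - 1) + r - 1, r) * gbinom(3*n - 3 - r, l)
             * gbinom(2*(k + 1 - 2*h)*(n - 1), n - r - l) for r in range(n - l + 1))
    s2 = sum(gbinom(2*(h - 1)*(2*n - 1) + r - 1, r) * gbinom(3*n - 2 - r, l)
             * gbinom((k + 1 - 2*h)*(2*n - 1), n - r - l) for r in range(n - l + 1))
    return Fraction(s1, n - 1) - Fraction(s2, 2*n - 1)

def rising(m, j):
    # rising factorial m^{(j)} = m (m+1) ... (m+j-1)
    return prod(m + i for i in range(j))

def mean_label(k, n, h):
    # Corollary (average): mean number of vertices labelled h
    ratio = Fraction(rising(2*k*n + n - 2*k, k), rising(2*k*n + 1 - 2*k, k))
    return Fraction(n, (2*k + 1)*n - (k + 1)) * (
        3*n - 2 - Fraction(2*(h - 1)*(n - 1), k)
        + Fraction(2*(k + 1 - 2*h)) / ((2*k + 1)*(2 - ratio)))

def var_label1_k2(n):
    # stated variance of the number of vertices labelled 1 for k = 2
    return Fraction(3*(2*n - 1)*(4*n - 3)*(49*n*n - 100*n + 44),
                    25*(5*n - 6)*(7*n - 5)**2)
```

For instance, for $k=2$ and $n=3$ there are $16$ trees, distributed as $(0,4,9,3)$ according to the number of vertices labelled $1$, which matches the mean $31/16$ and the variance $111/256$ from the table.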
\section{Introduction} Let $X$ be a compact complex manifold, $D \subset X$ a smooth divisor and $\beta \in (0,1)$. Let $\omega$ be a $C^{\alpha}$ K\"ahler metric on $X$ with cone angle $2\pi\beta$ along $D$ -see Section \ref{BackgroundSection} for the definitions-; here $\alpha \in (0,1)$ is the H\"older exponent, and we also require that $\alpha \leq (1/\beta)-1$. Let $0< \alpha' < \alpha$ and $\epsilon >0$. \begin{theorem} \label{Theorem1} There is $\phi \in C^{2, \alpha'}(X)$ with $\| \phi \|_{2, \alpha'} < \epsilon$ such that the Ricci form of \( \omega_{\phi} = \omega + i \partial \overline{\partial} \phi \) extends to $X$ as a 2-form, smooth with respect to the complex coordinates. In particular $\omega_{\phi}$ has bounded Ricci curvature. \end{theorem} We take $\alpha' < \alpha$ in order to approximate as much as we like the relevant $C^{\alpha}$ function (`Ricci potential' of $\omega$) by smooth functions. Theorem \ref{Theorem1} follows by performing a suitable small change in the volume form of $\omega$; this is done via the Implicit Function Theorem and uses the linear theory developed in \cite{donaldson1}. In a different direction, it is conjectured that the existence of K\"ahler metrics with cone singularities of bounded sectional curvature imposes strong restrictions on the complex geometry of the pair $(X, D)$ -see \cite{Arezzocurvature}-. More precisely, if we denote the normal bundle of $D$ as $\nu_D$ and the tangent bundles as $TX$ and $TD$, then a holomorphic splitting \( TX|_D = TD \oplus \nu_D \) is expected. Theorem \ref{Theorem1} says that there are no such restrictions for the case of Ricci curvature. As an application of Theorem \ref{Theorem1} we give an alternative proof of a well-established existence result for K\"ahler-Einstein metrics with cone singularities (KEcs). These are metrics with cone angle \(2\pi\beta\) along $D$ which satisfy \( \mbox{Ric}(g) = \lambda g \) in the complement of $D$ for some constant $\lambda$.
\begin{theorem} \label{Theorem2} \begin{enumerate} \item If \( c_1 (X) - (1-\beta)c_1(D) < 0 \), then there is a unique KEcs with $\lambda=-1$. \item If \( c_1(X)= (1-\beta) c_1(D) \), then there is a unique KEcs with $\lambda=0$ in each K\"ahler class. \end{enumerate} \end{theorem} We prove Theorem \ref{Theorem2} by means of the classical Aubin-Yau continuity path, starting with a metric of bounded Ricci curvature. The openness follows from \cite{donaldson1} and the closedness from standard a priori estimates. The $C^0$ estimate uses the maximum principle when $\lambda=-1$ (see \cite{Jeffress}) and Moser iteration when $\lambda=0$ (see \cite{brendle}). The $C^2$ estimate follows from the maximum principle applied to the Chern-Lu inequality, together with the fact that there is a reference metric with bisectional curvature bounded above (see \cite{JMR}). Finally, the $C^{2, \alpha}$ estimate follows from the interior Schauder estimates in \cite{ChenWang}. The proof of Theorem \ref{Theorem2} presents -in this simpler compact setting- the arguments in \cite{martin} used to establish an existence theorem for asymptotically conical Ricci-flat K\"ahler metrics with cone singularities. In that context, the method we use to show Theorem \ref{Theorem1} serves to produce asymptotically conical metrics which are Ricci-flat outside a compact set -see Proposition 6 in \cite{martin}-. \subsection*{Acknowledgments} This article contains material from the author's PhD Thesis at Imperial College, funded by the European Research Council Grant 247331 and defended in December 2015. I wish to thank my supervisor, Simon Donaldson, for his encouragement and support. \section{Background} \label{BackgroundSection} \subsection{Linear Theory} Fix \( 0 < \beta < 1 \). We work on \( \mathbb{C}^n \) with complex coordinates \( z_1, \ldots, z_n \).
Consider the model metric \begin{equation} \label{FlatMetric} g_{(\beta)} = \beta^2 |z_1|^{2\beta-2} |dz_1|^2 + \sum_{j=2}^n |dz_j|^2, \end{equation} which has cone singularities of total angle \(2\pi\beta\) along \(\{z_1=0\}\). It induces a distance $d_{\beta}$ and therefore, for each \( \alpha \in (0, 1) \), a H\"older semi-norm \begin{equation} \label{Holder seminorm} [u]_{\alpha} = \sup_{x, y} \frac{|u(x) - u(y)|}{d_{\beta}(x, y)^{\alpha}} \end{equation} on continuous functions defined on domains of \( \mathbb{C}^n \). If we write \(z_1= r^{1/\beta} e^{i\theta} \), then \(g_{(\beta)} = dr^2 + \beta^2 r^2 d\theta^2 + \sum_{j=2}^{n} |dz_j|^2\). In these \emph{cone coordinates}, \( (re^{i\theta}, z_2, \ldots, z_n) \), \( g_{(\beta)} \) is quasi-isometric to the Euclidean metric -indeed \( \beta^2 g_{Euc} \leq g_{(\beta)} \leq g_{Euc} \)-; therefore \eqref{Holder seminorm} is equivalent to the standard H\"older semi-norm with respect to the Euclidean distance. We want to define H\"older continuous 1-forms. Set $\epsilon = dr + i \beta r d\theta$. A $(1,0)$ form $\eta$ is called $C^{\alpha}$ if $\eta= u_1 \epsilon + \sum_{j=2}^n u_j dz_j$ with $u_1, u_j$ $C^{\alpha}$ functions in the usual sense in the cone coordinates; it is also required that $u_1 =0$ on the singular set $\lbrace z_1 = 0 \rbrace$. Note that if we replace $\epsilon$ by $\tilde{\epsilon} = e^{i\theta} \epsilon = \beta |z_1|^{\beta-1} dz_1$, say, in the definition, then the vanishing condition implies that we get the same space. We move on and consider a 2-form $\eta$ of type $(1, 1)$; we use the basis $\lbrace \epsilon \overline{\epsilon}, \epsilon d\overline{z_j}, dz_j\overline{\epsilon}, dz_jd\overline{z_k} \rbrace$ for \( j, k=2, \ldots, n \). We say that $\eta$ is $C^{\alpha}$ if its components are $C^{\alpha}$ functions; we also require the components corresponding to $\epsilon d\overline{z_j}, dz_j \overline{\epsilon}$ to vanish on \( \{z_1 =0\} \).
Finally, we set $C^{2, \alpha}$ to be the space of $C^{\alpha}$ (real) functions $u$ such that $\partial u, i\partial \overline{\partial} u$ are $C^{\alpha}$. It is straightforward to introduce norms; we define the $C^{\alpha}$ norm of a function $\|u\|_{\alpha}$ as the sum of its $C^0$ norm $\|u\|_0$ and its $C^{\alpha}$ semi-norm $[u]_{\alpha}$. The $C^{2, \alpha}$ norm of a function $u$, denoted by \( \|u\|_{2, \alpha} \), is the sum of $\|u\|_{\alpha}$, the $C^{\alpha}$ norm of the components of $\partial u$ and the $C^{\alpha}$ norm of the components of $i \partial \overline{\partial} u$. Let $X$ be a compact complex manifold, $D \subset X$ a smooth divisor and $g$ a smooth K\"ahler metric on the complement of $D$. \begin{definition} \label{DefinitionMetricConeSing} We say that $g$ is a \(C^{\alpha}\) metric with cone angle $2 \pi \beta$ along $D$, if for every $p \in D$ we can find complex coordinates $(z_1, \ldots, z_n)$ centred at $p$ in which \begin{itemize} \item $D = \{z_1 =0\}$. \item \((1/C) g_{(\beta)} \leq g \leq C g_{(\beta)}\) for some \( C>0 \). \item There is a local K\"ahler potential for \(g\) which belongs to \(C^{2, \alpha}\). \end{itemize} \end{definition} It is easy to show that if \(g\) satisfies Definition \ref{DefinitionMetricConeSing}, then its tangent cone at points of \(D\) is \(g_{(\beta)}\). It is also straightforward to see that its K\"ahler form \(\omega\) defines a closed current on \(X\) -with zero Lelong numbers at points of \(D\)- and therefore there is a positive de Rham co-homology class \( [\omega] \). Nevertheless, we won't make use of these facts. Set $\lbrace v_1, \ldots, v_n \rbrace$ to be the vectors \begin{equation} \label{VECTORS} v_1 = |z_1|^{1-\beta} \frac{\partial}{\partial z_1}, \hspace{3mm} v_j = \frac{\partial}{\partial z_j} \hspace{2mm} \mbox{for} \hspace{1mm} j = 2, \ldots, n . \end{equation} Note that, with respect to $g_{(\beta)}$, these vectors are orthogonal and their length is constant.
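The identity $g_{(\beta)} = dr^2 + \beta^2 r^2 d\theta^2 + \sum_{j \geq 2} |dz_j|^2$ in cone coordinates can be checked concretely by pushing tangent vectors through the coordinate change $z_1 = r^{1/\beta} e^{i\theta}$ and measuring them in the model metric \eqref{FlatMetric}. The following Python snippet is a minimal sketch of this computation (restricted to the $z_1$-plane, with ad hoc helper names), using finite differences for the push-forward:

```python
import cmath

beta = 0.6  # any sample cone angle parameter in (0, 1)

def to_z1(r, theta):
    # cone coordinates -> complex coordinate: z_1 = r^{1/beta} e^{i theta}
    return r**(1.0/beta) * cmath.exp(1j*theta)

def model_sq_norm(r, theta, dr, dtheta, h=1e-6):
    # push the tangent vector (dr, dtheta) forward by a central difference
    dz1 = (to_z1(r + h*dr, theta + h*dtheta) - to_z1(r - h*dr, theta - h*dtheta)) / (2*h)
    z1 = to_z1(r, theta)
    # squared length in the model metric beta^2 |z_1|^{2 beta - 2} |dz_1|^2
    return beta**2 * abs(z1)**(2*beta - 2) * abs(dz1)**2
```

The radial unit vector comes out with length $1$ and the angular vector $\partial/\partial\theta$ with length $\beta r$, as dictated by $dr^2 + \beta^2 r^2 d\theta^2$.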
In the complement of $D$ we have smooth functions $g_{i\overline{j}} = g (v_i, \overline{v}_j)$ which admit a H\"older continuous extension to $D$. The matrix $(g_{i\overline{j}} (p))$ is positive definite and $g_{1\overline{j}} =0$ when $j \geq 2$ and $z_1=0$. It is interesting to note that Definition \ref{DefinitionMetricConeSing} is independent of the complex coordinates $z_1, \ldots, z_n$ only if we add the restriction that $\alpha \leq \beta^{-1} -1$. Indeed, assume for simplicity that $g$ is a metric in a domain of $\mathbb{C}^2$, with standard complex coordinates $(\tilde{z}_1, \tilde{z}_2)$, of cone angle $2\pi\beta$ along $D =\{ \tilde{z}_1 =0 \}$. We get smooth functions $\tilde{g}_{i\overline{j}}$ on the complement of $D$ which extend H\"older continuously to $D$. Set $\tilde{z}_1= z_1$ and $\tilde{z}_2 = z_1 + z_2$, so that $$ \frac{\partial}{\partial z_1} = \frac{\partial}{\partial \tilde{z}_1} + \frac{\partial}{\partial \tilde{z}_2} , \hspace{4mm} \frac{\partial}{\partial z_2} = \frac{\partial}{\partial \tilde{z}_2} .$$ In the coordinates $(z_1, z_2)$, $$ g_{1\overline{1}} = \tilde{g}_{1\overline{1}} + |z_1|^{1-\beta} (\tilde{g}_{1\overline{2}} + \tilde{g}_{2\overline{1}}) + |z_1|^{2-2\beta} \tilde{g}_{2\overline{2}}, \hspace{3mm} g_{1\overline{2}}= \tilde{g}_{1\overline{2}} + |z_1|^{1-\beta} \tilde{g}_{2\overline{2}}, \hspace{3mm} g_{2\overline{2}} = \tilde{g}_{2\overline{2}} . $$ However, the function $|z_1|^{1-\beta}$ belongs to $C^{\alpha}$ only if $\alpha \leq \beta^{-1} -1$. There are two types of coordinates we can consider around $D$. The first is given by holomorphic coordinates $z_1, \ldots, z_n$ in which $D= \lbrace z_1 =0 \rbrace$. In the second we replace $z_1$ with $r e^{i\theta}$, by means of \(z_1=r^{1/\beta}e^{i\theta}\), and leave $z_2, \ldots, z_n$ unchanged; we refer to the last as cone coordinates. In other words, there are two relevant differential structures on $X$ in our situation. 
One is given by the complex manifold structure we started with; the other is given by declaring the cone coordinates to be smooth. The two structures are clearly equivalent by a map modelled on \[ (re^{i\theta}, z_2, \ldots, z_n) \to (r^{1/\beta}e^{i\theta}, z_2, \ldots, z_n) \] in a neighbourhood of $D$. The notion of a function being H\"older continuous (without specifying the exponent) is independent of the coordinates we take; however, the value of the exponent does depend on this choice. We set \( C^{\alpha}(X) \) to be the space of H\"older continuous functions of exponent \( \alpha\) with respect to the cone coordinates -this agrees with the space of \( C^{\alpha \beta} \) functions with respect to the complex coordinates-. Taking a finite covering of $X$ by complex coordinate charts, it is straightforward to define the space \( C^{2, \alpha}(X) \) and endow it with a norm which makes it into a Banach space. The main result we want to recall is the following \begin{theorem} \label{LinearTheorem} Assume that \( \alpha < (1/\beta)-1 \); then \( \Delta_{g} : C^{2, \alpha}(X) \to C^{\alpha}(X) \) is a Fredholm operator of index $0$. \end{theorem} Theorem \ref{LinearTheorem} is proved in \cite{donaldson1}. In this article \( \Delta_g \) denotes the negative or `analyst' Laplacian of \(g\). \subsection{Standard reference metric} Let \( \Lambda \) be the complex line bundle associated to \(D\), \(h\) a smooth Hermitian metric on it and $s$ a holomorphic section of \(\Lambda\) with \(s^{-1}(0)=D \). Let \( \Omega \) be a smooth K\"ahler metric on \(X\); for \(\delta>0\) set \begin{equation} \omega = \Omega + \delta i \partial \overline{\partial} |s|_h^{2\beta} . \end{equation} We have the following \begin{proposition} \label{ReferenceMetric} If \(\delta \) is sufficiently small, then \( \omega \) is \(C^{\alpha}\) according to Definition \ref{DefinitionMetricConeSing} with \( \alpha = (1/\beta)-1 \). Moreover, its sectional curvature is uniformly bounded above.
\end{proposition} \begin{proof} By compactness, it is enough to work locally. Let $F$ be a smooth positive function and let $\Omega$ be a smooth K\"ahler form, both defined on a domain in $\mathbb{C}^n$ which contains the origin. Consider the $(1,1)$ form \begin{equation} \label{LOC MET} \omega = \Omega + i \partial \overline{\partial} (F |z_1|^{2\beta}) . \end{equation} Straightforward calculation gives us that $$ i \partial \overline{\partial} (F |z_1|^{2\beta}) = |z_1|^{2\beta} i \partial \overline{\partial} F + \beta |z_1|^{2\beta-2} \left( \overline{z}_1 i dz_1 \wedge \overline{\partial}F + z_1 i\partial F \wedge d \overline{z}_1 \right) + \beta^2 |z_1|^{2\beta-2} F idz_1 \wedge d\overline{z}_1 . $$ Let $I$ be the complex structure of $\mathbb{C}^n$ and $ g = \omega (., I.)$. Let $v_1, \ldots, v_n$ be as in \ref{VECTORS}. We want to compute $g_{i\overline{j}} = g(v_i, \overline{v}_j)$. Write $ \Omega = \sum_{i, j =1}^n \Omega_{i\overline{j}} i dz_i \wedge d\overline{z}_j$. Note that the coefficients $\Omega_{i\overline{j}}$ are given by the contraction of $\Omega$ with the standard coordinate vectors $\partial/ \partial z_i$, $\partial /\partial \overline{z}_j$, while to obtain $g_{i\overline{j}}$ we must contract $g$ with $v_i, \overline{v}_j$. 
It is easy to check that $$ g_{1\overline{1}} = |z_1|^{2-2\beta} \Omega_{1\overline{1}} + |z_1|^2 \frac{\partial^2 F}{\partial z_1 \partial \overline{z}_1} + \beta \left( z_1 \frac{\partial F}{\partial z_1} + \overline{z}_1 \frac{\partial F}{\partial \overline{z}_1} \right) + \beta^2 F ; $$ $$g_{1\overline{j}} = |z_1|^{1-\beta} \Omega_{1\overline{j}} + |z_1|^{1+\beta} \frac{\partial^2 F}{\partial z_1 \partial \overline{z}_j } + \beta |z_1|^{\beta -1} \left( z_1 \frac{\partial F}{\partial z_j} + \overline{z}_1 \frac{\partial F}{\partial \overline{z}_j} \right) \hspace{4mm} \mbox{for } j\geq 2 ; $$ $$ g_{j\overline{k}}= \Omega_{j\overline{k}} + |z_1|^{2\beta} \frac{\partial^2 F}{\partial z_j \partial \overline{z}_k} \hspace{4mm} \mbox{for } j, k \geq 2 . $$ It is then clear that if \(|z_1|\) is sufficiently small, then $g$ defines a $C^{\alpha}$ K\"ahler metric with $\alpha = \beta^{-1}-1$. There is a useful way of thinking of $g$, due to J. Sturm (see \cite{Rubinstein}, Lemma 3.14): On $\mathbb{C}^{n+1}$ with standard complex coordinates $(z_1, \ldots, z_{n+1})$ consider the $(1, 1)$ form $$ \Gamma = \Omega + i\partial \overline{\partial} (F |z_{n+1}|^2) . $$ This form defines a smooth K\"ahler metric on $\mathbb{C}^{n+1}$ in a neighbourhood of $0$. Let us delete a ray in the complex plane corresponding to the $z_1$ variable and define $$ \Phi (z_1, \ldots, z_n) = (z_1, \ldots, z_n, z_1^{\beta}) , $$ so that $\omega = \Phi^{*} \Gamma$. The pull-back of $\Gamma$ by $\Phi$ is independent of the branch of $z_1^{\beta}$ that we take and we can think of the metric $g$ in the complement of $D$ as the restriction of the smooth metric defined by $\Gamma$ to a smooth complex hypersurface in $\mathbb{C}^{n+1}$. A well-known principle says that the holomorphic sectional curvature of a complex submanifold of a K\"ahler manifold is less than or equal to that of the ambient manifold; see Section 0.5 in Griffiths-Harris \cite{GriffithsHarris}.
We conclude that we can restrict $g$ to a smaller neighbourhood of $0$ if necessary so that its sectional curvature is uniformly bounded above. \end{proof} It is easy to check that \( [\omega]= [\Omega] \) as de Rham cohomology classes. We refer to \( \omega \) as the `standard reference metric'. It follows from the computations in the appendix of \cite{JMR} that the sectional curvature of \( \omega \) is unbounded below at \(D\); it then follows that the same holds for its Ricci curvature. We remark that this negative curvature phenomenon is not inherent to the cone singularities; it is a consequence of the particular definition of \(\omega\). A good example to have in mind is the following: Let $a$ be a real number with $|a| <1$. Consider the metric defined in the unit disc of the complex numbers, $$ g_a = (a + |z_1|^{2\beta-2} )|dz_1|^2 . $$ Its Gaussian curvature is given by $$ K_a = - 4(\beta-1)^2 a \frac{|z_1|^{2-4\beta}}{(1 + |z_1|^{2-2\beta}a)^3} . $$ If $1/2 < \beta <1$, then $K_a$ is unbounded below when $a>0$ and unbounded above if $a<0$. In higher dimensions we can take the product of $g_a$ with a flat Euclidean factor $\mathbb{C}^{n-1}$. For future reference, we recall that on a K\"ahler manifold there are the notions of sectional curvature, holomorphic sectional curvature and bisectional curvature. A uniform (upper or lower) bound on any of these three implies a uniform (upper or lower) bound on the other two. \section{Proof of Theorem \ref{Theorem1}} Consider the functional \( H \) given by \begin{equation} H(\phi)= \log \frac{\omega_{\phi}^n}{\omega^n}, \end{equation} where \( \omega_{\phi}= \omega + i \partial\overline{\partial} \phi \); it is defined in a suitable neighbourhood of \(0\) in \( C^{2, \alpha'}(X) \) and takes values in $C^{\alpha'}(X)$.
Let \( v = \int_X \omega^n \) and \( M= \{ h \in C^{\alpha'}(X) \hspace{1mm} \mbox{s.t.} \hspace{1mm} \int_X e^h \omega^n = v \} \); integration by parts shows that \( \int_X \omega_{\phi}^n = \int_X \omega^n \) and therefore \(H(\phi)\in M \) for any \(\phi\). Clearly \(H(0)=0 \); standard computations show that \(H\) is \(C^1\) and that its derivative at \(0\) agrees with \( \Delta_g \). Write \( T_0 M = \{ \psi \in C^{\alpha'}(X) \hspace{1mm} \mbox{s.t.} \hspace{1mm} \int_X \psi \omega^n = 0 \}\) for the tangent space of \(M\) at \(0\); and let \( L = \{ \phi \in C^{2, \alpha'}(X) \hspace{1mm} \mbox{s.t.} \hspace{1mm} \int_X \phi \omega^n = 0 \} \). It follows from Theorem \ref{LinearTheorem} and the Implicit Function Theorem that \(H\) defines a diffeomorphism between small neighborhoods of \(0\), say \( U \subset L \) and \( V \subset M \). We can assume that \(U \subset B(0, \epsilon) \), the ball centred at the origin of radius \(\epsilon\); and that \(B(0, 2\mu) \cap M \subset V \) for some \( \mu>0 \). On the other hand, a standard formula in K\"ahler geometry tells us that, in the complement of \(D\) \begin{equation} \label{Ricci1} \mbox{Ric}(\omega_{\phi})- \mbox{Ric}(\omega) = -i \partial \overline{\partial} H(\phi); \end{equation} and \begin{equation} \label{Ricci2} \mbox{Ric}(\omega)- \mbox{Ric}(\Omega) = i \partial \overline{\partial} \log \frac{\Omega^n}{\omega^n} = i \partial \overline{\partial} \log \frac{ |s|_h^{2\beta-2}\Omega^n}{\omega^n} + (1-\beta) i \partial \overline{\partial} \log |s|_h^2 . \end{equation} Since \( \omega \) is a \(C^{\alpha}\) metric, the function \[ F= \log \frac{ |s|_h^{2\beta-2}\Omega^n}{\omega^n} \] belongs to \(C^{\alpha}(X)\). Since \(\alpha' < \alpha \), there is \( \tilde{h} \in B(0, \mu) \subset C^{\alpha'}(X) \) such that \( F - \tilde{h} \) is a smooth function on \(X\) with respect to the complex coordinates. 
Note that \( e^{-\mu} v \leq \int_X e^{\tilde{h}}\omega^n \leq e^{\mu} v \), so we can add a constant to \(\tilde{h}\) to get \(h \in V \) such that \(F-h\) is smooth. Write \(h=H(\phi)\) with \(\phi \in U \); equations \ref{Ricci1} and \ref{Ricci2} give us \begin{equation} \label{Ricci3} \mbox{Ric}(\omega_{\phi}) = \mbox{Ric}(\Omega) + (1-\beta) i \partial \overline{\partial} \log |s|_h^2 + i \partial \overline{\partial} \left( F-H(\phi) \right) . \end{equation} Note that $i \partial \overline{\partial} \log |s|_h^2$ extends as a smooth 2-form on \(X\); indeed, it is the standard representative for \( -2\pi c_1 (\Lambda) \). Theorem \ref{Theorem1} then follows from equation \ref{Ricci3}. For the sake of clarity we remark that in the proof we use standard derivatives in the complement of \(D\). If we were using currents and working globally on \(X\), then we would have to include the term \( 2 \pi (1- \beta) [D] \) on the right hand sides of equations \ref{Ricci2} and \ref{Ricci3}, with \([D]\) the current of integration along \(D\). \section{Proof of Theorem \ref{Theorem2}} We prove the case of negative Ricci; the case of Ricci-flat metrics goes along the same lines, the major difference being in the \(C^0\) estimate, in which Moser iteration is used instead of the maximum principle. There are no difficulties in extending the Moser iteration technique to the setting of metrics with cone singularities; for the details we refer to \cite{brendle}. We concentrate on the existence part; the uniqueness follows from the maximum principle (see \cite{Jeffress}). The hypothesis that $c_1 (X) - (1 - \beta) c_1 ([D]) < 0$ implies that there is a smooth K\"ahler form $\Omega$ such that $ - (2\pi)^{-1} [ \Omega ] = c_1 (X) - (1 - \beta) c_1 ([D])$. Take $s$ to be a holomorphic section of $\Lambda$ such that $s^{-1}(0) = D$ and let $h$ be a smooth Hermitian metric on $\Lambda$.
Fix $\delta >0$ so that we have the reference metric $ \omega = \Omega + \delta i \partial \overline{\partial} |s|_h^{2\beta}$ of Proposition \ref{ReferenceMetric}. \begin{claim} There is a $C^{\alpha}$ function $f$ on $X$, smooth in the complement of $D$, such that $ \mbox{Ric}(\omega) = - \omega + i \partial \overline{\partial} f.$ We refer to \(f\) as the Ricci potential of $\omega$. \end{claim} \begin{proof} The cohomology condition on $\Omega$ implies that there is a smooth function $F$ on $X$ with $ i\partial \overline{\partial} F = \Omega + \mbox{Ric}(\Omega) + (1- \beta) i\partial \overline{\partial} \log |s|_h^2 $. We use that $ \mbox{Ric}(\omega) - \mbox{Ric}(\Omega) = i \partial \overline{\partial} \log \left( \Omega^n/\omega^n \right)$ to obtain $$ \mbox{Ric}(\omega) = \mbox{Ric}(\Omega) + i \partial \overline{\partial} \log \left( \frac{\Omega^n}{\omega^n} \right) = i\partial \overline{\partial} F - \Omega - i\partial \overline{\partial} \log \left( \frac{|s|_h^{2-2\beta} \omega^n}{\Omega^n} \right) = -\omega + i\partial \overline{\partial}f ; $$ where $$ f = F + \delta |s|_h^{2\beta} - \log \left( \frac{|s|_h^{2-2\beta} \omega^n}{\Omega^n} \right) . $$ Since \(\omega\) is \(C^{\alpha}\), we see that $f$ is a smooth function in the complement of $D$ which extends as a $C^{\alpha}$ function to $X$ with \( \alpha = (1/\beta)-1 \). \end{proof} We want to find a solution $u \in C^{2, \alpha}$ of \begin{equation} \label{NEG KE} (\omega + i \partial \overline{\partial} u )^n = e^{ f + u} \omega^n . \end{equation} It is easy to argue that if we set $\omega_{KE} = \omega + i \partial \overline{\partial} u$, then $\omega_{KE}$ defines a K\"ahler metric with cone angle $2\pi \beta$ along $D$ and $\mbox{Ric}(\omega_{KE} ) = - \omega_{KE} $ in the complement of $D$. In order to solve equation \ref{NEG KE} we use the Aubin-Yau continuity method. A novel feature is that the path we use does \emph{not} start at the reference metric $\omega$.
We start the continuity path with a metric whose Ricci potential is a smooth function rather than \(C^{\alpha}\), to obtain the initial metric we proceed as in the proof of Theorem \ref{Theorem1}. From now on we fix \(\alpha < (1/\beta)-1 \). Consider the functional $\mathcal{F}: U \to C^{\alpha}$, where $U$ is a neighbourhood of $0$ in $C^{2, \alpha}$ and $\mathcal{F} (\phi) = \log (\omega_{\phi}^n / \omega^n) - \phi$. It is clear that $\mathcal{F} (0)=0$ and that the derivative at $0$ is given by $ D_0 \mathcal{F} (\phi) = \triangle_{g} \phi - \phi$. Integration by parts shows that $D_0 \mathcal{F}$ has no kernel, so that the Implicit Function Theorem together with Theorem \ref{LinearTheorem} imply that there is $\epsilon>0$ such that for every $h \in C^{\alpha}$ with $\| h \|_{\alpha} < \epsilon$ there is $\phi \in C^{2, \alpha}$ with $\mathcal{F}(\phi) =h$. There is a function $f_0$, smooth in the complex coordinates, such that $\| f - f_0 \|_{\alpha} < \epsilon$. We let $h = f - f_0$ and take $\phi \in C^{2, \alpha}$ with $\mathcal{F} (\phi) = h$, so that $\omega_{\phi}$ satisfies $ \omega_{\phi}^n = e^{h+\phi} \omega^n $; hence $\mbox{Ric}(\omega_{\phi}) = - \omega_{\phi} + i \partial \overline{\partial} f_0 $. Set \( \omega_0 = \omega_{\phi} \). To solve equation \ref{NEG KE} it is enough to find $u_1 \in C^{2, \alpha}$ such that $ (\omega_0 + i \partial \overline{\partial} u_1)^n = e^{f_0 + u_1} \omega_0^n$; so then $u = \phi + u_1$ is the solution of \ref{NEG KE}. We use the path \begin{equation} \label{CPath} (\omega_0 + i\partial \overline{\partial} u_t)^n = e^{tf_0 + u_t} \omega_0^n \end{equation} and consider the set $$ T = \{ t \in [0, 1] \hspace{2mm} \mbox{such that there is} \hspace{2mm} u_t \in C^{2, \alpha} \hspace{2mm} \mbox{solving \ref{CPath}} \} . $$ We start at $t=0$ with $u_0 =0$. The goal is to show that $T$ is open and closed. Theorem \ref{LinearTheorem} implies that $T$ is open. 
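To spell out the openness (a standard linearization argument, sketched here for convenience): suppose $u_{t_0}$ solves equation \ref{CPath} and differentiate the equation with respect to $t$ at $t=t_0$. Writing $\dot{u}$ for the derivative of $u_t$, we obtain \[ \triangle_{\omega_{t_0}} \dot{u} = f_0 + \dot{u} , \hspace{4mm} \mbox{that is,} \hspace{4mm} \left( \triangle_{\omega_{t_0}} - 1 \right) \dot{u} = f_0 . \] Integration by parts shows that $\triangle_{\omega_{t_0}} - 1$ has trivial kernel; since the analogue of Theorem \ref{LinearTheorem} for the metric $\omega_{t_0}$ makes it a Fredholm operator of index $0$, it is an isomorphism $C^{2, \alpha} \to C^{\alpha}$, and the Implicit Function Theorem produces solutions of equation \ref{CPath} for all $t$ close to $t_0$.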
The fact that $T$ is closed, and hence Theorem \ref{Theorem2}, follows from the following a priori estimate: \begin{proposition} \label{AprioriEstimate} There is a constant $C$, independent of $t \in T$, such that $\|u_t \|_{2, \alpha} \leq C$. \end{proposition} The proof of Proposition \ref{AprioriEstimate} is divided into three steps: \emph{Step 1. $C^0$-estimate} This is an application of the maximum principle. If $u_t$ attains its maximum at $ p \in X \setminus D$ then equation \ref{CPath} implies that $t f_0 (p) + u_t (p) \leq 0$, so that $ \sup u_t \leq \max \{- \inf f_0, 0 \}$. If the maximum is attained at $p \in D$ then one considers $ \tilde{u}_t = u_t + \delta |s|_h^{\epsilon}$ for a suitable choice of $\delta$ and $\epsilon$ positive and small. The function $\tilde{u}_t$ attains its maximum outside $D$, and one gets a uniform upper bound on the supremum of $ \tilde{u}_t$ (see \cite{Jeffress}), which in turn implies a uniform upper bound on $\sup u_t$. Similarly one gets a uniform lower bound on $ \inf u_t$. As a result $\| u_t \|_0 \leq C$. \emph{Step 2. $C^2$-estimate} Write $\omega_t = \omega_0 + i \partial \overline{\partial}u_t$; then equation \ref{CPath} implies that $\mbox{Ric}(\omega_t) = - \omega_t + (1-t) i \partial \overline{\partial} f_0$. Since $f_0$ is smooth, there is a constant $C_2>0$ such that $i \partial \overline{\partial} f_0 \geq - C_2 \omega$. Set $C_1 =1$ so that $\mbox{Ric}(\omega_t) \geq -C_1 \omega_t - C_2 \omega$. On the other hand, the reference metric $\omega$ has bisectional curvature bounded above, so there is $C_3 >0$ such that $\mbox{Bisec}(\omega) \leq C_3$. Write (with a slight abuse of notation) $\tilde{u}_t = \phi + u_t$, so that $\omega_t = \omega + i \partial \overline{\partial} \tilde{u}_t$. Let $ A = C_2 + 2C_3 +1$; the Chern-Lu inequality (see \cite{JMR}) tells us that \begin{equation} \label{CHERN LU} \triangle_{\omega_t} ( \mbox{tr}_{\omega_t} \omega - A \tilde{u}_t) \geq -C_1 - An + \mbox{tr}_{\omega_t}\omega.
\end{equation} We already have a uniform bound on $\| \tilde{u}_t \|_0$. We use inequality \ref{CHERN LU} and the maximum principle (as in the previous step), together with the estimate on $\| \tilde{u}_t \|_0$, to get the uniform bound $ \mbox{tr}_{\omega_t} \omega \leq C$. This bound, together with equation \ref{CPath}, implies that $ C^{-1} \omega \leq \omega_t \leq C \omega $. \emph{Step 3. $C^{2, \alpha}$-estimate} This is a local result. We appeal to the `interior Schauder estimates for the complex Monge-Amp\`ere operator'. In the case that $\beta =1$ (no cone singularities) there is a large literature on this topic; we mention, among others, the work of Caffarelli and Safonov for the real Monge-Amp\`ere operator. More recently, Chen-Wang \cite{ChenWang} gave a new proof of these estimates by means of a `blow-up' argument, similar in spirit to Leon Simon's proof of the Schauder estimates for the Laplace operator \cite{SimonSchauder}. This technique works in the setting of metrics with cone singularities. Our previous $C^2$ estimate together with Theorem 1.7 in \cite{ChenWang} gives us that $\| u_t \|_{2, \alpha} \leq C$. Alternatively, we can refer to Evans-Krylov theory and its analogue for metrics with cone singularities; see \cite{JMR}. \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} Short-period ($P<10$ days) Jupiter-sized ($R_{p}\ge8~\mathrm{R_{\oplus}}$) exoplanets, or hot Jupiters, are rare in the Galaxy. Results from radial velocity (RV) surveys \citep[e.g.,][]{Cumming2008,Mayor2011,Wright2012}, ground-based photometry surveys \citep[e.g.,][]{Obermeier2016}, and space-based surveys \citep{Howard2012,Petigura2018,Zhou2019a} have determined the occurrence rate for hot Jupiters orbiting Sun-like (FGK) dwarfs to be \(\lesssim 1\%\). Although more than 400 hot Jupiters have been detected orbiting Sun-like stars, there is no consensus on the formation mechanisms that produce this population of exoplanets \citep[see][]{Dawson2018}. Many hypotheses have been proposed to explain the origin of these planets, including star-planet interactions \citep[e.g.,][]{Wu2003,Petrovich2015a}, planet-planet interactions \citep[e.g.,][]{Naoz2011}, migration due to planet-disk interactions \citep[e.g.,][]{Lin1996}, high-eccentricity migration \citep[e.g.,][]{Rasio1996,Weidenschilling1996,Ford2008,Petrovich2015}, and \textit{in-situ} formation \citep[e.g.,][]{Boley2016,Batygin2016}. From analysis of the Kepler field \citep[e.g.,][]{Dressing2015,Mulders2015,Hardegree-Ullman2019,Hsu2020}, the occurrence rate of small ($1\mathrm{~R_\oplus}<R_p<4\mathrm{~R_\oplus}$) planets on short-period ($P<200$ days) orbits is larger for M dwarfs, the most abundant type of star in the Galaxy \citep[][]{Henry2018}, than for Sun-like stars. The occurrence rate of these small planets also increases for later-type M dwarfs. RV surveys have similarly revealed the abundance of low-mass planets ($1\mathrm{~M_\oplus}<M_p<10\mathrm{~M_\oplus}$) on short-period orbits ($P<200$ days) as companions to M dwarfs \citep[e.g.,][]{Bonfils2013,Tuomi2014,Tuomi2019,Sabotta2021}. Jupiter-like planets, however, are expected to be rare companions to M dwarfs under the theory of core accretion \citep[e.g.,][]{Laughlin2004,Ida2005,Kennedy2008}.
In the core accretion model, a gas giant planet forms from a runaway process resulting in the rapid accretion of gas onto a planetary core \citep[e.g.,][]{Pollack1996,Ida2004,Hubickyj2005}. This model predicts a small number of gas giants orbiting M dwarfs, because the low surface density of an M dwarf protoplanetary disk would impede the formation of the massive cores required for the onset of runaway gas accretion. To date, M dwarf RV surveys \citep[e.g.,][]{Endl2006,Bonfils2013,Tuomi2019,Sabotta2021} and photometric surveys \citep{Kovacs2013,Obermeier2016} have only been able to constrain the occurrence rate to $\lesssim1-2\%$ for hot Jupiters orbiting M dwarfs. Prior to this paper, there were seven hot Jupiters known to transit M dwarfs: Kepler-45 b \citep{Johnson2012}, HATS-6 b \citep{Hartman2015}, NGTS-1 b \citep{Bayliss2018}, HATS-71 b \citep{Bakos2020}, HATS-74A b and HATS-75 b \citep{Jordan2022}, and TOI-3757 b \citep{Kanodia2022}. In this paper, we confirm the planetary nature of two gas giants transiting the M dwarfs TOI-3714 ($V=15.24$, $J=11.74$, $T=13.18$) and TOI-3629 ($V=14.63$, $J=11.42$, $T=12.79$). We characterize each system using space- and ground-based photometry, speckle imaging, and precision RVs obtained with the Habitable-zone Planet Finder \citep[HPF;][]{Mahadevan2012,Mahadevan2014} and NEID \citep[][]{Schwab2016,Halverson2016} spectrographs. We derive stellar parameters for the host stars using our HPF spectra and use the RVs measured from both HPF and NEID to confirm that each transiting companion is a hot Jupiter. This paper is structured as follows: Section \ref{sec:observations} presents the photometric, imaging, and spectroscopic observations used to characterize each system. The characterization of the host stars and the best estimates of the stellar parameters are described in Section \ref{sec:stellarpar}. The modeling and analysis of the photometry and RVs are presented in Section \ref{sec:modelfit}.
Section \ref{sec:discussion} provides further discussion of the nature of these planets and their suitability for future study. We end with a summary of our key results in Section \ref{sec:summary}. \section{Observations} \label{sec:observations} \subsection{TESS} TESS \citep{Ricker2015} observed TOI-3629 (TIC 455784423, Gaia EDR3 2881820324294985856) and TOI-3714 (TIC 155867025, Gaia EDR3 178924390478792320) in long-cadence mode (30-min cadence). TOI-3629 was observed during Sector 17 (2019 October 7 through 2019 November 2) and TOI-3714 was observed during Sector 19 (2019 November 27 through 2019 December 24). Similar to TOI-1899 \citep{Canas2020}, we identified TIC-455784423.01 as a planetary candidate using a custom pipeline that searches short- and long-cadence TESS data for transiting candidates orbiting M dwarfs amenable to RV observations with HPF. At the time we searched TESS data, the ``quick-look pipeline'' (QLP) developed by \cite{Huang2020,Huang2020b} was releasing candidates from the southern TESS sectors. Our search was not designed for completeness but to identify a few ($\lesssim10$) M dwarfs with Jupiter-sized transiting companions that were most likely planetary in nature. Briefly, this pipeline was developed to identify transiting companions to bright (TESS magnitude of $T<13$) M dwarfs ($T_{e}<4000$ K) from the catalog of cool dwarfs \citep[a value of \texttt{splists = cooldwarfs\_v8};][]{Muirhead2018} in the TESS input catalog \citep[TIC;][]{Stassun2019} that are observable from the Hobby-Eberly Telescope \citep[HET;][]{Ramsey1994,Ramsey1998} at McDonald Observatory ($-11^\circ<\delta<71^\circ$). These constraints resulted in an average of $\sim2000$ stars to process per sector.
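The heart of such a search (a box-shaped matched filter run over a grid of trial periods on detrended photometry) can be illustrated with a minimal, self-contained sketch in pure \texttt{numpy}. Everything below, from the injected signal to the variable names, is illustrative only and is not our actual pipeline, which is described next:

```python
import numpy as np

def simple_box_search(time, flux, periods, duration):
    """For each trial period, phase-fold the light curve and slide a box of
    the given duration across the folded phases; score each period by the
    deepest box found, with the depth weighted by the number of in-box points."""
    power = np.zeros(len(periods))
    for i, period in enumerate(periods):
        phase = (time % period) / period
        width = duration / period
        best = 0.0
        for phi0 in np.arange(0.0, 1.0 - width, width / 2.0):
            in_box = (phase >= phi0) & (phase < phi0 + width)
            n_in = in_box.sum()
            if n_in < 3 or n_in == len(time):
                continue
            depth = flux[~in_box].mean() - flux[in_box].mean()
            best = max(best, depth * np.sqrt(n_in))
        power[i] = best
    return power

# Inject a 2%-deep, 3.9-day-period box transit into white noise at a
# 30-minute-like cadence over one ~27-day sector (all values synthetic).
rng = np.random.default_rng(42)
time = np.arange(0.0, 27.0, 0.02)
flux = 1.0 + 1.0e-3 * rng.standard_normal(time.size)
true_period, duration, t0 = 3.9, 0.1, 0.5
flux[((time - t0) % true_period) < duration] -= 0.02

periods = np.arange(1.0, 10.0, 0.01)
power = simple_box_search(time, flux, periods, duration)
best_period = periods[np.argmax(power)]
print(f"recovered period: {best_period:.2f} d")  # close to the injected 3.9 d
```

A real search replaces this toy filter with the optimized box least-squares implementation and adds the detrending and vetting steps described in the text.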
Our pipeline uses the \texttt{lightkurve} package \citep{LightkurveCollaboration2018} to detrend, with a Savitzky-Golay filter, (i) short-cadence light curves provided by the TESS science processing operations center \citep[][]{Jenkins2016} and (ii) long-cadence light curves derived from the calibrated full-frame images using \texttt{eleanor} \citep{Feinstein2019}. The pipeline searches for transit-like events in the detrended photometry using the box least-squares algorithm \citep{Kovacs2002} and models the transit signal following the formalism from \cite{Mandel2002} as implemented in the \texttt{batman} package \citep{Kreidberg2015}. The transit-like events are vetted for centroid offsets and for inconsistencies ($>3\sigma$ discrepancies) between the stellar density recovered by the transit fit \citep[e.g.,][]{Seager2003,Winn2010} and the stellar density reported in the TIC. Identified signals were subsequently vetted by members of the HPF team before we began RV observations. We detected one planet candidate with a depth of $\sim1.5\%$ and a period of $\sim3.94$ days. This event was subsequently identified (at a comparable period and depth) by the QLP and given the designation TOI-3629.01. It is one of the planetary candidates from the ``faint star search''\footnote{\url{https://tess.mit.edu/qlp/}}, an effort to extend the nominal search and vetting of TESS objects of interest to stars with a TESS magnitude of $T<13.5$ \citep{Kunimoto2021}. The faint star search also identified TOI-3714.01 as a transiting candidate with a depth of $\sim4.5\%$ and a period of $\sim2.15$ days. This target was excluded from our search due to its faintness ($T=13.18$). We extract the photometry from the TESS full-frame images using \texttt{eleanor}, which calls the TESScut\footnote{\url{https://mast.stsci.edu/tesscut/}} service \citep{Brasseur2019} to obtain a cut-out of \(31\times31\) pixels of the calibrated full-frame images centered on each target.
\texttt{eleanor} removes the background, corrects for systematics, and derives a light curve for various combinations of apertures when processing a target. The final light curve is the one which minimizes the combined differential photometric precision (CDPP) after the data are binned to 1 hour timescales. The CDPP was originally defined for Kepler as the rms of the photometric noise on transit timescales \citep{Jenkins2010}. Minimizing this value ensures that sharp features on relatively short timescales, such as transits, are preserved. The final CDPP was 2902 ppm for TOI-3714 and 2219 ppm for TOI-3629. Figures \ref{fig:3714apertures} and \ref{fig:3629apertures} show the photometric images for TOI-3714 and TOI-3629, respectively. Panel (a) in both figures presents the TESS full frame image cutouts and the apertures used to derive the light curves for each target. In panel (b), a smaller $11\times11$ pixel subgrid of the TESS image and the light curve apertures are overplotted on images from the Zwicky Transient Facility \citep[ZTF;][]{Masci2019}. For each target, the preferred aperture is a $2\times1$ rectangular aperture centered on the host star. To investigate the impact of background stars as a source of dilution, we searched the $11\times11$ TESS pixel grid centered on each target in Gaia EDR3 \citep{GaiaCollaboration2021}. Similar to \cite{Gandolfi2018}, we use the Gaia $\mathrm{G_{RP}}$ bandpass as an approximation to the TESS bandpass. Gaia EDR3 reveals that there are no bright stellar companions in either aperture with $\Delta~\mathrm{G_{RP}}<4$, where $\Delta~\mathrm{G_{RP}}$ is the difference between the $\mathrm{G_{RP}}$ magnitude of a star and the respective value for the TOI host star. \begin{figure*}[!ht] \epsscale{1.15} \plotone{toi3714_pixel.pdf} \caption{\textbf{(a)} The $31\times31$ TESS target pixel cutout centered around TOI-3714 (marked as a star).
Stars identified in Gaia EDR3 with magnitudes $\Delta\mathrm{G_{RP}}<4$ are marked with diamond stars for reference. Stars with $\Delta\mathrm{G_{RP}}<0$ are brighter than the host star. The dashed line denotes the TESS $11\times11$ pixel subgrid that is shown in (b). \textbf{(b)} Overlay of the TESS $11\times11$ pixel subgrid, TOI-3714, and other comparably bright stars on a ZTF $zi$ image.} \label{fig:3714apertures} \end{figure*} \begin{figure*}[!ht] \epsscale{1.15} \plotone{toi3629_pixel.pdf} \caption{Identical to Figure \ref{fig:3714apertures} but for TOI-3629. \textbf{(a)} The $31\times31$ TESS target pixel cutout centered around TOI-3629 (marked as a star). Stars identified in Gaia EDR3 having magnitudes $\Delta\mathrm{G_{RP}}<4$ are marked with diamond stars. Stars with $\Delta\mathrm{G_{RP}}<0$ are brighter than the host star. \textbf{(b)} Overlay of the TESS $11\times11$ pixel subgrid, TOI-3629, and other comparably bright stars on a ZTF $zi$ image.} \label{fig:3629apertures} \end{figure*} The TESS light curves used in this work are the \texttt{CORR\_FLUX} values calculated by \texttt{eleanor}. The corrected flux is the simple aperture flux with signals correlated with position (x and y pixel coordinates), measured background, and time removed. Observations where the background is larger than the stellar flux (\texttt{FLUX\_BKG} $>$ \texttt{CORR\_FLUX}) or with non-zero data quality flags \citep[Table 28 in][]{Tenenbaum2018} are excluded from analysis. Figures \ref{fig:3714phot} and \ref{fig:3629phot} present all photometry analyzed in this work, including the TESS light curves. \begin{figure*}[!ht] \epsscale{1.15} \plotone{lc_toi3714.pdf} \caption{\textbf{(a)} The median normalized TESS light curve for TOI-3714 derived with \texttt{eleanor}. The solid blue line is the best-fitting Gaussian process model used to detrend the light curve. The mid-transit times are indicated by the triangles. \textbf{(b)}$-$\textbf{(e)} are the light curves for TESS, ARCTIC, and RBO.
In (b)-(e), the best-fitting model from the joint fit to the photometry and RVs is plotted as a dashed line, while the shaded regions denote the \(1\sigma\) (darkest), \(2\sigma\), and \(3\sigma\) (lightest) extent of the model posteriors. The modeling of the photometry and RVs is described in detail in Section \ref{sec:modelfit}.} \label{fig:3714phot} \end{figure*} \begin{figure*}[!ht] \epsscale{1.15} \plotone{lc_toi3629.pdf} \caption{Identical to Figure \ref{fig:3714phot}, but for TOI-3629. \textbf{(a)} The median normalized TESS light curve for TOI-3629 derived with \texttt{eleanor} along with the best-fitting Gaussian process model. The triangles indicate the mid-transit times. \textbf{(b)}$-$\textbf{(e)} are the light curves for the TESS, Kuiper, and RBO plotted with model posteriors (shaded regions) from the joint fit to the photometry and RVs.} \label{fig:3629phot} \end{figure*} \subsection{RBO 0.6m Telescope} We used the 0.6m telescope at the Red Buttes Observatory (RBO) in Wyoming \citep{Kasper2016} to observe (i) TOI-3714 on the nights of 2021 August 16 and 2021 November 19 and (ii) TOI-3629 on the nights of 2021 September 26 and 2021 October 4. The 0.6m telescope is an f/8.43 Ritchey-Chr\'etien Cassegrain constructed by DFM Engineering, Inc. and equipped with an Apogee Alta F16M camera. The observations of TOI-3629 on 2021 October 4 were obtained in the Bessell V filter \citep{Bessell1990} while the other observations were obtained in the Bessell I filter. All observations used the $2 \times 2$ on-chip binning mode, which has a gain of 1.39 $\mathrm{e^-/ADU}$, a plate scale of \(0.731 \arcsec/\mathrm{pixel}\), and a readout time of $\sim2.4$ s. Each target was moderately defocused and observed using an exposure time of 240 s. The RBO light curves were derived using \texttt{AstroImageJ} \citep{Collins2017}. Following the methodology in \cite{Stefansson2017}, the estimated scintillation noise was included in the flux uncertainty.
The final reductions used a photometric aperture radius of 10 pixels ($7.3\arcsec$), an inner sky radius of 20 pixels ($14.6\arcsec$), and an outer sky radius of 30 pixels ($21.9\arcsec$). \subsection{APO 3.5m Telescope} We used the 3.5m Astrophysical Research Consortium (ARC) Telescope Imaging Camera \citep[ARCTIC;][]{Huehnerhoff2016} on the ARC 3.5m Telescope at Apache Point Observatory (APO) to obtain a transit of TOI-3714 on the night of 2021 November 21. The observations were performed in the Sloan $i^\prime$ filter using an engineered diffuser \citep{Stefansson2017} with an exposure time of 45 s. The average seeing for the night was $\sim1.0\arcsec$. ARCTIC was operated in the quad and fast readout modes using the $4 \times 4$ on-chip binning mode to achieve a gain of 2 $\mathrm{e^-/ADU}$, a plate scale of $0.456 \mathrm{\arcsec/pixel}$, and a readout time of 2.7 s. Similar to the RBO data, we processed the photometry using \texttt{AstroImageJ} and included the scintillation noise estimate in the flux uncertainty. The final reduction used a photometric aperture radius of 10 pixels ($4.6\arcsec$), an inner sky radius of 20 pixels ($9.1\arcsec$), and an outer sky radius of 30 pixels ($13.7\arcsec$). \subsection{Kuiper 61'' Telescope} We used the 61'' (1.55m) Kuiper Telescope located on Mt. Bigelow, Arizona to observe TOI-3629 on the night of 2021 October 16. The Kuiper Telescope\footnote{\url{http://james.as.arizona.edu/~psmith/61inch/CCD/basicinfo.html}} is equipped with the Mont4k imager, which uses a $4096\times4097$ Fairchild CCD486 detector to provide a field of view of $9.7\arcmin\times9.7\arcmin$. TOI-3629 was observed in the Harris R-band using a 30 s exposure time with an average seeing of $\sim1.7\arcsec$. The pixels were binned in $3\times3$ mode to shorten the readout time. This achieves a plate scale of 0.42\arcsec/pixel.
Similar to the RBO data, we processed the photometry using \texttt{AstroImageJ} and included the scintillation noise estimate in the flux uncertainty. The final reduction used a photometric aperture radius of 8 pixels ($3.6\arcsec$), an inner sky radius of 14 pixels ($6.3\arcsec$), and an outer sky radius of 22 pixels ($9.9\arcsec$). \subsection{ZTF photometry} ZTF data for TOI-3714 and TOI-3629 are publicly available under DR11\footnote{\url{https://www.ztf.caltech.edu/ztf-public-releases.html}}. Both objects were observed through a public program designed to observe TESS northern sectors by ZTF \citep{vanRoestel2019}. ZTF has a plate scale of $1.012\arcsec{}\mathrm{~pixel^{-1}}$ \citep{Yao2019} and the exposures for all observations are 30 s long. We follow the advice of the ZTF Science Data System Explanatory Supplement\footnote{\url{https://web.ipac.caltech.edu/staff/fmasci/ztf/ztf_pipelines_deliverables.pdf}} (ZDS) and reject bad-quality data with (i) non-zero \texttt{catflag} values (see \textsection 13.6 in ZDS), (ii) values of $\chi\ge4$, where $\chi$ is the rms of the residuals to the PSF fit on the source performed by the ZTF pipeline, and (iii) values of $|\mathtt{sharp}|\ge0.5$, where \texttt{sharp} is the difference of the observed and model squared PSF FWHM. TOI-3714 has (i) 512 observations spanning 2018 April 8 through 2022 March 2 with a median cadence of 1 day and a median precision of $\sim1.3\%$ in the $zr$ filter and (ii) 355 observations spanning 2018 March 29 through 2022 March 2 with a median cadence of 2 days and a median precision of $\sim1.7\%$ in the $zg$ filter. TOI-3629 has (i) 695 observations spanning 2018 May 18 through 2022 February 18 with a median cadence of 1 day and a median precision of $\sim1.0\%$ in the $zr$ filter and (ii) 574 observations spanning 2018 May 25 through 2022 February 18 with a median cadence of 1 day and a median precision of $\sim1.1\%$ in the $zg$ filter.
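In practice, the three quality cuts above amount to a single boolean mask over the light-curve catalog columns. The following sketch uses the column names from the text with synthetic, purely illustrative values:

```python
import numpy as np

# Synthetic ZTF-style light-curve columns (illustrative values only):
# catflag -- bitwise quality flags, chi -- rms of the PSF-fit residuals,
# sharp -- difference of the observed and model squared PSF FWHM.
catflag = np.array([0, 0, 32768, 0, 0])
chi = np.array([1.1, 4.2, 0.9, 1.0, 1.3])
sharp = np.array([0.1, 0.0, -0.2, 0.6, -0.1])
mag = np.array([14.6, 14.7, 14.5, 14.6, 14.6])

# Keep only epochs passing all three cuts described in the text.
good = (catflag == 0) & (chi < 4) & (np.abs(sharp) < 0.5)
clean_mag = mag[good]
print(good.tolist())   # [True, False, False, False, True]
print(clean_mag.size)  # 2
```

Each epoch fails if any single cut fails, which is why the mask is the logical AND of the three conditions.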
\subsection{High-contrast imaging} TOI-3629 and TOI-3714 were observed on 2021 October 25 and 2021 December 21 respectively, using the speckle imaging instrument NESSI \citep{Scott2018} on the WIYN 3.5m Telescope at Kitt Peak National Observatory (KPNO). Due to the faintness of these targets ($r^\prime>14$), the images were acquired in Sloan \(r^\prime\) (TOI-3629 only) and \(z^\prime\) instead of the narrower filters that NESSI traditionally uses. TOI-3714 was observed only in the Sloan \(z^\prime\) filter because hardware issues during the observing run allowed for operations only with the redder filter. The images in each filter were reconstructed following the procedures outlined in \cite{Howell2011}. \begin{figure*}[!ht] \epsscale{1.15} \plotone{3629imaging.pdf} \caption{The \(5\sigma\) contrast curves for TOI-3629 obtained from AO imaging using Robo-AO in the Sloan \(i^\prime\) filter and speckle imaging with NESSI in the Sloan \(r^\prime\) and \(z^\prime\) filters. The data reveal no bright companions at separations of $0.2\arcsec{}-1.75\arcsec{}$ from TOI-3629. The inset is the $4.7\arcsec{}\times4.7\arcsec{}$ NESSI speckle image centered on TOI-3629 in the Sloan \(z^\prime\) filter.} \label{fig:3629imaging} \end{figure*} \begin{figure*}[!ht] \epsscale{1.15} \plotone{3714imaging.pdf} \caption{Similar to Figure \ref{fig:3629imaging} but for TOI-3714. The \(5\sigma\) contrast curve for TOI-3714 obtained from speckle imaging using NESSI in the Sloan \(z^\prime\) filter. The data reveal no bright companions at separations of $0.2\arcsec{}-1.75\arcsec{}$ from TOI-3714. The inset is the $4.7\arcsec{}\times4.7\arcsec{}$ NESSI speckle image centered on TOI-3714 in the Sloan \(z^\prime\) filter.} \label{fig:3714imaging} \end{figure*} TOI-3629 was also observed as part of the Robo-AO Kepler M dwarf multiplicity survey \citep{Lamman2020} on 2016 October 19. 
The observations were performed using the Robo-AO laser adaptive optics system \citep{Baranec2013,Baranec2014} on the 2.1m telescope at KPNO \citep{Jensen-Clem2018} using a 1.85m circular aperture mask on the primary mirror. These observations were taken in the Sloan \(i^\prime\) filter. \cite{Lamman2020} have made the Robo-AO contrast curve for TOI-3629 publicly available on ExoFOP-TESS\footnote{\url{https://exofop.ipac.caltech.edu/tess/view_tag.php?tag=13940}}. The Robo-AO observation reveals no bright ($\Delta \mathrm{mag} < 4$) stellar companions at separations of $0.75-1.75\arcsec$ from TOI-3629. Figures \ref{fig:3629imaging} and \ref{fig:3714imaging} show the $5\sigma$ contrast curves for TOI-3629 and TOI-3714, respectively, along with an inset of the NESSI speckle image in the $z^\prime$ filter. Together, the NESSI and Robo-AO data show there are no bright ($\Delta \mathrm{mag} < 4$) companions and no significant source of dilution at separations of $0.2-1.2\arcsec$ from either target. \subsection{HPF spectrograph} HPF is a high-resolution ($R\sim55,000$), fiber-fed \citep{Kanodia2018a}, temperature controlled \citep{Stefansson2016}, near-infrared (\(\lambda\sim8080-12780\)\ \AA) spectrograph located on the 10m HET at McDonald Observatory in Texas \citep{Mahadevan2012,Mahadevan2014}. Observations are executed in a queue by the HET resident astronomers \citep{Shetrone2007}. Between 2021 January 18 and 2022 January 14, we obtained 12 visits of TOI-3714 and 23 visits of TOI-3629. The median signal-to-noise ratios (S/N) per 1D extracted pixel at 1070nm are 44 and 54, respectively, for these targets. The \texttt{HxRGproc} tool\footnote{\url{https://github.com/indiajoe/HxRGproc}} \citep{Ninan2018} was used to process the raw HPF data and perform bias noise removal, nonlinearity correction, cosmic-ray correction, and slope/flux and variance image calculation. 
The one-dimensional spectra were extracted following the procedures in \cite{Ninan2018}, \cite{Kaplan2019}, and \cite{Metcalf2019}. The wavelength solution and drift correction were extrapolated using laser frequency comb (LFC) frames obtained from routine calibrations. This extrapolation enables wavelength calibration on the order of $<30~\mathrm{cm~s^{-1}}$ \citep[see Appendix A in][]{Stefansson2020}, a value which is much smaller than the RV uncertainty for our targets (\(>10~\mathrm{m~s^{-1}}\)). The RVs were calculated using a modified version of the \texttt{SpEctrum Radial Velocity AnaLyser} code \citep[SERVAL;][]{Zechmeister2018} optimized for HPF RV extractions (see \cite{Metcalf2019} and \cite{Stefansson2020} for details). \texttt{SERVAL} employs the template-matching technique to derive RVs \citep[e.g.,][]{Anglada-Escude2012} and creates a master template from the observations to determine the Doppler shift by minimizing the \(\chi^2\) statistic. The master template is generated from all observed spectra after masking sky-emission lines and telluric regions identified using a synthetic telluric-line mask generated from \texttt{telfit} \citep{Gullikson2014}. The barycentric correction is calculated using \texttt{barycorrpy}, a Pythonic implementation \citep{Kanodia2018} of the algorithms from \cite{Wright2014}. \begin{deluxetable}{lrcccc} {\tabletypesize{\scriptsize } \tablecaption{RVs of TOI-3714 and TOI-3629. \label{tab:rvs}} \tablehead{ \colhead{$\mathrm{BJD_{TDB}}$} & \colhead{RV} & \colhead{$\sigma$} & \colhead{S/N$^a$} & \colhead{Exp. 
Time} & \colhead{Instrument}\\ & \colhead{$(\mathrm{m~s^{-1}})$} & \colhead{$(\mathrm{m~s^{-1}})$} & & \colhead{(s)} & } \startdata \multicolumn{4}{l}{\hspace{-0.2cm} TOI-3714:} \\ ~~2459450.941268 & $-196$ & 23 & 44 & 1890 & HPF \\ ~~2459451.948741 & 163 & 25 & 42 & 1890 & HPF \\ ~~2459452.941751 & $-183$ & 22 & 47 & 1890 & HPF \\ ~~2459458.924675 & 15 & 25 & 41 & 1890 & HPF \\ ~~2459511.784103 & 60 & 23 & 44 & 1890 & HPF \\ ~~2459512.783671 & 29 & 23 & 46 & 1890 & HPF \\ ~~2459516.779359 & 209 & 26 & 40 & 1890 & HPF \\ ~~2459516.995256 & 64 & 20 & 50 & 1890 & HPF \\ ~~2459518.748189 & 119 & 30 & 35 & 1890 & HPF \\ ~~2459518.992056 & 135 & 23 & 44 & 1890 & HPF \\ ~~2459519.985625 & $-189$ & 22 & 46 & 1890 & HPF \\ ~~2459571.844384 & $-103$ & 29 & 34 & 1890 & HPF \\ ~~2459479.884140 & 71 & 12 & 15 & 1800 & NEID \\ ~~2459503.998300 & 27 & 15 & 11 & 1200 & NEID \\ ~~2459520.927772 & 77 & 11 & 15 & 1800 & NEID \\ ~~2459531.844720 & 58 & 10 & 16 & 1800 & NEID \\ ~~2459533.801149 & 89 & 9 & 18 & 1800 & NEID \\ ~~2459560.766674 & $-244$ & 14 & 12 & 1800 & NEID \\ ~~2459586.625543 & $-239$ & 11 & 15 & 1800 & NEID \\ ~~2459587.851477 & 91 & 20 & 9 & 1800 & NEID \\ \hline \multicolumn{4}{l}{\hspace{-0.2cm} TOI-3629:} \\ ~~2459232.579925 & 26 & 16 & 52 & 1890 & HPF \\ ~~2459233.576944 & $-24$ & 18 & 48 & 1890 & HPF \\ ~~2459448.764453 & 28 & 13 & 66 & 1890 & HPF \\ ~~2459451.979492 & 1 & 18 & 49 & 1890 & HPF \\ ~~2459452.761162 & 5 & 15 & 59 & 1890 & HPF \\ ~~2459453.979751 & $-72$ & 16 & 58 & 1890 & HPF \\ ~~2459455.739966 & 5 & 14 & 62 & 1890 & HPF \\ ~~2459457.974338 & $-42$ & 13 & 64 & 1890 & HPF \\ ~~2459460.962548 & $-16$ & 15 & 57 & 1890 & HPF \\ ~~2459461.955253 & $-91$ & 18 & 48 & 1890 & HPF \\ ~~2459470.709176 & $-47$ & 17 & 51 & 1890 & HPF \\ ~~2459471.707553 & 9 & 14 & 60 & 1890 & HPF \\ ~~2459475.919121 & 23 & 20 & 47 & 1890 & HPF \\ ~~2459477.918199 & $-72$ & 18 & 50 & 1890 & HPF \\ ~~2459480.910306 & $-10$ & 24 & 51 & 945 & HPF \\ ~~2459485.896427 & $-57$ & 22 
& 41 & 1890 & HPF \\ ~~2459499.627624 & 64 & 17 & 54 & 1890 & HPF \\ ~~2459507.814595 & 24 & 23 & 40 & 1890 & HPF \\ ~~2459516.581430 & $-20$ & 15 & 59 & 1890 & HPF \\ ~~2459543.736198 & 11 & 15 & 60 & 1890 & HPF \\ ~~2459588.597139 & $-57$ & 15 & 56 & 1890 & HPF \\ ~~2459592.597117 & $-48$ & 15 & 60 & 1890 & HPF \\ ~~2459593.588844 & 44 & 16 & 54 & 1890 & HPF \\ ~~2459478.965087 & $-20$ & 6 & 22 & 1800 & NEID \\ ~~2459479.794197 & 28 & 6 & 22 & 1800 & NEID \\ ~~2459528.888224 & $-34$ & 14 & 11 & 1800 & NEID \\ ~~2459532.843589 & $-38$ & 8 & 19 & 1800 & NEID \\ ~~2459546.840787 & 62 & 15 & 11 & 1800 & NEID \\ \enddata \tablenotetext{a}{The HPF and NEID S/N are the median values per 1D extracted pixel at 1070nm and 850nm, respectively.} } \end{deluxetable} \subsection{NEID spectrograph} NEID is an environmentally stabilized \citep{Stefansson2016,Robertson2019}, high-resolution ($R\sim110,000$) spectrograph installed on the WIYN 3.5m telescope at KPNO in Arizona \citep[][]{Schwab2016}. NEID features extended red wavelength coverage (\(\lambda\sim3800-9300\)\ \AA) and a fiber-feed system similar to HPF \citep{Kanodia2018a}. Between 2021 September 21 and 2022 January 8, we obtained 8 visits of TOI-3714 and 5 visits of TOI-3629. Observations were obtained in queue mode and NEID operated in high-resolution mode. The median S/N per 1D extracted pixel was 15 and 19, respectively, at 850nm. The NEID data were reduced using the NEID Data Reduction Pipeline\footnote{\url{https://neid.ipac.caltech.edu/docs/NEID-DRP/}} (DRP), and the Level-2 1D extracted spectra were retrieved from the NEID Archive\footnote{\url{https://neid.ipac.caltech.edu/}}. Similar to HPF, to maximize the RV precision from the M dwarf spectra, we measured the RVs using a modified version of the \texttt{SERVAL} code \cite[see][]{Stefansson2021}. 
We extracted RVs with \texttt{SERVAL} using different segments of an order (inner 3000, 5000, or 7000 pixels) and different wavelength ranges ($4950-8960$ \AA{} or $5440-8960$ \AA{}). The various combinations of pixel and wavelength ranges produced RVs that, when jointly modeled with photometry and HPF RVs, resulted in identical system parameters (within their $1\sigma$ uncertainties). The NEID RVs presented in this work were calculated using the wavelength range from $5440-8920$ \AA{} (order indices $61-104$) and the innermost 3000 pixels of each order. This effectively uses the central blaze region of each order and limits the use of the lower S/N regions near the edge of each order. Table \ref{tab:rvs} reports the HPF and NEID RVs, the \(1\sigma\) uncertainties, the S/N per pixel, and exposure times for TOI-3714 and TOI-3629. Figures \ref{fig:3714rv} and \ref{fig:3629rv} display the RVs for TOI-3714 and TOI-3629, respectively. \begin{figure*}[!ht] \epsscale{1.15} \plotone{rv_toi3714.pdf} \caption{\textbf{(a)} shows the RVs for TOI-3714, after subtracting the instrumental offsets, derived with modified versions of \texttt{SERVAL}. \textbf{(b)} displays the phase-folded RVs plotted with model posteriors. For each panel, the dashed line is the best-fitting Keplerian model. The shaded regions denote the \(1\sigma\) (darkest), \(2\sigma\), and \(3\sigma\) (lightest) extent of the model posteriors. The modeling is described in Section \ref{sec:modelfit}.} \label{fig:3714rv} \end{figure*} \begin{figure*}[!ht] \epsscale{1.15} \plotone{rv_toi3629.pdf} \caption{Identical to Figure \ref{fig:3714rv}, but for TOI-3629. \textbf{(a)} shows RVs for TOI-3629, after subtracting the instrumental offsets.
\textbf{(b)} displays the phase-folded RVs plotted with model posteriors.} \label{fig:3629rv} \end{figure*} \section{Stellar Parameters}\label{sec:stellarpar} \subsection{Spectroscopic parameters}\label{sec:specmatch} The stellar effective temperature ($T_e$), surface gravity ($\log g_\star$), and metallicity ([Fe/H]) were calculated using the \texttt{HPF-SpecMatch}\footnote{\url{https://gummiks.github.io/hpfspecmatch/}} package \citep[][]{Stefansson2020}, which derives stellar parameters using the empirical template matching methodology discussed in \cite{Yee2017}. It identifies the best-matching spectra from a library of well-characterized stars using \(\chi^{2}\) minimization, creates a composite spectrum using a weighted, linear combination of the five best-matching library spectra, and derives the stellar properties using these weights. When searching for the best-matching library spectra, \texttt{HPF-SpecMatch} broadens the stellar templates using a linear limb darkening law. The reported uncertainty is the standard deviation of the residuals from a leave-one-out cross-validation procedure applied to the entire spectral library in the chosen spectral order. The HPF spectral library contains 166 stars and spans the following parameter space: $2700~\mathrm{K} < T_{e} < 6000~\mathrm{K}$, $4.3<\log g_\star < 5.3$, and $-0.5 < \mathrm{[Fe/H]} < 0.5$. The library includes 87 M dwarfs ($T_{e}\le4000$ K) of which 37 are early M dwarfs spanning $3500\mathrm{K} \le T_{e} \le 4000~\mathrm{K}$, $4.6<\log g_\star < 4.9$, and $-0.5 < \mathrm{[Fe/H]} < 0.5$. The spectral matching was performed on HPF order index 5 ($8534-8645$ \AA) for both targets because this order has little to no telluric contamination. The resolution limit of HPF places a constraint of $v \sin i < 2 \mathrm{~km~s^{-1}}$ for both TOI-3714 and TOI-3629. TOI-3714 is determined to have $T_{e}=3660\pm90$ K, $\log g_\star=4.75\pm0.05$, and $\mathrm{[Fe/H]=0.1\pm0.1}$. 
TOI-3629 is determined to have $T_{e}=3870\pm90$ K, $\log g_\star=4.67\pm0.05$, and $\mathrm{[Fe/H]=0.4\pm0.1}$. Table \ref{tab:stellarparam} presents the derived spectroscopic parameters with their uncertainties. \begin{deluxetable*}{lcccc} {\tabletypesize{\tiny} \rotate \tablecaption{Summary of Stellar Parameters. \label{tab:stellarparam}} \tablehead{\colhead{~~~Parameter}& \colhead{Description}& \colhead{TOI-3714}& \colhead{TOI-3629}& \colhead{Reference}} \startdata \multicolumn{4}{l}{\hspace{-0.2cm} Main Identifiers:} \\ ~~~TIC & \(\cdots\) & 155867025 & 455784423 & TIC \\ ~~~Gaia EDR3 & \(\cdots\) & 178924390478792320 & 2881820324294985856 & Gaia EDR3 \\ \hline \multicolumn{4}{l}{\hspace{-0.2cm} Coordinates, Proper Motion, Distance, Maximum Extinction, and Spectral Type:} \\ ~~~$\alpha_{\mathrm{J2016}}$ & Right Ascension (RA) & 04:38:12.56 & 23:59:10.42 & Gaia EDR3 \\ ~~~$\delta_{\mathrm{J2016}}$ & Declination (Dec) & 39:27:28.77 & 39:18:51.32 & Gaia EDR3 \\ ~~~$\mu_{\alpha}$ & Proper motion (RA, mas yr$^{-1}$) & $19.83 \pm 0.03$ & $185.71 \pm 0.01$ & Gaia EDR3 \\ ~~~$\mu_{\delta}$ & Proper motion (Dec, mas yr$^{-1}$) & $-70.74 \pm 0.02$ & $1.01 \pm 0.01$ & Gaia EDR3 \\ ~~~$l$ & Galactic longitude & 163.30437 & $112.02292$ & Gaia EDR3 \\ ~~~$b$ & Galactic latitude & $-5.02268$ & $-22.44783$ & Gaia EDR3 \\ ~~~$d$ & Geometric distance (pc) & $112.5_{-0.4}^{+0.2}$ & $129.7 \pm 0.3$ & Bailer-Jones\\ ~~~\(A_{V,max}\) & Maximum visual extinction & $0.02$ & $0.01$ & Green\\ ~~~Spectral Type & \(\cdots\) & M$2\pm0.5$ & M$1\pm0.5$ & LAMOST \\ \hline \multicolumn{4}{l}{\hspace{-0.2cm} Broadband Photometric Magnitudes:} \\ ~~~$B$ & Johnson $B$ mag & $16.8 \pm 0.2$ & $16.1 \pm 0.1$ & APASS\\ ~~~$V$ & Johnson $V$ mag & $15.24 \pm 0.09$ & $14.63 \pm 0.05$ & APASS\\ ~~~$g'$ & Sloan $g'$ mag & $15.9 \pm 0.1$ & $15.33 \pm 0.04$ & APASS\\ ~~~$r'$ & Sloan $r'$ mag & $14.73 \pm 0.09$ & $14.06 \pm 0.05$ & APASS\\ ~~~$i'$ & Sloan $i'$ mag & $13.66 \pm 0.09$ & $13.12 \pm 0.07$ & 
APASS\\ ~~~$J$ & $J$ mag & $11.74 \pm 0.02$ & $11.42 \pm 0.03$ & 2MASS\\ ~~~$H$ & $H$ mag & $11.06 \pm 0.02$ & $10.73 \pm 0.03$ & 2MASS\\ ~~~$K_s$ & $K_s$ mag & $10.85 \pm 0.02$ & $10.55 \pm 0.02$ & 2MASS\\ ~~~W1 & WISE1 mag & $10.72 \pm 0.02$ & $10.48 \pm 0.02$ & WISE\\ ~~~W2 & WISE2 mag & $10.68 \pm 0.02$ & $10.52 \pm 0.02$ & WISE\\ ~~~W3 & WISE3 mag & $10.5 \pm 0.1$ & $10.38 \pm 0.07$ & WISE\\ \hline \multicolumn{4}{l}{\hspace{-0.2cm} Spectroscopic Parameters$^a$:}\\ ~~~$T_{e}$ & Effective temperature (K) & $3660 \pm 90$ & $3870 \pm 90$& This work\\ ~~~$\log g_\star$ & Surface gravity (cgs) & $4.75\pm0.05$ & $4.67 \pm 0.05$ & This work\\ ~~~$\mathrm{[Fe/H]}$ & Metallicity (dex) & $0.1\pm0.1$ & $0.4\pm0.1$ & This work\\ ~~~$v\sin i_{\star}$ & Rotational broadening (km s$^{-1}$) & $<2$ & $<2$ & This work\\ \hline \multicolumn{4}{l}{\hspace{-0.2cm} Model-dependent Stellar SED and Isochrone Fit Parameters$^b$:}\\ ~~~$M_\star$ & Mass ($M_{\odot}$) & $0.53 \pm 0.02$ & $0.63 \pm 0.02$ & This work\\ ~~~$R_\star$ & Radius ($R_{\odot}$) & $0.51 \pm 0.01$ & $0.60_{-0.01}^{+0.02}$ & This work\\ ~~~$\rho_\star$ & Density ($\mathrm{g~cm^{-3}}$) & $5.8_{-0.3}^{+0.4}$ & $4.0 \pm 0.2$ & This work\\ ~~~$A_v$ & Visual extinction (mag) & $0.011 \pm 0.007$ & $0.005 \pm 0.003$ & This work\\ ~~~Age$^c$ & Age (Gyrs) & $0.7-5.1$ & $7\pm2$ & This work\\ \hline \multicolumn{4}{l}{\hspace{-0.2cm} Other Stellar Parameters:}\\ ~~~$RV$ & Systemic RV (km s$^{-1}$) & $36.4 \pm 0.2$ & $-24.83 \pm 0.06$ & This work\\ ~~~$P_{rot}$ & Stellar rotation period (days) & $23.3 \pm 0.3$ & $\cdots$ & This work\\ ~~~$U, V, W$ & Barycentric Galactic velocities (km s$^{-1}$) & $-43.5 \pm 0.2, -23.9 \pm 0.1, -20.43 \pm 0.05$ & $-91.9 \pm 0.2, -72.0 \pm 0.1, -13.06 \pm 0.06$ & This work\\ ~~~$(U, V, W)_{\mathrm{LSR}}$ & Galactic velocities w.r.t. 
LSR$^d$ (km s$^{-1}$) & $-32.4 \pm 0.8, -11.7 \pm 0.5, -13.2 \pm 0.4$ & $-80.8 \pm 0.8, -59.7 \pm 0.5, -5.8 \pm 0.4$ & This work\\ \enddata \tablerefs{TIC \citep{Stassun2019}, Gaia EDR3 \citep{GaiaCollaboration2021}, Bailer-Jones \citep{Bailer-Jones2021}, Green \citep{Green2019}, LAMOST \citep{Zhong2019}, APASS \citep{Henden2018}, 2MASS \citep{Cutri2003}, WISE \citep{Wright2010}} \tablenotetext{a}{Derived with the \texttt{HPF-SpecMatch} package.} \tablenotetext{b}{Derived with the \texttt{EXOFASTv2} package using MIST isochrones.} \tablenotetext{c}{We report the age estimate of TOI-3714 using the rotation period and classification from \cite{Newton2016}. The age for TOI-3629 is from Galactic models based on its tangential velocity.} \tablenotetext{d}{Calculated using the solar velocities from \cite{Schoenrich2010}.} } \end{deluxetable*} \subsection{Spectral classification} The Large Sky Area Multi-Object Fibre Spectroscopic Telescope (LAMOST) collaboration observed TOI-3714 on 2012 January 11 and TOI-3629 on 2018 November 3 as part of a survey of the Galactic anti-center \citep{Yuan2015,Xiang2017}. LAMOST is a 4m telescope equipped with 4000 fibers distributed over a 5\degr\ FOV that is capable of acquiring spectra in the optical band (3700-9000\AA) at a resolution \(R\approx1800\) with a limiting magnitude of SDSS \(r^\prime=19\) mag \citep{Cui2012}. The data used in this work are from the public DR7v2.0\footnote{\url{http://dr7.lamost.org/}} release. The LAMOST stellar classification pipeline \citep[][]{Zhong2015} uses stellar templates to identify molecular absorption features (e.g., CaH, TiO) that are typical for M-type stars and reports the subclass of an M dwarf with an accuracy of $\pm0.5$ subtypes. 
To be classified as M dwarfs, targets must have (i) a mean S/N$>5$, (ii) a best-matching template that is an M type, and (iii) the spectral indices of the absorption features must be located in the M-type stellar regime identified in \cite{Zhong2019} ($0<\mathrm{TiO5}< 1.2$ and $0.6<\mathrm{CaH2+CaH3}< 2.4$). LAMOST classifies TOI-3714 as an M$2\pm0.5$ dwarf and TOI-3629 as an M$1\pm0.5$ dwarf, which agrees with the derived parameters from \texttt{HPF-SpecMatch} in Section \ref{sec:specmatch}. \subsection{Spectral energy distribution fitting} To derive model-dependent stellar parameters, we modeled the spectral energy distribution (SED) for each target using the {\tt EXOFASTv2} analysis package \citep{Eastman2019}. {\tt EXOFASTv2} calculates the bolometric corrections for the SED fit by linearly interpolating the precomputed bolometric corrections\footnote{\url{http://waps.cfa.harvard.edu/MIST/model_grids.html\#bolometric}} in \(\log g_\star\), \(\mathrm{T_{e}}\), [Fe/H], and \(A_V\) from the MIST model grids \citep{Dotter2016,Choi2016}. The SED fits use Gaussian priors on the (i) 2MASS \(J,~H,~K\) magnitudes, Sloan \(g^\prime,~r^\prime,~i^\prime\) magnitudes and Johnson \(B,~V\) magnitudes from \cite{Henden2018}, and Wide-field Infrared Survey Explorer magnitudes \citep{Wright2010}; (ii) $\log g_\star$, $T_{e}$, and [Fe/H] derived from \texttt{HPF-SpecMatch}, and (iii) the geometric distance calculated from \cite{Bailer-Jones2021} for each respective star. We apply an upper limit to the visual extinction based on estimates of Galactic dust \citep{Green2019} calculated at the distance determined by \cite{Bailer-Jones2021}. The \(R_{v}=3.1\) reddening law from \cite{Fitzpatrick1999} is used to convert the extinction from \cite{Green2019} to a visual magnitude extinction. Table \ref{tab:stellarparam} contains the stellar priors and derived stellar parameters with their uncertainties. 
The model-dependent mass and radius are (i) \(0.53\pm0.02~\mathrm{M_{\odot}}\) and \(0.51\pm0.01~\mathrm{R_{\odot}}\) for TOI-3714 and (ii) \(0.63\pm0.02~\mathrm{M_{\odot}}\) and \(0.60_{-0.01}^{+0.02}~\mathrm{R_{\odot}}\) for TOI-3629. The masses and radii are identical within their respective $1\sigma$ uncertainties to the parameters from the TIC catalog for TOI-3714 (\(0.51\pm0.02~\mathrm{M_{\odot}}\) and \(0.51\pm0.02~\mathrm{R_{\odot}}\)) and TOI-3629 (\(0.60\pm0.02~\mathrm{M_{\odot}}\) and \(0.61\pm0.02~\mathrm{R_{\odot}}\)). \subsection{Stellar rotation period} \label{sec:prot} If we assume TOI-3714 and TOI-3629 are well aligned with the orbits of their transiting planets ($\sin i_\star\sim1$), the constraint of $v \sin i_\star<2\mathrm{~km~s^{-1}}$ from HPF spectra requires each star to have a rotation period of $>10$ days. We do not search for photometric modulation in the \texttt{corr\_flux} because long-period ($>10$ day) astrophysical signals, such as starspot-induced photometric variability, are attenuated when removing systematics with \texttt{eleanor} \citep{Feinstein2019}, similar to how long-period rotation signals are damped and distorted in Kepler PDCSAP light curves \citep{Gilliland2015,VanCleve2016}. We instead searched for photometric modulation in TESS data using the \texttt{TESS-SIP} package \citep{Hedges2020}, which is designed to simultaneously create a Lomb-Scargle periodogram and detrend systematics. \texttt{TESS-SIP} uses a linear model with two components: (i) regressors (by default 2 principal components and a mean offset) to remove instrument systematics and (ii) a sinusoidal component to fit a power spectrum. Only one sector of TESS data exists for each target and we limit this search to a rotation period range of $1-30$ days. For this search, the transits were excised using the duration and ephemeris from the QLP. \texttt{TESS-SIP} recovers no significant period for either TOI-3714 or TOI-3629 between $1-30$ days.
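Periodogram searches of this kind reduce to fitting a sine plus offset at each trial period and ranking the fractional $\chi^2$ improvement over a constant model. A minimal \texttt{numpy} sketch on synthetic, irregularly sampled photometry (all values here are simulated, not the survey data analyzed in this work):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic sparse photometry standing in for a multi-year survey light
# curve: 400 epochs over ~1400 days, 1% modulation, 0.5% white noise.
t = np.sort(rng.uniform(0.0, 1400.0, 400))
p_true = 23.3
y = 1.0 + 0.01 * np.sin(2.0 * np.pi * t / p_true) + rng.normal(0.0, 0.005, t.size)
dy = np.full(t.size, 0.005)

def gls_power(t, y, dy, periods):
    """Fractional chi^2 improvement of a weighted sine+offset fit over a
    constant model, evaluated on a grid of trial periods."""
    w = 1.0 / dy**2
    chi2_const = np.sum(w * (y - np.average(y, weights=w))**2)
    sw = np.sqrt(w)
    power = np.empty(periods.size)
    for i, p in enumerate(periods):
        phase = 2.0 * np.pi * t / p
        A = np.column_stack([np.sin(phase), np.cos(phase), np.ones_like(t)])
        coeffs, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        power[i] = 1.0 - np.sum(w * (y - A @ coeffs)**2) / chi2_const
    return power

periods = np.linspace(1.0, 30.0, 5000)
power = gls_power(t, y, dy, periods)
best = periods[np.argmax(power)]
```

The injected 23.3-day signal is recovered as the highest peak; in practice the significance of such a peak is quantified with a false-alarm probability as in \cite{Zechmeister2009}.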
We also used data from ZTF DR11 in the $zg$ and $zr$ filters to search for any long-period signals caused by activity-induced photometric modulations in the target stars. This search used the generalized Lomb-Scargle (GLS) periodogram \citep{Zechmeister2009} because it has been shown to successfully recover rotation periods in photometry \citep[see][]{VanderPlas2018,CantoMartins2020,Reinhold2020}. The GLS periodogram is based on Fourier decomposition and provides peaks in frequency space where the highest peak in the periodogram is associated with the period of the best fit sine wave that minimizes the $\chi^2$ statistic. We use the \texttt{GLS}\footnote{\url{https://github.com/mzechmeister/GLS}} package to perform this analysis and only consider significant peaks where the false alarm probability (FAP), as calculated following \cite{Zechmeister2009}, is below a threshold of $0.1\%$. Data within transits were excised using the duration and ephemeris from the QLP. A significant peak (FAP$<0.1\%$) of $\sim23.6$ days was found in both the $zr$ and $zg$ photometry of TOI-3714. A significant peak (FAP$<0.1\%$) of $\sim29.5$ days was found in the $zr$ photometry while no significant peaks were seen in the $zg$ photometry for TOI-3629. To derive the rotation period and an estimate of its uncertainty, we modeled the ZTF photometry using the \texttt{juliet} analysis package \citep{Espinoza2019}, which performs the parameter estimation using the dynamic nested-sampling algorithm \texttt{dynesty} \citep{Speagle2020}. 
The photometric model is a Gaussian process noise model using the approximate quasi-periodic covariance function presented in \cite{Foreman-Mackey2017} of the form: \begin{equation} \resizebox{.88\hsize}{!}{$k(\tau) = \frac{B}{2 + C} e^{-\tau/L} \left[ \cos \left( \frac{2 \pi \tau}{P_\mathrm{GP}} \right) + (1 + C) \right],$} \label{eq:kernelperiodic} \end{equation} where $\tau$ is the time lag between two observations while $B$, $C$, $L$, and $P_{\mathrm{GP}}$ are the hyperparameters of the covariance function. $B$ and $C$ represent the weight of the exponential term with a decay constant of $L$ (in days). $P_{\mathrm{GP}}$ determines the periodicity of the quasi-periodic oscillations, which is interpreted as the stellar rotation period. This kernel is able to reproduce the behavior of a more traditional quasi-periodic covariance function and has allowed for computationally efficient inference of stellar rotation periods even for large datasets that are not uniformly sampled \citep[e.g.,][]{Angus2018}. The photometric model includes a simple white-noise model $\sigma_{\mathrm{phot}}$ in the form of a jitter term that is added in quadrature to the error bars of each filter. \begin{figure*}[!htb] \epsscale{1.15} \plotone{3714phasedrot.pdf} \caption{\textbf{(a)} displays the ZTF photometry for TOI-3714 in each filter along with the best-fitting Gaussian process model for reference. \textbf{(b)} presents the ground-based ZTF photometry from panel (a) phased to the derived rotation period. The large black points represent 2-day bins of the phased photometry. \textbf{(c)} presents the posterior distribution of the rotation period from the Gaussian process model. The derived rotation period is \(23.3 \pm 0.3\) days.} \label{fig:3714phasedrot} \end{figure*} The fit for each star uses a uniform prior on the Gaussian process period of $1-1500$ days, where the upper limit coincides with the baseline of existing ZTF data.
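For reference, the covariance above can be transcribed directly (in practice \texttt{juliet} evaluates it through the \texttt{celerite} machinery of \citealt{Foreman-Mackey2017}); a useful sanity check is that the zero-lag variance reduces to $k(0)=B$:

```python
import numpy as np

def qp_kernel(tau, B, C, L, P_gp):
    """Direct transcription of the quasi-periodic covariance in the
    equation above; tau is the absolute time lag in days."""
    tau = np.abs(np.asarray(tau, dtype=float))
    return (B / (2.0 + C)) * np.exp(-tau / L) * (np.cos(2.0 * np.pi * tau / P_gp) + 1.0 + C)

# At zero lag the prefactor and bracket cancel, leaving the amplitude B
# (hyperparameter values here are illustrative, not fitted values).
k0 = qp_kernel(0.0, B=1.8, C=1.0, L=3.0, P_gp=23.3)  # equals B up to rounding
```

For lags much longer than the decay constant $L$, the exponential envelope drives the covariance to zero, as expected for quasi-periodic variability that loses phase coherence.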
For TOI-3714, the $zr$ and $zg$ data are jointly modeled and share the value of $P_{\mathrm{GP}}$, while the nuisance parameters ($B$, $C$, $L$, and $\sigma_\mathrm{phot}$) are different for each filter. For TOI-3629, we only model the $zr$ photometry. Figure \ref{fig:3714phasedrot} displays the ZTF photometry folded to the median value of $P_{\mathrm{GP}}$ and the posterior distribution for $P_{\mathrm{GP}}$, which we interpret as the rotation period. The measured rotation period is \(23.3 \pm 0.3\) days, which suggests that TOI-3714 most likely has an age between $0.7-5.1$ Gyr after adopting the classification scheme of \cite{Newton2016}. This age range is consistent with the rotation period and age relationship from \cite{Engle2018} and the values from our model-dependent SED fit. We are unable to place any additional constraint on the rotation period of TOI-3629 as the fit does not recover a significant period ($P_{\mathrm{GP}}=750_{-460}^{+450}$ days). \subsection{Galactic kinematics} The \textit{UVW} velocities in the barycentric frame were derived with \texttt{galpy} \citep{Bovy2015} using the Gaia EDR3 proper motions and the systemic velocity derived from HPF. The values in Table \ref{tab:stellarparam} are in a right-handed coordinate system \citep{Johnson1987} where \textit{UVW} are positive in the directions of the Galactic center, Galactic rotation, and the north Galactic pole, respectively. The \(UVW\) velocities in Table \ref{tab:stellarparam} are also provided with respect to the local standard of rest using the solar velocities and uncertainties from \cite{Schoenrich2010}. The BANYAN \(\Sigma\) algorithm \citep{Gagne2018}, which uses sky positions, proper motions, parallax, and RVs to constrain cluster membership probabilities, classifies both TOI-3714 and TOI-3629 as field stars showing no associations with known young clusters. 
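The barycentric \textit{UVW} transformation is handled internally by \texttt{galpy}; for illustration, a direct \texttt{numpy} implementation of the \cite{Johnson1987} convention is sketched below. The rotation matrix is the standard equatorial-to-Galactic matrix, and the example inputs in the check are illustrative rather than the measured values for either target:

```python
import numpy as np

K = 4.74047  # km/s equivalent of 1 arcsec/yr at 1 pc

# Standard equatorial (J2000) -> Galactic rotation matrix; right-handed,
# with U positive toward the Galactic center.
T = np.array([[-0.0548755604, -0.8734370902, -0.4838350155],
              [+0.4941094279, -0.4448296300, +0.7469822445],
              [-0.8676661490, -0.1980763734, +0.4559837762]])

def uvw(ra_deg, dec_deg, pmra_mas, pmdec_mas, parallax_mas, rv_kms):
    """Barycentric UVW velocities (km/s). pmra is assumed to include the
    cos(dec) factor, as in the Gaia catalogs."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    # Columns: line of sight, direction of increasing RA, increasing Dec.
    A = np.array([[np.cos(ra) * np.cos(dec), -np.sin(ra), -np.cos(ra) * np.sin(dec)],
                  [np.sin(ra) * np.cos(dec),  np.cos(ra), -np.sin(ra) * np.sin(dec)],
                  [np.sin(dec),               0.0,         np.cos(dec)]])
    d_pc = 1000.0 / parallax_mas
    v = np.array([rv_kms,
                  K * pmra_mas / 1000.0 * d_pc,
                  K * pmdec_mas / 1000.0 * d_pc])
    return T @ A @ v
```

A quick sanity check: a star at the north Galactic pole with no proper motion and a positive systemic RV should yield $(U, V, W) \approx (0, 0, \mathrm{RV})$.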
Using kinematic selection criteria from \cite{Bensby2003}, TOI-3714 is classified as a member of the thin disk ($\mathrm{P_{Thick}}/\mathrm{P_{Thin}}=0.02$). The classification for TOI-3629 is ambiguous as the relative probability of thick disk to thin disk is $\mathrm{P_{Thick}}/\mathrm{P_{Thin}}=1.1$. We note that TOI-3629 has a large Galactic tangential velocity ($|V_T|=100.7 \pm 0.5\mathrm{~km~s^{-1}}$) with respect to the local standard of rest. \cite{Hwang2020a} used Galactic models to calculate the age distribution for different tangential velocity bins and determined that a star with a tangential velocity in the range $100-120\mathrm{~km~s^{-1}}$ has a $\sim55\%$ chance of belonging to the thick disk and an estimated age of $7\pm2$ Gyr. While we cannot unambiguously classify TOI-3629 as a thick disk star, its high tangential velocity suggests that its age is most likely $>5$ Gyr. \section{Photometric and RV Modeling} \label{sec:modelfit} We use the \texttt{juliet} analysis package to jointly model the TESS photometry, ground-based photometry, and velocimetry and perform the parameter estimation using \texttt{dynesty}. \texttt{juliet} models the RVs with a standard Keplerian RV curve generated from the \texttt{radvel} \citep{Fulton2018} package and models the light curves with a transit model generated from the \texttt{batman} package \citep{Kreidberg2015}. The limb-darkening parameters are sampled from uniform priors following the parameterization presented in \cite{Kipping2013b}. For the long-cadence TESS photometry, the transit model utilizes the supersampling option in \texttt{batman} with exposure times of 30 minutes and a supersampling factor of 30. The photometric model also includes a dilution factor, \(D\), for TESS representing the ratio of the out-of-transit flux of the host star to that of all stars within the photometric aperture.
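Writing $\Delta m_i$ for the magnitude differences of the contaminating stars relative to the target, this definition gives $D = \left(1 + \sum_i 10^{-0.4\,\Delta m_i}\right)^{-1}$. A minimal sketch (the helper and the neighbor magnitude are illustrative, not values from the fits):

```python
import numpy as np

def dilution_factor(delta_mags):
    """D = target flux / (target + neighbor flux), with delta_mags the
    magnitude differences m_neighbor - m_target (illustrative helper)."""
    neighbor_flux = np.sum(10.0 ** (-0.4 * np.asarray(delta_mags, dtype=float)))
    return 1.0 / (1.0 + neighbor_flux)

# A single neighbor 2.5 mag fainter contributes 10% of the target flux,
# so D = 1/1.1, i.e. ~9% of the aperture flux is contamination.
d = dilution_factor([2.5])
```

A value of $D=1$ corresponds to an uncontaminated aperture; the fitted values below 1 for both targets reflect flux from nearby stars inside the large apertures.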
We include this term for TESS data because \texttt{eleanor} does not correct for dilution from nearby stars despite the large apertures adopted (see Figures \ref{fig:3714apertures} and \ref{fig:3629apertures}). Both the photometric and RV models include a simple white-noise model parameterized as a jitter term that is added in quadrature to the error bars of each data set. To account for correlated noise in the TESS light curves, each fit includes a Gaussian process noise model of the same form described in Section \ref{sec:prot}. Tables \ref{tab:3714par} and \ref{tab:3629par} provide a summary of the priors used for the fit along with the inferred system parameters and the confidence intervals ($16\mathrm{th}-84\mathrm{th}$ percentile) for TOI-3714 and TOI-3629, respectively. Figures \ref{fig:3714phot}, \ref{fig:3629phot}, \ref{fig:3714rv} and \ref{fig:3629rv} display the model posteriors for each system. The modeling reveals that (i) TOI-3714 b is a hot Jupiter ($M_{2}=0.70 \pm 0.03~\mathrm{M_J}$ and $R_{2}=1.01 \pm 0.03~\mathrm{R_J}$) orbiting its host star on a nearly circular orbit with a period of \(2.154849 \pm 0.000001\) days and (ii) TOI-3629 b is a hot Jupiter ($M_{2}=0.26 \pm 0.02~\mathrm{M_J}$ and $R_{2}=0.74 \pm 0.02~\mathrm{R_J}$) orbiting its host star on a nearly circular orbit with a period of \(3.936551_{-0.000006}^{+0.000005}\) days. To determine if the low eccentricity of the orbits is consistent with tidal evolution, we estimate the timescales for circularization using the formalism of \cite{Jackson2008}. For the tidal quality factors, we assume each hot Jupiter is comparable to Jupiter and adopt a value of \(Q_{p}=10^5\) \citep[see][]{Goldreich1966,Lainey2009,Lainey2016}. We adopt a nominal value of \(Q_{\star}=10^7\) for early M dwarfs based on the modeling of \cite{Gallet2017}. 
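The \cite{Jackson2008} eccentricity-damping rate sums a planetary and a stellar tide term, $\tau_e^{-1} = \left[\frac{63}{4}\sqrt{G M_\star^3}\,\frac{R_p^5}{Q_p M_p} + \frac{171}{16}\sqrt{G/M_\star}\,\frac{R_\star^5 M_p}{Q_\star}\right] a^{-13/2}$, here written with $Q$ standing in for the modified quality factor $Q'$. A rough order-of-magnitude sketch with approximate constants, using the TOI-3714 parameters adopted in this work:

```python
import math

# Approximate physical constants (SI); values rounded for illustration.
G = 6.674e-11
M_SUN, R_SUN = 1.989e30, 6.957e8
M_JUP, R_JUP = 1.898e27, 7.149e7
AU, GYR = 1.496e11, 3.156e16

def tau_circ(m_star, r_star, m_p, r_p, a, q_p=1e5, q_star=1e7):
    """Eccentricity-damping timescale (s), summing the planetary and
    stellar tide terms of Jackson et al. (2008); Q is used in place of
    the modified quality factor Q'."""
    planet_tide = (63.0 / 4.0) * math.sqrt(G * m_star**3) * r_p**5 / (q_p * m_p)
    stellar_tide = (171.0 / 16.0) * math.sqrt(G / m_star) * r_star**5 * m_p / q_star
    return a**6.5 / (planet_tide + stellar_tide)

# Mass, radius, and semi-major axis adopted for TOI-3714 / TOI-3714 b.
tau = tau_circ(0.53 * M_SUN, 0.51 * R_SUN, 0.70 * M_JUP, 1.01 * R_JUP, 0.027 * AU)
print(f"tau_e ~ {tau / GYR:.4f} Gyr")  # well below 0.1 Gyr
```

The planetary tide dominates at these parameters; scaling $Q_p$ up by $10^2-10^4$ lengthens $\tau_e$ by the same factor, which is why the larger $Q_p$ values discussed below allow the timescale to exceed the system age.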
Using the orbit parameters from joint modeling of each system and the stellar parameters, the timescale for circularization is $<0.1$ Gyr for both systems, suggesting that these systems should be consistent with circular orbits. If we adopt larger values of $Q_p=10^7-10^9$ \citep[based on upper limits from][]{Bonomo2017}, the circularization timescale can exceed $10$ Gyr and these systems may be able to retain a non-zero eccentricity. The RVs place $3\sigma$ upper limits on the eccentricity of $e<0.12$ for TOI-3714 b and $e<0.20$ for TOI-3629 b, revealing that even if these systems were not fully circularized, the planets are on low-eccentricity orbits. \begin{deluxetable*}{llccccc} {\tabletypesize{\tiny} \tablecaption{System Parameters for TOI-3714 \label{tab:3714par}} \tablehead{\colhead{~~~Parameter} & \colhead{Units} & \colhead{Prior} & \multicolumn{4}{c}{Value} } \startdata \noalign{\vskip 1.5ex} Photometric Parameters & & & TESS & RBO (08-16) & RBO (11-19) & ARCTIC\\ \noalign{\vskip .8ex} ~~~Linear Limb-darkening Coefficient$^a$ & $q_1$ & $\mathcal{U}(0,1)$ & $0.4_{-0.1}^{+0.2}$ & $0.5 \pm 0.2$ & $0.5 \pm 0.2$ & $0.4 \pm 0.1$ \\ ~~~Quadratic Limb-darkening Coefficient$^a$ & $q_2$ & $\mathcal{U}(0,1)$ & $0.4 \pm 0.2$ & $0.2_{-0.1}^{+0.2}$ & $0.2_{-0.1}^{+0.2}$ & $0.5 \pm 0.1$\\ ~~~Photometric Jitter & $\sigma_{phot}$ (ppm) & $\mathcal{J}(10^{-6},10^{3})$ & $0.007_{-0.007}^{+1.765}$ & $0.003_{-0.003}^{+1.431}$ & $0.03_{-0.03}^{+11.3}$ & $20_{-20}^{+778}$ \\ ~~~Dilution Factor & $D$ & $\mathcal{U}(0,2)$ & $0.87 \pm 0.02$ & $\cdots$ & $\cdots$ & $\cdots$ \\ \hline \noalign{\vskip 1.5ex} RV Parameters & & & \multicolumn{2}{c}{HPF} & \multicolumn{2}{c}{NEID}\\ \noalign{\vskip .8ex} ~~~Systemic velocity & $\gamma~\mathrm{(m~s^{-1})}$ & $\mathcal{U}(-10^3,10^3)$ & \multicolumn{2}{c}{$-3 \pm 7$} & \multicolumn{2}{c}{$-81 \pm 6$} \\ ~~~RV Jitter & $\sigma_{RV}~\mathrm{(m~s^{-1})}$ & $\mathcal{J}(10^{-3},10^3)$ & \multicolumn{2}{c}{$4_{-4}^{+20}$} &
\multicolumn{2}{c}{$7_{-3}^{+6}$}\\ \hline \sidehead{Orbital Parameters:} ~~~Orbital Period & $P$ (days) & $\mathcal{N}(2.15,0.01)$ & \multicolumn{4}{c}{$2.154849 \pm 0.000001$}\\ ~~~Time of Transit Center & $T_C$ (BJD\textsubscript{TDB}) & $\mathcal{N}(2458840.51,0.01)$ & \multicolumn{4}{c}{$2458840.5093 \pm 0.0004$}\\ ~~~$\sqrt{e}\cos\omega$ & & $\mathcal{U}(-1,1)$ & \multicolumn{4}{c}{$0.0 \pm 0.1$}\\ ~~~$\sqrt{e}\sin\omega$ & & $\mathcal{U}(-1,1)$ & \multicolumn{4}{c}{$0.1 \pm 0.1$}\\ ~~~Semi-amplitude velocity & $K~\mathrm{(m~s^{-1})}$ & $\mathcal{J}(1,10^3)$ & \multicolumn{4}{c}{$169_{-5}^{+6}$}\\ ~~~Scaled Radius & $R_{p}/R_{\star}$ & $\mathcal{U}(0,1)$ & \multicolumn{4}{c}{$0.204 \pm 0.003$}\\ ~~~Impact Parameter & $b$ & $\mathcal{U}(0,1)$ & \multicolumn{4}{c}{$0.26_{-0.1}^{+0.08}$}\\ ~~~Scaled Semi-major Axis & $a/R_{\star}$ & $\mathcal{J}(1,100)$ & \multicolumn{4}{c}{$11.5_{-0.5}^{+0.4}$}\\ \hline \sidehead{Gaussian Process Hyperparameters:} ~~~$B$ & Amplitude (ppm) & $\mathcal{J}(10^{-4},10^{12})$ & \multicolumn{4}{c}{$1.8_{-0.7}^{+2.3}$}\\ ~~~$C$ & Additive Factor & $\mathcal{J}(10^{-3},10^3)$ & \multicolumn{4}{c}{$1_{-1}^{+50}$}\\ ~~~$L$ & Length scale (days) & $\mathcal{J}(10^{-3},10^3)$ & \multicolumn{4}{c}{$3_{-2}^{+6}$}\\ ~~~$P_{GP}$ & Period (days) & $\mathcal{J}(1.0,100)$ & \multicolumn{4}{c}{$11_{-8}^{+30}$}\\ \hline \sidehead{Derived Parameters:} ~~~Eccentricity & $e$ & $\cdots$ & \multicolumn{4}{c}{$0.03_{-0.02}^{+0.03}$, $3\sigma<0.12$}\\ ~~~Argument of Periastron & $\omega$ (degrees) & $\cdots$& \multicolumn{4}{c}{$100_{-200}^{+61}$}\\ ~~~Orbital Inclination & $i$ (degrees) & $\cdots$ & \multicolumn{4}{c}{$88.7 \pm 0.5$}\\ ~~~Transit Duration & $T_{14}$ (hours) & $\cdots$ & \multicolumn{4}{c}{$1.66_{-0.01}^{+0.02}$}\\ ~~~Mass & $M_{p}$ ($\mathrm{M_{J}}$) & $\cdots$ & \multicolumn{4}{c}{$0.70 \pm 0.03$}\\ ~~~Radius & $R_{p}$ ($\mathrm{R_{J}}$) & $\cdots$ & \multicolumn{4}{c}{$1.01 \pm 0.03$}\\ ~~~Surface Gravity & $\log g_{p}$ (cgs) & 
$\cdots$ & \multicolumn{4}{c}{$3.25 \pm 0.04$}\\ ~~~Density & $\rho_{p}$ ($\mathrm{g~cm}^{-3}$) & $\cdots$ & \multicolumn{4}{c}{$0.85 \pm 0.08$}\\ ~~~Semi-major Axis & $a$ (au) & $\cdots$ & \multicolumn{4}{c}{$0.027 \pm 0.001$}\\ ~~~Average Incident Flux & $\langle F \rangle$ ($\mathrm{10^8\ erg~s^{-1}~cm^{-2}}$) & $\cdots$ & \multicolumn{4}{c}{$0.74_{-0.07}^{+0.06}$}\\ ~~~Equilibrium Temperature\(^{b}\) & $T_{eq}$ (K) & $\cdots$ & \multicolumn{4}{c}{$750 \pm 20$}\\ \enddata \tablenotetext{a}{Using the $q1$ and $q2$ parameterization from \cite{Kipping2013b}.} \tablenotetext{b}{The planet is assumed to be a black body and we ignore heat redistribution.} } \end{deluxetable*} \begin{deluxetable*}{llccccc} {\tabletypesize{\tiny } \tablecaption{System Parameters for TOI-3629 \label{tab:3629par}} \tablehead{\colhead{~~~Parameter} & \colhead{Units} & \colhead{Prior} & \multicolumn{4}{c}{Value} } \startdata \noalign{\vskip 1.5ex} Photometric Parameters & & & TESS & RBO (09-26) & RBO (10-04) & Kuiper\\ \noalign{\vskip .8ex} ~~~Linear Limb-darkening Coefficient$^a$ & $q_1$ & $\mathcal{U}(0,1)$ & $0.4 \pm 0.2$ & $0.3_{-0.1}^{+0.2}$ & $0.6_{-0.3}^{+0.2}$ & $0.7 \pm 0.2$ \\ ~~~Quadratic Limb-darkening Coefficient$^a$ & $q_2$ & $\mathcal{U}(0,1)$ & $0.3_{-0.2}^{+0.3}$ & $0.5 \pm 0.3$ & $0.3 \pm 0.2$ & $0.21 \pm 0.09$\\ ~~~Photometric Jitter & $\sigma_{phot}$ (ppm) & $\mathcal{J}(10^{-6},10^{3})$ & $0.03_{-0.03}^{+423.61}$ & $0.001_{-0.001}^{+0.442}$ & $0.03_{-0.03}^{+16.45}$ & $10_{-10}^{+421}$\\ ~~~Dilution Factor & $D$ & $\mathcal{U}(0,2)$ & $0.90 \pm 0.04$ & $\cdots$ & $\cdots$ & $\cdots$ \\ \hline \noalign{\vskip 1.5ex} RV Parameters & & & \multicolumn{2}{c}{HPF} & \multicolumn{2}{c}{NEID}\\ \noalign{\vskip .8ex} ~~~Systemic velocity & $\gamma~\mathrm{(m~s^{-1})}$ & $\mathcal{U}(-10^3,10^3)$ & \multicolumn{2}{c}{$-15 \pm 3$} & \multicolumn{2}{c}{$-4_{-8}^{+10}$} \\ ~~~RV Jitter & $\sigma_{RV}~\mathrm{(m~s^{-1})}$ & $\mathcal{J}(10^{-3},10^3)$ & 
\multicolumn{2}{c}{$5_{-3}^{+5}$} & \multicolumn{2}{c}{$16_{-7}^{+10}$}\\ \hline \sidehead{Orbital Parameters:} ~~~Orbital Period & $P$ (days) & $\mathcal{N}(3.94,0.01)$ & \multicolumn{4}{c}{$3.936551_{-0.000006}^{+0.000005}$}\\ ~~~Time of Transit Center & $T_C$ (BJD\textsubscript{TDB}) & $\mathcal{N}(2458784.26,0.01)$ & \multicolumn{4}{c}{$2458784.256 \pm 0.001$}\\ ~~~$\sqrt{e}\cos\omega$ & & $\mathcal{U}(-1,1)$ & \multicolumn{4}{c}{$-0.1 \pm 0.1$}\\ ~~~$\sqrt{e}\sin\omega$ & & $\mathcal{U}(-1,1)$ & \multicolumn{4}{c}{$-0.1_{-0.1}^{+0.2}$}\\ ~~~Semi-amplitude velocity & $K~\mathrm{(m~s^{-1})}$ & $\mathcal{J}(1,10^3)$ & \multicolumn{4}{c}{$45 \pm 4$}\\ ~~~Scaled Radius & $R_{p}/R_{\star}$ & $\mathcal{U}(0,1)$ & \multicolumn{4}{c}{$0.126 \pm 0.002$}\\ ~~~Impact Parameter & $b$ & $\mathcal{U}(0,1)$ & \multicolumn{4}{c}{$0.2 \pm 0.1$}\\ ~~~Scaled Semi-major Axis & $a/R_{\star}$ & $\mathcal{J}(1,100)$ & \multicolumn{4}{c}{$15.4 \pm 0.8$}\\ \hline \sidehead{Gaussian Process Hyperparameters:} ~~~$B$ & Amplitude (ppm) & $\mathcal{J}(10^{-4},10^{12})$ & \multicolumn{4}{c}{$1.8_{-0.5}^{+1.5}$}\\ ~~~$C$ & Additive Factor & $\mathcal{J}(10^{-3},10^3)$ & \multicolumn{4}{c}{$3_{-3}^{+80}$}\\ ~~~$L$ & Length scale (days) & $\mathcal{J}(10^{-3},10^3)$ & \multicolumn{4}{c}{$0.8_{-0.4}^{+1.6}$}\\ ~~~$P_{GP}$ & Period (days) & $\mathcal{J}(1.0,100)$ & \multicolumn{4}{c}{$9_{-8}^{+37}$}\\ \hline \sidehead{Derived Parameters:} ~~~Eccentricity & $e$ & $\cdots$ & \multicolumn{4}{c}{$0.05_{-0.04}^{+0.05}$, $3\sigma<0.20$}\\ ~~~Argument of Periastron & $\omega$ (degrees) & $\cdots$& \multicolumn{4}{c}{$-110_{-40}^{+200}$}\\ ~~~Orbital Inclination & $i$ (degrees) & $\cdots$ & \multicolumn{4}{c}{$89.1 \pm 0.5$}\\ ~~~Transit Duration & $T_{14}$ (hours) & $\cdots$ & \multicolumn{4}{c}{$2.20 \pm 0.03$}\\ ~~~Mass & $M_{p}$ ($\mathrm{M_{J}}$) & $\cdots$ & \multicolumn{4}{c}{$0.26 \pm 0.02$}\\ ~~~Radius & $R_{p}$ ($\mathrm{R_{J}}$) & $\cdots$ & \multicolumn{4}{c}{$0.74 \pm 0.02$}\\ ~~~Surface 
Gravity & $\log g_{p}$ (cgs) & $\cdots$ & \multicolumn{4}{c}{$3.09_{-0.07}^{+0.06}$}\\ ~~~Density & $\rho_{p}$ ($\mathrm{g~cm}^{-3}$) & $\cdots$ & \multicolumn{4}{c}{$0.8 \pm 0.1$}\\ ~~~Semi-major Axis & $a$ (au) & $\cdots$ & \multicolumn{4}{c}{$0.043 \pm 0.002$}\\ ~~~Average Incident Flux & $\langle F \rangle$ ($\mathrm{10^8\ erg~s^{-1}~cm^{-2}}$) & $\cdots$ & \multicolumn{4}{c}{$0.53 \pm 0.06$}\\ ~~~Equilibrium Temperature\(^{b}\) & $T_{eq}$ (K) & $\cdots$ & \multicolumn{4}{c}{$690 \pm 20$}\\ \enddata \tablenotetext{a}{Using the $q1$ and $q2$ parameterization from \cite{Kipping2013b}.} \tablenotetext{b}{The planet is assumed to be a black body and we ignore heat redistribution.} } \end{deluxetable*} \section{Discussion}\label{sec:discussion} \subsection{Constraints on unresolved stellar companions}\label{sec:secondarylight} For both targets, the stellar density derived from the transit fit \citep[see][]{Seager2003,Winn2010} in Section \ref{sec:modelfit} is consistent with the value derived with an SED fit (Section \ref{sec:stellarpar}). Following the methodology presented in \cite{Kanodia2020}, we place limits on any spatially unresolved stellar companions to our targets by quantifying the lack of flux from a secondary stellar object in the HPF spectra. The highest S/N spectrum for each target is parameterized as a linear combination of a primary M dwarf\footnote{GJ\_205 and BD+29\_2279 for TOI-3714 and TOI-3629, respectively, as identified by \texttt{HPF-SpecMatch}} and a secondary stellar companion. 
The flux ratio between the secondary and primary star, $F$, is calculated as: \begin{eqnarray} S_{\mathrm{obs}} &=& A \left( (1-x)S_{\mathrm{primary}} + (x)S_{\mathrm{secondary}} \right) \label{eq:spectra} \\ F &=& \frac{x}{1-x} \label{eq:fluxratio} \end{eqnarray} \noindent where $S_{\mathrm{obs}}$ is the observed spectrum, $S_{\mathrm{primary}}$ is the primary spectrum, $S_{\mathrm{secondary}}$ represents the secondary spectrum, and $A$ is the normalization constant. For a given primary and secondary template, we (i) shift the secondary spectrum in velocity space, (ii) add this shifted spectrum to the primary spectrum, and (iii) fit for the value of $x$ that best fits the observed spectrum. We limit the secondary spectral type to M dwarfs earlier than M7 and to the orders where telluric absorption is minimal. \begin{figure*}[!ht] \fig{SecondaryLightTOI_3714.pdf}{0.45\textwidth}{{\small (a) Flux ratio upper limits for TOI-3714.}} \fig{SecondaryLightTOI_3629.pdf}{0.45\textwidth}{ \small (b) Flux ratio upper limits for TOI-3629.} \caption{\small Flux upper limits placed on the flux ratio of a secondary companion to a template spectrum as a function of $\Delta \rm{v}$, obtained by fitting the wavelength region spanning $\sim 10450 - 10580$ \AA (HPF order index 17). We include the $1\sigma$ error bars, and shade the region corresponding to $\pm 5 ~\mathrm{km~s^{-1}}$. We place a conservative upper limit on the flux ratio of 0.07 for an unresolved stellar companion at separations $|\Delta v| > 5~\mathrm{km~s^{-1}}$.}\label{fig:secondarylight} \end{figure*} Figure \ref{fig:secondarylight} presents the results from HPF order index 17 spanning $10450 - 10580$ \AA. We place a conservative upper limit for a secondary of flux ratio = 0.07 or $\Delta \rm{mag} \simeq 2.9$ for both TOI-3714 and TOI-3629. As shown in Figure \ref{fig:secondarylight}, there is no significant flux contamination at $|\Delta v|$ $> 5~\mathrm{km~s^{-1}}$. 
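A minimal numerical sketch of the fit for $x$ in Equations \ref{eq:spectra} and \ref{eq:fluxratio}, using synthetic spectra; the velocity shifting of the secondary template and the telluric masking are omitted for brevity, and none of the arrays below correspond to real HPF data or to the \texttt{HPF-SpecMatch} machinery:

```python
import numpy as np

def best_fit_x(s_obs, s_primary, s_secondary):
    """Fit S_obs = A*((1-x)*S_primary + x*S_secondary) for A and x.

    For a fixed x the optimal normalization A follows from linear least
    squares, so we scan a grid in x and keep the minimum-chi^2 value.
    """
    xs = np.linspace(0.0, 0.5, 501)
    chi2 = np.empty_like(xs)
    for i, x in enumerate(xs):
        model = (1.0 - x) * s_primary + x * s_secondary
        a = np.dot(s_obs, model) / np.dot(model, model)  # optimal A for this x
        chi2[i] = np.sum((s_obs - a * model) ** 2)
    return xs[np.argmin(chi2)]

# Toy demonstration: an "observed" spectrum that is 95% primary and 5%
# secondary; recover x and the flux ratio F = x / (1 - x).
rng = np.random.default_rng(42)
wave = np.linspace(10450.0, 10580.0, 2000)      # HPF order 17 span (Angstrom)
s_primary = 1.0 + 0.1 * np.sin(wave / 3.0)      # stand-in template spectra
s_secondary = 1.0 + 0.1 * np.cos(wave / 2.0)
s_obs = 0.95 * s_primary + 0.05 * s_secondary + rng.normal(0.0, 1e-4, wave.size)

x_fit = best_fit_x(s_obs, s_primary, s_secondary)
flux_ratio = x_fit / (1.0 - x_fit)
```

In the actual analysis, step (i) repeats this fit for each trial velocity shift of the secondary template, producing the flux-ratio upper limit as a function of $\Delta v$ shown in Figure \ref{fig:secondarylight}.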
We perform this secondary light analysis for velocity offsets from $5-100~\mathrm{km~s^{-1}}$, where the lower limit coincides with the spectral resolution of HPF ($\sim 5.5~ \rm{km~s^{-1}}$). The degeneracy between the primary and secondary spectra at velocity offsets $< 5\mathrm{~km~s^{-1}}$ prevents any meaningful flux ratio constraints at those offsets. \subsection{Constraints on unresolved bound companions} We use {\tt thejoker} \citep{Price-Whelan2017} to perform a rejection sampling analysis on the residuals of the HPF RVs to constrain the existence of additional signals. This analysis used a log-uniform prior for the period (between 1 day and twice the HPF RV baseline), the Beta distribution from \cite{Kipping2013a} as a prior for the eccentricity, and a uniform prior for the argument of pericenter and the orbital phase. For both TOI-3714 and TOI-3629, we analyzed \(>10^8\) (\(2^{28}\)) samples with {\tt thejoker} and had a total acceptance rate of \(<3\%\). The surviving samples place an upper limit on any companions on nearly edge-on orbits ($\sin i\sim1$) of $M<3.1~\mathrm{M_J}$ ($K<\mathrm{300~m~s^{-1}}$) within 0.6 au ($P<242$ days) for TOI-3714 and $M<2.9~\mathrm{M_J}$ ($K<160\mathrm{~m~s^{-1}}$) within 1.4 au ($P<722$ days) for TOI-3629. Gaia EDR3 provides an additional constraint on the presence of close-in, massive companions with the re-normalized unit weight error (RUWE) statistic. \cite{Lindegren2021} note that the RUWE, or the square root of the reduced $\chi^2$ statistic that has been corrected for calibration errors, is sensitive to the photocentric motions of unresolved objects. In systems with massive companions on orbital periods much shorter than the baseline of Gaia (34 months for EDR3), the astrometric motion of the primary star around the center of mass may appear as noise when adopting a single-star astrometric solution \citep[e.g.,][]{Kervella2019,Kiefer2019}.
\(\mathrm{RUWE}\gtrsim1.4\) is a threshold that correlates with the existence of an unresolved stellar companion in recent studies of stellar binaries \citep[e.g.,][]{Belokurov2020,Penoyre2020,Gandhi2020,Stassun2021}. With RUWE values of 1.15 and 1.05, Gaia EDR3 suggests TOI-3714 and TOI-3629 do not have massive stellar companions on short periods ($\sim0.1-3\mathrm{~years}$). Instead, these systems are in agreement with a single-star astrometric solution. \subsection{Constraints on resolved bound companions} We also use results from Gaia EDR3 to determine if either star has a wide-separation stellar companion. \cite{El-Badry2021} provide a list of spatially resolved binary stars from an analysis of proper motions. TOI-3629 is not contained in the catalog, but TOI-3714 is identified as having a white dwarf stellar companion, Gaia EDR3 178924390476838784 (TIC 662037581). Systems in \cite{El-Badry2021} are flagged as having a white dwarf companion based on the location of the companion on the Gaia color-absolute magnitude diagram \citep{El-Badry2018b}. This object has a negligible probability ($\sim0.0006\%$) of being the chance alignment of a background source with spurious parallax and proper motion measurements. The white dwarf companion is located at a projected distance of 2.67\arcsec{} or a projected separation of 302 au from TOI-3714. This companion is outside both the HPF fiber \citep[$\sim1.7\arcsec$ on-sky;][]{Kanodia2018a} and the NEID HR fiber \citep[$\sim0.9\arcsec$ on-sky;][]{Schwab2016}. \begin{figure*}[!ht] \epsscale{1.15} \plotone{3714wdcomp.pdf} \caption{The nominal position of TIC 662037581, the white dwarf companion to TOI-3714, on the color-magnitude diagram for white dwarfs identified in Gaia EDR3 by \cite{GentileFusillo2021}. Contours for fixed masses from \cite{Bedard2020} are plotted for reference.
The best-matching cooling track from the models is shown with a dashed line.} \label{fig:wdloc} \end{figure*} To estimate the physical parameters of the white dwarf companion, we use the \texttt{WD\_models}\footnote{\url{https://github.com/SihaoCheng/WD_models}} package from \cite{Cheng2019} to derive a photometric age and mass from its location (see Figure \ref{fig:wdloc}) on the Gaia color-magnitude diagram \citep[data from][]{GentileFusillo2021}. We assume the atmosphere is composed of hydrogen and adopt the cooling models of \cite{Bedard2020}. We adopt these colors as nominal values but note that the proximity to TOI-3714 introduces some blending and contamination in the colors of the white dwarf companion, limiting the reliability of the estimated parameters. We use the \texttt{phot\_bp\_rp\_excess\_factor} as a diagnostic to determine if the measured blue or red Gaia photometry is problematic \citep{Evans2018, Riello2021}. We use Table 2 and Equation 6 from \cite{Riello2021} to calculate the corrected \texttt{phot\_bp\_rp\_excess\_factor}, which attempts to account for the color-dependent mean trend in this parameter. The corrected \texttt{phot\_bp\_rp\_excess\_factor} for the white dwarf companion is 0.81, and the deviation from zero suggests this object has some degree of contamination. Without additional photometry of TIC 662037581, we provide nominal parameters to qualitatively describe the companion. The estimated mass for the white dwarf companion is $M_{WD}\sim1.07~\mathrm{M_\odot}$ with a cooling age of $\sim2.4$ Gyr. The MIST semi-empirical white dwarf initial-final mass relationship from \cite{Cummings2018} suggests the progenitor star had a mass between $4.5-6.8~\mathrm{M_\odot}$. Stars in this mass range have typical lifetimes (pre-main sequence through post-asymptotic giant branch) of $<0.1$ Gyr \citep{Dotter2016,Choi2016}.
Assuming this object is coeval with TOI-3714, the combined progenitor lifetime and white dwarf cooling age ($\sim2.4$ Gyr) is consistent with the age range of $0.7-5.1$ Gyr estimated from the rotation period of TOI-3714 ($23.3 \pm 0.3$ days). Approximately half of all hot Jupiter systems are known to have resolved stellar companions between separations of $50 - 2000$ au \citep[e.g.,][]{Wang2014a,Knutson2014a,Ngo2015,Ngo2016,Marzari2019,Hwang2020,Fontanive2021}, but only fourteen other exoplanetary systems \citep[see Table 2 in][]{Martin2021} are known to have a distant white dwarf companion. Of these fourteen systems, only the TOI-1259 \citep{Martin2021} and WASP-98 \citep{Hellier2014} systems host hot Jupiters. The existence of a distant stellar companion has been proposed as one mechanism to form hot Jupiters via a combination of secular interactions with the stellar companion and tidal friction \citep[e.g.,][]{Fabrycky2007,Anderson2016,Vick2019}. \cite{Ngo2016} note that most hot Jupiters with distant stellar companions are too separated to form via this mechanism. If the TOI-3714 system was initially a wide binary with an initial progenitor separation comparable to the observed separation ($\sim302$ au), the timescale for the Kozai cycles \citep[Equation 7 from][]{Kiseleva1998} would be \(\sim 2.8\) Gyr. This timescale is comparable to the age of the system and too long to effectively perturb a gas giant. The separation between the progenitor star and TOI-3714 could not have been too small, as a stellar binary with an initial separation $a\lesssim10$ au may interact when the primary star evolves off the main sequence and common envelope effects would subsequently shrink the orbit \citep[see][]{Paczynski1971,Paczynski1976,Ivanova2013}. Instead, the progenitor star could have been on a smaller orbit of tens of au and the onset of mass loss could have caused the orbit to expand \citep[see][]{Nordhaus2010,Nordhaus2013} to the observed separation. 
For example, at separations of 30 au, the Kozai timescale approaches $\sim3$ Myr, and it may be possible that the progenitor was close enough to perturb a nascent gas giant and far enough from TOI-3714 to prevent significant orbital decay. Gaia EDR3 is able to place constraints on the eccentricity of resolved wide binaries \citep[e.g.,][]{Tokovinin2020,Hwang2021}. The precision of Gaia EDR3 proper motion measurements allows for a measurement of the relative velocity of wide binaries (within the orbital plane) and of the angle between the separation vector and the relative velocity vector (the $v-r$ angle). This angle is a function of the phase, inclination, eccentricity, and argument of pericenter \citep[see Appendix A in][for a detailed derivation]{Hwang2021}. The measured $v-r$ angle is $169\pm14^\circ$ and is significantly discrepant from a circular, face-on orbit ($v-r=90^\circ$). We follow the methodology and use the software\footnote{\url{https://github.com/HC-Hwang/Eccentricity-of-wide-binaries}} described in \cite{Hwang2021} to estimate the posterior of the eccentricity distribution after adopting the parameters for the best-fitting power law \citep[Equation 29 in][]{Hwang2021} to the wide binary sample identified by \cite{El-Badry2021}. We note this eccentricity inference assumes that the wide companion has a random orbital orientation, an assumption which may not be true if the inner system is a transiting system. If the orbital orientation is not random, a large $v-r$ angle can indicate either (i) a high eccentricity or (ii) an outer companion on an orbit that is co-planar with the inner transiting system \citep[see Appendix B in][]{Hwang2020a,Behmard2022}. The inferred eccentricity with $1\sigma$ uncertainties for the orbit of the white dwarf companion is $e=0.99^{+0.01}_{-0.47}$. The high eccentricity is consistent with the scenario in which the progenitor star was on a smaller orbit that widened and became eccentric due to mass loss.
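The Kozai timescales quoted above can be reproduced to order of magnitude with the \cite{Kiseleva1998} expression. The host and perturber masses below ($\sim0.5$ and $\sim1~\mathrm{M_\odot}$) and the circular outer orbit are illustrative assumptions, not values taken from this section:

```python
import math

# Order-of-magnitude sketch of the Kozai-cycle timescale
# (Kiseleva et al. 1998, Eq. 7):
#   t_Kozai = (2/(3*pi)) * (P_out^2 / P_in) * ((m1+m2+m3)/m3) * (1-e_out^2)^(3/2)
# Assumed values: host mass ~0.5 Msun, perturber mass ~1 Msun (the
# present-day white dwarf), and a circular outer orbit.

def kozai_timescale_yr(a_out_au, p_in_day, m_host, m_planet, m_pert, e_out=0.0):
    m_tot = m_host + m_planet + m_pert
    p_out_yr = math.sqrt(a_out_au**3 / m_tot)   # Kepler's third law (solar units)
    p_in_yr = p_in_day / 365.25
    return (2.0 / (3.0 * math.pi)) * (p_out_yr**2 / p_in_yr) \
        * (m_tot / m_pert) * (1.0 - e_out**2) ** 1.5

# TOI-3714 b (P = 2.1548 d): companion at the observed ~302 au vs. a 30 au orbit
t_302 = kozai_timescale_yr(302.0, 2.1548, 0.5, 0.0007, 1.0)
t_30 = kozai_timescale_yr(30.0, 2.1548, 0.5, 0.0007, 1.0)
print(f"t_Kozai(302 au) ~ {t_302 / 1e9:.1f} Gyr")
print(f"t_Kozai(30 au)  ~ {t_30 / 1e6:.1f} Myr")
```

With these inputs the 302 au and 30 au cases land at roughly $\sim$1 Gyr and $\sim$1 Myr, within a factor of a few of the values quoted above; the exact numbers depend on the adopted masses and outer eccentricity.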
In this scenario, the resolved companion may have interacted with and impacted the migration of TOI-3714 b as it evolved into a white dwarf. \subsection{Comparison to the M dwarf planet population} With the discovery of TOI-3714 b and TOI-3629 b, there are 9 M dwarf systems hosting transiting hot Jupiters ($P<10$ days and $R_p>8\mathrm{R_\oplus}$). Figure \ref{fig:planetprops} compares the planetary mass-radius distribution, the stellar $T_{e} - \log g_\star$ distribution, and the insolation flux of M dwarfs with transiting planets that have $R>2~R_\oplus$. Of the transiting hot Jupiters orbiting M dwarfs, TOI-3629 b has the smallest radius and mass ($\sim0.9\mathrm{~R_{Saturn}}$ and $\sim0.9\mathrm{~M_{Saturn}}$), and is the second coolest hot Jupiter with an insolation flux of $S=39\pm2~S_{\oplus}$. TOI-3714 b has a radius comparable to the median value ($1\mathrm{~R_J}$) and an insolation flux ($S=54\pm5~S_{\oplus}$) comparable to the median value ($\sim60~S_\oplus$) of the population of hot Jupiters transiting M dwarfs. TOI-3714 is, however, the only known M dwarf with both a transiting hot Jupiter and a resolved wide companion. All M dwarfs hosting short-period ($P<10$ days) Jupiter-sized gas giants, including TOI-3714 and TOI-3629, are early M dwarfs (M0-M3, $3400\mathrm{~K}<T_e<4000\mathrm{~K}$). This may simply be an observational bias or a result of a small population size, but in the framework of core accretion, factors such as protoplanetary disk mass may impact the formation of gas giants \citep[e.g.,][]{Mordasini2012,Hasegawa2013,Hasegawa2014,Adibekyan2019}.
M dwarf protoplanetary disks have lower masses than the disks around Sun-like stars \citep[e.g.,][]{Andrews2013,Mohanty2013,Stamatellos2015,Ansdell2017}; disk masses for these stars are typically below a few Jupiter masses \citep{Ansdell2017,Manara2018}. The efficiency of gas giant formation is therefore expected to be higher around more massive M dwarfs, whose disks contain more of the material needed to assemble gas giant cores than the low-mass protoplanetary disks around later M dwarfs. \begin{figure*}[!ht] \epsscale{1.15} \plotone{planetprops.pdf} \caption{The physical parameters of the TOI-3714 and TOI-3629 systems. \textbf{(a)} places TOI-3714 b (star) and TOI-3629 b (diamond star) on the mass-radius diagram for transiting M dwarf exoplanets with mass measurements and $R_p>2~R_\oplus$. All previously known hot Jupiters ($P<10$ days and $R_P\ge8~\mathrm{R_\oplus}$) transiting M dwarfs are marked as pentagons with numbers linked to the planet names. Contours of fixed bulk density are plotted for reference. \textbf{(b)} highlights the position of TOI-3714 and TOI-3629 on an effective temperature $-$ surface gravity diagram. \textbf{(c)} presents the insolation flux for M dwarf exoplanets. The data were compiled from the \href{https://exoplanetarchive.ipac.caltech.edu/cgi-bin/TblView/nph-tblView?app=ExoTbls&config=PSCompPars}{NASA Exoplanet Archive} \citep{Akeson2013} on 2022 May 4.} \label{fig:planetprops} \end{figure*} In addition to the protoplanetary disk mass, stellar metallicity is known to be important for gas giant formation. The planet-metallicity correlation, in which metal-rich stars are more likely to host gas giant planets, has been extensively observed in Sun-like stars \citep[e.g.,][]{Fischer2005,Johnson2010,Guo2017,Osborn2020}.
RV studies \citep[e.g.,][]{Neves2013,Maldonado2020} have suggested the planet-metallicity correlation exists for M dwarfs, but there is no statistical study for M dwarfs with transiting gas giants because of the small population size. Figure \ref{fig:planetmetal} compares the metallicity and planetary radii of these two systems with transiting exoplanets from the NASA Exoplanet Archive. All M dwarfs hosting a transiting Jupiter-sized planet ($R\ge8~\mathrm{R_\oplus}$), including TOI-3714 and TOI-3629, have metallicities of $\mathrm{[Fe/H]}>0$. We perform a simple binomial probability calculation to assess how likely it is that all nine Jupiter-sized companions are found to have $\mathrm{[Fe/H]}\ge0$ by random chance. If we assume a uniform distribution in metallicity over the range $-0.5<\mathrm{[Fe/H]}<0.5$, the probability that all nine M dwarfs hosting hot Jupiters fall within the observed range of $\mathrm{[Fe/H]}\ge0$ is $\sim0.2\%$. \begin{figure*}[!ht] \epsscale{1.15} \plotone{metal.pdf} \caption{The position of TOI-3714 and TOI-3629 on a metallicity-mass diagram for transiting hot Jupiters ($R\ge8~\mathrm{R_\oplus}$ and $P<10$ days). All known hot Jupiters transiting M dwarfs have $\mathrm{[Fe/H]}>0$. The same numbers from Figure \ref{fig:planetprops} are included to identify the M dwarf hot Jupiters. HATS-6 b is behind the marker for TOI-3714 b. NGTS-1 b lacks a metallicity measurement \citep{Bayliss2018} and is not plotted. The data were compiled from the \href{https://exoplanetarchive.ipac.caltech.edu/cgi-bin/TblView/nph-tblView?app=ExoTbls&config=PSCompPars}{NASA Exoplanet Archive} on 2022 May 04.} \label{fig:planetmetal} \end{figure*} We note that the metallicities on the NASA Exoplanet Archive are also not homogeneously derived. Metallicities derived using different techniques or instruments may exhibit offsets \citep[e.g.,][]{Guo2017,Petigura2018}.
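The binomial estimate above reduces to a one-line calculation: under the uniform-metallicity assumption each host independently has a 50\% chance of $\mathrm{[Fe/H]}\ge0$, so:

```python
# If [Fe/H] were drawn uniformly from (-0.5, 0.5), each star independently
# has probability 0.5 of landing at [Fe/H] >= 0, so the chance that all
# nine hosts do so is 0.5**9.
p_single = 0.5          # P([Fe/H] >= 0) under the uniform assumption
n_hosts = 9
p_all = p_single ** n_hosts
print(f"P(all {n_hosts} hosts metal-rich) = {p_all:.4f}")  # ~0.002, i.e. ~0.2%
```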
A magnitude-limited search from a non-targeted transit survey, such as TESS, to identify a population of hot Jupiters orbiting M dwarfs is required to statistically evaluate if the planet-metallicity and stellar mass correlations apply to the population of transiting M dwarf gas giants. TESS is an ideal mission for this study, as it has been shown to be complete for hot Jupiters transiting earlier Sun-like stars \citep{Zhou2019a} and it should detect almost all transiting hot Jupiter - M dwarf systems given the large transit depth. This population of hot Jupiters could then be extensively studied to provide statistical constraints on the existence and strength of correlations of gas giant planets with metallicity or stellar mass. TOI-3714 and TOI-3629, like the six existing M dwarf systems with transiting hot Jupiters, also lack additional transiting planets on nearby orbits. We search for additional transiting companions in the TESS data for each system by using the transit least squares algorithm \citep[\texttt{TLS};][]{Hippke2019} after subtracting the best-fitting transit model for each planet. For both TOI-3714 and TOI-3629, \texttt{TLS} only identifies candidate signals (depths $>1$ ppm) between $1-13$ days where the test statistic is below the suggested value of 7. This threshold corresponds to a false positive rate of $\sim1\%$ for the \texttt{TLS} algorithm. The maximum radius of a candidate signal identified by \texttt{TLS} was $\sim4\mathrm{~R_\oplus}$ and $\sim5\mathrm{~R_\oplus}$ for TOI-3714 and TOI-3629, respectively, such that the current TESS data exclude the existence of additional transiting gas giant companions. Only five transiting hot Jupiters are known to exist in compact multiplanet systems: WASP-47 b \citep{Becker2015}, Kepler-730 b \citep{Zhu2018,Canas2019}, TOI-1130 c \citep{Huang2020a}, WASP-148 b \citep{Wang2022}, and WASP-132 b \citep{Hord2022}.
This apparent low planetary multiplicity rate for hot Jupiters orbiting Sun-like stars has been detected in analyses of multiple statistical samples from ground- and space-based surveys of transiting hot Jupiters \citep[e.g.,][]{Steffen2012,Huang2016,Maciejewski2020,Hord2021,Wang2021a,Zhu2021}. The apparent lack of close-period companions to hot Jupiters may be an imprint of high-eccentricity migration \citep[e.g.,][]{Mustill2015,Dawson2018}, as this mechanism would destabilize shorter-period planets. TOI-3714 and TOI-3629 will both be observed in TESS cycle 5 and both sectors of data for each target could be analyzed in detail \citep[e.g., similar to][]{Hord2021} to provide robust constraints on additional transiting companions. \subsection{Comparison to planetary models} The equilibrium temperatures of TOI-3714 b ($T_{eq}=750\pm20$ K) and TOI-3629 b ($T_{eq}=690\pm20$ K) are $<1000$ K and it is unlikely these planets exhibit radius inflation due to stellar flux-driven mechanisms. Studies of the population of Kepler hot Jupiters \citep[e.g.,][]{Demory2011} determined that gas giants receiving an incident flux \(\lesssim 2 \times 10^{8}\mathrm{~erg~s^{-1}~cm^{-2}}\) have radii that are independent of the stellar incident flux. More recent analyses on transiting hot Jupiters \citep[e.g.,][]{Thorngren2018,Thorngren2021} have confirmed that inflated radii are evident in the population of hot Jupiters with $T_{eq}>1000$ K serving as a threshold for the onset of Ohmic heating and planetary inflation \citep[e.g.,][]{Batygin2010,Miller2011,Batygin2011}. Both the hot Jupiters TOI-3714 b and TOI-3629 b have $T_{eq}<1000$ K and do not show anomalously large radii when compared to models for gas giants from \cite{Baraffe2008} and \cite{Fortney2007}.
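As a consistency check, the equilibrium temperatures quoted above follow directly from the average incident fluxes tabulated in Tables \ref{tab:3714par} and \ref{tab:3629par} for a zero-albedo black body:

```python
import math

# T_eq = (<F> / (4 * sigma_SB))**0.25 for a zero-albedo black body,
# using the tabulated average incident fluxes in cgs units.
SIGMA_SB = 5.6704e-5  # Stefan-Boltzmann constant, erg s^-1 cm^-2 K^-4

def t_eq_from_flux(flux_cgs):
    return (flux_cgs / (4.0 * SIGMA_SB)) ** 0.25

t_3714 = t_eq_from_flux(0.74e8)  # tabulated <F> for TOI-3714 b
t_3629 = t_eq_from_flux(0.53e8)  # tabulated <F> for TOI-3629 b
print(f"TOI-3714 b: {t_3714:.0f} K, TOI-3629 b: {t_3629:.0f} K")
```

The results ($\sim$756 K and $\sim$695 K) agree with the tabulated $750\pm20$ K and $690\pm20$ K.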
The models from \cite{Fortney2007} assume a solar metallicity hydrogen and helium atmosphere with a heavy element core that is composed of a $50-50$ mixture of ice (water) and rock (olivine) while the models from \cite{Baraffe2008} assume a gaseous hydrogen and helium envelope with a distribution of heavy elements (water, dunite, and iron). Although these models are generated for the population of hot Jupiters transiting Sun-like stars, both TOI-3714 b and TOI-3629 b are in agreement with the non-irradiated models for gas giant interiors. We compared the observed radii of TOI-3714 b and TOI-3629 b to the radii predicted by $1-7$ Gyr models for a solar metallicity atmosphere and note agreement within $2-3\sigma$ regardless of age. The mass and radius of TOI-3714 b are in agreement with the models from \cite{Baraffe2008} containing a small fraction of heavy metals ($\sim2\%$) and with the \cite{Fortney2007} models for a core mass of $2-4\%$ of the planetary mass. The mass and radius of TOI-3629 b are consistent with models from \cite{Fortney2007} having a core mass $\sim30\%$ of the planetary mass or the \cite{Baraffe2008} models with a heavy metal fraction of $\sim20-40\%$. These heavy metal fractions are consistent with what is seen in Jupiter \citep[$<10\%$ core mass;][]{Wahl2017} and Saturn \citep[$\sim20\%$;][]{Mankovich2021} in the Solar System. \subsection{Future Characterization} \subsubsection{Stellar Obliquity} The projected stellar obliquity ($\lambda$) is the apparent angle between the stellar rotation axis and the normal to the plane of the orbit. It can shed light on the dynamical and formation history of planets \citep[e.g.,][]{Albrecht2012,Winn2015,Triaud2018, Albrecht2021}.
Measurements of $\lambda$ for hot Jupiters orbiting Sun-like stars \citep[e.g.,][]{Albrecht2012,Dawson2014} have revealed an obliquity distribution that is consistent with tidal realignment, indicating that their origin channels most likely involve dynamical interactions, such as planet-planet scattering. To date, there is no measurement of $\lambda$ for any M dwarf hosting a hot Jupiter. The measurement of $\lambda$ for either system via the Rossiter-McLaughlin (RM) effect \citep{Triaud2018} could constrain the physical processes involved in formation because some mechanisms, such as disk migration, prohibit highly misaligned orbits \citep[see][]{Dawson2018}. The amplitude of the RM effect can be estimated as $\Delta V = 2/3 \left(R_{p}/R_\star\right)^2 v\sin i_\star \sqrt{1-b^2}$ \citep[Equation 1,][]{Triaud2018}. For TOI-3714, we estimate an equatorial ($\sin i=1$) rotational velocity of $v_{eq}=1.08 \pm 0.06~\mathrm{km~s^{-1}}$ using the derived rotation period and stellar radius. Additional photometric observations of TOI-3629 are required to determine the rotation period; however, if we adopt a value of $P_{rot}=30$ days corresponding to the marginally significant peak seen in its ZTF $zr$ data, the equatorial rotational velocity would be $v_{eq}\sim1~\mathrm{km~s^{-1}}$. We use the derived transit parameters to estimate the RM effect amplitudes as \(\sim30~\mathrm{m~s^{-1}}\) and \(\sim10~\mathrm{m~s^{-1}}\) for TOI-3714 and TOI-3629, respectively. The precision needed to detect these amplitudes can be achieved using current high-resolution spectrographs with extended red wavelength coverage because both of these targets are early M dwarfs with SEDs that peak at $\sim0.8-0.9$ microns. \subsubsection{Transmission Spectroscopy} TOI-3714 and TOI-3629 are the two brightest M dwarfs ($J<12$) with a transiting hot Jupiter and are potential targets to probe the atmospheres of warm ($T_{eq}\sim700$ K) M dwarf - hot Jupiter systems.
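The RM amplitudes quoted in the previous subsection follow from the \cite{Triaud2018} estimate; a quick numerical check using the transit parameters from Tables \ref{tab:3714par} and \ref{tab:3629par}:

```python
import math

# RM amplitude estimate (Triaud 2018, Eq. 1):
#   dV = (2/3) * (Rp/Rstar)**2 * v*sin(i) * sqrt(1 - b**2)
def rm_amplitude(rp_over_rstar, vsini_ms, b):
    return (2.0 / 3.0) * rp_over_rstar**2 * vsini_ms * math.sqrt(1.0 - b**2)

# Inputs: (Rp/R*, b) from the joint fit; v*sin(i) taken as the estimated
# equatorial rotational velocities in m/s.
dv_3714 = rm_amplitude(0.204, 1080.0, 0.26)  # ~29 m/s
dv_3629 = rm_amplitude(0.126, 1000.0, 0.20)  # ~10 m/s
```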
\cite{Sing2016} obtained transmission spectra of hot Jupiters transiting Sun-like stars and found that the observed sample contained both cloudy and clear planets, suggesting that hot Jupiters do not exhibit a strong relationship with cloud formation. While no extensive studies have been performed on M dwarf hot Jupiters, the transmission spectroscopy metric \citep[TSM;][]{Kempton2018} suggests that both TOI-3714 b ($\mathrm{TSM}=98\pm7$) and TOI-3629 b ($\mathrm{TSM}=80\pm9$) are amenable to observations with the James Webb Space Telescope \citep[JWST;][]{Gardner2006}. These systems also have the precision on mass and radius (both determined at $>10\sigma$) needed for detailed atmospheric analysis \citep{Batalha2019}. It may be possible to determine atmospheric abundances of C-, N-, and O-bearing molecules in the atmospheres of these planets to probe the thermal structure of the interior \citep{Fortney2020}. While these hot Jupiters do not have the highest TSM of the existing population (TOI-3757 has the highest TSM of $180\pm30$), they are unique in the population as TOI-3629 b is the smallest hot Jupiter orbiting an M dwarf while TOI-3714 is one of the coolest M dwarfs hosting a hot Jupiter. TOI-3714 and TOI-3629 provide an opportunity to examine the prevalence of clouds and photochemical hazes for M dwarf exoplanets. Under certain combinations of temperature and surface gravity, clouds or hazes may form in the visible region of a hot Jupiter atmosphere either through condensation chemistry or photochemical processes \citep[e.g.,][]{Sudarsky2003,Helling2008,Marley2013} and the presence of clouds or hazes may weaken or mask spectral features \citep{Sing2016,Sing2018}. Photochemical processes are more efficient in cooler exoplanets \citep[e.g.,][]{Moses2011} and high incident stellar UV irradiation is thought to enhance the photochemical production of hydrocarbon aerosols \citep[e.g.,][]{Liang2004,Line2010}.
Transmission spectra of TOI-3714 b and TOI-3629 b with JWST would probe atmospheric chemistry of gas giants orbiting M dwarfs and the effects of the higher UV radiation environments of early M dwarfs on atmospheric chemistry \citep[e.g.,][]{Pineda2021}. \section{Summary}\label{sec:summary} We report the discovery of two gas giants orbiting M dwarfs. TOI-3714 b is a hot Jupiter ($M_{p}=0.70 \pm 0.03\mathrm{M_J}$ and $R_{p}=1.01 \pm 0.03\mathrm{R_J}$) on a $P=2.154849 \pm 0.000001$ day orbit. TOI-3629 b is a hot Jupiter ($M_{p}=0.26 \pm 0.02\mathrm{M_J}$ and $R_{p}=0.74 \pm 0.02\mathrm{R_J}$) on a $P=3.936551_{-0.000006}^{+0.000005}$ day orbit. Only TOI-3714 has a detectable rotation period of \(23.3 \pm 0.3\) days and most probably has an age between \(0.7-5.1\) Gyr, which is comparable to the nominal cooling age of its white dwarf companion ($\sim2.4$ Gyr). All hot Jupiters known to transit M dwarfs, including TOI-3714 and TOI-3629, orbit metal-rich early M dwarfs (M0-M3). A larger population size and homogeneously derived metallicities are required to confirm if the correlations with metallicity and stellar mass observed for hot Jupiters orbiting Sun-like stars are also observed in the population of M dwarf gas giants. Constraints from Gaia EDR3 and RVs reject the presence of massive short-period companions in both systems, but TOI-3714 has a resolved white dwarf companion at a projected separation of \(\sim300\) au and most likely on an eccentric orbit. The progenitor may have been close enough to impact the orbit of a nascent TOI-3714 b as it evolved into a white dwarf. TOI-3714 and TOI-3629 are the brightest M dwarfs hosting hot Jupiters ($J<12$) and are amenable to observations during transit to (i) further our understanding of their dynamical history with a measurement of the projected obliquity and (ii) explore the atmospheric chemistry of hot gas giants orbiting cool stars.
{\vskip6pt{\large\it Acknowledgments:}} We thank the anonymous referee for valuable feedback which has improved the quality of this manuscript. We thank Kareem El-Badry and David V. Martin for useful discussions. CIC acknowledges support by NASA Headquarters under the NASA Earth and Space Science Fellowship Program through grant 80NSSC18K1114, the Alfred P. Sloan Foundation's Minority Ph.D. Program through grant G-2016-20166039, and the Pennsylvania State University's Bunton-Waller program. The Center for Exoplanets and Habitable Worlds is supported by the Pennsylvania State University and the Eberly College of Science. The computations for this research were performed on the Pennsylvania State University's Institute for Computational and Data Sciences' Roar supercomputer, including the CyberLAMP cluster supported by NSF grant MRI-1626251. This content is solely the responsibility of the authors and does not necessarily represent the views of the Institute for Computational and Data Sciences. HCH acknowledges the support of the Infosys Membership at the Institute for Advanced Study. TNS acknowledges support from the Wyoming Research Scholars Program. The Pennsylvania State University campuses are located on the original homelands of the Erie, Haudenosaunee (Seneca, Cayuga, Onondaga, Oneida, Mohawk, and Tuscarora), Lenape (Delaware Nation, Delaware Tribe, Stockbridge-Munsee), Shawnee (Absentee, Eastern, and Oklahoma), Susquehannock, and Wahzhazhe (Osage) Nations. As a land grant institution, we acknowledge and honor the traditional caretakers of these lands and strive to understand and model their responsible stewardship. We also acknowledge the longer history of these lands and our place in that history. We acknowledge support from NSF grants AST 1006676, AST 1126413, AST 1310875, AST 1310885, AST 2009554, AST 2009889, AST 2108512 and the NASA Astrobiology Institute (NNA09DA76A) in our pursuit of precision RVs in the near-infrared. 
We acknowledge support from the Heising-Simons Foundation via grant 2017-0494. We acknowledge support from NSF grants AST 1907622, AST 1909506, AST 1909682, AST 1910954 and the Research Corporation in connection with precision diffuser-assisted photometry. This work is Contribution 0046 from the Center for Planetary Systems Habitability at the University of Texas at Austin. These results are based on observations obtained with HPF on the HET. The HET is a joint project of the University of Texas at Austin, the Pennsylvania State University, Ludwig-Maximilians-Universit\"at M\"unchen, and Georg-August-Universit\"at G\"ottingen. The HET is named in honor of its principal benefactors, William P. Hobby and Robert E. Eberly. The HET collaboration acknowledges the support and resources from the Texas Advanced Computing Center. We are grateful to the HET Resident Astronomers and Telescope Operators for their valuable assistance in gathering our HPF data. We would like to acknowledge that the HET is built on Indigenous land. Moreover, we would like to acknowledge and pay our respects to the Carrizo \& Comecrudo, Coahuiltecan, Caddo, Tonkawa, Comanche, Lipan Apache, Alabama-Coushatta, Kickapoo, Tigua Pueblo, and all the American Indian and Indigenous Peoples and communities who have been or have become a part of these lands and territories in Texas, here on Turtle Island. Some of the data presented were obtained by the NEID spectrograph built by the Pennsylvania State University and operated at the WIYN Observatory by NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the NSF, and operated under the NN-EXPLORE partnership of NASA and the NSF. Observations with NEID were obtained under proposals 2021B-0035 (PI: S. Kanodia), 2021B-0435 (PI: S. Kanodia), and 2021B-0438 (PI: C. Ca\~nas).
NEID results included here utilize the Data Reduction Pipeline operated by NExScI and developed under subcontract 1644767 between JPL and the University of Arizona. This work was performed for the Jet Propulsion Laboratory, California Institute of Technology, sponsored by the United States Government under the Prime Contract 80NM0018D0004 between Caltech and NASA. WIYN is a joint facility of the University of Wisconsin-Madison, Indiana University, NSF's NOIRLab, the Pennsylvania State University, Purdue University, University of California-Irvine, and the University of Missouri. The authors are honored to be permitted to conduct astronomical research on Iolkam Du'ag (Kitt Peak), a mountain with particular significance to the Tohono O'odham. Some of the results are based on observations obtained with the Apache Point Observatory 3.5m telescope, which is owned and operated by the Astrophysical Research Consortium. We wish to thank the APO 3.5m telescope operators for their assistance in obtaining these data. Some of the observations in this paper made use of the NN-EXPLORE Exoplanet and Stellar Speckle Imager (NESSI). NESSI was funded by the NASA Exoplanet Exploration Program and the NASA Ames Research Center. NESSI was built at the Ames Research Center by Steve B. Howell, Nic Scott, Elliott P. Horch, and Emmett Quigley. Some of the data presented in this paper were obtained from MAST at STScI. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX09AF08G and by other grants and contracts. This work includes data collected by the TESS mission, which are publicly available from MAST. Funding for the TESS mission is provided by the NASA Science Mission Directorate.
This research made use of the (i) NASA Exoplanet Archive, which is operated by Caltech, under contract with NASA under the Exoplanet Exploration Program, (ii) SIMBAD database, operated at CDS, Strasbourg, France, (iii) NASA's Astrophysics Data System Bibliographic Services, (iv) NASA/IPAC Infrared Science Archive, which is funded by NASA and operated by the California Institute of Technology, and (v) data from 2MASS, a joint project of the University of Massachusetts and IPAC at Caltech, funded by NASA and the NSF. This work has made use of data from the European Space Agency (ESA) mission Gaia (\url{https://www.cosmos.esa.int/gaia}), processed by the Gaia Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. Some of the observations in this paper made use of the Guoshoujing Telescope (LAMOST), a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. Some of the observations in this paper were obtained with the Samuel Oschin Telescope 48-inch and the 60-inch Telescope at the Palomar Observatory as part of the ZTF project. ZTF is supported by the NSF under Grant No. AST-2034437 and a collaboration including Caltech, IPAC, the Weizmann Institute for Science, the Oskar Klein Center at Stockholm University, the University of Maryland, Deutsches Elektronen-Synchrotron and Humboldt University, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, Trinity College Dublin, Lawrence Livermore National Laboratories, and IN2P3, France. Operations are conducted by COO, IPAC, and UW. 
\facilities{ARC (ARCTIC), Exoplanet Archive, Gaia, HET (HPF), IRSA, KPNO:2.1m (Robo-AO), LAMOST, MAST, PO:1.2m (ZTF), PO:1.5m (ZTF), SO:Kuiper, TESS, WIYN (NEID, NESSI)} \software{ \texttt{astroquery} \citep{Ginsburg2019}, \texttt{astropy} \citep{AstropyCollaboration2018}, \texttt{barycorrpy} \citep{Kanodia2018}, \texttt{dynesty} \citep{Speagle2020}, \texttt{EXOFASTv2} \citep{Eastman2019}, \texttt{GLS} \citep{Zechmeister2009}, \texttt{HPF-SpecMatch} (S. Jones et al. 2022), \texttt{juliet} \citep{Espinoza2019}, \texttt{lightkurve} \citep{LightkurveCollaboration2018}, \texttt{matplotlib} \citep{hunter2007}, \texttt{numpy} \citep{vanderwalt2011}, \texttt{pandas} \citep{McKinney2010}, \texttt{scipy} \citep{Virtanen2020}, \texttt{telfit} \citep{Gullikson2014}, \texttt{thejoker} \citep{Price-Whelan2017}, \texttt{TLS} \citep{Hippke2019}, \texttt{WD\_models} \citep{Cheng2019} }
\section{Introduction} \label{sec:intro} In this work we show how the techniques developed in the companion paper \cite{GP} to investigate the stability properties of the cnoidal periodic waves of the cubic defocusing nonlinear Schr\"odinger equation in one space dimension can be extended to provide a new and rather elementary proof of orbital stability in the limiting case of the black soliton. We thus consider the cubic defocusing NLS equation \begin{equation} \label{nls} i \psi_t(x,t) + \psi_{xx}(x,t) - |\psi(x,t)|^2 \psi(x,t) \,=\, 0, \end{equation} where $\psi$ is a complex-valued function of $(x,t) \in \mathbb{R} \times \mathbb{R}$. The black soliton is the particular solution of \eqref{nls} given by $\psi(x,t) = e^{-it}u_0(x)$, where \begin{equation} \label{black-soliton} u_0(x) \,=\, \tanh\Bigl(\frac{x}{\sqrt{2}}\Bigr), \qquad x \in \mathbb{R}. \end{equation} For later use, we note that the soliton profile $u_0 : \mathbb{R} \to \mathbb{R}$ satisfies the differential equations \begin{equation} \label{wave} u_0' \,=\, \frac{1}{\sqrt{2}}\Bigl(1-u_0^2\Bigr), \qquad \hbox{hence}\qquad u_0'' + u_0 - u_0^3 \,=\, 0. \end{equation} The NLS equation \eqref{nls} has many symmetries and conserved quantities, which play a crucial role in the dynamics of the system. In particular, the gauge invariance $\psi \mapsto e^{i\theta}\psi$ and the translation invariance $\psi \mapsto \psi(\cdot-\xi)$ give rise to the conservation of the charge $Q$ and the momentum $M$, respectively, where \begin{equation} \label{QMdef} Q(\psi) \,=\, \int_\mathbb{R} \Bigl(|\psi|^2 -1\Bigr)\,\mathrm{d} x, \qquad M(\psi) \,=\, \frac{i}{2} \int_\mathbb{R} \Bigl(\bar{\psi} \psi_x - \psi \bar{\psi}_x\Bigr) \,\mathrm{d} x. \end{equation} Since the NLS equation \eqref{nls} is an autonomous Hamiltonian system, we also have the conservation of the energy \begin{equation} \label{energy} E(\psi) \,=\, \int_\mathbb{R} \left(|\psi_x|^2 + \frac{1}{2} (1 - |\psi|^2)^2 \right) \,\mathrm{d} x. 
\end{equation} In what follows, our goal is to study the stability of the black soliton \eqref{black-soliton}, and we shall therefore restrict ourselves to solutions of \eqref{nls} for which $|\psi| \to 1$ as $|x| \to \infty$. This is why we defined the conserved quantities \eqref{QMdef}, \eqref{energy} in such a way that the integrands vanish when $|\psi| = 1$ and $\psi_x = 0$. The nonlinear stability of the black soliton \eqref{black-soliton} has been studied in several recent works. In \cite{BGSS} the authors apply the variational method of Cazenave and Lions \cite{CL}, which relies on the fact that the black soliton \eqref{black-soliton} is a global minimizer of the energy $E$ for a fixed value of the momentum $M$. The difficulty with this approach is that the momentum is not defined for all finite-energy solutions, so that the integral defining $M$ in \eqref{QMdef} has to be renormalized and properly interpreted. A slightly different proof was subsequently given in \cite{GS}, in the spirit of the work by Weinstein \cite{We} and Grillakis, Shatah, and Strauss \cite{GSS}. The main idea is to show that the energy functional \eqref{energy} becomes coercive in a neighborhood of the black soliton \eqref{black-soliton} if the conservation of the momentum is used to get rid of one unstable direction. Both results in \cite{BGSS,GS} are variational in nature and establish orbital stability of the black soliton in the energy space. Note that asymptotic stability of the black soliton is also proved in \cite{GS}, using ideas and techniques developed by Martel and Merle for the generalized Korteweg-de Vries equation \cite{MM}. In a different direction, a more precise orbital stability result was obtained in \cite{GZ} for sufficiently smooth and localized perturbations, using the inverse scattering transform method which relies on the integrability of the cubic defocusing NLS equation \eqref{nls}. 
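The profile equations \eqref{wave} can be verified symbolically; the following sketch (a SymPy check written for illustration, not part of the analysis) confirms both identities:

```python
# Symbolic check of the profile equations: u0' = (1 - u0^2)/sqrt(2) and
# u0'' + u0 - u0^3 = 0 for the black soliton profile u0(x) = tanh(x/sqrt(2)).
import sympy as sp

x = sp.symbols('x', real=True)
u0 = sp.tanh(x / sp.sqrt(2))

first_order = sp.diff(u0, x) - (1 - u0**2) / sp.sqrt(2)
second_order = sp.diff(u0, x, 2) + u0 - u0**3

assert sp.simplify(first_order) == 0
assert sp.simplify(second_order) == 0
print("profile equations hold")
```

Both identities are used repeatedly in the computations below.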
Similarly, asymptotic stability of the black soliton and several dark solitons was recently proved in \cite{Cuccagna-private}. As a consequence of integrability, the NLS equation \eqref{nls} has many conserved quantities in addition to the charge, the momentum, and the energy. In the present work, we introduce a new variational approach based on the higher-order functional \begin{equation} \label{Sdef} S(\psi) \,=\, \int_{\mathbb{R}} \left( |\psi_{xx}|^2 + 3 |\psi|^2 |\psi_x|^2 + \frac{1}{2} (\bar{\psi} \psi_x + \psi \bar{\psi}_x)^2 + (1 - |\psi|^2)^2 \Bigl( 1 + \frac{1}{2} |\psi|^2 \Bigr) \right) \,\mathrm{d} x, \end{equation} which is also conserved under the evolution defined by \eqref{nls}. The latter claim can be proved by a straightforward but cumbersome calculation, or by more educated techniques as described, e.g., in \cite[Section~2.3]{Yang}. The natural domain of definition for the functional \eqref{Sdef} is the $H^2$ energy space defined by \begin{equation} \label{Xdef} X \,=\, \Bigl\{\psi \in H^2_{\rm loc}(\mathbb{R})\,: \quad \psi_x \in H^1(\mathbb{R}), ~1 - |\psi|^2 \in L^2(\mathbb{R}) \Bigr\}. \end{equation} Indeed, if $\psi \in X$, then $\zeta := 1 - |\psi|$ belongs to $H^1(\mathbb{R})$, because $|\zeta| \le |1 - |\psi|^2| \in L^2(\mathbb{R})$ and $\zeta_x = -|\psi|_x \in L^2(\mathbb{R})$. By Sobolev's embedding of $H^1(\mathbb{R})$ into $L^{\infty}(\mathbb{R})$, we thus have $|\psi| = 1 - \zeta \in L^\infty(\mathbb{R})$, and from the definitions (\ref{Sdef}) and (\ref{Xdef}), it follows easily that $S(\psi) < \infty$. Since $u_0'$, $u_0''$, and $1 - u_0^2$ decay exponentially to zero as $|x| \to \infty$, it is clear that $u_0 + H^2(\mathbb{R}) \subset X$, so that the functional \eqref{Sdef} is well defined for $H^2$ perturbations of the soliton profile $u_0$.
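As a concrete illustration, one can evaluate $S$ and $E$ at the black soliton itself by quadrature; the closed-form targets $S(u_0) = 12\sqrt{2}/5$ and $E(u_0) = 4\sqrt{2}/3$ used in the sketch below are our own evaluation, stated only as a cross-check and not taken from the text:

```python
# Quadrature evaluation of S(u0) and E(u0) for the real profile
# u0 = tanh(x/sqrt(2)). The closed-form targets 12*sqrt(2)/5 and
# 4*sqrt(2)/3 are our own evaluation, used here only as a cross-check.
import numpy as np

x = np.linspace(-40.0, 40.0, 400001)
dx = x[1] - x[0]
u = np.tanh(x / np.sqrt(2.0))
ux = (1.0 - u**2) / np.sqrt(2.0)    # profile equation for u0'
uxx = u**3 - u                       # profile equation for u0''

# S-integrand for a real field: the cross term reduces to (2 u u_x)^2 / 2.
s_density = uxx**2 + 3.0*u**2*ux**2 + 2.0*u**2*ux**2 \
            + (1.0 - u**2)**2 * (1.0 + 0.5*u**2)
e_density = ux**2 + 0.5*(1.0 - u**2)**2

S0 = np.sum(s_density) * dx
E0 = np.sum(e_density) * dx
print(S0 - 12.0*np.sqrt(2.0)/5.0, E0 - 4.0*np.sqrt(2.0)/3.0)  # both ~ 0
```

In particular, $S(u_0)$ is finite, consistent with $u_0 + H^2(\mathbb{R}) \subset X$.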
This allows us to define the differential of $S$ at $u_0$, and a direct calculation using the differential equations \eqref{wave} reveals that $u_0$ is a {\em critical point} of $S$, in the sense that $S'(u_0) = 0$. Unfortunately, the second variation $S''(u_0)$ has no definite sign \cite{GP}, hence it is not possible to prove orbital stability of the black soliton using the functional $S$ alone. As is explained in the companion paper \cite{GP}, which is devoted to the stability of periodic waves for the NLS equation \eqref{nls}, it is possible to cure that problem by subtracting from $S$ an appropriate multiple of the energy $E$, which is well defined on $X$ and also satisfies $E'(u_0) = 0$. The optimal choice is \begin{equation} \label{Lamdef} \Lambda(\psi) \,=\, S(\psi) - 2 E(\psi), \qquad \psi \in X. \end{equation} We then have $\Lambda'(u_0) = 0$, and the starting point of our approach is the following result, which asserts that the second variation $\Lambda''(u_0)$ is nonnegative. \begin{proposition} \label{prop-main} The second variation of the functional \eqref{Lamdef} at the black soliton \eqref{black-soliton} is nonnegative for perturbations in $H^2(\mathbb{R})$. \end{proposition} It is important to realize that Proposition~\ref{prop-main} gives an {\em unconstrained} variational characterization of the black soliton $u_0$, which is our main motivation for introducing the higher-order conserved quantity \eqref{Sdef}. In contrast, the approach in \cite{BGSS,GS} relies on the fact that $u_0$ is a minimum of the energy $E(\psi)$ subject to the constraint $\mathcal{M}(\psi) = \mathcal{M}(u_0)$, where $\mathcal{M}$ is a suitably renormalized version of the momentum $M$ defined in \eqref{QMdef}. 
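The computation showing $S'(u_0) = 0$ (and likewise $E'(u_0) = 0$) can be reproduced mechanically: for real-valued variations it amounts to evaluating the Euler--Lagrange expression of the corresponding density at $u_0$, while the first variation in the imaginary direction vanishes at the real profile by parity of the densities. A SymPy sketch of this check (written for illustration only):

```python
# Euler-Lagrange check that u0 = tanh(x/sqrt(2)) is a critical point of the
# functionals S and E restricted to real-valued fields; variations in the
# imaginary direction vanish at the real profile u0 by parity.
import sympy as sp

x = sp.symbols('x', real=True)
u = sp.Function('u')
ux, uxx = u(x).diff(x), u(x).diff(x, 2)

# Densities of S and E for a real field psi = u(x):
S_dens = (uxx**2 + 3*u(x)**2*ux**2 + sp.Rational(1, 2)*(2*u(x)*ux)**2
          + (1 - u(x)**2)**2*(1 + u(x)**2/2))
E_dens = ux**2 + sp.Rational(1, 2)*(1 - u(x)**2)**2

def euler_lagrange(dens):
    # dL/du - d/dx (dL/du_x) + d^2/dx^2 (dL/du_xx)
    return (sp.diff(dens, u(x)) - sp.diff(dens, ux).diff(x)
            + sp.diff(dens, uxx).diff(x, 2))

u0 = sp.tanh(x / sp.sqrt(2))
residuals = [sp.simplify(sp.expand(euler_lagrange(d).subs(u(x), u0).doit()))
             for d in (S_dens, E_dens)]
print(residuals)  # both Euler-Lagrange residuals vanish identically
```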
The proof of Proposition~\ref{prop-main} developed in Section \ref{sec:positive} actually shows that the second variation $\Lambda''(u_0)$ is positive except for degeneracies due to symmetries: the nonnegative self-adjoint operator associated with $\Lambda''(u_0)$ has a simple zero eigenvalue which is due to translation invariance, and the essential spectrum extends all the way to the origin due to gauge invariance. As a consequence, perturbations in $H^2(\mathbb{R})$ can include slow modulations of the phase of the black soliton far away from the origin, which hardly increase the functional $\Lambda$. This means that the second variation $\Lambda''(u_0)$ is not coercive in $H^2(\mathbb{R})$, even if modulation parameters are used to remove the zero modes due to the symmetries. For that reason, we are not able to control the perturbations of the black soliton in the topology of $H^2(\mathbb{R})$, but only in a weaker sense that allows for a slow drift of the phase at infinity, see Section~\ref{sec:modulation} below for a more detailed discussion. To formulate our main result, we equip the space $X$ with the distance \begin{equation} \label{distance} d_R(\psi_1,\psi_2) \,=\, \|(\psi_1 - \psi_2)_x\|_{H^1(\mathbb{R})} + \| |\psi_1|^2 - |\psi_2|^2\|_{L^2(\mathbb{R})} + \| \psi_1-\psi_2\|_{L^2(-R,R)}, \end{equation} where $R \ge 1$ is a parameter. Note that $d_R$ is the exact analogue, at the $H^2$ level, of the distance that is used in previous variational studies of the black soliton, including \cite{BGSS,Gerard,GS}. As is easily verified, a function $\psi \in H^2_{\rm loc}(\mathbb{R})$ belongs to $X$ if and only if $d_R(\psi,u_0) < \infty$; moreover, different choices of $R$ give equivalent distances on $X$. To prove orbital stability of the black soliton with profile $u_0$, the idea is to consider solutions $\psi$ of the NLS equation \eqref{nls} for which $d_R(\psi,u_0)$ is small. 
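To see concretely why the topology induced by $d_R$ is weaker than that of $H^2(\mathbb{R})$, one can take $\psi = e^{i\theta(x)} u_0$ with a phase that drifts slowly to a constant at infinity; the numerical sketch below (the phase profile and all parameters are illustrative choices, not taken from the analysis) shows that $d_R(\psi,u_0)$ remains small while $\|\psi - u_0\|_{L^2(\mathbb{R})}$ does not:

```python
# Illustration: a slow phase drift theta(x) = delta*tanh(x/L) applied to u0
# keeps d_R(psi, u0) small although psi - u0 is large in L^2(R).
# All numerical choices (delta, L, grid, R) are illustrative.
import numpy as np

x = np.linspace(-200.0, 200.0, 200001)
dx = x[1] - x[0]
u0 = np.tanh(x / np.sqrt(2.0))
theta = 0.5 * np.tanh(x / 50.0)          # phase drifts slowly to +/- 0.5
psi = np.exp(1j * theta) * u0

def l2(f, window=None):
    mask = np.full(x.shape, True) if window is None else np.abs(x) <= window
    return np.sqrt(np.sum(np.abs(f[mask])**2) * dx)

diff = psi - u0
diff_x = np.gradient(diff, dx)
diff_xx = np.gradient(diff_x, dx)

R = 1.0
d_R = (np.sqrt(l2(diff_x)**2 + l2(diff_xx)**2)   # H^1 norm of (psi - u0)_x
       + l2(np.abs(psi)**2 - u0**2)              # vanishes here: |psi| = |u0|
       + l2(diff, window=R))                     # L^2 norm on (-R, R) only

print(d_R, l2(diff))   # d_R stays small; the global L^2 difference does not
```

In practice one therefore considers solutions for which $d_R(\psi,u_0)$ is small.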
This is certainly the case if $\|\psi-u_0\|_{H^2}$ is small, but the converse is not true because $d_R(\psi,u_0)$ does not control the $L^2$ norm of the difference $\psi - u_0$ on the whole real line. We shall prove in Section~\ref{sec:stability} that the distance $d_R$ is well adapted to the functional $\Lambda$ near $u_0$, in the sense that \begin{equation} \label{Lamcoer} \Lambda(\psi) - \Lambda(u_0) \,\ge\, C d_R(\psi,u_0)^2 \qquad \hbox{when}\quad d_R(\psi,u_0) \ll 1, \end{equation} provided the perturbation $\psi-u_0$ satisfies a pair of orthogonality conditions. As is usual in orbital stability theory, these orthogonality conditions can be fulfilled if we replace $\psi$ by $e^{i\theta} \psi(\cdot+\xi)$ for some appropriate modulation parameters $\theta,\xi \in \mathbb{R}$, see Section~\ref{sec:modulation} below. It is then easy to deduce from \eqref{Lamcoer} that solutions of the NLS equation \eqref{nls} with initial data $\psi_0$ satisfying $d_R(\psi_0,u_0) \ll 1$ will stay close for all times to the orbit of the black soliton under the group of translations and phase rotations. The precise statement is: \begin{theorem} \label{theorem-soliton} Fix $R \ge 1$ and let $u_0 \in X$ be the black soliton \eqref{black-soliton}. Given any $\epsilon > 0$, there exists $\delta > 0$ such that, for any $\psi_0 \in X$ satisfying \begin{equation} \label{bound-initial} d_R(\psi_0,u_0) \,\le\, \delta, \end{equation} the global solution $\psi(\cdot,t)$ of the NLS equation \eqref{nls} with initial data $\psi_0$ has the following property. For any $t \in \mathbb{R}$, there exist $\xi(t) \in \mathbb{R}$ and $\theta(t) \in \mathbb{R}/(2\pi\mathbb{Z})$ such that \begin{equation} \label{bound-final} d_R\Bigl(e^{i (t + \theta(t))} \psi(\cdot + \xi(t),t)\,,u_0\Bigr) \,\le\, \epsilon.
\end{equation} Moreover $\xi$ and $\theta$ are continuously differentiable functions of $t$ which satisfy \begin{equation} \label{bound-time-per} |\dot \xi(t)| + |\dot \theta(t)| \,\le\, C \epsilon, \quad t \in \mathbb{R}, \end{equation} for some positive constant $C$. \end{theorem} \begin{remark} It is known from the work of Zhidkov \cite{Zhidkov} that the Cauchy problem for the NLS equation \eqref{nls} is globally well-posed in $X$. This is the functional framework that is used to define solutions of \eqref{nls} in Theorem~\ref{theorem-soliton}. \end{remark} \begin{remark} Except for the use of a different distance $d_R$, which controls the perturbations in the topology of $H^2_{\rm loc}(\mathbb{R})$, Theorem~\ref{theorem-soliton} is the exact analogue of the orbital stability results obtained in \cite{BGSS,GS}. However the proof is quite different, and in some sense simpler, because the profile $u_0$ of the black soliton is an unconstrained local minimizer of the higher-order functional $\Lambda$. \end{remark} \begin{remark} It is also possible to prove asymptotic stability results for the black soliton of the cubic NLS equation \eqref{nls}. In that perspective, it is useful to consider the black soliton as a member of the one-parameter family of traveling dark solitons, given by the exact expression \begin{equation} \label{darksoliton} e^{it}\psi_{\nu}(x + \nu t,t) \,=\, \sqrt{{\textstyle 1 - \frac{1}{2} \nu^2}} \,\tanh\left(\sqrt{{\textstyle\frac{1}{2} - \frac{1}{4} \nu^2}}\,x\right) + \frac{i\nu}{\sqrt{2}}, \end{equation} where $\nu \in (-\sqrt{2},\sqrt{2})$. Asymptotic stability of the family of dark solitons with nonzero speed $\nu$ was proved in \cite{Bethuel}, using the Madelung transformation and the hydrodynamic formulation of the NLS equation. This approach applies to solutions whose modulus is strictly positive, and therefore excludes the case of the black soliton. 
Very recently, the asymptotic stability of the black soliton (within the one-parameter family of all dark solitons) has been established in \cite{Cuccagna-private,GS}. \end{remark} The rest of this article is organized as follows. In Section~\ref{sec:positive} we establish positivity and coercivity properties for the quadratic form associated with the second variation of the functional \eqref{Lamdef} at $u_0$. In Section~\ref{sec:modulation}, we introduce modulation parameters in a neighborhood of the soliton profile to eliminate the zero modes of the second variation $\Lambda''(u_0)$. Combining these results and using a new variable borrowed from \cite{GS}, we prove in Section~\ref{sec:stability} the orbital stability of the black soliton \eqref{black-soliton} in the space $X$. \section{Positivity and coercivity of the second variation} \label{sec:positive} Let $u_0$ be the soliton profile \eqref{black-soliton} and $\Lambda = S - 2E$ be the functional defined by \eqref{energy}, \eqref{Sdef}, and \eqref{Lamdef}. In this section, we prove that the second variation $\Lambda''(u_0)$ is nonnegative, as stated in Proposition~\ref{prop-main}, and we deduce some coercivity properties that will be used in the proof of Theorem~\ref{theorem-soliton}. We consider perturbations of $u_0$ of the form $\psi = u_0 + u + i v$, where $u,v \in H^2(\mathbb{R})$ are real-valued. As in \cite{GP}, the second variations at $u_0$ of the functionals $E$ and $S$ satisfy \begin{align*} \textstyle\frac12 \langle E''(u_0)[u,v], [u,v]\rangle \,&=\, \langle L_+ u,u\rangle_{L^2} + \langle L_- v,v\rangle_{L^2}, \\[1mm] \textstyle\frac12 \langle S''(u_0)[u,v], [u,v]\rangle \,&=\, \langle M_+ u,u\rangle_{L^2} + \langle M_- v,v\rangle_{L^2}, \end{align*} where $\langle\cdot\,,\cdot\rangle_{L^2}$ denotes the usual scalar product in $L^2(\mathbb{R})$.
The self-adjoint operators $L_\pm$ and $M_\pm$ have the following expressions: \begin{equation} \label{operatorsdef} \begin{array}{l} L_+ \,=\, -\partial_x^2 + 3 u_0^2 - 1, \\[1mm] L_- \,=\, -\partial_x^2 + u_0^2 - 1, \end{array} \qquad \begin{array}{lcl} M_+ \,=\, \partial_x^4 - 5 \partial_x u_0^2 \partial_x -5 u_0^4 + 15 u_0^2 - 4, \\[1mm] M_- \,=\, \partial_x^4 - 3 \partial_x u_0^2 \partial_x + u_0^2 - 1. \end{array} \end{equation} In view of \eqref{Lamdef}, it follows that \begin{equation} \label{Lambdasecond} \textstyle\frac12 \langle \Lambda''(u_0)[u,v], [u,v]\rangle \,=\, \langle K_+ u,u\rangle_{L^2} + \langle K_- v,v\rangle_{L^2}, \end{equation} where $K_{\pm} = M_{\pm} - 2L_{\pm}$. More explicitly, the quadratic forms associated with $K_\pm$ are given by \begin{align} \label{Kquad+} \langle K_+ u,u\rangle_{L^2} \,&=\, \int_\mathbb{R} \Bigl(u_{xx}^2 + (5u_0^2-2)u_x^2 + (9u_0^2 -5u_0^4-2)u^2\Bigr)\,\mathrm{d} x, \\ \label{Kquad-} \langle K_- v,v\rangle_{L^2} \,&=\, \int_\mathbb{R} \Bigl(v_{xx}^2 + (3u_0^2-2)v_x^2 + (1-u_0^2)v^2\Bigr)\,\mathrm{d} x. \end{align} Our first task is to show that the quadratic forms \eqref{Kquad+}, \eqref{Kquad-} are nonnegative on $H^2(\mathbb{R})$. Due to translation invariance of the NLS equation \eqref{nls}, we have $L_+ u_0' = M_+ u_0' = 0$, hence also $K_+ u_0' = 0$. As $u_0' \in H^2(\mathbb{R})$, this shows that the quadratic form associated with $K_+$ has a neutral direction, hence is not strictly positive, see Lemma \ref{lemma-K-plus} below. The situation is slightly different for $K_-$: due to gauge invariance, we have $L_- u_0 = M_- u_0 = 0$, hence $K_- u_0 = 0$, but of course $u_0 \not\in H^2(\mathbb{R})$. In fact, the result of Lemma \ref{lemma-K-minus} below shows that the quadratic form associated with $K_-$ is strictly positive on $H^2(\mathbb{R})$. \medskip We first prove that the quadratic form \eqref{Kquad+} is nonnegative, see also \cite[Corollary~4.5]{GP}. 
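The kernel relations invoked here ($L_+ u_0' = M_+ u_0' = 0$, hence $K_+ u_0' = 0$, and $L_- u_0 = M_- u_0 = 0$, hence $K_- u_0 = 0$) can be confirmed symbolically; a SymPy sketch, written for illustration only:

```python
# Symbolic check of the kernel relations: L_+ u0' = M_+ u0' = 0 and
# L_- u0 = M_- u0 = 0 for u0 = tanh(x/sqrt(2)), hence K_+ u0' = K_- u0 = 0.
import sympy as sp

x = sp.symbols('x', real=True)
u0 = sp.tanh(x / sp.sqrt(2))
D = lambda f: sp.diff(f, x)

def L_plus(f):
    return -D(D(f)) + (3*u0**2 - 1)*f

def L_minus(f):
    return -D(D(f)) + (u0**2 - 1)*f

def M_plus(f):
    return D(D(D(D(f)))) - 5*D(u0**2 * D(f)) + (-5*u0**4 + 15*u0**2 - 4)*f

def M_minus(f):
    return D(D(D(D(f)))) - 3*D(u0**2 * D(f)) + (u0**2 - 1)*f

for op, f in ((L_plus, D(u0)), (M_plus, D(u0)), (L_minus, u0), (M_minus, u0)):
    assert sp.simplify(sp.expand(op(f))) == 0
print("kernel relations hold")
```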
\begin{lemma} \label{lemma-K-plus} For any $u \in H^2(\mathbb{R})$, we have \begin{equation} \label{operator-K-plus-soliton} \langle K_+ u, u \rangle_{L^2} \,=\, \|w_x\|_{L^2}^2 + \|w\|_{L^2}^2 \,\ge\, 0, \end{equation} where $w = u_x + \sqrt{2} u_0 u$. \end{lemma} \begin{proof} Integrating by parts and using the differential equations \eqref{wave} satisfied by $u_0$, we easily obtain \begin{equation}\label{LemK+1} \int_\mathbb{R} w^2 \,\mathrm{d} x \,=\, \int_\mathbb{R} \Bigl(u_x^2 + 2\sqrt{2}u_0 u u_x + 2u_0^2 u^2\Bigr) \,\mathrm{d} x \,=\, \int_\mathbb{R} \Bigl(u_x^2 + (3u_0^2-1)u^2\Bigr)\,\mathrm{d} x. \end{equation} Similarly, as $w_x = u_{xx} + \sqrt{2}u_0u_x + \sqrt{2}u_0'u$, we find \begin{align}\nonumber \int_\mathbb{R} w_x^2 \,\mathrm{d} x \,&=\, \int_\mathbb{R} \Bigl(u_{xx}^2 + 2\sqrt{2}u_0 u_x u_{xx}+ 2u_0^2 u_x^2 + 2\sqrt{2}u_0' uu_{xx} + 4u_0 u_0' uu_x + 2u_0'^2 u^2\Bigr)\,\mathrm{d} x \\ \nonumber \,&=\, \int_\mathbb{R} \Bigl(u_{xx}^2 + (5u_0^2-3)u_x^2 + 8u_0 u_0' uu_x + 2u_0'^2 u^2\Bigr)\,\mathrm{d} x \\ \label{LemK+2} \,&=\, \int_\mathbb{R} \Bigl(u_{xx}^2 + (5u_0^2-3)u_x^2 + (1-u_0^2)(5u_0^2-1) u^2\Bigr)\,\mathrm{d} x, \end{align} because $2u_0'^2 - 4(u_0u_0')' = (1-u_0^2)(5u_0^2-1)$. If we now combine \eqref{LemK+1} and \eqref{LemK+2}, we see that $\|w_x\|_{L^2}^2 + \|w\|_{L^2}^2$ is equal to the right-hand side of \eqref{Kquad+}, which is the desired conclusion. \end{proof} \begin{remark} \label{remark-black-plus} The right-hand side of \eqref{operator-K-plus-soliton} vanishes if and only if $w = 0$, which is equivalent to $u = C u_0'$ for some constant $C$. Thus zero is a simple eigenvalue of $K_+$ in $L^2(\mathbb{R})$. Moreover, since $u_0(x) \to \pm 1$ as $x \to \pm \infty$, it is clear from \eqref{Kquad+} that the essential spectrum of $K_+$ is the interval $[2,\infty)$. 
Thus if we restrict ourselves to the orthogonal complement of $u_0'$ with respect to the scalar product $\langle\cdot\,, \cdot\rangle_{L^2}$, the spectrum of $K_+$ is bounded from below by a strictly positive constant, and the corresponding quadratic form is thus coercive in the topology of $H^2(\mathbb{R})$, see Remark~\ref{oldcond} below. \end{remark} We next prove the positivity of the quadratic form \eqref{Kquad-}, see also \cite[Lemma~4.1]{GP}. \begin{lemma} \label{lemma-K-minus} For any $v \in H^2(\mathbb{R})$, we have \begin{equation} \label{operator-K-minus} \langle K_- v, v \rangle_{L^2} \,=\, \| L_- v \|_{L^2}^2 + \| u_0 v_x - u_0' v \|_{L^2}^2 \,\ge\, 0, \end{equation} where $L_- = -\partial_x^2 + u_0^2 - 1$. \end{lemma} \begin{proof} Integrating by parts we obtain \begin{align*} \int_\mathbb{R} (L_-v)^2 \,\mathrm{d} x \,&=\, \int_{\mathbb{R}} \Bigl(v_{xx}^2 + 2(1 - u_0^2) v v_{xx} + (1-u_0^2)^2 v^2 \Bigr) \,\mathrm{d} x \\ \,&=\, \int_{\mathbb{R}} \Bigl(v_{xx}^2 + 2(u_0^2-1) v_x^2 -2(u_0 u_0')'v^2 + (1-u_0^2)^2v^2 \Bigr) \,\mathrm{d} x. \end{align*} Similarly, we have \[ \int_\mathbb{R} \Bigl(u_0 v_x - u_0' v\Bigr)^2\,\mathrm{d} x \,=\, \int_{\mathbb{R}} \Bigl(u_0^2 v_x^2 + (u_0u_0')'v^2 + u_0'^2 v^2\Bigr) \,\mathrm{d} x. \] It follows that \[ \|L_- v\|_{L^2}^2 + \|u_0 v_x - u_0' v\|_{L^2}^2 \,=\, \int_{\mathbb{R}} \Bigl(v_{xx}^2 + (3 u_0^2 - 2) v_x^2 + [(1-u_0^2)^2 - u_0 u_0''] v^2 \Bigl)\,\mathrm{d} x, \] and that expression coincides with the right-hand side of \eqref{Kquad-} since $(1-u_0^2)^2 - u_0 u_0'' = 1 - u_0^2$ by \eqref{wave}. This proves \eqref{operator-K-minus}. \end{proof} \begin{remark} The right-hand side of \eqref{operator-K-minus} vanishes if and only if $L_-v = 0$ and $u_0 v_x - u_0' v = 0$, namely if $v = C u_0$ for some constant $C$. As $u_0 \notin H^2(\mathbb{R})$, this shows that $\langle K_- v, v \rangle_{L^2} > 0$ for any nonzero $v \in H^2(\mathbb{R})$. 
However, since $|u_0(x)| \to 1$ as $|x| \to \infty$, it is clear from the representation \eqref{Kquad-} that zero belongs to the essential spectrum of the operator $K_-$, hence the associated quadratic form is not coercive in the topology of $H^2(\mathbb{R})$. Some weaker coercivity property will nevertheless be established below, see Remark~\ref{newcond}. \end{remark} \begin{remark} In view of the decomposition \eqref{Lambdasecond}, Proposition~\ref{prop-main} is an immediate consequence of Lemmas~\ref{lemma-K-plus} and \ref{lemma-K-minus}. \end{remark} In the rest of this section, we show that the quadratic forms \eqref{Kquad+}, \eqref{Kquad-} are not only positive, but also coercive in some appropriate sense. \begin{lemma} \label{lemma-soliton-1} Let $u_0$ be the black soliton \eqref{black-soliton}. There exists a positive constant $C$ such that, for any $u \in H^2(\mathbb{R})$ satisfying $\langle u_0', u \rangle_{L^2} = 0$, we have the estimate \begin{equation} \label{bound-u} \| u \|_{H^2} \,\le\, C \| w \|_{H^1}, \end{equation} where $w = u_x + \sqrt{2} u_0 u$. \end{lemma} \begin{proof} Solving the linear differential equation $u_x + \sqrt{2} u_0 u = w$ by Duhamel's formula, we find $u = A u_0' + W$ for some $A \in \mathbb{R}$, where \begin{equation} \label{variation-u} W(x) \,=\, \int_0^x K(x,y) w(y) \,\mathrm{d} y, \qquad K(x,y) \,=\, \frac{\cosh^2(y/\sqrt{2})}{\cosh^2(x/\sqrt{2})}. \end{equation} The constant $A$ is uniquely determined by the orthogonality condition $\langle u_0', u \rangle_{L^2} = 0$, which implies that $A \|u_0'\|_{L^2}^2 + \langle u_0', W \rangle_{L^2} = 0$. 
Using \eqref{variation-u}, we easily obtain \begin{align}\nonumber \langle u_0', W \rangle_{L^2} \,&=\, \int_{-\infty}^\infty \biggl\{\int_0^x K(x,y) w(y) \,\mathrm{d} y\biggr\}u_0'(x) \,\mathrm{d} x \\ \nonumber \,&=\, \int_0^\infty \biggl\{\int_y^\infty K(x,y) u_0'(x)\,\mathrm{d} x\biggr\}\Bigl(w(y)-w(-y)\Bigr) \,\mathrm{d} y \\ \label{intex} \,&=\, \frac{1}{3} \int_0^\infty e^{-\sqrt{2}y}\,\frac{3+e^{-\sqrt{2}y}}{ 1+e^{-\sqrt{2}y}}\Bigl(w(y)-w(-y)\Bigr) \,\mathrm{d} y, \end{align} hence $|\langle u_0', W \rangle_{L^2}| \le 2^{-1/4}\|w\|_{L^2}$. It follows that $|A| \le C\|w\|_{L^2}$ for some $C > 0$. On the other hand, if we introduce the operator notation $W = \hat{K}(w)$ for the representation \eqref{variation-u}, we note that $\hat{K}$ is a bounded operator from $L^{\infty}(\mathbb{R})$ to $L^{\infty}(\mathbb{R})$ with norm $$ K_{\infty} \,=\, \sup_{x \in \mathbb{R}} \int_0^{|x|} K(x,y) \,\mathrm{d} y \,=\, \frac{1}{\sqrt{2}} \sup_{x \in \mathbb{R}} \frac{1 + 2 \sqrt{2} |x| e^{-\sqrt{2}|x|} - e^{-2 \sqrt{2}|x|}}{1 + 2 e^{-\sqrt{2}|x|} + e^{-2 \sqrt{2}|x|}} \,<\, \infty, $$ as well as a bounded operator from $L^1(\mathbb{R})$ to $L^1(\mathbb{R})$ with norm $$ K_1 \,=\, \sup_{y \in \mathbb{R}} \int_{|y|}^{\infty} K(x,y) \,\mathrm{d} x \,=\, \frac{1}{\sqrt{2}} \sup_{y \in \mathbb{R}} \Bigl(1 + e^{-\sqrt{2}|y|}\Bigr) = \sqrt{2}. $$ By the Riesz-Thorin interpolation theorem, it follows that $\hat{K}$ is a bounded operator from $L^2(\mathbb{R})$ to $L^2(\mathbb{R})$, and we have the estimate $\|W\|_{L^2} = \|\hat{K}(w) \|_{L^2} \leq (K_1 K_{\infty})^{1/2} \|w\|_{L^2}$. Summarizing, we have shown that $\|u\|_{L^2} \le |A| \|u_0'\|_{L^2} + \|W\|_{L^2} \le C \|w\|_{L^2}$ for some $C > 0$. Since $w = u_x + \sqrt{2} u_0 u$, we also have $\|u_x\|_{L^2} \le \|w\|_{L^2} + \sqrt{2}\|u\|_{L^2}$ and (after differentiating) $\|u_{xx}\|_{L^2} \le \|w_x\|_{L^2} + \sqrt{2} \|u_x\|_{L^2} + \| u \|_{L^2}$. This proves the bound \eqref{bound-u}. 
\end{proof} \begin{remark}\label{oldcond} Combining \eqref{operator-K-plus-soliton} and \eqref{bound-u}, we conclude that there exists a constant $C_+ > 0$ such that \begin{equation} \label{K+coercive} \langle K_+ u, u \rangle_{L^2} \,\ge\, C_+ \|u\|_{H^2}^2, \end{equation} for all $u \in H^2(\mathbb{R})$ satisfying $\langle u_0', u \rangle_{L^2} = 0$. \end{remark} \begin{lemma} \label{lemma-soliton-2} Let $u_0$ be the black soliton \eqref{black-soliton}. There exists a positive constant $C$ such that, for any $v \in H^2_{\rm loc}(\mathbb{R})$ satisfying $v_x \in H^1(\mathbb{R})$ and $\langle u_0'', v \rangle_{L^2} = 0$, we have the estimate \begin{equation} \label{bound-v} \| v_{xx} \|_{L^2} + \| v_x \|_{L^2} + |v(0)| \,\le\, C(\| p \|_{L^2} + \| q \|_{L^2}), \end{equation} where $p = u_0 v_x - u_0' v$ and $q = -L_- v = v_{xx} + (1-u_0^2) v$. \end{lemma} \begin{proof} Any solution of the linear differential equation $u_0 v_x - u_0' v = p$ has the form $v = B u_0 + Z$ for some $B \in \mathbb{R}$, where \begin{equation} \label{variation-v} Z(x) \,=\, u_0(x)\int_0^x \Bigl(p(y) + \sqrt{2}q(y)\Bigr)\,\mathrm{d} y - \sqrt{2}p(x). \end{equation} Indeed, we observe that $p_x = u_0 v_{xx} - u_0'' v = u_0 (v_{xx} + (1-u_0^2) v) = u_0 q$. Thus, if $v = B u_0 + Z$, we have \begin{equation} \label{variation-v2} v_x(x) \,=\, u_0'(x)\left(B + \int_0^x \Bigl(p(y) + \sqrt{2}q(y) \Bigr)\,\mathrm{d} y\right) + u_0(x)p(x), \end{equation} hence $u_0 v_x - u_0' v = (u_0^2 + \sqrt{2}u_0')p = p$, where we used the identity $u_0^2 + \sqrt{2}\,u_0' = 1$ satisfied by the black soliton profile. The constant $B$ is uniquely determined by the orthogonality condition $\langle u_0'', v \rangle_{L^2} = 0$, which implies that $B \|u_0'\|_{L^2}^2 = \langle u_0'',Z\rangle$. Since $p \in L^2(\mathbb{R})$ and $p_x = u_0 q \in L^2(\mathbb{R})$, we have $p \in L^\infty(\mathbb{R})$ by Sobolev's embedding, with the bound $\|p\|_{L^\infty}^2 \le \|p\|_{L^2} \|p_x\|_{L^2} \le \|p\|_{L^2}\|q\|_{L^2}$. 
Thus, using \eqref{variation-v} and H\"older's inequality, we deduce that $$ |Z(x)| \,\le\, \sqrt{2} (|x|^{1/2} + 1)(\|p\|_{L^2} + \|q\|_{L^2}), \quad x \in \mathbb{R}. $$ This moderate growth of $Z$ is compensated for by the exponential decay of $u_0''$ to zero at infinity, and we obtain $|\langle u_0'',Z\rangle| \le C(\|p\|_{L^2} + \|q\|_{L^2})$ for some $C > 0$, hence also $|B| \le C(\|p\|_{L^2} + \|q\|_{L^2})$. In the same way, it follows from \eqref{variation-v2} that $\|v_x\|_{L^2} \le C(\|p\|_{L^2} + \|q\|_{L^2})$. A similar estimate holds for $\|v_{xx}\|_{L^2}$ because $v_{xx} = q - (1 - u_0^2) v$ and $1 - u_0^2$ decays exponentially to zero at infinity. Finally, since $v(0) = -\sqrt{2}p(0)$, we also have $|v(0)| \le C(\|p\|_{L^2} + \|q\|_{L^2})$. This proves the bound \eqref{bound-v}. \end{proof} \begin{remark} \label{newcond} Combining \eqref{operator-K-minus} and \eqref{bound-v}, we conclude that there exists a constant $C_- > 0$ such that \begin{equation} \label{K-coercive} \langle K_- v, v \rangle_{L^2} \,\ge\, C_- \Bigl(\|v_x\|_{H^1}^2 + |v(0)|^2\Bigr), \end{equation} for all $v \in H^2_{\rm loc}(\mathbb{R})$ satisfying $v_x \in H^1(\mathbb{R})$ and $\langle u_0'', v \rangle_{L^2} = 0$. As is clear from the proof of Lemma~\ref{lemma-soliton-2}, we need some orthogonality condition on $v$ to prove estimate \eqref{K-coercive}, and since $u_0 \notin L^2(\mathbb{R})$ we cannot impose $\langle u_0, v \rangle_{L^2} = 0$. Thus we use $u_0'' = u_0(u_0^2-1)$ instead of $u_0$. Although $u_0''$ is only an approximate eigenfunction of $K_-$, the orthogonality condition $\langle u_0'', v \rangle_{L^2} = 0$ is good enough for our purposes, as we shall see in Section~\ref{sec:modulation}. \end{remark} \section{Modulation parameters near the black soliton} \label{sec:modulation} This section contains some important preliminary steps in the proof of Theorem \ref{theorem-soliton}. 
To establish the orbital stability of the black soliton with profile $u_0$, our general strategy is to consider solutions $\psi(x,t)$ of the cubic NLS equation \eqref{nls} of the form \begin{equation} \label{decomposition2} e^{i(t + \theta(t))} \psi(x + \xi(t),t) \,=\, u_0(x) + u(x,t) + i v(x,t), \qquad (x,t) \in \mathbb{R} \times \mathbb{R}, \end{equation} where the perturbations $u,v$ are real-valued and satisfy the orthogonality conditions \begin{equation} \label{projections2} \langle u_0', u(\cdot,t) \rangle_{L^2} \,=\, 0, \qquad \langle u_0'', v(\cdot,t) \rangle_{L^2} \,=\, 0, \qquad t \in \mathbb{R}. \end{equation} As was discussed in Remarks \ref{oldcond} and \ref{newcond}, these conditions are needed to exploit the coercivity properties of the second variation $\Lambda''(u_0)$, where $\Lambda$ is the conserved quantity \eqref{Lamdef}. They also allow us to determine uniquely the ``modulation parameters'', namely the translation $\xi(t)$ and the phase $\theta(t)$, at least for solutions $\psi(x,t)$ in a small neighborhood of the black soliton. To make these considerations rigorous, we first need to specify in which topology that neighborhood is understood; in other words, we need to choose an appropriate perturbation space. Next we have to verify that the modulation parameters exist and depend smoothly on the solution $\psi(x,t)$ in the vicinity of the black soliton. Concerning the first point, we observe that the functional \eqref{Lamdef} which serves as a basis for our analysis is invariant under translations and gauge transformations, and we recall that $\Lambda'(u_0) = 0$. 
Thus, if $\psi(x,t)$ is a solution of the NLS equation \eqref{nls} of the form \eqref{decomposition2} with $u,v \in H^2(\mathbb{R})$, we have for each fixed $t \in \mathbb{R}$ the following expansion \begin{equation} \label{DeltaLambda2} \Lambda(\psi) - \Lambda(u_0) \,=\, \langle K_+ u, u \rangle_{L^2} + \langle K_- v, v \rangle_{L^2} + N(u,v), \end{equation} where $N(u,v)$ collects all terms that are at least cubic in $u$ and $v$. However, unlike in the periodic case considered in the companion paper \cite{GP}, the decomposition \eqref{DeltaLambda2} is not sufficient to prove the orbital stability of the black soliton. Indeed, the quadratic terms in \eqref{DeltaLambda2} are nonnegative, but they are degenerate in the sense that they do not control the $L^2(\mathbb{R})$ norm of $v$, as can be seen from the lower bound \eqref{K-coercive}. This is due to the fact that the operator $K_-$ has essential spectrum touching the origin, with generalized eigenfunctions corresponding to slow modulations of the phase of the black soliton. As is clear from the proof of Lemma~\ref{lemma-soliton-2}, one cannot even prove that $v \in L^\infty(\mathbb{R})$ if we only know that $\langle K_- v, v \rangle_{L^2} < \infty$. This in turn makes it impossible to control the nonlinearity $N(u,v)$ in \eqref{DeltaLambda2} in terms of the quadratic part $\langle K_+ u, u \rangle_{L^2} + \langle K_- v, v \rangle_{L^2}$. There are good reasons to believe that the above problem is not just a technical one, and that the $H^2$ topology for the perturbations $u,v$ is not appropriate to prove orbital stability of the black soliton. Indeed, as is well known, the cubic NLS equation \eqref{nls} has a family of travelling dark solitons $\psi_{\nu}(x,t)$ given by \eqref{darksoliton}. Rigorous results \cite{GS} and numerical simulations indicate that a small, localized perturbation of the black soliton $\psi_0$ can lead to the formation of a dark soliton $\psi_{\nu}$ with a small nonzero speed $\nu$. 
If this happens, the functions $u,v$ defined in \eqref{decomposition2} cannot stay bounded in $L^2(\mathbb{R})$ for all times, because $\psi_{\nu} - \psi_0 \notin L^2(\mathbb{R})$ if $\nu \neq 0$. Note, however, that the quantity $|\psi_{\nu}| - |\psi_0|$ does belong to $L^2(\mathbb{R})$ and decays exponentially at infinity. This suggests that a particular combination of $u,v$ may be controlled in $L^2(\mathbb{R})$ for all times. Following \cite{GS}, we introduce the auxiliary variable \begin{equation} \label{etadef} \eta \,=\, |u_0 + u + iv|^2 - |u_0|^2 \,=\, 2 u_0 u + u^2 + v^2, \end{equation} which allows us to control the perturbations of the modulus of the black soliton $u_0$. The idea is now to consider perturbations $u,v$ for which $u_x, v_x \in H^1(\mathbb{R})$, $\eta \in L^2(\mathbb{R})$, and $u,v \in L^2(-R,R)$ for some fixed $R \ge 1$. If $\psi = u_0 + u + iv$, this is equivalent to requiring that $\psi \in X$, where $X$ is the function space \eqref{Xdef}, or that $d_R(\psi,u_0) < \infty$, where $d_R$ is the distance \eqref{distance}. Indeed, we have by definition \begin{equation} \label{distance2} d_R(\psi,u_0) \,=\, \|u_x + iv_x\|_{H^1(\mathbb{R})} + \|\eta\|_{L^2(\mathbb{R})} + \|u + iv\|_{L^2(-R,R)}. \end{equation} Note, however, that we do not assume any longer that $u,v$ are square integrable at infinity. In particular, the perturbed solutions we consider include dark solitons $\psi_{\nu}$ with nonzero speed $\nu$. Now that we have defined a precise perturbation space, we can state our first result showing the existence and the continuity of the modulation parameters $\xi$ and $\theta$ in a neighborhood of the orbit of the soliton profile $u_0$. The following statement is very close in spirit to Proposition~2 in \cite{GS} or Lemma~6.1 in \cite{GP}. \begin{lemma} \label{lemma-xith} Fix any $R \ge 1$. 
There exists $\epsilon_0 > 0$ such that, for any $\psi \in X$ satisfying \begin{equation} \label{inf2} \inf_{\xi, \theta \in \mathbb{R}} d_R\Bigl(e^{i \theta} \psi(\cdot + \xi), u_0\Bigr) \,\le\, \epsilon_0, \end{equation} there exist $\xi \in \mathbb{R}$ and $\theta \in \mathbb{R}/(2\pi\mathbb{Z})$ such that \begin{equation} \label{decomp2} e^{i \theta} \psi(x + \xi) \,=\, u_0(x) + u(x) + i v(x), \quad x \in \mathbb{R}, \end{equation} where the real-valued functions $u$ and $v$ satisfy the orthogonality conditions \eqref{projections2}. Moreover, the modulation parameters $\xi \in \mathbb{R}$ and $\theta \in \mathbb{R}/(2\pi\mathbb{Z})$ depend continuously on $\psi$ in the topology defined by the distance \eqref{distance}. \end{lemma} \begin{proof} It is sufficient to prove \eqref{decomp2} for all $\psi \in X$ such that $\epsilon := d_R(\psi,u_0)$ is sufficiently small. Given such a $\psi \in X$, we consider the smooth function ${\bf f} : \mathbb{R}^2 \to \mathbb{R}^2$ defined by \[ {\bf f}(\xi,\theta) \,=\, \begin{pmatrix} \langle u_0'(\cdot - \xi), {\rm Re}(e^{i \theta} \psi) \rangle_{L^2} \\[1mm] \langle u_0''(\cdot - \xi), {\rm Im}(e^{i \theta} \psi) \rangle_{L^2} \end{pmatrix}, \qquad (\xi,\theta) \in \mathbb{R}^2. \] By construction, we have ${\bf f}(\xi,\theta) = {\bf 0}$ if and only if $\psi$ can be represented as in \eqref{decomp2} for some real-valued functions $u,v$ satisfying the orthogonality conditions \eqref{projections2}. If we decompose $\psi = u_0 + u + iv$ where $u,v$ are real-valued, we have $\langle u_0', {\rm Re}(\psi)\rangle_{L^2} = \langle u_0', u\rangle_{L^2}$ because $\langle u_0',u_0\rangle_{L^2} = 0$. As in the proof of Lemma~\ref{lemma-soliton-2}, we observe that \[ |u(x)| \,\le\, C\Bigl(\|u\|_{L^2(-1,1)} + (1+|x|^{1/2})\|u_x\|_{L^2(\mathbb{R})} \Bigr) \,\le\, C(1+|x|^{1/2})d_R(\psi,u_0), \] where in the last inequality we have used \eqref{distance2}. 
Thus $|\langle u_0', {\rm Re}(\psi)\rangle_{L^2}| \le C d_R(\psi,u_0)$, and a similar argument gives $|\langle u_0'', {\rm Im}(\psi) \rangle_{L^2}| \le C d_R(\psi,u_0)$. This shows that $\|{\bf f} (0,0) \| \le C \epsilon$ for some positive constant $C$ independent of $\epsilon$. On the other hand, the Jacobian matrix of the function ${\bf f}$ at the origin $(0,0)$ is given by \[ D {\bf f}(0,0) \,=\, \begin{pmatrix} \|u_0'\|_{L^2}^2 & 0 \\ 0 & -\|u_0'\|_{L^2}^2 \end{pmatrix} \,+\, \begin{pmatrix} -\langle u_0'',{\rm Re}(\psi - u_0) \rangle_{L^2} & -\langle u_0', {\rm Im}(\psi - u_0)\rangle_{L^2} \\[.5mm] -\langle u_0''', {\rm Im}(\psi- u_0)\rangle_{L^2} & \langle u_0'', {\rm Re}(\psi - u_0)\rangle_{L^2} \end{pmatrix}. \] The first term in the right-hand side is a fixed invertible matrix and the second term is bounded in norm by $C\epsilon$, hence $D {\bf f}(0,0)$ is invertible if $\epsilon$ is small enough. In addition, the norm of the inverse of $D {\bf f}(0,0)$ is bounded by a constant independent of $\epsilon$. Finally, it is straightforward to verify that the second-order derivatives of ${\bf f}$ are uniformly bounded when $\epsilon \le 1$. These observations together imply that there exists a unique pair $(\xi,\theta)$, in a neighborhood of size $\mathcal{O}(\epsilon)$ of the origin, such that ${\bf f}(\xi,\theta) = {\bf 0}$. Thus the decomposition \eqref{decomp2} holds for these values of $(\xi,\theta)$. In addition, the above argument shows that the modulation parameters $\xi,\theta$ depend continuously on $\psi \in X$ in the topology defined by the distance \eqref{distance}. This concludes the proof. \end{proof} As was already mentioned, the Cauchy problem for the NLS equation \eqref{nls} is globally well-posed in the space $X$ \cite{Zhidkov}. 
If $\psi(\cdot,t)$ is a solution of \eqref{nls} in $X$ which stays for all times in a neighborhood of the orbit of the black soliton, the modulation parameters $\xi(t)$, $\theta(t)$ given by the decomposition (\ref{decomposition2}) subject to the orthogonality conditions (\ref{projections2}) are continuous functions of time. In fact, as in \cite[Lemma 6.3]{GP}, we have the following stronger conclusion: \begin{lemma} \label{difflem} If $\epsilon > 0$ is sufficiently small, and if $\psi(\cdot,t)$ is any solution of the NLS equation \eqref{nls} satisfying estimate \eqref{bound-final} for all $t \in \mathbb{R}$, then the modulation parameters $\xi(t),\theta(t)$ in the decomposition (\ref{decomposition2}) subject to (\ref{projections2}) are continuously differentiable functions of $t$ satisfying \eqref{bound-time-per}. \end{lemma} \begin{proof} If $\psi(\cdot,t)$ is any solution of the NLS equation \eqref{nls} in $X$, we know from \cite{Gerard,Zhidkov} that $t \mapsto \psi(\cdot,t)$ is continuous in the topology defined by the distance \eqref{distance}. Thus, if estimate \eqref{bound-final} holds for all $t \in \mathbb{R}$, Lemma~\ref{lemma-xith} shows that $\psi(\cdot,t)$ can be decomposed as in \eqref{decomposition2} with modulation parameters $\xi(t),\theta(t)$ that depend continuously on time. To prove differentiability, we first consider more regular solutions for which $\psi(\cdot,t) \in Y$, where \[ Y \,=\, \Bigl\{\psi \in H^4_{\rm loc}(\mathbb{R})\,: \quad \psi_x \in H^3(\mathbb{R}), ~1 - |\psi|^2 \in L^2(\mathbb{R}) \Bigr\}. 
\] For such solutions, it is not difficult to verify (by inspecting the proof of Lemma~\ref{lemma-xith}) that the modulation parameters are $C^1$ functions of time, so that we can differentiate both sides of \eqref{decomposition2} and obtain from (\ref{nls}) the evolution system \[ \left\{\!\!\begin{array}{l} ~\,\,u_t \,=\, L_- v + \dot{\xi} (u_0' + u_x) - \dot{\theta} v + (2 u_0 u + u^2 + v^2) v, \\ -v_t \,=\, L_+ u -\dot{\xi} v_x - \dot{\theta} (u_0 + u) + (3 u_0 u + u^2 + v^2) u + u_0 v^2, \end{array} \right. \] where the operators $L_\pm$ are defined in \eqref{operatorsdef}. Using the orthogonality conditions \eqref{projections2}, we eliminate the time derivatives $u_t, v_t$ by taking the scalar product of the first line with $u_0'$ and of the second line with $u_0''$. This gives the following linear system for the derivatives $\dot{\xi}$ and $\dot{\theta}$: \begin{equation} \label{Bsys} B \begin{pmatrix} \dot{\xi} \\[.5mm] \dot{\theta} \end{pmatrix} \,=\, \begin{pmatrix} \langle L_- u_0', v \rangle_{L^2} \\[.5mm] \langle L_+ u_0'', u \rangle_{L^2} \end{pmatrix} \,+\, \begin{pmatrix} \langle u_0', (2 u_0 u + u^2 + v^2) v \rangle_{L^2} \\[.5mm] \langle u_0'', (3 u_0 u + u^2 + v^2) u + u_0 v^2 \rangle_{L^2} \end{pmatrix}, \end{equation} where \begin{equation} \label{BBdef} B \,=\, \begin{pmatrix} -\| u_0' \|^2_{L^2} & 0 \\ 0 & -\|u_0'\|^2_{L^2}\end{pmatrix} \,+\, \begin{pmatrix} -\langle u_0', u_x \rangle_{L^2} & \langle u_0', v \rangle_{L^2} \\[.5mm] \langle u_0'', v_x \rangle_{L^2} & \langle u_0'', u\rangle_{L^2} \end{pmatrix}. \end{equation} As in the proof of Lemma~\ref{lemma-xith}, it is easy to verify using \eqref{bound-final} that the second term in the right-hand side of \eqref{BBdef} is bounded by $C \epsilon$ for some positive constant $C$, hence the matrix $B$ is invertible if $\epsilon$ is small enough. 
Inverting $B$ in \eqref{Bsys}, we obtain a formula for the derivatives $\dot{\xi},\dot{\theta}$ in which the right-hand side makes sense (and is a continuous function of time) for any solution $\psi(\cdot,t) \in X$ of \eqref{nls} satisfying \eqref{bound-final} for all times. Since $Y$ is dense in $X$, we conclude by a standard approximation argument that the modulation parameters $\xi(t),\theta(t)$ are $C^1$ functions of time in the general case, and that their derivatives satisfy \eqref{Bsys}. Finally, the first term in the right-hand side of \eqref{Bsys} is of size $\mathcal{O}(\epsilon)$, whereas the second term is $\mathcal{O}(\epsilon^2)$, hence $|\dot{\xi}(t)| + |\dot{\theta}(t)| \le C\epsilon$ for all $t \in \mathbb{R}$, where the positive constant $C$ is independent of $t$. This concludes the proof. \end{proof} \section{Proof of orbital stability of the black soliton} \label{sec:stability} This final section is entirely devoted to the proof of Theorem~\ref{theorem-soliton}. As in the previous section, we consider solutions of the NLS equation \eqref{nls} of the form \eqref{decomposition2}, where the real-valued perturbations $u,v$ satisfy the orthogonality conditions \eqref{projections2}. Our main task is a detailed analysis of the functional \eqref{Lamdef} in a neighborhood of the orbit of the soliton profile $u_0$. Instead of using the straightforward decomposition \eqref{DeltaLambda2}, the main idea is to express the difference $\Lambda(\psi) - \Lambda(u_0)$ in terms of the variables $u$, $v$, and $\eta$, where $\eta$ is defined in \eqref{etadef}. 
\begin{lemma} \label{lemma-soliton-3} If $\psi = u_0 + u + iv$ satisfies $d_R(\psi,u_0) < \infty$, then \begin{align}\nonumber \Lambda(\psi) - \Lambda(u_0) \,=\, \int_{\mathbb{R}} \Bigl(&u_{xx}^2 + v_{xx}^2 + (3u_0^2-2) (u_x^2 + v_x^2) + (1-u_0^2)(u^2+v^2) \\ \label{Lamexp} &-3(1-u_0^2)(1-3u_0^2)u^2 + \frac12 \eta_x^2 + \frac12(3u_0^2-2) \eta^2\\ \nonumber &+\frac12 \eta^3 + 3\eta(u_x^2 + v_x^2) + 6u_0'(u^2+v^2)u_x\Bigr)\,\mathrm{d} x~. \end{align} \end{lemma} \begin{proof} We observe that $|\psi|^2 = u_0^2 + \eta$ and $\bar \psi \psi_x + \psi \bar \psi_x = 2u_0 u_0' + \eta_x$. Thus, if $$ A(\psi) \,=\, |\psi_{xx}|^2 + |\psi_x|^2 (3 |\psi|^2 -2) + \frac{1}{2} (\bar{\psi} \psi_x + \psi \bar{\psi}_x)^2 + \frac{1}{2} |\psi|^2 (1 - |\psi|^2)^2 $$ denotes the integrand in the functional $\Lambda = S - 2 E$, a direct calculation shows that \begin{align}\nonumber A(\psi) - A(u_0) \,=~ &\mathcal{L}(u,\eta) + 6\eta u_0' u_x + u_{xx}^2 + v_{xx}^2 + (3u_0^2-2)(u_x^2 + v_x^2) \\ \label{Aexp} &+ \frac12 \eta_x^2 + \frac12(3u_0^2-2)\eta^2 + \frac12 \eta^3 + 3\eta(u_x^2 + v_x^2), \end{align} where $\mathcal{L}(u,\eta) = 2u_0''u_{xx} + 2(3u_0^2-2)u_0'u_x + 2u_0u_0'\eta_x + \eta(1-u_0^2)(2-3u_0^2)$. We now integrate the right-hand side of \eqref{Aexp} over $x \in \mathbb{R}$, starting with the terms $\mathcal{L}(u,\eta)$ which are linear in $u$ and $\eta$. Using the identities $u_0'' + u_0 - u_0^3 = 0$ and $u_0'''' + (1-3u_0^2)u_0'' -6u_0 u_0'^2 = 0$, we find \begin{align*} 2\int_\mathbb{R} \Bigl(u_0''u_{xx} + (3u_0^2-2)u_0'u_x\Bigr)\,\mathrm{d} x \,&=\, 2\int_\mathbb{R} \Bigl(u_0'''' - (3u_0^2-2)u_0'' - 6u_0 u_0'^2\Bigr)u\,\mathrm{d} x \\ \,&=\, 2\int_\mathbb{R} u_0'' u \,\mathrm{d} x \,=\, -2\int_\mathbb{R} (1-u_0^2)u_0 u \,\mathrm{d} x. \end{align*} Similarly, as $2(u_0 u_0')' = (1-u_0^2)(1-3u_0^2)$, we have $$ 2\int_\mathbb{R} u_0 u_0' \eta_x \,\mathrm{d} x = -2\int_\mathbb{R} (u_0 u_0')'\eta \,\mathrm{d} x = - \int_\mathbb{R} (1-u_0^2)(1-3u_0^2)\eta \,\mathrm{d} x. 
$$ We conclude that \begin{equation} \label{Lterms} \int_\mathbb{R} \mathcal{L}(u,\eta)\,\mathrm{d} x \,=\, \int_\mathbb{R} (1-u_0^2) (\eta - 2u_0 u)\,\mathrm{d} x \,=\, \int_\mathbb{R} (1-u_0^2)(u^2+v^2)\,\mathrm{d} x. \end{equation} Note that \eqref{Lterms} is now quadratic in $u$ and $v$, which could be expected since $u_0$ is a critical point of the functional $\Lambda$. We next consider the quadratic term $6\eta u_0' u_x$ in \eqref{Aexp}, which has no definite sign. Using the representation \eqref{etadef}, we find $6\eta u_0' u_x = 12 u_0 u_0' u u_x + 6 u_0' (u^2+v^2)u_x$, and integrating by parts, we obtain \begin{equation} \label{Qterm} 6\int_\mathbb{R} \eta u_0' u_x\,\mathrm{d} x \,=\, -3\int_\mathbb{R} (1-u_0^2)(1-3u_0^2)u^2 \,\mathrm{d} x + 6 \int_\mathbb{R} u_0' (u^2+v^2)u_x \,\mathrm{d} x. \end{equation} Now, combining \eqref{Aexp}, \eqref{Lterms}, and \eqref{Qterm}, we arrive at \eqref{Lamexp}. \end{proof} To simplify the notation, we define \begin{align}\nonumber B_0(u) \,&=\, u_{xx}^2 + (5u_0^2-2)u_x^2 - (1-3u_0^2)u^2 - (1-u_0^2)(1-5u_0^2)u^2\\ \label{Bdef} B_1(u) \,&=\, u_{xx}^2 + (3u_0^2-2)u_x^2 + (1-u_0^2)u^2 - 3 (1-u_0^2)(1-3u_0^2)u^2\\ \nonumber B_2(v) \,&=\, v_{xx}^2 + (3u_0^2-2)v_x^2 + (1-u_0^2)v^2\\ \nonumber B_3(\eta) \,&=\, {\textstyle \frac12\eta_x^2 + \frac12(3u_0^2-2)\eta^2}. \end{align} The quadratic terms in the right-hand side of \eqref{Lamexp} can be written in the compact form \begin{equation} \label{Qdef} Q(u,v,\eta) = \int_\mathbb{R} \Bigl(B_1(u) + B_2(v) + B_3(\eta)\Bigr)\,\mathrm{d} x. \end{equation} We see that $Q(u,v,\eta)$ contains $\langle K_-v,v\rangle \equiv \int_\mathbb{R} B_2(v)\,\mathrm{d} x$, but not $\langle K_+u,u\rangle \equiv \int_\mathbb{R} B_0(u)\,\mathrm{d} x$. Instead, it only contains $\int_\mathbb{R} B_1(u)\,\mathrm{d} x$ and $\int_\mathbb{R} B_3(\eta)\,\mathrm{d} x$. This discrepancy is due to the fact that the variables $u$ and $\eta$ are not independent. 
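To make this dependence explicit, note that a direct computation from the definitions \eqref{Bdef} gives
\[
B_0(u) - B_1(u) \,=\, 2u_0^2\, u_x^2 + 2u_0^2(2u_0^2-1)\, u^2,
\]
which coincides, up to the total derivative $(2u_0 u_0' u^2)_x$, with the part of $B_3(2u_0 u)$ that is quadratic in $u$; this observation is at the heart of the proof of Lemma~\ref{lemma-soliton-4} below.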
As $\eta = 2 u_0 u + u^2 + v^2$, the quantity $\int_\mathbb{R} B_3(\eta)\,\mathrm{d} x$ also contains quadratic terms in $u$ and $u_x$, which should be added to $\int_{\mathbb{R}} B_1(u) \,\mathrm{d} x$ to obtain $\int_{\mathbb{R}} B_0(u) \,\mathrm{d} x$. Due to the relation between $u$ and $\eta$, it is not obvious that each quadratic term in \eqref{Qdef} is positive independently of the others. To avoid that difficulty, we fix some $R \ge 1$ (which will be chosen large enough below) and we split the integration domain into two regions. When $|x| \le R$, we replace $\eta$ by $2 u_0 u + u^2 + v^2$, and we use extensions of Lemmas \ref{lemma-soliton-1} and \ref{lemma-soliton-2} to prove positivity of the quadratic terms in \eqref{Qdef}. In the outer region $|x| > R$, the analysis is much simpler, because the expressions $B_1(u)$, $B_2(v)$, and $B_3(\eta)$ are obviously positive if $R$ is large enough. Since $\eta$ is a nonlinear function of $u$ and $v$, the analysis of the quadratic expression \eqref{Qdef} will produce higher-order terms, which will be controlled using a smallness assumption on the distance $d_R(\psi,u_0)$. For this purpose, we find it convenient to introduce the quantity \begin{equation} \label{rhodef} \rho^2(u,v,\eta) \,=\, \int_\mathbb{R} \Bigl(u_{xx}^2 + v_{xx}^2 + u_x^2 + v_x^2 \Bigr)\,\mathrm{d} x + \int_{|x|\le R} \Bigl(u^2 + R^{-2}v^2\Bigr) \,\mathrm{d} x + \int_{|x|\ge R} \Bigl(\eta_x^2 + \eta^2\Bigr)\,\mathrm{d} x, \end{equation} which is equivalent to the squared distance \eqref{distance2} in a neighborhood of $u_0$. Indeed, we have the following elementary result: \begin{lemma} \label{auxlem} Fix $R \ge 1$, and assume that $\psi = u_0 + u + iv$, where $u,v \in H^2_{\rm loc}(\mathbb{R})$ are real-valued. Let $d_R(\psi,u_0)$ be given by \eqref{distance2} and $\rho(u,v,\eta)$ by \eqref{rhodef}. 
\\[1mm] {\bf a)} One has $d_R(\psi,u_0) < \infty$ if and only if $\rho(u,v,\eta) < \infty$.\\[1mm] {\bf b)} There exists a constant $C_0 \ge 1$ (independent of $R$) such that, if $d_R(\psi,u_0) \le 1$ or if\\ \null\hspace{6mm}$R^{1/2}\rho(u,v,\eta) \le 1$, then \begin{equation} \label{rhoequiv} C_0^{-1} \rho(u,v,\eta) \le d_R(\psi,u_0) \le C_0 R \rho(u,v,\eta). \end{equation} \end{lemma} \begin{proof} Throughout the proof, we denote $d_R(\psi,u_0)$ by $d_R$ and $\rho(u,v,\eta)$ simply by $\rho$. We proceed in three steps. \smallskip \noindent{\bf Step 1:} Assume first that $d_R < \infty$, so that $u_x,v_x \in H^1(\mathbb{R})$, $u,v \in L^2(-R,R)$, and $\eta \in L^2(\mathbb{R})$, where $\eta = |\psi|^2 - |u_0|^2 = 2 u_0 u + u^2 + v^2$. We claim that $u,v \in L^\infty(\mathbb{R})$ and that \begin{equation} \label{Kbound1} K \,:=\, \|u\|_{L^\infty(\mathbb{R})} + \|v\|_{L^\infty(\mathbb{R})} \,\le\, C(1 + d_R), \end{equation} for some universal constant $C > 0$. Indeed, if $f = |\psi| - |u_0|$, we observe that $$ d_R^2 \,\ge\, \int_\mathbb{R} \eta^2 \,\mathrm{d} x \,\ge\, \int_{|x|\ge 1} (|\psi| - |u_0|)^2(|\psi| + |u_0|)^2\,\mathrm{d} x \ge C \int_{|x|\ge 1} f^2 \,\mathrm{d} x, $$ hence $f \in L^2(I)$, where $I = \{x \in \mathbb{R} : |x| \ge 1\}$, and $\|f\|_{L^2(I)} \le C d_R$. Moreover, we have $|f_x| \le 2 u_0' + |u_x| + |v_x|$ almost everywhere, hence $f_x \in L^2(\mathbb{R})$ and $\|f_x\|_{L^2(\mathbb{R})} \le C (1+d_R)$. By Sobolev embedding, this implies that $f \in L^\infty(I)$, hence also $u,v \in L^\infty(I)$, and we have the bound $\|u\|_{L^\infty(I)} + \|v\|_{L^\infty(I)} \le C (1+d_R)$. Finally, since $\|u_x\|_{L^2(\mathbb{R})} + \|v_x\|_{L^2(\mathbb{R})} \le C d_R$, we conclude that $u,v \in L^\infty(\mathbb{R})$ and that \eqref{Kbound1} holds. \smallskip \noindent{\bf Step 2:} Next, we assume that $\rho < \infty$, so that $u_x,v_x \in H^1(\mathbb{R})$, $u,v \in L^2(-R,R)$, and $\eta \in H^1(I_R)$, where $I_R = \{x \in \mathbb{R} : |x| \ge R\}$. 
We claim that $u,v \in L^\infty(\mathbb{R})$ and that \begin{equation} \label{Kbound2} K \,:=\, \|u\|_{L^\infty(\mathbb{R})} + \|v\|_{L^\infty(\mathbb{R})} \,\le\, C(1 + R^{1/2}\rho), \end{equation} for some universal constant $C > 0$. Indeed, we know that $\eta \in L^\infty(I_R)$ with $\|\eta\|_{L^\infty(I_R)} \le C\rho$. This implies that $\psi \in L^\infty(I_R)$, hence also $u,v \in L^\infty(I_R)$, and that $\|u\|_{L^\infty(I_R)}+\|v\|_{L^\infty(I_R)} \le C(1+\rho)^{1/2}$. On the other hand, we know that $\|u\|_{L^\infty(-R,R)} \le C \|u\|_{H^1(-R,R)} \le C\rho$ and that $$ \|v\|_{L^\infty(-R,R)} \,\le\, C\biggl(\frac{\|v\|_{L^2(-R,R)}}{R^{1/2}} + \|v\|_{L^2(-R,R)}^{1/2}\|v_x\|_{L^2(-R,R)}^{1/2}\biggr) \,\le\, CR^{1/2}\rho, $$ because $\|v\|_{L^2(-R,R)} \le R\rho$ and $\|v_x\|_{L^2(-R,R)} \le \rho$. Thus we conclude that $u,v \in L^\infty(\mathbb{R})$ and that \eqref{Kbound2} holds. \smallskip \noindent{\bf Step 3:} Finally we assume that $K = \|u\|_{L^\infty(\mathbb{R})} + \|v\|_{L^\infty(\mathbb{R})} < \infty$, which is the case if $d_R < \infty$ or if $\rho < \infty$. As $\eta = 2u_0u + u^2 + v^2$, we find $$ \|\eta\|_{L^2(-R,R)} \,\le\, C(1+K) \Bigl(\|u\|_{L^2(-R,R)} + \|v\|_{L^2(-R,R)}\Bigr) \,\le\, C(1+K)R\rho, $$ because $\|u\|_{L^2(-R,R)} \le \rho$ and $\|v\|_{L^2(-R,R)} \le R\rho$. This shows that, if $\rho < \infty$, then $\eta \in L^2(\mathbb{R})$, so that $d_R < \infty$, and we have the bound $d_R \le C(1+K)R\rho$. Conversely, since $\eta_x = 2(u_0' u + u_0 u_x + uu_x + vv_x)$, we obtain $$ \|\eta_x\|_{L^2(\mathbb{R})} \,\le\, C(1+K) \Bigl(\|u\|_{L^2(-1,1)} + \|u_x\|_{L^2(\mathbb{R})} + \|v_x\|_{L^2(\mathbb{R})}\Bigr) \,\le\, C(1+K)d_R, $$ where to estimate $u_0' u$ we used the fact that $|u(x)| \le C(\|u\|_{L^2(-1,1)} + (1+|x|)^{1/2}\|u_x\|_{L^2(\mathbb{R})})$. This shows that, if $d_R < \infty$, then $\eta_x \in L^2(\mathbb{R})$, so that $\rho < \infty$, and we have the bound $\rho \le C(1+K) d_R$. This concludes the proof. 
\end{proof} In the calculations below, to avoid boundary terms when integrating by parts in expressions such as \eqref{Qdef}, it is technically convenient to split the integration domain using a smooth partition of unity. Let $\chi : \mathbb{R} \to [0,1]$ be a smooth cut-off function such that $$ \chi(x) \,=\, 1 \quad \mbox{\rm for} \quad |x| \le \frac{1}{2}\,, \qquad {\rm and} \qquad \chi(x) \,=\, 0 \quad \mbox{\rm for} \quad |x| \ge \frac{3}{2}\,. $$ We further assume that $\chi$ is even, that $\chi'(x) \le 0$ for $x \ge 0$, and that $\chi(1) = \frac{1}{2}$. Given $R \ge 1$, we denote $\chi_R(x) = \chi(x/R)$. The following estimates will be useful to control the functions $u,v$ on the support of $\chi_R'$. \begin{lemma} \label{auxlem2} Fix $R \ge 1$, and assume that $\psi = u_0 + u + iv$ satisfies $d_R(\psi,u_0) < \infty$. Then there exists a constant $C_1 > 0$ (independent of $R$) such that \begin{align} \label{locbound1} \|u\|_{L^2(-2R,2R)} \,&\le\, C_1 (\rho(u,v,\eta) + R^{3/2}\rho(u,v,\eta)^2), \\ \label{locbound2} \|u\|_{L^\infty(-2R,2R)} + \|v\|_{L^\infty(-2R,2R)} \,&\le\, C_1 R^{1/2}\rho(u,v,\eta), \end{align} where $\rho(u,v,\eta)$ is given by \eqref{rhodef}. \end{lemma} \begin{proof} If $f$ is either $u$ or $v$, then $|f(x)| \le C(R^{-1/2}\|f\|_{L^2(-R,R)} + (|x|+R)^{1/2}\|f_x\|_{L^2(\mathbb{R})})$, and this gives the bound \eqref{locbound2}. To prove estimate \eqref{locbound1}, we recall that $\|u\|_{L^2(-R,R)} \le \rho(u,v,\eta)$, so we only need to control $u(x)$ for $R \le |x| \le 2R$. In that region we have $|u| \le C(|\eta| + u^2 + v^2)$, hence using the bound \eqref{locbound2} and the fact that $\|\eta\|_{L^2(|x| \ge R)} \le \rho(u,v,\eta)$ we obtain the desired result. \end{proof} We now analyze the quadratic terms in the representation \eqref{Qdef}. 
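Before doing so, let us record two elementary properties of the cut-off which are used repeatedly in the estimates below: since $\chi_R(x) = \chi(x/R)$, we have
\[
\chi_R'(x) \,=\, \frac{1}{R}\,\chi'\Bigl(\frac{x}{R}\Bigr), \qquad \operatorname{supp} \chi_R' \,\subset\, \Bigl\{x \in \mathbb{R} \,:\, \frac{R}{2} \le |x| \le \frac{3R}{2}\Bigr\},
\]
so that $\|\chi_R'\|_{L^\infty} = \mathcal{O}(R^{-1})$, while on the support of $\chi_R'$ the quantities $u_0'$ and $1-u_0^2$ are exponentially small in $R$.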
\begin{lemma} \label{lemma-soliton-4} Under the assumptions of Lemma~\ref{auxlem}, if $d_R(\psi,u_0) \le 1$, we have \begin{equation}\label{Bident} \int_\mathbb{R} \Bigl(B_1(u) + B_3(\eta)\Bigr)\chi_R(x)\,\mathrm{d} x \,=\, \int_\mathbb{R} B_0(u)\chi_R(x)\,\mathrm{d} x + \mathcal{O}(R^3\rho(u,v,\eta)^3 + e^{-R}\rho(u,v,\eta)^2), \end{equation} where the estimate in the big O term holds uniformly for $R \ge 1$. \end{lemma} \begin{proof} Since $\eta = 2u_0u + u^2 + v^2$, we find by a direct calculation $$ B_3(\eta) \,=\, 2u_0'^2u^2 + 2 u_0^2 u_x^2 + 4 u_0 u_0' u u_x + 2(3u_0^2-2) u_0^2 u^2 + \tilde N(u,v), $$ where \begin{align*} \tilde N(u,v) \,&=\, 4(uu_x + vv_x)(u_0'u + u_0u_x) + 2(uu_x + vv_x)^2 \\ &\quad + 4(3u_0^2-2)u_0u(u^2+v^2) + 2(3u_0^2-2)(u^2+v^2)^2. \end{align*} In view of the definitions \eqref{Bdef}, this implies that $$ B_1(u) + B_3(\eta) \,=\, B_0(u) + (2u_0 u_0' u^2)_x + \tilde N(u,v). $$ If we now multiply both sides by $\chi_R(x)$ and integrate over $x \in \mathbb{R}$, we arrive at \eqref{Bident}, because it is straightforward to verify using \eqref{rhodef}, \eqref{Kbound1} and \eqref{locbound2} that $$ -2\int_\mathbb{R} u_0 u_0' u^2 \chi_R'(x) \,\mathrm{d} x \,=\, \mathcal{O}(e^{-R} \rho(u,v,\eta)^2), \quad \hbox{and}\quad \int_\mathbb{R} \tilde N(u,v) \chi_R(x) \,\mathrm{d} x \,=\, \mathcal{O}(R^3\rho(u,v,\eta)^3). $$ This concludes the proof of the lemma. \end{proof} Using Lemma~\ref{lemma-soliton-4}, we are able to derive the desired lower bound on the difference $\Lambda(\psi) - \Lambda(u_0)$ in terms of the quantity $\rho(u,v,\eta)$. 
\begin{proposition} \label{prop-soliton} If $R \ge 1$ is sufficiently large, there exists a constant $C_2 > 0$ such that, if $\psi = u_0 + u + iv$ satisfies $d_R(\psi,u_0) \le 1$ and if $\langle u_0',u\rangle_{L^2} = \langle u_0'',v\rangle_{L^2} = 0$, then \begin{equation} \label{Lamlower} \Lambda(\psi) - \Lambda(u_0) \,\ge\, C_2 \rho(u,v,\eta)^2 + \mathcal{O}(R^3\rho(u,v,\eta)^3), \end{equation} where the estimate in the big O term is uniform in $R$. \end{proposition} \begin{proof} Proceeding as in the proof of Lemma~\ref{auxlem}, it is easy to estimate the cubic terms in \eqref{Lamexp} in terms of $\rho(u,v,\eta)$ using, in particular, the uniform bound \eqref{Kbound1} and the estimate \eqref{locbound2}. We thus find \begin{equation} \label{lowbd0} \Lambda(\psi) - \Lambda(u_0) \,=\, Q(u,v,\eta) + \mathcal{O}(R^3\rho(u,v,\eta)^3), \end{equation} where $Q(u,v,\eta)$ is given by \eqref{Bdef} and \eqref{Qdef}. Then, in the definition \eqref{Qdef}, we split the integral using the partition of unity $1 = \chi_R + (1-\chi_R)$ and we use Lemma~\ref{lemma-soliton-4}. This gives \begin{align}\nonumber Q(u,v,\eta) \,&=\, \int_\mathbb{R} B_2(v)\,\mathrm{d} x + \int_\mathbb{R} B_0(u)\chi_R(x)\,\mathrm{d} x \\ \label{lowbd1} &\quad + \int_\mathbb{R} \Bigl(B_1(u) + B_3(\eta)\Bigr)(1-\chi_R(x))\,\mathrm{d} x + \mathcal{O}(R^3\rho(u,v,\eta)^3 + e^{-R}\rho(u,v,\eta)^2). \end{align} As $\langle u_0'',v\rangle = 0$, we know from \eqref{operator-K-minus} and Lemma~\ref{lemma-soliton-2} that \begin{equation} \label{lowbd2} \int_\mathbb{R} B_2(v)\,\mathrm{d} x \,\ge\, C \int_\mathbb{R} (v_{xx}^2 + v_x^2)\,\mathrm{d} x + \frac{C}{R^2}\int_{|x| \le R} v^2 \,\mathrm{d} x, \end{equation} where the last term in the right-hand side follows from the bound $|v(x)| \le |v(0)| + |x|^{1/2}\|v_x\|_{L^2}$, which implies $$ \int_{|x| \le R} v^2 \,\mathrm{d} x \,\le\, 4R|v(0)|^2 + 2R^2 \int_\mathbb{R} v_x^2 \,\mathrm{d} x \,\le\, C R^2 \int_\mathbb{R} B_2(v)\,\mathrm{d} x. 
$$ On the other hand, if $R \ge 1$ is large enough so that $3u_0^2 - 2 \ge \frac{1}{2}$ for $|x| \ge R$, it is clear from \eqref{Bdef} that \begin{equation} \label{lowbd3} \int_\mathbb{R} \Bigl(B_1(u) + B_3(\eta)\Bigr)(1-\chi_R(x))\,\mathrm{d} x \,\ge\, C \int_{|x| \ge R} (u_{xx}^2 + u_x^2 + \eta_x^2 + \eta^2)\,\mathrm{d} x. \end{equation} Finally, we estimate from below the term $\int_\mathbb{R} B_0(u)\chi_R(x)\,\mathrm{d} x$ under the orthogonality assumption $\langle u_0',u\rangle_{L^2} = 0$. Arguing as in Lemma~\ref{lemma-K-plus} and Corollary \ref{lemma-K-minus}, we introduce the auxiliary variable $w = u_x + \sqrt{2}u_0 u$. After integrating by parts, we obtain the identity $$ \int_\mathbb{R} B_0(u) \chi_R(x) \,\mathrm{d} x = \int_\mathbb{R} \Bigl(w_x^2 + w^2 \Bigr)\chi_R(x) \,\mathrm{d} x + J_R, $$ where $$ J_R \,=\, \int_\mathbb{R} \Bigl(\sqrt{2}u_0 u_x^2 + 2\sqrt{2}u_0' u u_x + (2u_0 u_0'- \sqrt{2}u_0'')u^2 + \sqrt{2}u_0^2 u^2\Bigr) \chi_R'(x)\,\mathrm{d} x. $$ Since $\chi_R'(x) = R^{-1} \chi'(x/R)$, we have using the estimate \eqref{locbound1} $$ |J_R| \,\le\, \frac{C}{R} \int_{|x| \le 3R/2} \Bigl(u_x^2 + u^2\Bigr)\,\mathrm{d} x \,\le\, \frac{C_3 \rho(u,v,\eta)^2}{R} + \mathcal{O}(R^2\rho(u,v,\eta)^4), $$ where $C_3 > 0$ is independent of $R$. Moreover, proceeding as in the proof of Lemma~\ref{lemma-soliton-1}, we find \begin{equation} \label{lowbdaux} \int_{|x| \le R} \Bigl(u_{xx}^2 + u_x^2 + u^2\Bigr)\,\mathrm{d} x \,\le\, C \int_{|x| \le R} \Bigl(w_x^2 + w^2\Bigr)\,\mathrm{d} x + \mathcal{O}(e^{-R}\rho(u,v,\eta)^2). \end{equation} Indeed, we have the representation $u = A u_0' + W$, where the function $W$ is defined in \eqref{variation-u} and the constant $A$ is fixed by the orthogonality condition $\langle u_0',u\rangle_{L^2} = 0$. The proof of Lemma~\ref{lemma-soliton-1} shows that $\|W\|_{L^2(|x|\le R)} \le C \|w\|_{L^2(|x|\le R)}$. 
From the orthogonality relation \[ 0 \,=\, \int_{|x|\le R} u_0'(x)\Bigl(A u_0'(x) + W(x)\Bigr)\,\mathrm{d} x + \int_{|x|\ge R} u_0'(x) u(x)\,\mathrm{d} x, \] we easily obtain the bound $|A| \le C\|W\|_{L^2(|x|\le R)} + \mathcal{O}(e^{-R} \rho(u,v,\eta))$. This shows that $$ \|u\|_{L^2(|x|\le R)} \,\le\, C\|w\|_{L^2(|x|\le R)} + \mathcal{O}(e^{-R}\rho(u,v,\eta)), $$ and since $u_x = w - \sqrt{2}u_0 u$ we obtain similar estimates for the derivatives $u_x$ and $u_{xx}$, which altogether give \eqref{lowbdaux}. Summarizing, we have shown \begin{align}\nonumber \int_\mathbb{R} B_0(u) \chi_R(x) \,\mathrm{d} x \,&\ge\, C \int_{|x| \le R} \Bigl(u_{xx}^2 + u_x^2 + u^2\Bigr)\,\mathrm{d} x - \frac{C_3 \rho(u,v,\eta)^2}{R} \\ \label{lowbd4} &\quad\, + \mathcal{O}(R^2\rho(u,v,\eta)^3 + e^{-R}\rho(u,v,\eta)^2), \end{align} where in the big O term we replaced $R^2\rho(u,v,\eta)^4$ with $R^2\rho(u,v,\eta)^3$ using the fact that $\rho(u,v,\eta) \le C_0 d_R(\psi,u_0) \le C_0$ by \eqref{rhoequiv}. Now, combining \eqref{lowbd0}, \eqref{lowbd1}, \eqref{lowbd2}, \eqref{lowbd3}, \eqref{lowbd4}, and taking $R \ge 1$ sufficiently large, we arrive at \eqref{Lamlower}. \end{proof} \begin{corollary}\label{Lambdafinal} Fix any $R \ge 1$. There exist $\epsilon_1 \in (0,1)$ and $C_4 \ge 1$ such that, if $\psi = u_0 + u + iv$ satisfies $d_R(\psi,u_0) \le \epsilon_1$ and if $\langle u_0',u\rangle_{L^2} = \langle u_0'',v\rangle_{L^2} = 0$, then \begin{equation} \label{Lambdaest} C_4^{-1}d_R(\psi,u_0)^2 \le \Lambda(\psi) - \Lambda(u_0) \,\le\, C_4 d_R(\psi,u_0)^2. \end{equation} \end{corollary} \begin{proof} Choose $R \ge 1$ large enough so that the conclusion of Proposition~\ref{prop-soliton} holds, and $\rho_0 > 0$ small enough so that $R^3 \rho_0 \ll C_2$, where $C_2$ is as in \eqref{Lamlower}. Take $\epsilon_1 \le 1$ such that $C_0\epsilon_1 \le \rho_0$, where $C_0$ is as in \eqref{rhoequiv}. 
If $\psi = u_0 + u + iv$ satisfies $d_R(\psi,u_0) \le \epsilon_1$ and $\langle u_0',u\rangle_{L^2} = \langle u_0'',v\rangle_{L^2} = 0$, it follows from \eqref{rhoequiv} that the quantity $\rho(u,v,\eta)$ defined in \eqref{rhodef} satisfies $\rho(u,v,\eta) \le \rho_0$. By Proposition~\ref{prop-soliton}, we thus have $$ \frac12 C_2 \rho(u,v,\eta)^2 \,\le\, \Lambda(\psi) - \Lambda(u_0) \le C_2' \rho(u,v,\eta)^2, $$ where the lower bound follows from \eqref{Lamlower}, and the upper bound can be established by a much simpler argument (which does not use any orthogonality condition). Since $\rho(u,v,\eta)$ is equivalent to $d_R(\psi,u_0)$ by Lemma~\ref{auxlem}, we obtain \eqref{Lambdaest}. Finally, Corollary~\ref{Lambdafinal} holds for any $R \ge 1$ because different values of $R$ give equivalent distances $d_R$ on $X$. \end{proof} It is now easy to conclude the proof of Theorem~\ref{theorem-soliton}. Fix any $R \ge 1$. Given any $\epsilon > 0$, we take $$ \delta \,=\, \frac{1}{2C_4}\,\min(2\epsilon,\epsilon_0,\epsilon_1), $$ where $C_4 \ge 1$ and $\epsilon_1 > 0$ are as in Corollary~\ref{Lambdafinal} and $\epsilon_0 > 0$ is as in Lemma~\ref{lemma-xith}. If $\psi_0 \in X$ satisfies $d_R(\psi_0,u_0) \le \delta$, then $\Lambda(\psi_0) - \Lambda(u_0) \le C_4 \delta^2$ by the upper bound in \eqref{Lambdaest}, which does not require any orthogonality condition. Since $\Lambda$ is a conserved quantity, we deduce that the solution $\psi(\cdot,t)$ of the cubic NLS equation \eqref{nls} with initial data $\psi_0$ satisfies $\Lambda(\psi(\cdot,t)) - \Lambda(u_0) \le C_4 \delta^2$ for all $t \in \mathbb{R}$. We claim that, for all $t \in \mathbb{R}$, we have \begin{equation}\label{inf3} \inf_{\xi, \theta \in \mathbb{R}} d_R\Bigl(e^{i \theta} \psi(\cdot + \xi,t), u_0\Bigr) \,\le\, 2C_4\delta \le \epsilon_0. \end{equation} Indeed, the bound \eqref{inf3} holds for $t = 0$ by assumption. 
Let $\mathcal{J} \subset \mathbb{R}$ be the largest time interval containing the origin such that the bound \eqref{inf3} holds for all $t \in \mathcal{J}$. As is well-known \cite{Gerard,Zhidkov}, the solutions of the cubic NLS equation \eqref{nls} with initial data in $X$ depend continuously on time with respect to the distance $d_R(\psi,u_0)$. This implies that the left-hand side of the bound \eqref{inf3} is a continuous function of $t$, so that $\mathcal{J}$ is closed. On the other hand, if $t \in \mathcal{J}$, then by Lemma~\ref{lemma-xith} we can find $\xi,\theta \in \mathbb{R}$ such that the function $\tilde \psi(x) = e^{i(\theta+t)}\psi(x+\xi,t)$ can be decomposed as in \eqref{decomp2} with $u,v$ satisfying the orthogonality conditions \eqref{projections2}. Applying Corollary~\ref{Lambdafinal} to $\tilde \psi$, we deduce that $$ C_4^{-1}d_R(\tilde\psi,u_0)^2 \le \Lambda(\tilde\psi) - \Lambda(u_0) \,=\, \Lambda(\psi_0) - \Lambda(u_0) \le C_4 \delta^2, $$ so that $d_R(\tilde\psi,u_0) \le C_4\delta$. Using again a continuity argument, we conclude that $\mathcal{J}$ contains a neighborhood of $t$. Thus $\mathcal{J}$ is open, hence finally $\mathcal{J} = \mathbb{R}$, so that the bound \eqref{inf3} holds for all $t \in \mathbb{R}$. Using Lemma~\ref{lemma-xith}, we thus obtain modulation parameters $\xi(t)$, $\theta(t)$ such that $$ d_R\Bigl(e^{i(\theta(t)+t)} \psi(\cdot + \xi(t),t)\,,u_0\Bigr) \,\le\, C_4 \delta \le \epsilon, \qquad t \in \mathbb{R}. $$ Finally, Lemma~\ref{difflem} shows that the functions $\xi : \mathbb{R} \to \mathbb{R}$ and $\theta : \mathbb{R} \to \mathbb{R}/(2\pi\mathbb{Z})$ are continuously differentiable and satisfy the bounds \eqref{bound-time-per}. The proof of Theorem~\ref{theorem-soliton} is now complete. 
\begin{remark} Instead of introducing the auxiliary variable $\eta$ to cure the imperfect decomposition \eqref{decomposition2}, it would be advantageous to find a parametrization of the perturbations that fully takes into account the geometry of the functional $\Lambda$, and in particular the degeneracy of $\Lambda''(u_0)$. Near the constant solution $u_1 \equiv 1$, it is most natural to write $\psi(x,t) = (1 + r(x,t))e^{i\phi(x,t)}$, where $r$ and $\phi$ are real-valued functions. In that case, the usual energy function \eqref{energy} allows us to control $r$ in $H^1(\mathbb{R})$ and $\phi_x$ in $L^2(\mathbb{R})$. In the same spirit, it is tempting to consider perturbations of the black soliton of the form \begin{equation} \label{decomposition3} \psi(x,t) \,=\, (u_0(x) + r(x,t))e^{i\phi(x,t)}, \quad x \in \mathbb{R}, \end{equation} where $r,\phi$ are again real-valued functions. With this representation, we find \begin{equation} \label{Lambdatry} \Lambda(\psi) - \Lambda(u_0) \,=\, \langle K_+r,r\rangle + \int_\mathbb{R} \Bigl(u_0^2 \phi_{xx}^2 + \phi_x^2\Bigr)\,\mathrm{d} x + \tilde N(r,\phi_x), \end{equation} where $\tilde N(r,\phi_x)$ collects the higher order terms. This formula is interesting, because it is not difficult to verify that $\tilde N(r,\phi_x)$ can be controlled by the quadratic terms in \eqref{Lambdatry} if $r$ is small in $H^2(\mathbb{R})$ and $\phi_x$ small in $H^1(\mathbb{R})$. However, not all perturbations of the black soliton can be written in the form \eqref{decomposition3} with $r,\phi$ satisfying such smallness conditions, because $u_0$ vanishes at $x = 0$ in \eqref{decomposition3}. \end{remark} \vspace{0.5cm} \noindent{\bf Acknowledgement.} D.P. is supported by the Chaire d'excellence ENSL/UJF. He thanks members of Institut Fourier, Universit\'e Grenoble for hospitality and support during his visit (January-June, 2014).
\section{Introduction} \input{introduction} \section{Anatomy of a {\mbox{\sc Gemm}} micro-kernel} \input{BLIS} \section{Identifying Outer-Product Kernels} \input{SIMD} \input{queuing} \section{Generating the {\mbox{\sc Gemm}} micro-kernel} \input{generating_the_micro} \section{Experimental Results} \input{experiments} \section{Conclusion} \input{conclusion} \bibliographystyle{acm} \subsection{The micro-kernel} The micro-kernel is a small matrix multiplication that implements $ C +\hspace{-5pt}= A B$, where $C$ is a $m_r \times n_r$ matrix, while $A$ and $B$ are micro-panels of size $m_r \times k_c$ and $k_c \times n_r$ respectively. In addition, because of the packing of $A$ and $B$ prior to the invocation of the micro-kernel, it can be assumed that $A$ is stored in a contiguous block of memory in column-major order while $B$ is contiguously stored in row-major order. Since the micro-kernel is itself a small {{\sc gemm} } kernel, it can be described, using compiler terminology, as a {{\sc gemm} } kernel computed using a triply-nested loop of the KIJ or KJI variant. Within the BLIS framework, it can also be assumed that $k_c \gg m_r, n_r$. In addition, we assume that the bounds of the loops (i.e. $k_c$, $m_r$, and $n_r$) are determined analytically using the models from~\cite{BLIS4}. {\bf Computing the micro-kernel.} Mathematically, the micro-kernel is computed by first partitioning $A$ into columns and $B$ into rows. 
The output $C$ is then computed in the following manner: \begin{eqnarray*} C & +\hspace{-5pt}= & \large ( \begin{array}{c|c|c} a_0 & \hdots & a_{k_c-1}\end{array} \large ) \left ( \begin{array}{c}b_0^T \\ \hline \vdots \\ \hline b_{k_c-1}^T\end{array}\right) \\ & +\hspace{-5pt}= & \sum^{k_c-1}_{i = 0} a_i b_i^T, \end{eqnarray*} where the fundamental computation is now \[ C +\hspace{-5pt}= a_ib_i^T, \] a single {\em outer-product}, and our task is to compute the outer-product multiple times, each time with a new column and row from $A$ and $B$, in as efficient a manner as possible. \subsection{Decomposing the outer-product} Focusing on a single outer-product, $C += ab^T$, we can decompose the outer-product further by performing loop tiling of $C$. This, in turn, will require us to block the columns of $A$ and rows of $B$ into sub-columns and sub-rows of conformal lengths respectively, as follows: \[ C \rightarrow \left(\begin{array}{c|c|c} C^{0,0} & C^{0,1} & \hdots \\ \hline C^{1,0} & C^{1,1} & \hdots \\ \hline \vdots & \vdots & \ddots \\ \end{array}\right) a \rightarrow \left( \begin{array}{c} a^{0} \\ \hline a^{1} \\ \hline \vdots \\ \end{array} \right) b \rightarrow \left( \begin{array}{c} b^{0} \\ \hline b^{1} \\ \hline \vdots \\ \end{array} \right) \] In this case, the outer-product is decomposed into a smaller unit of computation, which we will term a {\em unit update}, which computes \[ C^{i,j} += a^{i} (b^{j})^T, \] where $C^{i,j}$ is now an $m_v \times n_u$ matrix, and the subvectors $a^{i}$ and $b^{j}$ have lengths $m_v$ and $n_u$ respectively. Decomposing the outer-product into smaller unit updates now allows us to determine how the outer-product can be computed with the available instructions on the targeted architectures. In the case where $m_v = n_u = 1$, each unit update computes a single element in the matrix. 
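The equivalence between the packed matrix product, the sum of $k_c$ rank-1 outer-products, and the tiled unit updates can be checked with a short numpy sketch (not the paper's code; the sizes below are illustrative, not the tuned blocking parameters):

```python
import numpy as np

# Illustrative micro-kernel dimensions (not the tuned m_r, n_r, k_c).
m_r, n_r, k_c = 8, 4, 16
m_v, n_u = 4, 2  # unit-update (sub-block) dimensions

rng = np.random.default_rng(0)
A = rng.standard_normal((m_r, k_c))  # micro-panel of A (columns a_i)
B = rng.standard_normal((k_c, n_r))  # micro-panel of B (rows b_i^T)

# Reference: one matrix multiply.
C_ref = A @ B

# Sum of outer-products: C += a_i b_i^T for each column/row pair.
C_outer = np.zeros((m_r, n_r))
for p in range(k_c):
    C_outer += np.outer(A[:, p], B[p, :])

# Unit updates: tile each outer-product into m_v x n_u sub-blocks.
C_tiled = np.zeros((m_r, n_r))
for p in range(k_c):
    a, b = A[:, p], B[p, :]
    for i in range(m_r // m_v):
        for j in range(n_r // n_u):
            C_tiled[i*m_v:(i+1)*m_v, j*n_u:(j+1)*n_u] += np.outer(
                a[i*m_v:(i+1)*m_v], b[j*n_u:(j+1)*n_u])

assert np.allclose(C_outer, C_ref) and np.allclose(C_tiled, C_ref)
```

All three computations produce the same $C$; the loop orders and tile shapes only change how the arithmetic is mapped onto instructions.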
This means that the outer-product is computed element-wise, which can be done using the following loops: \[ \begin{array}{l} \mbox{for } i = 0, \ldots, m_r/m_v - 1 \\ \quad \mbox{for } j = 0, \ldots, n_r/n_u - 1 \\ \quad \quad C^{i,j} +\hspace{-5pt}= a^{i}(b^{j})^T, \end{array} \] where $b^{j}$ is streamed from the L1 cache, and $a^{i}$ is loaded into the registers from the L2 cache. Alternatively, interchanging the two loops yields \[ \begin{array}{l} \mbox{for } j = 0, \ldots, n_r/n_u - 1 \\ \quad \mbox{for } i = 0, \ldots, m_r/m_v - 1 \\ \quad \quad C^{i,j} +\hspace{-5pt}= a^{i}(b^{j})^T, \end{array} \] where each iteration of the inner-most loop requires new values of $a^{i}$ to be loaded from the L2 cache. In either case, we can replace the micro-kernel block in Figure~\ref{fig:BLIS} with the new diagram in Figure~\ref{fig:rank1}. The interesting case is when $m_v~\neq~1$ and/or $n_u~\neq~1$. In this case, each unit update is a smaller outer-product. By selecting appropriate values of $m_v$ and $n_u$, we gain the flexibility of mapping the computation of the unit update ($C^{i,j}$) to the vector / single-instruction-multiple-data (SIMD) instructions available on modern architectures. This flexibility also yields a family of algorithms that compute the outer-product, which is the kernel within the micro-kernel we are trying to optimize. \begin{figure} \begin{center} \includegraphics[scale=0.4]{BLIS_micro_kernel.png} \end{center} \caption{ An additional three loops are introduced after decomposing the BLIS micro-kernel into smaller outer-product kernels of size $m_v \times n_u$. This set of loops would replace the micro-kernel shown in Figure~\ref{fig:BLIS}.} \label{fig:rank1} \end{figure} \subsection{Sequential Performance on Modern Architectures} In this section, we test the effectiveness of our kernel generation system in automating the last-mile for high performance dense linear algebra. 
We evaluate both the queueing theory model, which finds an efficient outer-product instruction-mix, and the code generation system, which translates that mix into a high performance kernel. We use a variety of machines listed in Table~\ref{tab:machine_table} that span a diverse range of double precision vector lengths ($v\in \{2,4,8\}$), number and partitioning of functional units, and instruction latencies. Because our kernel operates within a larger Goto/BLIS-style \mbox{\sc Gemm} algorithm, the blocking parameters $m_c,k_c,m_r$ and $n_r$ are determined from the analytical models developed in \cite{Goto:2008:AHP} and \cite{BLIS4} along with the cache and microarchitecture details listed in Table~\ref{machine_table_cache} and Table~\ref{machine_table_uarch} respectively. The microarchitecture details in particular (Table~\ref{machine_table_uarch}) were used by the queueing theory model to select the highest throughput outer-product instruction-mix. Additionally, these details determined $N_{\mbox{\scriptsize {updates}}}$ and the register sub-blocking dimensions $m_s,n_s$ using the formula developed in the previous section. Lastly, the Xeon Phi requires that four threads run concurrently in order to effectively utilize a core. This requires that we distribute the work across multiple threads. Therefore we used the implementation in \cite{BLIS3} with the following parameters: the number of threads used in each dimension ($i_c$ and $j_r$) must satisfy $i_c \cdot j_r \le 59 \times 4$, and ideally should be factors of $\frac{m}{m_c}$ and $n$ respectively. By empirical selection, $i_c=12$ and $j_r=16$ satisfied both of those requirements and resulted in the largest number of cores that achieved efficient per core performance. \begin{landscape} \begin{figure} {\small \begin{tabular}{|l||r|r|r|r|r|r|r|r|r|r|r|} \hline Proc. & uArch. & Freq. 
& $S_{\textrm{L1}}$ & $W_{\textrm{L1}}$ & $N_{\textrm{L1}}$ & $S_{\textrm{L2}}$ & $W_{\textrm{L2}}$ & $N_{\textrm{L2}}$ & $S_{\textrm{L3}}$ & $W_{\textrm{L3}}$ & $N_{\textrm{L3}}$ \\ \hline \hline Core 2 X9650 & Penryn & 3 GHz & $4 \times 32$ KiB & 8 & 256 & $2 \times 6$ MiB & 24 & 16384 & - & - & - \\ \hline Xeon X5680 & Nehalem & 3.333 GHz & $6 \times 32$ KiB & $8$ & $64$ & $6 \times 256$ KiB & $8$ & $512$ & $12$ MiB & $16$ & $12288$ \\ \hline Core i5-2500 & Sandy Bridge & 3.3 GHz & $4 \times 32$ KiB & 4 & 512 & $4 \times 256$ KiB & 4 & 4096 & 6 MiB & 12 & 32768 \\ \hline Core i7-4770K & Haswell & 3.5 GHz & $4 \times 32$ KiB & 8 & 256 & $4 \times 256$ KiB & 8 & 2048 & 8 MiB & 16 & 32768 \\ \hline Xeon Phi 5110p & Knights Corner & 1.053 GHz & $60 \times 32$ KiB & 8 & 256 & $60 \times 512$ KiB & 8 & 4096 & - & - & - \\ \hline \end{tabular} } \caption{ Cache details of the processors used in our experiments. These cache details are needed for determining $m_c,k_c,m_r$ and $n_r$ according to \cite{Goto:2008:AHP} and \cite{BLIS4}. The value $S_l$ corresponds to the size of the $l$-th level of cache. $W_l$ is the number of ways and $N_l$ is the number of cache lines in each way.} \label{machine_table_cache} \end{figure} \begin{figure}[h!] {\small \begin{tabular}{|l||r|r|r|r|r|r|r|r|r|r|r|r|} \hline uArch. & Reg. 
& $\ell_{\textrm{fma}}$ & $\ell_{\textrm{L1}}$ & $\ell_{\textrm{L2}}$ & $\ell_{\textrm{shuf.}}$ & $\ell_{\textrm{perm.}}$ & $\ell_{\textrm{bcast.}}$ & $R_{\textrm{fma}}$ & $R_{\textrm{mem}}$ & $R_{\textrm{shuf.}}$ & $R_{\textrm{perm.}}$ & $R_{\textrm{bcast.}}$ \\ \hline \hline Penryn & 16 & $5 + 3$ & $3$ & $15$ & $1$ & - & $1$ & $p_0 \wedge p_1$ & $p_2$ & $p_5$ & - & $p_0$ \\ \hline Nehalem & 16 & $5 + 3$ & 4 & 10 & $1$ & - & 2 & $p_0 \wedge p_1$ & $p_2$ & $p_0 \vee p_5$ & - & $p_5$ \\ \hline Sandy Bridge & 16 & $5 + 3$ & 4 & 12 & 1 & 2 & 3 & $p_0 \wedge p_1$ & $p_2 \vee p_3$ & $p_5$ & $p_5$ & $p_5 \wedge (p_2 \vee p_3)$ \\ \hline Haswell & 16 & $5$ & $4$ & $12$ & $1$ & $3$ & $5$ & $p_0 \vee p_1$ & $p_2 \vee p_3$ & $p_5$ & $p_5$ & $p_2 \vee p_3$ \\ \hline \hline uArch. & Reg. & $\ell_{\textrm{fma}}$ & $\ell_{\textrm{L1}}$ & $\ell_{\textrm{L2}}$ & $\ell_{\textrm{shuf. fma}}$ & $\ell_{\textrm{perm.}}$ & $\ell_{\textrm{bcast. fma}}$ & $R_{\textrm{fma}}$ & $R_{\textrm{mem}}$ & $R_{\textrm{shuf. fma}}$ & $R_{\textrm{perm.}}$ & $R_{\textrm{bcast. fma}}$ \\ \hline \hline Knights Corner & 32 & $4$ & $1$ & $11$ & $4$ & $6$ & $4$ & $p_0$ & $p_{\textrm{mem}}$ & $p_0 \wedge p_{\textrm{mem}}$ & $p_0$ & $p_0 \wedge p_{\textrm{mem}}$ \\ \hline \end{tabular} } \caption{Here we capture the pertinent microarchitecture parameters that are used for our queueing theory model. The column $\ell_u$ represents the latency in cycles of instruction $u$, where L1 and L2 represent read instructions that hit in those caches. In the case of a system without fused-multiply-add (fma), the latency is represented as the sum of the multiply instruction and add instruction. The columns $R_u$ represent the functional units that are required to compute instruction $u$. For some instructions multiple functional units may be required (represented by $\wedge$) and some instructions may take multiple paths (represented by $\vee$).} \label{machine_table_uarch} \end{figure} \begin{figure}[h!] 
{\small \begin{tabular}{|l||r|r|r|r|} \hline {\bf Processor} ({\bf uArch.}) & ${ m_c \times k_c} $ &${ m_r \times n_r}$ &$N_{\mbox{\scriptsize {updates}}}$ &${ m_s \times n_s}$\\ \hline \hline Core 2 X9650 (Penryn) & $256 \times 256$ & $4 \times 4$ & 3 &$2 \times 2$ \\ \hline Xeon X5680 (Nehalem) & $256 \times 256$ & $2 \times 8$ & 3 &$2 \times 2$ \\ \hline Core i5-2500 (Sandy Bridge) & $96 \times 256$ & $8 \times 4$ & 3 &$4 \times 2$ \\ \hline Core i7-4770K (Haswell) & $256 \times 512$ & $4 \times 12$ & 3 &$4 \times 4$ \\ \hline Xeon Phi 5110p (Xeon Phi) & $120 \times 240$ & $30 \times 8$ &1 & $8 \times 1$ \\ \hline \end{tabular} } \caption{ The cache blocking parameters $m_c$ and $k_c$ were determined using the results in \cite{Goto:2008:AHP} and the hardware parameters in Table~\ref{machine_table_cache} and Table~\ref{machine_table_uarch}. The register blocking parameters $m_r$ and $n_r$ were determined from \cite{BLIS4} using the values in Table~\ref{machine_table_cache}. Lastly, $N_{\mbox{\scriptsize {updates}}}$ and subsequently the sub-blocking dimensions $m_s$ and $n_s$ were determined using Equation~\ref{eqn:nupdate}. $m_c,k_c,m_r,n_r,m_s $ and $n_s$ correspond to the values used in the generated code (see Figure~\ref{fig:code_example}). Note $N_{\mbox{\scriptsize {updates}}}v \ge m_sn_s$.} \label{tab:machine_table} \end{figure} \end{landscape} \subsection{Analysis of Queueing Model} \begin{figure}[h!] \center {\small \begin{tabular}{ | l | l |l | l| l| l | } \hline \multicolumn{7}{|c|}{ {\bf Xeon Phi $m_r\times n_r=8\times 30$ Impl.} }\\ \hline {\bf \shortstack{\# vperm.\\ updates}}& {\bf \shortstack{\# vbcast.\\ updates}}& {\bf \shortstack{\# Reads \\($L_{\textrm{mem}}$)} } & {\bf \shortstack{\# Vect. \\($L_{p_0}$)}}& {\bf \shortstack{$\lambda_{\textrm{outer-product}}$\\ $\min(\frac{1}{L_{\textrm{mem}}},\frac{1}{L_{p_0}})$}} & {$\frac{\textrm{flop}}{\textrm{cyc.}}$} & Est. 
$\frac{\textrm{GFLOP}}{s}$ \\ \hline \hline 0& 30& 1+0+30+4=35& 31& 0.02857 & 13.71& 14.44 \\ \hline 1& 26& 1+1+26+4=32& 32& 0.03125 & 15 & 15.80\\ \hline 2& 22& 1+2+22+4=29& 33& 0.03030 & 14.55& 15.32\\ \hline 3& 18& 1+3+18+4=26& 34& 0.02941 & 14.12& 14.87\\ \hline 4& 14& 1+4+14+4=23& 35& 0.02857 & 13.71& 14.41\\ \hline 5& 10& 1+5+10+4=20& 36& 0.02778 & 13.33& 14.04\\ \hline 6& 6& 1+6+6+4 =17& 37& 0.02703 & 12.97& 13.66\\ \hline 7& 2& 1+7+2+4 =14& 38& 0.02632 & 12.63& 13.30\\ \hline \end{tabular} } \caption{ We estimate the number of cycles needed to compute our generated Xeon Phi outer-product kernels. The first column is the number of \texttt{vpermute} unit updates of size $4\times 8$ used to implement the outer product. The remainder of the outer-product is computed using multiple $1\times 8$ broadcast based unit updates.} \label{tab:mic_prediction} \end{figure} In order to demonstrate the effectiveness of our model, we compare the performance predicted by our queueing theory model against the measured performance. For the Xeon Phi we compare the performance of eight different instruction-mix implementations of an $8\times 30$ outer-product. We selected a family of instruction-mixes where the work is partitioned between $8 \times 4$ permute unit updates and $8 \times 1$ broadcast based unit updates. In Table~\ref{tab:mic_prediction} we detail each outer-product implementation. Each row represents a specific implementation, where the first two columns represent the number of permute and broadcast unit updates in the implementation. In the next two columns we compute the number of instructions that need the memory port ($p_{\textrm{mem}}$) and vector port ($p_0$) respectively. For the Xeon Phi each permute component requires one load instruction, a permute instruction and four fma instructions. Each broadcast based component requires one load and one fma instruction. 
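The counts and throughput estimates in Table~\ref{tab:mic_prediction} can be reproduced with a small sketch (not the paper's generator; the constants, including the four prefetches and the one extra load and vector operation per variant, are inferred from the table's sums):

```python
# Sketch of the queueing-theory throughput estimate for the Xeon Phi
# 8x30 outer-product variants. Assumed cost model (inferred from the
# table): each permute unit update costs 1 load + 5 vector ops
# (1 permute + 4 fma); each broadcast unit update costs 1 load + 1 fma;
# every variant adds 1 shared load, 4 prefetches, and 1 extra vector op.
m_r, n_r = 8, 30
freq_ghz = 1.053  # Xeon Phi 5110p clock from the machine table

def estimate(n_perm, n_bcast):
    loads_mem = 1 + n_perm + n_bcast + 4      # instructions on p_mem
    vec_p0 = 1 + 5 * n_perm + n_bcast         # instructions on p_0
    lam = min(1.0 / loads_mem, 1.0 / vec_p0)  # outer-products per cycle
    flops_per_cyc = 2 * m_r * n_r * lam       # 2 flops per fma element
    return loads_mem, vec_p0, flops_per_cyc, freq_ghz * flops_per_cyc

# First row (0 permute / 30 broadcast): 35 reads, 31 vector instructions.
assert estimate(0, 30)[:2] == (35, 31)
# Best predicted mix (1 permute / 26 broadcast): 15 flop/cycle.
assert abs(estimate(1, 26)[2] - 15.0) < 1e-12
```

Under this model the memory port is the bottleneck for the broadcast-heavy mixes, which is why trading broadcast updates for a permute update initially raises the predicted throughput.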
Additionally, each implementation requires four prefetch instructions that occupy the memory ports. In the fifth column we use our queueing theory model to estimate the performance of the implementation. We can estimate the performance in flops per cycle as: \begin{equation} \frac{\textrm{flop}}{\textrm{cyc.}} = 2\, m_r\, n_r\, \lambda_{\textrm{outer-product}} \end{equation} In the last column we estimate the performance in GFLOP/s using the following formula: \begin{equation} \frac{\textrm{GFLOP}}{\textrm{s}} = f\,\frac{\textrm{flop}}{\textrm{cyc.}} \end{equation} where $f$ is the clock frequency in GHz. Each of these implementations has a different throughput predicted by our model. In our experiment (Figure~\ref{fig:model_evaluation}), we compare the relative performance of these implementations. Assuming that the overheads are similar across all implementations, if the model did not fit we would expect the measured ordering of the implementations to differ significantly from the predicted one. However, for the Xeon Phi we see that the relative ordering of the implementations is preserved in the experimental results, with the exception of one of the implementations. We suspect that the overhead is slightly lower for the {\it 0 permute, 30 broadcast} implementation. \begin{figure}[h!] \center {\small \begin{tabular}{ | l | l |l | l| l| l | l| l| l| } \hline $m_r \times n_r$ & {\bf \shortstack{\#bcast.\\ updt.}}& {\bf \shortstack{\#shuf.\\ updt.}}& {\bf \shortstack{\#mem.\\ $L_{p_2 \vee p_3}$}}& {\bf \shortstack{\#fma\\ $L_{p_0 \wedge p_1}$}}& {\bf \shortstack{\#shuf. 
\\$L_{p_5}$} } & {\bf $\lambda_{\textrm{out.-prod.}}$ } & {$\frac{\textrm{flop}}{\textrm{cyc.}}$} & $\frac{\textrm{GFLOP}}{s}$ \\ \hline \hline $8 \times 4$ & 8 & 0 & 2 + 8 = 10 & 8 & 8 & 0.125 & 4 & 26.4\\ \hline $8 \times 4$ & 0 & 2 & 2 + 1 = 3 & 8 & 3 & 0.125 & 4 & 26.4\\ \hline $4 \times 12$ & 0 & 3 & 1 + 3 = 4 & 12 & 9 & 0.083 & 4 & 26.4\\ \hline $4 \times 12$ & 12 & 0 & 1 + 12 = 13 & 12 & 12 & 0.083 & 4 & 26.4\\ \hline \end{tabular} } \caption{ We estimate the number of cycles needed to compute our generated Sandy Bridge outer-product kernels. We implement outer-products of size $m_r \times n_r \in \{8 \times 4, 4 \times 12\}$. Note that the model predicts similar performance across these implementations, however due to subtle microarchitecture details the experimental performance is different. } \label{tab:snb_prediction} \end{figure} We repeat the same experiment with the Sandy Bridge processor. In Table~\ref{tab:snb_prediction} we estimate the performance for several implementations. Unlike the Xeon Phi experiment, we chose two different kernel sizes ($m_r \times n_r$). According to \cite{BLIS4}, the $8 \times 4$ implementation is more efficient than the $4 \times 12$ one. In Figure~\ref{fig:model_evaluation} we plot the performance of the four implementations. Despite the fact that our model predicts identical performance, we see a significant difference between the kernels of different sizes. What this demonstrates is that even if we can produce an efficient kernel in isolation, our model operates within the constraints of the larger GotoBLAS/BLIS algorithm. There are also additional and subtle microarchitectural details that explain the difference between implementations of the same size on this system. For example, even though both ports $p_2$ and $p_3$ service memory operations, they are limited in the total number of bytes that can be read in a cycle. Therefore, the Sandy Bridge retires fewer than two memory operations per cycle. 
In the case where this is not an issue (between the two $8 \times 4$ implementations), we attribute the performance difference to scheduling because the permute based approach has fewer dependencies than the broadcast implementation, giving the scheduler greater freedom to hide instruction latency. \begin{figure*} \centering \begin{tabular}{l} \includegraphics[clip, trim=1cm 2cm 1cm 2cm,scale=.5]{analysis_plots/dgemm_instructionmix_spiralstyle_xeonphi.pdf} \\ \includegraphics[clip, trim=1cm 2cm 1cm 2cm,scale=.5]{analysis_plots/dgemm_instructionmix_spiralstyle_sandybridge.pdf} \\ \end{tabular} \caption{In both of these experiments we test the accuracy of our queueing theory model. {\bf Top:} On the Xeon Phi we evaluate the performance of eight implementations of the same outer-product. {\bf Bottom:} We do a similar experiment, but with two different outer-product sizes.} \label{fig:model_evaluation} \end{figure*} This experiment demonstrates that for outer-product implementations of the same size we can accurately estimate the performance of our generated implementations. However, our kernels operate within the constraints of a bigger GotoBLAS/BLIS \mbox{\sc Gemm} algorithm, and our performance is ultimately limited by the parameters selected for the bigger algorithm. In the next subsections, we look at this interaction in the opposite direction: how decisions made in generating the kernel affect the overall GotoBLAS/BLIS algorithm. \subsection{Analysis of the Generated Kernel} \begin{figure*} \centering \begin{tabular}{l} \includegraphics[clip, trim=1cm 2cm 1cm 2cm,scale=.5]{analysis_plots/dgemm_comparison_spiralstyle_haswell.pdf} \\ \includegraphics[clip, trim=1cm 2cm 1cm 2cm,scale=.5]{analysis_plots/dgemm_comparison_spiralstyle_sandybridge.pdf} \\ \end{tabular} \caption{ We compare the performance of our generated kernels against ATLAS and the OpenBLAS for various problem sizes to demonstrate that expert level performance can be automated. 
We see that our generated code approaches the performance of hand tuned expert code and for most architectures exceeds the performance of the generated ATLAS code.} \label{all_sequential} \end{figure*} \begin{figure*} \centering \begin{tabular}{l} \includegraphics[clip, trim=1cm 2cm 1cm 2cm,scale=.5]{analysis_plots/dgemm_comparison_spiralstyle_nehalem.pdf} \\ \includegraphics[clip, trim=1cm 2cm 1cm 2cm,scale=.5]{analysis_plots/dgemm_comparison_spiralstyle_penryn.pdf} \\ \end{tabular} \caption{ Like the graphs in Figure~\ref{all_sequential}, we compare the performance of our generated kernels against the OpenBLAS and ATLAS. } \label{all_sequential_p2} \end{figure*} We evaluate the effectiveness of our kernel generation approach by comparing the performance of our generated outer-product kernels against state-of-the-art {\mbox{\sc Gemm}} implementations such as OpenBLAS \cite{OpenBLAS} and ATLAS \cite{ATLAS}. We selected OpenBLAS because it is the highest performance open source BLAS implementation on most architectures, including the systems used in this paper. ATLAS was also selected because it is a high performance code generation system. Unlike our code generator, this framework relies on hand-tuned assembly kernels and uses search to determine the blocking dimensions around these kernels. The systems used in this experiment represent the past four major microarchitecture designs from Intel (Table~\ref{tab:machine_table}). The parameters for the GotoBLAS/BLIS \mbox{\sc Gemm} algorithm were analytically selected to maximize performance. These values also match the ones used by the OpenBLAS. In these experiments (Figure~\ref{all_sequential} and Figure~\ref{all_sequential_p2}), our generated code is within 2-5\% of the expert-tuned OpenBLAS. We suspect this difference is due to loop overhead because we rely on the compiler to optimize this, which results in several extra instructions over the expert code. 
The older the processor generation, the more pronounced an effect this has, which is why ATLAS outperforms our code on the Penryn. We believe we can resolve this difference by implementing the looping structure in inline assembly, which should give us performance that is nearly identical to the expert-written code. \subsection{Sensitivity to Parameters} \begin{figure*} \includegraphics[clip, trim=1cm 5cm 1cm 4cm,scale=.4]{figures/uniform_vs_nonuniform_update.pdf} \caption{Given an outer-product instruction-mix of an $m_r \times n_r = 8\times 4 $, we can partition it into uniformly sized N-updates or non-uniformly sized N-updates. In the uniform case, each N-update is $m_s\times n_s = 4 \times 2$, in the non-uniform case each N-update is $m_s\times n_s \in \{4 \times 3, 4 \times 1\}$. } \label{fig:nupdate_block_diagram} \end{figure*} \begin{figure} \begin{center} \begin{tabular}{ |l|| l| l| l| l| l| } \hline Port ($R_U$) & $p_0$ & $p_1$ & $p_2$ - $\ell_D$ & $p_3$ - $\ell_D$ & $p_5$ \\ \hline Cycles - Uniform & 32.0 & 32.0 & 8.0 - 12.0 & 8.0 - 16.0 & 16.0\\ \hline Cycles - Non-Uniform & 32.0 & 32.0 & 8.0 - 12.0 & 8.0 - 16.0 & 16.0 \\ \hline \end{tabular} \end{center} \caption{IACA results comparing instruction throughput between the uniform and non-uniform N-update implementations. Each port represents a functional unit that is used for our operation. $\ell_D$ represents data fetch latency. What this shows is that both the uniform and non-uniform shaped implementations of the same outer product look identical to the Out-of-Order engine, as simulated by the IACA tool. However, we will show the performance of the two implementations is significantly different. } \label{iaca_n_update} \end{figure} In addition to using static scheduling and avoiding register spilling, we observe that even in the presence of an Out-Of-Order engine there is a benefit from maintaining uniformly-sized N-updates for creating the outer product. 
Given two implementations of the micro-kernel, we vary the instruction tile sizes and compare the performance. Our reference implementation uses uniformly sized N-updates of size $m_s \times n_s = 4 \times 2$. We compare this to an implementation composed of two types of N-updates, of sizes $m_s \times n_s= 4 \times 3$ and $m_s \times n_s = 4 \times 1$. We illustrate these two implementations in Figure~\ref{fig:nupdate_block_diagram}, where each outer-product is partitioned according to a uniform or non-uniform scheme. We ensure that both implementations are free of register spilling and are scheduled -- not only to avoid stalls -- but also to ensure that the number of instructions between prefetch instructions and their subsequent loads is uniform. We ran both implementations through the Intel Architecture Code Analyzer (IACA), a software simulator for Intel microarchitectures, and determined that both implementations lack instruction stalls, spend an equal number of cycles on each functional unit, and have an identical throughput (Figure~\ref{iaca_n_update}). However, the measured results in Figure~\ref{fig:uneven_experiment} do not match what we obtained from IACA: the non-uniform N-update implementation performs 4\% worse than the uniform N-update. The non-uniform N-update implementation leads to clusters of instructions with very long encodings, which present a bottleneck for the decoder and slow down the overall execution rate. Using uniform N-updates results in large instructions being evenly distributed throughout the code, which prevents the decoder from becoming a bottleneck. \begin{figure*} \includegraphics[clip, trim=1cm 2cm 1cm 2cm,scale=.5]{analysis_plots/n_update_uniformity_vs_perf.pdf} \caption{ In this experiment, we compare the performance of two kernels implementing the same $m_r \times n_r = 8\times 4 $ outer-product using either uniform or non-uniform N-update sizes.
The uniform N-update implementation performs better because it leads to fewer clusters of large (in bytes) instructions, which prevents the fetch and decode stages from becoming bottlenecks.} \label{fig:uneven_experiment} \end{figure*} \paragraph{Register spilling.} For generating our kernels, our aversion to register spilling goes beyond the performance penalty of the additional store to and load from memory. The reason is that these kernels fit in a much larger {\mbox{\sc Gemm} } algorithm that achieves high performance by reducing Translation Look-aside Buffer (TLB) misses; spilling registers to memory disrupts this, and performance degrades significantly as a result of the extra TLB misses. To demonstrate how large an effect register spilling in the kernel has on the number of TLB misses, we evaluate three kernels with varying degrees of register spilling (No Spilling, Moderate, and Heavy). This is achieved by varying how much the N-updates overlap when we schedule them using software pipelining. The greater the overlap, the greater the register pressure and the larger the number of spills. In addition to measuring performance (FLOPs per cycle), we also measure TLB misses using PAPI \cite{Mucci99papi:a}. The goal is to show that by increasing the amount of register spilling we disrupt how the larger {\mbox{\sc Gemm} } algorithm avoids TLB misses. The performance per cycle results in Figure~\ref{fig:spilling_results} demonstrate that as we increase the number of spills, performance decreases -- which is what we would expect. We see that for large problem sizes the number of TLB misses is greater for the Heavy amount of spilling than for the Moderate amount, which in turn is greater than for the No Spilling case. If the added latency incurred by the register spills were the only source of performance penalty, then we would not expect to see a change in the number of TLB misses between the three cases.
This shows that register spilling has performance implications beyond the additional round trip to cache, because it disrupts the TLB-miss-avoiding characteristics of the {\mbox{\sc Gemm} } algorithm described in \cite{Goto:2008:AHP}. For practical purposes this removes spilling as an option when the outer-product instruction mix is translated into a kernel. \begin{figure*} \includegraphics[clip, trim=1cm 2cm 1cm 2cm,scale=.5]{analysis_plots/register_spilling_and_sched_vs_perf.pdf} \caption{ In this experiment we show that improving static scheduling at the cost of additional register spills degrades the overall performance of the \mbox{\sc Gemm} operation. The GotoBLAS/BLIS algorithm attempts to minimize the number of TLB misses; however, spilling into memory that would not otherwise have been used increases TLB misses in this algorithm. } \label{fig:spilling_results} \end{figure*} \begin{figure*} \includegraphics[clip, trim=1cm 2cm 1cm 2cm,scale=.5]{analysis_plots/register_spilling_and_sched_vs_tlb_misses.pdf} \caption{ The overall algorithm that our kernels are embedded in, the GotoBLAS/BLIS \mbox{\sc Gemm}, maximizes performance by ensuring that the kernel receives data at a sufficient rate while minimizing TLB misses. Spilling into memory requires that extra TLB entries be utilized to address memory that would not have been used otherwise. Thus even if spilling improves the kernel performance in isolation, it degrades the overall performance of the {\mbox{\sc Gemm} } operation.} \end{figure*} \subsection{A Work Flow for Kernel Generation} Our complete work flow is captured in Figure~\ref{fig:workflow}, and accomplishes the following: We start with the ISA for the target architecture; this includes the available instructions and their latency and throughput.
This information is passed to our {\it Unit Update Enumeration} stage, which enumerates the space of all unit updates; for example, given the ISA in Figure~\ref{fig:simd_instruction_examples}, the unit updates in Figure~\ref{fig:components} are enumerated. After the unit update space is enumerated, all possible tilings of unit updates that form outer-products of our desired $m_r \times n_r$ are enumerated (Figure~\ref{fig:tilings}). These outer-products are then modeled and estimated using our queueing theory model described in the previous section. This process ensures that the selected tiling, or instruction mix, can sustain a high throughput, provided that all other instruction overheads are minimized. The steps that follow focus on minimizing said overhead through several optimizations, chiefly static instruction scheduling. Continuing with the work flow, the highest performance instruction mix tiling is selected and passed to a {\it kernel builder}, which blocks the outer-product. This reduces register pressure when we perform instruction scheduling (see Figure~\ref{fig:subblock_schedule}). The kernel builder then outputs a skeleton of the kernel, like the one in Figure~\ref{fig:code_example}, which captures the various blocking parameters of the kernel along with a set of {\it embedding functions} such as the one in Figure~\ref{fig:embedded_func}. These embedding functions capture the selected instruction mix as functions. At this point the embedding functions and skeleton represent an untuned matrix multiply kernel that is implemented using the selected instruction mix. The embedding functions and the skeleton are then passed to a {\it Scheduling and Optimization} phase which hides the instruction latency by statically performing software pipelining \cite{software_pipeline}, allowing the kernel to perform near the predicted performance.
The resulting statically scheduled code is then emitted using the inline assembly ANSI C macros from \cite{hands_off_hands_on}, which produces C code that can be compiled with a fixed static schedule. This process yields outer-product kernel code (Figure~\ref{fig:kernel_excerpt}) that achieves expert-level performance. The role of the external compiler on this emitted C code is to provide register coloring, simplify memory indexing computation, and ensure efficient instruction alignment for the fetch and decode stages of the processor. \begin{figure} \includegraphics[clip, trim=1cm 5cm 2cm 5cm,scale=.35]{figures/registers_and_instructions.pdf} \caption{These cartoons illustrate the SIMD vector instructions that are considered for outer-product kernel generation.} \label{fig:scalar_vector_registers} \label{fig:simd_instruction_examples} \end{figure} \begin{figure} \center \includegraphics[clip, trim=5cm 2cm 4cm 3cm,scale=.4]{figures/component_diagram_with_op.pdf} \caption{In this figure we show the smallest unit updates (or small outer products) that can be constructed from our base set of SIMD instructions. } \label{fig:components} \end{figure} \begin{figure} \center \includegraphics[scale=.4]{figures/instruction_mix.pdf} \caption{Given the unit updates in Figure~\ref{fig:components}, we can enumerate two possible implementations of an $m_r \times n_r = 8 \times 4$ outer-product. On the left we have an instruction mix composed entirely of \texttt{vbroadcast} unit updates, and on the right we have an instruction mix composed of \texttt{vshuffle} unit updates.} \label{fig:tilings} \end{figure} \subsection{Embedding Functions Capture the Instruction Mix} The instruction mix selected by our queueing model is not a complete kernel; instead, it is a collection of instructions that describe the data movement and floating-point computation of data elements in a permuted outer-product.
The dependencies, register utilization, and memory address computation are implicit at this stage. Therefore, the first step in the transformation from instruction mix to kernel is to make these characteristics explicit. This is done by expressing the three components of an outer-product instruction mix (gathering the elements of $A$, gathering the elements of $B$, and performing the multiply-accumulate into $C$) as embedding functions: \texttt{get\_a\_element}, \texttt{get\_b\_element}, and \texttt{fma} (Figure~\ref{fig:embedded_func}). In these functions the dependencies, register utilization, and memory address computation are made explicit. \begin{figure} {\small \begin{verbatim}
get_b_element( b_reg, ii, jj, pp )
  if( ii == 0 )
    switch( jj )
      case 0: b_reg[jj] = vload(&B[jj + pp*nr])
      case 1: b_reg[jj] = vshuffle(b_reg[jj-1])
      case 2: b_reg[jj] = vpermute(b_reg[jj-1])
      case 3: b_reg[jj] = vshuffle(b_reg[jj-1])

get_a_element( a_reg, ii, jj, pp )
  if( jj == 0 && ii mod v == 0)
    a_reg[ii] = vload( &A[ii + pp*mr] )

fma( a_reg, b_reg, c_reg, ii, jj, pp )
  if( ii mod v == 0 )
    c_reg[ii][jj] = vfma( a_reg[ii], b_reg[jj], c_reg[ii][jj] )
\end{verbatim} } \caption{In order to pass the instruction mix to the kernel code generator, it is encoded as functions similar to this listing. These functions dispatch to a specific instruction depending on which element $C_{ii,jj}$ is being operated on in the outer product.} \label{fig:embedded_func} \end{figure} These functions take the following inputs: arrays of registers that represent the elements of $a$, $b$ and $c$, along with the indices for the $m$, $n$ and $k$ dimensions of the current unit-update. The embedding functions map the indices of the outer-product to the appropriate instructions from the instruction mix. Because these functions capture the majority of the work, a handful of optimizations are applied to them.
Namely, we optimize for the fetch and decode stages by minimizing bytes per instruction, and we simplify dependencies to minimize register utilization. \paragraph{Optimizing for bytes per instruction.} On most modern processors, the maximum throughput of the fetch and decode units is low enough to become a bottleneck. Thus, some of the optimizations performed by our generator minimize instruction length and decode complexity in order to avoid this bottleneck. The following decisions ensure that shorter instructions are generated: \begin{enumerate} \item In some cases, we generate instructions that are meant to operate on single-precision data instead of instructions that operate on double-precision data. An example of this is the use of the \texttt{vmovaps} instruction to load \texttt{reg\_a}, instead of \texttt{vmovapd}. Both instructions perform the identical operation, but the single-precision instruction can be encoded in fewer bytes. \item We hold the partially accumulated intermediate $m_r \times n_r$ matrix of $C$, which we will refer to as $T$, in high-ordered registers (i.e., registers \texttt{xmm8} to \texttt{xmm15}). On most architectures we tested, high-ordered SIMD registers require more bytes to encode. Thus, by using the low-ordered registers to hold working values and high-ordered registers to store $T$, we ensure that each instruction has at most one register operand (i.e., the output operand) that is a high-ordered register. \item For memory operations, address offsets that are beyond the range of $-128$ to $127$ bytes require additional bytes to encode. Therefore, we restrict address offsets to fit in this range by subtracting $128$ bytes from the base pointers into $A$ and $B$. \end{enumerate} \paragraph{Eliminating unnecessary dependencies.} Notice that each permute-multiply-add step of a unit update can be performed independently once the permutation of \texttt{reg\_b} has been completed.
The key is to ensure that in generating the permutations, unnecessary dependencies are not introduced that would make the independent permute-multiply-add steps dependent. Consider the following code snippet for producing four permutations of $B$: \begin{center} \begin{verbatim}
vmovapd    (addr_B),
vshufpd    $5,
vperm2f128 $1,
vshufpd    $5,
\end{verbatim} \end{center} Notice that each instruction is dependent on the result of the permutation instruction immediately before it. As a result, the previous set of instructions would take longer to compute than the following sequence of instructions. \begin{center} \begin{verbatim}
vmovapd    (addr_B),
vperm2f128 $1,
vshufpd    $5,
vshufpd    $5,
\end{verbatim} \end{center} In both cases the same permutations are being computed, but in the latter case the permutations are stored in different registers. This eliminates the false dependencies between the instructions, so the shuffle instructions can be executed independently. \subsection{From Embedding Functions to Generated Code} \begin{figure} \includegraphics[scale=.4]{figures/software_pipeline_picture_updated.pdf} \caption{Once a candidate outer product tiling is selected (Figure~\ref{fig:tilings}), we perform an additional layer of blocking ($m_s$ and $n_s$) to assist the code generator in minimizing register spills. Register blocking allows fewer registers to be live at a given cycle, thus allowing the code generator to aggressively schedule the instructions.} \label{fig:subblock_schedule} \end{figure} Once we have an instruction mix for the outer product, as determined by our queueing theory model, we can generate an implementation that hides the instruction latency in this instruction mix. Static instruction scheduling is key for this next step; however, this optimization is limited by the number of available registers. Thus, the primary step in the {\it kernel builder} (Figure~\ref{fig:workflow}) is to determine a further layer of blocking for the outer product.
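The benefit of breaking such dependency chains can be illustrated with a toy critical-path calculation. This is a sketch only: the latencies below are illustrative placeholders, not measured values for any particular microarchitecture.

```python
# Toy critical-path comparison of the two permutation sequences above.
# Latency values are illustrative assumptions, not vendor numbers.
LAT = {"vmovapd": 4, "vshufpd": 1, "vperm2f128": 2}

def critical_path(deps):
    """deps: list of (instruction, prerequisite indices). Returns the
    length of the longest dependency chain, assuming unlimited issue."""
    finish = []
    for op, prereqs in deps:
        start = max((finish[p] for p in prereqs), default=0)
        finish.append(start + LAT[op])
    return max(finish)

# Chained version: each permutation waits on the one before it.
chained = [("vmovapd", []), ("vshufpd", [0]),
           ("vperm2f128", [1]), ("vshufpd", [2])]
# Restructured version: the final shuffle depends on the permute, but
# the other shuffle depends only on the load, so they can overlap.
parallel = [("vmovapd", []), ("vperm2f128", [0]),
            ("vshufpd", [0]), ("vshufpd", [1])]

print(critical_path(chained), critical_path(parallel))  # chain is longer
```

Under these assumed latencies the chained sequence has a strictly longer critical path, which is the effect the register renaming in the second sequence avoids.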
\begin{figure} {\small \begin{verbatim}
/* initialize temp buffer */
for( i = 0; i < m_r; i++ )
  for( j = 0; j < n_r; j++ )
    c_reg[i][j] = 0;

/* computation */
#unroll(k_u)
#schedule_software_pipeline
for( pp = 0; pp < k_b; pp++ )
  /* perform the outer products */
  for( i = 0; i < m_r; i+=m_s )
    for( j = 0; j < n_r; j+=n_s )
      for( ii = i; ii < i+m_s; ii++ )
        get_a_elem( a_reg, ii, j );
        for( jj = j; jj < j+n_s; jj++ )
          get_b_elem( b_reg, ii, jj );
          fma( c_reg, a_reg, b_reg, ii, jj, pp );

/* accumulate temp to results */
for( i = 0; i < m_r; i++ )
  for( j = 0; j < n_r; j++ )
    C[(i,j)] += c_reg[i][j];
\end{verbatim} } \caption{In this code skeleton we capture an outline of the generated kernel. We pass a similar outline to our code generator along with our instruction mix. This mix is encoded as the functions \texttt{get\_a\_elem}, \texttt{get\_b\_elem} and \texttt{fma}. The code generator uses this information to generate the code, perform optimizations such as unrolling and code motion, and schedule the resulting kernel code using software pipelining in a way that targets the microarchitecture.} \label{fig:code_example} \end{figure} \subsection{Limits Imposed by Registers} Recall that to compute a unit update, $u_b$ permutations of the elements in \texttt{reg\_b} are required. However, a multiply and an add are performed with each permutation. This implies that each unit update will require two new registers ($R_{R}=2$): one to store the permutation of \texttt{reg\_b}, and another to hold the output of the multiplication. On architectures with a {\em fused-multiply-add} instruction, only one new register is required (i.e., $R_{R}=1$).
Because the register that holds the accumulated result is reused over multiple outer-products, the number of unit updates ($N_{\mbox{\scriptsize {updates}}}$) that can be performed without register spilling is constrained only by the number of registers, as given by the following: \begin{equation} N_{\mbox{\scriptsize {updates}}} = \left\lfloor \frac{R_{total} - \frac{m_r n_r}{v} - R_{A}}{R_{R}}\right\rfloor, \label{eqn:nupdate} \end{equation} where $R_{total}$ and $R_A$ are the total number of registers and the registers required to hold the column of $A$, respectively. We select the additional blocking dimensions such that: \begin{equation} m_s n_s \le N_{\mbox{\scriptsize {updates}}} v. \end{equation} In Figure~\ref{fig:subblock_schedule} we show the blocking and scheduling of a \texttt{vshuffle} instruction mix (Figure~\ref{fig:tilings}) for an $m_r \times n_r = 8 \times 4$ outer-product. \subsection{Scheduling and Tuning} The instruction mix selected by the {\it Queueing Model Estimator} is translated by the {\it Kernel Builder} into several {\it embedding functions} (\texttt{get\_a\_elem}, \texttt{get\_b\_elem} and \texttt{fma}), like those in Figure~\ref{fig:embedded_func}. These functions are embedded in a looping structure that matches the outer-product kernel (Figure~\ref{fig:code_example}). These loops iterate over the $m_r, n_r, m_s$ and $n_s$ dimensions and generate the dependencies between the instructions inside the embedding functions. Once these dependencies are built, a few basic optimizations, such as common sub-expression elimination, are performed such that only the original instruction mix plus a few looping instructions exist in the final output code. Next, the code generator performs software pipelining \cite{software_pipeline} over the entire looping structure of the outer-product.
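As a concrete check of Equation~\ref{eqn:nupdate}, consider an AVX-like machine with 16 vector registers holding $v = 4$ doubles each, an $8 \times 4$ accumulator, $R_A = 2$ registers for the column of $A$, and $R_R = 1$ (a fused-multiply-add machine). These parameter values are illustrative, not a claim about a specific processor.

```python
from math import floor

def n_updates(R_total, m_r, n_r, v, R_A, R_R):
    # Equation (nupdate): registers left after holding the m_r x n_r
    # accumulator (m_r*n_r/v vector registers) and the column of A
    # (R_A), divided by the registers consumed per unit update (R_R).
    return floor((R_total - (m_r * n_r) // v - R_A) / R_R)

# Illustrative AVX-like setting: 16 registers, v = 4, 8x4 accumulator.
n = n_updates(R_total=16, m_r=8, n_r=4, v=4, R_A=2, R_R=1)
print(n)                  # unit updates available without spilling
assert 4 * 2 <= n * 4     # an m_s x n_s = 4x2 block fits the budget
```

Under these assumptions the $m_s \times n_s = 4 \times 2$ register block used in our uniform N-update experiments satisfies the constraint $m_s n_s \le N_{\mbox{\scriptsize updates}}\, v$ with room to spare.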
By statically scheduling the kernel, the risk of instruction stalls is minimized, thus allowing the processor to compute the instruction mix near the rate predicted by our model. Once the code is scheduled, the generator emits a mix of C code and inline assembly instruction macros which preserve the schedule \cite{hands_off_hands_on}. The resulting code implements a high performance outer-product kernel with the desired dimensions and the selected instruction mix. We provide an excerpt of a generated kernel in Figure~\ref{fig:kernel_excerpt}. \begin{figure} {\small \begin{verbatim}
for( pp = 0; pp < k_b; pp+=KUNR ) {
  /* STEADY STATE CODE */
  VLOAD_IA(GET_A_ADDR(0),GET_A_REG(0))
  VLOAD_IA(GET_A_ADDR(1),GET_A_REG(1))
  VLOAD_IA(GET_B_ADDR(0),GET_B_REG(0))
  VSHUFFLE_IA(0x05,GET_B_REG(0),GET_B_REG(1))
  VFMA(GET_A_REG(0),GET_B_REG(0),GET_C_REG(0,0))
  VFMA(GET_A_REG(0),GET_B_REG(1),GET_C_REG(0,1))
  VPERM2F128_IA(0x01,GET_B_REG(1),GET_B_REG(2))
  VSHUFFLE_IA(0x05,GET_B_REG(2),GET_B_REG(3))
  VFMA(GET_A_REG(1),GET_B_REG(0),GET_C_REG(0,0))
  VFMA(GET_A_REG(1),GET_B_REG(1),GET_C_REG(0,1))
  /* snip */
}
\end{verbatim} } \caption{This is a generated excerpt from our kernel generator. The resulting kernel code implements the instruction mix identified by our queueing theory model and is statically scheduled to maintain the estimated performance of the mix.} \label{fig:kernel_excerpt} \end{figure} \section{The Operation} .. reduce to outer product \begin{lstlisting}
++ HP Mat Mul is a layering of loops to shuttle data to a tuned kernel
   -- goto/blis deconstruction
   -- Micro kernel
++ Our contribution
   -- Micro kernel can be decomposed as an outer product
   -- Further we can permute our outer product. We are not restricted
      to an outer product.
$ C = P_a a b^T P_b $
$ C = P_a^T (\Sigma_{p}^k (P_a a_p)(b^T_p P_b)) P_b^T $ ...check
   -- We can break this outer product into several constituents
   -- Given an architecture there is an incredibly large search space:
      all the ways to splat, all the ways to load, all of the ways to
      accumulate
   -- If the combination of these constituents can be efficient then
      we can have an efficient implementation
** Question: how do we find a mix of instructions for these parts that
   can sustain a high throughput?
\end{lstlisting} \section{The Algorithm} .. select the instruction mix for the outer product \begin{lstlisting}
++ Our pipeline:
   -- have outer product dimensions (mr nr)
   -- determine instruction mix for our hardware that maximizes
      throughput (main: worry about bandwidth, lesser: worry about
      sub-blocking)
   -- implement that mix efficiently (main: worry about latency,
      lesser: worry about resources)
## Right now we focus on the instruction mix
++ We want to implement the outer-product with the set of instructions
   that maximizes throughput (according to our hardware model)
   -- This implementation will vary between different architectures
      because the throughput and latency of an instruction varies
      between architectures
   -- We need a mix of instructions that performs well
   -- If the outer product needs to be further reduced to several
      outer products that are implemented differently, but sustain a
      high throughput, we take advantage of that.
** Show an example of an outer product implementation
++ How we do this: we enumerate all possible permuted outer products
   that can be implemented using a given ISA (with some restrictions),
   we then enumerate all tilings of these instruction based outer
   products that can implement our desired Outer Product Kernel.
   -- We use a model of the hardware and queueing theory to estimate
      each possible tiling for the Outer Product Kernel.
++ Queueing theory throughput model
   -- Rather than implementing every kernel and timing...
   -- The queueing theory model allows us to estimate the steady-state
      throughput of many iterations of the outer product kernel, as we
      would see in the micro-kernel
   -- The model works as follows: we treat resources as queues, the
      instructions as tasks, and we use Little's Law to determine what
      our throughput would be given a mix of instructions. In the case
      where an instruction can go to multiple queues we split the
      tasks proportionally based on how much work is in each queue.
   -- This approach allows us to do a backtracking-based search for
      the best instruction mix combination.
++ System implementation details:
   -- In general, broadcast and butterfly cover most hardware targets
      well
   -- For those we can cast the problem as the n-rooks problem for the
      permutation
   -- For more elaborate architectures we need a branch-and-bound
      approach
   -- Our current approach behaves as follows: Given an ISA, we
      enumerate all possible permuted outer products, where A is
      simply loaded as a vector, and the B elements are allowed to be
      permuted. We also restrict to a single load-type instruction for
      A and for B in order to get the smallest possible permuted outer
      products (see figure). We fix the retrieval of the A elements to
      be loads,
\end{lstlisting} \section{The Implementation} We start with the dimensions $m_r, n_r, k_b, k_u$, and we also include the register blocking dimensions $m_s, n_s$ so that we can schedule effectively without spilling. The implementation can be split into three phases: initialize, compute, and finalize. We can perform some basic code motion to minimize the number of loads by storing loaded values in registers.
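Before turning to the C sketches, the three-phase structure (initialize, compute, finalize) can be sanity-checked with a small Python reference model. This is illustrative scaffolding, not generator output; the flat-array indexing A[ii + pp*m_r] and B[jj + pp*n_r] matches the packed layouts used throughout this section.

```python
# Reference model of the three-phase outer-product kernel. Pure-Python
# sanity check that k_b accumulated rank-1 updates equal the matrix
# product; array layouts follow the packed indexing used in the text.
def outer_product_kernel(C, A, B, m_r, n_r, k_b):
    # initialize: zero the temporary accumulator
    c_reg = [[0.0] * n_r for _ in range(m_r)]
    # compute: accumulate k_b outer products (rank-1 updates)
    for pp in range(k_b):
        a_col = [A[ii + pp * m_r] for ii in range(m_r)]
        b_row = [B[jj + pp * n_r] for jj in range(n_r)]
        for ii in range(m_r):
            for jj in range(n_r):
                c_reg[ii][jj] += a_col[ii] * b_row[jj]
    # finalize: write the accumulator back into C
    for ii in range(m_r):
        for jj in range(n_r):
            C[ii][jj] += c_reg[ii][jj]

m_r, n_r, k_b = 8, 4, 16
A = [float(i % 7) for i in range(m_r * k_b)]   # packed column panels
B = [float(i % 5) for i in range(k_b * n_r)]   # packed row panels
C = [[0.0] * n_r for _ in range(m_r)]
outer_product_kernel(C, A, B, m_r, n_r, k_b)

# cross-check against a direct triple loop
ref = [[sum(A[i + p*m_r] * B[j + p*n_r] for p in range(k_b))
        for j in range(n_r)] for i in range(m_r)]
assert C == ref
```

The same decomposition is what the C sketches below express, with the register blocking dimensions $m_s, n_s$ added so the generator can schedule without spilling.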
\subsection{Concept in C: Generating the Code} \begin{verbatim}
for( int pp = 0; pp < k_b; pp++ )
  for( int i = 0; i < m_r; i += m_s )
    for( int j = 0; j < n_r; j += n_s )
      for( int ii = i; ii < i+m_s; ii++ )
        for( int jj = j; jj < j+n_s; jj++ )
          C[(ii,jj)] += A[(ii,pp)] * B[(pp,jj)];
\end{verbatim} Splitting into phases: \begin{verbatim}
for( int i = 0; i < m_r; i += m_s )
  for( int j = 0; j < n_r; j += n_s )
    for( int ii = i; ii < i+m_s; ii++ )
      for( int jj = j; jj < j+n_s; jj++ )
        c_reg[ii][jj] = 0;

for( int pp = 0; pp < k_b; pp++ )
  for( int i = 0; i < m_r; i += m_s )
    for( int j = 0; j < n_r; j += n_s )
      for( int ii = i; ii < i+m_s; ii++ )
        for( int jj = j; jj < j+n_s; jj++ )
          c_reg[ii][jj] += A[(ii,pp)] * B[(pp,jj)];

for( int i = 0; i < m_r; i += m_s )
  for( int j = 0; j < n_r; j += n_s )
    for( int ii = i; ii < i+m_s; ii++ )
      for( int jj = j; jj < j+n_s; jj++ )
        C[(ii,jj)] += c_reg[ii][jj];
\end{verbatim} Now we focus on the computation phase, and we take advantage of the fact that we can reuse elements. \begin{verbatim}
for( int pp = 0; pp < k_b; pp++ )
  for( int i = 0; i < m_r; i += m_s )
    for( int j = 0; j < n_r; j += n_s )
      for( int ii = i; ii < i+m_s; ii++ )
        a_reg[ii] = A[ii+pp*m_r];
        for( int jj = j; jj < j+n_s; jj++ )
          b_reg[jj] = B[jj+pp*n_r];
          c_reg[ii][jj] += a_reg[ii] * b_reg[jj];
\end{verbatim} We can split the computation up into the key components that we get from the user. Inside the code generator, the registers are treated as a table with the three corresponding dimensions. The user-defined instruction mix is encoded in these functions.
\begin{verbatim}
inline void get_a_element( a_reg[][], A, ii, pp )
  a_reg[ii] = A[ii+pp*m_r];

inline void get_b_element( b_reg[][], B, jj, pp )
  b_reg[jj] = B[jj+pp*n_r];

inline void compute( c_reg, a_reg, b_reg, C, A, B, ii, jj, pp )
  c_reg[ii][jj] += a_reg[ii] * b_reg[jj];

// Initialize a_reg and b_reg as a table in space
// Initialize c_reg as a table in space/time
for( int pp = 0; pp < k_b; pp++ )
  for( int i = 0; i < m_r; i += m_s )
    for( int j = 0; j < n_r; j += n_s )
      for( int ii = i; ii < i+m_s; ii++ )
        get_a_element( a_reg, A, ii, pp );
        for( int jj = j; jj < j+n_s; jj++ )
          get_b_element( b_reg, B, jj, pp );
          compute( c_reg, a_reg, b_reg, C, A, B, ii, jj, pp );

get_a ---> dup   so input is mxk = 1x4, output is 1x4
get_b ---> vload so input is kxn = 1x4, output is 1x4
compute -> fma      input is mxn = 1x4, output is 1x4
\end{verbatim} These functions have their own dimensions. For example, each call of the $\texttt{get\_a\_element}$ function has an input of $R^{1 \times v}$ where $v$ is our vector length. Similarly, each call of $\texttt{get\_b\_element}$ and $\texttt{compute}$ is $1 \times ... $. The key insight is that the user has given us the register blocking dimensions $m_r$ and $n_r$ and the register partitions $m_s$ and $n_s$, along with the instruction mixes encoded as these helper functions for retrieving $A$ and $B$ and computing. For example, if $m_s\times n_s = 4\times 4$ and we are applying the butterfly permutations to $B$, then conceptually we would encode that instruction mix as follows: \begin{verbatim}
void get_b_element( b_reg[][], B, ii, jj )
{
  if( jj mod v == 0 )
    switch( jj ) {
      case 0: b_reg[jj] = vload( &B[jj] )
      case 1: b_reg[jj] = shuffle( b_reg[jj-1] )
      case 2: b_reg[jj] = permute( b_reg[jj-1] )
      case 3: b_reg[jj] = shuffle( b_reg[jj-1] )
    }
}
\end{verbatim} Note that we are not actually implementing this function and its associated overhead; we are simply capturing the instruction mix.
This function is encoded in Python, and the code generator uses it to determine the required operations and their dependencies. What we really have is: \begin{verbatim}
def get_b_element( var array b_reg[][], ptr B, ii, jj )
  options = {
    0: assign( b_reg[jj], vload(B,jj) ),
    1: assign( b_reg[jj], shuffle(b_reg[jj-1]) ),
    2: assign( b_reg[jj], permute(b_reg[jj-1]) ),
    3: assign( b_reg[jj], shuffle(b_reg[jj-1]) )}
  if ii mod v == 0
    return options[jj]
\end{verbatim} This Python-based function allows the user to pass their desired instruction mix to the code generator. This can be further extended: \begin{verbatim}
assign( reg_xx, operation ) // creates new node
operation --> has constraints to it for OASIC
\end{verbatim} \begin{verbatim}
for( int pp = 0; pp < k_b; pp++ )
  for( int ii = 0; ii < m_r; ii++ )
    for( int jj = 0; jj < n_r; jj++ )
      C[(ii,jj)] += A[(ii,pp)] * B[(pp,jj)];
\end{verbatim} \begin{verbatim}
for( int pp = 0; pp < k_b; pp++ )
  for( int ii = 0; ii < m_r; ii++ )
    for( int jj = 0; jj < n_r; jj++ )
      C[(ii,jj)] += A[ii+pp*m_r] * B[jj+pp*n_r];
\end{verbatim} Split the operation into three phases centered around a block of temporary C registers: first initialize the temporaries, then perform the computation, and finally write back the result. \begin{verbatim}
for( int ii = 0; ii < m_r; ii++ )
  for( int jj = 0; jj < n_r; jj++ )
    c_reg[ii][jj] = 0;

for( int pp = 0; pp < k_b; pp++ )
  for( int ii = 0; ii < m_r; ii++ )
    for( int jj = 0; jj < n_r; jj++ )
      c_reg[ii][jj] += A[ii+pp*m_r] * B[jj+pp*n_r];

for( int ii = 0; ii < m_r; ii++ )
  for( int jj = 0; jj < n_r; jj++ )
    C[(ii,jj)] += c_reg[ii][jj];
\end{verbatim} We can perform some basic code motion.
\begin{verbatim}
for( int ii = 0; ii < m_r; ii++ )
  for( int jj = 0; jj < n_r; jj++ )
    c_reg[ii][jj] = 0;

for( int pp = 0; pp < k_b; pp++ )
  for( int ii = 0; ii < m_r; ii++ )
    a_reg[ii] = A[ii+pp*m_r];
    for( int jj = 0; jj < n_r; jj++ )
      b_reg[jj] = B[jj+pp*n_r];
      c_reg[ii][jj] += a_reg[ii] * b_reg[jj];

for( int ii = 0; ii < m_r; ii++ )
  for( int jj = 0; jj < n_r; jj++ )
    C[(ii,jj)] += c_reg[ii][jj];
\end{verbatim} Here we can perform some memory addressing optimizations that are fairly portable. \begin{verbatim}
OFFSET = 128B // center addresses in the instruction offset range
ptr *A_ptr = &A - OFFSET;
ptr *B_ptr = &B - OFFSET;

for( int ii = 0; ii < m_r; ii++ )
  for( int jj = 0; jj < n_r; jj++ )
    c_reg[ii][jj] = 0;

for( int p = 0; p < k_b; p += k_u )
  A_ptr += k_u*m_r;
  B_ptr += k_u*n_r;
  #unroll
  #swp
  for( int pp = 0; pp < k_u; pp++ )
    for( int ii = 0; ii < m_r; ii++ )
      a_reg[ii] = A_ptr[ii+pp*m_r + OFFSET];
      #vector
      for( int jj = 0; jj < n_r; jj++ )
        b_reg[jj] = B_ptr[jj+pp*n_r + OFFSET];
        c_reg[ii][jj] += a_reg[ii] * b_reg[jj];

for( int ii = 0; ii < m_r; ii++ )
  for( int jj = 0; jj < n_r; jj++ )
    C[(ii,jj)] += c_reg[ii][jj];
\end{verbatim} \section{Selecting Outer-product Kernels} Having derived a family of algorithms to compute the outer-product, we need to select one of these algorithms to implement. We build a model of the architecture and then rely on queueing theory to select and implement the kernel with the highest throughput. \subsection{A model architecture} On most modern architectures, there are a number of pipelines, where each pipeline processes a subset of the entire instruction set architecture (ISA). In addition, there are a fixed number of functional units on each architecture, and each functional unit is connected to a pipeline. When instructions are sent into the system, they are processed by the different execution pipelines. The computation is complete when all of its instructions have been retired, i.e., have exited the pipelines.
Such an architecture can be modelled as a series of parallel queues and servers, where pipelines and functional units are modelled as queues and servers, respectively. Instructions are jobs that are queued in the appropriate pipelines until processed. For instructions that require multiple functional units, we treat them as multiple independent jobs. A model of the Sandy Bridge architecture relevant to the computation of the outer-product kernel is shown in Figure~\ref{fig:queues}. \begin{figure} \begin{center} \includegraphics[scale=0.3]{queue.pdf} \end{center} \caption{Model of a subset of the Intel Sandy Bridge architecture, showing only the floating point addition and multiplication units, the load/store units and the vector shuffle units. Instructions enter the pipelines, and when all instructions required for computing the outer-product leave their respective pipelines, the outer-product is computed.} \label{fig:queues} \end{figure} By viewing the architecture as queues and servers, we can leverage queueing theory to analytically compute the throughput of computing a single iteration of the micro-kernel at steady state. \subsection{Little's Law} Little's Law~\cite{little_law} states that in a steady state the expected number of jobs ($L$) waiting for a server in a system is given by: \[ L = \lambda W, \] where $\lambda$ and $W$ are the average arrival rate of new jobs and the average time spent in the system (waiting and processing time) of a job, respectively. Rearranging the above equation, we obtain \begin{equation} \label{eqn:little} \lambda = \frac{L}{W}, \end{equation} which gives us the throughput of a particular queue with an average of $L$ jobs, each taking an average of $W$ units of time.
The overall throughput of the system for computing the outer-product can then be estimated using \begin{eqnarray*} \lambda_{\mbox{outer product}} &=& \min_{i\,\in\,\mbox{pipelines}}\frac{1}{T_i}\\ &=&\min_{i\,\in\,\mbox{pipelines}}\frac{\lambda_{i}}{L_i}, \end{eqnarray*} where $T_i$ is the time pipeline $i$ needs to clear its share of the instruction mix, $\lambda_i$ is the throughput of pipeline $i$, and $L_i$ is the number of instructions from the instruction mix that have been assigned to pipeline $i$. Essentially, the throughput for an outer-product is the inverse of the time it takes for the instruction mix to clear the pipeline with the lowest throughput. \subsection{Estimating throughput} Consider the instruction mix required to compute the $ 4~\times~4 $ outer-product shown in Figure~\ref{fig:broadcast_algo} being executed on the model Sandy Bridge architecture described in Figure~\ref{fig:queues}. The instruction mix contains a single Load of a vector of $a$, four Loads (with duplication) of elements from $b$, and four multiply and four add (Computation) instructions. All instructions are sent to their respective pipelines. In addition, another four jobs are sent to the pipeline connected to the shuffle functional unit. This is because the Load of an element of $b$ on the Sandy Bridge is a Composite instruction that comprises two instructions, a Load and a Shuffle. Based on documentation from the hardware manufacturer~\cite{inteloptimize}, we know that the throughputs of the Shuffle and Computation instructions are $1$ per cycle, and Loads have a throughput of $2$ per cycle.
This means that the estimated throughput of the system is \begin{eqnarray*} \lambda_{\mbox{outer product}} &=& \min(\frac{\lambda_{\mbox{load}}}{L_{\mbox{load}}}, \frac{\lambda_{\mbox{add}}}{L_{\mbox{add}}}, \frac{\lambda_{\mbox{mul}}}{L_{\mbox{mul}}}, \frac{\lambda_{\mbox{shuffle}}}{L_{\mbox{shuffle}}} ) \\ &= &\min(\frac{2}{5}, \frac{1}{4}, \frac{1}{4}, \frac{1}{4}) \\ & =& 0.25 \mbox{ outer-product per cycle} \end{eqnarray*} However, these throughput values are based on the assumption that the instructions in all queues are fully pipelined and independent. When the instructions in a pipeline are not independent, the latency of executing an instruction cannot be hidden, and the throughput of the pipeline drops. This can happen when one instruction has to complete before dependent instructions can be processed: the pipeline has to stall, which increases the average waiting time in that pipeline. The new average waiting time for the stalled pipeline can be estimated as \[ W = L + {nk}, \] where $n$ is the number of dependent instructions and $k$ is the latency of the instruction. Using this new value of $W$, we can then compute the effective throughput using Little's Law (Equation~\ref{eqn:little}). \subsection{Dealing with dependencies across pipelines} The authors of~\cite{BLIS4} proposed an analytical model for sizing the micro-kernels, where the values of $m_r$ and $n_r$ are chosen such that all computations within a single iteration, i.e., the computations required to compute a single outer-product, are independent. Hence, by adopting the BLIS analytical model presented in~\cite{BLIS4}, we know that all the computation instructions are independent and can be pipelined without introducing bubbles into the pipelines.
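The estimate above is easy to mechanize. The following C sketch is our own bookkeeping (function names are ours, and the slight generalization $W = L/\lambda + nk$ accommodates non-unit service rates); it computes each pipeline's bound $1/T_i$ and the overall bottleneck:

```c
/* Outer-products per cycle as limited by one pipeline: 1/W, where
   W = L/rate is the time to drain the L assigned instructions, plus
   n*k stall cycles when n instructions each expose a latency of k. */
static double pipe_rate(int L, double rate, int n, double k)
{
    return 1.0 / (L / rate + n * k);
}

/* Overall estimate: the slowest pipeline is the bottleneck. */
static double outer_product_rate(const int *L, const double *rate,
                                 int npipes)
{
    double best = 0.0;
    for (int i = 0; i < npipes; i++) {
        double t = pipe_rate(L[i], rate[i], 0, 0.0); /* no stalls */
        if (i == 0 || t < best) best = t;
    }
    return best;
}
```

With the Sandy Bridge mix of the text (five Loads at rate 2; four adds, four multiplies and four shuffles at rate 1), this reproduces the $0.25$ outer-products per cycle derived above.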
To overcome dependencies among the remaining instructions, we can perform loop unrolling~\cite{Padua:1986:ACO:7902.7904} and software pipelining~\cite{software_pipeline} during code generation to identify and schedule independent instructions. \section{Introduction} As a new technology, Wireless Sensor Networks (WSNs) have a wide range of applications [Culler 2001; Bahl 2002; Akyildiz 2001], including environment monitoring, smart buildings, medical care, and industrial and military applications. Among them, a recent trend is to develop commercial sensor networks that require pervasive sensing of both the environment and human beings, for example, assisted living [Akyildiz 2002; Harvard 2001; CROSSBOW] and smart homes [Harvard 2001; Adya 2001; CROSSBOW]. \begin{quote} ``For these applications, sensor devices are incorporated into human cloths [Natarajan 2001; Zhou 2006; Bahl 2002; Adya 2001] for monitoring health related information like EKG readings, fall detection, and voice recognition''. \end{quote} While collecting all this multimedia information [Akyildiz 2002] requires a high network throughput, off-the-shelf sensor devices only provide very limited bandwidth in a single channel: 19.2Kbps in MICA2 [Bahl 2002] and 250Kbps in MICAz. In this article, we propose MMSN, an abbreviation for Multifrequency Media access control for wireless Sensor Networks. The main contributions of this work can be summarized as follows. \begin{itemize} \item To the best of our knowledge, the MMSN protocol is the first multifrequency MAC protocol especially designed for WSNs, in which each device is equipped with a single radio transceiver and the MAC layer packet size is very small. \item Instead of using pairwise RTS/CTS frequency negotiation [Adya 2001; Culler 2001; Tzamaloukas 2001; Zhou 2006], we propose lightweight frequency assignments, which are good choices for many comparatively static deployed WSNs.
\item We develop new toggle transmission and snooping techniques to enable a single radio transceiver in a sensor device to achieve scalable performance, avoiding the nonscalable ``one control channel + multiple data channels'' design [Natarajan 2001]. \end{itemize} \section{MMSN Protocol} \subsection{Frequency Assignment} We propose a suboptimal distribution to be used by each node, which is easy to compute and does not depend on the number of competing nodes. A natural candidate is an increasing geometric sequence, in which \begin{equation} \label{eqn:01} P(t)=\frac{b^{\frac{t+1}{T+1}}-b^{\frac{t}{T+1}}}{b-1}, \end{equation} where $t=0,{\ldots}\,,T$, and $b$ is a number greater than $1$. In our algorithm, we use the suboptimal approach for simplicity and generality. We need to make the distribution of the back-off time slice selected at each node conform to Equation (\ref{eqn:01}). This is implemented as follows: first, a random variable $\alpha$ with a uniform distribution within the interval $(0, 1)$ is generated on each node; then time slice $i$ is selected according to \[ i=\lfloor(T+1)\log_b[\alpha(b-1)+1]\rfloor. \] It can easily be proven that the distribution of $i$ conforms to Equation (\ref{eqn:01}). Protocols [Bahl 2002; Culler 2001; Zhou 2006; Adya 2001; Tzamaloukas 2001; Akyildiz 2001] that use RTS/CTS controls\footnote{RTS/CTS controls are required to be implemented by 802.11-compliant devices. They can be used as an optional mechanism to avoid Hidden Terminal Problems in the 802.11 standard and in protocols similar to those of [Akyildiz 2001] and [Adya 2001].} for frequency negotiation and reservation are not suitable for WSN applications, even though they exhibit good performance in general wireless ad hoc networks.
\subsubsection{Exclusive Frequency Assignment} In exclusive frequency assignment, nodes first exchange their IDs within two communication hops, so that each node knows its two-hop neighbors' IDs. In the second broadcast, each node beacons all the neighbors' IDs it collected during the first broadcast period. \paragraph{Eavesdropping} Even though the even selection scheme leads to even sharing of the available frequencies among any two-hop neighborhood, it involves a number of two-hop broadcasts. To reduce the communication cost, we propose a lightweight eavesdropping scheme. \subsection{Basic Notations} As Algorithm~\ref{alg:one} states, for each frequency number, each node calculates a random number (${\textit{Rnd}}_{\alpha}$) for itself and a random number (${\textit{Rnd}}_{\beta}$) for each of its two-hop neighbors, using the same pseudorandom number generator. \begin{algorithm}[t] \SetAlgoNoLine \KwIn{Node $\alpha$'s ID ($ID_{\alpha}$), and node $\alpha$'s neighbors' IDs within two communication hops.} \KwOut{The frequency number ($FreNum_{\alpha}$) node $\alpha$ gets assigned.} $index$ = 0; $FreNum_{\alpha}$ = -1\; \Repeat{$FreNum_{\alpha} > -1$}{ $Rnd_{\alpha}$ = Random($ID_{\alpha}$, $index$)\; $Found$ = $TRUE$\; \For{each node $\beta$ in $\alpha$'s two communication hops }{ $Rnd_{\beta}$ = Random($ID_{\beta}$, $index$)\; \If{($Rnd_{\alpha} < Rnd_{\beta}$) \text{or} ($Rnd_{\alpha}$ == $Rnd_{\beta}$ \text{and} $ID_{\alpha} < ID_{\beta}$) }{ $Found$ = $FALSE$; break\; } } \eIf{$Found$}{ $FreNum_{\alpha}$ = $index$\; }{ $index$ ++\; } } \caption{Frequency Number Computation} \label{alg:one} \end{algorithm} Bus masters are divided into two disjoint sets, $\mathcal{M}_{RT}$ and $\mathcal{M}_{NRT}$. \begin{description} \item[RT Masters] $\mathcal{M}_{RT}=\{ \vec{m}_{1},\dots,\vec{m}_{n}\}$ denotes the $n$ RT masters issuing real-time constrained requests.
To model the current request issued by an $\vec{m}_{i}$ in $\mathcal{M}_{RT}$, three parameters---the recurrence time $(r_i)$, the service cycle $(c_i)$, and the relative deadline $(d_i)$---are used, together with the relationships among them. \item[NRT Masters] $\mathcal{M}_{NRT}=\{ \vec{m}_{n+1},\dots,\vec{m}_{n+m}\}$ is a set of $m$ masters issuing nonreal-time constrained requests. In our model, each $\vec{m}_{j}$ in $\mathcal{M}_{NRT}$ needs only one parameter, the service cycle, to model the current request it issues. \end{description} Here a question may arise: since each node has a global ID, why don't we just map the nodes' IDs within two hops to a group of frequency numbers and assign those numbers to all nodes within two hops? \section{Simulator} \label{sec:sim} If the model checker requests successors of a state which have not been created yet, the state space uses the simulator to create the successors on the fly. To create successor states, the simulator conducts the following steps. \begin{enumerate} \item Load the state into the microcontroller model. \item Determine the assignments needed for resolving nondeterminism. \item For each assignment: \begin{enumerate} \item either call the interrupt handler or simulate the effect of the next instruction, or \item evaluate the truth values of atomic propositions. \end{enumerate} \item Return the resulting states. \end{enumerate} Figure~\ref{fig:one} shows a typical microcontroller C program that controls an automotive power window lift. The program is one of the programs used in the case study described in Section~\ref{sec:sim}. At first sight, the program looks like an ANSI~C program. It contains function calls, assignments, if clauses, and while loops. \begin{figure} \centerline{\includegraphics{acmsmall-mouse}} \caption{Code before preprocessing.} \label{fig:one} \end{figure} \subsection{Problem Formulation} The objective of variable coalescence-based offset assignment is to find both the coalescence scheme and the MWPC on the coalesced graph.
We start with a few definitions and lemmas for variable coalescence. \begin{definition}[Coalesced Node (C-Node)]A C-node is a set of live ranges (webs) in the AG or IG that are coalesced. Nodes within the same C-node cannot interfere with each other on the IG. Before any coalescing is done, each live range is a C-node by itself. \end{definition} \begin{definition}[C-AG (Coalesced Access Graph)]The C-AG is the access graph after node coalescence, which is composed of all C-nodes and C-edges. \end{definition} \begin{lemma} The C-MWPC problem is NP-complete. \end{lemma} \begin{proof} The MWPC problem can easily be reduced to C-MWPC by assuming a coalescence graph without any edges or a fully connected interference graph. In that case, each C-node is an uncoalesced live range after value separation, and C-PC is equivalent to PC. A fully connected interference graph arises when all live ranges interfere with each other. Thus, the C-MWPC problem is NP-complete. \end{proof} \begin{lemma} The solution to the C-MWPC problem is no worse than the solution to the MWPC. \end{lemma} \begin{proof} Simply, any solution to the MWPC is also a solution to the C-MWPC, but some solutions to C-MWPC may not apply to the MWPC (if any coalescing was made). \end{proof} \section{Performance Evaluation} During all the experiments, the Geographic Forwarding (GF) [Akyildiz 2001] routing protocol is used. GF exploits geographic information of nodes and conducts local data forwarding to achieve end-to-end routing. Our simulation is configured according to the settings in Table~\ref{tab:one}. Each run lasts for 2 minutes and is repeated 100 times. For each data value we present in the results, we also give its 90\% confidence interval.
\begin{table}
\tbl{Simulation Configuration\label{tab:one}}{%
\begin{tabular}{|l|l|} \hline TERRAIN{$^a$} & (200m$\times$200m) Square\\\hline Node Number & 289\\\hline Node Placement & Uniform\\\hline Application & Many-to-Many/Gossip CBR Streams\\\hline Payload Size & 32 bytes\\\hline Routing Layer & GF\\\hline MAC Layer & CSMA/MMSN\\\hline Radio Layer & RADIO-ACCNOISE\\\hline Radio Bandwidth & 250Kbps\\\hline Radio Range & 20m--45m\\\hline \end{tabular}}
\begin{tabnote}
\Note{Source:}{This is a table sourcenote. This is a table sourcenote. This is a table sourcenote.}
\vskip2pt
\Note{Note:}{This is a table footnote.}
\tabnoteentry{$^a$}{This is a table footnote. This is a table footnote. This is a table footnote.}
\end{tabnote}
\end{table}
\section{Conclusions} In this article, we develop the first multifrequency MAC protocol for WSN applications in which each device adopts a single radio transceiver. The different MAC design requirements for WSNs and general wireless ad hoc networks are compared, and a complete WSN multifrequency MAC design (MMSN) is put forth. During the MMSN design, we analyze and evaluate different choices for frequency assignment and also discuss the nonuniform back-off algorithms for the slotted media access design. \section{Typical references in new ACM Reference Format} A paginated journal article \cite{Abril07}, an enumerated journal article \cite{Cohen07}, a reference to an entire issue \cite{JCohen96}, a monograph (whole book) \cite{Kosiur01}, a monograph/whole book in a series (see 2a in spec. document) \cite{Harel79}, a divisible-book such as an anthology or compilation \cite{Editor00} followed by the same example, however we only output the series if the volume number is given \cite{Editor00a} (so Editor00a's series should NOT be present since it has no vol.
no.), a chapter in a divisible book \cite{Spector90}, a chapter in a divisible book in a series \cite{Douglass98}, a multi-volume work as book \cite{Knuth97}, an article in a proceedings (of a conference, symposium, or workshop, for example) (paginated proceedings article) \cite{Andler79}, a proceedings article with all possible elements \cite{Smith10}, an example of an enumerated proceedings article \cite{VanGundy07}, an informally published work \cite{Harel78}, a doctoral dissertation \cite{Clarkson85}, a master's thesis \cite{anisi03}, an online document / world wide web resource \cite{Thornburg01}, \cite{Ablamowicz07}, \cite{Poker06}, a video game (Case 1) \cite{Obama08} and (Case 2) \cite{Novak03} and \cite{Lee05} and (Case 3) a patent \cite{JoeScientist001}, work accepted for publication \cite{rous08}, a 'YYYYb' test for a prolific author \cite{SaeediMEJ10} and \cite{SaeediJETC10}. Other cites might contain 'duplicate' DOIs and URLs (some SIAM articles) \cite{Kirschmer:2010:AEI:1958016.1958018}. Boris / Barbara Beeton: multi-volume works as books \cite{MR781536} and \cite{MR781537}. \subsection{The building blocks: SIMD instructions} The key to high performance is to use the Single Instruction Multiple Data (SIMD) vector instructions available on many modern processors. We assume that all computation involves double precision arithmetic and that each vector register can store $v$ double precision floating point numbers, where $v$ is a power of two. In addition, we assume that the following classes of vector instructions are available: \begin{enumerate} \item Vector {\bf Stores.} Vector store instructions write all $v$ elements of a vector register to memory. \item Vector {\bf Loads.} Load instructions read $u$ unique elements of data from memory, where $u~\le~v$ and $u$ is a power of two. An element is considered unique if it resides in a unique memory address. In cases where $u~<~v$, each of the $u$ unique elements is duplicated $v/u$ times.
We assume that all elements loaded by a single Load instruction are within $v$ memory addresses of each other\footnote{This implies that our framework does not handle vector gather instructions. However, these Gather/Scatter instructions are not required in dense linear algebra kernels, as matrices are often packed for locality.}. Prefetches are considered Load instructions. \item Vector {\bf Shuffles.} Shuffle instructions reorder and/or duplicate the elements in a vector register. We restrict ourselves to instructions that can be represented by a $v \times v$ matrix in which each row contains exactly $v-1$ {\em zeros} and a single {\em one}, but each column may contain multiple {\em ones}. In addition, we assume that the number of {\em ones} in each column is a power of two. \item Vector {\bf Computation.} We assume that instructions performing computation on vector registers are element-wise operations. This means that, given vector registers $\texttt{reg\_a}$ and $\texttt{reg\_b}$, the output is of the form: \[ \texttt{reg\_a} \: \mbox{op} \: \texttt{reg\_b} = \left( \begin{array}{rcl} \alpha_0&op_0&\beta_0 \\ \alpha_1&op_1&\beta_1 \\ &\vdots \\ \alpha_{v-1}&op_{v-1}&\beta_{v-1} \end{array} \right), \] where $op_i$ and $op_j$, $i \neq j$, may be different binary operators. The result of the computation may be stored in one of the input registers, or in a third vector register. \item {\bf Composite Instructions.} Some instructions -- we will call them Composite Instructions -- can be viewed as a combination of some of the previous types of instructions. For example, the instruction \begin{center} {\texttt{vfmadd231pd reg, reg, mem\{1to8\}} } \end{center} on the Xeon Phi can be expressed as a Load instruction, followed by a broadcast (Shuffle) instruction, followed by a fused multiply-add (Computation) instruction.
\end{enumerate} \subsection{Mapping unit updates to SIMD instructions} Given the available SIMD instructions, one possible size of a single unit update is for $m_v = v$ and $n_u = 1$, where $v$ is the size of a SIMD register. This means that $v$ values from $a$, and a single value of $b$ is loaded into two SIMD registers, \texttt{reg\_a} and \texttt{reg\_b}. In addition, we know that the loaded value of $b$ is duplicated $v$ times because $n_u~<~v$. Computing with \texttt{reg\_a} and \texttt{reg\_b} will yield a single unit update of size $v \times 1$. A single outer-product of $m_r \times n_r$ can then be computed through $m_r/m_v \times n_r/n_u$ multiple unit updates as shown in Figure~\ref{fig:broadcast_algo}. \begin{figure} {\small \begin{tabular}{ccc} \texttt{reg\_a} & \texttt{reg\_b} & registers storing $C^{i,j}$ \\ \begin{minipage}{0.5in} \[ \begin{array}{|c|} \hline \alpha_0 \\ \alpha_1 \\ \alpha_2 \\ \alpha_3 \\ \hline \end{array} \] \end{minipage} & \begin{minipage}{0.5in} \[ \begin{array}{|c|} \hline \beta_0 \\ \beta_0 \\ \beta_0 \\ \beta_0 \\ \hline \end{array} \] \end{minipage} & \begin{minipage}{1in} \[ \begin{array}{|>{\columncolor{olive!20}}c|} \hline \chi_{00} \\ \chi_{10} \\ \chi_{20} \\ \chi_{30} \\ \hline \end{array} \begin{array}{|c|} \hline \chi_{01} \\ \chi_{11} \\ \chi_{21} \\ \chi_{31} \\ \hline \end{array} \begin{array}{|c|} \hline \chi_{02} \\ \chi_{12} \\ \chi_{22} \\ \chi_{32} \\ \hline \end{array} \begin{array}{|c|} \hline \chi_{03} \\ \chi_{13} \\ \chi_{23} \\ \chi_{33} \\ \hline \end{array} \] \end{minipage} \\ \begin{minipage}{0.5in} \[ \begin{array}{|c|} \hline \alpha_0 \\ \alpha_1 \\ \alpha_2 \\ \alpha_3 \\ \hline \end{array} \] \end{minipage} & \begin{minipage}{0.5in} \[ \begin{array}{|c|} \hline \beta_1 \\ \beta_1 \\ \beta_1 \\ \beta_1 \\ \hline \end{array} \] \end{minipage} & \begin{minipage}{1in} \[ \begin{array}{|c|} \hline \chi_{00} \\ \chi_{10} \\ \chi_{20} \\ \chi_{30} \\ \hline \end{array} 
\begin{array}{|>{\columncolor{olive!20}}c|} \hline \chi_{01} \\ \chi_{11} \\ \chi_{21} \\ \chi_{31} \\ \hline \end{array} \begin{array}{|c|} \hline \chi_{02} \\ \chi_{12} \\ \chi_{22} \\ \chi_{32} \\ \hline \end{array} \begin{array}{|c|} \hline \chi_{03} \\ \chi_{13} \\ \chi_{23} \\ \chi_{33} \\ \hline \end{array} \] \end{minipage} \\ \begin{minipage}{0.5in} \[ \begin{array}{|c|} \hline \alpha_0 \\ \alpha_1 \\ \alpha_2 \\ \alpha_3 \\ \hline \end{array} \] \end{minipage} & \begin{minipage}{0.5in} \[ \begin{array}{|c|} \hline \beta_2 \\ \beta_2 \\ \beta_2 \\ \beta_2 \\ \hline \end{array} \] \end{minipage} & \begin{minipage}{1in} \[ \begin{array}{|c|} \hline \chi_{00} \\ \chi_{10} \\ \chi_{20} \\ \chi_{30} \\ \hline \end{array} \begin{array}{|c|} \hline \chi_{01} \\ \chi_{11} \\ \chi_{21} \\ \chi_{31} \\ \hline \end{array} \begin{array}{|>{\columncolor{olive!20}}c|} \hline \chi_{02} \\ \chi_{12} \\ \chi_{22} \\ \chi_{32} \\ \hline \end{array} \begin{array}{|c|} \hline \chi_{03} \\ \chi_{13} \\ \chi_{23} \\ \chi_{33} \\ \hline \end{array} \] \end{minipage} \\ \begin{minipage}{0.5in} \[ \begin{array}{|c|} \hline \alpha_0 \\ \alpha_1 \\ \alpha_2 \\ \alpha_3 \\ \hline \end{array} \] \end{minipage} & \begin{minipage}{0.5in} \[ \begin{array}{|c|} \hline \beta_3 \\ \beta_3 \\ \beta_3 \\ \beta_3 \\ \hline \end{array} \] \end{minipage} & \begin{minipage}{1in} \[ \begin{array}{|c|} \hline \chi_{00} \\ \chi_{10} \\ \chi_{20} \\ \chi_{30} \\ \hline \end{array} \begin{array}{|c|} \hline \chi_{01} \\ \chi_{11} \\ \chi_{21} \\ \chi_{31} \\ \hline \end{array} \begin{array}{|c|} \hline \chi_{02} \\ \chi_{12} \\ \chi_{22} \\ \chi_{32} \\ \hline \end{array} \begin{array}{|>{\columncolor{olive!20}}c|} \hline \chi_{03} \\ \chi_{13} \\ \chi_{23} \\ \chi_{33} \\ \hline \end{array} \] \end{minipage} \\ \end{tabular} } \caption{SIMD computation of a $4\times4$ outer-product using four unit updates of size $4 \times 1$. 
The shaded register represents the register being updated during the particular stage of computation.} \label{fig:broadcast_algo} \end{figure} Alternatively, a different algorithm emerges when $m_v = v$ and $n_u = 2$. The difference is that \texttt{reg\_b} now contains two unique values, each duplicated $v/2$ times. After the first computation is performed, the values in \texttt{reg\_b} have to be shuffled before the next computation can be performed. This computation-shuffle cycle has to be repeated at least $n_u-1$ times in order to compute a single unit update. Pictorially, this is shown in Figure~\ref{fig:shuffle2}. The astute reader will recognize that we could have chosen to shuffle the values in \texttt{reg\_a} instead, without significantly changing the computation of the unit update. \begin{figure} {\small \input fig_shuffle } \caption{SIMD computation of a $4\times4$ outer-product using two unit updates of size $4 \times 2$. As there are multiple (2) unique values, vector shuffles must be performed to compute each unit update. Shaded registers denote the output registers for the current stage of computation.} \label{fig:shuffle2} \end{figure} \subsection{A family of outer-product algorithms} What we have learnt from the two algorithms described previously is the following: \begin{itemize} \item[--] The size of a single unit update is determined by the number of unique values loaded into registers \texttt{reg\_a} and \texttt{reg\_b}. \item[--] When there is more than one unique value in registers \texttt{reg\_a} and \texttt{reg\_b}, the number of computation-shuffle stages required is the minimum of $m_v$ and $n_u$. \item[--] Loading more unique values into \texttt{reg\_b} reduces the number of Loads of $b$ from the L1 cache, at the cost of increasing the number of shuffles required to compute the unit update. \end{itemize} Given that we chose not to shuffle \texttt{reg\_a}, this means that there are $\log_2(v)+1$ different ways of picking $n_u$, i.e.
the number of unique elements loaded into \texttt{reg\_b} (while still being a power of 2)\footnote{ In practice, there are fewer than $\log_2(v)+1$ ways in which data can be loaded into the registers, as the Load instructions for a particular $n_u$ value may not be available on the targeted architecture. For example, the $4\times2$ unit update cannot be implemented on most x86 architectures. The limited shuffle instructions also serve to limit the combinatorial explosion in implementations.}. For a given choice of $n_u$, the different ways in which the data in \texttt{reg\_b} can be shuffled yield different implementations of the unit update. By accumulating the instructions for computing all $m_r/m_v \times n_r/n_u$ unit updates, different sets of instructions, or {\em instruction mixes}, describing different implementations of the outer-product can be obtained. Recall that different numbers of loaded unique elements result in different numbers of required loads and shuffle stages. On different architectures, the cost (in terms of latency) of loads and shuffles may differ, which suggests the need for a means to estimate the cost of computing with a given instruction mix.
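As a first step toward such an estimate, the instruction mix for a given kernel configuration can be tallied mechanically and fed to the queuing model used earlier for kernel selection. The C sketch below is our own bookkeeping (names are ours), specialized to the $n_u = 1$ broadcast algorithm, in which every element of $b$ is loaded with duplication (one Load plus one Shuffle on Sandy Bridge):

```c
/* Instruction counts for one m_r x n_r outer-product with n_u = 1. */
typedef struct { int load, mul, add, shuffle; } mix_t;

static mix_t count_mix(int m_r, int n_r, int v)
{
    mix_t m;
    m.load    = m_r / v + n_r;    /* a-vector loads + b broadcast loads */
    m.shuffle = n_r;              /* duplication half of each b load    */
    m.mul     = (m_r / v) * n_r;  /* one vector multiply per unit update */
    m.add     = (m_r / v) * n_r;  /* one vector add per unit update      */
    return m;
}

/* Bottleneck throughput (outer-products per cycle), given the per-cycle
   rates of the load, computation and shuffle pipelines. */
static double mix_throughput(mix_t m, double r_load, double r_cmp,
                             double r_shuf)
{
    double t = r_load / m.load;
    if (r_cmp / m.mul < t)      t = r_cmp / m.mul;
    if (r_cmp / m.add < t)      t = r_cmp / m.add;
    if (r_shuf / m.shuffle < t) t = r_shuf / m.shuffle;
    return t;
}
```

For $m_r = n_r = v = 4$ this reproduces the earlier mix (5 Loads, 4 multiplies, 4 adds, 4 Shuffles) and the $0.25$ outer-products-per-cycle estimate.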
\section{Introduction} Recently, a complex short pulse (CSP) equation~\cite{Feng_ComplexSPE,KYKPRE14} \begin{equation} q_{xt}+q+\frac{1}{2}\left(|q|^{2}q_{x}\right) _{x}=0\,, \label{CSP} \end{equation}% was proposed as an improvement of the short pulse (SP) equation \begin{equation} u_{xt}=u+\frac{1}{6}\left( u^{3}\right) _{xx}\,, \label{SPE} \end{equation}% proposed by Sch\"{a}fer and Wayne~\cite{SPE_Org} to describe the propagation of ultra-short optical pulses in nonlinear media. In contrast with the real-valued function $u=u(x,t)$ in Eq.~(\ref{SPE}), $q=q(x,t)$ in Eq.~(\ref{CSP}) is a complex-valued function. Since a complex-valued function can contain the information of both amplitude and phase, it is more appropriate for the description of optical waves~\cite{Yarivbook}. It was shown that the CSP equation (\ref{CSP}) is integrable in the sense that it admits a Lax pair and multi-soliton solutions~\cite{Feng_ComplexSPE,FengShen_ComplexSPE,FLZPhysD,FMO-PJMI}. In contrast with the one-soliton solution (loop soliton) of the SP equation, which has no physical interpretation~\cite{Sakovich2,Matsuno_SPE}, the one-soliton solution of the CSP equation is an envelope soliton with a few optical cycles~\cite{Feng_ComplexSPE}. Besides the envelope soliton solution, the CSP equation possesses rogue wave solutions of any order, in analogy with the nonlinear Schr\"odinger (NLS) equation~\cite{FLZPhysD}. The CSP equation can be viewed as an analogue of the NLS equation in the ultra-short regime, when the width of the optical pulse is of the order $10^{-15}$ s. It is well known that the NLS equation describes the evolution of slowly varying wave packets in weakly nonlinear dispersive media under the quasi-monochromatic assumption, and it has been very successful in many applications such as nonlinear optics and water waves~\cite{Yarivbook,Kodamabook,Agrawalbook,Ablowitzbook}.
However, when the width of the optical pulse is of the order of a femtosecond ($10^{-15}$ s), the width of the spectrum of such an ultra-short pulse is approximately of the order $10^{15}$ s$^{-1}$, and the monochromatic assumption used to derive the NLS equation is no longer valid~\cite{Rothenberg}. The description of ultra-short pulses requires a modification of the standard slowly varying envelope models based on the NLS equation. This is the motivation for the study of the short pulse equation, the CSP equation, and their coupled models. The CSP equation is mathematically related to a two-component short pulse (2-SP) equation proposed independently by Dimakis and M\"uller-Hoissen~\cite{Hoissen_CSPE} and by Matsuno~\cite{Matsuno_CSPE}. If we take $u={\text{Re}} (q)$ and $v={\text{Im}} (q)$, then the 2-SP equation in \cite{Hoissen_CSPE,Matsuno_CSPE} becomes the CSP equation. We showed that the CSP equation can be derived from the motion of space curves and provided an alternative multi-soliton solution in terms of determinants based on the KP-hierarchy reduction technique~\cite{FengShen_ComplexSPE}. The multi-breather and higher-order rogue wave solutions of the CSP equation were constructed by the Darboux transformation method in \cite{FLZPhysD}. Furthermore, we have constructed an integrable semi-discrete CSP equation and applied it as a self-adaptive moving mesh method for numerical simulations of the CSP equation~\cite{FMO-PJMI}. Since the NLS equation has focusing and defocusing cases, which admit bright and dark soliton solutions, respectively, it is natural that the CSP equation should also have focusing and defocusing types, which may be proposed as \begin{equation} q_{xt}+q+\frac{1}{2} \sigma \left( |q|^{2}q_{x}\right) _{x}=0\,, \label{gCSP} \end{equation} where $\sigma=1$ represents the focusing case, and $\sigma=-1$ stands for the defocusing case. It turns out that this is indeed the case.
As with the focusing CSP equation discussed in \cite{Feng_ComplexSPE,FengShen_ComplexSPE}, the defocusing CSP equation can also occur in nonlinear optics, when ultra-short pulses propagate in a nonlinear medium of defocusing type~\cite{FenglingzhuPRE}. In the present paper, we study the defocusing CSP equation \begin{equation} q_{xt}+q-\frac{1}{2}\left( |q|^{2}q_{x}\right) _{x}=0\,, \label{dCSP} \end{equation} both geometrically and algebraically. The goal of the present paper is twofold. The first is to investigate the geometric meaning of the defocusing CSP equation, especially its connection with the motion of space curves. The second is to find out how these equations are reduced from the extended KP hierarchy and to construct their $N$-soliton solutions. The remainder of this paper is organized as follows. In Section 2, we first establish the connection between the motion of space curves in the Minkowski space $\mathbf{R}^{2,1}$ and the defocusing CSP equation via a hodograph (reciprocal) transformation. The Lax pair is constructed geometrically to assure the integrability of the defocusing CSP equation. Then, starting from the fundamental forms of surfaces embedded in $\mathbf{R}^3$ and $\mathbf{R}^{2,1}$, the focusing and defocusing complex coupled dispersionless (CCD) systems are derived, respectively. The curve flows are also made clear. In Section 3, starting from a set of bilinear equations of the single-component extended KP hierarchy, together with their tau functions, we deduce the defocusing CSP equation by the KP-hierarchy reduction method. Meanwhile, as a by-product, the $N$-dark-soliton solution is obtained. Section 4 is devoted to concluding remarks. \section{Geometric Formulations} It has been known for several decades that there are deep connections between differential geometry and the theory of integrable systems; various integrable differential and difference equations arise from either curve dynamics or the theory of surfaces.
For example, the sine-Gordon (sG) equation, the modified KdV (mKdV) equation and the NLS equation arise from the compatibility conditions of the motion of either plane or space curves, as shown by many researchers including Lamb and Hasimoto~\cite{Lamb,Hasimoto,GoldsteinPRL,NakayamaPRL,DoliwaPLA94,NakayamaJPSJ98,Calini00,Ivey,QuChouPhyD}. On the other hand, a broader class of soliton equations, including the ones mentioned above, can also be derived from the theory of surfaces (see the pioneering work in \cite{Sasaki79,Chern80,Symsoliton,Tafel,SymVII,Symloopsoliton,Terng97} and also the book by Rogers and Schief~\cite{RogerSchiefbook}). In this section, a link between the CSP equation and the motion of curves, as well as surfaces, in three-dimensional space is established. \subsection{The link with the motion of space curves in Minkowski space} In the study of curve flows of soliton equations, it has been shown that we often have to turn from Euclidean space to Minkowski space when we attempt to find links to soliton equations such as the defocusing NLS equation~\cite{NakayamaJPSJ98,Anco16}. This is also the case when we attempt to establish a link between the defocusing CSP equation (\ref{dCSP}) and the motion of space curves, as will be shown in this subsection. Firstly, note that the CSP equation (\ref{gCSP}) admits the following conservation law \begin{equation} \label{gCSP_conv} (\sqrt{1+\sigma|q_x|^2})_t + \frac{1}{2} \sigma (|q|^2\sqrt{1+\sigma|q_x|^2})_x=0\,, \end{equation} which allows us to define a hodograph (reciprocal) transformation \begin{equation} \mathrm{d}s=\rho^{-1} \mathrm{d}x - \frac{1}{2} \sigma \rho^{-1} {|q|^2} \mathrm{d}t,\quad \, \mathrm{d}y=-\mathrm{d}t, \end{equation} where $\rho^{-1}=\sqrt{1+\sigma|q_x|^2}$.
By doing so, one can obtain the differential conversion formulae between $(x,t)$ and $(y,s)$ \begin{equation} \partial_x = \rho^{-1} \partial_s\,, \quad \partial_t=-\partial_y- \frac{1}{2} \sigma \rho^{-1} {|q|^2} \partial_s\,, \end{equation} or \begin{equation} \partial_s = \rho \partial_x\,, \quad \partial_y=-\partial_t- \frac{1}{2} \sigma {|q|^2} \partial_x\,. \end{equation} Therefore, the CSP equation (\ref{gCSP}) is converted into \begin{equation} \label{CCDa} q_{ys}=\rho q\,, \end{equation} while the conservation law (\ref{gCSP_conv}) becomes \begin{equation} \label{CCDb} \rho_y + \frac{1}{2}\sigma (|q|^2)_s=0\,. \end{equation} We remark here that equations (\ref{CCDa}) and (\ref{CCDb}) constitute a coupled nonlinear system. When $s$ is viewed as a spatial variable, and $y$ as a temporal variable, the system for $\sigma=1$ is called the focusing complex coupled dispersionless (CCD) system, which has been studied in \cite{KonnoKakuhata2} and related references. For some reason, the system (\ref{CCDa}) and (\ref{CCDb}) for $\sigma=-1$ has been overlooked in the past and its soliton solution has not been studied yet. In what follows, we reformulate the system (\ref{CCDa}) and (\ref{CCDb}) with $\sigma=-1$ geometrically in order to make clear the geometric interpretation of the defocusing CSP equation (\ref{dCSP}). It can be easily checked that the quantity $\rho^2+\sigma |q_s|^2$ is independent of $y$; thus, we can assume $\rho^2+\sigma |q_s|^2=1$ without loss of generality. In this case, if we assume $Q=q_s$, the system can be simplified into a single equation \begin{equation} \label{CsG_general} \left(\frac{Q_y}{\sqrt{1-\sigma|Q|^2}} \right)_s=Q\,, \end{equation} which is called the complex sine-Gordon equation for $\sigma=1$~\cite{Symsoliton,SymVII}. Eq. (\ref{CsG_general}) is also a reduction of a so-called vector sine-Gordon equation~\cite{JWang1}.
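The $y$-independence of $\rho ^{2}+\sigma |q_{s}|^{2}$ used above can be verified directly from (\ref{CCDa}) and (\ref{CCDb}):
\begin{equation*}
\partial _{y}\left( \rho ^{2}+\sigma |q_{s}|^{2}\right) =2\rho \rho _{y}+\sigma \left( q_{ys}q_{s}^{\ast }+q_{s}q_{ys}^{\ast }\right) =2\rho \rho _{y}+\sigma \rho \left( |q|^{2}\right) _{s}=2\rho \left( \rho _{y}+\frac{\sigma }{2}(|q|^{2})_{s}\right) =0\,.
\end{equation*}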
If $Q=q_s$ is a real-valued function, the complex sine-Gordon equation (\ref{CsG_general}) with $\sigma=1$ leads to the sine-Gordon equation $\theta_{ys}=\sin \theta$ by setting $Q=q_s=\sin \theta$ and $\rho=\cos \theta$. In the case of $\sigma=-1$, if $Q=q_s$ is a real-valued function, we obtain the sinh-Gordon equation $\theta_{ys}=\sinh \theta$ from (\ref{CsG_general}) by setting $Q=q_s=\sinh \theta$ and $\rho=\cosh \theta$. Thus we can call (\ref{CsG_general}) the complex sine-Gordon equation for $\sigma=1$ and the complex sinh-Gordon equation for $\sigma=-1$. It was pointed out in \cite{FengShen_ComplexSPE,FMO-PJMI} that the focusing CCD system is linked to the focusing CSP equation by a hodograph (reciprocal) transformation, by which the Lax pair of the focusing CSP equation was established. Both the generalized CD equation and the focusing CSP equation were interpreted as the motion of space curves in Euclidean space~\cite{FengShen_ComplexSPE}. In what follows, we proceed to study the relationship between the defocusing CSP equation (\ref{dCSP}) and the motion of space curves. To this end, we consider the motion of curves lying on a surface $S$ embedded in the Minkowski space $\mathbf{R}^{2,1}$ equipped with the Lorentz metric $dl^2=-dx_{1}^{2}+dx_{2}^{2}+dx_{3}^{2}$. For any $\vec{x}=(x_{1},x_{2},x_{3})$, $\vec{y}=(y_{1},y_{2},y_{3})$ in $\mathbf{R}^{2,1}$, the scalar product is defined as $\langle \vec{x},\vec{y}\rangle=-x_{1}y_{1}+x_{2}y_{2}+x_{3}y_{3}$ and the vector product is defined as $\vec{x}\times \vec{y}=(x_{3}y_{2}-x_{2}y_{3},x_{3}y_{1}-x_{1}y_{3},x_{1}y_{2}-x_{2}y_{1})$.
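Note that, with these definitions, the vector product is orthogonal to both of its factors with respect to the Lorentz scalar product,
\begin{equation*}
\langle \vec{x}\times \vec{y},\vec{x}\rangle =\langle \vec{x}\times \vec{y},\vec{y}\rangle =0\,,
\end{equation*}
as can be checked by a direct expansion; this ensures that $\mathbf{N}\times \mathbf{T}$ introduced below is orthogonal to both $\mathbf{N}$ and $\mathbf{T}$, as required for the Darboux frame.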
Moreover, we use the Darboux frame $\{\mathbf{T},\mathbf{N},\mathbf{t}\}$ attached to a curve $\vec{\textbf{r}} (y,s)$ parameterized by the arc-length: \begin{eqnarray*} \mathbf{T} &=&\vec{\textbf{r}}_{y}\qquad \text{ (the unit tangent vector)}\,, \\ \mathbf{N} &=&\mathbf{N(\vec{\textbf{r}} )}\qquad \text{ (the unit normal vector of a surface $S$)}\,, \\ \mathbf{t} &=&\mathbf{N\times T}\qquad \text{(the tangent normal vector)}\,, \end{eqnarray*}% where $y$ stands for the arc length and $s$ represents the time. The general equation (the Darboux equation) for the orthogonal triad $\{\mathbf{T},\mathbf{N},\mathbf{t}\}$ along the curve takes the form \begin{equation} \left[ \begin{array}{c} \mathbf{T} \\ \mathbf{t} \\ \mathbf{N}% \end{array}% \right] _{y}=\left[ \begin{array}{ccc} 0 & \kappa _{g} & \kappa _{n} \\ \kappa _{g} & 0 & \tau _{r} \\ \kappa _{n} & -\tau _{r} & 0% \end{array}% \right] \left[ \begin{array}{c} \mathbf{T} \\ \mathbf{t} \\ \mathbf{N}% \end{array}% \right]\,, \end{equation}% where $\kappa_g$ is the geodesic curvature, $\kappa_n$ is the normal curvature and $\tau_r$ is the relative torsion (geodesic torsion) while the general temporal evolution of $\gamma$ can be expressed as \begin{equation} \left[ \begin{array}{c} \mathbf{T} \\ \mathbf{t} \\ \mathbf{N}% \end{array}% \right] _{s}=\left[ \begin{array}{ccc} 0 & \alpha & \beta \\ \alpha & 0 & \gamma \\ \beta & -\gamma & 0% \end{array}% \right] \left[ \begin{array}{c} \mathbf{T} \\ \mathbf{t} \\ \mathbf{N}% \end{array}% \right]\,. \end{equation}% The compatibility conditions lead to the following system \begin{equation} \kappa_{g,s}=\alpha _{y}+\kappa _{n}\gamma -\tau _{r}\beta \,, \label{3Dcurve_Integrabilitym1} \end{equation}% \begin{equation} \kappa_{n,s}=\beta _{y}-\kappa _{g}\gamma +\tau _{r}\alpha \,, \label{3Dcurve_Integrabilitym2} \end{equation}% \begin{equation} \tau_{r,s}=\gamma _{y}-\kappa _{g}\beta +\kappa _{n}\alpha \,. 
\label{3Dcurve_Integrabilitym3} \end{equation}% Combining Eq.(\ref{3Dcurve_Integrabilitym1}) with Eq.(\ref{3Dcurve_Integrabilitym2}), we have \begin{equation} (\kappa _{g}+\mathrm{i}\kappa _{n})_{s}=(\alpha +\mathrm{i}\beta )_{y}-% \mathrm{i}\gamma (\kappa _{g}+\mathrm{i}\kappa _{n})+\mathrm{i}\tau _{r}(\alpha +\mathrm{i}\beta )\,. \label{3Dcurve_Integrabilitym4} \end{equation} If we choose \begin{equation} \kappa _{g}+\mathrm{i}\kappa _{n}=-\mathrm{i}q\,,\quad \tau _{r}=-c^{-1} \,, \label{CCD_assup1} \end{equation}% \begin{equation} \alpha +\mathrm{i}\beta =cq_{s}\,,\quad \gamma =c\rho\,, \label{CCD_assup2} \end{equation} then Eq.(\ref{3Dcurve_Integrabilitym4}) becomes \begin{equation} \label{CCDa1} q_{ys}=\rho q. \end{equation}% On the other hand, since \begin{eqnarray*} \frac{1}{2}\left( |q|^{2}\right) _{s} &=&\frac{1}{2}(qq_{s}^{\ast }+q_{s}q^{\ast }), \\ &=&\frac{1}{2c}\left( \mathrm{i}(\alpha -\mathrm{i}\beta )(\kappa _{g}+% \mathrm{i}\kappa _{n})-\mathrm{i}(\kappa _{g}-\mathrm{i}\kappa _{n})(\alpha +% \mathrm{i}\beta )\right) , \\ &=&-\frac{1}{c}(\kappa _{n}\alpha -\kappa _{g}\beta ), \end{eqnarray*}% one has \begin{equation} \rho_{y}-\frac{1}{2}(|q|^{2})_{s}=0\,, \label{CCDb1} \end{equation}% from Eq.(\ref{3Dcurve_Integrabilitym3}). Thus the link of the defocusing CCD system to the motion of space curves in Minkowski space $\mathbf{R}^{2,1}$ is established. Recall the Lie group \begin{equation*} SU(1,1)= \{g \in SL(2, C) \mid g^*Jg=J\}\,, \end{equation*}% where $J={\text{diag}}(1, -1)$, and the Lie algebra \begin{equation*} su(1,1)= \left\{ \begin{pmatrix} \mathrm{i}x & z \\ z^* & -\mathrm{i}x \end{pmatrix} \mid x \in R, z \in C \right\}\,. 
\end{equation*}% We choose the basis of $su(1,1)$ as \begin{equation*} \mathrm{\tilde{e}}_{1}=\frac{1}{2}\left( \begin{array}{cc} \mathrm{i} & 0 \\ 0 & -\mathrm{i}% \end{array}% \right) ,\quad \mathrm{\tilde{e}}_{2}=\frac{1}{2}\left( \begin{array}{cc} 0 & 1 \\ 1 & 0% \end{array}% \right) ,\quad \mathrm{\tilde{e}}_{3}=\frac{1}{2}\left( \begin{array}{cc} 0 & \mathrm{i} \\ -\mathrm{i} & 0% \end{array}% \right)\,, \end{equation*} which satisfy the commutation relations \begin{equation*} [\mathrm{\tilde{e}}_{1}, \mathrm{\tilde{e}}_{2}]=\mathrm{\tilde{e}}_{3}\,, \quad [\mathrm{\tilde{e}}_{2},\mathrm{\tilde{e}}_{3}]=-\mathrm{\tilde{e}}_{1}\,, \quad [\mathrm{\tilde{e}}_{3}, \mathrm{\tilde{e}}_{1}]=\mathrm{\tilde{e}}_{2}\,. \end{equation*} We identify $\mathbf{R}^{2,1}$ with $su(1,1)$ via $r=-Z \mathrm{\tilde{e}}_{1} +Y \mathrm{\tilde{e}}_{2}-X \mathrm{\tilde{e}}_{3} \rightarrow \vec{\mathbf{r}}=(X,Y,Z)^T$; then \begin{equation} \langle \vec{r}, \vec{w}\rangle = -2{\text{tr}}(rw)\,, \quad \vec{r} \times \vec{w}= r \times w\,.
\end{equation} On the other hand, recall the Lie group \begin{equation*} SO(1,2)= \{A \in SL(3) \mid A^TI_{1,2}A=I_{1,2}\}\,, \end{equation*}% where $I_{1,2}={\text{diag}} (-1, 1,1)$, and the Lie algebra \begin{equation*} so(1,2)= \left\{ \begin{pmatrix} 0 & X & Y \\ X & 0 & Z \\ Y & -Z & 0 \end{pmatrix} \mid X, Y,Z \in R \right\}\,. \end{equation*}% We choose the basis of $so(1,2)$ as \begin{equation*} \mathrm{\tilde{L}}_{1}=\left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0% \end{array}% \right) ,\quad \mathrm{\tilde{L}}_{2}=\left( \begin{array}{ccc} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0% \end{array}% \right) ,\quad \mathrm{\tilde{L}}_{3}=\left( \begin{array}{ccc} 0 & -1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0% \end{array}% \right)\,, \end{equation*} which satisfy the commutation relations \begin{equation*} [\mathrm{\tilde{L}}_{1}, \mathrm{\tilde{L}}_{2}]=\mathrm{\tilde{L}}_{3}\,, \quad [\mathrm{\tilde{L}}_{2},\mathrm{\tilde{L}}_{3}]=-\mathrm{\tilde{L}}_{1}\,, \quad [\mathrm{\tilde{L}}_{3}, \mathrm{\tilde{L}}_{1}]=\mathrm{\tilde{L}}_{2}\,. \end{equation*} We identify $\mathbf{R}^{2,1}$ with $so(1,2)$ via $r=-Z \mathrm{\tilde{L}}_{1} +Y \mathrm{\tilde{L}}_{2}-X \mathrm{\tilde{L}}_{3} \rightarrow \vec{\mathbf{r}}=(X,Y,Z)^T$. Obviously, there is an isomorphism between the Lie algebras $su(1,1)$ and $so(1,2)$, which is reflected by the correspondence $\mathrm{\tilde{L}}_{j}\leftrightarrow \mathrm{\tilde{e}}_{j}$ ($% j=1,2,3$).
Based on this fact, we can easily construct the Lax pair for the defocusing CCD system geometrically as follows \begin{equation} \Psi _{y}=U\Psi ,\quad \Psi _{s}=V\Psi \,, \end{equation}% where \begin{eqnarray} U &=&-\kappa _{g}\mathrm{\tilde{e}}_{3}+\kappa _{n}\mathrm{\tilde{e}}_{2}-\tau _{r} \mathrm{\tilde{e}}_{1} \nonumber\\ &=&\frac{1}{2}\left( \begin{array}{cc} -\mathrm{i}\tau _{r} & \kappa _{n}-\mathrm{i}\kappa _{g} \\ \kappa _{n}+\mathrm{i}\kappa _{g} & \mathrm{i}\tau _{r}% \end{array}% \right) \, \nonumber\\ &=& \left( \begin{array}{cc} \frac 12 \mathrm{i}\lambda & -\frac{1}{2}q \\ -\frac{1}{2}q^{\ast } & -\frac 12 \mathrm{i}\lambda% \end{array}% \right)\,, \end{eqnarray}% \begin{eqnarray} V &=&-\alpha \mathrm{\tilde{e}}_{3}+\beta \mathrm{\tilde{e}}_{2}-\gamma\mathrm{\tilde{e}}_{1} \nonumber\\ &=&\frac{1}{2}\left( \begin{array}{cc} -\mathrm{i}\gamma & \beta -\mathrm{i}\alpha \\ \beta +\mathrm{i}\alpha & \mathrm{i}\gamma% \end{array}% \right) \, \nonumber\\ &=& -\frac{\mathrm{i}}{2\lambda}\left( \begin{array}{cc} \rho & q_{s} \\ -q_{s}^{\ast } & -\rho% \end{array}% \right)\,, \end{eqnarray} by setting $c^{-1}=\lambda$. The above Lax pair is consistent with the one used for constructing the Darboux transformation of the defocusing CCD system~\cite{FenglingzhuPRE}. The Lax pair found geometrically here shows that the CCD system is simply the negative order of the AKNS hierarchy~\cite{CsG1,PavlovPLA,DajunPhysD}. It is known that there exists a relationship between the Frenet-Serret frame and the Darboux frame in 3-dimensional Euclidean space $\mathbf{R}^3$: \begin{equation*} \kappa_g = \kappa \cos \alpha\,, \ \ \kappa_n = -\kappa \sin \alpha\,, \ \ \frac{d \alpha}{d y} = \tau -\tau_r\,, \end{equation*} where $\alpha$ is the rotation angle in the tangent plane from the Frenet-Serret frame to the Darboux frame. 
Therefore, we have \begin{equation} q= \mathrm{i} (\kappa_g + \mathrm{i} \kappa_n) = \kappa e^{-\mathrm{i} \alpha + \mathrm{i} \pi/2} = \kappa e^{-\mathrm{i} \int \tau dy + \mathrm{i} (c^{-1}y+\pi/2)} \,. \end{equation} The above formula can be viewed as the Hasimoto transformation for the CCD system. The reciprocal link between the CCD system and the CSP equation can be defined geometrically. Putting $c=1$, we define \begin{equation} x=Z=\int_{s_0}^{s}\gamma (y,s^{\prime })ds^{\prime }=\int_{s_0}^{s}\rho (y,s^{\prime })ds^{\prime }\,,\quad t=-y\,, \end{equation} and \begin{equation} q=X+\mathrm{i}Y=\int_{s_0}^{s} (\alpha +\mathrm{i}\beta) ds^{\prime }=\int_{s_0}^{s} q_{s^{\prime}}(y,s^{\prime})ds^{\prime }\,, \end{equation} where $s_0$ in the arc-length parameter $s$ corresponds to the origin of the $x$-coordinate. It can be easily shown that \begin{equation*} \frac{\partial x}{\partial s}=\rho \,, \ \ \frac{\partial x}{\partial y}=\int_{s_0}^{s}\rho _{y}(y,s^{\prime })ds^{\prime }=\frac{1}{2}\int_{s_0}^{s}(|q|^{2})_{s^{\prime}}ds^{\prime }=\frac{1}{2}|q|^{2}\,, \end{equation*}% which realize the hodograph (reciprocal) transformation mentioned in the previous section. Therefore, we have a geometric interpretation for the CSP equation: the CSP equation represents the same integrable curve flow as the CCD system in either $\mathbf{R}^3$ (focusing case) or $\mathbf{R}^{2,1}$ (defocusing case), in which the $Z$-coordinate becomes an independent (spatial) variable via the hodograph (reciprocal) transformation, while the $X$- and $Y$-coordinates are interpreted as the real and imaginary parts of the dependent variable $q$.
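We also remark that the Lax pair constructed above passes a direct consistency check: expanding the zero-curvature condition in powers of $\lambda$, the terms of order $\lambda^{0}$ in $U_{s}-V_{y}+[U,V]$ cancel identically, leaving
\begin{equation*}
U_{s}-V_{y}+[U,V]=\frac{\mathrm{i}}{2\lambda }\left(
\begin{array}{cc}
\rho _{y}-\frac{1}{2}(|q|^{2})_{s} & q_{ys}-\rho q \\
\rho q^{\ast }-q_{ys}^{\ast } & \frac{1}{2}(|q|^{2})_{s}-\rho _{y}%
\end{array}%
\right) \,,
\end{equation*}
so that $\Psi _{ys}=\Psi _{sy}$ holds precisely when (\ref{CCDa1}) and (\ref{CCDb1}) are satisfied.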
Under this hodograph (reciprocal) transformation, we can obtain the Lax pair for the defocusing CSP equation (\ref{dCSP}) as follows \begin{equation} \Psi _{x}=P\Psi ,\quad \Psi _{t}=Q\Psi \,, \end{equation}% where \begin{equation} P=\rho ^{-1}V=-\frac{\mathrm{i}}{2\lambda} \left( \begin{array}{cc} 1 & q_{x} \\ -q_{x}^{\ast } & -1% \end{array}% \right) \,, \end{equation} \begin{equation} Q=\frac{1}{2}|q|^{2}P-U=\left( \begin{array}{cc} -\frac{\mathrm{i}}{2} \lambda-\frac{\mathrm{i}}{4\lambda}|q|^{2} & -\frac{\mathrm{i}}{4\lambda}|q|^{2}q_{x}+\frac 12 q \\ \frac{\mathrm{i}}{4\lambda}|q|^{2}q_{x}^{\ast}+\frac{1}{2}q^{\ast} & \frac{\mathrm{i}}{2} \lambda+\frac{\mathrm{i}}{4\lambda }|q|^{2}% \end{array}% \right) \,. \end{equation} \subsection{The link with surfaces embedded in space} Before pursuing the geometric meaning of the defocusing CSP equation further, we first reveal the geometric interpretation of the focusing CCD system. If we interchange $y$ and $s$ and take $\lambda \to - 1/(2\lambda)$, then the Lax representation for the CCD system of focusing type~\cite{FengShen_ComplexSPE} can be cast into \begin{equation*} \Psi _{y}=U\Psi ,\quad \Psi _{s}=V\Psi \,, \end{equation*}% \begin{equation*} U =\left( \begin{array}{cc} -\frac{1}{2} \mathrm{i} \lambda & -\frac{1}{2}q \\ \frac{1}{2}q^{\ast } & \frac{1}{2} \mathrm{i} \lambda % \end{array}% \right) \,, \quad V =\frac{\mathrm{i}}{2\lambda} \left( \begin{array}{cc} \rho & q_s \\ q_s^{\ast}& -\rho% \end{array}% \right)\,. \end{equation*}% Since $\rho^2 + |q_s|^2=1$, we can assume \begin{equation} \rho =\cos \theta\,, \quad q_s= \sin \theta e^{-\mathrm{i} \omega}\,, \end{equation} and \begin{equation} q=(\theta _{y}-\mathrm{i}\omega _{y}\tan \theta) e^{-\mathrm{i}\omega}\,.
\end{equation} Then the matrices $U$ and $V$ become \begin{eqnarray} U &=& \left( \begin{array}{cc} -\frac{1}{2} \mathrm{i} \lambda & -\frac 12(\theta _{y}-\mathrm{i}\omega _{y}\tan \theta) e^{-\mathrm{i}\omega} \\ \frac 12(\theta _{y}+\mathrm{i}\omega _{y}\tan \theta) e^{\mathrm{i}\omega} & \frac{1}{2} \mathrm{i} \lambda \end{array}% \right) , \end{eqnarray} \begin{eqnarray} V&=&\frac{\mathrm{i}}{2\lambda} \left( \begin{array}{cc} \cos \theta & \sin \theta e^{-\mathrm{i}\omega } \\ \sin \theta e^{\mathrm{i}\omega } & -\cos \theta% \end{array}% \right)\,. \end{eqnarray} It is known that, for any integrable system possessing an $su(2)$ linear representation, one can construct the fundamental forms of the associated surfaces~\cite{Symsoliton,RogerSchiefbook}, which read \begin{equation} \text{I}= dy^2 + 2 \cos \theta dy ds + ds^2\,, \label{Form1} \end{equation} \begin{equation} \text{II} = (\tan \theta) \omega_y dy^2 + 2 \sin \theta dy ds +(\sin \theta) \omega_s ds^2 \,. \label{Form2} \end{equation} The resulting first fundamental form represents a Chebyshev net for the surface $\Sigma$ traced out by the position vector $\textbf{r}(y,s)$. Here $\theta$ represents the angle between $\textbf{r}_y$ and $\textbf{r}_s$. The associated Gauss equations read \begin{eqnarray} \textbf{r}_{yy} &=& (\cot \theta) \theta_y \textbf{r}_{y} -(\csc \theta) \theta_y \textbf{r}_{s} -(\tan \theta) \omega_y \textbf{N} \,, \label{Gauss_eq1} \\ \textbf{r}_{ys} &=& \sin \theta \textbf{N} \,, \label{Gauss_eq2}\\ \textbf{r}_{ss} &=& -(\csc \theta) \theta_s \textbf{r}_{y}+(\cot \theta) \theta_s \textbf{r}_{s} +(\sin \theta) \omega_s \textbf{N} \label{Gauss_eq3} \,, \end{eqnarray} while the Weingarten equations are \begin{eqnarray} \textbf{N}_{y} &=& (\cot \theta + \csc \theta \sec \theta \omega_y) \textbf{r}_{y}-(\csc \theta \omega_y +\csc \theta ) \textbf{r}_{s} \,, \label{Weigarten_eq1} \\ \textbf{N}_{s} &=& -(\csc \theta - \cot \theta \omega_s) \textbf{r}_{y}+(\cot \theta + \csc \theta \omega_s ) \textbf{r}_{s} \,.
\label{Weigarten_eq2} \end{eqnarray} The Mainardi-Codazzi equations give \begin{equation} (\omega_s \cos \theta)_y=\left( \frac{\omega_y}{\cos \theta}\right)_s\,. \label{complexsG2} \end{equation} The Gaussian curvature is \begin{equation} K=-\frac{(\tan \theta) \omega_{y}\omega _{s}+\sin\theta}{\sin \theta}\,, \label{Gaussian_curvature} \end{equation} so the Liouville-Beltrami form of the \textit{Theorema egregium} takes the form \begin{equation} \theta _{ys}-\sin \theta -(\tan \theta) \omega_{y}\omega _{s}=0\,. \label{complexsG1} \end{equation} The system (\ref{complexsG2}) and (\ref{complexsG1}) is an alternative form of the focusing CCD system. As mentioned in \cite{RogerSchiefbook}, it is also equivalent to the self-induced transparency (SIT) equations~\cite{SteudalPLA} and an integrable model for the stimulated Raman scattering (SRS)~\cite{KaupPhysD,SteudalPhysD}. Let us write the position vector on the surface as $$\textbf{r}(y,s)=(X(y,s),Y(y,s),Z(y,s)),$$ which can be represented in matrix form \begin{equation} \label{positionvec} {r}(y,s)=X(y,s) \mathrm{e}_1 + Y(y,s)\mathrm{e}_2+Z(y,s) \mathrm{e}_3 \, \end{equation} by referring to the isomorphism between $su(2)$ and $so(3)$ in $\mathbf{R}^3$. Moreover, we define \begin{equation} \textbf{T} = \Phi^{-1} \mathrm{e}_3 \Phi \,, \quad \textbf{N} = \Phi^{-1}\mathrm{e}_2 \Phi \,, \quad \textbf{t} = \Phi^{-1} \mathrm{e}_1 \Phi \,, \end{equation} where $\mathrm{e}_1, \mathrm{e}_2, \mathrm{e}_3$ are expressed as $\mathrm{e}_i=\frac{1}{2\mathrm{i}}\sigma_i$ for $i=1,2,3$ by using the Pauli matrices. Since \begin{equation} \textbf{r}_y= \left. \Phi^{-1} U_\lambda \Phi\right|_{\lambda=1}\,, \end{equation} we then have \begin{equation} \label{r_y} \textbf{r}_y= \Phi^{-1} \mathrm{e}_3 \Phi = \textbf{T}\,, \end{equation} which coincides with the assumption of the Darboux frame, with $y$ playing the role of the arc length. From \begin{equation} \textbf{r}_s= \left.
\Phi^{-1} V_\lambda \Phi\right|_{\lambda=1}\,, \end{equation} we have \begin{equation} \label{r_u} \textbf{r}_s= (\cos \theta) \textbf{T} + (\sin \theta \cos \omega) \textbf{N} + (\sin \theta \sin \omega) \textbf{t}\,, \end{equation} which can also be written as \begin{equation} \label{r_curve} \textbf{r}_s= (\cos \theta) \textbf{r}_y + \textbf{r}_y \times \textbf{r}_{ys}\,. \end{equation} This gives the curve flow for the focusing CCD system. Furthermore, based on the Darboux transformation~\cite{FLZPhysD} applied to the vacuum eigenfunction \begin{eqnarray} \Phi &=& \left( \begin{array}{cc} e^{-\frac 12 \mathrm{i}\lambda y + \frac{\mathrm{i}}{2\lambda} s} & 0 \\ 0 & e^{\frac 12 \mathrm{i}\lambda y - \frac{\mathrm{i}}{2\lambda} s} \end{array}% \right) \,, \end{eqnarray} and the Sym-Tafel formula~\cite{Symsoliton,Tafel} \begin{equation} \label{SymTafel} r=\left.\Phi^{-1}\Phi_{\lambda}\right|_{\lambda=1}\,, \end{equation} we can calculate the one-soliton surface as follows \begin{eqnarray} X &=& \frac{b}{(1-a)^2+b^2} \text{sech} R \cos W\,, \\ Y &=& \frac{b}{(1-a)^2+b^2} \text{sech} R \sin W\,, \\ Z &=& \frac{b}{(1-a)^2+b^2} \tanh R +y + s\,, \end{eqnarray} where \begin{equation*} R=b y + \frac{b}{a^2+b^2}s\,, \quad W=(1-a) y +\left(1+ \frac{a}{a^2+b^2} \right)s\,. \end{equation*} The surface for a moving one-soliton with parameters $a=0.4, b=1$ is shown in Fig. 1. \begin{figure}[htb] \centering \includegraphics[height=60mm,width=80mm]{Fig1.eps} \caption{One-soliton surface for $a=0.4, b=1$.} \label{fig1} \end{figure} As mentioned in \cite{RogerSchiefbook}, the system (\ref{complexsG2}) and (\ref{complexsG1}) is directly related to the so-called Pohlmeyer--Lund--Regge system, which was originally demonstrated by Lund and Regge~\cite{LundRegge76} to represent the relativistic motion of a string in a uniform and static external field, and by Pohlmeyer~\cite{PohlmeyerCMP76} to represent an $O(4)$ nonlinear sigma model.
It was shown by Lund~\cite{Lund77,Lund78} that the Pohlmeyer--Lund--Regge system can also be interpreted as the Gauss--Mainardi--Codazzi equations for particular surfaces in $S^3$ and can be solved by the inverse scattering transformation (IST) method. From (\ref{r_curve}), it is obvious that \begin{equation} \label{PLG_curve} \textbf{r}_{ys}= \textbf{r}_y \times \textbf{r}_{s}\,. \end{equation} We proceed to establish a link of the defocusing CCD system with the surfaces. Similarly, we could obtain fundamental forms of surfaces~\cite{Symsoliton,RogerSchiefbook} embedded in Minkowski space $\mathbf{R}^{2,1}$ \begin{equation} \text{I}= dy^2 + 2 \cosh \theta dy ds + ds^2\,, \label{Form11} \end{equation} \begin{equation} \text{II} = (\tanh \theta) \omega_y dy^2 + 2 \sinh \theta dy ds +(\sinh \theta) \omega_s ds^2 \,. \label{Form22} \end{equation} Here $\theta$ represents the angle between $\textbf{r}_y$ and $\textbf{r}_s$ in Minkowski space~\cite{Lopez}. Therefore, the associated Gauss equations take the form \begin{eqnarray} \textbf{r}_{yy} &=& (\coth \theta) \theta_y \textbf{r}_{y} -(\sinh \theta)^{-1} \theta_y \textbf{r}_{s} -(\tanh \theta) \omega_y \textbf{N} \,, \label{Gauss_eq11} \\ \textbf{r}_{ys} &=& \sinh \theta \textbf{N} \,, \label{Gauss_eq22}\\ \textbf{r}_{ss} &=& -(\sinh \theta)^{-1} \theta_s \textbf{r}_{y}+(\coth \theta) \theta_s \textbf{r}_{s} +(\sinh \theta) \omega_s \textbf{N} \label{Gauss_eq33} \,, \end{eqnarray} while the Weingarten equations read \begin{eqnarray} \textbf{N}_{y} &=& (\coth \theta + (\sinh \theta \cosh \theta)^{-1} \omega_y) \textbf{r}_{y}-((\sinh \theta)^{-1} \omega_y +(\sinh \theta)^{-1} ) \textbf{r}_{s} \,, \label{Weigarten_eq11} \\ \textbf{N}_{s} &=& -((\sinh \theta)^{-1} - \coth \theta \omega_s) \textbf{r}_{y}+(\coth \theta + (\sinh \theta)^{-1} \omega_s ) \textbf{r}_{s} \,. 
\label{Weigarten_eq22} \end{eqnarray} Both of the Mainardi-Codazzi equations lead to the same equation \begin{equation} (\omega_s \cosh \theta)_y=\left( \frac{\omega_y}{\cosh \theta}\right)_s\,. \label{complexshG2} \end{equation} Since the Gaussian curvature is \begin{equation} K=-\frac{(\tanh \theta) \omega_{y}\omega _{s}+\sinh\theta}{\sinh \theta}\,, \label{Gaussian_curvature2} \end{equation} then the Liouville-Beltrami form of the \textit{Theorema egregium} becomes \begin{equation} \theta _{ys}-\sinh \theta -(\tanh \theta) \omega_{y}\omega _{s}=0\,. \label{complexshG1} \end{equation} If we assume the following parameterizations \begin{equation} \rho = \cosh \theta\,, \quad q_s=\sinh \theta e^{-\mathrm{i} \omega}\,, \end{equation} and \begin{equation} q=(\theta _{y}-\mathrm{i}\omega _{y}\tanh \theta) e^{-\mathrm{i}\omega}\,, \end{equation} Then the system (\ref{complexshG2}) and (\ref{complexshG1}) becomes the defocusing CCD system. On the other hand, if we assume \begin{equation} \omega _{s}=\chi _{s}\frac{\cosh \theta }{2\cosh ^{2}(\theta /2)}; \quad \omega_{y}=\chi _{y}\frac{1}{2\cosh ^{2}(\theta /2)}\,, \end{equation} then the system (\ref{complexshG2}) and (\ref{complexshG1}) leads to the Pohlmeyer--Lund--Regge system of hyperbolic type~\cite{NakayamaJPSJ98} \begin{equation} \theta _{ys}-\sinh \theta -\frac{1}{2}\frac{\sinh \left( \theta /2\right) }{% \cosh ^{3}\left( \theta /2\right) }\chi _{y}\chi _{s}=0\,, \end{equation} \begin{equation} \chi _{ys}+\frac{1}{\sinh \theta }(\theta _{y}\chi _{s}+\theta _{s}\chi _{y})=0\,. \end{equation} The Darboux frame can be cast into \begin{equation} \textbf{T} = \Psi^{-1} \mathrm{\tilde{e}}_1 \Psi \,, \quad \textbf{t} = \Psi^{-1}\mathrm{\tilde{e}}_2 \Psi \,, \quad \textbf{N} = \Psi^{-1} \mathrm{\tilde{e}}_3 \Psi \,, \end{equation} then from \begin{equation} \textbf{r}_y= \left. \Phi^{-1} U_\lambda \Phi\right|_{\lambda=1}\,, \quad \textbf{r}_s= \left. 
\Phi^{-1} V_\lambda \Phi\right|_{\lambda=1}\,, \end{equation} we have \begin{equation} \label{r_ys} \textbf{r}_y= \textbf{T}\,, \ \ \textbf{r}_s= (\cosh \theta) \textbf{T} + (\sinh \theta \cos \omega) \textbf{N} + (\sinh \theta \sin \omega) \textbf{t}\,, \end{equation} the former coincides with the assumption of the Darboux frame, where $y$ serves as the arc length, while the latter gives the curve flow for the defocusing CCD system, which can also be cast into \begin{equation} \textbf{r}_s= (\cosh \theta) \textbf{r}_y + \textbf{r}_y \times \textbf{r}_{ys} \,. \end{equation} \section{Reduction from the extended KP hierarchy} \subsection{Bilinearizations of the defocusing CCD and CSP equations} The bilinearizations of the defocusing CCD system and CSP equation are established by the following propositions. \begin{prop} By means of the dependent variable transformations \begin{equation} \label{var_tran1} q= \frac{\beta}{2}\frac{g}{f} e^{\mathrm{i}(y+\gamma s/2)}\,, \end{equation} \begin{equation} \label{var_tran2} \rho= -\frac{\gamma}{2} -2(\log f)_{ys}\,, \end{equation} the defocusing CCD system (\ref{CCDa1})--(\ref{CCDb1}) is transformed into the following bilinear equations for the tau functions $f$ and $g$ \begin{equation} (D_{y}D_{s}+\mathrm{i} D_{s}+\frac{\gamma}{2} \mathrm{i} D_y)g\cdot f=0\,, \label{CDSP_bilinear1} \end{equation} \begin{equation} \left(D_{s}^{2}-\frac{\beta^2}{8}\right)f\cdot f=-\frac{\beta^2}{8}gg^{\ast}\,, \label{CDSP_bilinear2} \end{equation} where $D$ is the Hirota $D$-operator defined by \cite{Hirotabook} \begin{equation*} D_s^n D_y^m f\cdot g=\left(\frac{\partial}{\partial s} -\frac{\partial}{% \partial s^{\prime }}\right)^n \left(\frac{\partial}{\partial y} -\frac{% \partial}{\partial y^{\prime }}\right)^m f(y,s)g(y^{\prime },s^{\prime })|_{y=y^{\prime }, s=s^{\prime }}\,.
\end{equation*} \end{prop} \begin{proof} The substitution of dependent variable transformation (\ref{var_tran2}) into Eq.(\ref{CCDb1}) yields \begin{equation*} -2(\log f)_{yss} - \frac{\beta^2}{8} \left( \frac {gg^{\ast}}{f^2} \right)_y=0\,. \end{equation*} Integrating once in $y$ and setting the integration constant to be $\beta^2/8$, we then have \begin{equation*} 2(\log f)_{ss} - \frac{\beta^2}{8} = -\frac{\beta^2}{8} \frac {gg^{\ast}}{f^2}\,, \end{equation*} which is exactly the bilinear equation (\ref{CDSP_bilinear2}). On the other hand, Eq.(\ref{CCDa1}) is converted into \begin{equation} \left(\frac{g}{f}\right)_{y s} + \mathrm{i} \left(\frac{g}{f}\right)_{s} + \mathrm{i} \frac{\gamma}{2} \left(\frac{g}{f}\right)_{y} - \frac{\gamma}{2} \frac{g}{f} = \left( -\frac{\gamma}{2}-2(\log f)_{ys}\right) \frac{g}{f} \,, \end{equation} via the dependent variable transformation (\ref{var_tran1}), or, is simplified into \begin{equation} \left(\frac{g}{f}\right)_{y s} + 2 (\log f)_{ys} \frac{g}{f} + \mathrm{i} \left(\frac{g}{f}\right)_{s} + \mathrm{i} \frac{\gamma}{2} \left(\frac{g}{f}\right)_{y} = 0 \,. \end{equation} Referring to a bilinear identity \begin{equation*} \frac{D_y D_s g \cdot f}{f^2} =\left(\frac{g}{f}\right)_{y s} + 2 (\log f)_{y s} \frac{g}{f} \,, \end{equation*} we then have the bilinear equation (\ref{CDSP_bilinear1}). \end{proof} \begin{prop} By means of the dependent variable transformation \begin{equation} q= \frac{\beta}{2} \frac{g}{f} e^{\mathrm{i}( y- s)}\,, \end{equation} and the hodograph (reciprocal) transformation \begin{equation} x = -\frac{\gamma}{2} y + \frac{\beta^2}{8} s -2(\log f)_s\,, \quad t=-s\,, \end{equation} the defocusing CSP equation (\ref{dCSP}) shares the same bilinear equations (\ref{CDSP_bilinear1})--(\ref{CDSP_bilinear2}). 
\end{prop} \begin{proof} From the hodograph (reciprocal) transformation and bilinear equations, we could have $$ \frac{\partial x}{\partial y} = -\frac{\gamma}{2} -2(\log f)_{y s} = \rho\,, $$ and $$ \frac{\partial x}{\partial s} = \frac{\beta^2}{8} -2(\log f)_{ss} = \frac{\beta^2}{8} \frac{|g|^2}{f^2} = \frac 12 |q|^2\,, $$ which implies \begin{equation*} \partial _{y}=\rho \partial _{x},\quad \partial _{s}=-\partial_{t}+% \frac{1}{2}|q|^{2}\partial _{x}\,. \end{equation*}% Thus, the defocusing CSP equation is derived from the defocusing CCD system based on the discussion in previous section. \end{proof} \subsection{Bilinear equations for the extended KP hierarchy} Let us start with a concrete form of the Gram determinant expression of the tau functions for the extended KP hierarchy with negative flows \begin{equation} \tau _{nkl}=\left\vert m_{ij}^{nkl}\right\vert _{1\leq i,j\leq N}, \end{equation}% where \begin{equation*} m_{ij}^{nkl}=\delta _{ij}+\frac{1}{p_{i}+\bar{p}_{j}}\varphi _{i}^{nkl}\psi_{j}^{nkl}, \end{equation*}% \begin{equation*} \varphi _{i}^{nkl}=p_{i}^{n}(p_{i}-a)^{k}(p_{i}-b)^{l}e^{\xi _{i}},\quad \psi _{j}^{nkl}=\left(-\frac{1}{\bar{p}_{j}}\right)^{n}\left(-\frac{1}{\bar{p}_{j}+a}\right)^{k} \left(-\frac{1}{\bar{p}_{j}+b}\right)^{l}e^{\bar{\xi}_{j}}, \end{equation*} with \begin{equation*} \xi _{i}=\frac{1}{p_{i}}x_{-1}+p_{i}x_{1}+\frac{1}{p_{i}-a}t_{a}+\frac{1}{p_{i}-b}t_{b}+\xi _{i0}, \end{equation*}% \begin{equation*} \bar{\xi}_{j}=\frac{1}{\bar{p}_{j}}x_{-1}+\bar{p}_{j}x_{1}+\frac{1}{\bar{p}% _{j}+a}t_{a}+\frac{1}{\bar{p}_{j}+b}t_{b}+\bar{\xi}_{j0}. \end{equation*}% Here $p_{i}$, $\bar{p}_{j}$, $\xi _{i0}$, $\bar{\xi}_{j0}$, $a$, $b$ are constants. 
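We note that $\varphi _{i}^{nkl}$ and $\psi _{j}^{nkl}$ satisfy the linear relations
\begin{equation*}
\partial _{x_{1}}\varphi _{i}^{nkl}=\varphi _{i}^{n+1,kl}\,,\quad \partial _{x_{-1}}\varphi _{i}^{nkl}=\varphi _{i}^{n-1,kl}\,,\quad \partial _{t_{a}}\varphi _{i}^{nkl}=\varphi _{i}^{n,k-1,l}\,,
\end{equation*}
\begin{equation*}
\partial _{x_{1}}\psi _{j}^{nkl}=-\psi _{j}^{n-1,kl}\,,\quad \partial _{x_{-1}}\psi _{j}^{nkl}=-\psi _{j}^{n+1,kl}\,,\quad \partial _{t_{a}}\psi _{j}^{nkl}=-\psi _{j}^{n,k+1,l}\,,
\end{equation*}
which follow immediately from the definitions of $\xi _{i}$ and $\bar{\xi}_{j}$.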
Based on the KP tau function theory~\cite{Sato,JM}, the above tau functions satisfy a set of bilinear equations \begin{equation} \left(\frac{1}{2}D_{x_{1}}D_{x_{-1}}-1\right)\tau _{nkl}\cdot \tau _{nkl}=-\tau _{n+1,kl}\tau _{n-1,kl}\,, \label{KPbilinear1} \end{equation}% \begin{equation} (aD_{t_{a}}-1)\tau _{n+1,kl}\cdot \tau _{nkl}=-\tau _{n+1,k-1,l}\tau _{n,k+1,l}\,, \label{KPbilinear2} \end{equation}% \begin{equation} \left(D_{x_{1}}(aD_{t_{a}}-1)-2a\right)\tau _{n+1,kl}\cdot \tau _{nkl}=(D_{x_{1}}-2a)\tau _{n+1,k-1,l}\cdot \tau _{n,k+1,l}\,, \label{KPbilinear3} \end{equation} \begin{equation} \left(D_{x_{1}}(bD_{t_{b}}-1)-2b\right)\tau _{n+1,kl}\cdot \tau _{nkl}=(D_{x_{1}}-2b)\tau _{n+1,k,l-1}\cdot \tau _{nk,l+1}\,. \label{KPbilinear4} \end{equation} The proof is given below by referring to the Grammian technique~\cite{Hirotabook,MiyakeOhtaGram}. It is easily shown that $m_{ij}^{nkl}$, $% \varphi _{i}^{nkl}$, $\psi _{j}^{nkl}$ satisfy \begin{equation*} \partial _{x_{1}}m_{ij}^{nkl}=\varphi _{i}^{nkl}\psi _{j}^{nkl},\quad \partial _{x_{-1}}m_{ij}^{nkl}=-\varphi _{i}^{n-1,kl}\psi _{j}^{n+1,kl}, \end{equation*}% \begin{equation*} \partial _{t_{a}}m_{ij}^{nkl}=-\varphi _{i}^{n,k-1,l}\psi _{j}^{n,k+1,l}, \end{equation*}% \begin{equation*} m_{ij}^{n+1,kl}=m_{ij}^{nkl}+\varphi _{i}^{nkl}\psi _{j}^{n+1,kl},\quad m_{ij}^{n,k+1,l}=m_{ij}^{nkl}+\varphi _{i}^{nkl}\psi _{j}^{n,k+1,l}. 
\end{equation*}% Therefore the following differential and difference formulae hold for $% \tau_{nkl}$, \begin{equation*} \partial_{x_{1}}\tau _{nkl}= \left\vert \begin{matrix} m_{ij}^{nkl} & \varphi_i^{nkl} \\ -\psi_j^{nkl} & 0% \end{matrix} \right\vert,\quad \partial _{x_{-1}}\tau _{nkl}= \left\vert \begin{matrix} m_{ij}^{nkl} & \varphi_i^{n-1,kl} \\ \psi_j^{n+1,kl} & 0% \end{matrix} \right\vert, \end{equation*}% \begin{equation*} a\partial _{t_{a}}\tau _{nkl}= \left\vert \begin{matrix} m_{ij}^{nkl} & a\varphi_i^{n,k-1,l} \\ \psi_j^{n,k+1,l} & 0% \end{matrix} \right\vert,\quad \tau _{n+1,kl}= \left\vert \begin{matrix} m_{ij}^{nkl} & \varphi_i^{nkl} \\ -\psi_j^{n+1,kl} & 1% \end{matrix} \right\vert, \end{equation*}% \begin{equation*} \tau _{n-1,kl}= \left\vert \begin{matrix} m_{ij}^{nkl} & \varphi_i^{n-1,kl} \\ \psi_j^{nkl} & 1% \end{matrix} \right\vert,\quad \tau _{n,k+1,l}= \left\vert \begin{matrix} m_{ij}^{nkl} & \varphi_i^{nkl} \\ -\psi_j^{n,k+1,l} & 1% \end{matrix} \right\vert, \end{equation*}% \begin{equation*} \tau _{n+1,k-1,l}=\left\vert \begin{matrix} m_{ij}^{nkl} & a\varphi_i^{n,k-1,l} \\ \psi_j^{n+1,kl} & 1% \end{matrix} \right\vert,\quad \partial _{x_{1}}\tau _{n+1,kl}= \left\vert \begin{matrix} m_{ij}^{nkl} & \varphi_i^{n+1,kl} \\ -\psi_j^{n+1,kl} & 0% \end{matrix} \right\vert, \end{equation*}% \begin{equation*} (\partial _{x_{1}}+a)\tau _{n,k+1,l}= \left\vert \begin{matrix} m_{ij}^{nkl} & \varphi_i^{n+1,kl} \\ -\psi_j^{n,k+1,l} & a% \end{matrix} \right\vert, \end{equation*}% \begin{equation}\label{border1} (\partial _{x_{1}}\partial _{x_{-1}}-1)\tau _{nkl}= \left\vert \begin{matrix} m_{ij}^{nkl} & \varphi_i^{n-1,kl} & \varphi_i^{nkl} \\ \psi_j^{n+1,kl} & 0 & -1 \\ -\psi_j^{nkl} & -1 & 0% \end{matrix} \right\vert, \end{equation} \begin{equation}\label{border2} (a\partial _{t_{a}}-1)\tau _{n+1,kl}= \left\vert \begin{matrix} m_{ij}^{nkl} & \varphi_i^{nkl} & a\varphi_i^{n,k-1,l} \\ -\psi_j^{n+1,kl} & 1 & -1 \\ \psi_j^{n,k+1,l} & -1 & 0% \end{matrix} 
\right\vert, \end{equation} \begin{equation}\label{border3} (\partial _{x_{1}}(a\partial _{t_{a}}-1)-a)\tau _{n+1,kl} =\left\vert \begin{matrix} m_{ij}^{nkl} & \varphi_i^{n+1,kl} & a\varphi_i^{n,k-1,l} \\ -\psi_j^{n+1,kl} & 0 & -1 \\ \psi_j^{n,k+1,l} & -a & 0 \end{matrix} \right\vert. \end{equation} Applying the Jacobi identity for determinants to the bordered determinants (\ref{border1})--(\ref{border3}) yields the three bilinear equations (\ref{KPbilinear1})--(\ref{KPbilinear3}). The bilinear equation (\ref{KPbilinear4}) can be proved in exactly the same way as equation (\ref{KPbilinear3}). \subsection{Reduction to the CCD system and the CSP equation of defocusing type} In what follows, we briefly describe the reduction of the bilinear equations of the extended KP hierarchy (\ref{KPbilinear1})--(\ref{KPbilinear4}) to the bilinear equations (\ref{CDSP_bilinear1})--(\ref{CDSP_bilinear2}). First, we carry out the dimension reduction by noting that the determinant expression of $\tau _{nkl}$, \begin{equation*} \tau _{nkl}=\left\vert \delta _{ij}+\frac{1}{p_{i}+\bar{p}_{j}}\varphi _{i}^{nkl}\psi _{j}^{nkl}\right\vert _{1\leq i,j\leq N}\,, \end{equation*} can alternatively be expressed as \begin{equation*} \tau _{nkl}=\left\vert \delta _{ij}+\frac{1}{p_{i}+\bar{p}_{j}}\varphi _{i}^{nkl}\psi _{i}^{nkl}\right\vert _{1\leq i,j\leq N}\,, \end{equation*} obtained by dividing the $j$-th column by $\psi _{j}^{nkl}$ and multiplying the $i$-th row by $\psi _{i}^{nkl}$ for $1\leq i,j\leq N$. By taking \begin{equation} \bar{p}_{j}=\frac{1}{p_{j}},\quad b=-\frac{1}{a}, \label{reduction_par} \end{equation} we can easily check that $\tau _{nkl}$ satisfies the reduction conditions \begin{equation} \partial _{x_{1}}\tau _{nkl}=\partial _{x_{-1}}\tau _{nkl}\,, \label{reduction_condition1} \end{equation} \begin{equation} -a^2\partial _{t_{a}}\tau _{nkl}=\partial _{t_{b}}\tau _{nkl}\,, \label{reduction_condition2} \end{equation} \begin{equation} \tau _{n-1,k+1,l+1}=\tau _{nkl}\,.
\label{reduction_condition3} \end{equation} Therefore the bilinear equation (\ref{KPbilinear1}) is reduced to \begin{equation} \left(\frac{1}{2}D_{x_{1}}^{2}-1\right)\tau _{nkl}\cdot \tau _{nkl}=-\tau _{n+1,kl}\tau _{n-1,kl}. \label{Reduction_bilinear1} \end{equation} Moreover, by referring to the bilinear equation (\ref{KPbilinear4}) and the reduction conditions (\ref{reduction_condition2})--(\ref{reduction_condition3}), we have \begin{equation*} \left(D_{x_{1}}(aD_{t_{a}}-1)+\frac{2}{a}\right) \tau _{n+1,kl}\cdot \tau _{nkl} =\left(D_{x_{1}}+\frac{2}{a}\right)\tau _{n,k+1,l}\cdot \tau _{n+1,k-1,l}\,, \end{equation*} thus, using (\ref{KPbilinear2}) and (\ref{KPbilinear3}), we get \begin{equation*} \left(D_{x_{1}}(aD_{t_{a}}-1)-a+\frac{1}{a}\right) \tau _{n+1,kl}\cdot \tau _{nkl} =\left(-a+\frac{1}{a}\right)\tau _{n+1,k-1,l}\tau _{n,k+1,l} \end{equation*} \begin{equation} =\left(a-\frac{1}{a}\right)(aD_{t_{a}}-1)\tau _{n+1,kl}\cdot \tau _{nkl}\,, \label{Reduction_bilinear2} \end{equation} i.e., \begin{equation} (D_{x_{1}}(aD_{t_{a}}-1)-(a^2-1)D_{t_{a}})\tau _{n+1,kl}\cdot \tau _{nkl}=0\,. \label{Reduction_bilinear3} \end{equation} Next, we proceed to the complex conjugate reduction, which turns out to be very simple. Specifically, by taking $a$ pure imaginary, $|p_{i}|=1$ and $\bar{\xi}_{j0}=\xi _{j0}^{\ast}$, where $^{\ast}$ denotes the complex conjugate, we have $\bar{p}_{i}=p^{\ast}_{i}$ and \begin{equation*} \tau _{n00}^{\ast }=\tau _{-n,00}\,. \end{equation*} Due to the relations (\ref{reduction_condition1}) and (\ref{reduction_condition2}), we can choose $x_{1}$ (or $x_{1}+x_{-1}$) and $t_{a}$ (or $t_{a}+t_{b}$) as the two independent variables. Therefore, we have \begin{equation*} \tau _{n00}=\left\vert \delta _{ij}+\frac{1}{p_{i}+p_{j}^{\ast }}\left(-\frac{p_{i}}{p_{j}^{\ast }}\right)^{n}e^{\xi _{i}+\xi _{j}^{\ast }}\right\vert _{1\leq i,j\leq N}\,, \end{equation*} with \begin{equation*} \xi _{i}=p_{i}x_{1}+\frac{1}{p_{i}-a}t_{a}+\xi _{i0}\,.
\end{equation*} In summary, by defining \begin{equation*} f=\tau _{000},\quad g=\tau _{100}, \end{equation*} we arrive at \begin{equation} \left(D_{x_{1}}D_{t_{a}}-\frac{1}{a}D_{x_{1}} -\left(a-\frac{1}{a}\right)D_{t_{a}}\right)g\cdot f=0\,, \label{Reduction_bilinear4} \end{equation} \begin{equation} \left(\frac{1}{2}D_{x_{1}}^{2}-1\right)f\cdot f=-gg^{\ast}\,. \label{Reduction_bilinear5} \end{equation} Finally, by setting $a=\mathrm{i} c$, $t_{a}= cy$, $x_{1}= \beta s/4$ and $\beta (c^2+1)=-2 \gamma c$, the above bilinear equations coincide with the bilinear equations (\ref{CDSP_bilinear1})--(\ref{CDSP_bilinear2}), which completes the reduction process. As a result, we can provide the determinant solution to the defocusing CSP equation in the following theorem. \begin{theorem} The defocusing CSP equation (\ref{dCSP}) admits the following determinant solution \begin{equation} q= \frac{\beta}{2} \frac{g}{f}e^{\mathrm{i}( y+\gamma s/2)},\quad x= -\frac{\gamma}{2} y+ \frac{\beta^2}{8}s-2(\log f)_{s}\,,\quad t=-s, \label{ndark-qhodo} \end{equation} where \begin{eqnarray} f &=&\left\vert \delta _{ij}+\frac{1}{p_{i}+p_{j}^{\ast }}e^{\xi _{i}+\xi _{j}^{\ast }}\right\vert _{1\leq i,j\leq N}\,, \nonumber \\ g &=&\left\vert \delta _{ij}+\left( -\frac{p_{i}}{p_{j}^{\ast }}\right) \frac{1}{p_{i}+p_{j}^{\ast }}e^{\xi _{i}+\xi _{j}^{\ast }}\right\vert _{1\leq i,j\leq N}\,, \label{ndark-tau} \end{eqnarray} with \begin{equation} |p_{i}|=1,\quad \xi_{i}=\frac{c }{p_{i}-\mathrm{i}c}y+\frac{\beta}{4} p_{i} s+\xi _{i0}\,, \label{ndark-par} \end{equation} under the constraint \begin{equation} \frac{\beta}{\gamma} = -\frac{c^2+1}{2 c}\,. \label{ndark-constraint} \end{equation} \end{theorem} Finally, we list the one- and two-dark-soliton solutions to the defocusing CSP equation (\ref{dCSP}).
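As an independent sanity check (not part of the proof above), the reduced bilinear equation (\ref{Reduction_bilinear5}) can be verified numerically for the simplest $N=1$ tau function, using the $x_{1}$-dependence $\xi=p x_{1}+\dots$ given above. The sketch below is illustrative only: the choice of $p$ on the unit circle with $\mathrm{Re}\,p>0$ (so that $p+p^{\ast}>0$) is an assumption made for convenience, and all remaining variables are absorbed into $\xi_{10}=0$.

```python
import numpy as np

# One-soliton tau functions (N = 1) as functions of x1 only, using
# xi = p*x1 + ... from the text; all other variables sit in xi0 = 0.
# Illustrative choice: |p| = 1 with Re(p) > 0 so that p + p* > 0.
p = np.exp(0.3j)
A = 1.0 / (p + p.conjugate())          # real and positive here
P = (p + p.conjugate()).real           # d(xi + xi*)/dx1

def f(x):
    """f = 1 + e^{xi+xi*}/(p+p*)."""
    return 1.0 + A * np.exp(P * x)

def g(x):
    """g = 1 + (-p/p*) e^{xi+xi*}/(p+p*)."""
    return 1.0 + (-p / p.conjugate()) * A * np.exp(P * x)

def check(x):
    """| (1/2 D_{x1}^2 - 1) f.f + g g* |, derivatives taken analytically."""
    E = A * np.exp(P * x)
    fx, fxx = P * E, P * P * E
    lhs = f(x) * fxx - fx**2 - f(x) ** 2     # (1/2) D^2 f.f - f^2
    rhs = -g(x) * np.conjugate(g(x))         # -g g*
    return abs(lhs - rhs)

print(max(check(x) for x in np.linspace(-5, 5, 11)))   # zero up to roundoff
```

The residual is at machine-precision level, consistent with $\left(\tfrac{1}{2}D_{x_{1}}^{2}-1\right)f\cdot f=-gg^{\ast}$ for this one-soliton datum.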
By taking $p_{1}=e^{-\mathrm{i}(\varphi_{1}+\pi/2)}$ and $N=1$ in (\ref{ndark-tau}), we have the tau functions for the one-dark-soliton solution, \begin{equation} f=1+e^{2\eta _{1}},\quad g=1+e^{2(\eta _{1}-\mathrm{i}\varphi _{1})}\,, \end{equation} where \begin{equation*} 2\eta _{1}=- \frac{ \beta \sin \varphi _{1}}{2} \left( s + \frac {2y}{\beta \cos \varphi _{1} -\gamma} \right)-\ln (p_{1}+p_{1}^{\ast}) \,. \end{equation*} This leads to a one-dark-soliton solution of the following parametric form \begin{equation} \label{CSP1solitona} q= \frac{\beta}{4}\left((1+\mathrm{e}^{-2\mathrm{i}\varphi_{1}}) + (\mathrm{e}^{-2\mathrm{i}\varphi _{1}}-1) \tanh \eta_{1} \right)\mathrm{e}^{\mathrm{i}( y +\gamma s/2)}\,, \end{equation} \begin{equation} \label{CCD1solitonb} x= -\frac{\gamma}{2}y+\frac{\beta^2}{8}s+\frac{\beta \sin \varphi_{1} e^{2\eta _{1}}}{1+e^{2\eta _{1}}},\quad t=-s\,. \end{equation} \begin{figure}[htb] \centering \includegraphics[height=60mm,width=80mm]{Fig2.eps} \caption{A smooth dark soliton with $\beta=2.0$, $\gamma=1.0$, $\varphi_1=2\pi/3$.} \label{fig2} \end{figure} The amplitude of the background plane waves is $\beta/2$, and the depth of the trough is $\beta (1-|\cos \varphi_1|)/2$. An example with $\beta=2.0$, $\gamma=1.0$, $\varphi_1=2\pi/3$ is illustrated in Fig.~\ref{fig2}. In this case, the envelope of the dark soliton is smooth. However, as analysed in \cite{FenglingzhuPRE}, if we take $\beta=(2+2 \sqrt{7})/3$ while keeping the other parameters unchanged, the dark soliton becomes a cusped envelope soliton, as shown in Fig.~\ref{fig3}. In other words, the dark soliton tends to become singular (cusped or looped) as the amplitude of the background waves increases.
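The stated background amplitude and trough depth can be checked directly from the tau-function ratio $q=\frac{\beta}{2}\,g/f$ of the theorem. The following minimal numerical sketch, using the parameter values $\beta=2.0$, $\varphi_1=2\pi/3$ of the smooth example, is purely illustrative:

```python
import numpy as np

beta, phi1 = 2.0, 2 * np.pi / 3   # parameters of the smooth example

def q_abs(eta):
    """|q| from the one-soliton tau functions and q = (beta/2) g/f."""
    f = 1.0 + np.exp(2.0 * eta)
    g = 1.0 + np.exp(2.0 * (eta - 1j * phi1))
    return (beta / 2.0) * abs(g / f)

background = q_abs(20.0)                                   # eta -> +infinity
trough = min(q_abs(e) for e in np.linspace(-10, 10, 2001))

print(background)             # background amplitude beta/2 = 1.0
print(beta / 2.0 - trough)    # trough depth beta (1 - |cos phi1|)/2 = 0.5
```

Both printed values agree with the closed-form expressions quoted above.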
\begin{figure}[htb] \centering \includegraphics[height=60mm,width=80mm]{Fig3.eps} \caption{A cusped dark soliton with $\beta=(2+2\sqrt{7})/3$, $\gamma=1.0$, $\varphi_1=2\pi/3$.} \label{fig3} \end{figure} From (\ref{ndark-tau}) with $N=2$, we obtain the tau functions for the two-dark-soliton solution, \begin{eqnarray} f &=&\left\vert \begin{array}{cc} 1+\frac{1}{p_{1}+p_{1}^{\ast }}e^{\xi _{1}+\xi _{1}^{\ast }} & \frac{1}{p_{1}+p_{2}^{\ast }}e^{\xi _{1}+\xi _{2}^{\ast }} \\ \frac{1}{p_{2}+p_{1}^{\ast }}e^{\xi _{2}+\xi _{1}^{\ast }} & 1+\frac{1}{p_{2}+p_{2}^{\ast }}e^{\xi _{2}+\xi _{2}^{\ast }} \end{array} \right\vert \nonumber\\ &=&1+e^{2\eta _{1}}+e^{2\eta _{2}}+a_{12}e^{2(\eta _{1}+\eta _{2})}, \label{2solitong} \end{eqnarray} \begin{eqnarray} g &=&\left\vert \begin{array}{cc} 1+\frac{1}{p_{1}+p_{1}^{\ast }}\left(-\frac{p_{1}}{p_{1}^{\ast }}\right)e^{\xi _{1}+\xi _{1}^{\ast }} & \frac{1}{p_{1}+p_{2}^{\ast }}\left(-\frac{p_{1}}{p_{2}^{\ast }}\right)e^{\xi _{1}+\xi _{2}^{\ast }} \\ \frac{1}{p_{2}+p_{1}^{\ast }}\left(-\frac{p_{2}}{p_{1}^{\ast }}\right)e^{\xi _{2}+\xi _{1}^{\ast }} & 1+\frac{1}{p_{2}+p_{2}^{\ast }}\left(-\frac{p_{2}}{p_{2}^{\ast }}\right)e^{\xi _{2}+\xi _{2}^{\ast }} \end{array} \right\vert \nonumber\\ &=&1+e^{2(\eta _{1}-\mathrm{i}\varphi _{1})}+e^{2(\eta _{2}-\mathrm{i}\varphi _{2})}+a_{12}e^{2(\eta _{1}+\eta _{2}-\mathrm{i}\varphi _{1}-\mathrm{i}\varphi _{2})}, \end{eqnarray} where \begin{equation*} 2\eta _{j}=- \frac{ \beta \sin \varphi _{j}}{2} \left( s + \frac {2y}{\beta \cos \varphi _{j} -\gamma} \right)-\ln (p_{j}+p_{j}^{\ast})\,,\quad j=1,2\,, \end{equation*} \begin{equation*} a_{12} =\frac{\sin ^{2}\left( \frac{\varphi _{2}-\varphi _{1}}{2}\right) }{\sin ^{2}\left( \frac{\varphi _{2}+\varphi _{1}}{2}\right) }\,. \end{equation*} The collision of two dark solitons of the defocusing CSP equation is always elastic; a detailed analysis is given in \cite{FenglingzhuPRE}.
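As a consistency check, one can verify numerically that the $2\times 2$ Gram determinant for $f$ indeed expands into the four-term form with the interaction coefficient $a_{12}$ given above. The phases $\varphi_{1},\varphi_{2}$ and the sample values standing in for $\xi_{1},\xi_{2}$ below are arbitrary illustrative choices (with $\sin\varphi_{j}<0$ so that $p_{j}+p_{j}^{\ast}>0$):

```python
import numpy as np

# Arbitrary sample data: phases with sin(phi_j) < 0 so that p_j + p_j* > 0,
# and real sample values r_j standing in for xi_1, xi_2.
phi = np.array([-0.7, -1.3])
p = np.exp(-1j * (phi + np.pi / 2))   # p_j = e^{-i(phi_j + pi/2)}, |p_j| = 1
r = np.array([0.4, -0.2])

# Gram matrix F_ij = delta_ij + e^{xi_i + xi_j*} / (p_i + p_j*)
F = np.eye(2, dtype=complex) + np.exp(r[:, None] + r[None, :]) / (
    p[:, None] + p[None, :].conjugate())
det_f = np.linalg.det(F)

# Expanded form 1 + e^{2 eta1} + e^{2 eta2} + a12 e^{2(eta1 + eta2)}
e2eta = np.exp(2 * r) / (p + p.conjugate()).real
a12 = np.sin((phi[1] - phi[0]) / 2) ** 2 / np.sin((phi[1] + phi[0]) / 2) ** 2
expanded = 1 + e2eta[0] + e2eta[1] + a12 * e2eta[0] * e2eta[1]

print(abs(det_f - expanded))   # zero up to roundoff
```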
\section{Concluding Remarks} In \cite{FenglingzhuPRE}, a defocusing CSP equation was derived in a physical context in nonlinear optics as an analogue of the NLS equation in the ultra-short pulse regime. In the present paper, we have established a link between the defocusing CSP equation and the motion of space curves in Minkowski space and the complex sinh-Gordon equation by a hodograph (reciprocal) transformation. We have also derived the CSP equation of both focusing and defocusing type from the fundamental forms of surfaces with non-constant Gaussian curvature, from which the curve flows are formulated. Furthermore, starting from a set of bilinear equations, along with their tau functions, of a single-component extended KP hierarchy, we have derived the defocusing CSP equation based on the KP-hierarchy reduction method, and its multi-soliton solutions have been provided in determinant form. Although various solutions, including bright-soliton, dark-soliton, breather and rogue-wave solutions, have recently been constructed for the focusing and defocusing CSP equations~\cite{Feng_ComplexSPE,FengShen_ComplexSPE,FLZPhysD,FenglingzhuPRE}, including the work in the present paper, it would be interesting to investigate all kinds of solutions to the coupled CSP equation, especially the one with mixed focusing and defocusing nonlinearity. Finally, although integrable discretizations of the real short pulse equation and its multi-component generalizations were recently constructed~\cite{FMO-PJMI,FengSPEdiscrete,Fengplanecurves,FCCMOcSP,FMOmultiSP}, how to understand and reconstruct the integrable discretizations of the focusing and defocusing CSP equation geometrically and algebraically, and to relate them to the general framework set in~\cite{BobenkoSurisbook,SchiefdiscretePLR}, remains a topic to be explored in the future. \section*{Acknowledgment} We thank the reviewers for their comments, which helped us improve the manuscript significantly.
BF appreciates the useful discussions with Dr. Zhiwei Wu on the geometric part of the paper. BF acknowledges partial support from the National Natural Science Foundation of China (No. 11428102). The work of KM is partially supported by JSPS Grant-in-Aid for Scientific Research (C-15K04909) and CREST, JST. The work of YO is partly supported by JSPS Grant-in-Aid for Scientific Research (B-24340029, C-15K04909) and for Challenging Exploratory Research (26610029).
\section{Introduction} Experiments on few-electron double quantum dots allow the measurement and manipulation of the spin degree of freedom of the confined electrons\cite{Hanson2007}. Such control is at the heart of semiconductor-based spintronics~\cite{wolf2001spintronics,awschalom2007challenges} and quantum-information proposals~\cite{loss1998quantum,cerletti2005recipes}. Recently, substantial experimental efforts have been focussed on controlling electrons in carbon nanotube double quantum dots, and many of the capabilities previously achieved in GaAs double dots\cite{Ono2002,petta2005cmc,Hanson2007,Koppens06} are starting to be reproduced\cite{Churchill09,Churchill09b,Gotz09}. These include the ability to start from an empty double dot and systematically fill it with electrons. Since $^{12}$C has no nuclear spin, carbon based nanostructures are expected to reduce hyperfine induced decoherence as compared with GaAs\cite{bulaev2008soi}. Furthermore, carbon based materials exhibit richer physics than GaAs semiconducting materials because of the additional valley degree of freedom\cite{jarillo2005orbital,oreg2000spin,moriyama2005four}. In principle, the spin and valley degrees of freedom could lead to a SU(4) symmetry at zero magnetic field instead of the standard SU(2) symmetry in conventional semiconductors (see e.g. Ref.~\onlinecite{egger2009kondo} and references therein). However, it has been recently demonstrated~\cite{kuemmeth2008csa} that the enhancement of the spin-orbit splitting in small-radius nanotubes breaks the four-fold degeneracy of the single-electron ground state into a two-fold degeneracy. In this work, we study how spin-orbit coupling and electron-electron interaction effects are manifested in the two-electron spectrum and transport properties of a carbon nanotube double dot. This represents an extension of previous studies on few-electron physics in a single carbon nanotube dot\cite{wunsch09,secchi2009B,secchi2009cvs}. 
We find that, despite spin-orbit coupling and the existence of an additional valley degree of freedom, the two-electron eigenstates can be separated into an orbital part and a spin-valley part that are, to a very good approximation, independent of each other. The spin-valley part can be grouped into six antisymmetric and ten symmetric spin-valley eigenstates, which we refer to as multiplets. The separation of spin-valley degrees of freedom significantly simplifies the description of the system and allows us to draw analogies with standard GaAs double dots. Our main results can be summarized as follows: (a) At zero magnetic field and zero detuning, each dot is populated by a single electron and tunneling is suppressed because of Coulomb interactions. Thus, interdot coupling only occurs virtually via superexchange interactions that determine the ground-state symmetry. In this regime, we find a spin-valley antisymmetric ground state that does not have a well-defined spin due to the spin-orbit coupling. (b) For large detuning between the dots, double occupation of the same dot becomes favorable. In this regime, Coulomb interactions can mix higher orbitals into the two-electron ground state\cite{wunsch09,secchi2009B,secchi2009cvs}. This admixture significantly reduces the energy spacing between the multiplets (which, for weakly interacting electrons, is determined by the orbital level spacing). The interplay between spin-orbit coupling and interaction then leads to a ferromagnetic ground state above a small critical magnetic field, since the Zeeman terms overcome the strongly suppressed splitting between the effective singlet and triplet.
(c) The reduction of the energy gap between orbitally symmetric and antisymmetric states caused by the Coulomb interaction affects transport properties through the dot and can lead to the disappearance of the so-called Pauli blockade (suppression of current through the double dot due to the Pauli exclusion principle); this might explain the absence of Pauli blockade reported in Ref.~\onlinecite{Gotz09}. The absence of spin blockade might affect the performance of quantum-information proposals that use spin qubits in double dots, since gate operation in those proposals is based on the spin-blockade mechanism. The disappearance of Pauli blockade might be prevented by reducing Coulomb correlations, either by working with short dots or by covering the nanotube with large dielectrics\cite{wunsch09}. This paper is organized as follows. In Section II, we introduce the microscopic model for the double dot and analyze the non-interacting predictions, taking into account a magnetic field parallel to the nanotube axis, spin-orbit coupling, and detuning between the dots. Then, we construct a simple two-electron model that captures the interaction effects. This model is then compared with solutions of an exact many-band Hamiltonian using localized single-particle orbitals. In Section~\ref{results}, we discuss the energy spectrum of two interacting electrons in a double dot in three different detuning regimes corresponding to (i) a symmetric double dot, with one electron in each dot, (ii) strong detuning, with both electrons in the same dot, and (iii) the crossover between both regimes. We then analyze the transport properties of the double dot. Finally, we discuss how to extend our analysis of the low-energy behavior to serially coupled quantum dots. In Section IV, we present the conclusions. Technical details on the calculation of Coulomb matrix elements and the derivation of the rate equations used for transport are presented in Appendices~\ref{App:CoulMat} and~\ref{rate}.
\begin{figure}[h] \begin{center} \begin{tabular}{c} \includegraphics[scale=0.23,angle=0]{fig1a.jpg}\\ \includegraphics[scale=0.3,angle=0]{Fig1b.png} \end{tabular} \caption{(Color online) Schematic representations of a double quantum dot in carbon nanotube. Top: Blue regions correspond to the potential barriers. Bottom: Double dot potential at finite detuning $\Delta$. Dashed lines schematically represent the single-particle orbitals.} \label{dots3d} \end{center} \end{figure} \section{Model} Single-wall carbon nanotubes are formed by a single layer of graphite called graphene rolled up into a cylinder. Graphene has a honeycomb lattice formed by covalently bonded carbon atoms. Its electronic properties are determined by the $p_z$ orbital of the carbon atom. The low energy spectrum of graphene consists of two Dirac cones located at the $K$ and $K'=-K$ points of the graphene's Brillouin zone, where the valence and conduction bands touch. To characterize the two Dirac cones, we introduce the valley index $\tau=\pm1$, where $\tau=1$ corresponds to the $K'$ point and $\tau=-1$ to the $K$ point. The behavior of graphene in the presence of an external potential can be described by an effective mass approximation, or $\vec{\kappa}\cdot\vec{p}$ theory~\cite{divincenzo1984sce}. In this approximation, the envelope wave function for the $A$ and $B$ sites of the two-atom unit cell in a honeycomb lattice follows an effective Dirac equation. In a carbon nanotube, the cylindrical structure imposes a quantization condition that leads to either metallic or semiconducting nanotubes, depending on the orientation of the underlying lattice with respect to the symmetry axis of the tube. Here, we will focus on the behavior of semiconducting nanotubes. 
\subsection{Single particle spectrum and interactions}\label{SParticle} In this section we follow previous work\cite{bulaev2008soi,trauzettel2007sqg,wunsch09} to derive the localized eigenstates of a semiconducting nanotube with an additional confinement potential along the tube, which is controlled by external gates. We describe the confinement potential of each dot by a square well\cite{bulaev2008soi,trauzettel2007sqg} (see Fig.~\ref{dots3d}). The form of the confinement potential will not affect our results qualitatively and we note that for a single dot, the results for parabolic confinement and square well are in good agreement\cite{secchi2009cvs,wunsch09}. The external potential leads to a discrete set of bound states. Taking $\zeta$ as the direction of the nanotube axis and $\varphi$ as the angle perpendicular to the nanotube axis, we can write the single particle Hamiltonian as \begin{equation} \label{H01d} H_0=-i \hbar v (\tau\sigma_1\frac{1}{R}\partial_\varphi+ \sigma_2\partial_\zeta)+V(\zeta), \end{equation} where $v$ is the Fermi velocity, $\sigma_i$ are the Pauli matrices operating over the sublattice space, and $V(\zeta)$ is the external potential that describes one or two dots. The eigenstates are determined by matching the solutions for the dot and barrier regions, which are of the form: \begin{equation} \label{Psi} \Psi^{\tau,\kappa,k}(\varphi,\zeta)=e^{i(\kappa R\varphi+k\zeta)}\left(\begin{array}{c} z^{\tau}_{\kappa,k}\\1 \end{array}\right). \end{equation} Here $\kappa,k$ denote the wave vectors around and along the tube, $z^{\tau}_{\kappa,k}=\pm(\tau\kappa-ik)/\sqrt{\kappa^2+k^2}$, and the energy is given by $E_{\kappa,k}=\pm\hbar v\sqrt{\kappa^2+k^2}$. Solving the effective Dirac equation for $V_{1D}(\zeta)$, which is 0 for $0<\zeta<L$ and $V_g$ otherwise, leads to a quantization condition for the longitudinal momentum modes $k_n$ of the localized states, where $n$ denotes the band index\cite{bulaev2008soi}. 
So far, the $K$ and $K'$ solutions are degenerate and independent of spin, $\sigma=\uparrow\downarrow$, leading to a four-fold symmetry. However, this symmetry is broken by spin-orbit coupling corrections\cite{kuemmeth2008csa,ando2000soi,Paco06} and by a constant magnetic field $B$ along the nanotube axis $\widehat{\zeta}$. The spin-orbit coupling and the presence of a magnetic field modify the quantization condition in the $\widehat{\varphi}$ direction, \begin{eqnarray} \kappa&=&\kappa_0+\Phi_{AB}/(\Phi_0 R)-s\Delta_{SO}/(\hbar v)\,,\\ \kappa_0&=&\tau/(3R)\,.\label{eq:kappa} \end{eqnarray} Here, $s=\pm1/2$ is the quantum number corresponding to the spin operator parallel to the nanotube ($\hat{S}_\zeta\ket{\uparrow}=1/2\ket{\uparrow}$ and $\hat{S}_\zeta\ket{\downarrow}=-1/2\ket{\downarrow}$); $\Delta_{SO}\approx1\mbox{meV}/R$[nm] is the energy splitting due to spin-orbit coupling; $\Phi_{AB}=B \pi R^2$ is the Aharonov-Bohm flux through the nanotube\cite{minot2004determination}; and $\Phi_0=hc/|e|$. The magnetic field also leads to a spin Zeeman term $H_z=s\hbar\omega$, where $\omega=|e| g B/(2m_0 c)$ is the Zeeman frequency in terms of the gyromagnetic constant $g$, the electron mass $m_0$, and the speed of light $c$. The sign convention for the spin used here is opposite to the one of Ref.~\onlinecite{wunsch09}. Note that in Eq.~\eqref{eq:kappa} we only consider the lowest mode in the transverse direction, since excitations involve energies of about $\hbar v/R$, which are much larger than longitudinal-excitation energies or Coulomb-interaction effects as long as $R \ll L$. An example of the single-particle energy spectrum of a single dot is shown in Fig.~\ref{E1multi}. Note that we measure the energy with respect to the center of the gap, so that the dominant part of the single-particle energy is constant and given by $\hbar v \kappa\approx 220 \,\mbox{meV}/R$[nm]. Generally, disorder or the confinement potential itself can lead to intervalley coupling.
However, for a noticeable effect, the potential must change on the scale of the nearest-neighbor lattice spacing between carbon atoms $a_0=1.4$~\AA. The experiment of Ref.~\onlinecite{kuemmeth2008csa} shows only a tiny valley mixing, and we neglect intervalley scattering in this work. \begin{figure}[h] \begin{center} \includegraphics[scale=0.7,angle=0]{fig2.pdf} \caption{(Color online) Single particle spectrum as a function of $B$ for $L=70$~nm, $V_g=78$~meV, and $R=2.5$~nm and $\Delta_{SO}<0$. Solid curves correspond to $E_{nK\uparrow}$, dashed curves correspond to $E_{nK\downarrow}$, dash-dotted curves correspond to $E_{nK'\uparrow}$, and dotted curves correspond to $E_{nK'\downarrow}$, where $n=1,2, 3$ is the band label.} \label{E1multi} \end{center} \end{figure} The longitudinal wave vector depends indirectly on the spin-valley quantum numbers and on the magnetic field, and these effects are included in the multiband calculations. However, since $\kappa_0\gg\Phi_{AB}/(\Phi_0 R),\Delta_{SO}/(2\hbar v),k_n$, the single particle energies can be significantly simplified, yielding \begin{eqnarray} E_{n,\alpha}&\approx& E^c_n(V_g)+E_{\alpha}\label{E1pA}\\ E_{\alpha}&=&B(\tau\mu_{orb}+2 \mu_{spin} s)-\Delta_{SO} \tau s\,. \label{Etaus} \end{eqnarray} Here we have introduced a single quantum number $\alpha\equiv (\tau,s)$ to describe the spin-valley degrees of freedom ($\alpha=K\uparrow, K\downarrow, K'\uparrow, K'\downarrow$). The confinement energy $E^c_n(V_g)=\hbar v\sqrt{\kappa_0^2+k_n^2}$ is the single-particle spectrum of a dot with potential depth $V_g$ ignoring magnetic-field, spin, and spin-orbit dependences; $\mu_{spin}=\hbar\omega/(2B)=\hbar |e| g /(4m_0 c)$; and $\mu_{orb}=\hbar v \pi R|e|/(hc)$. Within this approximation the orbital part of the single-particle states separates from the spin-valley part.
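For orientation, the magnitudes entering $E_{\alpha}$ can be estimated numerically. The sketch below uses assumed textbook constants ($\hbar v\approx 660$~meV\,nm for graphene, $g=2$) together with the geometry of Fig.~\ref{E1multi}; the orbital moment is obtained from the Aharonov-Bohm shift $\hbar v\,\Phi_{AB}/(\Phi_0 R)$. It reproduces the quoted scale $\hbar v\kappa_0\approx 220\,\mbox{meV}/R$[nm] and the zero-field splitting $\pm\Delta_{SO}/2$ of the four spin-valley levels:

```python
import numpy as np

# Illustrative estimate of the spin-valley energies E_alpha of Eq. (Etaus).
# hbar_v and g are assumed textbook values, not fitted parameters.
hbar_v   = 660.0                       # meV nm, graphene Fermi velocity
R        = 2.5                         # nm, tube radius as in the figure
Phi0     = 4136.0                      # T nm^2, flux quantum h c / |e|
Delta_SO = 1.0 / R                     # meV, spin-orbit splitting ~1 meV/R[nm]
mu_orb   = hbar_v * np.pi * R / Phi0   # meV/T, from the Aharonov-Bohm shift
mu_spin  = 0.0579                      # meV/T, g mu_B / 2 for g = 2

def E_alpha(B, tau, s):
    """Spin-valley part of the single-particle energy, Eq. (Etaus)."""
    return B * (tau * mu_orb + 2.0 * mu_spin * s) - Delta_SO * tau * s

# Dominant constant part of the energy: hbar v kappa_0 ~ 220 meV / R[nm]
print(hbar_v / (3.0 * R))                           # 88.0 meV for R = 2.5 nm

# Zero-field spectrum: two Kramers doublets split by Delta_SO
print(sorted(E_alpha(0.0, t, s) for t in (1, -1) for s in (0.5, -0.5)))

# Field scale Delta_SO / (2 mu_orb) at which orbital Zeeman and
# spin-orbit energies compete
print(Delta_SO / (2.0 * mu_orb))                    # ~0.16 T
```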
Figure~\ref{E1multi} illustrates that, for the parameters studied in this work, the spin-valley splitting is basically the same for all longitudinal bands, and Eq.~\eqref{E1pA} provides a good description of the single-particle spectrum of a single dot. Next, we consider a biased double-dot system, schematically presented in Fig.~\ref{dots3d}. In experiments, the double quantum dot is formed by applying appropriate voltages to external gates, and we model the resulting confinement potential $V_{2D}(\zeta)$ by a square well potential that is $-\Delta/2$ for $-a/2-L<\zeta<-a/2$, $\Delta/2$ for $a/2<\zeta<a/2+L$, and $V_g$ otherwise. The length of the dots is $L$ and $a$ is the width of the interdot barrier. As discussed previously, we do not expect our results to change qualitatively if a smoother potential is used. In the double-dot system at finite detuning, the depths of the dots change, affecting the single-particle energies. In the numerical calculation, we determine the eigenspectrum of $V_{2D}$ exactly. The main effect of the detuning is an energy shift $\pm \Delta/2$ of the single-particle eigenenergies, where ``$+$'' corresponds to the right dot and ``$-$'' to the left dot. Using Eq.~\eqref{E1pA} and neglecting interdot tunneling, the energies of the localized left and right single-particle orbitals are approximately \begin{equation} \label{E1pA2} E^{R/L}_{n,\alpha}\approx E^c_n(V_g \pm\Delta/2)+E_{\alpha}\approx E^c_n(V_g)\pm \Delta/2+E_{\alpha}. \end{equation} When more than one electron is confined in the single or double dot, electron-electron interactions become important. The electrons interact through the long-range Coulomb potential \begin{equation} V_c({\bf r_1},{\bf r_2})=\frac{e^2}{k_d|{\bf r_1}-{\bf r_2}|}, \label{coulomb} \end{equation} where $k_d$ denotes the dielectric constant. Coulomb interactions allow for certain off-diagonal matrix elements in valley space that are produced by intervalley scattering\cite{egger1997ele}.
However, these matrix elements are small for quantum dots with a size much larger than the interatomic distance; they are neglected in this work\cite{wunsch09}. To obtain an accurate description of interacting few-electron systems, we extend the single-dot treatment of Refs.~\onlinecite{wunsch09,secchi2009B} to the double-dot system. We construct single-particle orbitals localized in the left and right dots from the exact single-particle solutions of the double dot and then use them to expand the many-body Hamiltonian (see details in Appendix~\ref{App:CoulMat}). The single-particle orbitals have a weak dependence on the spin-valley degrees of freedom that comes from the dependence of the wave vectors $\kappa$ and $k$ on $\tau$ and $s$. This leads to a dependence of the interaction matrix elements on the spin-valley degrees of freedom. However, this dependence is very weak and, to a very good approximation, can be neglected. Thus, interactions can be considered spin-valley independent, allowing the separation of the orbital and the spin-valley contributions in the two-electron solutions~\cite{wunsch09}. \subsection{Separating orbital from spin-valley degrees of freedom} Since interactions can be considered diagonal in the spin-valley degrees of freedom, the orbital and spin-valley parts of the two-electron solutions provide independent contributions to the energies and the wave functions~\cite{wunsch09}. The total two-particle wave function must be antisymmetric with respect to particle exchange, so the symmetry of the orbital part must always be opposite to that of the spin-valley part. Thus, the two-particle states can be grouped according to their orbital symmetry (parity under particle exchange) into multiplets of six states if the orbital part is symmetric or ten states if the orbital part is antisymmetric.
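The multiplet dimensions quoted above are simply the numbers of antisymmetric and symmetric pairings of the four spin-valley labels; a one-line combinatorial check (illustrative):

```python
from itertools import combinations, combinations_with_replacement

# The four single-particle spin-valley labels alpha = (valley, spin)
states = [(tau, s) for tau in ("K", "K'") for s in ("up", "down")]

# Antisymmetric pairs |a,b>^- require a != b; symmetric pairs allow a = b
n_antisym = len(list(combinations(states, 2)))                   # 6
n_sym     = len(list(combinations_with_replacement(states, 2)))  # 10

print(n_antisym, n_sym)   # 6 10
```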
The energy splitting between different multiplets, called $\epsilon$, depends on the orbital part, which is determined by electron-electron interactions and longitudinal confinement, and it is generally given by a correlated state that is represented as a superposition of various two-electron orbital wave functions. The energy relations within a multiplet are exclusively determined by $E_\alpha$, which includes the orbital and spin Zeeman terms as well as the spin-orbit coupling [Eq.~\eqref{Etaus}]. Therefore, the spin-valley part of the wave function always has the simple form shown in Table~\ref{table1}. \begin{figure}[h] \begin{center} \begin{tabular}{cc} \includegraphics[scale=0.85,angle=0]{table1.pdf} &\includegraphics[scale=0.38,angle=0]{fig3.pdf} \end{tabular} \caption{Table: spin-valley multiplets, here {\scriptsize $ |\tau_1 s_1,\tau_2s_2\rangle^{\pm}= (|\tau_1s_1,\tau_2s_2\rangle \pm |\tau_2s_2,\tau_1s_1\rangle)/\sqrt{2}$}. The states in each multiplet are grouped in three columns according to their spin-orbit energy. Figures: Schematic magnetic-field dependence of antisymmetric (a) and symmetric (b) multiplets.} \label{table1} \end{center} \end{figure} The magnetic field dependence of the two multiplets is illustrated in Fig.~\ref{table1}. The competition between spin-orbit coupling and orbital Zeeman energy leads to a ground-state crossing in the multiplet with antisymmetric spin-valley part at a critical magnetic field $B_c=\Delta_{SO}/(2\mu_{orb})$. For $B<B_c$ the ground state of the antisymmetric spin-valley multiplet [Fig.~\ref{table1}~(a)] is a superposition of spin singlet and triplet, since $\ket{K\downarrow, K'\uparrow}^-=(\ket{K K'}^+ \ket{\downarrow\uparrow}^-+\ket{K K'}^- \ket{\downarrow\uparrow}^+)/\sqrt{2}$. For $B>B_c$, the lowest state of the antisymmetric spin-valley multiplet $\ket{K\uparrow, K\downarrow}^-$ is antiferromagnetic in spin but ferromagnetic (or polarized) in valley space.
The lowest state of the symmetric spin-valley multiplet $\ket{K\downarrow, K\downarrow}^+$ is ferromagnetic in both spin and valley space for all positive magnetic fields. The energy difference between the multiplets, $\epsilon$, and the magnetic field $B$ determine the spin-valley symmetry of the ground state. The thick curves in Figs.~\ref{table1}(a) and (b) correspond to the possible ground states, which we label according to their spin and valley symmetry. \subsection{Model Description} One of the objectives of this study is to describe the evolution of the spectrum as the detuning is changed from small to large and the low-energy configurations change from $|1L,1R\rangle$, i.e. one electron per dot, to $|2L,0R\rangle$ or $|0L,2R\rangle$, two electrons in the same dot. An accurate description of the strong interaction effects requires the inclusion of several single-particle bands of the double-dot system. For example, the behavior of the $|2L,0R\rangle$ configurations is expected to be very similar to that of the doubly occupied single dot, and in that situation the strong correlations need to be described by many single-particle orbitals~\cite{wunsch09,secchi2009B}. This situation makes the description and the interpretation of the results not very intuitive. However, we can significantly simplify the description by realizing that, no matter how strongly correlated the system is, the parity is a good quantum number and the states can be classified according to it. Thus, we can model the exact system with a simple effective Hamiltonian in the charge degrees of freedom that captures this dependence on parity and describes the energetically lowest multiplets of the $|1L,1R\rangle$, $|2L,0R\rangle$ and $|0L,2R\rangle$ configurations.
The charge degrees of freedom of two electrons in a double dot can have three configurations: $|2L,0R\rangle^\pm$, $|1L,1R\rangle^\pm$, and $|0L,2R\rangle^\pm$, where $\pm$ characterizes the conserved orbital symmetry (i.e., $+$ for antisymmetric and $-$ for symmetric spin-valley states). Within this model, interaction effects can be obtained by diagonalizing the effective Hamiltonians $H_S$ and $H_{AS}$, where $S$ and $AS$ denote the symmetric and antisymmetric spin-valley states. The complex multiband problem is then reduced to simple $3\times3$ matrices. \begin{eqnarray} H_S=\left( \begin{array}{ccc} V+V_{ex}-\Delta & -t_{S}&0\\ -t_{S}&V_{LR} & t_{S}\\ 0 &t_{S}&V+V_{ex}+\Delta \end{array} \right) \label{HmodelS} \end{eqnarray} and \begin{eqnarray} H_{AS}=\left( \begin{array}{ccc} V-V_{ex}-\Delta & -t_{AS}&0\\ -t_{AS}&V_{LR} & -t_{AS}\\ 0 &-t_{AS}&V-V_{ex}+\Delta \end{array} \label{HmodelAS} \right). \end{eqnarray} These effective Hamiltonians include the onsite and nearest-neighbor interactions $V$ and $V_{LR}$, the tunnelings $t_S$ and $t_{AS}$ in the symmetric and antisymmetric configurations, and the detuning effects $\Delta$. The dependence of the interaction on the symmetry is introduced by an effective exchange term $V_{ex}$ that favors the antisymmetric spin-valley configuration. Equations~\eqref{HmodelS} and \eqref{HmodelAS} describe the energy associated with the orbital part of the wave function. The total energy also contains the contribution of the spin-valley part $E_{SV}=\sum_{\alpha}[E^c_1(V_g)+E_{\alpha}] n_{\alpha}$, where $E_{\alpha}$ was defined in Eq.~\eqref{Etaus} and $n_{\alpha}$ denotes the occupation of states with spin-valley quantum number $\alpha$. To gain a qualitative understanding of the interaction and tunneling terms in the effective Hamiltonian, we analyze the limiting behaviors of the low-energy spectrum.
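For concreteness, the two $3\times3$ blocks can be built and diagonalized directly. The sketch below is a minimal numerical illustration; the parameter values (in meV) are arbitrary stand-ins, not the fitted values of this work:

```python
import numpy as np

def h_sym(V, V_LR, V_ex, t_S, delta):
    """H_S in the basis (|2L,0R>, |1L,1R>, |0L,2R>)."""
    return np.array([[V + V_ex - delta, -t_S, 0.0],
                     [-t_S, V_LR, t_S],
                     [0.0, t_S, V + V_ex + delta]])

def h_asym(V, V_LR, V_ex, t_AS, delta):
    """H_AS; the exchange term V_ex enters with the opposite sign."""
    return np.array([[V - V_ex - delta, -t_AS, 0.0],
                     [-t_AS, V_LR, -t_AS],
                     [0.0, -t_AS, V - V_ex + delta]])

# Illustrative parameters in meV (assumed values, not extracted from the data)
V, V_LR, V_ex, t12 = 10.0, 2.0, 0.5, 0.2
E_S = np.linalg.eigvalsh(h_sym(V, V_LR, V_ex, t12, delta=0.0))
E_AS = np.linalg.eigvalsh(h_asym(V, V_LR, V_ex, t12, delta=0.0))
print(E_AS[0], E_S[0])  # lowest orbital energy of each multiplet
```

At zero detuning and weak tunneling, the lowest eigenvalues reduce to $V_{LR}$ minus a small superexchange shift, with the antisymmetric spin-valley block lying lower.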
First, we consider the limit of zero detuning and large local Coulomb interactions, in which the lowest symmetric and antisymmetric spin-valley multiplets have energies: \begin{eqnarray} E^{g}_{AS}&\approx&E_{SV}+V_{LR}-\frac{2t_{AS}^2}{V-V_{LR}-V_{ex}} +..., \label{E11ap1}\\ E^{g}_S&\approx&E_{SV}+V_{LR}-\frac{2t_{S}^2}{V-V_{LR}+V_{ex}} +... \label{E11ap2} \end{eqnarray} In the strong interaction regime, our numerical calculations indicate that, to a good approximation, $t_S\approx t_{AS}\approx t_{12}$. This approximation allows us to obtain a simple expression for the energy splitting between the multiplets, $\epsilon=E^{g}_S-E^{g}_{AS}\approx 4 t_{12}^2 V_{ex}/(V-V_{LR})^2$, in the limit of zero detuning and for $V_{ex}\ll V-V_{LR}$. For a biased double dot system, the single particle energies acquire an energy shift of $\pm\Delta/2$, and in the limit of large detuning the two electrons occupy the same dot. In this limit, the energies of the lowest symmetric and antisymmetric spin-valley multiplets are \begin{eqnarray} E^{g}_{AS}&\approx&E_{SV}-\Delta+V-V_{ex}+..., \label{E20ap1}\\ E^{g}_S&\approx&E_{SV}-\Delta+V+V_{ex}+... \label{E20ap2} \end{eqnarray} Thus, the energy splitting $\epsilon=E^{g}_S-E^{g}_{AS}\approx 2 V_{ex}$ in the (2,0) configuration is mainly controlled by the exchange mechanism. This effective model allows a simple and intuitive understanding of the underlying physical behavior of the double dot system. However, to extract the parameters $V$, $V_{LR}$, $V_{ex}$ and $t_{12}$ we need to solve the single and double dot systems exactly. In the next section, we analyze the behavior of the double-dot system in different magnetic field, detuning and interaction regimes by comparing the exact diagonalization solutions discussed in the previous subsection with the model Hamiltonian. From this comparison, we extract the parameters of the model.
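Both limiting expressions can be checked against the effective model itself. The short sketch below rebuilds the $3\times3$ blocks inline and compares their exact ground-state splitting with the superexchange estimate at zero detuning and the exchange-dominated estimate at large detuning (the parameter values, in meV, are illustrative assumptions):

```python
import numpy as np

def eps_split(V, V_LR, V_ex, t, delta):
    """Multiplet splitting eps = E_S^g - E_AS^g from the 3x3 effective model."""
    H_S = np.array([[V + V_ex - delta, -t, 0.0], [-t, V_LR, t], [0.0, t, V + V_ex + delta]])
    H_AS = np.array([[V - V_ex - delta, -t, 0.0], [-t, V_LR, -t], [0.0, -t, V - V_ex + delta]])
    return np.linalg.eigvalsh(H_S)[0] - np.linalg.eigvalsh(H_AS)[0]

V, V_LR, V_ex, t12 = 10.0, 2.0, 0.5, 0.2   # illustrative values in meV

eps0 = eps_split(V, V_LR, V_ex, t12, delta=0.0)     # (1,1) superexchange regime
eps_20 = eps_split(V, V_LR, V_ex, t12, delta=40.0)  # (2,0) exchange regime
print(eps0, 4 * t12**2 * V_ex / (V - V_LR)**2)      # vs. 4 t^2 V_ex / (V - V_LR)^2
print(eps_20, 2 * V_ex)                             # vs. 2 V_ex
```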
\section{Results}\label{results} In this section, we analyze the energy spectrum of two interacting electrons in a double dot in three different detuning regimes: (A) small detuning, with one electron in each dot; (B) strong detuning, with both electrons in the same dot; and (C) the crossover between both regimes. In subsection~\ref{subsecST}, we study the transport properties of the double dot. An important energy scale of the two-particle spectrum is the energy spacing $\epsilon$ between the lowest state with an antisymmetric orbital part and the lowest state with a symmetric orbital part at zero magnetic field $B=0$. In all detuning regimes, we find $\epsilon>0$, in agreement with the Lieb-Mattis theorem\cite{Lieb61}. However, Coulomb correlations can significantly reduce $\epsilon$. In our analysis, we have considered different double dot configurations by changing the length and depth of the dots as well as the interdot distance, and found the same scaling of interaction effects as discussed in Ref.~\onlinecite{wunsch09}. In this section we present results for the parameters $L=70$~nm, $a=20$~nm, $V_g=78$~meV, $R=2.5$~nm. A single well with these parameters supports five bound states and has an energy splitting of $\hbar \omega_0\approx10.6$~meV between the two lowest of them. The dielectric constant $k_d$ is varied in the range $1.5\le k_d\le3.5$, which allows us to explore the strongly interacting regime where new transitions occur. Experimentally, however, it is easier to change the length of the dots. The parameter that characterizes the strength of the interactions is the ratio $U/(\hbar\omega_0)$, where $U=e^2/(k_d L)$ is the characteristic intradot interaction energy. This ratio is typically in the range $1<U/(\hbar\omega_0)<5$, implying a moderately to strongly interacting regime.
Another relevant parameter that determines the spin-valley nature of the ground state is, as we will discuss below, the ratio $2V_{ex}/\Delta_{SO}$, which reflects the competition between interactions and spin-orbit effects. \subsection{Low energy spectrum of a symmetric double dot} \begin{figure}[h] \begin{center} \includegraphics[scale=0.6,angle=0]{fig4.pdf} \caption{(Color online) Low-energy spectrum of a double dot as a function of $B$. Solid symbols (red online) correspond to states belonging to the antisymmetric spin-valley multiplet and open symbols (blue online) to the symmetric spin-valley multiplet. The solid curves correspond to the effective Hamiltonian description.} \label{ED0} \end{center} \end{figure} First, we analyze the numerical results obtained with the multiband treatment. Figure~\ref{ED0} shows the low-energy spectrum as a function of $B$ for $k_d=2.5$ in a double dot with $\Delta=0$ (zero detuning). In the low-energy spectrum, we can recognize the symmetric and antisymmetric spin-valley multiplets discussed in Table~\ref{table1}. The energy difference $\epsilon$ between the two multiplets is not resolvable on the energy scale of Fig.~\ref{ED0}. Since interdot tunneling is very small compared with the interaction energy, the two electrons occupy different dots to avoid the strong intradot interaction. The interdot interaction is almost independent of the orbitals occupied in each dot. Thus, at small detuning there is a negligible occupation of higher bands. In Fig.~\ref{ED0}, we identify three states that will be relevant for the discussion of Pauli blockade. One of them has mixed spin-valley symmetry, and we label it $\ket{\tilde{M}}$. The second state is antiferromagnetic in spin and ferromagnetic in the valley degree of freedom, and we label it $\ket{\tilde{AfF}}$. The third state, labeled $\ket{\tilde{FF}}$, is ferromagnetic in both spin and valley degrees of freedom.
States $\ket{\tilde{M}}$ and $\ket{\tilde{AfF}}$ belong to the antisymmetric spin-valley multiplet, and $\ket{\tilde{FF}}$ belongs to the symmetric spin-valley multiplet. Their configurations are approximately \begin{eqnarray} \ket{\tilde{M}}&\approx & |1L,1R\rangle^{+}\ket{K \downarrow ; K' \uparrow }^-,\label{t1}\\ \ket{\tilde{AfF}}&\approx & |1L,1R\rangle^{+}\ket{K \downarrow ; K \uparrow }^-, \mbox{ and}\label{t2}\\ \ket{\tilde{FF}}&\approx & |1L,1R\rangle^{-}\ket{K \downarrow ; K \downarrow }.\label{t3} \end{eqnarray} Using exact diagonalization, we extract the wave function and conclude that the states $|1L,1R\rangle^{\pm}$ are given, to a very good approximation, by a single Slater determinant formed with left and right orbitals in the lowest band. \begin{figure}[h] \begin{center} \includegraphics[scale=0.5,angle=0]{fig5.pdf} \caption{(Color online) Parameters of the effective Hamiltonian as a function of the dielectric constant.} \label{ParamEfHam} \end{center} \end{figure} We can approximate the energies of $\ket{\tilde{M}}$, $\ket{\tilde{AfF}}$ and $\ket{\tilde{FF}}$ using Eqs.~\eqref{E1pA2}, \eqref{E11ap1} and \eqref{E11ap2}, \begin{eqnarray} E_{\ket{\tilde{M}}}&\approx&2E_1^c(V_g)-2t_{12}^2/(V-V_{LR}- V_{ex})+V_{LR}\nonumber\\ && -\Delta_{SO},\\ E_{\ket{\tilde{AfF}}}&\approx&2E_1^c(V_g)-2t_{12}^2/(V-V_{LR}- V_{ex})+V_{LR}\nonumber\\ && -2 B \mu_{orb},\\ E_{\ket{\tilde{FF}}}&\approx&2E_1^c(V_g)-2t_{12}^2/(V-V_{LR}+V_{ex})+V_{LR} \nonumber\\ &&-\Delta_{SO} -2 B (\mu_{orb} + \mu_{spin}).\label{E3Tap} \end{eqnarray} At $B=0$, the ground state $\ket{\tilde{M}}$ is separated only by the very small superexchange energy $\epsilon\approx 4 t_{12}^2 V_{ex}/(V-V_{LR})^2$ from the lowest spin-valley symmetric triplet that contains $\ket{\tilde{FF}}$. These four states are separated from the rest of the spectrum by a much larger energy scale given by $\Delta_{SO}$. This energy structure resembles the singlet-triplet splitting in GaAs double quantum dots.
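The magnetic-field dependence of these three states follows from their spin-valley contributions alone. In the sketch below the common orbital energy (including the tiny superexchange difference) is dropped, and the values of $\Delta_{SO}$, $\mu_{orb}$, and $\mu_{spin}$ are placeholder numbers, not the fitted parameters of this work:

```python
# Placeholder values: D_SO in meV, moments in meV/T (assumed, not fitted)
D_SO, mu_orb, mu_spin = 0.4, 0.9, 0.058

def E_M(B):   return -D_SO                                # Kramers pair, no net moment
def E_AfF(B): return -2 * B * mu_orb                      # valley polarized
def E_FF(B):  return -D_SO - 2 * B * (mu_orb + mu_spin)   # spin and valley polarized

B_c = D_SO / (2 * mu_orb)   # field at which |M~> and |AfF~> cross
for B in (0.1 * B_c, 2.0 * B_c):
    print(B, min((E_M(B), 'M'), (E_AfF(B), 'AfF'), (E_FF(B), 'FF')))
```

Within this simplified picture $\ket{\tilde{FF}}$ is the lowest state at any finite field, while the first excited state switches from $\ket{\tilde{M}}$ to $\ket{\tilde{AfF}}$ at $B_c$.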
At finite fields, $\ket{\tilde{FF}}$ is the ground state. The first excited state changes with increasing magnetic field from $\ket{\tilde{M}}$ to $\ket{\tilde{AfF}}$ at $B_c= \Delta_{SO}/(2 \mu_{orb})$, as shown in Fig.~\ref{ED0}. This crossing between two antisymmetric spin-valley states has no analog in standard GaAs quantum dots. From the analysis of the single and double dot spectra, we can obtain the parameters of the effective charge Hamiltonian [Eqs.~\eqref{HmodelS} and \eqref{HmodelAS}]. Figure~\ref{ParamEfHam} presents the parameters $V$, $V_{LR}$, $t_{12}$ and $V_{ex}$ for a double dot with $L=70$~nm, $a=20$~nm, $V_g=78$~meV, $R=2.5$~nm, and $\Delta=0$. $V$ and $V_{ex}$ are obtained from the two-electron spectrum of a single dot, and $V_{LR}$ and $t_{12}$ from the double dot spectrum. The black solid curves in Fig.~\ref{ED0} show the prediction of the model using the parameters from Fig.~\ref{ParamEfHam}. \subsection{Low energy spectrum for large detuning: Two electrons in a single dot} When the detuning becomes larger than the intradot interaction, both electrons occupy the same dot, and the charge degree of freedom of the low-energy eigenstates can be described by the $|2L,0R\rangle$ configuration. In this regime, the energy spectrum resembles the one obtained for two electrons in an isolated dot\cite{wunsch09}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.55,angle=0]{fig6.pdf} \caption{(Color online) Low energy spectrum at large detuning ($\Delta\approx35$~meV). Solid symbols (red online) correspond to states belonging to the antisymmetric spin-valley multiplet, and open symbols (blue online) to the symmetric spin-valley multiplet. Curves represent the model predictions.} \label{Ener2p1d} \end{center} \end{figure} Figure~\ref{Ener2p1d} presents the low-energy spectrum at large detuning for $k_d=2.5$. The solid curves represent the effective Hamiltonian predictions.
The multiband structure of the solutions introduces corrections to the spin-valley dependence of the spectrum, which can be absorbed into effective $\mu_{orb}$ and $\Delta_{SO}$. However, these corrections are small, amounting to changes of a few percent in the bare $\mu_{orb}$ and $\Delta_{SO}$ values. At zero magnetic field, the antisymmetric spin-valley multiplet is favored. This is in agreement with the Lieb-Mattis theorem\cite{Lieb61}, which states that the two-particle ground state always has a symmetric orbital part. In our two-electron system, we can understand this prediction from the analysis of the orbital symmetry of the wave function. In the noninteracting limit, the orbital ground state is constructed with both electrons in the lowest band, corresponding to a symmetric orbital wave function, i.e., an antisymmetric spin-valley wave function. To form an antisymmetric orbital wave function, at least one electron has to occupy an excited state. If the electrons were noninteracting, $\epsilon$ would be given by the level splitting, $\hbar\omega_0$, between the first two bands. However, interactions substantially reduce the energy difference $\epsilon$ between the two multiplets, as shown in Fig.~\ref{Ener2p1d}. Consequently, $\epsilon$ can be changed by tuning the ratio $U/\hbar \omega_0$, which can be achieved by changing the dielectric constant (modifying $U$) or changing the dot length (modifying both $U$ and $\omega_0$). In the limit of infinite interactions, the electrons are strongly localized at the positions that minimize the interaction energy. Then the orbital symmetry of the wave function becomes irrelevant and $\epsilon$ vanishes. This effect is a signature of the formation of a Wigner molecule\cite{wunsch09,secchi2009B,secchi2009cvs}. In the effective Hamiltonian, the formation of a Wigner molecule is manifested in the reduction of the parameter $V_{ex}$.
This is evident in Fig.~\ref{ParamEfHam}, which shows that $V_{ex}$ is a growing function of the dielectric constant. Even though the results of Fig.~\ref{ParamEfHam} are for $\Delta=0$, we note that $V$ and $V_{ex}$ depend weakly on the detuning and can be approximated by the $\Delta=0$ predictions for all detunings studied here, $\Delta< 35$~meV. In contrast, the tunneling $t_{12}$ is strongly affected by the detuning and is reduced by approximately a factor of two in comparison with the $\Delta=0$ case. The reduction of $\epsilon$ implies that the exact eigenstates become strongly correlated and can no longer be characterized by noninteracting wave functions. However, we can still label the low-lying $(2,0)$ states according to their conserved quantum numbers: \begin{eqnarray} \ket{M}&=& |2L,0R\rangle^+ \ket{K \downarrow ; K' \uparrow }^-,\label{eqM}\\ \ket{AfF}&=& |2L,0R\rangle^+ \ket{K \downarrow ; K \uparrow }^-, \mbox{ and}\\ \ket{FF}&=& |2L,0R\rangle^- \ket{K \downarrow ; K \downarrow }.\label{eqFF} \end{eqnarray} Here the states $|2L,0R\rangle^\pm$ are correlated orbital states of two electrons in the left dot. Our numerical calculations show that several bands are needed to represent these states accurately. We note that Secchi and Rontani found qualitatively the same correlation effects for a parabolic well with weak confinement~\cite{secchi2009cvs}, which suggests the robustness of these correlation effects. We note that, because of the conserved symmetries, the eigenstates at small detuning $\ket{\tilde{M}}$, $\ket{\tilde{AfF}}$, and $\ket{\tilde{FF}}$ evolve into $\ket{M}$, $\ket{AfF}$, and $\ket{FF}$ at large detuning.
Using the effective Hamiltonian along with the approximate description of the single particle energies [Eqs.~\eqref{E1pA2}, \eqref{E20ap1} and \eqref{E20ap2}], we can obtain simple expressions for the (2,0) configuration energies: \begin{eqnarray} E_{\ket{M}}&\approx&2E_1^c(V_g)+V-V_{ex}-\Delta_{SO}-\Delta, \\ E_{\ket{AfF}}&\approx&2E_1^c(V_g)+V-V_{ex}-2 B \mu_{orb}-\Delta, \,\,\mbox{and}\\ E_{\ket{FF}}&\approx&2E_1^c(V_g)+V+V_{ex}-\Delta_{SO}\nonumber\\ & &-2 B (\mu_{orb} + \mu_{spin})-\Delta.\label{E3ap} \end{eqnarray} At $B=0$, $\ket{M}$ is the ground state. Above a critical magnetic field, there is a ground-state transition to either $\ket{AfF}$ ($\Delta_{SO}<\epsilon$) or $\ket{FF}$ ($\Delta_{SO}>\epsilon$) due to the orbital Zeeman term. Thus, the reduction of $\epsilon$ leads to a ground-state transition to the ferromagnetic state $\ket{FF}$ at finite magnetic field. In practice, the formation of a ferromagnetic ground state can be experimentally controlled by changing the length of the dots. \subsection{Transition from a double- to a single-dot regime.} We now analyze the transition between the limiting behaviors discussed in the previous two sections. \begin{figure}[h] \begin{center} \includegraphics[scale=0.5,angle=0]{fig7.pdf} \caption{(Color online) Zoom of the first crossing of (1,1) and (2,0) states for $B=0$. Solid symbols (red online) correspond to states belonging to the antisymmetric spin-valley multiplet, and open symbols (blue online) to the symmetric spin-valley multiplet. Curves represent the model predictions. } \label{Ezoom1} \end{center} \end{figure} Figure~\ref{Ezoom1} shows the crossover between the $|1L,1R\rangle$ states and the $|2L,0R\rangle$ states of the double dot system at $B=0$. In the absence of interdot tunneling, the two-electron spectrum shows sharp crossings between the $|1L,1R\rangle$ states and the $|2L,0R\rangle$ states with increasing detuning.
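The evolution of the multiplet splitting $\epsilon$ across this crossover can be traced directly in the effective model. The sketch below compares the exact splitting of the $3\times3$ model with a two-level reduction that keeps only the lowest $|2L,0R\rangle$ and $|1L,1R\rangle$ configurations in each sector (illustrative parameters in meV; the sign convention of the closed form is fixed by requiring $\epsilon\to 2V_{ex}$ at large detuning):

```python
import numpy as np

def eps_exact(V, V_LR, V_ex, t, delta):
    """eps = E_S^g - E_AS^g from the full 3x3 effective model."""
    H_S = np.array([[V + V_ex - delta, -t, 0.0], [-t, V_LR, t], [0.0, t, V + V_ex + delta]])
    H_AS = np.array([[V - V_ex - delta, -t, 0.0], [-t, V_LR, -t], [0.0, -t, V - V_ex + delta]])
    return np.linalg.eigvalsh(H_S)[0] - np.linalg.eigvalsh(H_AS)[0]

def eps_two_level(V, V_LR, V_ex, t, delta):
    """Two-level reduction valid near the crossover delta ~ V - V_LR."""
    a = delta - V + V_LR
    return V_ex + 0.5 * (np.hypot(2 * t, a + V_ex) - np.hypot(2 * t, a - V_ex))

V, V_LR, V_ex, t12 = 10.0, 2.0, 0.5, 0.2   # illustrative values in meV
for delta in (6.0, 8.0, 10.0):             # around the crossover
    print(delta, eps_exact(V, V_LR, V_ex, t12, delta),
          eps_two_level(V, V_LR, V_ex, t12, delta))
```

Exactly at the crossover, $\Delta=V-V_{LR}$, the two-level expression reduces to $\epsilon=V_{ex}$.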
Because of the interdot tunneling, crossings between states with the same symmetries turn into avoided crossings. The avoided crossings occur at the same critical detuning for states of the same multiplet. The avoided crossings within the multiplet with an antisymmetric orbital part occur at a relatively larger detuning, since the tunneling electron is forced to occupy an excited band. The crossover regime, presented in Fig.~\ref{Ezoom1}, is strongly affected by interactions. The difference between the critical detunings of the avoided crossings of states with symmetric and antisymmetric orbital parts is a direct measure of the energy splitting $\epsilon$. Furthermore, correlations in the $|2L,0R\rangle$ states decrease the tunnel coupling to the corresponding $|1L,1R\rangle$ state, leading to a sharper avoided crossing. \begin{figure}[h] \begin{center} \includegraphics[scale=0.75,angle=0]{fig8.pdf} \caption{(Color online) $\epsilon$ as a function of detuning; symbols correspond to the numerical results and the curve to the effective Hamiltonian prediction.} \label{epsilon} \end{center} \end{figure} In the effective Hamiltonian description, the lowest $|1L,1R\rangle$ and $|2L,0R\rangle$ configurations are close in energy when $\Delta\sim V-V_{LR}$. In this regime, the $|0L,2R\rangle$ configurations are energetically suppressed and do not affect the low energy spectrum. Within this approximation, we can obtain a simple expression for $\epsilon$, \begin{eqnarray} \epsilon=V_{ex}+\frac{1}{2}\left(\sqrt{4t_{12}^2+(\Delta+V_{ex}-V+V_{LR})^2}\right.\nonumber\\ \left.-\sqrt{4t_{12}^2+(\Delta-V_{ex}-V+V_{LR})^2}\right), \end{eqnarray} which interpolates between the superexchange value at small detuning and $2V_{ex}$ at large detuning. This expression compares well with the numerical results, as shown in Fig.~\ref{epsilon}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.12,angle=0]{fig9.jpg} \caption{(Color online) Phase diagram for a double quantum dot with five bands and $k_d=1.5$ (a) and $k_d=3.5$ (b).
Black solid lines separate regions with different spin-valley symmetry. Regions with the same greyscale (color online) characterize the ground state.} \label{DP5b} \end{center} \end{figure} From the analysis of the low-energy spectrum, e.g., Fig.~\ref{Ezoom1}, we can extract the phase diagram as a function of the detuning $\Delta$ and the magnetic field $B$. The phase diagrams for strong ($k_d=1.5$) and weak ($k_d=3.5$) interactions, presented in Fig.~\ref{DP5b}, exhibit clear differences. At exactly zero magnetic field, the ground state always has a symmetric orbital wave function ($\ket{M}$ and $\ket{\tilde{M}}$). This state is preferred over other states within the same multiplet by the spin-orbit coupling $\Delta_{SO}$ and over states of the other multiplet by the energy gap $\epsilon$. The ground state changes at a critical magnetic field to a valley-polarized state. The valley-polarized ground state has a symmetric orbital part for $\Delta_{SO}<\epsilon$ and an antisymmetric orbital part for $\Delta_{SO}>\epsilon$. For small detuning $\Delta$, the energy splitting $\epsilon$ is caused by the small superexchange and $\epsilon<\Delta_{SO}$. However, for large detuning, both electrons are on the same dot, and the value of $\epsilon$ can be larger or smaller than $\Delta_{SO}$. Thus, if $2 V_{ex}>\Delta_{SO}$, there is a transition to a valley-polarized state with a symmetric orbital part, $\ket{AfF}$, as the detuning is increased [as observed in Fig.~\ref{DP5b}~(b)]. \subsection{Sequential transport} \label{subsecST} In this section, we discuss how the two-electron eigenspectrum affects sequential transport through a double dot. In particular, we analyze the existence of a Pauli blockade in serially coupled dots, which would lead to current rectification in DC transport\cite{Ono2002,petta2005cmc}. Figure~\ref{Blockade} shows a schematic description of two situations in which a Pauli blockade occurs.
In both cases, the double-dot system is detuned such that there is always at least one electron in the left dot. States relevant for transport through the double dot are the $|2L,0R\rangle$ states $\ket{M}$ and $\ket{FF}$ as well as the $|1L,1R\rangle$ states $\ket{\tilde{M}}$ and $\ket{\tilde{FF}}$. In Fig.~\ref{Blockade}, the vertical axis denotes energy and the horizontal axis the spatial coordinate along the nanotube. The center of each figure shows the double dot created by tunnel barriers to the contacts and between the dots. The gray rectangles to the left and right of the double dot are the Fermi seas in the contacts. \begin{figure}[h] \begin{center} \includegraphics[width=\linewidth,angle=0]{fig10.pdf} \caption{ (Color online) Schematic representation of a Pauli blockade for: (a) zero magnetic field and negative bias, and (b) finite magnetic field and positive bias. The white areas in the center of each figure correspond to the two dots. The black circle in the left dot stands for the position and energy of one of the electrons. The states $\ket{M}$ and $\ket{\tilde{M}}$ are represented by solid lines, and the states $\ket{FF}$ and $\ket{\tilde{FF}}$ by dashed lines. These states are organized vertically according to their energies, and also represent the possible positions of the second electron. The wide gray rectangles next to the left and right dots correspond to the energy bias.} \label{Blockade} \end{center} \end{figure} For a positive bias voltage [opposite to Fig.~\ref{Blockade}(a)], the electrochemical potential in the left contact is larger than in the right one, and current flows via a sequential tunneling process $|1L,0R\rangle\to|2L,0R\rangle\to|1L,1R\rangle\to|1L,0R\rangle$. Interdot tunneling is assumed to conserve the spin-valley degree of freedom and allows for transitions between the states $\ket{M}$ and $\ket{\tilde{M}}$ or $\ket{FF}$ and $\ket{\tilde{FF}}$.
Because of the sequential transport setup, the left dot couples to the left reservoir, allowing for transitions between the $|1L,0R\rangle$ and $|2L,0R\rangle$ states. Analogously, the right contact allows for transitions between the $|1L,0R\rangle$ and $|1L,1R\rangle$ states. Figure~\ref{Blockade}(a) considers the Pauli blockade at $B=0$. In this scenario, the current is blocked when the state $\ket{\tilde{FF}}$ is occupied by an electron tunneling in from the right reservoir. Once in state $\ket{\tilde{FF}}$, the electron cannot tunnel back because of the filled Fermi sea in the right contact. Also, if $\epsilon$ is large, the electron cannot tunnel to the left dot, since a transition to the $\ket{FF}$ state is energetically suppressed [see Fig.~\ref{Blockade}(a)]. However, $\epsilon$ can be strongly reduced, allowing for a finite exit rate from state $\ket{\tilde{FF}}$. This effect might explain the absence of a Pauli blockade in Ref.~\onlinecite{Gotz09}. This Pauli blockade implies a rectification of the current since, by inverting the bias voltage, a finite current can flow\cite{footnote1}. In order to make these statements more quantitative, we calculate the stationary current with a rate equation approach\cite{Stoof96,sprekeler04,wunsch05}. We describe the regime of possible blockade depicted in Fig.~\ref{Blockade}(a), assuming the spin-orbit coupling to be much larger than the temperature, the transport voltage and the external coupling to the reservoirs. The rate equations and the resulting stationary current are given in Appendix~\ref{rate}.
The stationary current $I_{bl}$ in the blockade setup and the probability of being in any of the three degenerate $|\tilde{FF}\rangle$ states are given by: \begin{eqnarray} I_{bl}&=&\frac{t_S^2 A}{B+C(2t_S^2+\epsilon^2)}\label{current1}\\ P_{\tilde{FF}}&=&\frac{ C(t_S^2+\epsilon^2)+D}{B+C(2t_S^2+\epsilon^2)} \end{eqnarray} Here $\epsilon$ denotes the energy splitting between the states $\ket{M}$ and $\ket{FF}$, and $t_S$ is the interdot tunneling rate between the states $\ket{FF}$ and $\ket{\tilde{FF}}$. The constants $A,B,C,D$ depend on the coupling strengths to the contacts and on the spectral weights for the tunneling probabilities; their form is given in Appendix~\ref{rate}. If $\epsilon$ is the dominant energy scale, then the current is suppressed as $I_{bl}\propto 1/\epsilon^2$ and the double dot gets stuck in the states $|\tilde{FF}\rangle$, with $P_{\tilde{FF}}\to 1$. This is the regime of Pauli blockade. However, we find that, due to interaction effects, the interdot tunneling $t_S$ can even exceed the energy splitting $\epsilon$, thus removing the blockade mechanism. \begin{figure}[h] \begin{center} \includegraphics[width=0.7\linewidth,angle=0]{fig11.pdf} \caption{(Color online) Ratio between the leakage current $I_{bl}$~\eqref{current1} corresponding to the setup in Fig.~\ref{Blockade}(a) and the finite current for reversed transport voltage $I_{op}$~\eqref{current2}, i.e., the current in the open (unblocked) direction. Parameters used: $\Gamma_L=\Gamma_R=0.01$~meV, $t_{AS}=t_S=0.1$~meV, $S_M=0.7$, and $S_F=1$. } \label{Blockade2} \end{center} \end{figure} Due to the Pauli blockade, the double dot acts as a current rectifier, since by reversing the transport voltage in Fig.~\ref{Blockade}(a) a finite current can flow. Figure~\ref{Blockade2} illustrates how effectively the double dot acts as a current rectifier as a function of $\epsilon$. At finite magnetic field, the transport behavior can also be very interesting.
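The $1/\epsilon^2$ suppression and the trapping $P_{\tilde{FF}}\to1$ can be illustrated directly from these expressions. In the sketch below the constants $A$, $B$, $C$, $D$ are arbitrary placeholders (their actual forms, which depend on the contact couplings and spectral weights, are given in Appendix~\ref{rate}):

```python
def blockade_current(t_S, eps, A=1.0, B=1.0, C=1.0, D=0.1):
    """Stationary current and trapping probability with placeholder constants."""
    denom = B + C * (2.0 * t_S**2 + eps**2)
    I_bl = t_S**2 * A / denom
    P_FF = (C * (t_S**2 + eps**2) + D) / denom
    return I_bl, P_FF

t_S = 0.1
I_open, P_open = blockade_current(t_S, eps=0.05)  # eps ~ t_S: blockade lifted
I_blk, P_blk = blockade_current(t_S, eps=10.0)    # eps dominant: Pauli blockade
print(I_open, I_blk, P_blk)
```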
For example, if interactions are strong enough to suppress $\epsilon$ to the order of the spin-orbit coupling ($\Delta_{SO}\geq\epsilon$) while $\epsilon$ remains much larger than the interdot tunneling ($\epsilon\gg t$), then a current blockade can occur in the bias direction opposite to that of the zero-field case. This situation, depicted in Fig.~\ref{Blockade}(b), is achieved by changing both the magnetic field and the detuning. A magnetic field $B^*\approx \epsilon/[2(\mu_{orb}+\mu_{spin})]$ is applied so that the states $\ket{FF}$ and $\ket{M}$ are degenerate, while in the $|1L,1R\rangle$ charge configuration the state $\ket{\tilde{FF}}$ is the ground state, separated from the state $\ket{\tilde{M}}$ by the energy $\epsilon$ (because of the applied field $B^*$). The physical situation of Fig.~\ref{Blockade}(b) resembles that of Fig.~\ref{Blockade}(a), and also results in a current blockade. This ability to control the direction of the current rectification by varying the magnetic field and detuning can have applications in carbon nanotube based spintronic proposals. \subsection{1D-Quantum dot arrays} \label{Chain} The rich physics associated with the strong correlations of doubly occupied dots is expected to have a significant impact on the many-body behavior of a one-dimensional chain of dots. The double dot analysis carried out here can be used as a starting point to study the behavior of a linear array of coupled dots. According to our analysis, there exists an interaction regime in which the behavior of two electrons in the same dot is strongly correlated and controlled by $\epsilon$, while the behavior of electrons in different dots is weakly correlated and can be accurately described considering only the lowest orbital of each dot. In order to generate an effective Hamiltonian that captures the essential effects of strong onsite correlations, we apply the ideas of Hubbard operators to describe doubly occupied dots, or doublons (see, e.g.,
Ref.~\onlinecite{ovchinnikov2004hubbard}). This description assumes that the strong onsite Coulomb repulsion suppresses the probability of occupying a dot with more than two electrons. This is a good approximation for low enough fillings. Formally, we start from the complete Hamiltonian that describes a chain of dots. This Hamiltonian represents an extension of the double dot Hamiltonian introduced in the Appendix. We introduce the operators $a^\dagger_{r,n,\alpha}$, which create an electron in dot $r$, in orbital state $n$, and with spin-valley configuration $\alpha$. To simplify the notation, we introduce $a^\dagger_{r,\alpha}\equiv a^\dagger_{r,1,\alpha}$ for electrons in the lowest band, which describe well the singly occupied dots. The state that describes two electrons in dot $r$ can be expanded in the single particle basis as \begin{eqnarray} |d_{\eta,r, \alpha,\alpha'}\rangle&=&f_{\alpha,\alpha'}^{-1}\sum_{n\le m}\beta_{\eta,n,m}(a^\dagger_{r,n,\alpha}a^\dagger_{r,m,\alpha'}\notag\\ & &+(-1)^\eta a^\dagger_{r,n,\alpha'}a^\dagger_{r,m,\alpha})|0\rangle \label{ddef} \end{eqnarray} Here, $\eta$ labels the spin-valley symmetry of the two-electron state, with $\eta=1$ for antisymmetric spin-valley states and $\eta=2$ for symmetric spin-valley states, and $f_{\alpha,\alpha'}=\sqrt{1+\delta_{\alpha,\alpha'}}$. Equation~\eqref{ddef} defines the doublon operator such that $|d_{\eta,r,\alpha,\alpha'}\rangle=d^\dagger_{\eta,r,\alpha,\alpha'}|0\rangle$ and $d^\dagger_{\eta,r,\alpha,\alpha'}=(-1)^\eta d^\dagger_{\eta,r,\alpha',\alpha}$. Note that the $\beta_{\eta,n,m}$ do not depend on $\alpha$ and $\alpha'$. The coefficients $\beta_{\eta,n,m}$ can be obtained by diagonalizing the local part of the Hamiltonian, which amounts to solving the problem of two electrons in a single dot. We now assume that the occupation of higher excited two-particle states, which are separated by an energy gap of the order of the single particle level spacing, can be neglected.
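The symmetry relation $d^\dagger_{\eta,r,\alpha,\alpha'}=(-1)^\eta d^\dagger_{\eta,r,\alpha',\alpha}$ of Eq.~\eqref{ddef} can be verified with a toy normal-ordering calculation. In this sketch the $\beta_{\eta,n,m}$ values and the spin-valley labels are arbitrary, and the site index $r$ is dropped:

```python
def normal_order(terms):
    """Bring {((n1,a1),(n2,a2)): amp} into canonical order with fermionic signs."""
    out = {}
    for (p, q), amp in terms.items():
        if p == q:
            continue                  # Pauli exclusion
        if p > q:
            p, q, amp = q, p, -amp    # one transposition -> sign flip
        out[(p, q)] = out.get((p, q), 0.0) + amp
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def doublon(eta, alpha, alpha2, beta):
    """State of Eq. (ddef), up to the 1/f normalization; beta indexed by n <= m."""
    terms = {}
    for (n, m), b in beta.items():
        for key, s in ((((n, alpha), (m, alpha2)), 1.0),
                       (((n, alpha2), (m, alpha)), (-1.0) ** eta)):
            terms[key] = terms.get(key, 0.0) + s * b
    return normal_order(terms)

beta = {(1, 1): 0.8, (1, 2): 0.5, (2, 2): 0.3}  # arbitrary expansion coefficients
a, b = 'Kdown', 'Kprime_up'                     # two spin-valley labels
print(doublon(1, a, b, beta))
```

As a byproduct, the same-orbital ($n=m$) terms cancel identically for $\eta=2$, consistent with a symmetric orbital pair requiring an antisymmetric spin-valley part.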
This should be a good approximation for the low energy spectrum. The effective Hamiltonian is obtained by projecting the complete Hamiltonian onto the subspace of empty, singly and doubly occupied dots, and can be written as \begin{equation} H_{eff}=P(H_e+H_d+H_{ed}+V_c)P, \label{Hchain} \end{equation} where $P\equiv \prod_i P_i$ represents the projector onto the physically allowed subspace, with $P_i=|0\rangle_i\langle0|+\sum_\alpha|\alpha\rangle_i\langle\alpha|+\sum_\eta|d_\eta\rangle_i\langle d_\eta|$ (see Ref.~\onlinecite{duan2005effective}). $H_e$ describes the behavior of singly occupied dots, $H_d$ describes the behavior of doubly occupied dots, $H_{ed}$ contains a coupling between singly and doubly occupied dots, and $V_c$ represents the Coulomb interaction between different sites. The explicit form of these contributions is: \begin{eqnarray} H_{e}&=&\sum_{r,\alpha } E_{r,\alpha}a_{ r \alpha }^{\dagger}a^{}_{r\alpha}+\sum_{\langle r,r'\rangle,\alpha} t_{r,r'} a_{ r \alpha }^{\dagger}a^{}_{r'\alpha}\\ H_{d}&=&\sum_{r,\eta, \alpha\leq\alpha'}E^{2b,\eta}_{r,\alpha,\alpha'}d_{\eta, r \alpha\alpha' }^{\dagger}d^{}_{\eta,r\alpha\alpha'}\nonumber\\ &&+\sum_{\langle r,r'\rangle,\eta, \alpha\leq\alpha'} t^{(d)}_{\eta,r,r'} d_{\eta, r \alpha\alpha' }^{\dagger}d^{}_{\eta, r'\alpha\alpha'}\\ H_{ed}&=& \sum_{\begin{subarray}{l}\langle r,r'\rangle,\eta,\eta',\\ \alpha_1,\alpha_2,\alpha_3\end{subarray}} t^{\eta,\eta'}_{r,r'} f_{\alpha_1\alpha_3}f_{\alpha_2\alpha_3} d_{\eta r \alpha_1 \alpha_3}^\dag a_{r' \alpha_2}^\dag a_{r\alpha_1}d_{\eta'r'\alpha_2\alpha_3}\notag\\ &&+\sum_{\langle r,r'\rangle\eta,\alpha,\alpha'} f_{\alpha\alpha'} g_{\eta,r,r'} [d^\dagger_{\eta,r,\alpha\alpha'}a_{r,\alpha} a_{r',\alpha'}+\mbox{H.c.}]\nonumber\\ V_c&=&U_c\sum_{r\ne r'} \frac{n_r n_{r'}}{r-r'} \end{eqnarray} The diagonal parts of $H_{e}$ and $H_{d}$ represent the electron and doublon energies, composed of the spin-valley and onsite energies ($E_{r,\alpha}=E^c_1(V_r)+E_\alpha$ and
$E^{2b,\eta}_{r,\alpha,\alpha'}=2E^c_1(V_r)+E_\alpha+E_{\alpha'}+V+(-1)^\eta V_{ex}$), and $t$ and $t^{(d)}$ are the tunneling amplitudes of the electrons and doublons. $H_{ed}$ contains a term that describes the hopping of an electron from a doubly occupied site to a singly occupied site, which in this language corresponds to an exchange of a doublon and an electron. The second term in $H_{ed}$ represents the hopping of an electron from a singly occupied site to another singly occupied site and vice versa, which is represented as destroying two electrons and creating a doublon. Finally, we have the offsite Coulomb interaction $V_c$ in terms of the dot density $n_r$ defined as \begin{equation} n_r=\sum_\alpha a_{ r \alpha }^{\dagger}a^{}_{r\alpha}+2\sum_{\eta,\{\alpha,\alpha'\}}d_{\eta, r \alpha\alpha' }^{\dagger}d^{}_{\eta,r \alpha\alpha'}. \end{equation} Here, the summation $\{\alpha,\alpha'\}$ is restricted to $\alpha<\alpha'$ for $\eta=1$ and $\alpha\le\alpha'$ for $\eta=2$. For the offsite interaction, we assume exclusively capacitive coupling and neglect a dependence on the symmetry of the two-particle states. Explicit expressions for the parameters $g_{\eta,r,r'}$ and $t^{\eta,\eta'}_{r,r'}$ can be obtained by comparing matrix elements of the exact and effective Hamiltonians. If the doublon solution is expressed as in Eq.~\ref{ddef}, the $g_{\eta,r,r'}$ and $t^{\eta,\eta'}_{r,r'}$ can be expanded in terms of the doublon expansion coefficients $\beta_{\eta,n,m}$ and the many-body Hamiltonian matrix elements (see e.g. Ref.~\onlinecite{duan2005effective}). Alternatively, these parameters can be obtained from a comparison between exact and effective Hamiltonian solutions for two- and three-electron systems. These few-electron calculations might be challenging but allow the determination of the parameters needed for many-body calculations. The extraction of these coupling parameters is beyond the scope of the current article. 
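As a toy illustration of how the coefficients $\beta_{\eta,n,m}$ arise, one can diagonalize the two-electron problem in a single dot keeping only two orbitals. The two spin-valley sectors decouple: $\eta=1$ (antisymmetric spin-valley part, symmetric orbital pairs $n\le m$) and $\eta=2$ (symmetric spin-valley part, antisymmetric orbital pair $n<m$). All matrix elements below (orbital energies, direct integral, pair-transfer element, exchange integral) are illustrative numbers, not values computed from nanotube wave functions:

```python
import numpy as np

# Toy single-dot parameters (illustrative, in units of the level spacing):
E = [0.0, 1.0]           # orbital energies of levels n = 1, 2
u, v, j = 0.4, 0.1, 0.2  # direct integral, pair-transfer element, exchange integral

# eta = 1: antisymmetric spin-valley part -> symmetric orbital pairs (1,1), (1,2), (2,2)
H1 = np.array([[2*E[0] + u, v,                   v         ],
               [v,          E[0] + E[1] + u + j, v         ],
               [v,          v,                   2*E[1] + u]])

# eta = 2: symmetric spin-valley part -> antisymmetric orbital pair (1,2) only
H2 = np.array([[E[0] + E[1] + u - j]])

w1, vec1 = np.linalg.eigh(H1)
w2, vec2 = np.linalg.eigh(H2)
beta1 = vec1[:, 0]  # beta_{1,n,m} of the lowest eta = 1 doublon over (1,1), (1,2), (2,2)
beta2 = vec2[:, 0]
print(w1[0], w2[0])
```

With these parameters the lowest doublon lies in the $\eta=1$ sector, consistent with the multiplet structure $E^{2b,\eta}\propto V+(-1)^\eta V_{ex}$ quoted above.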
In the most general case the parameters that describe the effective Hamiltonian depend on the lattice-site positions and can be controlled by changing the detuning in each lattice site $\Delta_{r}$. For example, an enhancement of the superexchange interactions can be achieved by detuning some of the dots and, therefore, reducing the energy cost of double occupancies. The effective Hamiltonian can be applied to describe an array of coupled dots in many different regimes and its phase diagram might exhibit novel phases. In particular, the existence of spin-valley ``triplet'' states close in energy to the spin-valley ``singlet'' can lead to richer phenomena than in the standard single-band Hubbard model \cite{Lieb68}. For example, for one particle per site (filling $n=1$), it is known that the ground state of the usual Hubbard model has infinite susceptibility to spin dimerization\cite{Wilkens01}, i.e. formation of singlet/triplet bonds between nearest-neighbor sites. However, the ground state is not dimerized since the formation of singlet bonds, say at sites $(2i, 2i+1)$, is penalized by the large energy cost of the remaining triplet components between sites $(2i+1, 2i+2)$. In carbon nanotubes operating in the strongly interacting regime, $\epsilon \ll 1$, even spin-valley ``triplet'' states can lower their energy by virtual hopping. The latter situation reduces the energy cost of triplet formation, and the infinite susceptibility to spin dimerization might in this case translate into actual dimerization of the ground state. Away from $n=1$, the presence of a spin-valley ``triplet'' can also significantly affect the magnetic structure of the system. In particular, for systems which already exhibit ferromagnetism in the standard single-band Hubbard model, the inclusion of the spin-valley ``triplet'' can strengthen and extend the ferromagnetic phase. 
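The detuning-induced enhancement of superexchange mentioned above can be made concrete on the smallest possible example, a two-site, two-electron Hubbard dimer (spin only, no valley degree of freedom); the matrix is written in the $S_z=0$ basis $\{|\!\uparrow,\downarrow\rangle,|\!\downarrow,\uparrow\rangle,|\!\uparrow\downarrow,0\rangle,|0,\uparrow\downarrow\rangle\}$ and all parameter values are illustrative. A minimal sketch:

```python
import numpy as np

def singlet_triplet_gap(t, U, delta):
    """Exact singlet-triplet gap of a half-filled two-site Hubbard dimer.

    `delta` detunes the two doubly occupied configurations to U+delta and U-delta,
    mimicking a site-dependent detuning that lowers one double-occupancy cost.
    """
    H = np.array([[0.0, 0.0, -t,        -t       ],
                  [0.0, 0.0,  t,         t       ],
                  [-t,  t,    U + delta, 0.0     ],
                  [-t,  t,    0.0,       U - delta]])
    evals = np.linalg.eigvalsh(H)
    # The triplet component (|up,dn> + |dn,up>)/sqrt(2) decouples and stays at E = 0,
    # so the singlet-triplet gap equals minus the (negative) ground-state energy.
    return -evals[0]

t, U = 0.5, 10.0
J0 = singlet_triplet_gap(t, U, 0.0)  # ~ 4 t^2 / U for U >> t
Jd = singlet_triplet_gap(t, U, 6.0)  # detuning reduces one double-occupancy cost
print(J0, Jd)
```

For $U\gg t$ the undetuned gap approaches the perturbative superexchange $4t^2/U$, while the detuned gap approaches $2t^2/(U+\Delta)+2t^2/(U-\Delta)>4t^2/U$, illustrating the enhancement by detuning.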
For example, generic two- and three-dimensional square lattice geometries, and others with similar connectivity conditions, exhibit Nagaoka ferromagnetism\cite{Nagaoka1966} when the gain of kinetic energy of a single hole exceeds the decrease of superexchange energy. The latter condition is fulfilled at very large on-site interactions. In quantum dots in carbon nanotubes with small $\epsilon$, Nagaoka-type ferromagnetism might become stable at reduced values of the interaction, since the virtual hopping of the spin-valley triplets reduces the super-exchange penalty of having a polarized state. For arrays with filling $1<n<2$ and zero detuning, the low energy physics consists of doubly and singly occupied dots, while the occurrence of empty sites is strongly suppressed since it implies an increase of double occupancies. This implies that the numbers of electrons and doublons will be independently conserved and that the second term of $H_{ed}$ can be neglected. Also, the tunneling of the electrons and the doublons can be neglected. Thus, the only relevant terms in the effective Hamiltonian are the onsite energies, the long-range Coulomb interaction and the electron-doublon exchange (first term in $H_{ed}$). This regime can lead to interesting phenomena when the electron-doublon exchange becomes comparable to the multiplet splitting $\epsilon$. Finally, it should be pointed out that the long-range Coulomb interaction can have a dominant influence on the charge distribution in small arrays of quantum dots. \section{Summary and conclusions} We have presented a detailed study of the few-electron eigenspectrum of a double quantum dot in a semiconducting carbon nanotube. We showed how the spin-valley physics leads to the formation of multiplets. The internal energy structure of the multiplet is practically unaffected by a change in either the confinement potential or the interaction strength, but the energy gap between different multiplets is strongly modified by both. 
We showed that for sufficiently strong interactions, the spin-orbit coupling can exceed the energy splitting between states with symmetric and antisymmetric orbital parts for any detuning between the dots. This situation modifies the two-particle phase diagram. Above a critical interaction strength, the ground state at small, finite magnetic fields is always ferromagnetic, independent of the detuning. Furthermore, in this strongly interacting regime, the blockade of linear transport gradually disappears, since the reduction of the multiplets' energy splitting $\epsilon$ allows a finite tunneling probability from the $|1L,1R\rangle$ state to the $|2L,0R\rangle$ state even for a symmetric spin-valley part. Pauli-blockade physics will occur for weak enough Coulomb correlations, which can be suppressed either by working with short dots or by covering the nanotube with strong dielectrics\cite{wunsch09}. We note that a well-developed Pauli blockade is the precondition for the realization of spin qubits in double dots, allowing for coherent singlet-triplet manipulation \cite{Hanson2007,Churchill09b}. Our understanding of the double dot physics can be used as a starting point to analyze the behavior of an array of coupled dots. The effective Hamiltonian, as presented in Eq.~\ref{Hchain}, can be applied to describe an array of coupled dots in certain regimes. In the future, we would like to explore the many-body physics associated with the strong onsite correlations and its consequences for the magnetic ordering of the system. In the analysis presented here, we neglected terms that flip either the spin or valley degree of freedom, or both. However, such mechanisms could change the various level crossings into avoided crossings and thus open new ways to control the spin-valley degree of freedom as well as the transport properties of this system\cite{murgida2007coherent,murgida2009coherent,Burkard09,Burkard10}. \section{Acknowledgments} We thank F. Kuemmeth, H. Churchill, and H. 
Bluhm for illuminating discussions. B. W. is funded by the German Science Foundation under grant WU 609/1-1. J. v. S. and A. M. R. are supported by NSF-PIF grant, by an ARO grant with funding from the DARPA OLE program and by NIST.
\section{Introduction} Let $q$ be a prime power and let $m$ be a positive integer. A $q$-\emph{polynomial}, or \emph{linearized polynomial}, over $\F_{q^m}$ is a polynomial of the form \[f(x)=\sum_{i=0}^t a_i x^{q^i},\] where $a_i\in \F_{q^m}$ and $t$ is a positive integer. If $a_t \neq 0$, we say that $t=\deg_q f(x)$ is the $q$-\emph{degree} of $f$. We denote by $\mathcal{L}_{m,q}$ the set of all $q$-polynomials over $\F_{q^m}$ and by $\tilde{\mathcal{L}}_{m,q}$ the quotient $\mathcal{L}_{m,q}/(x^{q^m}-x)$. The $\F_q$-linear maps of $\F_{q^m}$ can be identified with the polynomials in $\tilde{\mathcal{L}}_{m,q}$. This shows the relevance of linearized polynomials in the theory of finite fields and their algebraic and geometric applications. A fundamental problem in the theory of linearized polynomials is to characterize precisely the dimension of the kernel of a given polynomial in terms of its coefficients. Results in this direction are given in \cite{qres,teoremone,GQ2009,McGuireSheekey,PZ2019,wl,Zanella}. Let $n,s$ be positive integers such that $s<2n$ and $\gcd(s,n)=1$. First in \cite{CMPZ}, and later in \cite{PZ2019}, the following polynomials were investigated: \begin{equation}\label{eq:form} f_{a,b,s}(x)=x+ax^{q^s}+bx^{q^{s+n}} \in \tilde{\cL}_{2n,q}. \end{equation} The following results are known from \cite{CMPZ} and \cite{PZ2019}: \begin{itemize} \item if $\N_{q^{2n}/q^n}(a)=\N_{q^{2n}/q^n}(b)$, then $\dim_{\F_q} \ker f_{a,b,s}(x)\leq 1$; \item if $\N_{q^{2n}/q^n}(a)\neq \N_{q^{2n}/q^n}(b)$, then $\dim_{\F_q} \ker f_{a,b,s}(x)\leq 2$; \end{itemize} where $\N_{q^{2n}/q^n}(x)=x^{1+q^n}$. Our main result is Theorem \ref{th:mainmain} and concerns the existence, for every $\delta\in\F_{q^{2n}}$ with $\mathrm{N}_{q^{2n}/q^n}(\delta)\notin\{0,1\}$, of an element $a\in\F_{q^{2n}}$ such that the kernel of $f_{a,\delta a,s}$ has dimension $2$, provided that $n$ is large enough. 
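The two kernel bounds recalled above can be checked exhaustively in a small case. The sketch below takes $q=2$, $n=3$, $s=1$, represents $\F_{64}=\F_2[x]/(x^6+x+1)$ by integers, and verifies, for every pair $(a,b)$, that $\dim_{\F_2}\ker f_{a,b,1}\le 1$ when $\N_{q^{2n}/q^n}(a)=\N_{q^{2n}/q^n}(b)$ and $\le 2$ otherwise (the choice of field size and modulus is ours, made only for this illustration):

```python
# Brute-force check of the kernel bounds for f_{a,b,s}(x) = x + a x^{q^s} + b x^{q^{s+n}}
# in the small case q = 2, n = 3, s = 1, i.e. over F_64.

MOD = 0b1000011  # x^6 + x + 1, irreducible over F_2

def mul(x, y):
    """Multiplication in F_64 = F_2[x]/(x^6 + x + 1)."""
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x & 0x40:
            x ^= MOD
    return r

def power(x, e):
    r = 1
    while e:
        if e & 1:
            r = mul(r, x)
        x = mul(x, x)
        e >>= 1
    return r

frob1 = [power(x, 2) for x in range(64)]   # x -> x^{q^s}     = x^2
frob4 = [power(x, 16) for x in range(64)]  # x -> x^{q^{s+n}} = x^16
norm = [power(x, 9) for x in range(64)]    # N_{q^6/q^3}(x) = x^{1+q^3} = x^9

max_dim = 0
for a in range(64):
    for b in range(64):
        ker = sum(1 for x in range(64)
                  if x ^ mul(a, frob1[x]) ^ mul(b, frob4[x]) == 0)
        assert ker & (ker - 1) == 0          # kernel is an F_2-subspace
        dim = ker.bit_length() - 1
        assert dim <= (1 if norm[a] == norm[b] else 2)
        max_dim = max(max_dim, dim)
print(max_dim)
```

Already in this small field the bound $2$ is attained (e.g. for $a=0$ and suitable $b$, the nonzero kernel elements are the solutions of $x^{15}=b^{-1}$), while every equal-norm pair indeed gives dimension at most $1$.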
\begin{theorem}\label{th:mainmain} Let $q$ be a prime power and $n,s$ be two relatively prime positive integers. Suppose that \[ n\geq\begin{cases} 4s+2 & \textrm{if}\; q=3\textrm{ and }s>1,\,\textrm{or}\;q=2\textrm{ and }s>2; \\ 4s+1 & \textrm{otherwise}. \end{cases} \] For every $\delta \in \F_{q^{2n}}^*$ with $\N_{q^{2n}/q^n}(\delta)\neq 1$ there exists $a \in \F_{q^{2n}}^*$ such that \[ \dim_{\F_q} \ker (f_{a,b,s}(x))=2, \] where $b=\delta a$. \end{theorem} In Remark \ref{rem:adjoint} we show that we can always suppose $n>2s$, up to considering the adjoint polynomial. The first step in the proof of Theorem \ref{th:mainmain} is to manipulate the shape of $f_{a,b,s}(x)$ in order to translate the condition on the dimension of the kernel into the existence of $\mathbb{F}_{q^n}$-rational points in the intersection of certain $\F_{q^n}$-rational hypersurfaces, which are described in Theorem \ref{th:main}. Then we prove that this intersection is described by means of an $\F_{q^n}$-rational curve $\mathcal{X}$. Using intersection theory and function field theory, the curve $\mathcal{X}$ is shown to be absolutely irreducible of genus $q^{2s}-q^s-1$; Theorem \ref{th:mainmain} then follows from the Hasse-Weil bound. Theorem \ref{th:mainmain} also has applications in the theory of scattered polynomials. A polynomial $f(x)\in\tilde{\cL}_{m,q}$ is said to be \emph{scattered} if \[ \dim_{\F_q}\ker(f(x)-\lambda x)\leq 1, \quad\textrm{for all}\;\; \lambda\in\F_{q^m}. \] Scattered polynomials have been widely investigated, especially after the paper \cite{Sheekey2016}, where Sheekey builds a bridge between scattered polynomials and rank metric codes. The family of linearized binomials $f_{\delta,s}(x)=x^{q^s}+\delta x^{q^{n+s}}\in\tilde{\cL}_{2n,q}$ with $\delta\ne0$ contains a large number of scattered polynomials when $n$ is $3$ or $4$, as proved in \cite{CMPZ} and \cite{PZ2019}. 
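For small parameters, scatteredness of the binomial can also be tested by brute force. The sketch below takes $q=2$, $n=3$, $s=1$ (so $f_{\delta,1}(x)=x^{2}+\delta x^{16}$ over $\F_{64}$, represented modulo $x^6+x+1$, a choice made only for this illustration) and checks the necessary condition $\N_{q^{2n}/q^n}(\delta)\ne1$: when $\N(\delta)=1$, the kernel of $f_{\delta,s}$ itself is already $n$-dimensional, so such a $\delta$ can never give a scattered polynomial:

```python
# Brute-force scatteredness test for f_delta(x) = x^{q^s} + delta x^{q^{n+s}}
# with q = 2, n = 3, s = 1, over F_64 = F_2[x]/(x^6 + x + 1).

MOD = 0b1000011  # x^6 + x + 1

def mul(x, y):
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x & 0x40:
            x ^= MOD
    return r

def power(x, e):
    r = 1
    while e:
        if e & 1:
            r = mul(r, x)
        x = mul(x, x)
        e >>= 1
    return r

frob1 = [power(x, 2) for x in range(64)]   # x -> x^{q^s}     = x^2
frob4 = [power(x, 16) for x in range(64)]  # x -> x^{q^{n+s}} = x^16
norm = [power(x, 9) for x in range(64)]    # N_{q^6/q^3}(x) = x^9

def is_scattered(d):
    # f_delta(x) - lambda x must have kernel of F_2-dimension <= 1 for every lambda
    for lam in range(64):
        ker = sum(1 for x in range(64)
                  if frob1[x] ^ mul(d, frob4[x]) ^ mul(lam, x) == 0)
        if ker > 2:  # kernel size > q means dimension >= 2
            return False
    return True

scattered = [d for d in range(1, 64) if is_scattered(d)]
print(len(scattered))
```

The assertions below only check the necessary norm condition; which $\delta$ of non-unit norm actually give scattered binomials is exactly what the results cited above describe.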
The question arises whether there exist other values of $n$, possibly infinitely many, for which $f_{\delta,s}(x)$ is scattered. Many authors have considered the problem of classifying \emph{exceptional} scattered polynomials $f(x)\in\tilde{\cL}_{m,q}$, i.e. scattered polynomials which remain scattered over infinitely many extensions $\F_{q^{\ell m}}$ of $\F_{q^m}$; partial classification results have been provided by Bartoli and Zhou \cite{BZ}, Bartoli and Montanucci \cite{BM}, and Ferraguti and Micheli \cite{FM}. Their results rely on the fact that the order of $\F_{q^{\ell m}}$ is much larger than the degree of $f(x)$; as a matter of fact, the key role in \cite{BZ,BM} is played by the application of the Hasse-Weil bound to a curve whose degree has the same order of magnitude as $\deg f(x)$, and hence is small with respect to $q^{\ell m}$ (see \cite[Lemma 2.1]{BZ}). The aforementioned binomial $f_{\delta,s}(x)$ is not taken into account by their results, because $\deg f_{\delta,s}(x)=q^{n+s}$ is large with respect to the order of $\F_{q^{2n}}$. As a byproduct of Theorem \ref{th:mainmain}, we prove in Theorem \ref{th:noscatt} that $f_{\delta,s}(x)$ is not scattered when $n$ is large enough with respect to $s$; for instance, when $s=1$ it is enough to choose $n\geq5$. Finally, in Theorem \ref{th:applMRD} we use Theorem \ref{th:mainmain} to give an asymptotic classification of the family of rank-metric codes defined by the binomials $f_{\delta,s}(x)$. The paper is organized as follows. Section \ref{sec:preliminaries} contains preliminary results about algebraic curves and function fields which are used in Section \ref{sec:proof}. Section \ref{sec:proof} is devoted to the proof of Theorem \ref{th:mainmain}; the cases $q$ odd and $q$ even are studied separately, in Section \ref{sec:qodd} and Section \ref{sec:qeven} respectively. 
Section \ref{sec:appl} provides the applications of Theorem \ref{th:mainmain}; namely, Section \ref{sec:linearsets} shows the applications to scattered polynomials and linear sets, while Section \ref{sec:MRD} shows the applications to rank metric codes. \section{Preliminaries on algebraic curves}\label{sec:preliminaries} Let $\cC$ be a projective, absolutely irreducible, algebraic curve over the algebraically closed field $\mathbb{K}=\overline{\mathbb{F}}_q$, embedded in a projective space $\PG(r,\mathbb{K})$ with homogeneous coordinates $(X_1\colon\ldots\colon X_{r+1})$ and not contained in the hyperplane at infinity $H_{\infty}:X_{r+1}=0$. Let $I(\mathcal{C})$ be the ideal of $\mathcal{C}$. Denote by $\mathbb{K}(\mathcal{C})$ the field of ($\mathbb{K}$-)rational functions on $\mathcal{C}$, briefly the function field of $\mathcal{C}$. Clearly, $\mathbb{K}(\mathcal{C})$ is generated over $\mathbb{K}$ by the coordinate functions $x_1,\ldots,x_r$ with $x_i=\frac{X_i+I(\C)}{X_{r+1}+I(\C)}$, and $\mathbb{K}(\C)\colon\mathbb{K}$ is a field extension of transcendence degree $1$. We denote by $\mathbb{P}(\mathcal{C})$ the set of places of $\mathcal{C}$, that is, the set of places of its function field $\mathbb{K}(\mathcal{C})$. For every $P\in\mathbb{P}(\mathcal{C})$ and every nonzero $z\in\mathbb{K}(\C)$, we denote by $v_P(z)\in\mathbb{Z}$ the valuation of $z$ at $P$; $P$ is said to be a zero (resp. a pole) of $z$ if $v_P(z)>0$ (resp. $v_P(z)<0$). Suppose that $\C$ is defined over $\mathbb{F}_q$, i.e. $I(\C)$ is generated by polynomials over $\mathbb{F}_q$. Then $\mathbb{F}_q(\C)$ denotes the $\mathbb{F}_q$-rational function field of $\C$, i.e. the field of $\mathbb{F}_q$-rational functions on $\mathcal{C}$. 
The $\mathbb{F}_q$-rational places of $\C$ are those places $P\in\mathbb{P}(\C)$ which are defined over $\mathbb{F}_q$; that is, $\mathbb{F}_q$-rational places of $\C$ are the places of degree $1$ in $\mathbb{F}_q(\C)$, which are exactly the restriction to $\mathbb{F}_q(\C)$ of the places of $\mathbb{K}(\C)$ in the constant field extension $\mathbb{K}(\C)\colon\mathbb{F}_q(\C)$. The center of an $\mathbb{F}_q$-rational place is an $\mathbb{F}_q$-rational point of $\cC$; conversely, if $P$ is a simple $\mathbb{F}_q$-rational point of $\cC$, then the only place centered at $P$ is $\mathbb{F}_q$-rational, and may be identified with $P$. Let $\varphi:\C^\prime\to\C$ be a covering of curves, i.e. a non-constant rational map from the curve $\C^\prime$ to the curve $\C$, of degree $\deg(\varphi)=[\mathbb{K}(\C^\prime)\colon\mathbb{K}(\C)]$. We denote by $\varphi$ also the induced map $\mathbb{P}(\C^\prime)\to\mathbb{P}(\C)$; if $\varphi$ is $\mathbb{F}_q$-rational, then $\varphi$ maps $\mathbb{F}_q$-rational places of $\C^\prime$ to $\mathbb{F}_q$-rational places of $\C$. The pull-back of $\varphi$ is denoted by $\varphi^*:\mathbb{K}(\C)\to\mathbb{K}(\C^\prime)$. When $P\in\mathbb{P}(\C)$ and $P^\prime\in\mathbb{P}(\C^\prime)$ satisfy $\varphi(P^\prime)=P$, we write $P^\prime|P$ and say that $P^\prime$ lies over $P$ in $\varphi$. We denote by $e(P^\prime|P)$ the ramification index of $P^\prime|P$, that is the unique positive integer such that $v_{P^\prime}(\varphi^*(w))=e(P^\prime|P)\cdot v_P(w)$ for all $w\in\mathbb{K}(\C)$; we have $\sum_{P^\prime:P^\prime|P}e(P^\prime|P)=\deg(\varphi)$. We say that $P^\prime$ is ramified over $P$ if $e(P^\prime|P)>1$, and totally ramified if $e(P^\prime|P)=\deg(\varphi)$; otherwise it is unramified. A ramified place $P^\prime$ is wildly ramified (resp. tamely ramified) if $e(P^\prime|P)$ is divisible (resp. not divisible) by $p$. We refer to \cite{HKT,Sti} for further details on algebraic curves and function fields. 
\begin{theorem}\label{th:hurwitz}{\rm (Hurwitz genus formula, \cite[Theorem 3.4.13]{Sti})} Let $\mathcal{C},\mathcal{C}^\prime$ be two absolutely irreducible curves over $\mathbb{K}=\overline{\mathbb{F}}_q$ and $\varphi:\mathcal{C}^\prime\to\mathcal{C}$ be a covering. For every place $P$ of $\mathcal{C}$ and every place $P^\prime$ of $\mathcal{C}^\prime$ lying over $P$ in $\varphi$, let $t\in\mathbb{K}(\mathcal{C})$ be a local parameter at $P$, $t^\prime\in\mathbb{K}(\mathcal{C}^\prime)$ be a local parameter at $P^\prime$, and $\varphi^*(t)\in\mathbb{K}(\mathcal{C}^\prime)$ be the pull-back of $t$ with respect to $\varphi$. Then \[ 2g(\mathcal{C}^\prime)-2=\deg(\varphi)\cdot(2g(\mathcal{C})-2)+\sum_{P^\prime\in\mathbb{P}(\mathcal{C}^\prime)}v_{P^\prime}\left(\frac{d\varphi^*(t)}{d t^\prime}\right). \] \end{theorem} If $P^\prime$ is not wildly ramified, then $v_{P^\prime}\left(\frac{d\varphi^*(t)}{d t^\prime}\right)=e(P^\prime|P)-1$. We now recall two important types of coverings. The following results are the application of \cite[Corollary 3.7.4]{Sti} and \cite[Theorem 3.7.10]{Sti} in the case of an algebraically closed constant field $\mathbb{K}$. \begin{theorem}\label{th:kummer}{\rm \cite[Corollary 3.7.4]{Sti}} Let $\C\colon F(X,Y)=0$ be an absolutely irreducible plane curve defined over a finite field $\mathbb{F}_q$ of characteristic $p$, and $m$ be a positive integer with $\gcd(m,p)=1$. Let $f(X,Y)\in\mathbb{F}_q[X,Y]$ be such that there exists an $\overline{\mathbb F}_q$-rational place $Q$ of $\mathcal{C}$ at which the valuation of the rational function $f(x,y)$ is coprime with $m$, i.e. $\gcd(v_Q(f(x,y)),m)=1$. Let $\mathcal{C}^\prime$ be the curve given by the two affine equations $F(X,Y)=0$ and $Z^m=f(X,Y)$. Then the following holds. \begin{itemize} \item $\mathcal{C}^\prime$ is absolutely irreducible and defined over $\mathbb{F}_q$; $\C'$ is called a \emph{Kummer cover} of $\mathcal{C}$. 
\item The $\mathbb{F}_q$-rational covering $\varphi:\C^\prime\to\C$, $(X,Y,Z)\mapsto(X,Y)$, has degree $m$. \item For every place $P$ of $\mathcal{C}$ and every place $P^\prime$ of $\C'$ lying over $P$ in $\varphi$, we have $e(P^\prime| P)=m/r_P$, where $r_P=\gcd(v_P(f(x,y)),m)>0$. \item The Hurwitz genus formula reads \[ g(\C')=1+m(g(\C)-1)+\frac{1}{2}\sum_{P\in\mathbb{P}(\mathcal{C})}(m-r_P). \] \end{itemize} \end{theorem} If $\C^\prime$ is an absolutely irreducible curve over $\mathbb{F}_q$ defined by the two affine equations $F(X,Y)=0$ and $L(Z)=f(X,Y)$, for some $f(X,Y),F(X,Y)\in\mathbb{F}_q[X,Y]$ and some separable $p$-polynomial $L(T)\in\mathbb{F}_q[T]$, then $\mathcal{C}^\prime$ is said to be a \emph{generalized Artin-Schreier cover} of the curve $\mathcal{C}:F(X,Y)=0$, with generalized Artin-Schreier covering $\varphi:\C^\prime\to\C$, $(X,Y,Z)\mapsto(X,Y)$. \begin{theorem}\label{th:artinschreier}{\rm \cite[Theorem 3.7.10]{Sti}} Let $\mathcal{C}:F(X,Y)=0$ be an absolutely irreducible plane curve defined over a finite field $\mathbb{F}_q$ of characteristic $p$. Let $L(T)\in\mathbb{F}_q[T]$ be a separable $p$-polynomial of degree $\bar{q}$ with all its roots in $\mathbb{F}_q$. Let $f(X,Y)\in\mathbb{F}_q[X,Y]$ be such that for every place $P\in\mathbb{P}(\mathcal{C})$ there exists a rational function $\omega$ on $\mathcal{C}$ (depending on $P$) satisfying either $v_P(f(x,y)-L(\omega))\geq0$ or $v_P(f(x,y)-L(\omega))=-m$ with $m>0$ and $p\nmid m$. Define $m_P=-1$ in the former case and $m_P=m$ in the latter case. Let $\mathcal{C}^\prime$ be the space curve given by the two affine equations $F(X,Y)=0$ and $L(Z)=f(X,Y)$. If there exists a place $Q\in\mathbb{P}(\mathcal{C})$ with $m_Q>0$, then $\mathcal{C}^\prime$ is a generalized Artin-Schreier cover of $\C$, defined over $\mathbb{F}_q$. With the above notation, the following holds for generalized Artin-Schreier curves. 
\begin{itemize} \item The $\mathbb{F}_q$-rational covering $\varphi:\C^\prime\to\C$, $(X,Y,Z)\mapsto(X,Y)$, has degree $\bar{q}$. \item For every place $P$ of $\mathcal{C}$ and every place $P^\prime$ of $\mathcal{C}^\prime$ lying over $P$ in $\varphi$, $e(P^\prime|P)$ is equal either to $1$ or to $\bar{q}$ according to $m_P=-1$ or $m_P>0$, respectively. \item The Hurwitz genus formula reads \[ g(\mathcal{C}^\prime)=\bar{q}\cdot g(\mathcal{C})+\frac{\bar{q}-1}{2}\cdot\left(-2+\sum_{P\in\mathbb{P}(\mathcal{C})}(m_P+1)\right). \] \end{itemize} \end{theorem} We now recall the well-known Hasse-Weil bound. \begin{theorem}\label{th:hasseweil}{\rm \cite[Theorem 5.2.3]{Sti} (Hasse-Weil bound)} Let $\mathcal{C}$ be an absolutely irreducible curve defined over $\mathbb{F}_q$ and with genus $g$. Then the number $N_{q}$ of $\mathbb{F}_q$-rational places of $\mathcal{C}$ satisfies \[q+1-2g\sqrt{q}\leq N_{q} \leq q+1+2g\sqrt{q}.\] \end{theorem} \section{Proof of Theorem \ref{th:mainmain}}\label{sec:proof} In this section we prove Theorem \ref{th:mainmain}. First we determine necessary and sufficient conditions on $a$ and $b$ for $f_{a,b,s}(x)$ having kernel of dimension $2$; cf. Theorem \ref{th:main}. Then we investigate such conditions by means of algebraic-geometric tools. The first remark shows that different choices of $a,b$ with the same norm of $b/a$ over $\mathbb{F}_{q^n}$ provide polynomials $f_{a,b,s}(x)$ with the same behaviour. \begin{remark}\label{rk:normdelta} Assume that the linearized polynomial $f_{a,b,s}(x)=x+ax^{q^s}+bx^{q^{s+n}} \in \F_{q^{2n}}[x]$, with $\gcd(s,n)=1$ and $b=\delta a$, has kernel of dimension two. Clearly, for each $\lambda \in \F_{q^{2n}}^*$ we have \[ \dim_{\F_q} \ker(\lambda^{-1}f_{a,b,s}(\lambda x))=2, \] where \[\lambda^{-1}f_{a,b,s}(\lambda x)=x+a \lambda^{q^s-1}x^{q^s}+a\lambda^{q^s-1}\delta \lambda^{q^s(q^n-1)}x^{q^{s+n}}=f_{a',b',s}(x),\] with $a'=a\lambda^{q^s-1}$, $\delta'=\lambda^{q^s(q^n-1)}\delta$ and $b'=a'\delta'$. 
Note that for each element $\delta' \in \F_{q^{2n}}$ with $\N_{q^{2n}/q^n}(\delta')=\N_{q^{2n}/q^n}(\delta)$ there exists $\lambda \in \F_{q^{2n}}$ such that $\delta'=\delta \lambda^{q^s(q^n-1)}$. Therefore, if $\dim_{\F_q} \ker(f_{a,b,s}(x))=2$, with $b=\delta a$, then for each $\delta' \in \F_{q^{2n}}$ with $\N_{q^{2n}/q^n}(\delta')=\N_{q^{2n}/q^n}(\delta)$ there exists $a' \in \F_{q^{2n}}$ such that $\dim_{\F_q} \ker(f_{a',b',s}(x))=2$, with $b'=\delta' a'$. \end{remark} The second remark shows that we may assume $s<n/2$. \begin{remark}\label{rem:adjoint} The \emph{adjoint} of a $q$-polynomial $f(x)=\sum_{i=0}^{n-1}a_i x^{q^i}$, with respect to the bilinear form $\langle x,y\rangle=\mathrm{Tr}_{q^n/q}(xy)$, is given by \[\hat{f}(x)=\sum_{i=0}^{n-1}a_{i}^{q^{n-i}} x^{q^{n-i}}.\] In particular, if $f(x)$ is a $q$-polynomial of shape \eqref{eq:form}, then \[ f_{a,b,s}(x)=x+ax^{q^s}+bx^{q^{n+s}}\in \tilde{\mathcal{L}}_{2n,q}, \] with $\gcd(s,n)=1$ and its adjoint is \[ \hat{f}_{a,b,s}(x)=x+a^{q^{2n-s}}x^{q^{2n-s}}+b^{q^{n-s}}x^{q^{n-s}}. \] Therefore, choosing $s^\prime=2n-s$, $a^\prime=a^{q^{2n-s}}$, $b^\prime=b^{q^{n-s}}$, we get \[ \hat{f}_{a,b,s}(x)=f_{a',b',s'}(x), \] while choosing $s^{\prime\prime}=n-s$, $a^{\prime\prime}=b^{q^{n-s}}$, $b^{\prime\prime}=a^{q^{2n-s}}$, we get \[ \hat{f}_{a,b,s}(x)=f_{a^{\prime\prime},b^{\prime\prime},s^{\prime\prime}}(x), \] i.e. $\hat{f}_{a,b,s}(x)$ is of shape \eqref{eq:form}. Therefore, the family of $q$-polynomials we are studying is closed under the adjoint operation. Furthermore, we underline that by \cite[Lemma 2.6]{BGMP2015}, the kernels of $f_{a,b,s}$ and $\hat{f}_{a,b,s}$ have the same dimension (see also \cite[pages 407--408]{CsMP}). Thus, we can assume $s< n/2$. \end{remark} We now prove that the shape of $\delta$ can be chosen as in \eqref{eq:deltachoice}. \begin{theorem}\label{th:deltachoice} Let $f_{a,b,s}(x) \in \F_{q^{2n}}[x]$, with $b=a\delta$. 
Then $\dim_{\F_q}\ker(f_{a,b,s}(x))=2$ if and only if $\dim_{\F_q}\ker(f_{\overline{a},\overline{b},s}(x))=2$, with \begin{equation}\label{eq:deltachoice} \overline{\delta}=\frac{\xi^{q^{s+n}}-\xi^{q^n}}{\xi^{q^n}-\xi^{q^s}}, \end{equation} for some $\xi \in \F_{q^{2n}}\setminus\F_{q^n}$ and some $\overline{a}\in \F_{q^{2n}}$, $\overline{b}=\overline{\delta}\overline{a}$. \end{theorem} \begin{proof} Assume that $\dim_{\F_q}\ker(f_{a,b,s}(x))=2$, i.e. there exist $x_0 \in \F_{q^{2n}}^*$ and $y_0\in \F_{q^{2n}}\setminus\F_q$ such that $x_0/y_0 \notin \F_q$ and \[ \frac{x_0^{q^s}+\delta x_0^{q^{s+n}}}{x_0}=\frac{y_0^{q^s}+\delta y_0^{q^{s+n}}}{y_0}, \] which may be rewritten as follows \[ \delta(y_0 x_0^{q^{s+n}}-x_0y_0^{q^{s+n}})=x_0y_0^{q^s}-y_0x_0^{q^s}. \] If $y_0 x_0^{q^{s+n}}-x_0y_0^{q^{s+n}}$ would be zero, than $x_0/y_0 \in \F_{q^{2n}}\cap\F_{q^{s+n}}=\F_q$, a contradiction. Hence, \[ \delta=\frac{x_0y_0^{q^s}-y_0x_0^{q^s}}{y_0 x_0^{q^{s+n}}-x_0y_0^{q^{s+n}}}, \] and, since $y_o=\xi x_0$ for some $\xi \in \F_{q^{2n}}\setminus \F_q$, we have \[ \delta= \frac{1}{-x_0^{q^{s+n}-q^s}} \frac{\xi^{q^s}-\xi}{\xi^{q^{s+n}}-\xi}. \] By Remark \ref{rk:normdelta}, $\dim_{\F_q}\ker(f_{a,b,s}(x))=2$ if and only if there exists $\overline{a},\overline{b}$ as in the claim such that $\dim_{\F_q}\ker(f_{\overline{a},\overline{b},s}(x))=2$. If $\xi\in\mathbb{F}_{q^n}$, then $\overline{\delta}=-1$, and hence $\dim_{\F_q}\ker(f_{\overline{a},\overline{b},s}(x))\leq1$. The claim follows. \end{proof} As a consequence of Theorem \ref{th:deltachoice} we get the following result. \begin{corollary} There exist $\delta \in \F_{q^{2n}}^*$ for which $\dim_{\F_q}\ker(f_{a,b,s}(x))\leq 1$, with $b=\delta a$, for each $a\in \F_{q^{2n}}^*$ if and only if \[ \left| \left\{ \N_{q^{2n}/q^n}\left( \frac{\xi^{q^{n+s}}-\xi^{q^n}}{\xi^{q^n}-\xi^{q^s}} \right) \colon \xi \in \F_{q^{2n}}\setminus\F_{q^n} \right\} \right|< q^n-1. 
\] \end{corollary} Since $\xi \notin \F_{q^n}$, we have that $\xi$ is a root of an irreducible polynomial $X^2-SX-T \in \F_{q^n}[X]$, where $\N_{q^{2n}/q^n}(\xi)=-T$ and $\mathrm{Tr}_{q^{2n}/q^n}(\xi)=S$. Also, $\{1,\xi\}$ is an $\F_{q^n}$-basis of $\F_{q^{2n}}$ and so there exist $A,B \in \F_{q^n}$ such that $\xi^{q^s}=A+B\xi$. In what follows we give some relations involving $A,B,S$ and $T$. \begin{proposition} The following holds: \begin{enumerate} \item $S^{q^s}=2A+BS$; \item $-T^{q^s}=A^2+B(AS-BT)$. \end{enumerate} In particular, $\mathrm{Tr}_{q^{2n}/q^n}(\xi^{q^s+1})=2BT+AS+BS^2$ and $\mathrm{Tr}_{q^{2n}/q^n}(\xi^{q^s+q^n})=AS-2BT$. \end{proposition} \begin{proof} As \[ \xi^{q^s+q^n}=(A+B\xi)(S-\xi)=AS-BT-A\xi \] and \[ \xi^{1+q^{n+s}}=-BT+(S^{q^s}-A-BS)\xi, \] we have that \[ \mathrm{Tr}_{q^{2n}/q^n}(\xi^{q^s+q^n})=AS-2BT+(S^{q^s}-2A-BS)\xi. \] Since $\mathrm{Tr}_{q^{2n}/q^n}(\xi^{q^s+q^n})\in \F_{q^n}$, we get the first relation. Also, \[ -T^{q^s}=\N_{q^{2n}/q^n}(\xi^{q^s})=A^2+ABS-B^2T, \] i.e. the second relation. \end{proof} Let $\alpha \in \F_{q^n}^*$ with $\alpha \neq 1$. Then \[ \N_{q^{2n}/q^n}\left( \frac{\xi^{q^{n+s}}-\xi^{q^n}}{\xi^{q^n}-\xi^{q^s}} \right)=\frac{\xi^{q^n+1}+\xi^{q^s+q^{n+s}}-(\xi^{1+q^{n+s}}+\xi^{q^s+q^n})}{\xi^{q^n+1}+\xi^{q^s+q^{n+s}}-(\xi^{q^n+q^{n+s}}+\xi^{q^s+1})}=\alpha, \] which can be written as \[ (1-\alpha)(T+T^{q^s})-\alpha S^{q^s+1}+(1+\alpha)(AS-2BT)=0. \] Hence, we have the following result. \begin{theorem}\label{th:main} Let $\alpha \in \F_{q^n}^*$ with $\alpha \ne 1$ and $s$ a positive integer with $\gcd(s,n)=1$. 
If there exist $T,S,A,B \in \F_{q^n}$ such that \begin{enumerate} \item $(1-\alpha)(T+T^{q^s})-\alpha S^{q^s+1}+(1+\alpha)(AS-2BT)=0$; \item $X^2-SX-T \in \F_{q^n}[X]$ is irreducible over $\F_{q^n}$; \item $S^{q^s}=2A+BS$; \item $-T^{q^s}=A^2+B(AS-BT)$, \end{enumerate} then for every $\delta\in \F_{q^{2n}}$, with $\N_{q^{2n}/q^n}(\delta)=\alpha$, there exists $a\in\mathbb{F}_{q^{2n}}^*$ such that $\dim_{\mathbb{F}_q}\ker(f_{a,b,s}(x))=2$, where $b=\delta a$. \end{theorem} In the rest of this section $q=p^h$ with $p$ prime. We will show that the existence of the parameters $T,S,A,B\in\mathbb{F}_{q^n}$ satisfying the hypothesis of Theorem \ref{th:main} is equivalent to the existence of a suitable affine $\mathbb{F}_{q^n}$-rational point of the algebraic plane curve with equation \eqref{eq:curveqodd} or \eqref{eq:curveqeven}, for $q$ odd or $q$ even respectively. \subsection{Proof of Theorem \ref{th:mainmain} for $q$ odd}\label{sec:qodd} Set $\Delta=S^2+4T$. By 3. and 4. of Theorem \ref{th:main}, we get \[ B=\epsilon\Delta^{\frac{q^s-1}{2}},\quad A=\frac{1}{2}(S^{q^s}-\epsilon S \Delta^{\frac{q^s-1}{2}}), \] where $\epsilon\in\{1,-1\}$. Hence we get $AS-2BT=\frac{1}{2}S^{q^s+1}-\frac{1}{2}\epsilon\Delta^{\frac{q^s+1}{2}}$. Replacing such values in 1. of Theorem \ref{th:main}, we get \begin{equation}\label{eq:qodd1} 2(T+T^{q^s})(1-\alpha)+(1-\alpha)S^{q^s+1}=\epsilon(\alpha+1)\Delta^{\frac{q^s+1}2}. \end{equation} Also, the irreducibility of $X^2-SX-T$ over $\F_{q^n}$ is equivalent to the existence of a nonsquare element $\eta$ of $\F_{q^n}$ and a nonzero element $Z$ of $\F_{q^n}$ such that $\Delta=\eta Z^2$. Therefore, \eqref{eq:qodd1} becomes \[ 2(T+T^{q^s})+S^{q^s+1}=\beta \eta^{\frac{q^s+1}2} Z^{q^s+1}, \] where $\beta=\epsilon\frac{\alpha+1}{1-\alpha}$. Using that $T=\frac{\eta Z^2-S^2}{4}$, we get the following equation: \begin{equation}\label{eq:curveqodd} -(S^{q^s}-S)^2+\eta Z^2 + \eta^{q^s} Z^{2q^s} - 2\beta\eta^{\frac{q^s+1}{2}}Z^{q^s+1}=0. 
\end{equation} \begin{theorem}\label{th:qodd} Let $\beta\in\mathbb{F}_{q^n}\setminus\{1,-1\}$ and let $\eta$ be a non-square in $\mathbb{F}_{q^n}$. The plane curve $\mathcal{C}$ with affine equation \eqref{eq:curveqodd} is absolutely irreducible and has genus $g(\mathcal{C})=q^{2s}-q^s-1$. \end{theorem} \begin{proof} Let $G(Z)=\eta Z^2+\eta^{q^s} Z^{2q^s}-2\beta\eta^{\frac{q^s+1}{2}}Z^{q^s+1}\in\mathbb{F}_{q^n}[Z]$, and let $\mathcal{C}_1$ be the plane curve with affine equation $F_1(U,Z)=0$, where $F_1(U,Z)=U^2-G(Z)$. By direct computation using the assumption $\beta\ne\pm1$, it follows that $0$ is the unique multiple root of $G(Z)$, with multiplicity $2$; the other $2q^s-2$ roots $\lambda_1,\ldots,\lambda_{2q^s-2}$ of $G(Z)$ are simple. Then $G(Z)$ is not a square in $\mathbb{K}[Z]$, whence $F_1(U,Z)$ is irreducible over $\mathbb{K}=\overline{\mathbb{F}}_{q^n}$, i.e. $\mathcal{C}_1$ is absolutely irreducible. The genus of the quadratic Kummer cover $\mathcal{C}_1$ of the projective line is computed as follows. Let $z,u$ be the coordinate functions of $\mathcal{C}_1$, so that the function field of $\mathcal{C}_1$ is $\mathbb{K}(\mathcal{C}_1)=\mathbb{K}(z,u)$. The valuation of $G(z)$ at the zero of $z-\lambda_i$ in $\mathbb{K}(\mathbb{P}_z^1)=\mathbb{K}(z)$ is $1$, for every $i=1,\ldots,2q^s-2$. The valuation of $G(z)$ at any other place of $\mathbb{P}_z^1$ is even; namely, it is $2$ at the zero of $z$, $-2q^s$ at the pole of $z$, and $0$ at the zero of $z-\mu$ whenever $G(\mu)\ne0$. By Theorem \ref{th:kummer}, the only ramified places in $\mathcal{C}_1\to\mathbb{P}_z^1$ are the zeros of $z-\lambda_1,\ldots,z-\lambda_{2q^s-2}$; hence, \[g(\mathcal{C}_1)= 1+2(g(\mathbb{P}_z^1)-1)+\frac{1}{2}(2q^s-2)(2-1)=q^s-2.\] Since $\mathcal{C}$ has equation $(S^{q^s}-S)^2=G(Z)$, it is enough to show that $\mathcal{C}$ is an Artin-Schreier cover of $\mathcal{C}_1$, with covering $\varphi:\mathcal{C}\to\mathcal{C}_1$, $(Z,S)\mapsto(Z,U=S^{q^s}-S)$, of degree $q^s$. 
To this aim, consider the two poles $P_{\infty}$ and $Q_\infty$ of $u$ on $\mathcal{C}_1$; the rational function $1/z$ is a local parameter at each of them, i.e. $v_{P_{\infty}}(1/z)=v_{Q_{\infty}}(1/z)=1$. By direct computation, the Laurent series of $u$ at $P_\infty$ with respect to $1/z$ is \[u=\sqrt{\eta^{q^s}}\left(1/z\right)^{-q^s}-\beta\sqrt{\eta}\left(1/z\right)^{-1}+\frac{\eta-\beta^2\eta}{2\sqrt{\eta^{q^s}}}(1/z)^{q^s-2}+w,\] for some $w\in\mathbb{K}(\mathcal{C}_1)$ with $v_{P_\infty}(w)>q^s-2$. By choosing $\omega_{P_{\infty}}=\sqrt{\eta}z$ one has that $u-(\omega_{P_{\infty}}^{q^s}-\omega_{P_{\infty}})$ has valuation $-1$ at $P_\infty$, because $\beta\ne1$. Analogously, there exists $\omega_{Q_\infty}$ such that $v_{Q_{\infty}}(u-(\omega_{Q_{\infty}}^{q^s}-\omega_{Q_{\infty}}))=-1$. Hence, by Theorem \ref{th:artinschreier}, $\mathcal{C}$ is an absolutely irreducible Artin-Schreier extension of $\mathcal{C}_1$ of degree $q^s$. The ramified places in $\mathcal{C}\to\mathcal{C}_1$ are exactly $P_\infty$ and $Q_\infty$, which are totally ramified; any other place of $\mathcal{C}_1$ is unramified under $\mathcal{C}$. Therefore, \[g(\mathcal{C})=q^s\cdot g(\mathcal{C}_1)+\frac{q^s-1}{2}\left(-2+2\cdot2\right)=q^{2s}-q^s-1.\] \end{proof} \begin{proposition}\label{prop:HWqodd} Let $\mathcal{C}$ be the plane curve with affine equation \eqref{eq:curveqodd}. If \[ n\geq\begin{cases} 4s+1 & \textrm{if }\,q>3, \\ 4s+2 & \textrm{if }\,q=3,s>1,\\ 5 & \textrm{if }\,q=3,s=1; \end{cases} \] then there exists an $\mathbb{F}_{q^n}$-rational affine point $(\bar{z},\bar{s})$ of $\mathcal{C}$ such that $\bar{t}=\frac{\eta \bar{z}^2-\bar{s}^2}{4}$ is different from zero. \end{proposition} \begin{proof} By Theorem \ref{th:qodd}, $\mathcal{C}$ is absolutely irreducible with genus $g(\mathcal{C})=q^{2s}-q^s-1$.
By Theorem \ref{th:hasseweil}, the number $N_{q^n}$ of $\mathbb{F}_{q^n}$-rational places of $\mathcal{C}$ satisfies \[N_{q^n}\geq q^n+1-2(q^{2s}-q^s-1)\sqrt{q^n}.\] From the proof of Theorem \ref{th:qodd} the following facts follow. \begin{itemize} \item $z$ has exactly $2$ poles on $\mathcal{C}$, which coincide with the poles of $s$, namely the places lying over $P_\infty$ and $Q_\infty$. \item Using the equation of $\mathcal{C}$, the zeros of $t=\frac{\eta z^2-s^2}{4}=\frac{(\sqrt{\eta}z-s)(\sqrt{\eta}z+s)}{4}$ on $\mathcal{C}$ are also zeros of $(\beta-1)z^{q^s+1}$ and hence of $z$ as $\beta\ne1$; thus, they are the common zeros of $z$ and $s$ on $\mathcal{C}$, and there are exactly $2$ of them. \end{itemize} Altogether, there are $4$ places of $\mathcal{C}$ which are either poles of $s$ or $z$ or $t$, or zeros of $t$. The assumption on $n$ implies that \[ q^n+1-2(q^{2s}-q^s-1)\sqrt{q^n}>4, \] whence $N_{q^n}>4$. Then there exists an $\mathbb{F}_{q^n}$-rational place $P$ which is not a pole of $z$, $s$, or $t$, and is not a zero of $t$. Then the point $(\bar{z},\bar{s})=(z(P),s(P))$ yields the claim. \end{proof} From Theorem \ref{th:main} and Proposition \ref{prop:HWqodd} follows Corollary \ref{cor:mainmain_qodd}, which is our main result Theorem \ref{th:mainmain} when $q$ is odd. \begin{corollary}\label{cor:mainmain_qodd} Let $q$ be an odd prime power, $s\geq1$ be such that $\gcd(s,n)=1$. Suppose that \[ n\geq\begin{cases} 4s+2 & \textrm{if}\; q=3\textrm{ and }s>1; \\ 4s+1 & \textrm{otherwise}. \end{cases} \] Then for every $\delta\in\mathbb{F}_{q^{2n}}$ satisfying $\mathrm{N}_{q^{2n}/q^n}(\delta)\notin\{0,1\}$ there exists $a\in\mathbb{F}_{q^{2n}}^*$ such that $\dim_{\mathbb{F}_q}\ker(f_{a,b,s})=2$, where $b=\delta a$. \end{corollary} \subsection{Proof of Theorem \ref{th:mainmain} for $q$ even}\label{sec:qeven} Let $q$ be a power of $2$.
The conditions of Theorem \ref{th:main} read: \begin{enumerate} \item $T+T^{q^s}+\beta S^{q^{s}+1}+AS=0$, with $\beta=\frac{\alpha}{1+\alpha}\notin\{0,1\}$; \item $S\ne0$ and $\mathrm{Tr}_{q^n/2}(T/S^2)=1$; \item $B=S^{q^s-1}$; \item $A^2+A S^{q^s}+S^{2q^s-2}T+T^{q^s}=0$. \end{enumerate} By 1. we get \[ A= \beta S^{q^s}+\frac{T+T^{q^s}}S, \] which can be substituted into 4., obtaining \begin{equation}\label{eq:curveqeven} (\beta^2+\beta) S^{2(q^s+1)}+S^{q^s+1}(T^{q^s}+T)+S^{2q^s}T+S^2T^{q^s}+T^{2q^s}+T^2=0. \end{equation} Set $T=S^2 Y$. Then \eqref{eq:curveqeven} reads $H(S,Y)=0$, where \begin{equation}\label{eq:miserve} \begin{array}{lll} H(S,Y)=Y^2+S^{4(q^s-1)}Y^{2q^s}+\beta^2 S^{2(q^s-1)}+S^{q^s-1}Y+ \\ \\ S^{3(q^s-1)}Y^{q^s}+\beta S^{2(q^s-1)}+S^{2(q^s-1)}Y+S^{2(q^s-1)}Y^{q^s}. \end{array} \end{equation} Straightforward computation using $\mathrm{Tr}_{q^s/2}(Y)+\mathrm{Tr}_{q^s/2}(Y)^2=Y^{q^s}+Y$ shows that the polynomial $H(S,Y)$ in \eqref{eq:miserve} splits as follows. \begin{lemma} We have $H(S,Y)=G(S,Y)\cdot G^\prime(S,Y)$, where \[ G(S,Y)= S^{2(q^s-1)}Y^{q^s}+S^{q^s-1}(1+\beta+ \mathrm{Tr}_{q^s/2}(Y))+Y, \] \[ G^\prime(S,Y)= S^{2(q^s-1)}Y^{q^s}+S^{q^s-1}(\beta + \mathrm{Tr}_{q^s/2}(Y))+Y. \] \end{lemma} Condition 2. is equivalent to the existence of an element $Z\in \F_{q^n}$ such that \begin{equation}\label{eq:T} T=S^2(Z^2+Z+\epsilon), \end{equation} for some fixed $\epsilon\in\mathbb{F}_{q^n}$ such that $\mathrm{Tr}_{q^{n}/2}(\epsilon)=1$. \noindent Let $\mathcal{C}$ be the plane curve with affine equation $F(S,Z)=0$, where $F(S,Z)=G^\prime(S,Z^2+Z+\epsilon)$. \medskip In order to prove Theorem \ref{th:mainmain} when $q$ is even, by Theorem \ref{th:main} and the arguments at the beginning of Section \ref{sec:qeven}, it is enough to prove the existence of an $\fqn$-rational affine point $(\bar{s},\bar{z})$ of $\mathcal{C}$ such that $\bar{s}\ne0$ and $\bar{z}^2+\bar{z}+\epsilon\ne0$.
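The splitting $H=G\cdot G^\prime$ in the lemma above can be checked mechanically. The following Python sketch (an illustration, not part of the proof) treats $S$, $Y$ and the parameter $\beta$ as indeterminates over $\mathbb{F}_2$, writes $\mathrm{Tr}_{q^s/2}(Y)$ as the polynomial $\sum_{i=0}^{sh-1}Y^{2^i}$, and verifies the identity for $q^s\in\{2,4,8\}$.

```python
# Verify H(S,Y) = G(S,Y) * G'(S,Y) over GF(2) for small values of q^s.
# Polynomials in S, Y and the parameter beta are stored as dicts
# {(i, j, k): 1}, the triple meaning the monomial S^i * Y^j * beta^k.

def pmul(a, b):
    """Multiply two GF(2) polynomials given as {exponent-triple: 1} dicts."""
    out = {}
    for ea in a:
        for eb in b:
            e = tuple(u + v for u, v in zip(ea, eb))
            out[e] = out.get(e, 0) ^ 1   # coefficients live in GF(2)
    return {e: 1 for e, c in out.items() if c}

def padd(*ps):
    """Add GF(2) polynomials (coefficients mod 2)."""
    out = {}
    for p in ps:
        for e in p:
            out[e] = out.get(e, 0) ^ 1
    return {e: 1 for e, c in out.items() if c}

def mono(s=0, y=0, b=0):
    """The monomial S^s * Y^y * beta^b."""
    return {(s, y, b): 1}

for qs in (2, 4, 8):  # q^s = 2^(sh) with sh = 1, 2, 3
    # Tr_{q^s/2}(Y) = Y + Y^2 + ... + Y^(q^s/2) as a polynomial in Y
    tr = padd(*[mono(y=2 ** i) for i in range(qs.bit_length() - 1)])
    H = padd(mono(y=2), mono(s=4 * (qs - 1), y=2 * qs),
             mono(s=2 * (qs - 1), b=2), mono(s=qs - 1, y=1),
             mono(s=3 * (qs - 1), y=qs), mono(s=2 * (qs - 1), b=1),
             mono(s=2 * (qs - 1), y=1), mono(s=2 * (qs - 1), y=qs))
    G = padd(mono(s=2 * (qs - 1), y=qs),
             pmul(mono(s=qs - 1), padd(mono(), mono(b=1), tr)), mono(y=1))
    Gp = padd(mono(s=2 * (qs - 1), y=qs),
              pmul(mono(s=qs - 1), padd(mono(b=1), tr)), mono(y=1))
    assert pmul(G, Gp) == H
```

Since $\beta$ is kept symbolic, each successful comparison confirms the factorization as a polynomial identity in $S$, $Y$ and $\beta$ for that value of $q^s$.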
This is done by showing that $\mathcal{C}$ is absolutely irreducible, computing its genus, and applying the Hasse-Weil lower bound. To this aim, we consider the following subcovers: \[ \varphi_2:\C\to\C_2,\quad (S,Z)\mapsto(X=S^{q^s-1},Z), \] \[ \varphi_1:\C_2\to\C_1,\quad (X,Z)\mapsto(X,Y=Z^2+Z+\epsilon). \] The curves $\C_2,\C_1$ have equations $\C_2\colon F_2(X,Z)=0$ and $\C_1\colon F_1(X,Y)=0$, where \[ F_2(X,Z)= X^2(Z^2+Z+\epsilon)^{q^s}+X(\beta + \mathrm{Tr}_{q^s/2}(Z^2+Z+\epsilon))+Z^2+Z+\epsilon, \] \[ F_1(X,Y)= X^2Y^{q^s}+X(\beta + \mathrm{Tr}_{q^s/2}(Y))+Y. \] We first prove that $\mathcal{C}_2$ is absolutely irreducible by direct inspection, and that $\mathcal{C}$ is absolutely irreducible, being a Kummer cover of $\mathcal{C}_2$. To compute the genus of $\mathcal{C}$, we start with the genus of the absolutely irreducible subcover $\mathcal{C}_1$ of $\mathcal{C}_2$, which is computed with the Hurwitz genus formula. Then we compute the genus of $\mathcal{C}_2$ as an Artin-Schreier cover of $\mathcal{C}_1$. Finally, the genus of the Kummer cover $\mathcal{C}$ of $\mathcal{C}_2$ is computed. \medskip Let $\gamma,\gamma+1$ be the roots of $Z^2+Z+\epsilon\in\mathbb{F}_{q^n}[Z]$. Hence, $\mathrm{Tr}_{q^{2n}/q^n}(\gamma)=1$ and $\mathrm{N}_{q^{2n}/q^n}(\gamma)=\epsilon$; also, $\mathrm{Tr}_{q^{n}/2}(\epsilon)=1$ implies $\gamma\in\mathbb{F}_{q^{2n}}\setminus\mathbb{F}_{q^n}$. \begin{lemma}\label{lemma:C'2irr} The curve $\mathcal{C}_2$ is absolutely irreducible. \end{lemma} \begin{proof} By contradiction, suppose $F_2(X,Z)=\hat{F}(X,Z)\cdot \tilde{F}(X,Z)$ for some non-constant polynomials $\hat{F},\tilde{F}\in\mathbb{K}[X,Z]$. Then, since $F_2$ has degree $2$ in $X$, $\hat{F}(X,Z)=X\cdot A(Z)+B(Z)$ and $\tilde{F}(X,Z)=X\cdot C(Z)+D(Z)$; up to scalar multiplication, \[ A(Z)=(Z+\gamma)^a(Z+\gamma+1)^b,\qquad C(Z)=(Z+\gamma)^c(Z+\gamma+1)^d, \] where $a,b,c,d\geq0$ satisfy $a+c=b+d=q^s$. Also, \[ (Z+\gamma)^a(Z+\gamma+1)^bD(Z)+(Z+\gamma)^c(Z+\gamma+1)^dB(Z)=\beta + \mathrm{Tr}_{q^s/2}(Z^2+Z+\epsilon).
\] Clearly, $a,b \in \{0,q^s\}$ as $\beta\ne0$. If $a=0$, then $b=c=q^s$ and $d=0$, since $B(Z)D(Z)=Z^2+Z+\epsilon$; hence \begin{equation}\label{eq:Zirr} (Z+\gamma+1)^{q^s}D(Z)+(Z+\gamma)^{q^s}B(Z)=\beta + \mathrm{Tr}_{q^s/2}(Z^2+Z+\epsilon). \end{equation} This implies $D(\gamma)\ne0$ and $B(\gamma+1)\ne0$, whence $D(Z)=\lambda(Z+\gamma+1)$ and $B(Z)=\lambda^{-1}(Z+\gamma)$ for some $\lambda\in\mathbb{K}^*$. With $Z=\gamma$ in \eqref{eq:Zirr}, we get $\beta=1$, a contradiction. If $a=q^s$, the same arguments yield a contradiction. \end{proof} \begin{proposition}\label{prop:C'1} The curve $\C_1$ is absolutely irreducible with genus $q^s/2$. \end{proposition} \begin{proof} As $\C_1$ is a subcover of $\C_2$, the absolute irreducibility of $\C_1$ follows from Lemma \ref{lemma:C'2irr}. Let $(X\colon Y\colon V)$ be the homogeneous coordinates of the affine point $(X,Y)$. By direct computation, the affine points of $\mathcal{C}_1$ are simple, and hence we identify each of them with the unique place of $\mathcal{C}_1$ centered at it; the points at infinity $E_0=(1\colon 0\colon 0)$ and $E_1=(0\colon 1\colon 0)$ of $\mathcal{C}_1$ are singular. The point $E_0$ is $q^s$-fold with unique tangent line $\ell_Y : Y=0$ having intersection multiplicity $q^s+1$ with $\mathcal{C}_1$ at $E_0$; the point $E_1$ is double with unique tangent line $\ell_X: X=0$ having intersection multiplicity $q^s+1$ with $\mathcal{C}_1$ at $E_1$. We compute the genus of $\C_1$ by applying Theorem \ref{th:hurwitz} to the covering $\varphi_0\colon\C_1\to\mathbb{P}_y^1$, $(X\colon Y\colon V)\mapsto(Y\colon V)$, where $y$ is the coordinate function of $Y$; $\varphi_0$ has degree $2$. We describe the places of $\mathbb{P}_y^1$ which ramify in $\varphi_0$; we denote by $P_\mu$, $\mu\in\mathbb{K}$, the zero of $y-\mu$ on $\mathbb{P}_y^1$, and by $P_\infty$ the pole of $y$ on $\mathbb{P}_y^1$.
\begin{itemize} \item If $\bar{y}\in\mathbb{K}$ satisfies $\bar{y}\ne0$ and $\beta+\mathrm{Tr}_{q^s/2}(\bar{y})\ne0$, then $F_1(X,\bar{y})$ has two distinct roots $\bar{x}_1,\bar{x}_2\in\mathbb{K}$, and hence $P_{\bar y}$ does not ramify in $\varphi_0$. \item If $\bar{y}\in\mathbb{K}$ is one of the $q^s/2$ distinct roots of $\beta+\mathrm{Tr}_{q^s/2}(Y)$, then $(\bar{x},\bar{y})\in\varphi_0^{-1}(P_{\bar{y}})$, with $\bar{x}=\sqrt{\bar{y}^{1-q^s}}$. The tangent line to $\C_1$ at $(\bar{x},\bar{y})$ is $\ell_{\bar y}:Y-\bar{y}=0$, having intersection multiplicity $2$ with $\C_1$ at $(\bar{x},\bar{y})$. Hence, \[e((\bar{x},\bar{y})\mid P_{\bar y})= e((\bar{x},\bar{y})\mid P_{\bar y})\cdot v_{P_{\bar y}}(y-\bar{y})=v_{(\bar{x},\bar{y})}(y-\bar{y})=2,\] so that $P_{\bar y}$ totally ramifies in $\varphi_0$. \item The change of coordinates $(X\colon Y\colon V)\mapsto(X\colon V\colon Y)$ maps $E_1$ to the origin $E_2=(0\colon 0\colon 1)=(0,0)$ and $\mathcal{C}_1$ to the curve $\overline{\mathcal{C}}_1$ with affine equation $\overline{F}_1(X,Y)=0$ where \[\overline{F}_1(X,Y)=X^2+X(\beta Y^{q^s+1}+\sum_{i=0}^{sh-1}Y^{q^s+1-2^i}) + Y^{q^s+1}.\] The point $E_2$ is double for $\overline{\mathcal{C}}_1$ with tangent line $\ell_X\colon X=0$; as long as this holds, we iteratively apply the quadratic transformation $(X\colon Y\colon V)\mapsto(XV\colon Y^2\colon YV)$ which maps $\overline{\mathcal{C}}_1$ to the curve with equation $\overline{F}_1(XY,Y)/Y^2=0$. After $k$ iterations, the curve has equation \[X^2+X(\beta Y^{q^s+1-k}+\sum_{i=0}^{sh-1}Y^{q^s+1-2^i-k})+Y^{q^s+1-2k}=0.\] Hence, after $q^s/2$ iterations, the curve has only one affine point on the line $\ell_Y\colon Y=0$, which is a simple point. This means that $\mathcal{C}_1$ has exactly one place centered at $E_1$, which we identify with $E_1$. Since the intersection multiplicity of $\C_1$ at $E_1$ with $\ell_\infty\colon V=0$ and $\ell_Y$ is $2$ and $0$ respectively, we have that $v_{E_1}(y)=-2$; see \cite[Theorem 4.36]{HKT}.
Thus, the pole of $y$ in $\mathbb{P}^1_y$ is totally ramified in $\varphi_0$. \item The unique place centered at $E_2=(0,0)$ is clearly a zero of $y$. The only place of $\mathbb{P}^1_y$ which can be covered by $E_0$ is the zero of $y$. Therefore, the zero of $y$ in $\mathbb{P}^1_y$ is not ramified in $\varphi_0$, and $E_0$ and $E_2$ are the simple zeros of $y$ on $\mathcal{C}_1$. \end{itemize} The $q^s/2+1$ ramification places $P'$, namely $E_1$ and $(\bar{x},\bar{y})$ with $\beta+\mathrm{Tr}_{q^s/2}(\bar{y})=0$, are wildly ramified. For each of them, we choose a local parameter $t^\prime$ at $P^\prime$, a local parameter $t$ at the place $P$ lying under $P^\prime$ in $\varphi_0$, and compute $v_{P^\prime}(d\varphi_0^*(t)/dt^\prime)$, where the pull-back $\varphi_0^*$ of $\varphi_0$ is the identity on $\mathbb{K}(\mathbb{P}_y^1)=\mathbb{K}(y)$. \begin{itemize} \item Let $P^\prime=E_1$, lying over $P=P_\infty$. We choose $t=1/y$. From $v_{E_1}(y)=-2$ we get $v_{E_1}(x)=q^s-1$; hence we can choose $t^\prime=1/(xy^{q^s/2})$. By direct computation, the Laurent series of $t$ at $E_1$ with respect to $t^\prime$ is $t= (t^\prime)^2 +(t^\prime)^3 + w$, with $v_{E_1}(w)\geq4$. Thus, $\frac{dt}{dt^\prime}=(t^\prime)^2+\frac{dw}{dt^\prime}$ has valuation $2$ at $E_1$. \item Let $P^\prime=(\bar{x},\bar{y})$, lying over $P=P_{\bar y}$ with $\beta+\mathrm{Tr}_{q^s/2}(\bar y)=0$. We choose $t=y-\bar{y}$ and $t^\prime=x-\bar{x}$. By direct computation, the Laurent series of $t$ at $P^\prime$ with respect to $t^\prime$ is $t=\frac{\bar{y}^{q^s}}{\bar{x}+1}(t^\prime)^2+\frac{\bar{y}^{q^s}}{\bar{x}^2+1}(t^\prime)^3+w$, with $v_{P^\prime}(w)\geq4$. Thus, $\frac{dt}{dt^\prime}=\frac{\bar{y}^{q^s}}{\bar{x}^2+1}(t^\prime)^2+\frac{dw}{dt^\prime}$ has valuation $2$ at $P^\prime$. \end{itemize} Theorem \ref{th:hurwitz} now yields \[ 2g(\C_1)-2=\deg(\varphi_0)\cdot(2g(\mathbb{P}_y^1)-2)+\left(\frac{q^s}{2}+1\right)\cdot2, \] whence $g(\C_1)=\frac{q^s}{2}$.
\end{proof} \begin{proposition}\label{prop:C'2} The curve $\C_2$ has genus $q^s-1$. \end{proposition} \begin{proof} By Lemma \ref{lemma:C'2irr}, $\C_2$ is absolutely irreducible; hence, the covering $\varphi_1:\C_2\to\C_1$, $(X,Z)\mapsto(X,Y=Z^2+Z+\epsilon)$, is an Artin-Schreier covering of degree $2$. Every place of $\C_1$ which is not a pole of $y-\epsilon$ is unramified in $\varphi_1$. We consider the unique pole of $y-\epsilon$ on $\C_1$, namely $E_1$. By direct computation, the Laurent series of $y-\epsilon$ at $E_1$ with respect to the local parameter $t^\prime=1/(xy^{q^s/2})$ is $y-\epsilon=(t^\prime)^{-2}+(t^\prime)^{-1}+w$, with $v_{E_1}(w)\geq0$. Choosing $\omega=(t^\prime)^{-1}$, we have $v_{E_1}((y-\epsilon)-(\omega^2+\omega))=v_{E_1}(w)\geq0$. Thus, by Theorem \ref{th:artinschreier}, $E_1$ is unramified in $\varphi_1$. Altogether, the covering $\varphi_1:\C_2\to\C_1$ is unramified. This implies \[ 2g(\C_2)-2 = \deg(\varphi_1)\cdot(2g(\C_1)-2), \] whence $g(\C_2)=q^s-1$. \end{proof} \begin{theorem}\label{th:C'} For the curve $\mathcal{C}$ the following holds. \begin{itemize} \item[(a)] $\C$ is absolutely irreducible with genus $q^{2s}-q^s-1$. \item[(b)] Let $s$ and $z$ be the coordinate functions of $\mathcal{C}$, and let $t=s^2(z^2+z+\epsilon)$. The number of $\mathbb{F}_{q^n}$-rational places of $\C$ which are zeros of $t$, or poles of either $s$ or $z$ or $t$, is at most $2q^s+2$. \end{itemize} \end{theorem} \begin{proof} We compute the valuation of $x$ at the places of $\C_2$. From the proofs of Propositions \ref{prop:C'1} and \ref{prop:C'2}, it follows that $x$ has exactly $2$ zeros on $\C_1$, namely $E_2$ with $v_{E_2}(x)=1$ and $E_1$ with $v_{E_1}(x)=q^s-1$; also, $x$ has exactly $4$ zeros on $\C_2$, namely the two places $Q_1,Q_2$ lying over $E_2$ and the two places $Q_3,Q_4$ over $E_1$. Using the ramification indices, this implies $v_{Q_1}(x)=v_{Q_2}(x)=1$ and $v_{Q_3}(x)=v_{Q_4}(x)=q^s-1$.
The unique pole of $x$ on $\C_1$ is $E_0$; since $v_{E_0}(y)=1$, this implies $v_{E_0}(x)=-q^s$. The poles of $x$ on $\C_2$ are the two places $R_1,R_2$ lying over $E_0$, with $v_{R_1}(x)=v_{R_2}(x)=-q^s$. Therefore, by Theorem \ref{th:kummer}, $\C$ is absolutely irreducible and $\varphi_2:\C\to\C_2$ is a Kummer covering of degree $q^s-1$. The places of $\C_2$ which ramify in $\varphi_2$ are exactly $Q_1,Q_2,R_1,R_2$, and they are totally ramified; any other place is unramified in $\varphi_2$. The genus of $\C$ is \[g(\C)=1+\deg(\varphi_2)\cdot(g(\C_2)-1)+\frac{1}{2}\cdot 4\cdot(\deg(\varphi_2)-1)=q^{2s}-q^s-1.\] Using the proofs of Propositions \ref{prop:C'1} and \ref{prop:C'2}, we obtain that: \begin{itemize} \item $s$ has exactly $2$ poles on $\C$, namely the places over $R_1$ or $R_2$; \item $z$ has exactly $2(q^s-1)$ poles on $\C$, namely the places over $Q_3$ or $Q_4$; \item $s^2$ has exactly $2(q^s-1)+2$ zeros on $\C$; namely, two of them lie over $Q_1$ or $Q_2$, while $2(q^s-1)$ of them lie over $Q_3$ or $Q_4$ (and have already been considered above); \item $z^2+z+\epsilon=y$ has exactly $4$ zeros on $\C$, namely the places over $Q_1,Q_2,R_1,R_2$ (which have already been considered above). \end{itemize} Altogether, the number of $\mathbb{F}_{q^n}$-rational places of $\C$ which are poles of $s$, $z$, $t$, or are zeros of $t$, is smaller than or equal to $2q^s+2$. \end{proof} From Theorems \ref{th:main} and \ref{th:C'} follows Corollary \ref{cor:mainmain_qeven}, which is our main result Theorem \ref{th:mainmain} when $q$ is even. \begin{corollary}\label{cor:mainmain_qeven} Let $q$ be an even prime power, $s\geq1$ be such that $\gcd(s,n)=1$. Suppose that \[ n\geq\begin{cases} 4s+2 & \textrm{if}\;q=2\textrm{ and }s>2; \\ 4s+1 & \textrm{otherwise}.
\end{cases} \] Then for every $\delta\in\mathbb{F}_{q^{2n}}$ satisfying $\mathrm{N}_{q^{2n}/q^n}(\delta)\notin\{0,1\}$ there exists $a\in\mathbb{F}_{q^{2n}}^*$ such that $\dim_{\mathbb{F}_q}\ker(f_{a,b,s})=2$, where $b=\delta a$. \end{corollary} \begin{proof} By Theorems \ref{th:hasseweil} and \ref{th:C'}{\it{(a)}}, the number $N_{q^n}$ of $\mathbb{F}_{q^n}$-rational places of $\mathcal{C}$ satisfies \[ N_{q^n}\geq q^n+1 - 2(q^{2s}-q^s-1)\sqrt{q^n} > 2q^s+2. \] By Theorem \ref{th:C'}{\it{(b)}}, there exists an $\mathbb{F}_{q^n}$-rational affine point $(\bar{s},\bar{z})$ of $\mathcal{C}$ such that $\bar{t}=\bar{s}^2 (\bar{z}^2+\bar{z}+\epsilon)$ is different from zero. The claim follows. \end{proof} \section{Applications to linear sets and rank metric codes}\label{sec:appl} \subsection{Linear sets}\label{sec:linearsets} Let $\Lambda=\PG(V,\F_{q^m})=\PG(1,q^m)$, where $V$ is a vector space of dimension $2$ over $\F_{q^m}$. A point set $L$ of $\Lambda$ is said to be an \emph{$\F_q$-linear set} of $\Lambda$ of rank $k$ if it is defined by the non-zero vectors of a $k$-dimensional $\F_q$-vector subspace $U$ of $V$, i.e. \[L=L_U=\{\la {\bf u} \ra_{\mathbb{F}_{q^m}} \colon {\bf u}\in U\setminus \{{\bf 0} \}\}.\] We say that two linear sets $L_U$ and $L_W$ of $\Lambda=\PG(1,q^m)$ are $\mathrm{P}\Gamma \mathrm{L}$-\emph{equivalent} if there exists $\varphi \in \mathrm{P}\Gamma \mathrm{L} (2,q^m)$ such that $\varphi(L_U)=L_W$. \smallskip We start by pointing out that if the point $\langle (0,1) \rangle_{\F_{q^m}}$ is not contained in a linear set $L_U$ of rank $m$ of $\PG(1,q^m)$ (which we can always assume after a suitable projectivity), then $U=U_f=\{(x,f(x))\colon x\in \F_{q^m}\}$ for some $q$-polynomial $\displaystyle f(x)=\sum_{i=0}^{m-1}a_ix^{q^i}\in \tilde{\mathcal{L}}_{m,q}$. In this case we will denote the associated linear set by $L_f$.
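As a toy illustration of the correspondence between $q$-polynomials and linear sets (the parameters $q=2$, $m=4$, $f(x)=x^{q}$ and the modulus $x^4+x+1$ are illustrative choices, not taken from the text), the following Python sketch lists the points of $L_f$ through their slopes $f(x)x^{-1}$ and checks that $L_f$ has $(q^m-1)/(q-1)=15$ distinct points, the maximum possible size for a linear set of rank $m$.

```python
# Toy example: the linear set L_f in PG(1, q^m) for q = 2, m = 4, f(x) = x^q.
# F_16 is realized as F_2[x]/(x^4 + x + 1); field elements are 4-bit integers.
MOD, DEG = 0b10011, 4

def gmul(a, b):
    """Carry-less multiplication modulo x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << DEG):
            a ^= MOD
    return r

def ginv(a):
    """Inverse of a nonzero element via a^(2^4 - 2)."""
    r = 1
    for _ in range((1 << DEG) - 2):
        r = gmul(r, a)
    return r

f = lambda x: gmul(x, x)  # f(x) = x^q = x^2, a q-polynomial with trivial kernel

# Each vector (x, f(x)) with x != 0 spans the projective point <(1, f(x)/x)>.
slopes = {gmul(f(x), ginv(x)) for x in range(1, 1 << DEG)}

# L_f attains the maximum possible size (q^m - 1)/(q - 1) = 15.
assert len(slopes) == 15
```

Each slope occurring exactly once reflects the fact that every point of this $L_f$ has weight one.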
Also, recall that the \emph{weight of a point} $P=\langle \mathbf{u} \rangle_{\F_{q^m}}$ is $w_{L_U}(P)=\dim_{\F_q}(U\cap\langle \mathbf{u} \rangle_{\F_{q^m}})$. \smallskip One of the most studied classes of linear sets of the projective line, especially because of its applications (see e.g. \cite{Polverino,Sheekey2016}), is the family of maximum scattered linear sets. A {\it maximum scattered} $\F_q$-linear set of $\PG(1,q^m)$ is an $\F_q$-linear set of rank $m$ of $\PG(1,q^m)$ of size $(q^m-1)/(q-1)$, or equivalently a linear set of rank $m$ in $\PG(1,q^m)$ all of whose points have weight one. If $L_f$ is a maximum scattered linear set in $\PG(1,q^m)$, we also say that $f$ is a \emph{scattered polynomial}. The known scattered polynomials of $\F_{q^m}$ are \begin{enumerate} \item $f_1(x)=x^{q^s}\in \tilde{\mathcal{L}}_{m,q}$, with $\gcd(s,m)=1$, see \cite{BL2000}; \item $f_2(x)= x^{q^s}+\alpha x^{q^{m-s}}\in\tilde{\mathcal{L}}_{m,q}$, with $m\geq 4$, $\gcd(s,m)=1$, $\N_{q^m/q}(\alpha) \notin\{0,1\}$, see \cite{LMPT2015,LP2001,Sheekey2016}; \item $f_3(x)= x^{q^s}+\alpha x^{q^{s+\frac{m}2}}\in\tilde{\mathcal{L}}_{m,q}$, $m \in \{6,8\}$, $\gcd(s,\frac{m}2)=1$ and some conditions on $\alpha$, see \cite{CMPZ} and below; \item $f_4(x)=x^q+x^{q^3}+\alpha x^{q^5}\in \tilde{\mathcal{L}}_{6,q}$, $q$ odd and $\alpha^2+\alpha=1$, see \cite{CsMZ2018,MMZ}; \item $f_5(x)=h^{q-1}x^q-h^{q^2-1}x^{q^2}+x^{q^4}+x^{q^5}\in \tilde{\mathcal{L}}_{6,q}$, $q$ odd, $h^{q^3+1}=-1$, see \cite{BZZ,ZZ}. \end{enumerate} \smallskip In \cite{CMPZ}, the authors introduced the family of linear sets $L_{\delta,s}$ of rank $2n$ in $\PG(1,q^{2n})$ mentioned in 3., i.e. 
those linear sets defined by the $\F_q$-subspace \begin{equation}\label{eq:Ud,s} U_{\delta,s}=\{ (x,f_{\delta,s}(x)) \colon x \in \F_{q^{2n}} \}\subset \F_{q^{2n}}\times \F_{q^{2n}}, \end{equation} where \[ f_{\delta,s}(x)=x^{q^s}+\delta x^{q^{n+s}}\in{\tilde \cL}_{2n,q}, \] with $\mathrm{N}_{q^{2n}/q^n}(\delta) \notin\{0,1\}$, $1 \leq s \leq 2n-1$ and $\gcd(s,n)=1$. The relevance of this family lies in the property that each point of $L_{\delta,s}$ has weight at most two; see \cite[Proposition 4.1]{CMPZ}. In \cite[Section 7]{CMPZ} the authors proved that for $n=3$ and $q>4$ there exists $\delta \in \F_{q^2}$ such that $L_{\delta,s}$ is scattered; for $n=4$, $q$ odd and $\delta^2=-1$ the linear set $L_{\delta,s}$ is scattered. In \cite[Theorem 7.3]{PZ2019} the authors completely determined for $n=3$ necessary and sufficient conditions on $\delta$ ensuring $L_{\delta,s}$ to be scattered. Note that for $n=3$ we may restrict to the case $s=2$, since every linear set $L_{\delta,s}$ is equivalent to $L_{\delta',2}$ for some $\delta'\in \F_{q^{2n}}^*$. More precisely, if $\N_{q^6/q^3}(\delta)\notin\{0,1\}$ and we denote $A=-\frac{1}{\delta^{q^3+1}-1}$, one has that $L_{\delta,2}$ is scattered if and only if the equation \begin{equation}\label{eq:eq2degree} Y^2-(\mathrm{Tr}_{q^3/q}(A)-1)Y+\N_{q^3/q}(A)=0 \end{equation} admits two distinct roots in $\F_q$. \begin{theorem}\label{th:noscatt} Let $q$ be a prime power and $n,s$ be two relatively prime positive integers. Suppose that \[ n\geq\begin{cases} 4s+2 & \textrm{if}\; q=3\textrm{ and }s>1,\,\textrm{or}\;q=2\textrm{ and }s>2; \\ 4s+1 & \textrm{otherwise}. \end{cases} \] Then, for every $\delta\in\mathbb{F}_{q^{2n}}^*$, the $\mathbb{F}_q$-linear set $L_{\delta,s}$ in $\PG(1,q^{2n})$ is not scattered.
\end{theorem} \begin{proof} For every $m\in\mathbb{F}_{q^{2n}}$, the weight of the point $\langle(1,m)\rangle_{\mathbb{F}_{q^{2n}}}$ in $L_{\delta,s}$ coincides with the dimension over $\mathbb{F}_q$ of the kernel of $f_{\delta,s}(x)-mx$. If $\N_{q^{2n}/q^n}(\delta)=1$, then the point $\langle(1,0)\rangle_{\mathbb{F}_{q^{2n}}}$ has weight $n$ in $L_{\delta,s}$. Let $\N_{q^{2n}/q^n}(\delta)\ne1$. By Theorem \ref{th:mainmain}, there exists $a\in\F_{q^{2n}}^*$ such that $\dim_{\F_q}\ker(f_{a,\delta a,s}(x))=2$, whence \[ \dim_{\F_q}\ker\left(a\left(f_{\delta,s}(x)+\frac{1}{a}x\right)\right)=2. \] This implies that the point $\langle\left(1,-\frac{1}{a}\right)\rangle_{\F_{q^{2n}}}$ has weight $2$ in $L_{\delta,s}$. The claim is proved. \end{proof} Hence, we have the following description for the linear set $L_{\delta,s}$. \begin{corollary}\label{cor:classbin} Let $q$ be a prime power and $n,s$ be two relatively prime positive integers. \begin{itemize} \item If $n=3$, then $L_{\delta,s}$ is a scattered linear set if and only if Equation \ref{eq:eq2degree} admits two distinct roots in $\F_q$. \item If $n=4$, $q$ is odd and $\delta^2=-1$ then $L_{\delta,s}$ is scattered. \item If \[ n\geq\begin{cases} 4s+2 & \textrm{if}\; q=3\textrm{ and }s>1,\,\textrm{or}\;q=2\textrm{ and }s>2, \\ 4s+1 & \textrm{otherwise}, \end{cases} \] then, for every $\delta\in\mathbb{F}_{q^{2n}}^*$, $L_{\delta,s}$ is not scattered. \end{itemize} \end{corollary} \begin{proof} The claim follows from \cite[Theorem 7.3]{PZ2019}, \cite[Theorem 7.2]{CMPZ}, and Theorem \ref{th:noscatt}. \end{proof} Among the known scattered polynomials listed above, the families in 3., 4. and 5. provide scattered polynomials for infinitely many $q$'s, but only over a specific extension of $\F_q$, namely either $\F_{q^6}$ or $\F_{q^8}$. 
Unlike this situation, the families in 1.\ and 2.\ provide scattered polynomials over infinitely many extensions $\F_{q^m}$ of $\F_{q}$; they are named respectively as scattered polynomials of pseudoregulus type, and as scattered polynomials of LP type (after Lunardon and Polverino). The scattered polynomials of pseudoregulus or LP type have raised the following question: which polynomials over $\mathbb{F}_{q^m}$ are scattered over infinitely many extensions of $\mathbb{F}_{q^m}$? \begin{definition}{\rm \cite[Section 1]{BZ}} Let $f(x)\in\tilde{\cL}_{m,q}$, $0\leq t\leq m-1$, $\ell\geq1$, and $U_{\ell}=\{(x^{q^t},f(x))\colon x\in\F_{q^{m\ell}}\}$. We say that $f(x)$ is an exceptional scattered polynomial of index $t$ if $L_{U_{\ell}}$ is a scattered $\F_q$-linear set in $\PG(1,q^{m\ell})$ for infinitely many $\ell$'s. \end{definition} Clearly, the scattered polynomials of pseudoregulus type are exceptional scattered of index $0$. Also, for the scattered polynomial $f_2(x)$ of LP type, \[ U_{f_2}=\{(x^{q^s}, x^{q^{2s}}+\alpha x)\colon x\in\F_{q^m}\}; \] thus, the polynomial $x^{q^{2s}}+\alpha x$ is exceptional scattered of index $s$. For a scattered polynomial $f(x)\in\tilde{\cL}_{m,q}$ of index $t$, we say that $f(x)$ is $t$-normalized if the following properties hold: $f(x)$ is monic; the coefficient of $x^{q^t}$ in $f(x)$ is zero; if $t>0$, the coefficient of $x$ in $f(x)$ is nonzero. Up to ${\rm PGL}$-equivalence of the corresponding scattered linear set, we may always assume that $f(x)$ is $t$-normalized. \begin{theorem} Let $f(x)\in\tilde{\cL}_{m,q}$ be a $t$-normalized exceptional scattered polynomial of index $t$. Then the following holds. \begin{itemize} \item If $t=0$, then $f(x)$ is of pseudoregulus type; see \cite[Corollary 3.4]{BZ} for $q>5$, \cite[Section 4]{BM} for $q\leq5$. \item If $t=1$ or $t=2$, then $f(x)$ is either of pseudoregulus type or of LP type; see \cite[Corollary 3.7]{BZ} for $t=1$, \cite[Corollary 1.4]{BM} for $t=2$. 
\item If $t\geq3$, $q$ is odd, and $\max\{\deg_q f(x),t\}$ is an odd prime, then $f(x)=x$; see \cite[Theorem 1.2]{FM}. \end{itemize} \end{theorem} Recall that the polynomials $f_3(x)$ of family 3. in the list above are scattered under certain assumptions for $m\in\{6,8\}$; even when $f_3(x)$ is not scattered, still all the points of $L_{f_3}$ have weight at most $2$. Thus, one may conjecture that family 3. contains scattered polynomials over $\F_{q^m}$ for every even $m$. Note that, even if this is the case, the arising scattered polynomials are not exceptional: not only the coefficients but also the degree depend heavily on the underlying field $\F_{q^m}$. Our asymptotic result Theorem \ref{th:mainmain} shows that the family of scattered polynomials in 3. cannot be extended to any higher extension $\mathbb{F}_{q^m}$ when $m$ is large enough with respect to $s$. \subsection{Rank metric codes}\label{sec:MRD} Rank metric codes were introduced by Delsarte \cite{Delsarte} in 1978 and they have been intensively investigated in recent years because of their applications; we refer to \cite{sheekey_newest_preprint} for a recent survey on this topic. The set of $m \times n$ matrices $\fq^{m\times n}$ over $\fq$ may be endowed with a metric, called \emph{rank metric}, defined by \[d(A,B) = \mathrm{rk}\,(A-B).\] A subset $\C \subseteq \fq^{m\times n}$ equipped with the rank metric is called a \emph{rank metric code} (an \emph{RM}-code, for short). The minimum distance of $\C$ is defined as \[d = \min\{ d(A,B) \colon A,B \in \C,\,\, A\neq B \}.\] Denote the parameters of an RM-code $\C\subseteq\fq^{m\times n}$ with minimum distance $d$ by $(m,n,q;d)$. We are interested in $\fq$-\emph{linear} RM-codes, i.e. $\fq$-subspaces of $\fq^{m\times n}$. Delsarte showed in \cite{Delsarte} that the parameters of these codes must obey a Singleton-like bound, i.e. \[ |\C| \leq q^{\max\{m,n\}(\min\{m,n\}-d+1)}.
\] When equality holds, we call $\C$ a \emph{maximum rank distance} (\emph{MRD} for short) code. Examples of $\fq$-linear MRD-codes were first found in \cite{Delsarte,Gabidulin}. We say that two $\fq$-linear RM-codes $\C$ and $\C'$ are equivalent if there exist $X \in \mathrm{GL}(m,q)$, $Y \in \mathrm{GL}(n,q)$, and $\sigma\in{\rm Aut}(\fq)$ such that \[\C'=\{XC^\sigma Y \colon C \in \C\}.\] The \emph{left} and \emph{right} idealisers of $\C$ are defined in \cite{LN2016} as $L(\C)=\{A \in \mathrm{GL}(m,q) \colon A \C\subseteq \C\}$ and $R(\C)=\{B \in \mathrm{GL}(n,q) \colon \C B \subseteq \C\}$. They are invariant under the equivalence of rank metric codes, and have been investigated in \cite{LTZ2}; further invariants have been introduced in \cite{GZ,NPH2}. Much of the focus on MRD-codes of $\fq^{m\times m}$ to date has been on codes which are $\F_{q^m}$-\emph{linear}, i.e. codes in which the left (or right) idealiser contains a field isomorphic to $\F_{q^m}$, since for such codes a fast decoding algorithm has been developed in \cite{Gabidulin}. Very few examples of such codes are known, see \cite{BZZ,CMPZ,CsMPZh,CsMZ2018,Delsarte,Gabidulin,LP2001,MMZ,Sheekey2016,ZZ}. In \cite[Section 5]{Sheekey2016} Sheekey showed that scattered $\F_q$-linear sets of $\PG(1,q^m)$ of rank $m$ yield $\F_q$-linear MRD-codes with parameters $(m,m,q;m-1)$ with left idealiser isomorphic to $\F_{q^m}$; see \cite{CsMPZ2019,CSMPZ2016,ShVdV} for further details on such kind of connections. We briefly recall here the construction from \cite{Sheekey2016}. Let $U_f=\{(x,f(x))\colon x\in \F_{q^m}\}$, where $f(x)$ is a scattered $q$-polynomial. The choice of an $\F_q$-basis for $\F_{q^m}$ defines a canonical ring isomorphism between $\mathrm{End}(\F_{q^m},\F_q)$ and $\F_q^{m\times m}$. 
Thus, the set \[ \C_f=\{x\mapsto af(x)+bx \colon a,b \in \F_{q^m}\}\subset \mathrm{End}(\F_{q^m},\F_q) \] corresponds to a set of $m\times m$ matrices over $\F_q$ forming an $\F_q$-linear MRD-code with parameters $(m,m,q;m-1)$. Also, as $\C_f$ is an $\F_{q^m}$-subspace of $\mathrm{End}(\F_{q^m},\F_q)$, its left idealiser $L(\C_f)$ is isomorphic to $\F_{q^m}$; see also \cite[Section 6]{CMPZ}. Now consider the set \[ \C_{f_{\delta,s}}=\{ x\mapsto a(x^{q^s}+\delta x^{q^{s+n}})+bx \colon a,b \in \F_{q^{2n}} \}, \] which corresponds to a set of $2n\times 2n$ matrices over $\F_q$ forming an $\F_q$-linear rank metric code with parameters $(2n,2n,q;2n-i)$, where \[ i=\max\{ w_{L_{\delta,s}}(P) \colon P \in \PG(1,q^{2n}) \}. \] The following theorem is a consequence of Corollary \ref{cor:classbin} and states that, when $n$ is large enough, $\C_{f_{\delta,s}}$ is not an MRD-code. \begin{theorem}\label{th:applMRD} Let $q$ be a prime power and $n,s$ be two relatively prime positive integers. \begin{itemize} \item If $n=3$, then $\C_{f_{\delta,s}}$ is an MRD-code if and only if Equation {\rm \eqref{eq:eq2degree}} admits two distinct roots in $\F_q$; see {\rm \cite{CMPZ}} and {\rm \cite{PZ2019}}. \item If $n=4$, $q$ odd and $\delta^2=-1$ then $\C_{f_{\delta,s}}$ is an MRD-code; see {\rm \cite{CMPZ}}. \item If \[ n\geq\begin{cases} 4s+2 & \textrm{if}\; q=3\textrm{ and }s>1,\,\textrm{or}\;q=2\textrm{ and }s>2, \\ 4s+1 & \textrm{otherwise}; \end{cases} \] then, for every $\delta\in\mathbb{F}_{q^{2n}}^*$, $\C_{f_{\delta,s}}$ is not an MRD-code and its minimum distance is $2n-2$. \end{itemize} \end{theorem}
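As a sanity check of Sheekey's correspondence in miniature (the choices $q=2$, $m=3$, $f(x)=x^{q}$ and the modulus $x^3+x+1$ are illustrative, not from the text), the following Python sketch writes each map $x\mapsto af(x)+bx$ as a $3\times3$ matrix over $\F_2$ and verifies that every nonzero map has rank at least $m-1=2$, so that $\C_f$ attains the Singleton-like bound with parameters $(3,3,2;2)$.

```python
# Sheekey's construction in miniature: q = 2, m = 3, f(x) = x^2 (scattered),
# with F_8 = F_2[x]/(x^3 + x + 1).  Each map x -> a*f(x) + b*x becomes a 3x3
# matrix over F_2; C_f is MRD iff every nonzero map has rank >= m - 1 = 2.
MOD, DEG = 0b1011, 3

def gmul(a, b):
    """Carry-less multiplication modulo x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << DEG):
            a ^= MOD
    return r

def rank(vectors):
    """Rank over GF(2) of a list of bit-vector integers (Gaussian elimination)."""
    pivots = {}  # leading-bit position -> reduced basis vector
    for v in vectors:
        while v:
            hb = v.bit_length() - 1
            if hb not in pivots:
                pivots[hb] = v
                break
            v ^= pivots[hb]
    return len(pivots)

def images(a, b):
    """Images of the basis 1, x, x^2 under x -> a*x^2 + b*x, as bit-vectors."""
    return [gmul(a, gmul(e, e)) ^ gmul(b, e) for e in (1, 2, 4)]

ranks = {rank(images(a, b))
         for a in range(1 << DEG) for b in range(1 << DEG) if a or b}
assert min(ranks) == 2  # minimum rank distance m - 1 = 2: C_f is MRD
```

Here $|\C_f|=2^6=q^{m(m-d+1)}$, so the minimum rank $2$ is exactly what the Singleton-like bound allows; a map of rank $\leq1$ would correspond to a point of weight $\geq2$ in $L_f$.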
\section{Introduction} The Tevatron is a $p\bar{p}$ collider, with collisions occurring at 1.96 TeV center of mass energy. As a hadron collider, the Tevatron has access to all bottom species, including $B^{0}$, $B^{+}$, $B^{0}_{s}$, $B^{+}_{c}$ and $\Lambda^{0}_{b}$ hadrons. The hadronic detector environment is complex, with large amounts of background. The CDF experiment employs sophisticated triggers to select $B$ decays. Additionally, CDF's precise momentum and vertexing resolution facilitate a variety of $B$ physics measurements, ranging from lifetime and $CP$ violation studies to $B$ hadron spectroscopy measurements. \section{Forward-backward asymmetry in $\boldsymbol{B\rightarrow K^{(*)}\mu\mu}$} Flavor changing neutral current (FCNC) processes can occur via penguin diagrams in the standard model. The transition $b \rightarrow s\ell\ell$, for instance, is an FCNC process, present in the decays $B^{+}\rightarrow K^{+}\mu^{+}\mu^{-}$, $B^{0}\rightarrow K^{*0}\mu^{+}\mu^{-}$ and $B_{s}\rightarrow \phi\mu^{+}\mu^{-}$. The rates for these decays could be enhanced by new physics contributions to the penguin diagrams. This would consequently alter the differential branching ratio and forward-backward asymmetry for these decays from the standard model predictions. $B^{+}\rightarrow K^{+}\mu^{+}\mu^{-}$, $B^{0}\rightarrow K^{*0}\mu^{+}\mu^{-}$ and $B_{s}\rightarrow \phi\mu^{+}\mu^{-}$ decays are reconstructed using 4.4 fb$^{-1}$ of data from a di-muon trigger. The observed signal yields for the three decay modes are 120$\pm$16, 101$\pm$12, and 27$\pm$6 events, respectively. This measurement is the first observation of $B_{s}\rightarrow \phi\mu^{+}\mu^{-}$ decays at a 6.3$\sigma$ significance.
The absolute branching ratios of the three modes are measured to be: $BR(B^{+}\rightarrow K^{+}\mu^{+}\mu^{-})$ = 0.38$\pm$0.05(stat)$\pm$0.03(syst)$\times$10$^{-6}$, $BR(B^{0}\rightarrow K^{*0}\mu^{+}\mu^{-})$ = 1.06$\pm$0.14(stat)$\pm$0.09(syst)$\times$10$^{-6}$, $BR(B^{0}_{s}\rightarrow \phi\mu^{+}\mu^{-})$ = 1.44$\pm$0.33(stat)$\pm$0.46(syst)$\times$10$^{-6}$. The differential branching ratio for the $K^{(*)}$ modes is measured in bins of $q^{2}=M_{\mu\mu}^{2}$, as shown in Figure~\ref{diffbr}. The limits on the standard model expectation are denoted by the red lines. The data points are consistent with these limits. The $K^{*0}$ polarization $F_{L}$ and the forward-backward asymmetries $A_{FB}$ in $B^{0} \rightarrow K^{*0}\mu^{+}\mu^{-}$ and $B^{+}\rightarrow K^{+}\mu^{+}\mu^{-}$ are also measured. An angular analysis is performed to extract $F_{L}$ and $A_{FB}$ in bins of $q^{2}=M_{\mu\mu}^{2}$. Results are shown in Fig.~\ref{AFB}. The standard model expectation is denoted by a red line, and a generic new physics scenario with flipped sign of the Wilson coefficient $C_{7}$ is indicated by a blue line. The precision of the measurement is not adequate to determine which scenario is favored. The results are consistent and competitive with $B$ factory measurements~\cite{aslnote}. \begin{figure}[htbp] \centerline{ \makebox{\includegraphics[width=0.35\textwidth]{dbr_kmm}} \makebox{\includegraphics[width=0.35\textwidth]{dbr_kstmm}} } \caption{Differential branching ratio for $B^{+}\rightarrow K^{+}\mu^{+}\mu^{-}$ (left) and $B^{0}\rightarrow K^{*0}\mu^{+}\mu^{-}$ (right).
The limits for the standard model expectation are shown in red.} \label{diffbr} \end{figure} \begin{figure}[htbp] \centerline{ \makebox{\includegraphics[width=0.33\textwidth]{summary_fl_6bin}} \makebox{\includegraphics[width=0.33\textwidth]{summary_afb_6bin}} \makebox{\includegraphics[width=0.33\textwidth]{summary_afb_6bin_kll}} } \caption{$K^{*0}$ polarization $F_{L}$ and the forward-backward asymmetries $A_{FB}$ in $B^{0}\rightarrow K^{*0}\mu^{+}\mu^{-}$ and $B^{+}\rightarrow K^{+}\mu^{+}\mu^{-}$.} \label{AFB} \end{figure} \section{Measurement of \textit{CP} violating phase $\boldsymbol{\sin2\beta_{s}}$ in $\boldsymbol{B^{0}_{s}\rightarrow J/\psi \phi}$ decays} The $CP$ violating phase $\sin2\beta_{s}$ quantifies the $CP$ violation in the interference between the amplitudes for $B^{0}_{s}\rightarrow J/\psi \phi$ and $B^{0}_{s}\rightarrow \bar{B}^{0}_{s} \rightarrow J/\psi \phi$ decays. In the latter case, the $B^{0}_{s}$ mixes into its antiparticle before decaying, through the exchange of $W$ bosons and off-shell up-type quarks in a mixing box diagram. The standard model expectation for $\sin2\beta_{s}$ is small, but new physics participation in the $B^{0}_{s}$ mixing box diagram could result in a large value for $\sin2\beta_{s}$. The measurement is made on 2.8 fb$^{-1}$ of data collected using the di-muon trigger. Approximately 3000 $B^{0}_{s} \rightarrow J/\psi\phi \rightarrow \mu^{+}\mu^{-}K^{+}K^{-}$ events are reconstructed. An unbinned maximum likelihood fit to mass, lifetime, and final state angular distributions is used to extract $\sin2\beta_{s}$. The measurement's power is enhanced by using flavor tagging algorithms, which determine whether the candidate meson was a $B^{0}_{s}$ or a $\bar{B}^{0}_{s}$ at production. The left plot in Fig.~\ref{bpdG} shows a confidence region in $\beta_{s}$ and $\Delta\Gamma$ space, where $\Delta\Gamma$ is the decay width difference between the light and heavy $B^{0}_{s}$ mass eigenstates.
The standard model expectation lies within the 95\% confidence level, with a p-value of 7\%; that is, assuming the true value is the standard model expectation, the probability of observing data at least as discrepant as these is 7\%~\cite{sin2betasnote}. The CDF measurement has been combined with an analogous measurement from the D\O~experiment. The result is shown in the right plot in Fig.~\ref{bpdG}. The standard model expectation falls outside the 95\% confidence level, indicating a 2$\sigma$ deviation of the contours from the standard model prediction~\cite{combinationnote}. \begin{figure}[htbp] \centerline{ \makebox{\includegraphics[width=0.3\textwidth]{2d_contours}} \makebox{\includegraphics[width=0.38\textwidth]{fig06}} } \caption{$\beta_{s}-\Delta\Gamma$ confidence region, CDF only (left), Tevatron combined result (right). The green bands show the allowed region assuming mixing-induced $CP$ violation.} \label{bpdG} \end{figure} \section{$\boldsymbol{B \rightarrow J/\psi X}$ lifetimes} The measurement of $b$-hadron lifetimes is an important test of heavy quark effective theory. The $B^{+}$, $B^{0}$, and $\Lambda^{0}_{b}$ lifetimes are measured using 4.3 fb$^{-1}$ of data collected with the CDF di-muon trigger. For $B^{+}\rightarrow J/\psi K^{+}$, 45,000$\pm$230 events are reconstructed, for $B^{0}\rightarrow J/\psi K^{*0}$, 16,860$\pm$140 events, for $B^{0}\rightarrow J/\psi K^{0}_{s}$, 12,070$\pm$120 events, and for $\Lambda^{0}_{b}\rightarrow J/\psi\Lambda^{0}$, 1,710$\pm$50 events. A combined fit to mass and lifetime is used to determine the lifetime. The fit projection for the mass is shown in the left-most plot in Fig.~\ref{lifetimefp}. The fit projections for proper time and its error in the signal region are shown in the center and right-most plots.
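The lifetime ratios quoted below follow from the individual lifetimes by standard error propagation for uncorrelated uncertainties; a sketch, using the statistical errors only:

```python
import math

def ratio_with_error(a, sigma_a, b, sigma_b):
    """r = a/b with uncorrelated errors:
    sigma_r = r * sqrt((sigma_a/a)^2 + (sigma_b/b)^2)."""
    r = a / b
    return r, r * math.sqrt((sigma_a / a) ** 2 + (sigma_b / b) ** 2)

# tau(B+)/tau(B0), statistical errors only
r, sigma_r = ratio_with_error(1.639, 0.009, 1.507, 0.010)
```

With the central values below, this reproduces $\tau(B^{+})/\tau(B^{0})$ = 1.088 $\pm$ 0.009 (stat); the quoted systematic error is smaller than naive propagation would suggest, since correlated systematics partially cancel in the ratio.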
The lifetimes, which are the world's best measurements, are the following~\cite{jpslifetimenote}: \begin{itemize} \item{$\tau(B^{+})$ = 1.639 $\pm$ 0.009(stat) $\pm$ 0.009(syst) ps} \item{$\tau(B^{0})$ = 1.507 $\pm$ 0.010(stat) $\pm$ 0.008(syst) ps} \item{$\tau(\Lambda^{0}_{b})$ = 1.537 $\pm$ 0.045(stat) $\pm$ 0.014(syst) ps} \item{$\tau(B^{+})/\tau(B^{0})$ = 1.088 $\pm$ 0.009(stat) $\pm$ 0.004(syst)} \item{$\tau(\Lambda^{0}_{b})/\tau(B^{0})$ = 1.020 $\pm$ 0.030(stat) $\pm$ 0.008(syst).} \end{itemize} \begin{figure}[htbp] \centerline{ \makebox{\includegraphics[width=0.35\textwidth]{Bp-mass}} \makebox{\includegraphics[width=0.35\textwidth]{Bp-time}} } \caption{Fit projections for mass (left) and proper time (right).} \label{lifetimefp} \end{figure} \section{Resonance structure of $\boldsymbol{\Lambda^{0}_{b}\rightarrow \Lambda^{+}_{c}\pi^{-}\pi^{+}\pi^{-}}$} The $\Lambda^{0}_{b}$ baryon can decay through intermediate resonant states before reaching the final state $\Lambda^{+}_{c}\pi^{-}\pi^{+}\pi^{-}$. This analysis provides the first observation of these decay modes and the first measurement of their relative branching fractions. The measurement is performed on 2.4 fb$^{-1}$ of data, collected using the two-track trigger.
The following resonant modes are observed, with yields: \begin{itemize} \item{$\Lambda^{0}_{b}\rightarrow \Lambda_{c}(2595)^{+}\pi^{-} \rightarrow \Lambda^{+}_{c}\pi^{-}\pi^{+}\pi^{-}$, 46.6$\pm$9.7 events} \item{$\Lambda^{0}_{b}\rightarrow \Lambda_{c}(2625)^{+}\pi^{-} \rightarrow \Lambda^{+}_{c}\pi^{-}\pi^{+}\pi^{-}$, 114$\pm$13 events} \item{$\Lambda^{0}_{b}\rightarrow \Sigma_{c}(2455)^{++}\pi^{-}\pi^{-} \rightarrow \Lambda^{+}_{c}\pi^{-}\pi^{+}\pi^{-}$, 81$\pm$15 events} \item{$\Lambda^{0}_{b}\rightarrow \Sigma_{c}(2455)^{0}\pi^{+}\pi^{-} \rightarrow \Lambda^{+}_{c}\pi^{-}\pi^{+}\pi^{-}$, 41.5$\pm$9.3 events} \item{$\Lambda^{0}_{b}\rightarrow \Lambda^{+}_{c}\rho^{0}\pi^{-} + \Lambda^{+}_{c}3\pi(other) \rightarrow \Lambda^{+}_{c}\pi^{-}\pi^{+}\pi^{-}$, 610$\pm$88 events.} \end{itemize} The relative branching fractions are as follows~\cite{lambdabnote}: \begin{itemize} \item{$\frac{BR(\Lambda^{0}_{b}\rightarrow \Lambda_{c}(2595)^{+}\pi^{-} \rightarrow \Lambda^{+}_{c}\pi^{-}\pi^{+}\pi^{-})}{BR(\Lambda^{0}_{b}\rightarrow \Lambda^{+}_{c}\pi^{-}\pi^{+}\pi^{-} (all))}$ = 2.5$\pm$0.6(stat)$\pm$0.5(syst)$\times$ 10$^{-2}$} \item{$\frac{BR(\Lambda^{0}_{b}\rightarrow \Lambda_{c}(2625)^{+}\pi^{-} \rightarrow \Lambda^{+}_{c}\pi^{-}\pi^{+}\pi^{-})}{BR(\Lambda^{0}_{b}\rightarrow \Lambda^{+}_{c}\pi^{-}\pi^{+}\pi^{-} (all))}$ = 6.2$\pm$1.0(stat)$^{+1.0}_{-0.9}$(syst)$\times$ 10$^{-2}$} \item{$\frac{BR(\Lambda^{0}_{b}\rightarrow \Sigma_{c}(2455)^{++}\pi^{-}\pi^{-} \rightarrow \Lambda^{+}_{c}\pi^{-}\pi^{+}\pi^{-})}{BR(\Lambda^{0}_{b}\rightarrow \Lambda^{+}_{c}\pi^{-}\pi^{+}\pi^{-} (all))}$ = 5.2$\pm$1.1(stat)$\pm$0.8(syst)$\times$ 10$^{-2}$} \item{$\frac{BR(\Lambda^{0}_{b}\rightarrow \Sigma_{c}(2455)^{0}\pi^{+}\pi^{-} \rightarrow \Lambda^{+}_{c}\pi^{-}\pi^{+}\pi^{-})}{BR(\Lambda^{0}_{b}\rightarrow \Lambda^{+}_{c}\pi^{-}\pi^{+}\pi^{-} (all))}$ = 8.9$\pm$2.1(stat)$^{+1.2}_{-1.0}$(syst)$\times$ 10$^{-2}$} \end{itemize} \section{$\boldsymbol{\Upsilon(1S)}$ Polarization} Measurements 
of the $J/\psi$ and $\Upsilon(1S)$ polarization are used to test the predictions of non-relativistic QCD (NRQCD). The $J/\psi$ polarization measurement disagrees with theory, making $\Upsilon(1S)$ polarization the subject of much interest. The measurement is made on 2.9 fb$^{-1}$ of data, collected with a di-muon trigger. The polarization parameter $\alpha$ is determined by studying the distribution of $|\cos(\theta^{*})|$, where $\theta^{*}$ is the angle associated with the positive muon from $\Upsilon(1S) \rightarrow \mu^{+} \mu^{-}$. By comparing the observed angular distributions in different $p_{T}(\Upsilon)$ bins with Monte Carlo templates for fully transverse and fully longitudinal polarization (as shown in the left plot in Fig.~\ref{upsipol}), the polarization can be determined. The polarization parameter $\alpha$ is shown as a function of $p_{T}$ in the right plot in Fig.~\ref{upsipol}, with the NRQCD prediction in green. The data are in poor agreement with theory at high $p_{T}$~\cite{upsinote}. \begin{figure}[htbp] \centerline{ \makebox{\includegraphics[width=0.27\textwidth]{upsi1S_pt1_fit}} \makebox{\includegraphics[width=0.52\textwidth]{Y1S_alpha_withNRQCD}} } \caption{Polar angle distribution for $\mu^{+}$ with transverse (red) and longitudinal (blue) Monte Carlo templates (left). Polarization as a function of $p_{T}$, with NRQCD predictions (green).} \label{upsipol} \end{figure}
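The template fit can be illustrated with a simple moment-based toy. For an angular distribution $dN/d|\cos\theta^{*}| \propto 1+\alpha\cos^{2}\theta^{*}$ on $[0,1]$ (with $\alpha=+1$ fully transverse and $\alpha=-1$ fully longitudinal, an assumed convention), the second moment is $\langle\cos^{2}\theta^{*}\rangle = (1/3+\alpha/5)/(1+\alpha/3)$, which can be inverted for $\alpha$; a toy estimator only, since the actual analysis fits Monte Carlo templates including detector acceptance:

```python
def alpha_from_moment(m):
    """Invert <cos^2 theta*> = (1/3 + alpha/5) / (1 + alpha/3)
    for the polarization parameter alpha (+1 transverse,
    -1 longitudinal); m is the measured second moment."""
    return (1.0 / 3.0 - m) / (m / 3.0 - 1.0 / 5.0)
```

For a flat distribution ($m=1/3$) this gives $\alpha=0$; $m=2/5$ gives $\alpha=1$ and $m=1/5$ gives $\alpha=-1$.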
\section{Introduction} Populations of biological cells frequently show stochastic switching between alternative phenotypic states. This phenomenon is particularly well-studied in bacteria and bacteriophages, where it is known as phase variation \cite{woude2004}. Phase variation often affects cell surface features, and its evolutionary advantages are believed to involve evading attack from host defense systems (e.g. the immune system) and/or ``bet-hedging'' against sudden catastrophes which may wipe out a particular phenotypic type. Switching between different phenotypic states is controlled by an underlying genetic regulatory network, which randomly flips between alternative patterns of gene expression. Several different types of genetic network are known to control phase variation---these include DNA inversion switches, DNA methylation switches and slipped strand mispairing mechanisms \cite{woude2004,blomfield2001,lim2007}. In this paper, we study a simple model for a genetic network that allows switching between two alternative states of gene expression. Its key feature is that it includes a linear feedback mechanism between the switch state and the flipping rate. When the switch is active, an enzyme is produced and the rate of switching is linearly proportional to the copy number of this enzyme. The statistical properties of this model are made non-trivial by this feedback, leading, among other things, to non-Poissonian behaviour that may be of advantage to cells in surviving in certain dynamical environments. Our model is very generic and does not aim to describe any specific molecular mechanism in detail, but rather to determine in a general way the consequences of the linear feedback for the switching statistics. Motivated by the fact that cells often contain multiple copies of a particular genetic regulatory element, due to DNA replication or DNA duplication events during evolution, we also consider the case of two identical switches in the same cell. 
We find that the two copies of the switch are coupled and may exhibit interesting and potentially important correlations or anti-correlations. Our model switch is fundamentally different from bistable gene networks that have been the subject of previous theoretical interest. In fact, as we shall show, our switch is not bistable but is intrinsically unstable in each of its two states. Before discussing our model in detail, we provide a brief overview of the basic biology of genetic networks and summarise some previously considered models for genetic switches. Genetic networks are interacting, many-component systems of genes, RNA and proteins, that control the functions of living cells. Genes are stretches of DNA ($\sim$1000 base pairs long in bacteria), whose sequences encode particular protein molecules. To produce a protein molecule, the enzyme complex RNA polymerase copies the gene sequence into a messenger RNA (mRNA) molecule. This is known as transcription. The mRNA is then translated (by a ribosome enzyme complex) into an amino acid chain which folds to form the functional protein molecule. The production of a specific set of proteins from their genes ultimately determines the phenotypic behaviour of the cell. Phenotypic behaviour can thus be controlled by turning genes on and off. Regulation of transcription (production of mRNA) is one important way of achieving this. Transcription is controlled by the binding of proteins known as transcription factors to specific DNA sequences, known as operators, usually situated at the beginning of the gene sequence. These transcription factors may be activators (which enhance the transcription of the gene they regulate) or repressors (which repress transcription, often by preventing RNA polymerase binding). A given gene may encode a transcription factor that regulates itself or other genes, leading to complex networks of transcriptional interactions between genes. 
There has been much recent interest among both physical scientists and biologists in deconstructing complex genetic networks into modular units \cite{alon}, and in seeking to understand their statistical properties using theory and simulation~\cite{sneppen,alonbook}. Of particular interest is the fact that genetic networks are intrinsically stochastic, due to the small numbers of molecules involved in gene expression \cite{elowitz,swain}. This can give rise to heterogeneity in populations of genetically and environmentally identical cells \cite{elowitz}. For some genetic networks, this heterogeneity is ``all-or-nothing'': the population splits into two distinct sub-populations, with different states of gene expression. Such networks are known as bistable genetic switches: they have two possible long-time states, corresponding to alternative phenotypic states. Well-known examples are the switch controlling the transition from the lysogenic to lytic states in bacteriophage $\lambda$ \cite{lambdap,OKSCA05}, and the lactose utilisation network of the bacterium {\em{Escherichia coli}} \cite{ozbudak}. Several simple mechanisms for achieving bistability have been studied, including pairs of mutually repressing genes \cite{CA00,gardner}, positive feedback loops \cite{farrell} and mixed feedback loops \cite{FH05}. Such bistable genetic networks can allow long-lived and binary responses to short-lived signals---for example, when a cell is triggered by a transient signal to commit to a particular developmental pathway. Theoretical treatments of bistable genetic networks usually consider the dynamics of the copy number (or concentration) of the regulatory proteins involved. This affects the activation state of the genes, which in turn influences the rate of protein production. The macroscopic rate equation approach \cite{Keller95} provides a deterministic (mean-field) description of the dynamics that ignores fluctuations in protein copy number or gene expression state. 
This approach, applied to a switch with two mutually repressing genes, has shown that co-operative binding of regulatory proteins is an important factor in generating bistability \cite{CA00}. Other studies have shown, however, that bistability can be achieved even when the deterministic equations have only one solution, due to stochasticity and fluctuations in protein numbers \cite{LLBB06,artyomov}. An alternative approach is to study the dynamics of stochastic flipping between two stable states using stochastic simulations \cite{warren2004,WtW05,morelli2008}, by numerically integrating the master equation \cite{LLBB07}, or by path integral-type approaches \cite{aurell}. This dynamical problem bears some resemblance to the Kramers problem of escape from a free energy minimum \cite{Bialek00,KE01}, and one expects on general grounds that the typical time spent in one of the bistable states should be exponentially large in the typical number of proteins present in the state. This has been confirmed, at least for cooperative toggle switches formed of mutually repressing genes \cite{warren2004,WtW05}. From the perspective of statistical physics, interesting questions arise concerning the distribution of escape times and the connection to first passage properties of stochastic processes. In this paper, however, we are concerned with an intrinsically different situation from these bistable genetic networks. The molecular mechanisms controlling microbial phase variation typically involve a binary element that can be in either of two states. For example, this may be a short fragment of DNA that can be inserted into the chromosome in either of two orientations, a repeated DNA sequence that can be altered in its number of repeats, or a DNA sequence that can have two alternative patterns of methylation \cite{woude2004}. 
The flipping of this element between its two states is stochastic, with a flipping rate that is controlled by various regulatory proteins, the activity of which may be influenced by environmental factors. We shall consider the case where a feedback exists between the switch state and the flipping rate. This is particularly interesting from a statistical physics point of view because it leads to non-Poissonian switching behaviour, as we shall show. Our work has been motivated by several examples. The {\it fim} system in uropathogenic strains of the bacterium {\em{E. coli}} controls the production of Type 1 fimbriae (or pili), which are ``hairs'' on the surface of the bacterium. Individual cells switch stochastically between ``on'' and ``off'' states of fimbrial production \cite{woude2004,wolf2002,CB07,gally1993}. The key feature of the {\it fim} switch is a short piece of DNA that can be inserted into the bacterial DNA in two possible orientations. Because this piece of DNA contains the operator sequence for the proteins that make up the fimbriae, in one orientation, the fimbrial genes are transcribed and fimbriae are produced (the ``on'' state) and in the other orientation, the fimbrial genes are not active and no fimbriae are produced (the ``off'' state). The inversion of this DNA element is mediated by recombinase enzymes. Feedback between the switch state and the switch flipping rate arises because the FimE recombinase (which flips the switch in the on to off direction), is produced more strongly in the on switch state than in the off state. This phenomenon is known as orientational control \cite{kulasekara1999,joyce2002,hinde2005}. The production of a second type of fimbriae in uropathogenic {\em{E. coli}}, Pap pili, also phase varies, and is controlled by a DNA methylation switch \cite{woude2004,blomfield2001,low1987}. 
Here, the operator region for the genes encoding the Pap pili can be in two states, in which the DNA is chemically modified (methylated) at different sites, and different binding sites are occupied by the regulatory protein Lrp. Switching in this system is facilitated by the PapI protein, which helps Lrp to bind \cite{Nou1995}. Feedback between the switch state and the flipping rate arises because the production of PapI itself is activated by the protein PapB, which is only produced in the ``on'' state \cite{Goransson1989,woude2004,blomfield2001}. A common feature of the above examples is the existence of a feedback mechanism: in the {\it fim} system this occurs through orientational control, and in the {\it{pap}} system, through activation of the {\em{papI}} gene by PapB. In this paper, we aim to study the role of such feedback within a simple, generic model of a binary genetic switch. We shall assume that the feedback is linear, and we thus term our model a ``linear feedback switch''. In a recent publication \cite{VAE08}, we introduced a simple mathematical model of a DNA inversion genetic switch with orientational control, which was inspired by the {\it fim} system. Our model reduces to the dynamics of the number of molecules of a ``flipping enzyme'' $R$, which mediates switch flipping, along with a binary switch state. Enzyme $R$ is produced only in the on switch state. As the copy number of $R$ increases, the on to off flipping rate of the switch increases and this results in a non-Poissonian flipping process with a peak in the lifetime of the on state. The model is linear in the sense that the rate at which the switch is turned off is a linear function of the number of enzymes $R$ which it produces. In our previous work \cite{VAE08}, we imagined enzyme $R$ to be a DNA recombinase, and the two switch states to correspond to different DNA orientations, in analogy with the {\em{fim}} system. 
However, the same model could be used to describe a range of molecular mechanisms for binary switch flipping with feedback between the switch state and flipping rate, and can thus be considered a generic model of a genetic switch with linear feedback. In our recent work \cite{VAE08}, we obtained exact analytical expressions for the steady state enzyme copy number for our model switch with linear feedback, in the particular case where the flipping enzyme switches only in the on-to-off direction (this being the relevant case for {\it{fim}}). We also calculated the flip time distribution for this model analytically. Conceptually, such a calculation is reminiscent of the study of persistence in statistical physics \cite{Majumdar99} where, for example, one asks about the probability that a spin in an Ising system has not flipped up to some time \cite{DBG94}. For the flip time distribution, we introduced different measurement ensembles according to whether one starts the time measurement from a flip event (the Switch Change Ensemble) or from a randomly selected time (the Steady State Ensemble). In the present paper, we build on this work to present the full solution of the general case of the model and to extend our study of its persistence properties. The introduction of a rate for the enzyme-mediated off-to-on flipping ($k^{\textrm{off}}_3$) has the most significant effect on the flip time distributions $F(T)$, as illustrated in Figs. \ref{fig:diagram} and \ref{fig:diagramk3off0}, where we show the parameter range over which a peak is found in $F(T)$ for zero and non-zero $k^{\textrm{off}}_3$. We also prove an important relation between the two measurement ensembles defined in \cite{VAE08} and use it to show that a peak in the flip time distribution only occurs in the Switch Change Ensemble and not in the Steady State Ensemble. We find that the non-Poissonian behaviour of this model switch leads to interesting two-time autocorrelation functions.
We also study the case where we have two copies of the switch in the same cell and find that these two copies may be correlated or anticorrelated, depending on the parameters of the model, with potentially interesting biological implications. The paper is structured as follows. In section II we define the model, describe its phenomenology, and show that a ``mean-field'', deterministic version of the model has only one steady state solution. In section III we present the general solution for the steady state statistics and in section IV we study first passage-time properties of the switch; technical calculations are left to the appendices. In section V we consider two coupled model switches and we present our conclusions in section VI. \section{The model} We consider a model system with a flipping enzyme $R$ and a binary switch $S$, which can be either on or off (denoted respectively as $S_\textrm{on}$ and $S_\textrm{off}$). Enzyme $R$ is produced (at rate $k_2$) only when the switch is in the on state, and is degraded at a constant rate $k_1$, regardless of the switch state. This represents protein removal from the cell by dilution on cell growth and division, as well as specific degradation pathways. Switch flipping is assumed to be a single step process, which can either be catalysed by enzyme $R$, with rate constants $k^{\textrm{on}}_3$ and $k^{\textrm{off}}_3$ and linear dependence on the number of molecules of $R$, or can happen ``spontaneously'', with rates $k^{\textrm{on}}_4$ and $k^{\textrm{off}}_4$. We imagine that the ``spontaneous'' switching process may in fact be catalysed by some other enzyme whose concentration remains constant and which is therefore not modelled explicitly here. Our model, which is shown schematically in Fig. 
\ref{fig:sketch}, is defined by the following set of biochemical reactions: \begin{subequations} \label{eq:react} \begin{align} \label{eq:reacta} R & \stackrel{k_1}{\longrightarrow}\emptyset & S_{\textrm{on}} & \stackrel{k_2}{\longrightarrow} S_{\textrm{on}}+R \\ \label{eq:reactb} S_{\textrm{on}} + R & \xrightleftharpoons[k^{\textrm{off}}_3]{k^{\textrm{on}}_3} S_{\textrm{off}} + R & S_{\textrm{on}} & \xrightleftharpoons[k^{\textrm{off}}_4]{k^{\textrm{on}}_4} S_{\textrm{off}}\,\,. \end{align} \end{subequations} \begin{figure} \includegraphics[width=\columnwidth,clip=true]{sketch_pre} \caption{\label{fig:sketch}(colour online) A schematic illustration of the model DNA inversion switch. } \end{figure} \subsection{Phenomenology}\label{sec:phen} \begin{figure*} \includegraphics[width=\textwidth,clip=true]{plot_sample_varyk3} \caption{\label{fig:sampletraj_k3}(colour online) \textsc{Left}: Typical trajectories of the system when $k^{\textrm{on}}_3=k^{\textrm{off}}_3=k_3$ is increased (from top to bottom $k_3=0.0001$, 0.01 and 1). The other parameters are $k_1=1$, $k_2=100$ and $k^{\textrm{on}}_4=k^{\textrm{off}}_4=k_4=0.1$. Grey shading denotes periods in which the switch is in the on state, and the solid lines denote the number of enzyme molecules, plotted against time. In the bottom panel, the switch flips so fast that the grey shading is only shown in the inset where the trajectory from $k_1 t = [60,61]$ is shown in detail. \textsc{Right}: Probability distribution functions for the number $n$ of $R$ molecules, for parameter values corresponding to the trajectories shown in the left panels. The symbols are the result of numerical simulations (see text for details). The full curves plot the analytical results Eqs. (\ref{eq:ponsol}) and (\ref{eq:poffsol}), which are in perfect agreement with the simulations. 
} \end{figure*} We notice that there are two physically relevant and coupled timescales for our model switch: the timescale associated with changes in the number of $R$ molecules (dictated by the production and decay rates $k_1$ and $k_2$), and that associated with the flipping of the switch (dictated by $k_3$, $k_4$ and the $R$ concentration). We first consider the case where the timescale for $R$ production/decay is much faster than the switch flipping timescale. The top left panel of Fig. \ref{fig:sampletraj_k3} shows a typical dynamical trajectory for parameters in this regime. Here, we plot the number $n$ of $R$ molecules, together with the switch state, against time. This result was obtained by stochastic simulation of reaction set (\ref{eq:react}) using the Gillespie algorithm~\cite{bortz75,gillespie1976}. This algorithm generates a continuous time Markov process which is exactly described by the master equation (\ref{eq:master}). For a given switch state, the number $n$ of molecules of $R$ varies according to reactions (\ref{eq:reacta}). When the switch is in the on state, $n$ grows towards a plateau value, and when the switch is in the off state, $n$ decreases exponentially towards $n=0$. The time evolution of $n$ can thus be seen as a sequence of relaxations towards two different asymptotic steady states, which depend on the switch position. To better understand this limiting case, we can make the assumption that the number of $R$ molecules evolves deterministically for a given switch state. We can then write down deterministic rate equations corresponding to the reaction scheme (\ref{eq:react}). These equations are first order differential equations for $\rho$, the mean concentration of the enzyme. When the switch is on, the rate equation reads \begin{equation} \frac{d \rho}{d t} = -k_1 \rho + k_2 \end{equation} with solution \begin{equation} \rho(t) = \rho(0) e^{-k_1t} + \frac{k_2}{k_1}\left[ 1- {e}^{-k_1 t} \right]\;. 
\end{equation} Thus the plateau density in the on state is given by the ratio \begin{equation} \rho_{\rm on} = k_2/k_1\;, \label{rhoon} \end{equation} and the timescale for relaxation to this density is set by $1/k_1$, where $k_1$ is the rate of degradation of $R$. When the switch is in the off state, the rate equation for $\rho$ reads instead \begin{equation} \frac{d \rho}{d t} = -k_1 \rho \end{equation} and one simply has exponential decay to $\rho=0$ with decay time $1/k_1$. In this parameter regime, switch flipping typically happens when the number of molecules of $R$ has already reached the steady state (as in the top left panel of Fig. \ref{fig:sampletraj_k3}). Thus, the on-to-off switching timescale is given by $1/(\rho_\textrm{on} k^{\textrm{on}}_3 + k^{\textrm{on}}_4)$, where $\rho_{\rm on}$ is the plateau concentration of flipping enzyme when the switch is in the on state, given by Eq.~(\ref{rhoon}). Since the corresponding plateau concentration in the off switch state is zero, the off-to-on switch flipping timescale is simply given by $1/k^{\textrm{off}}_4$. We now consider the opposite scenario, in which switching occurs on a much shorter timescale than relaxation of the enzyme copy number. A typical trajectory for this case is shown in the bottom left panel of Fig. \ref{fig:sampletraj_k3}. Here, switching reactions dominate the dynamics of the model, and the dynamics of the enzyme copy number follows a standard birth-death process, with an effective birth rate given by the enzyme production rate in the on state multiplied by the fraction of time spent in the on state. A more quantitative account of these behaviours is provided later, in Sec.~\ref{sec:pss}. For parameter values between these two extremes, where the timescales for switch flipping and enzyme number relaxation are similar, it is more difficult to provide intuitive insights into the behaviour of the model. A typical trajectory for this case is given in the middle left panel of Fig. \ref{fig:sampletraj_k3}.
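Trajectories of this kind can be generated with a minimal implementation of the Gillespie algorithm for reaction set (\ref{eq:react}); a sketch, with illustrative parameters and no optimisation:

```python
import random

def gillespie(k1, k2, k3on, k3off, k4on, k4off, t_max, seed=1):
    """Sample one trajectory of the reaction scheme in the text:
    n is the copy number of enzyme R, 'on' the binary switch state."""
    rng = random.Random(seed)
    t, n, on = 0.0, 0, True
    times, ns = [0.0], [0]
    while t < t_max:
        # propensities of the three reaction channels
        a_decay = k1 * n                   # R -> 0
        a_prod = k2 if on else 0.0         # S_on -> S_on + R
        a_flip = (k4on + k3on * n) if on else (k4off + k3off * n)
        a_tot = a_decay + a_prod + a_flip
        t += rng.expovariate(a_tot)        # exponential waiting time
        r = rng.uniform(0.0, a_tot)        # pick a channel
        if r < a_decay:
            n -= 1
        elif r < a_decay + a_prod:
            n += 1
        else:
            on = not on
        times.append(t)
        ns.append(n)
    return times, ns
```

At each step the total propensity sets an exponentially distributed waiting time, and one of the three channels (decay, production, switch flip) fires with probability proportional to its propensity.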
Here, we have set the on-to-off and off-to-on switching rates to be identical: $k^{\textrm{on}}_3 = k^{\textrm{off}}_3$ and $k^{\textrm{on}}_4 = k^{\textrm{off}}_4$. We notice that, typically, less time is spent in the on state than in the off state. As soon as the switch flips into the on state, the number of $R$ molecules starts increasing and the on-to-off flip rate begins to increase. Consequently, the number of $R$ molecules rarely reaches its plateau value before the switch flips back into the off state. To illustrate the effects of including the parameter $k^{\textrm{off}}_3$, we also show trajectories for different values of the ratio $r=k^{\textrm{off}}_3/k^{\textrm{on}}_3$ in Fig. \ref{fig:sampletraj_k3off}, for fixed $k^{\textrm{on}}_3$. For small $r$, the amount of enzyme decays to zero in the off state before the next off-to-on flipping event, resulting in bursts of enzyme production. In contrast, when $r$ is $O(1)$, flipping is rapid in both directions so that $p(n)$ is peaked at intermediate $n$. \begin{figure*} \includegraphics[width=\textwidth,clip=true]{plot_sample_varyk3off} \caption{\label{fig:sampletraj_k3off}(colour online) \textsc{Left}: Typical trajectories of the system when $r=k^{\textrm{off}}_3/k^{\textrm{on}}_3$ is increased (from top to bottom $r=0$, 0.5 and 1). The other parameters are $k_1=1$, $k_2=100$, $k^{\textrm{on}}_3=1$ and $k^{\textrm{on}}_4=k^{\textrm{off}}_4=k_4=0.1$. Grey shading denotes periods in which the switch is in the on state, and the solid lines denote the number of enzyme molecules, plotted against time. In the bottom panel, the switch flips so fast that the grey shading is only shown in the inset where the trajectory from $k_1 t = [60,61]$ is shown in detail. \textsc{Right}: Probability distribution functions for the number $n$ of $R$ molecules, for parameter values corresponding to the trajectories shown in the left panels. The symbols are the result of numerical simulations (see text for details).
The full curves plot the analytical results Eqs. (\ref{eq:ponsol}) and (\ref{eq:poffsol}), which are in perfect agreement with the simulations. } \end{figure*} \subsection{Mean-field equations} To explore how the switching behaviour of our model arises, we can write down mean-field, deterministic rate equations corresponding to the full reaction scheme (\ref{eq:react}). These equations describe the time evolution of the mean concentration $\rho (t)$ of $R$ molecules and the probabilities $Q_\textrm{on} (t)$ and $Q_\textrm{off} (t)$ of the switch being in the on and off states. These equations implicitly assume that the mean enzyme concentration $\rho$ is completely decoupled from the state of the switch. Thus correlations between the concentration $\rho$ and the switch state are ignored and the equations furnish a mean-field approximation for the switch. As we now show, this crude type of mean-field description is insufficient to describe the stochastic dynamics of the switch, except in the limit of high flipping rate. Noting that $Q_\textrm{on}(t) + Q_\textrm{off}(t)=1$, the mean-field equations read: \begin{subequations} \begin{equation} \frac{d \rho(t)}{d t}=k_2 Q_\textrm{on}(t) - k_1 \rho(t) \,\,, \end{equation} \begin{multline} \frac{d Q_\textrm{on}(t)}{d t}= (k^{\textrm{off}}_4 + \rho(t) k^{\textrm{off}}_3) (1-Q_\textrm{on}(t)) \\- (k^{\textrm{on}}_4 + \rho(t) k^{\textrm{on}}_3) Q_\textrm{on}(t)\,\,. \end{multline} \end{subequations} The above equations have two sets of possible solutions for the steady--state values of $\rho$ and $Q_\textrm{on}$, but only one has a positive value of $\rho$, and is therefore physically meaningful. 
The result is: \begin{equation} \rho=\frac{ \rho_{\rm on} {k^{\textrm{off}}_3}- ({k^{\textrm{off}}_4}+{k^{\textrm{on}}_4})+\sqrt{\Delta}}{2 ({k^{\textrm{off}}_3}+{k^{\textrm{on}}_3})} \,\,, \label{rho} \end{equation} where \begin{equation} \Delta=( \rho_{\rm on}{k^{\textrm{off}}_3}-({k^{\textrm{off}}_4}+{k^{\textrm{on}}_4}))^2+4 \rho_{\rm on} {k^{\textrm{off}}_4}({k^{\textrm{off}}_3}+{k^{\textrm{on}}_3}) \,\,, \end{equation} and \begin{equation} Q_\textrm{on}= \rho/\rho_{\textrm{on}} \,\,. \end{equation} The most interesting conclusion to be drawn from this mean-field analysis is that there is only one physically meaningful solution. In this solution, the enzyme concentration $\rho$ is less than the plateau value in the on state [$\rho_{\rm on}$ of Eq.(\ref{rhoon})]. Thus reaction scheme (\ref{eq:react}) does not have an underlying bistability. The two states of our stochastic switch evident in Figures \ref{fig:sampletraj_k3} and \ref{fig:sampletraj_k4} for low values of $k_3$ and $k_4$ are not bistable states but are rather intrinsically unstable and transient states, each of which will inevitably give rise to the other after a certain (stochastically determined) period of time. In this sense, our model is fundamentally different from the bistable reaction networks which have previously been discussed \cite{CA00,warren2004,dubnau2006}. On the other hand, in the limit of rapid switch flipping, where $k_3$ or $k_4$ is large, the mean-field description holds and the protein number distribution does show a single peak whose position is well approximated by Eq. (\ref{rho}), as shown in Figures \ref{fig:sampletraj_k3} and \ref{fig:sampletraj_k4} for the case $k_3=1$. 
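As a numerical sanity check, the physical root (\ref{rho}) can be evaluated and substituted back into the mean-field equations. The helper below is an illustrative Python sketch (the function name and parameter values are our own choices):

```python
import math

def mean_field_fixed_point(k1, k2, k3on, k3off, k4on, k4off):
    """Return the physical root rho of the mean-field steady state,
    together with Q_on = rho / rho_on."""
    rho_on = k2 / k1
    # discriminant Delta of the quadratic fixed-point equation
    delta = (rho_on * k3off - (k4off + k4on)) ** 2 \
        + 4.0 * rho_on * k4off * (k3off + k3on)
    rho = (rho_on * k3off - (k4off + k4on) + math.sqrt(delta)) \
        / (2.0 * (k3off + k3on))
    return rho, rho / rho_on
```

Substituting the returned pair back into the rate equations confirms that both time derivatives vanish, and that $\rho < \rho_{\rm on}$, as stated in the text.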
\section{Steady--state statistics} \begin{figure*} \includegraphics[width=\textwidth,clip=true]{plot_sample_varyk4} \caption{\label{fig:sampletraj_k4}(colour online) \textsc{Left}: Typical trajectories of the system when $k^{\textrm{on}}_4=k^{\textrm{off}}_4=k_4$ is increased (from top to bottom $k_4=0.1$, 1 and 100). Other parameters are $k_1=1$, $k_2=100$ and $k^{\textrm{on}}_3=k^{\textrm{off}}_3=k_3=0.001$. In each panel the grey shading denotes that the switch is on and the line plots the number of enzymes against time. In the third panel the grey shading is only shown in the inset, where the trajectory for $k_1 t \in [60,61]$ is detailed. \textsc{Right}: Probability distribution functions of the number of $R$ molecules in the cell for parameter values corresponding to the trajectories shown in the left panels. The symbols are the result of numerical simulations (see text for details). The full curves plot the analytical results Eqs. (\ref{eq:ponsol}) and (\ref{eq:poffsol}) and pass perfectly through the simulation points. } \end{figure*} \subsection{Analytical solution} Returning to the fully stochastic version of the reaction scheme (\ref{eq:react}), we now present an exact solution for the steady--state statistics of this model. A solution for the case where $k^{\textrm{off}}_3 = 0$ was sketched in Ref.~\cite{VAE08}. Here we present a complete solution for the general case where $k^{\textrm{off}}_3 \ne 0$, and we discuss the properties of the steady--state as a function of all the parameters of the system. We first define the probability $p_s(n,t)$ that the system has exactly $n$ enzyme molecules at time $t$ and the switch is in the $s$ state (where $s=\{\textrm{on},\textrm{off}\}$).
The time evolution of $p_s$ is described by the following master equation: \begin{multline} \label{eq:master} \frac{d p_s(n)}{d t}= (n+1) k_1 p_{s}(n+1)+ k^{s}_2 p_{s}(n-1) +n k^{1-s}_3 p_{1-s}(n)\\+ k^{1-s}_4 p_{1-s}(n) - (n k_1 + k^{s}_2 + n k^{s}_3 + k^{s}_4) p_{s}(n) \,\,, \end{multline} where we use the shorthand notations $\{ \textrm{off},\textrm{on} \} \equiv \{0,1\}$, $k^{\textrm{off}}_2 \equiv 0$ and $k^{\textrm{on}}_2 \equiv k_2$. In the steady state, the time derivative in Eq.(\ref{eq:master}) vanishes, and the problem reduces to a pair of coupled equations for $p_{\textrm{on}}$ and $p_{\textrm{off}}$: \begin{subequations} \label{eq:masterp} \begin{multline} \label{eq:masterpon} (n+1) k_1 p_{\textrm{on}}(n+1)+ k_2 p_{\textrm{on}}(n-1) +n k^{\textrm{off}}_3 p_{\textrm{off}}(n)+ k^{\textrm{off}}_4 p_{\textrm{off}}(n) \\= (n k_1 + k_2 + n k^{\textrm{on}}_3 + k^{\textrm{on}}_4) p_{\textrm{on}}(n) \,\,, \end{multline} \vspace*{-0.7cm} \begin{multline} \label{eq:masterpoff} (n+1) k_1 p_{\textrm{off}}(n+1) + n k^{\textrm{on}}_3 p_{\textrm{on}}(n) + k^{\textrm{on}}_4 p_{\textrm{on}}(n) \\= (n k_1 + n k^{\textrm{off}}_3 + k^{\textrm{off}}_4) p_{\textrm{off}}(n)\,\,\,. \end{multline} \end{subequations} To solve the above equations we introduce the generating functions \begin{equation} \label{eq:genfun} G_s (z)= \sum_{n=0}^{\infty} p_s(n) z^n\,\,.
\end{equation} The steady-state equations (\ref{eq:masterp}) can now be written as a set of linear coupled differential equations for $G_s$: \begin{subequations} \label{eq:gz} \begin{gather} \label{eq:gon} {\cal L}_1 G_{\textrm{on}}(z) = {\cal L}_2 G_{\textrm{off}}(z)\,\,,\\ \label{eq:goff} {\cal L}_3 G_{\textrm{off}}(z) = {\cal L}_4 G_{\textrm{on}}(z)\,\,, \end{gather} \end{subequations} where ${\cal L}_i$ are linear differential operators: \begin{subequations} \begin{flalign} {\cal L}_1(z) =& k_1 (z-1) \partial_z - k_2 (z-1) + k^{\textrm{on}}_3 z \partial_z + k^{\textrm{on}}_4\,\,,\\ {\cal L}_2(z)=&k^{\textrm{off}}_3 z \partial_z + k^{\textrm{off}}_4 \,\,,\\ {\cal L}_3(z)=& k_1 (z-1) \partial_z + k^{\textrm{off}}_3 z \partial_z + k^{\textrm{off}}_4\,\,,\\ {\cal L}_4(z)=& k^{\textrm{on}}_3 z \partial_z + k^{\textrm{on}}_4\,\,. \end{flalign} \end{subequations} In order to solve the two coupled Eqs.~(\ref{eq:gz}) it is first useful to take their difference. After simplification this yields the relation: \begin{equation} \label{eq:relgon} \partial_z G_{\textrm{off}}(z) = - \partial_z G_{\textrm{on}}(z) + \frac{k_2}{k_1} G_{\textrm{on}}(z)\,\,. \end{equation} Next, we take the first derivative of (\ref{eq:goff}) and then replace the derivatives of $G_{\textrm{off}}$ using the relation (\ref{eq:relgon}). After some algebra, one finds that $G_{\textrm{on}}$ satisfies the following second-order differential equation: \begin{multline} \label{eq:diff} k_1 (\alpha z - k_1 ) G_{\textrm{on}}''(z) + (k_1 \beta - \gamma z ) G_{\textrm{on}}'(z) - \delta G_{\textrm{on}}(z)=0\,\,, \end{multline} where the Greek letters are combinations of the parameters of the model: \begin{subequations} \begin{align} \alpha &= k_1 + k^{\textrm{on}}_3 + k^{\textrm{off}}_3\,\,,\\ \beta & =k_1+k_2+k^{\textrm{off}}_3 + k^{\textrm{on}}_3 + k^{\textrm{off}}_4 + k^{\textrm{on}}_4\,\,,\\ \gamma &=k_2 (k_1 + k^{\textrm{off}}_3) \,\,,\\ \delta &= k_2 (k_1 + k^{\textrm{off}}_3 + k^{\textrm{off}}_4)\,\,.
\end{align} \end{subequations} We now introduce the new variable \begin{equation} u(z) \equiv u_z=\frac{\gamma}{k_1 \alpha} z - \frac{\gamma}{\alpha^2} =u_0 + z (u_1-u_0)\,\,, \label{udef} \end{equation} and the new parameter combinations: \begin{equation} \zeta=u_0 + \frac{\beta}{\alpha}\,\,, \qquad \eta=\frac{\delta}{\gamma} \,\,. \end{equation} We can now write $G_{\textrm{on}}(z)$ (and $G_{\textrm{off}}(z)$) in terms of the variable $u$ (\ref{udef}) by defining the functions \begin{equation} J_s(u)= G_s(z)\;. \label{Jdef} \end{equation} The differential equation (\ref{eq:diff}) then reads: \begin{equation} u J_\textrm{on}''(u) + (\zeta - u) J_\textrm{on}'(u) - \eta J_\textrm{on}(u) = 0\,\,. \end{equation} Looking for a regular power series solution of the form \begin{equation} J_\textrm{on}(u)= \sum_{n=0}^{\infty} a_n u^n\,\,, \label{Gonu} \end{equation} one obtains the following solution: \begin{equation} \label{eq:gonfull} J_\textrm{on}(u)= a_0 \, _1F_1\left(\eta,\zeta, u \right)\,\,, \end{equation} where $_1F_1$ denotes the confluent hypergeometric function of the first kind, \begin{equation} _1F_1\left(\eta,\zeta, u \right) \equiv \sum_{n=0}^{\infty} \frac{(\eta)_n}{(\zeta)_n} \frac{u^n}{n !} \label{hgdef} \end{equation} and $(x)_n=x (x+1) \dots (x+n-1)$ denotes the Pochhammer symbol. The constant $a_0$ will be determined using the boundary conditions, which we discuss later. We first note that the above result for $J_\textrm{on}(u)$ can be translated into $G_\textrm{on}(z)$ by substituting the expression (\ref{udef}) for $u(z)$ and expanding in powers of $z$: \begin{multline} G_\textrm{on}(z) = \sum_{n=0}^\infty a_n (u_0 + z (u_1-u_0))^n\\ = \sum_{n=0}^\infty a_n \sum_{m=0}^{n} u_0^m [z (u_1-u_0)]^{n-m} \binom{n}{m}\\ = \sum_{n=0}^{\infty} z^n \sum_{m=n}^{\infty} a_m u_0^{m-n} [(u_1-u_0)]^{n} \binom{m}{n} \label{Gonsol} \end{multline} where we have relabelled the indices $n-m \to n$ and $n \to m$ in the last line.
We can identify $p_{\textrm{on}} (n)$ from (\ref{eq:genfun}) as the coefficient of $z^n$ in the above expression: \begin{equation} \label{eq:ponsol} p_{\textrm{on}}(n) =\sum_{m=n}^{\infty} a_m u_0^{m-n} (u_1 - u_0)^n \binom{m}{n}\;. \end{equation} From (\ref{Gonu}) and (\ref{eq:gonfull}) we read off \begin{equation} a_n = \frac{a_0 }{n!}\ \frac{(\eta)_n}{(\zeta)_n}\, . \label{an} \end{equation} Substituting (\ref{an}) in (\ref{eq:ponsol}) we deduce, using the definition of the hypergeometric function (\ref{hgdef}) and noting $(x)_{n+m} = (x)_n (x+n)_m$, that \begin{equation} p_{\textrm{on}}(n) =a_0 \frac{(u_1 - u_0)^n}{n!} \frac{(\eta)_n}{(\zeta)_n} \,_1F_1(\eta+n, \zeta+n,u_0)\,\,. \end{equation} In deriving this expression we have, in fact, established the following identity which will prove useful again later: \begin{equation} _1F_1(\eta, \zeta,u_z) = \sum_{n=0}^{\infty} \frac{z^n (u_1 - u_0)^n}{n!} \frac{(\eta)_n}{(\zeta)_n} \,_1F_1(\eta+n, \zeta+n,u_0)\,\,. \label{Fident} \end{equation} To compute $G_{\textrm{off}}(z)$, we integrate Eq. (\ref{eq:relgon}), which yields, using the form of $J_\textrm{on}(u)$ (\ref{eq:gonfull}): \begin{multline} \label{eq:fastest} G_{\textrm{off}}(z) + G_{\textrm{on}}(z) \\ - a_0 \frac{k_2 (\zeta-1)}{k_1 (\eta-1) (u_1-u_0)} \,_1F_1(\eta-1,\zeta-1,u_z)= \kappa\,\,, \end{multline} where $\kappa$ is our second integration constant. We then have two constants, $a_0$ and $\kappa$, which still need to be determined. The constant $\kappa$ can be found using the normalisation condition $\sum_n (p_{\textrm{on}}(n)+p_{\textrm{off}}(n))=1$, which is equivalent to $G_{\textrm{on}}(1)+G_{\textrm{off}}(1)=1$. Using this condition, we obtain \begin{equation}\label{eq:kappa} \kappa=1-a_0 \frac{k_2 (\zeta-1)}{k_1 (\eta-1) (u_1-u_0)} \,_1F_1(\eta-1,\zeta-1,u_1)\,\,. \end{equation} In order to compute the remaining constant $a_0$, we consider the boundary condition at $z=0$.
From the definition (\ref{eq:genfun}) of the generating function we see that $G_s(z=0) = p_s(n=0)$. Our boundary condition thus reads: \begin{equation} \label{eq:boundary} J_\textrm{on}(u_0)+J_\textrm{off}(u_0)= p_{\textrm{on}}(0)+ p_{\textrm{off}}(0)\,\,. \end{equation} Setting $n=0$ in the master equation Eq. (\ref{eq:masterpon}) [noting that the term in $p_{\textrm{on}}(n-1)$ vanishes] gives $p_{\textrm{off}}(0)$ in terms of $p_{\textrm{on}}(0)$ and $p_{\textrm{on}}(1)$: \begin{equation}\label{eq:ponpoff} p_{\textrm{off}}(0)=\frac{k_2+k^{\textrm{on}}_4}{k^{\textrm{off}}_4} p_{\textrm{on}}(0) - \frac{k_1}{k^{\textrm{off}}_4} p_{\textrm{on}}(1)\,\,. \end{equation} Combining Eqs.(\ref{eq:fastest}) [with $z=0$] and (\ref{eq:kappa}), substituting in Eq. (\ref{eq:boundary}), using Eq. (\ref{eq:ponpoff}) to eliminate $p_{\textrm{off}}(0)$, and finally substituting in expressions for $p_{\textrm{on}}(0)$ and $p_{\textrm{on}}(1)$ from Eq. (\ref{eq:ponsol}), we determine $a_0$: \begin{multline} \label{eq:a0sol} a_0^{-1}=\left(1+ \frac{k_2+k^{\textrm{on}}_4}{k^{\textrm{off}}_4} \right) \, _1F_1(\eta,\zeta,u_0) \\ - \frac{k_1 \eta (u_1-u_0)}{k^{\textrm{off}}_4 \zeta} \, _1F_1(\eta+1,\zeta+1,u_0) \\ -\frac{k_2 (\zeta-1)}{k_1 (\eta-1) (u_1-u_0)} \big[ \, _1F_1(\eta-1,\zeta-1,u_0) \\- \, _1F_1(\eta-1,\zeta-1,u_1) \big] \,\,. \end{multline} The final step in obtaining our exact solution is to provide an explicit expression for $p_{\textrm{off}}(n)$. 
From (\ref{eq:fastest}) we have \begin{multline} G_{\textrm{off}}(z) = \kappa - G_{\textrm{on}}(z) \\ + a_0 \frac{k_2 (\zeta-1)}{k_1 (\eta-1) (u_1-u_0)} \,_1F_1(\eta-1,\zeta-1,u_z)\,\,, \end{multline} and using the identity (\ref{Fident}) we obtain: \begin{multline} \label{eq:poffsol} p_{\textrm{off}}(n)=\kappa \delta_{n,0} + \\ \frac{a_0}{n!} \bigg[\frac{k_2}{k_1} (u_1-u_0)^{n-1} \frac{(\eta)_{n-1}}{(\zeta)_{n-1}} \, _1F_1(\eta+n-1, \zeta+n-1,u_0)\\ - (u_1-u_0)^n \frac{(\eta)_n}{(\zeta)_n} \,_1F_1 (\eta+n,\zeta+n,u_0)\bigg]\,\,, \end{multline} where $\delta_{i,j}$ is the Kronecker delta. Our exact analytical solution (\ref{eq:ponsol}), (\ref{eq:a0sol}) and (\ref{eq:poffsol}) is verified by comparison to computer simulation results in the right panels of Figs. \ref{fig:sampletraj_k3} and \ref{fig:sampletraj_k4}. Here, we plot the probability distribution function for the total number of enzyme molecules: \begin{equation} p(n)=p_{\textrm{on}}(n) + p_{\textrm{off}}(n)\;. \label{p(n)} \end{equation} Computer simulations of the reaction set (\ref{eq:react}) were carried out using Gillespie's stochastic simulation algorithm~\cite{bortz75,gillespie1976}. Perfect agreement is obtained between the numerical and analytical solutions, as shown in Figs. \ref{fig:sampletraj_k3} and \ref{fig:sampletraj_k4}. \subsection{Properties of the steady--state}\label{sec:pss} Having derived the steady--state solution for $p(n)$, we now analyse its properties as a function of the parameters of the model. We choose to fix our units of time by setting $k_1$, the decay rate of enzyme $R$, to be equal to unity (so our time units are $k_1^{-1}$). With these units, the plateau value for the number of enzyme molecules in the on switch state is given by $\rho_\textrm{on} = k_2$. In this section, we will only analyse the case where $\rho_\textrm{on} = 100$. 
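The exact solution can be evaluated directly with standard confluent hypergeometric routines. The Python sketch below is an illustration (not the code used for the figures); it builds $p_{\textrm{on}}(n)$ and $p_{\textrm{off}}(n)$ from Eqs. (\ref{eq:ponsol}), (\ref{eq:kappa}), (\ref{eq:a0sol}) and (\ref{eq:poffsol}), using a smaller value of $k_2$ than in the figures to keep the sums well conditioned.

```python
import numpy as np
from scipy.special import hyp1f1

def exact_pn(k1, k2, k3on, k3off, k4on, k4off, nmax=150):
    """Evaluate p_on(n) and p_off(n) for n = 0..nmax-1 from the exact solution."""
    al = k1 + k3on + k3off                       # alpha
    be = k1 + k2 + k3off + k3on + k4off + k4on   # beta
    ga = k2 * (k1 + k3off)                       # gamma
    de = k2 * (k1 + k3off + k4off)               # delta
    u0 = -ga / al**2
    du = ga / (k1 * al)                          # u1 - u0
    u1 = u0 + du
    zeta = u0 + be / al
    eta = de / ga
    c = k2 * (zeta - 1.0) / (k1 * (eta - 1.0) * du)
    a0 = 1.0 / ((1.0 + (k2 + k4on) / k4off) * hyp1f1(eta, zeta, u0)
                - k1 * eta * du / (k4off * zeta) * hyp1f1(eta + 1, zeta + 1, u0)
                - c * (hyp1f1(eta - 1, zeta - 1, u0)
                       - hyp1f1(eta - 1, zeta - 1, u1)))
    kappa = 1.0 - a0 * c * hyp1f1(eta - 1, zeta - 1, u1)
    # t[n] = du^n/n! (eta)_n/(zeta)_n and s[n] = du^(n-1)/n! (eta)_{n-1}/(zeta)_{n-1}
    t = np.ones(nmax)
    for n in range(1, nmax):
        t[n] = t[n - 1] * du * (eta + n - 1) / (n * (zeta + n - 1))
    s = np.empty(nmax)
    s[0] = (zeta - 1.0) / ((eta - 1.0) * du)
    s[1:] = t[:-1] / np.arange(1, nmax)
    f = np.array([hyp1f1(eta + n, zeta + n, u0) for n in range(nmax)])
    fm = np.array([hyp1f1(eta + n - 1, zeta + n - 1, u0) for n in range(nmax)])
    pon = a0 * t * f
    poff = a0 * ((k2 / k1) * s * fm - t * f)
    poff[0] += kappa
    return pon, poff
```

Useful consistency checks are that $\sum_n p(n)=1$, that all probabilities are non-negative, and, setting $z=1$ in Eq. (\ref{eq:relgon}), that the exact mean obeys $\langle n \rangle = (k_2/k_1)\sum_n p_{\textrm{on}}(n)$.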
To further simplify our analysis, we set $k^{\textrm{on}}_3=k^{\textrm{off}}_3=k_3$ and $k^{\textrm{on}}_4=k^{\textrm{off}}_4=k_4$ (a discussion of the case where $k^{\textrm{off}}_3=0$ and $k^{\textrm{on}}_3 \neq 0$ is provided in Ref. \cite{VAE08}). We then analyse the probability distribution $p(n)$ as a function of the $R$-dependent switching rate $k_3$ and the $R$-independent switching rate $k_4$. The results are shown in the right-hand panels of Fig. \ref{fig:sampletraj_k3} and Fig. \ref{fig:sampletraj_k4}. We consider the three regimes discussed in section \ref{sec:phen}: that in which enzyme number fluctuations are much faster than switch flipping, that where the opposite is true, and finally the regime where the two timescales are similar. In the regime where switch flipping is much slower than enzyme production/decay [$k_1 \gg \left(k^{\textrm{on}}_4 + k_2k^{\textrm{on}}_3/k_1\right)$], the probability distribution $p(n)$ is bimodal. This is easily understandable in the context of the typical trajectories shown in the left top panels in Figs. \ref{fig:sampletraj_k3} and \ref{fig:sampletraj_k4}: in this regime, the number of molecules of $R$ always reaches its steady-state value before the next switch flip occurs. It follows then that $p_{\textrm{on}}(n)$ is a bell-shaped distribution peaked around $k_2/k_1$, while $p_{\textrm{off}}(n)$ is highly peaked around zero, so that the total distribution $p(n)=p_{\textrm{on}}(n) + p_{\textrm{off}}(n)$ is bimodal. In contrast, when switching occurs much faster than enzyme number fluctuations the probability distribution $p(n)$ is unimodal and bell shaped, as might be expected from the trajectories in the bottom left panels of Figs. \ref{fig:sampletraj_k3} and \ref{fig:sampletraj_k4}. 
As discussed in section \ref{sec:phen}, in this regime the number of $R$ molecules behaves as a standard birth-death process with effective birth rate given by $k_2$ multiplied by the average time the switch spends in the on state, and death rate $k_1$. For such a birth-death process the steady state probability $p(n)$ is a Poisson distribution with mean given by the ratio of the birth rate to the death rate. To show that our analytical result reduces to this Poisson distribution, we consider the case where enzyme-mediated switching dominates (as in Fig. \ref{fig:sampletraj_k3}), so that both $k^{\textrm{off}}_3$ and $k^{\textrm{on}}_3$ are much greater than $k_1$. The fraction of time spent in the on state is $k^{\textrm{off}}_3/\left(k^{\textrm{on}}_3+k^{\textrm{off}}_3\right)$, thus the effective birth rate is $k_2 k^{\textrm{off}}_3/\left(k^{\textrm{on}}_3+k^{\textrm{off}}_3\right)$. In the limit $k^{\textrm{on}}_3 \to \infty$ and $k^{\textrm{off}}_3 \to \infty$ with $r=k^{\textrm{off}}_3/k^{\textrm{on}}_3$ constant, one finds that $\eta \to 1$, $\zeta \to 1$, and $u_z \to k_2 r z /[k_1 (1+r)]$. Using the fact that $\,_1F_1(1,1,x)=e^x$, Eq. (\ref{eq:gonfull}) gives, in this limit, \begin{equation} \label{eq:pppp} G_{\textrm{on}}(z)=a_0 \exp \left(\frac{k_2 r z }{k_1 (1+r)} \right)\,\,, \end{equation} which is the generating function of a Poisson distribution with mean $k_2 k^{\textrm{off}}_3/[k_1 (k^{\textrm{on}}_3+k^{\textrm{off}}_3)]$. Plugging this result into Eq. (\ref{eq:fastest}) and taking again the limit $k_3 \to \infty$ [and using that $\,_1F_1(0,0,x)=1$] finally yields the result that $p(n) = p_{\textrm{on}}(n) + p_{\textrm{off}}(n)$ is indeed a Poisson distribution. The same approach can be taken for the case of Fig. \ref{fig:sampletraj_k4}, where $k_3$ is constant, and $k^{\textrm{on}}_4$ and $k^{\textrm{off}}_4$ become very large. 
The probability distribution $p(n)$ then becomes a Poisson distribution with mean $k_2 k^{\textrm{off}}_4/[k_1 (k^{\textrm{on}}_4+k^{\textrm{off}}_4)]$. The above result is only valid when $r \ne 0$. In fact, as shown in Fig. \ref{fig:sampletraj_k3off}, when $r=0$ the distribution of $R$ is peaked at 0 and does not have a Poisson-like shape. Finally, when there is no clear separation of timescales between enzyme number fluctuations and switch flipping, the distribution function for the number of enzyme molecules has a highly non-trivial shape, as shown in the middle panels of Figs. \ref{fig:sampletraj_k3} and \ref{fig:sampletraj_k4}. \section{First passage time distribution} \label{sec:firstpassage} We now calculate the first passage time distribution for our model switch. We define this to be the distribution function for the amount of time that the switch spends in the on or off states before switching. This distribution is biologically relevant, since it may be advantageous for a cell to spend enough time in the on state to synthesise and assemble the components of the ``on'' phenotype (for example, fimbriae), but not long enough to activate the host immune system, which recognises these components. The calculation for the case $k^{\textrm{off}}_3=0$ was sketched in \cite{VAE08}. Here we provide a detailed calculation of the flip time distribution in the more general case $k^{\textrm{off}}_3 \ne 0$. We find that this dramatically reduces the parameter range over which the flip time distribution has a peak. We demonstrate an important relation between the flip time distributions for the two relevant choices of initial conditions (Switch Change Ensemble and Steady State Ensemble). The first passage time distribution is important and interesting from a statistical physics point of view as it is related to ``persistence''. 
Generally, persistence is expressed as the probability that the local value of a fluctuating field does not change sign up to time $t$ \cite{Majumdar99}. For the particular case of an Ising model, persistence is the probability that a given spin does not flip up to time $t$. In our model, the switch state $S$ plays the role of the Ising spin. For other problems, there has been much interest in the long-time behaviour of the persistence probability, which can often exhibit a power-law tail. In our case, however, we expect an exponential tail for the distribution of time spent in the on state, because linear feedback will cause the switch to flip back to the off state after some characteristic time. We are therefore interested not only in the tail of the first passage time distribution, but in its shape over the whole time range. \subsection{Analytical results} We consider the probability $F_{{\textrm{s}}}(T|n_0) {d} T$ that if we begin monitoring the switch at time $t_0$ when there are $n_0$ molecules of the flipping enzyme $R$, it remains from time $t_0 \to t_0+T$ in state ${\textrm{s}}$, and subsequently flips in the time interval $t_0+T \to t_0+T+{d} T$. This probability is averaged over a given ensemble of initial conditions, determined by the experimental protocol for monitoring the switch. Mathematically, the initial condition $n_0$ for switch state ${\textrm{s}}$ is selected according to some probability $W_{{\textrm{s}}}(n_0)$ and we define \begin{equation} F_{{\textrm{s}}}(T) = \sum_{n_0} F_{{\textrm{s}}}(T|n_0) W_{{\textrm{s}}}(n_0) \end{equation} as the flip time distribution for the ensemble of initial conditions given by $W_{{\textrm{s}}}(n_0)$. The most obvious protocol would be to measure the interval $T$ from the moment of switch flipping, so that the times $t_0$ correspond to switch flips and the $T$ are the durations of the on or off switch states. We call this the {\em{Switch Change Ensemble}} ($\textrm{SCE}$). 
In this ensemble, the probability $W^\textrm{SCE}_s$ of having $n$ molecules of $R$ at the time $t_0$ when the switch flips into the ${\textrm{s}}$ state is: \begin{equation}\label{eq:w1} W^\textrm{SCE}_s(n)=\frac{p_{1-s}(n) (n k^{1-s}_3 + k^{1-s}_4)}{\sum_n p_{1-s}(n) (n k^{1-s}_3 + k^{1-s}_4)}\,\,, \end{equation} where for notational simplicity, $s=\{1,0\}$ represents $\{{\mathrm{on,off}}\}$. The numerator of the r.h.s of Eq.(\ref{eq:w1}) gives the steady state probability that there are $n$ molecules present in state $1-s$, multiplied by the flip rate into state $s$. The denominator normalises $W^\textrm{SCE}_s(n)$. We also consider a second choice of initial condition, which we denote the {\em Steady State Ensemble} ($\textrm{SSE}$). Here, the initial time $t_0$ is chosen at random for a cell that is in the ${\textrm{s}}$ state. This choice is motivated by practical considerations: experimentally, it is much easier to pick a cell which is in the ${\textrm{s}}$ state and to measure the time until it flips out of the ${\textrm{s}}$ state, than to measure the entire length of time a single cell spends in the ${\textrm{s}}$ state. The probability $W^\textrm{SSE}_s$ of having $n$ molecules of $R$ at time $t_0$ is then the (normalised) steady-state distribution for the ${\textrm{s}}$ state: \begin{equation}\label{eq:w2} W^\textrm{SSE}_s(n)=\frac{p_{s}(n)}{\sum_n p_{s}(n) }\,\,. \end{equation} To compute the distribution $F(T)$, we first consider the survival probability $h_s^W(n,t)$ that, given that the switch was in state ${\textrm{s}}$ at time $t=0$ (chosen according to ensemble $W$), it is still in state ${\textrm{s}}$ at time $t$ and has $n$ molecules of enzyme $R$. As the ensemble $W$ only enters through the initial condition, we may drop the superscript $W$ in what follows. The evolution equation for $h_s$ is the same as for $p_s(n,t)$, but without the terms denoting switch flipping into the $s$ state.
This removes the coupling between $p_{\textrm{on}}$ and $p_{\textrm{off}}$ that was present in the evolution equations (\ref{eq:masterp}): \begin{subequations} \label{eq:survn} \begin{multline} \label{eq:survonn} \frac{\partial}{\partial t} h_{\textrm{on}}(n,t) = (n+1) k_1 h_{\textrm{on}}(n+1,t) + k_2 h_{\textrm{on}}(n-1,t) \\- (n k_1 + k_2 + n k^{\textrm{on}}_3 + k^{\textrm{on}}_4) h_{\textrm{on}}(n,t) \,\,\,, \end{multline} \begin{multline} \label{eq:survoffn} \frac{\partial}{\partial t} h_{\textrm{off}}(n,t) = (n+1) k_1 h_{\textrm{off}}(n+1,t) - \\(n k_1 + n k^{\textrm{off}}_3 + k^{\textrm{off}}_4)h_{\textrm{off}}(n,t)\,\,\,. \end{multline} \end{subequations} Introducing the generating function \begin{equation} {\tilde h}_s(z,t)=\sum_{n=0}^\infty z^n h_s(n,t)\;, \end{equation} the above equations reduce to: \begin{subequations} \label{eq:surv} \begin{multline} \label{eq:survon} \frac{\partial}{\partial t} {\tilde h}_{\textrm{on}}(z,t) =( k_1 - (k_1 + k^{\textrm{on}}_3 ) z) \partial_z {\tilde h}_{\textrm{on}}(z,t) \\+ (k_2 z - (k_2 + k^{\textrm{on}}_4)) {\tilde h}_{\textrm{on}}(z,t) \,\,\,, \end{multline} \begin{multline} \label{eq:survoff} \frac{\partial}{\partial t} {\tilde h}_{\textrm{off}}(z,t) = ( k_1 - (k_1 + k^{\textrm{off}}_3 ) z) \partial_z {\tilde h}_{\textrm{off}}(z,t) \\- k^{\textrm{off}}_4 {\tilde h}_{\textrm{off}}(z,t)\,\,\,. \end{multline} \end{subequations} We can relate $h$ to $F$ by noting that $\sum_n h_s(n,t) = {\tilde h}_s(1,t)$ is the total probability that the switch has not flipped up to time $t$. Hence, \begin{equation} F_s(t)=- \partial_t {\tilde h}_s (1,t)\;. \label{eq:link} \end{equation} Equations (\ref{eq:surv}) can be solved using the method of characteristics~\cite{courant}.
The result, detailed in Appendix~\ref{app:char}, is: \begin{multline} \label{eq:hton} {\tilde h}_{\textrm{on}}(z,t)= e^{- \omega_\textrm{on} t} e^{k_2 \tau_{\textrm{on}} (z-k_1 \tau_{\textrm{on}}) (1- e^{-t/\tau_{\textrm{on}}})}\\ \times {\widetilde W}(k_1 \tau_{\textrm{on}} + e^{-t/\tau_{\textrm{on}}}(z-k_1 \tau_\textrm{on})) \,\,\,, \end{multline} where $\tau_\textrm{on} = (k_1+k^{\textrm{on}}_3)^{-1}$ and $\omega_\textrm{on}=k^{\textrm{on}}_4+ k_2 (1- k_1 \tau_{\textrm{on}})$. The function ${\widetilde W}$ is the generating function for the distribution of enzyme numbers $W(n)$ at the starting time for the measurement: \begin{equation}\label{eq:ww} {\widetilde W}(z) = \sum_n W(n) z^n\,\,, \end{equation} where $W$ refers to $W^\textrm{SCE}$ or $W^\textrm{SSE}$. The function ${\tilde h}_{\textrm{off}}(z,t)$ can be obtained in an analogous way: this produces the same expression as for ${\tilde h}_{\textrm{on}}$, but with $k_2$ set to zero and with all ``$\textrm{on}$'' superscripts replaced by ``$\textrm{off}$'': \begin{equation} \label{eq:htoff} {\tilde h}_{\textrm{off}}(z,t)= e^{- k^{\textrm{off}}_4 t} {\widetilde W}(k_1 \tau_{\textrm{off}} + e^{-t/\tau_{\textrm{off}}}(z-k_1 \tau_\textrm{off})) \,\,, \end{equation} where $\tau_\textrm{off} = (k_1+k^{\textrm{off}}_3)^{-1}$.
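These solutions can be verified by direct substitution into the evolution equations (\ref{eq:surv}). The symbolic sketch below does this for the illustrative initial condition ${\widetilde W}(z)=z^3$ (all initial weight on $n_0=3$ enzymes, a choice of our own); the rates are kept fully symbolic.

```python
import sympy as sp

z, t = sp.symbols('z t', positive=True)
k1, k2, k3, k4 = sp.symbols('k_1 k_2 k_3 k_4', positive=True)

W = lambda x: x**3          # illustrative initial condition: n0 = 3 enzymes

tau = 1 / (k1 + k3)
omega = k4 + k2 * (1 - k1 * tau)
arg = k1 * tau + sp.exp(-t / tau) * (z - k1 * tau)

# solution for the "on" state survival generating function
h_on = (sp.exp(-omega * t)
        * sp.exp(k2 * tau * (z - k1 * tau) * (1 - sp.exp(-t / tau)))
        * W(arg))
# residual of the "on" evolution equation; should vanish identically
res_on = (sp.diff(h_on, t)
          - (k1 - (k1 + k3) * z) * sp.diff(h_on, z)
          - (k2 * z - (k2 + k4)) * h_on)

# "off" state: k2 -> 0, with k3 and k4 now read as the off-state rates
h_off = sp.exp(-k4 * t) * W(arg)
res_off = (sp.diff(h_off, t)
           - (k1 - (k1 + k3) * z) * sp.diff(h_off, z)
           + k4 * h_off)
```

Both residuals simplify to zero, confirming that the characteristics solution satisfies the corresponding partial differential equation for any polynomial initial generating function of this form.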
We can then obtain the distributions $F_{\textrm{on}}(T)$ and $F_\textrm{off}(T)$ by differentiating the above expressions, according to Eq.(\ref{eq:link}): \begin{multline} \label{eq:font} F_{\textrm{on}}(T)=\exp \left(-(\omega_{\textrm{on}}+1/\tau_{\textrm{on}}) T +k_2 \tau_\textrm{on} (1-k_1 \tau_\textrm{on}) (1-e^{-T/\tau_\textrm{on}}) \right)\\ \times \Bigg\{ \Big[\omega_\textrm{on} e^{T/\tau_\textrm{on}} + k_2 (k_1 \tau_\textrm{on} -1) \Big] {\widetilde W} \left( k_1 \tau_\textrm{on} + e^{-T/\tau_\textrm{on}} (1-k_1 \tau_\textrm{on}) \right)\\ +\left(\frac{1}{\tau_\textrm{on}} - k_1 \right) {\widetilde W}' \left( k_1 \tau_\textrm{on} + e^{-T/\tau_\textrm{on}} (1-k_1 \tau_\textrm{on}) \right) \Bigg\}\,\,, \end{multline} \begin{multline} \label{eq:fofft} F_\textrm{off}(T)= \exp \left(-(k^{\textrm{off}}_4 + 1/\tau_\textrm{off}) T \right)\\ \times \Bigg\{ k^{\textrm{off}}_4 e^{T/\tau_\textrm{off}} {\widetilde W} \left(k_1 \tau_\textrm{off} + e^{-T/\tau_\textrm{off}} (1-k_1 \tau_\textrm{off}) \right)\\ + \left(\frac{1}{\tau_\textrm{off}} - k_1 \right) {\widetilde W}' \left(k_1 \tau_\textrm{off} + e^{-T/\tau_\textrm{off}} (1-k_1 \tau_\textrm{off}) \right) \Bigg\}\,\,. \end{multline} In the above expressions, the function ${\widetilde W}_s$ is given for the steady state ensemble ($\textrm{SSE}$) by \begin{equation} {{\widetilde W}_s}^\textrm{SSE}(z)=G_s(z)/G_s(1) \end{equation} and for the switch change ensemble ($\textrm{SCE}$) by \begin{equation} {{\widetilde W}_s}^\textrm{SCE}(z)=\frac{k^{1-s}_3 z G'_{1-s}(z) + k^{1-s}_4 G_{1-s}(z)}{k^{1-s}_3 G'_{1-s}(1) + k^{1-s}_4 G_{1-s}(1)}\,. \end{equation} \subsection{Relation between SSE and SCE} \label{sec:relation} We now show that a useful and simple relation can be derived between $F^\textrm{SSE}_s(T)$ and $F^\textrm{SCE}_s(T)$. Let us imagine that we pick a random time $t$, chosen uniformly from the total time that the system spends in state ${\textrm{s}}$. The time $t$ will fall into an interval of duration $T$, as illustrated in Fig.
\ref{fig:illustration}. We can then split the interval $T$ into the time $T_1$ before $t$ and the time $T_2$ after $t$, such that $T_1+T_2=T$. \begin{figure} \includegraphics[width=\columnwidth]{sketch_time} \caption{ \label{fig:illustration} Schematic illustration of a possible time trajectory for the switch; $t$ is a random time falling in an interval of total length $T$ and splitting it into two other intervals denoted $T_1$ and $T_2$, as discussed in Section \ref{sec:relation}.} \end{figure} We first note that the probability that our randomly chosen time $t$ falls into an interval of length $T$ is: \begin{equation} \label{eq:plength} P(T)\, dT= \frac{T \,F_s^\textrm{SCE}(T) \, dT}{\int_0^\infty T' \,F_s^\textrm{SCE}(T')\,dT' }\,\,. \end{equation} Eq.(\ref{eq:plength}) expresses the fact that the probability distribution for a randomly chosen flip time $T$ is $F^\textrm{SCE}_s(T) \,dT$, but the probability that our random time $t$ falls into a given segment is proportional to the length of that segment. Since the time $t$ is chosen uniformly within the interval, the probability distribution for $T_2$, for a given $T$, is uniform on $[0,T]$: \begin{equation}\label{eq:cond} P(T_2|T)\,\,dT_2=\frac{\Theta(T-T_2)}{T} \,\,dT_2\,\,. \end{equation} One can now obtain $F^\textrm{SSE}_s$ from $P(T_2|T)$ by integrating Eq.(\ref{eq:cond}) over all possible values of $T$, weighted by the relation (\ref{eq:plength}). This leads to the following relation between $F^\textrm{SCE}$ and $F^\textrm{SSE}$: \begin{equation} \label{eq:linkbis} F^\textrm{SSE}_s(T_2)=\frac{\int_{T_2}^\infty F^\textrm{SCE}_s(T') \,\,dT'}{\int_0^\infty T' F^\textrm{SCE}_s(T') \,\,dT'}\,\,. \end{equation} Taking the derivative with respect to $T_2$, this can be recast as \begin{equation} \label{eq:linkbis2} \frac{d F^\textrm{SSE}_s(T)}{d T}=-\frac{F^\textrm{SCE}_s(T)}{ \langle T \rangle_{\rm SCE}} \end{equation} where $\langle T \rangle_{\rm SCE}$ is simply the mean duration of a period in the ${\textrm{s}}$ state.
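The relation (\ref{eq:linkbis2}) holds for any normalised flip-time distribution, not only those generated by the switch model. As an illustration, the sketch below checks it for a hypothetical Gamma-distributed $F^\textrm{SCE}$ (an arbitrary stand-in, chosen for its closed-form moments):

```python
import math
from scipy.integrate import quad

def f_sce(T):
    """Hypothetical flip-time density: Gamma with shape 2 and rate 1."""
    return T * math.exp(-T)

# mean flip time <T>_SCE
mean_T = quad(lambda T: T * f_sce(T), 0.0, math.inf)[0]

def f_sse(T2):
    """Steady-state-ensemble density from Eq. (linkbis):
    F_SSE(T2) = (1/<T>_SCE) * integral of F_SCE from T2 to infinity."""
    return quad(f_sce, T2, math.inf)[0] / mean_T
```

A central finite difference of `f_sse` then reproduces $-F^\textrm{SCE}_s(T)/\langle T \rangle_{\rm SCE}$, as required by Eq. (\ref{eq:linkbis2}), and `f_sse` integrates to unity, as a probability density must.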
We have verified numerically that the expressions (\ref{eq:font}) and (\ref{eq:fofft}) for $F^\textrm{SSE}_s(T)$ and $F^\textrm{SCE}_s(T)$ derived above do indeed obey the relation (\ref{eq:linkbis2}). This relation can also be understood in terms of backward evolution equations as we discuss in Appendix \ref{sec:bev}. \subsection{Presence of a peak in $F(T)$} We now focus on the shape of the flip time distribution $F(T)$, in particular, whether it has a peak. A peak in $F^{\textrm{SCE}}_{\textrm{on}}(T)$ could be biologically advantageous for two complementary reasons. Firstly, after the switch enters the on state there may be some start-up period before the phenotypic characteristics of the on state are established, so it would be wasteful for flipping to occur before the on state of the switch has become effective. Secondly, the on state of the switch may elicit a negative environmental response, such as activation of the host immune system, so that it might be advantageous to avoid spending too long a time in the on state. For example, in the case of the {\em{fim}} switch, a certain amount of time and energy is required to synthesise fimbriae, and this effort will be wasted if the switch flips back into the off state before fimbrial synthesis is complete. On the other hand, too large a population of fimbriated cells would trigger an immune response from the host, therefore the length of time each cell is in the fimbriated state needs to be tightly controlled. We note that for bistable genetic switches and many other rare event processes, waiting time distributions are exponential (on a suitably coarse-grained timescale). This arises from the fact that the alternative stable states are time invariant in such systems. The presence of a peak in $F^{\textrm{SCE}}_{\textrm{on}}(T)$ for our model switch would indicate fundamentally different behaviour, which occurs because the two switch states in our model are time-dependent. 
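This time dependence can be probed directly by stochastic simulation. The sketch below (in Python) is a minimal Gillespie implementation assuming the reaction scheme of section II: $R$ is produced at rate $k_2$ only in the on state, degraded at rate $k_1$ per molecule, and the switch flips at rate $k^s_3 n + k^s_4$ when $n$ molecules of $R$ are present. It collects the on-state dwell times whose histogram approximates $F^\textrm{SCE}_\textrm{on}(T)$:

```python
import numpy as np

rng = np.random.default_rng(1)

def on_dwell_times(k1, k2, k3on, k3off, k4on, k4off, n_flips=2000):
    """Gillespie simulation of the switch; returns the durations of
    successive on periods (i.e. SCE samples of the on dwell time).
    Assumes k4off > 0 so that the off state with n = 0 can escape."""
    n = 0            # copy number of the enzyme R
    on = False       # current switch state
    t = t_on = 0.0
    dwells = []
    while len(dwells) < n_flips:
        prod = k2 if on else 0.0                      # R production rate
        flip = (k3on * n + k4on) if on else (k3off * n + k4off)
        total = prod + k1 * n + flip
        t += rng.exponential(1.0 / total)
        u = rng.uniform(0.0, total)
        if u < prod:
            n += 1                                    # produce one R
        elif u < prod + k1 * n:
            n -= 1                                    # degrade one R
        else:
            if on:
                dwells.append(t - t_on)               # on period ends
            else:
                t_on = t                              # on period starts
            on = not on
    return np.array(dwells)
```

With $k^{\textrm{on}}_3 = k^{\textrm{off}}_3 = 0$ the flipping is purely spontaneous and the sampled dwell times are exponential with mean $1/k^{\textrm{on}}_4$, which provides a simple sanity check of the sketch.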
The presence of a peak in the distribution $F(T)$ requires the slope of $F(T)$ at the origin to be positive. Applying this condition to the function $F_\textrm{on}$ (\ref{eq:font}) we get: \begin{multline} \label{eq:inequality1} (k_2 k^{\textrm{on}}_3 - (k^{\textrm{on}}_4)^2) {\widetilde W}(1) - k^{\textrm{on}}_3 (k_1 + k^{\textrm{on}}_3 + 2 k^{\textrm{on}}_4) {\widetilde W}'(1) \\- (k^{\textrm{on}}_3)^2 {\widetilde W}''(1) >0\,\,. \end{multline} Eq. (\ref{eq:ww}) allows us to express the derivatives of ${\widetilde W}$ at $z=1$ as functions of the moments of $n$, so that we finally get our condition as a relation between the mean and the variance of the initial ensemble: \begin{multline} \label{eq:inequality} k_2 k^{\textrm{on}}_3 - (k^{\textrm{on}}_4)^2 - k^{\textrm{on}}_3 (k_1 + 2 k^{\textrm{on}}_4) \avg{n}_{W_{\textrm{on}}} \\- (k^{\textrm{on}}_3)^2 \avg{n^2}_{W_\textrm{on}} >0\,\,, \end{multline} where $\avg{\dots}_{W_\textrm{on}}$ denotes an average taken using the weight $W_\textrm{on}$ of Eq. (\ref{eq:w1}) or (\ref{eq:w2}). Analogous conditions can be found for a peak in the off to on waiting time distribution. The moments involved in the above inequality can be computed using the exact results of the previous section. The l.h.s. of (\ref{eq:inequality}) can then be computed numerically for different values of the parameters, to determine whether or not a peak is present in $F(T)$. For the SSE, there is never a peak in the flip time distribution. This follows directly from the relation (\ref{eq:linkbis2}) between the SSE and SCE, which shows that the slope of $F^\textrm{SSE}_s(T)$ at the origin is always negative: \begin{equation} \left. \frac{d F^\textrm{SSE}_s(T)}{d T}\right|_{T=0} = - \frac{F^\textrm{SCE}_s(0)}{\avg{T}_\textrm{SCE}} <0\,\,. \end{equation} Thus a peak in the waiting time distribution cannot occur when the initial condition is sampled in the steady state ensemble.
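Once the moments $\avg{n}_{W_{\textrm{on}}}$ and $\avg{n^2}_{W_\textrm{on}}$ have been obtained from the exact solution, testing the condition (\ref{eq:inequality}) is a one-line evaluation. A minimal sketch (in Python; the moment values used in the examples below are placeholders, not values computed from the generating functions):

```python
def f_on_has_peak(k1, k2, k3_on, k4_on, n_mean, n2_mean):
    """Left-hand side of the peak condition for F_on^SCE:
    positive iff the slope of F_on at T = 0 is positive.
    n_mean and n2_mean are <n> and <n^2> over the weight W_on."""
    lhs = (k2 * k3_on - k4_on**2
           - k3_on * (k1 + 2.0 * k4_on) * n_mean
           - k3_on**2 * n2_mean)
    return lhs > 0.0, lhs
```

For $k^{\textrm{on}}_3 = 0$ the left-hand side reduces to $-(k^{\textrm{on}}_4)^2 < 0$, recovering the fact that purely spontaneous flipping gives an exponential, peakless distribution.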
For the SCE, we tested inequality (\ref{eq:inequality}) numerically and found that a peak in the distribution $F(T)$ is possible for the time spent in the on state ($F_\textrm{on}^\textrm{SCE}$), but not for the off to on waiting time distribution ($F_\textrm{off}^\textrm{SCE}$). This is as expected and can be explained by noting that to produce a peak in $F_s^\textrm{SCE}(T)$, the flipping rate must increase with time in state $s$. In the on state the flipping rate typically does increase with time as the enzyme $R$ is produced, while in the off state the flipping rate decreases in time as $R$ decays. \begin{comment} An intuitive explanation can be provided for the peak in $F_\textrm{on}^\textrm{SCE}$. Once the switch enters the on state, the amount of flipping enzyme $R_1$ begins to increase, and the on to off flipping rate increases concomitantly. The probability of flipping shortly after entering the on state is thus lower than the probability of flipping later, once more molecules of $R_1$ are present. Depending on the parameter values, this may lead to a peak in the distribution $F_\textrm{on}^\textrm{SCE}$. The fact that no peak is present in either the SCE ensemble for the off to on flip time distribution $F_\textrm{off}(T)$ might be due to the fact that when the switch is in the off state, the amount of flipping enzyme $R_1$ and hence the flipping rate {\em decreases} after the on to off flip. Therefore, the probability of flipping shortly after the previous flip is higher than the probability of flipping later, when the amount of $R_1$ has dropped off, and no peak is expected. \end{comment} We now discuss the general conditions for the occurrence of a peak in $F_\textrm{on}^\textrm{SCE}$. 
We first recall from section \ref{sec:pss} that in the regime where the copy number of the enzyme $R$ relaxes much faster than the switch flips [$k_1 \gg k^{\textrm{on}}_4 + k_2k^{\textrm{on}}_3/k_1$], the plateau level of $R$ is reached rapidly after entering the on state, so that the flipping rate out of the on state is essentially constant. This leads to effectively exponentially distributed flip times from the on state, so that no peak is expected. In the opposite regime, where switch flipping is much faster than $R$ number relaxation [large $k_3$], we again expect Poissonian statistics and therefore exponentially distributed flip times. Thus it will be in the intermediate range of $k_3$ that a peak in the flip time distribution may occur. The exact condition for this (\ref{eq:inequality}) is not particularly transparent as the dependence on the parameters is implicit in the values of the $\avg{n}_{W_{\textrm{on}}}$ and $\avg{n^2}_{W_{\textrm{on}}}$. In particular, the effects of the parameters $k_3$ and $k_2$ are coupled, since the effective $R$-mediated switching rate depends on the copy number of $R$. However, we can give a broad-brush description of what is required. First, the switch should enter the on state with typical values of $n \ll \rho_{\rm on}$ so that there is an initial rise in the value of $n$ and therefore the flipping rate. Second, we expect that the flipping should be predominantly effected by the enzyme $R$ rather than by spontaneous flipping, {\em{i.e.}} $k_3$ should govern the flipping rather than $k_4$. \begin{figure} \includegraphics[width=\columnwidth,clip=true]{plot_diagram_peak} \caption{\label{fig:diagram} Occurrence of a peak in the waiting time distribution sampled in the Switch Change Ensemble. The shaded area delimits the region where there is a peak (here the parameters are: $k_1 =1$, $k_2=10$ and $k^{\textrm{off}}_3=k^{\textrm{on}}_3=k_3$ and $k^{\textrm{off}}_4=k^{\textrm{on}}_4=k_4$).
The dashed line delimits the same region for $k_2=100$. The insets show an instance of the distribution both in the SCE (solid red line) and in the SSE (blue dashed line): (a) There is a peak ($k_2=10$, $k_3=0.1$, $k_4=0.1$); (b) On the transition line, where the slope at the origin vanishes ($k_2=10$, $k_3=0.15$, $k_4=0.209384...$); (c) There is no peak ($k_2=10$, $k_3=0.2$, $k_4=0.35$).} \end{figure} Fig. \ref{fig:diagram} shows the region in the $k_3$--$k_4$ plane where $F_\textrm{on}^\textrm{SCE}$ has a peak, for the case where $k^{\textrm{on}}_3 = k^{\textrm{off}}_3 = k_3$ and $k^{\textrm{on}}_4 = k^{\textrm{off}}_4 = k_4$. These results are obtained numerically, using the inequality (\ref{eq:inequality}). The distribution $F_\textrm{on}^\textrm{SCE}$ is peaked for parameter values inside the shaded region. The insets show examples of the distributions $F_\textrm{on}^\textrm{SCE}(T)$ and $F_\textrm{on}^\textrm{SSE}(T)$ for various parameter values. At the boundary in parameter space between peaked and monotonic distributions (solid line in Fig. \ref{fig:diagram}), $F_\textrm{on}^\textrm{SCE}(T)$ has zero gradient at $T=0$ (inset (b)). The dashed line in Fig. \ref{fig:diagram} shows the position of the boundary for a larger value of the enzyme production rate $k_2$. As $k_2$ increases, the range of values of $k_3$ for which there is a peak decreases. Increasing $k_2$ increases the amount of enzyme present, which will increase both the off to on and on to off switching frequency, since here $k^{\textrm{on}}_3=k^{\textrm{off}}_3=k_3$. Thus it appears that approximately the same qualitative behaviour can be obtained for smaller values of $k_3$ when $k_2$ is increased. \begin{figure} \includegraphics[width=\columnwidth,clip=true]{plot_diagram_peak_k3off_0} \caption{\label{fig:diagramk3off0} Same plot as Fig. \ref{fig:diagram} but for $k^{\textrm{off}}_3=0$.
The shaded area delimits the values of $k_4$ and $k^{\textrm{on}}_3$ (with $k_2=10$) for which there is a peak in the flip time distribution. The dashed line is the separation line for $k_2=100$. The examples in the insets have as parameters $k_2=10$ and: (a) $k^{\textrm{on}}_3=15$, $k_4=0.15$; (b) $k^{\textrm{on}}_3=50$, $k_4=0.162383...$; (c) $k^{\textrm{on}}_3=80$, $k_4=0.4$. } \end{figure} In our previous paper \cite{VAE08}, we analysed the case where $k^{\textrm{off}}_3=0$: {\em{i.e.}} the flipping enzyme $R$ switches only in the on to off direction. This case applies to the {\em{fim}} system. Fig. \ref{fig:diagramk3off0} shows the analogous plot, as a function of $k^{\textrm{on}}_3$ and $k_4$, when $k^{\textrm{off}}_3=0$. The region of parameter space where a peak occurs in $F_\textrm{on}^\textrm{SCE}(T)$ is much wider than for nonzero $k^{\textrm{off}}_3$. In this case an increase of $k_2$ produces a {\em{larger}} range of parameter values $k^{\textrm{on}}_3$ for which there is a peak (dashed line in Fig. \ref{fig:diagramk3off0}). Here, the off to on switching process is $R$-independent, and is mediated by $k_4$ only (since $k^{\textrm{off}}_3=0$). The typical initial amount of $R$ present on entering the on state is thus not much affected by $k_2$, although the plateau level of $R$ increases with $k_2$. Therefore, as $k_2$ increases, the enzyme copy number in the on state becomes more time-dependent, increasing the likelihood of finding a peak. \begin{figure} \includegraphics[width=\columnwidth,clip=true]{plot_diagram_peak_vary_k3off.pdf} \caption{\label{fig:diagramk_vary_3off} Diagram showing the occurrence of a peak when the ratio $r=k^{\textrm{off}}_3/k^{\textrm{on}}_3$ is varied. Here $k_1=1$ and $k_2=10$. The inset shows a zoom of the plot in the vicinity of $k^{\textrm{on}}_3=0$.} \end{figure} The comparison between Figs.
\ref{fig:diagram} and \ref{fig:diagramk3off0} suggests that the relative magnitudes of the $R$-mediated switching rates in the on to off and off to on directions, $k^{\textrm{on}}_3$ and $k^{\textrm{off}}_3$, play a major role in determining the parameter range over which $F_\textrm{on}^\textrm{SCE}$ is peaked. This observation is confirmed in Fig. \ref{fig:diagramk_vary_3off}, where the boundary between peaked and unpeaked distributions is plotted in the $k^{\textrm{on}}_3$--$k_4$ plane for various ratios $r=k^{\textrm{off}}_3/k^{\textrm{on}}_3$. The larger the ratio $r$, the smaller the region in parameter space where there is a peak. An intuitive explanation for this might be that as $r$ increases, the typical initial number of $R$ molecules in the on state increases, so that less time is needed for the $R$ level to reach a steady state, resulting in a weaker time-dependence of the on to off flipping rate and less likelihood of a peak occurring in $F(T)$. If the presence of a peak in $F_\textrm{on}^\textrm{SCE}$ is indeed an important requirement for such a switch in a biological context, then we would expect that a low value of $k^{\textrm{off}}_3$, as is in fact observed for the {\em{fim}} system, would be advantageous. \section{Correlations} A peaked distribution of waiting times is by no means the only potentially useful characteristic of this type of switch. In this section, we investigate two other types of behaviour that may have important biological consequences: correlations between successive flips of a single switch, and correlated flips of multiple switches in the same cell. We analyse these novel phenomena using numerical methods. We introduce a new correlation measure which enables us to quantify the extent of the correlation as a function of the parameter space.
Our main findings are that a single switch shows time correlations which appear to decay exponentially, and that two switches in the same cell can show correlated or anti-correlated flipping behaviour depending on the values of $k^{\textrm{off}}_3$ and $k^{\textrm{on}}_3$. \subsection{Correlated flips for a single switch} Biological cells often experience sequences of environmental changes: for example, as a bacterium passes through the human digestive system it will experience a series of changes in acidity and temperature. It is easy to imagine that evolution might select for gene regulatory networks with the potential to ``remember'' sequences of events. The simple model switch presented here can perform this task, in a very primitive way, because it produces correlated sequences of switch flips: the amount of $R$ enzyme present at the start of a particular period in state ${\textrm{s}}$ depends on the recent history of the system. In contrast, for bistable gene regulatory networks, or other bistable systems, successive flipping events are uncorrelated, as long as the system has enough time to relax to its steady state between flips. In our recent work \cite{VAE08}, we demonstrated that successive switch flips can be correlated for our model switch, and that this correlation depends on the parameter $k^{\textrm{off}}_3$: correlation increases as $k^{\textrm{off}}_3$ increases. Here, we extend our study and introduce a new measure of these correlations: the two-time probability $p(s,t; s',t')$ that the switch is in position $s$ at time $t$ and in position $s'$ at time $t'$. In the steady state the two-time probability depends only on the time difference $\tau=t-t'$.
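In simulations, the two-time probabilities are estimated from long switch trajectories. A minimal sketch (in Python; it assumes the trajectory has already been resampled as a 0/1 array on a uniform time grid, rather than kept in the event-driven form produced by the simulation):

```python
import numpy as np

def two_time_probs(x, k):
    """x : 0/1 array of switch states sampled every dt; k : lag in samples.
    Returns estimates of p_on-on(tau), p_off-off(tau) and p_on,
    with tau = k * dt, from a single stationary trajectory."""
    a, b = x[:-k], x[k:]
    p_onon = float(np.mean(a * b))            # both on, lag tau apart
    p_offoff = float(np.mean((1 - a) * (1 - b)))  # both off, lag tau apart
    p_on = float(np.mean(x))                  # stationary on probability
    return p_onon, p_offoff, p_on
```

These three estimates are precisely the ingredients of the autocorrelation measure introduced below.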
In order to compare different simulation results, we define the auto-correlation function: \begin{equation} C(\tau)= \frac{p_{\textrm{on}-\textrm{on}}(\tau)}{p_\textrm{on}}+\frac{p_{\textrm{off}-\textrm{off}}(\tau)}{p_\textrm{off}} -1, \label{Ctau} \end{equation} where $p_{\textrm{on}-\textrm{on}}(\tau) = p(\textrm{on},t; \textrm{on}, t+\tau)$, $p_{\textrm{off}-\textrm{off}}(\tau) = p(\textrm{off},t; \textrm{off}, t+\tau)$, and $p_\textrm{on}$ ($p_\textrm{off}$) is the probability of being in the $\textrm{on}$ (off) state. The correlation function (\ref{Ctau}) takes values between $-1$ and $1$, in such a way that it is positive for positive correlations, negative for negative correlations and vanishes if the system is uncorrelated. This function allows us to understand whether, given that the switch is in a given position $s$ at time $t$, it will be in the same state $s$ at a later time $t+\tau$. Fig. \ref{fig:corrone} shows simulation results for different values of $k^{\textrm{on}}_3=k^{\textrm{off}}_3=k_3$ and $k^{\textrm{on}}_4=k^{\textrm{off}}_4=k_4$. As expected, the correlation function vanishes in the limit of large $\tau$, meaning that in this limit there are no correlations. Furthermore, we can see that the strength of the correlations decreases when either $k_3$ or $k_4$ is increased. This is consistent with the previous remark that in the limit of large switching rate ({\em i.e.} either $k_3$ or $k_4$) the distribution of enzyme numbers tends to a Poisson distribution. It is thus not surprising that in this same limit the correlations vanish. In the insets of Fig. \ref{fig:corrone} we plot the same correlation function on a semi-logarithmic scale. The data for the highest values of $k_3$ or $k_4$ (the dotted green curves) is not shown since the decrease is too sharp, and does not allow for a clear interpretation. For the smallest values of $k_3$ and $k_4$ (blue curves), the decay seems to be exponential.
However, for intermediate values of $k_3$ or $k_4$ (dashed red curves) the evidence for an exponential decay is less clear and the issue deserves a more extensive numerical investigation. For the sake of completeness we also show in Fig. \ref{fig:corronek3off} similar data for the case where $k^{\textrm{off}}_3=0$. We find that qualitatively the data has a very similar behaviour to the case where $k^{\textrm{off}}_3=k^{\textrm{on}}_3$. \begin{figure} \includegraphics[width=\columnwidth,clip=true]{plot_corr_one_sw} \caption{ \label{fig:corrone}(colour online) The two-time auto-correlation function $C(\tau)$ for $k_1=1$, $k_2=100$. The insets show the same data on a semi-log scale. \textsc{Top}: $k_4$ is varied with constant $k_3=0.001$. \textsc{Bottom}: $k_3$ is varied with constant $k_4=0.1$. } \end{figure} \begin{figure} \includegraphics[width=\columnwidth,clip=true]{plot_corr_one_sw_k3off} \caption{ \label{fig:corronek3off}(colour online) The correlation function $C(\tau)$ when $k^{\textrm{off}}_3=0$. As previously, $k_1=1$ and $k_2=100$. The data labelled as $a$ corresponds to $k^{\textrm{on}}_3=0.001$ while $b$ corresponds to $k^{\textrm{on}}_3=0.01$. For each $a$ and $b$ the superscripts $1$, $2$ and $3$ refer to different values of $k_4=0.1$, $1$ and $10$ respectively. The inset shows the same plot on a semi-log scale.} \end{figure} \subsection{Multiple coupled switches} Many bacterial genomes contain multiple phase-varying genetic switches, which may demonstrate correlated flipping. For example, in uropathogenic {\em{E. coli}}, the {\em{fim}} and {\em{pap}} switches, which control the production of different types of fimbriae, have been shown to be coupled \cite{holden2004,holden2006}. Although these two switches operate by different mechanisms, it is also likely that multiple copies of the same switch are often present in a single cell. This may be a consequence of DNA replication before cell division (in fast-growing {\em{E.
coli}} cells, division may proceed faster than DNA replication, resulting in up to $\sim 8$ copies per cell). Randomly occurring gene duplication events, which are believed to be an important evolutionary mechanism, might also result in multiple copies of a given switch on the chromosome. It is therefore important to understand how multiple copies of the same switch would be likely to affect each other's function~\cite{ribeiro}. Let us suppose that there are two copies of our model switch in the same cell. Each copy contributes to and is influenced by a common pool of molecules of enzyme $R$. Our model is still described by the set of reactions (\ref{eq:react}), but now the copy number of $S_\textrm{on}$ and $S_\textrm{off}$ can vary between 0 and 2 (with the constraint that the total number of switches is 2). To measure correlations between the states of the two switches (denoted $s_1$ and $s_2$) we define the {\em two switch} joint probability $p_2(s_1,t;s_2,t')$ as the probability that switch 1 is in state $s_1$ at time $t$ and switch $2$ is in state $s_2$ at time $t'$. This function is the natural extension of the previously defined two-time probability for a single switch. Thus, in analogy to (\ref{Ctau}), we can define a two-time correlation function: \begin{equation} C_2(\tau)= \frac{p_2(\textrm{on},t;\textrm{on},t+\tau)}{p_\textrm{on}} +\frac{p_2(\textrm{off},t;\textrm{off},t+\tau)}{p_\textrm{off}}-1\,\,, \end{equation} where $p_\textrm{on}$ ($p_\textrm{off}$) is again the steady--state probability for a single switch to be $\textrm{on}$ ($\textrm{off}$). If the two switches are completely uncorrelated, we expect that $p_2(\textrm{on},t;\textrm{on},t') = p_\textrm{on}^2$ and $p_2(\textrm{off},t;\textrm{off},t') = p_\textrm{off}^2$, so that $C_2(\tau) = 0$ (given that $p_\textrm{on} + p_\textrm{off}=1$). 
In contrast, if the switches are completely correlated, $p_2(\textrm{on},t;\textrm{on},t') = p_\textrm{on}$, $p_2(\textrm{off},t;\textrm{off},t') = p_\textrm{off}$ and $C_2(\tau) = 1$. For completely anti-correlated switches, we expect that $p_2(\textrm{on},t;\textrm{on},t') = p_2(\textrm{off},t;\textrm{off},t') = 0$, and $C_2(\tau) = -1$. In Fig. \ref{fig:twoswitches} we plot the function $C_2(\tau)$ for two identical coupled switches, for several parameter sets. Our results show that for small values of $k_4$, there is correlation between the two switches, over a time period $\approx 10 k_1^{-1}$, which is of the same order as the typical time spent in the on state for these parameter values. Our results also show that the nature of these correlations depends strongly on $k^{\textrm{off}}_3$. In the case where $k^{\textrm{off}}_3=k^{\textrm{on}}_3$ (top panel of Fig. \ref{fig:twoswitches}), one can see that the correlation is positive, meaning that the two switches are more likely to be in the same state. In contrast, when $k^{\textrm{off}}_3$ is set to zero (bottom panel of Fig. \ref{fig:twoswitches}), the correlation is negative, meaning that the two switches are more likely to be in different states. To understand these correlations, consider the extreme situation where both switches are off, and the number of molecules of $R$ has dropped to zero. In this case, the only possible event is a $k_4$-mediated switching event, which could take place, for instance, for the first switch. Then, once the first switch is on, it will start producing more enzyme, and, if $k^{\textrm{off}}_3 \ne 0$, this will enhance the probability for the second switch to flip on too. This might explain why, when $k^{\textrm{off}}_3=k^{\textrm{on}}_3$, we see a positive correlation between the two switches.
On the other hand, if we consider the opposite situation where both switches are on, and the number of molecules of $R$ is around its plateau value, then the on to off switching probability for the two switches will be at its maximum. However, after one of the switches has flipped (e.g. the first), the switching probability will start decreasing, thus reducing the flipping rate for the second switch. This suggests that $k^{\textrm{on}}_3$ may have the effect of inducing negative correlations, while $k^{\textrm{off}}_3$ induces positive correlations. We also point out the presence of a small peak in $C_2(\tau)$ in Fig. \ref{fig:twoswitches} (indicated by the arrow) which suggests the presence of a time delay: when one switch flips, the other tends to follow a short time later. We leave the detailed properties of these correlations and their parameter dependence to future work. \begin{figure} \includegraphics[width=\columnwidth,clip=true]{plot_corr_two_sw} \caption{\label{fig:twoswitches}(colour online) Normalised two-time correlation function $C_2(\tau)$ for two identical switches. The parameter values are: $k_1=1$, $k_2=100$, $k^{\textrm{on}}_3=0.001$. In the top panel $k^{\textrm{off}}_3=k^{\textrm{on}}_3$ while in the bottom panel $k^{\textrm{off}}_3=0$. The parameter $k_4$ is varied from $0.1$ to $100$ in each case.} \end{figure} \section{Summary and Outlook} In this paper we have made a detailed study of a generic model of a binary genetic switch with linear feedback. The model system was defined in section II by the system of chemical reactions (\ref{eq:react}). Linear feedback arises in this switch because the flipping enzyme $R$ is produced only when the switch is in the on state, and the rate of flipping to the off state increases linearly with the amount of $R$. Thus, when the switch is in the on state the system dynamics inexorably leads to a flip to the off state.
We have shown that this effect can produce a peaked flip time distribution and a bimodal probability distribution for the copy number of $R$. A mean field description does not reproduce this phenomenology and so a stochastic analysis is required. We have studied this model analytically, obtaining exact solutions for the steady state distribution of the number of $R$ molecules, as well as for the flip time distributions in the two different measurement ensembles defined in Section \ref{sec:firstpassage}, the Switch Change Ensemble and the Steady State Ensemble. We have shown how these ensembles are related and demonstrated that the flip time distribution in the Switch Change Ensemble may exhibit a peak but the flip time distribution in the Steady State Ensemble can never do so. We also provide a generic relationship between the flip time distribution sampled in the two different ensembles. Given that in single-cell experiments, measuring the flip time distribution in the SCE is much more demanding than in the SSE, our result provides a way to access the SCE flip time distribution by making measurements only in the SSE. Our flip time calculations are reminiscent of persistence problems in non-equilibrium statistical physics where, for example, one is interested in the time an Ising spin stays in one state before flipping. However, because of the linear feedback of our model switch, the flip time distribution is not expected to have a long tail as in usual persistence problems, rather it is the shape of the peak of the distribution which is of interest. By studying numerically the time correlations of a single switch, using the two-time autocorrelator (\ref{Ctau}), we have shown that our model switch can play the role of a primitive ``memory module''. The two-time autocorrelator displays nontrivial behaviour including rather slow decay, which would be worthy of further study.
We have also investigated the behaviour of two coupled switches within the same cell, and showed that both positive and negative correlations could be produced by choosing the parameters appropriately. In particular for $k^{\textrm{off}}_3=0$, as is the case for the {\em fim} switch, anti-correlations were observed, implying that if one switch were on at time $t$, the other would tend to be off at that time and for a subsequent time of about one switch period. Many open questions and problems remain. At a technical level one would like to compute correlations of a single switch analytically and be able to treat the multiple switch system. The model itself could be refined in several ways, for example, by introducing nonlinear feedback\cite{FGM07,PB08}. It has been shown that such feedback allows nontrivial behaviour even at the level of a piecewise deterministic Markov process approximation \cite{PB08}, where one assumes a deterministic evolution for the enzyme concentration, but a stochastic description for the switching. At present our model includes no explicit coupling to the environment, but such coupling could be included in a simple way by adding into the model environmental control of parameters $k_3$ or $k_4$. To make a closer connection to real biological switches, such as {\em{fim}}, one could extend the model to include, for example, multiple and cooperative binding of the enzymes \cite{wolf2002,CB07}. One particularly exciting direction, which we plan to pursue in future work, is to develop models for growing populations of switching cells, in which cell growth is coupled to the switch state. Such models could lead to a better understanding of the role of phase variation in allowing cells to survive and proliferate in fluctuating environments. \\ \begin{acknowledgments} The authors are grateful to Aileen Adiciptaningrum, David Gally and Sander Tans for useful discussions. R. J. A. was funded by the Royal Society of Edinburgh. 
This work was supported by EPSRC under grant EP/E030173. \end{acknowledgments}
\section{Introduction} \label{intro} Agriculture plays an essential role in many developing countries worldwide, especially in Southeast Asia\footnote{\url{https://asean.org/storage/COVID-19-Pandemic-Implications-on-Agriculture-and-Food-Consumption-Final.pdf}}. It accounts for a substantial portion of each country's GDP and employs a large part of its workforce. Among these countries, Vietnam and Thailand have become two of the world's largest agricultural exporters. While rice is one of the best-known exported agricultural products, other commodities, including coffee, cocoa, maize, fruits, and vegetables, also contribute to the region's GDP. For instance, palm oil is one of the leading agricultural products of two ASEAN countries, Indonesia and Malaysia. Insect pests are the greatest threat to crops and agricultural products. Crops such as rice and wheat are easily damaged by insect pests, causing heavy losses to farmers. For this reason, protecting crops and agricultural products from insect pests has become essential in ASEAN countries for maintaining and increasing the volume and quality of agricultural exports. In particular, insect identification is needed for early pest forecasting to prevent further crop damage. Manually identifying insect pests on a large farm using expert human resources is time-consuming and expensive. With the availability of high-quality image capture devices and the achievements of machine learning in pattern recognition, an automated image-based insect pest recognition system promises to reduce labour costs and perform this task more efficiently. However, extracting useful features for image-based insect classification presents several difficulties. It is challenging to derive discriminative features from insect images for classification, since there are many pest species with large variations in size and shape.
Earlier works applied traditional machine learning methods using hand-crafted features such as GIST, HOG, SIFT, and SURF, as proposed in~\cite{oliva2001modeling, dalal2005histograms, lowe2004distinctive, bay2006surf} respectively. However, hand-crafted features cannot adequately represent the wide variation in object shapes. Features based on Convolutional Neural Networks (CNNs) have recently proven highly adaptable to specific computer vision tasks. Following the success of CNN-based features in many classification tasks, notably the ImageNet Large Scale Visual Recognition Challenge (ILSVRC)\cite{russakovsky2015imagenet}, several works have applied CNN-based features to the insect pest classification problem. Wu and colleagues~\cite{Wu_2019_CVPR} showed that CNN-based features can be more effective than hand-crafted features for this task. Secondly, metamorphosing insects pass through many distinct stages (e.g., egg, larva, pupa, and adult) during their lives, and different species may also exhibit substantial inter-species similarity. For each species, an effective method must capture features representing widely varying shapes. No previous research has addressed this problem in insect recognition. In bird breed and car model classification, this problem is handled using fine-grained image classification methods (\cite{DBLP:journals/corr/abs-1903-06150, Chen_2019_CVPR}), which extract discriminative features from informative regions of the object and encode them into vectors for classification. This paper proposes several methods to handle the problems mentioned above in the insect classification problem. \begin{enumerate} \item Firstly, we apply a CNN model with an attention mechanism to make the feature extractor focus on the insects in the input image.
The attention mechanism is necessary since images of insects captured in the field often include a complex background containing leaves, dust, branches, etc. \item Secondly, we utilize a multi-scale CNN model to capture the features of size-variant insects. \item Thirdly, we apply a multi-scale learning method for fine-grained image classification to address the high inter-class similarity problem. \item Finally, we use the soft-voting ensemble method to combine these models to improve performance. \end{enumerate} The rest of the paper is organized as follows. We briefly present related work on insect classification in Section 2 and describe our proposed techniques in Section 3. We compare our methods with previous techniques and present the experimental results and discussion in Sections 4 and 5. The paper ends with our conclusions and future work. \section{Related Work} Recent years have witnessed the notable performance of deep learning methods in various object recognition problems. One of the well-known competitions related to object recognition is the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), where participants compete on object detection and image classification at a large scale. Among the approaches at ILSVRC, convolutional neural networks, including Inception \cite{Szegedy_2015_CVPR} and ResNet \cite{DBLP:journals/corr/HeZR016}, became the winners of the competition. Other works can be found in \cite{Duy_2017,Binh_2018,Hieu_2020}. For the insect classification task, Cheng et al.~\cite{cheng2017pest} presented a new approach using deep residual networks to enhance the performance of crop pest classification based on pest images with complex farmland backgrounds. The proposed technique achieved a classification accuracy of 98.67\% on ten classes of crop pest images, better than a plain deep neural network, AlexNet.
Liu and colleagues~\cite{liu2016localization} proposed a new approach for localizing pest insects in images using saliency detection algorithms and then used a deep convolutional neural network to classify agricultural pest insects. The experimental results showed this technique could surpass previous methods in terms of mean Average Precision (mAP), reaching 0.951. Wang et al.~\cite{wang2017crop} investigated insect pest classification using deep convolutional neural networks on crop pest images. They compared the performance of two selected deep neural networks, LeNet-5 and AlexNet, and measured the effects of both the convolutional kernels and the number of chosen layers on the final classification accuracy in various experiments. Thenmozhi and co-workers \cite{thenmozhi2019crop} studied crop pest classification using Convolutional Neural Networks and measured the performance of their proposed technique against several pre-trained deep learning architectures, including AlexNet, ResNet, GoogLeNet, and VGGNet, on three different datasets (NBAIR, Xie1, and Xie2). The experimental results showed that the proposed technique could outperform the chosen pre-trained methods. Wu et al. \cite{Wu_2019_CVPR} presented a new large-scale benchmark dataset for insect pest recognition (IP102). This dataset has more than 75,000 images belonging to 102 categories, of which about 19,000 images are annotated with bounding boxes for object detection. The authors applied hand-crafted (GIST, SIFT, and SURF) and CNN-based (extracted by AlexNet, VGGNet, GoogLeNet, and ResNet) features to measure the corresponding performance of these methods on IP102.
Ren and colleagues~\cite{ren2019feature} proposed the feature reuse residual network (FR-ResNet) for insect pest recognition, which combines features from the initial layer of a residual block with the residual signal and stacks the resulting feature reuse residual blocks to create the proposed network. Experimental results on IP102 showed improved performance compared to previous methods. Liu et al.~\cite{liu2020deep} also designed a new residual-based block, the deep multi-branch fusion residual network (DMF-ResNet), to learn multi-scale representations. It combines the basic residual and the bottleneck residual architectures into a residual module with multiple branches. The outputs of these branches are concatenated and fed into a new module that adaptively recalibrates responses and models the relationships between the branches. Deep multi-branch fusion residual networks are created by stacking these blocks and are applied to insect pest classification. They measured the performance of the proposed method against other state-of-the-art approaches, and the experimental results illustrated its improvements. Other related works can be found in more detail in~\cite{ayan2020crop,nanni2020insect}. \section{Methodology} In this section, we present different approaches using residual attention networks (RANs), feature pyramid networks (FPNs), multi-branch and multi-scale attention learning networks (MMAL-Nets), and an ensemble technique (ET) to improve the performance of insect pest classification. It is worth noting that these methods have specific advantages: RANs focus on the most crucial regions, and FPNs efficiently address small-scale object problems. In addition, MMAL-Nets improve fine-grained classification for recognizing visually similar objects.
Finally, the ensemble technique helps combine different weak classifiers into a more effective algorithm with better performance. \subsection{Residual networks} Residual networks (ResNet) \cite{DBLP:journals/corr/HeZR016} were proposed by He et al. in 2015. The architecture helps avoid the vanishing gradient problem when training deep neural networks by introducing skip connections between layers. Consequently, the gradient can easily flow back to the input, and the network's weights can be updated. Residual networks are built by stacking multiple residual blocks (as depicted in Fig.\ref{residual}) to create very deep neural networks (up to 1000 layers), depending on the problem. \begin{figure}[t] \centering \captionsetup{justification=centering} \includegraphics[width=0.45\textwidth]{residual.png} \caption{The structure of one residual block in residual networks. Here, $x$ is the input and $f$ is the operation after element-wise addition.} \label{residual} \end{figure} \subsection{Residual attention networks} Wang and colleagues presented residual attention networks \cite{DBLP:journals/corr/WangJQYLZWT17} (RAN) for image classification by adding an attention mechanism to CNNs that helps the networks decide which locations in the image to focus on. The networks are built by stacking multiple attention modules that generate attention-aware features to guide feature learning. The residual attention networks demonstrated their efficiency by surpassing the state-of-the-art image classification methods of the time. In general, each attention module consists of two branches: a trunk branch and a mask branch. The ``trunk branch'' performs feature extraction, and it can be adapted to any network structure. In this work, we use the pre-activation residual block for the feature processing branch.
The ``mask branch'' learns attention masks that softly weigh the output features. This mask acts as a feature selector during the feed-forward phase. The output of an attention module (as depicted in Fig.\ref{attres}) with residual-attention learning can be formulated as \begin{equation} H_{i,c}(x) = (1 + M_{i,c}(x))\times F_{i,c}(x), \end{equation} where $x$ is the input, $i$ ranges over all spatial positions, and $c\in\{1, ..., C\}$ is the channel index. Here, $M(x)$ is the mask branch output and $F(x)$ is the original feature produced by the trunk branch. \begin{figure} \centering \captionsetup{justification=centering} \includegraphics[width=\textwidth]{attres.png} \caption{The structure of an attention module.} \label{attres} \end{figure} \subsection{Feature pyramid networks} Recognizing objects at highly variable scales is challenging in computer vision. Similarly, in insect classification, the scale of an insect in a captured image is usually small. One way to address this problem is to resize a single input image to different scales, constructing a pyramid of multiple input images. However, this makes the designed networks bigger and requires more memory and a longer training time than using a single input image. Feature Pyramid Networks (FPN)\cite{DBLP:journals/corr/LinDGHHB16} are an alternative way to construct a pyramid of features at a small extra cost. An FPN is a feature extractor composed of a bottom-up and a top-down pathway. The bottom-up pathway is the usual feed-forward computation of the backbone Convolutional Neural Network; ResNet is selected in our work. The top-down pathway constructs higher-resolution features by up-sampling feature maps from higher pyramid levels. These features are then added element-wise to features from the bottom-up pathway via lateral connections (as shown in Fig. \ref{gpns} and Fig. \ref{lateral}).
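As a concrete illustration, the top-down merge just described (nearest-neighbour up-sampling of the coarser map followed by element-wise addition of the lateral map) can be sketched in a few lines of numpy. This is a minimal sketch, not the authors' implementation; the $1\times1$ lateral convolution that matches channel counts is omitted.

```python
import numpy as np

def upsample2x(f):
    """Nearest-neighbour 2x up-sampling of a (C, H, W) feature map."""
    return f.repeat(2, axis=1).repeat(2, axis=2)

def topdown_merge(top, lateral):
    """Element-wise sum of the up-sampled top-down map and the lateral
    bottom-up map. In a real FPN the lateral map first passes through a
    1x1 convolution to match channel counts (omitted here)."""
    return upsample2x(top) + lateral

# Toy pyramid: a coarse 4x4 map merged with an 8x8 lateral map.
top = np.ones((256, 4, 4))
lateral = np.full((256, 8, 8), 0.5)
merged = topdown_merge(top, lateral)
print(merged.shape)  # (256, 8, 8)
```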
Finally, we apply a $3\times3$ convolution on each merged map to generate the final feature map. This filter reduces the aliasing effect of up-sampling. As a result, the final feature maps have the same spatial sizes and the same numbers of channels. In our classification model, after generating all pyramid features, we apply global average pooling to each feature map. We then feed them into the classifier to produce the final probability distribution. \begin{figure}[t] \captionsetup{justification=centering} \centering \includegraphics[width=\textwidth]{fpn.png} \caption{Our proposed Feature Pyramid networks' architecture.} \label{gpns} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\textwidth]{lateral.png} \caption{The lateral connection in Feature Pyramid Network.} \label{lateral} \end{figure} \subsection{Multi-branch and multi-scale attention learning network} Fine-grained image classification is a sub-field of object classification in which the classifier distinguishes between visually highly similar objects. The goal is to make the model focus on details, from coarse-level to fine-level features, in order to discriminate similar objects. Much research on fine-grained classification has reached the state of the art on many fine-grained benchmark datasets. In this work, we apply the multi-branch and multi-scale attention learning network (MMAL-Net) \cite{zhang2020three} for fine-grained image classification in our pest classification task. The key to fine-grained classification is to accurately identify informative regions in an image. Usually, this requires localizing the object and its discriminative parts by drawing bounding boxes by hand. In MMAL-Net, no extra annotations are needed: object localization and multiple discriminative part localizations are done automatically, using only the category labels, by two modules: the attention object location module (AOLM) and the attention part proposal module (APPM).
MMAL-Net has three branches in the training phase: a raw branch, an object branch, and a parts branch, all of which use the same ResNet-50 as the feature extractor and dense layers as the classifier. In the raw branch, the network mainly learns the overall characteristics of the object. The AOLM then obtains the object's bounding box information with the help of the feature maps of the raw image from this branch (as visualized in Fig.~\ref{aolm}); thus, accurate object localization is achieved using only category labels. After obtaining the object's bounding box, we crop the input image according to the bounding box's coordinates to obtain a finer-scale image of the object, which is used as the input of the object branch. The object branch then learns to produce the final classification result from an input containing both the structural and the fine-grained features of the object. Additionally, from the feature maps of the object branch, the APPM proposes several part regions of the object, which are used as input for the parts branch (as shown in Fig.~\ref{appm}). The part images cropped from the object image train the network to learn the fine-grained features of different parts at different scales. In the testing phase, the parts branch is disabled, and the final result is obtained by ensembling the branch logits; in our work, we combine the logits from the raw branch and the local branch to obtain the final result.
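The test-time combination of branch outputs described above can be sketched as follows. This is a minimal numpy sketch under the assumption that the branch logits are combined by averaging their softmax probabilities; the logit values below are hypothetical.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def combine_branches(raw_logits, local_logits):
    """Average the class probabilities of the raw and local branches
    and predict the arg-max class (one plausible combination rule)."""
    p = (softmax(raw_logits) + softmax(local_logits)) / 2.0
    return p.argmax(axis=-1), p

raw = np.array([[2.0, 0.5, 0.1]])    # hypothetical raw-branch logits
local = np.array([[1.5, 1.4, 0.2]])  # hypothetical local-branch logits
pred, probs = combine_branches(raw, local)
print(pred)  # [0]
```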
\begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{AOLM.png} \caption{AOLM obtained the bounding box using the feature maps from the feature extractor.} \label{aolm} \end{figure} \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{APPM.png} \caption{APPM proposed the parts image of the object using the feature maps from the feature extractor and the crop image from the raw branch.} \label{appm} \end{figure} \subsection{Ensemble method} Ensemble learning is a machine learning technique that combines low-accuracy models to obtain more accurate predictions. There are many ways to combine models; in this work, we combine multiple models' predictions. Combining the models' prediction results can reduce their variance and the generalization error. Specifically, soft voting is a simple, fast, and reliable ensemble method. In soft voting, we take the sum of all member models' predicted probabilities for each sample and divide by the number of ensemble members; the final result is the class with the highest probability. Suppose there are $m$ member models and a classification task with $n$ labels, and let $P_{ij}$ be the predicted probability of model $i = 1, ..., m$ for label $j = 1, ..., n$. The ensemble result is calculated as follows: \begin{equation} P_j = \frac{\sum^{m}_{i = 1}{P_{ij}}}{m}, \end{equation} where $P_j$ is the predicted probability of class $j$. \section{Experiments} This section presents our experiments and compares the proposed approaches with state-of-the-art methods. We evaluated our proposed models on two datasets, i.e., IP102 and D0, and compared their performance with previous methods using standard evaluation metrics, including precision, recall, F1-score, accuracy, and geometric mean score. \subsection{Datasets} We evaluated our proposed method on two datasets.
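The soft-voting rule for $P_j$ defined in the ensemble subsection above can be sketched directly from the formula. This is a minimal numpy sketch; the member probabilities below are hypothetical.

```python
import numpy as np

def soft_vote(probs_per_model):
    """Soft voting: average the per-model probabilities P_ij over the
    m member models, then pick the arg-max class.
    `probs_per_model` has shape (m, n_samples, n_classes)."""
    p = probs_per_model.mean(axis=0)  # P_j = (1/m) * sum_i P_ij
    return p.argmax(axis=-1)

# Three hypothetical member models, one sample, three classes.
members = np.array([
    [[0.7, 0.2, 0.1]],
    [[0.2, 0.5, 0.3]],
    [[0.6, 0.3, 0.1]],
])
print(soft_vote(members))  # [0]
```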
The first dataset is IP102, a large-scale benchmark dataset presented in~\cite{Wu_2019_CVPR}. It contains 75,222 images of 102 insect pest species. This dataset poses several practical challenges. Firstly, several classes have high intra-class variance, as shown in Fig.~\ref{problem}(a). Secondly, there are images of small-scale insects on noisy backgrounds, as shown in Fig.~\ref{problem}(b). Thirdly, there are images of damaged crop fields, as shown in Fig.~\ref{problem}(c). Finally, there is a significant imbalance among the numbers of samples in the classes. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{hard.png} \caption{Examples of challenges in IP102. Column (a) presents three different worm species that are hard to distinguish. Column (b) shows examples of low-scale insects. Column (c) gives examples of damaged crop fields.} \label{problem} \end{figure} The second dataset is D0, presented in~\cite{xie2018multi}. It contains 4,508 images belonging to 40 insect pest species captured in natural environments. Some examples are shown in Fig.~\ref{d0image}. \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{d0.png} \caption{Examples of Aulacophora indica gmelin in D0.} \label{d0image} \end{figure} \subsection{Experiment settings} Following~\cite{Wu_2019_CVPR}, IP102 is partitioned into three subsets: a training set of 45,095 images, a validation set of 7,508 images, and a testing set of 22,619 images. We applied this setting in our experiments. For D0, we randomly partitioned it into three subsets, i.e., a training set, a validation set, and a testing set, with a ratio of $7:1:2$. We applied pre-processing steps to each input image of size $h\times w$, where $h$ and $w$ are its height and width, respectively. First, we resized the input image to $h'\times w'$ while keeping the aspect ratio of the original image: the smaller of $h$ and $w$ is resized to 256.
The larger dimension is then scaled by the same ratio, preserving the aspect ratio. Secondly, we applied random-crop data augmentation with a window size of $256\times256$ in the training phase to address over-fitting. Finally, we applied a center crop with the same window size in the testing phase. The settings of RAN, FPN, and MMAL-Net are listed in Table \ref{tab:4}, following \cite{DBLP:journals/corr/WangJQYLZWT17,DBLP:journals/corr/LinDGHHB16, zhang2020three}, respectively. We used a ResNet-50 pre-trained on ImageNet to initialize the trainable weights of FPN and MMAL-Net, since they use ResNet-50 as the feature extractor. For RAN, we initialized the weights randomly. In the training phase, we used categorical cross-entropy as the cost function. We utilized the Adam optimizer with an initial learning rate of $10^{-4}$ and $\beta_1$ and $\beta_2$ coefficients of $0.9$ and $0.999$, respectively. We used exponential decay for scheduling the learning rate with a decay rate of $0.96$. The mini-batch size and the maximum number of training epochs were set to 64 and 100, respectively. The training phase was stopped when the performance on the validation set did not improve for ten epochs. We applied the dropout technique with a drop rate of $0.5$ to address over-fitting. To apply the ensemble method to RAN, FPN, and MMAL-Net, we trained them on the same training set and combined their predictions by voting at test time.
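The resize-and-crop pre-processing described above (shorter side scaled to 256 with the aspect ratio preserved, then a random crop for training or a center crop for testing) can be sketched as follows. This is a minimal sketch that only computes target dimensions and crop coordinates; it is not tied to any image library, and the function names are our own.

```python
import random

def resized_dims(h, w, short_side=256):
    """Scale so the smaller side becomes `short_side`,
    keeping the original aspect ratio."""
    if h <= w:
        return short_side, round(w * short_side / h)
    return round(h * short_side / w), short_side

def crop_origin(h, w, size=256, center=True, rng=random):
    """Top-left corner of a size x size crop window:
    center crop for testing, random crop for training."""
    if center:
        return (h - size) // 2, (w - size) // 2
    return rng.randint(0, h - size), rng.randint(0, w - size)

h2, w2 = resized_dims(480, 640)  # a landscape input image
print(h2, w2)                    # 256 341
print(crop_origin(h2, w2))       # (0, 42)
```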
\begin{table} \centering \caption{General training settings for each network} \label{tab:4} \resizebox{0.81\textwidth}{!}{\begin{minipage}{\textwidth} \begin{tabular}{lllll} \hline\noalign{\smallskip} Model/& ResNet50 & RAN & FPN & MMAL-Net\\ Hyper-Parameters&&&&\\ \noalign{\smallskip}\hline\noalign{\smallskip} Learning rate &0.0001 & 0.1 & 0.0001 & 0.001\\ Batch size & 64 & 32 & 32 & 6 \\ Optimizer & Adam & SGD & Adam & SGD\\ & $\beta=(0.9, 0.999)$ & momentum $=0.9$ & $\beta=(0.9, 0.999)$ & momentum $=0.9$\\ Scheduler & Exponential & MultiStep & Exponential & MultiStep\\ & decay rate = 0.96 & decay rate = 0.1 & decay rate = 0.96 & decay rate = 0.1\\ Weight decay & 0.00001 & 0 & 0.00001 & 0.00001 \\ Dropout & 0.3 & 0 & 0 & 0\\ Maximum epochs & 100 & 100 & 100 & 150\\ Input Size & $224\times224$ & $224\times224$ & $224\times224$ & $448\times448$\\ \noalign{\smallskip}\hline \end{tabular} \end{minipage}} \end{table} \subsection{Evaluation metrics} We evaluated our proposed models with several metrics suitable for the class imbalance in IP102 and D0. The metrics consist of the macro-average precision (MPre), the macro-average recall (MRec), the macro-average F1-score (MF1), the accuracy (Acc), and the geometric mean (GM). To treat all classes as equally important, we computed the recall for each category and then averaged them to obtain \text{MRec} as follows: \begin{equation} \text{Rec}_c = \frac{\text{TP}_c}{\text{TP}_c + \text{FN}_c} \end{equation} \begin{equation} \text{MRec} = \frac{\sum_{c=1}^C{\text{Rec}_c}}{C} \end{equation} where \textit{C} is the number of classes, and $\text{TP}_c$ and $\text{FN}_c$ stand for the true positives and false negatives of the \textit{c}-th class, respectively.
Similarly, we computed $\text{Pre}_c$ and $\text{MPre}$ as follows: \begin{equation} \text{Pre}_c = \frac{\text{TP}_c}{\text{TP}_c + \text{FP}_c} \end{equation} \begin{equation} \text{MPre} = \frac{\sum_{c=1}^C{\text{Pre}_c}}{C} \end{equation} where $\text{FP}_c$ stands for the false positives of the \textit{c}-th class. \text{MF1} is the harmonic mean of \text{MRec} and \text{MPre}: \begin{equation} \label{eq:7} \text{MF1} = 2\frac{\text{MPre} \cdot \text{MRec}}{\text{MPre}+\text{MRec}} \end{equation} \text{Acc} is computed from the true positives over all classes: \begin{equation} \label{eq:8} \text{Acc} = \frac{\text{TP}}{N} \end{equation} where \textit{N} is the number of samples. GM is calculated from the sensitivity of each class (denoted as $S_c$, which equals $\text{Rec}_c$): \begin{equation} \text{S}_c = \frac{\text{TP}_c}{\text{TP}_c + \text{FN}_c} \end{equation} \begin{equation} \text{GM} = \prod_{c=1}^{C}{\sqrt[C]{\text{S}_c}} \end{equation} GM equals 0 if at least one $\text{S}_c$ equals 0. To avoid this problem, we replaced any $\text{S}_c$ of 0 with $0.001$. \subsection{Results} We conducted experiments to compare our proposed models with ResNet-50 as a baseline. Table \ref{tab:1} presents the results of those models on IP102. Among the single models, MMAL-Net achieves the best performance on Acc, MPre, MRec, and MF1, with an Acc 1.36 percentage points higher than ResNet-50. However, the GM score of MMAL-Net is slightly lower than those of FPN and ResNet-50, which implies that the predictions of MMAL-Net on the minority classes are less accurate than those on the majority classes. Besides, FPN and ResNet-50 yield comparable results, while RAN achieves the lowest results. Combining RAN, FPN, MMAL-Net, and ResNet-50 with the ensemble method performs the best, 1.98 percentage points better in Acc than MMAL-Net.
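The macro-averaged metrics and the GM floor rule defined above can be computed from per-class counts as in the following sketch. The counts are hypothetical, and the third class is given zero sensitivity to exercise the $0 \to 0.001$ replacement.

```python
import numpy as np

def macro_metrics(tp, fp, fn, floor=0.001):
    """Macro-averaged precision/recall/F1 and the geometric mean of
    per-class sensitivities, replacing zero sensitivities by `floor`."""
    tp, fp, fn = map(np.asarray, (tp, fp, fn))
    rec = tp / (tp + fn)                 # Rec_c (= S_c)
    pre = tp / (tp + fp)                 # Pre_c
    mrec, mpre = rec.mean(), pre.mean()
    mf1 = 2 * mpre * mrec / (mpre + mrec)
    s = np.where(rec == 0, floor, rec)   # keep GM from collapsing to 0
    gm = np.prod(s ** (1.0 / len(s)))
    return mpre, mrec, mf1, gm

# Hypothetical three-class counts; class 3 is never predicted correctly.
mpre, mrec, mf1, gm = macro_metrics(tp=[8, 5, 0], fp=[2, 1, 1], fn=[2, 5, 10])
print(round(mrec, 3))  # recalls 0.8, 0.5, 0.0 -> MRec = 0.433
```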
\begin{table} \centering \caption{The comparison among different proposed models on IP102.} \label{tab:1} \begin{tabular}{llllll} \hline\noalign{\smallskip} Model/Metric & Acc & MPre & MRec & MF1 & GM \\ \noalign{\smallskip}\hline\noalign{\smallskip} ResNet-50 & 70.79 & 62.89 & 65.71 & 63.89 & 59.70 \\ RAN & 62.82 & 55.46 & 57.38 & 56.09 & 51.95 \\ FPN & 70.42 & 62.52 & 64.74 & 63.27 & 59.59 \\ MMAL-Net & 72.15 & 62.63 & 69.13 & 64.53 & 58.43 \\ Ensemble model & \textbf{74.13} & \textbf{65.72} & \textbf{70.74} & \textbf{67.65} & \textbf{62.52} \\ \noalign{\smallskip}\hline \end{tabular} \end{table} Similarly, we conducted experiments to compare these models on D0. Table~\ref{tab:2} presents the results. Overall, the results on D0 are significantly better than those on IP102. Among the single models, MMAL-Net again yields the best results. FPN has slightly lower performance than ResNet-50, while RAN performs the worst. The ensemble model of RAN, FPN, and MMAL-Net achieves the highest performance. RAN performed significantly worse than the other models on both IP102 and D0, probably because its trainable weights were not initialized from a pre-trained ResNet-50. \begin{table} \centering \caption{The comparison among different proposed models on D0.} \label{tab:2} \begin{tabular}{llllll} \hline\noalign{\smallskip} Model/Metric & Acc & MPre & MRec & MF1 & GM \\ \noalign{\smallskip}\hline\noalign{\smallskip} ResNet-50 & 99.34 & 99.12 & 99.20 & 99.14 & 99.09 \\ RAN & 93.27 & 92.93 & 93.57 & 93.06 & 92.65 \\ FPN & 99.23 & 99.08 & 99.09 & 99.07 & 99.06 \\ MMAL-Net & 99.56 & 99.48 & 99.50 & 99.48 & 99.46 \\ Ensemble model & \textbf{99.78} & \textbf{99.66} & \textbf{99.71} & \textbf{99.68} & \textbf{99.65} \\ \noalign{\smallskip}\hline \end{tabular} \end{table} We compared our proposed method with previous methods, as shown in Table \ref{tab:3}.
For IP102, we compared with ResNet-50 as implemented in~\cite{Wu_2019_CVPR} and with variants of ResNet, i.e., FR-ResNet and DMF-ResNet, proposed in~\cite{ren2019feature} and~\cite{liu2020deep}, respectively. The results show that our MMAL-Net outperforms those models. In addition, our ensemble of RAN, FPN, and MMAL-Net is significantly better than the ensemble methods proposed in~\cite{nanni2020insect} and~\cite{ayan2020crop}. For D0, our proposed models are better than those proposed in~\cite{xie2018multi} and~\cite{ayan2020crop}. One can see that our implemented ResNet-50 is significantly better than the one implemented in~\cite{Wu_2019_CVPR}. The main difference between the two models is that we applied the random-crop augmentation technique and the Adam optimizer, while~\cite{Wu_2019_CVPR} did not use augmentation and used the Stochastic Gradient Descent optimizer. \begin{table} \centering \caption{The comparison between our proposed models and the previous works.
(EM: ensemble method)} \label{tab:3} \resizebox{0.81\textwidth}{!}{\begin{minipage}{\textwidth} \begin{tabular}{cllccc} \hline\noalign{\smallskip} Dataset& Name & Method & Acc & MF1 & GM \\ \noalign{\smallskip}\hline\noalign{\smallskip} IP102&\citet{Wu_2019_CVPR} & ResNet-50 & 49.4 & 40.1 & 31.5\\ &\citet{ren2019feature} & FR-ResNet & 55.2 & 54.1 & - \\ &\citet{liu2020deep} & DMF-ResNet & 59.1 & 58.1 & - \\ & \textbf{Ours} & \textbf{MMAL-Net} & \textbf{72.2} & \textbf{64.6} & \textbf{58.4} \\ &\citet{nanni2020insect} & Saliency method & 61.4 & - & - \\ && + CNNs + EM&&&\\ &\citet{ayan2020crop} & CNNs + EM & 67.1 & 65.8 & - \\ &\textbf{Ours} & \textbf{RAN + FPN } & \textbf{74.1} & \textbf{67.7} & \textbf{62.5}\\ &&\textbf{+ MMAL-Net + ResNet50}&&&\\ \noalign{\smallskip}\hline\noalign{\smallskip} D0&\citet{xie2018multi} & MLLF + MKB & 89.3 & - & - \\ & \textbf{Ours} & \textbf{MMAL-Net} & \textbf{99.6} & \textbf{99.5} & \textbf{99.5} \\ &\citet{thenmozhi2019crop} & CNNs & 96.0 & - & - \\ &\citet{ayan2020crop} & CNNs + EM & 98.8 & 98.8 & - \\ &\textbf{Ours} & \textbf{RAN + FPN} & \textbf{99.8} & \textbf{99.7} & \textbf{99.7} \\ &&\textbf{+ MMAL-Net + ResNet50}&&&\\ \noalign{\smallskip}\hline \end{tabular} \end{minipage}} \end{table} Tables \ref{tab:class_name_IP102} and \ref{tab:class_name_D0} list the top 10 classes with the lowest accuracy using ResNet-50 on IP102 and D0, respectively. As visualized in Figs. \ref{fig:fig10a_ResNet_IP102}, \ref{fig:fig10b_MMAL_IP102}, and \ref{fig:fig10c_Ensemble_IP102}, the performance of MMAL-Net and our ensemble method is much better than that of ResNet-50 for all top-1, top-3, and top-5 accuracies on these classes. Between MMAL-Net and the proposed ensemble method, our ensemble model achieves better performance on these 10 classes of IP102. Similar behavior can be seen on D0, as depicted in Figs.
\ref{fig:fig11a_ResNet_D0}, \ref{fig:fig11b_MMAL_D0}, and \ref{fig:fig11c_EM_D0}. \begin{table} \centering \caption{The top 10 classes having the lowest accuracy using ResNet-50 on the dataset IP102. } \label{tab:class_name_IP102} \begin{tabular}{llll} \hline\noalign{\smallskip} Number & Species & Accuracy (\%) \\ &&from ResNet50\\ \noalign{\smallskip}\hline\noalign{\smallskip} 1 & Therioaphis maculata Buckton & 16.67 \\ 2 & Beet fly & 25.00 \\ 3 & Green bug & 25.51\\ 4 & Polyphagotarsonemus latus & 26.92\\ 5& Large cutworm & 27.03\\ 6 & Rice shell pest & 27.64\\ 7& English grain aphid & 32.49\\ 8 & White margined moth & 33.33\\ 9 & Bird cherry-oat aphid & 34.27\\ 10 & Mango flat beak leafhopper & 35.71\\ \noalign{\smallskip}\hline \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{fig10a_ResNet_IP102.png} \caption{The performance of ResNet-50 with the top 10 classes mentioned in Table \ref{tab:class_name_IP102} in the dataset IP102.} \label{fig:fig10a_ResNet_IP102} \end{figure} \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{fig10b_MMAL_IP102.png} \caption{The performance of MMAL-Net with the top 10 classes mentioned in Table \ref{tab:class_name_IP102} in the dataset IP102.} \label{fig:fig10b_MMAL_IP102} \end{figure} \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{fig10c_Ensemble_IP102.png} \caption{The performance of our proposed ensemble method with the top 10 classes mentioned in Table \ref{tab:class_name_IP102} in the dataset IP102.} \label{fig:fig10c_Ensemble_IP102} \end{figure} \begin{table} \centering \caption{The top 10 classes having the lowest accuracy using ResNet-50 on the dataset D0.
} \label{tab:class_name_D0} \begin{tabular}{llll} \hline\noalign{\smallskip} Number & Species & Accuracy (\%)\\ && from ResNet50 \\ \noalign{\smallskip}\hline\noalign{\smallskip} 1 & Nilaparvata lugens St\aa l & 84.62\\ 2 & Pieris rapae Linnaeus& 85.71\\ 3 & Dolycoris baccarum Linnaeus&88.24\\ 4& Dryocosmus kuriphilus Yasumatsu & 90.00\\ 5& Halyomorpha halys St\aa l & 90.00\\ 6 & Luperomorpha suturalis Chen&90.00\\ 7& Riptortus pedestris Fabricius &90.91\\ 8 & Laodelphax striatellus Fall\'en &91.67\\ 9 & Chauliops fallax Scott &92.31\\ 10 & Plutella xylostella Linnaeus &92.31\\ \noalign{\smallskip}\hline \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{fig11a_ResNet_D0.png} \caption{The performance of ResNet-50 with the top 10 classes mentioned in Table \ref{tab:class_name_D0} in the dataset D0.} \label{fig:fig11a_ResNet_D0} \end{figure} \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{fig11b_MMAL_D0.png} \caption{The performance of MMAL-Net with the top 10 classes mentioned in Table \ref{tab:class_name_D0} in the dataset D0.} \label{fig:fig11b_MMAL_D0} \end{figure} \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{fig11c_EM_D0.png} \caption{The performance of our proposed ensemble method with the top 10 classes mentioned in Table \ref{tab:class_name_D0} in the dataset D0.} \label{fig:fig11c_EM_D0} \end{figure} \subsection{Visualization with Grad-CAM} This section presents visualizations for our proposed models showing where each model focuses on the input image to make its predictions. We utilized Gradient-weighted Class Activation Mapping (Grad-CAM), proposed in~\cite{DBLP:journals/corr/SelvarajuDVCPB16}. In object classification, Grad-CAM uses the gradient of a given target class, flowing into the final convolutional layer of the feature extractor, to produce class activation maps (CAMs).
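The CAM computation just described can be sketched as follows: the channel weights are the global-average-pooled gradients, and the map is the ReLU of the weighted sum of activation channels. This is a framework-agnostic numpy sketch with hand-picked toy values; obtaining the actual activations and gradients requires a deep learning framework.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM: weight each activation channel A_k by the global-
    average-pooled gradient alpha_k of the target class score, sum
    over channels, and keep only positive evidence (ReLU)."""
    weights = gradients.mean(axis=(1, 2))             # alpha_k, shape (C,)
    cam = np.tensordot(weights, activations, axes=1)  # sum_k alpha_k * A_k
    return np.maximum(cam, 0)                         # ReLU

# Toy (C=2, H=2, W=2) activation and gradient maps.
acts = np.array([[[1.0, 0.0], [0.0, 1.0]],
                 [[0.0, 2.0], [2.0, 0.0]]])
grads = np.array([[[1.0, 1.0], [1.0, 1.0]],
                  [[-1.0, -1.0], [-1.0, -1.0]]])
cam = grad_cam(acts, grads)
print(cam)
```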
In our paper, we visualized CAMs using the gradient flowing through the last feature extractor block of each proposed model. \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{grad1.png} \caption{Visualization of Grad-CAMs produced by ResNet-50 and our proposed models. With the input images of IP102 in column (a), Grad-CAMs of ResNet-50 (column (b)), RAN (column (c)), FPN (column (d)) and MMAL-Net (column (e)) are presented.} \label{grad} \end{figure} Fig.~\ref{grad} shows Grad-CAMs produced by ResNet-50, RAN, FPN, and MMAL-Net using input images from IP102 and their correct classes. They show that MMAL-Net performs best at focusing on the insects in the input images, even when the insects are small, as in image 2. ResNet-50 and FPN seem to perform similarly, correctly focusing on the regions containing the insects. On the other hand, RAN seems to focus on larger and less accurate areas to make its predictions. \section{Conclusion and Future Works} In this paper, we have investigated different CNN-based methods, i.e., the Residual Attention Network (RAN), the Feature Pyramid Network (FPN), and the multi-branch and multi-scale attention learning network (MMAL-Net), for recognizing insect pests. Among these methods, MMAL-Net achieves the best accuracy of $72.15\%$ and $99.56\%$ on the IP102 and D0 datasets, respectively. Furthermore, we visually validated that our models focus on the correct regions, even for input images of small-scale insects or noisy backgrounds. By combining the chosen models with the ensemble technique, we obtain better accuracies of $74.13\%$ and $99.78\%$ on IP102 and D0, respectively, surpassing the state-of-the-art methods for insect pest classification on these two datasets. To contribute to the research community, we publish all source code associated with this work at \url{https://github.com/hieuung/Improving-Insect-Pest-Recognition-by-EnsemblingMultiple-Convolutional-Neural-Network-basedModels}.
In future work, we aim to utilize further variants of CNNs to address challenges in insect pest classification and to apply efficient data augmentation methods. \nocite{*} \bibliographystyle{spbasic}
\section{Introduction} There are several ways in which humans and computers can converse, such as speaking (audio) and writing (text). Research in NLP has advanced considerably in understanding textual data, but there is still some way to go in properly handling audio/speech data. Word embeddings are extensively used in NLP applications since they have proven to be an extremely informative representation of textual data. Language models like GloVe \cite{pennington-etal-2014-glove} and Word2Vec \cite{NIPS2013_5021} successfully transform textual words from their raw form into semantically and syntactically meaningful, fixed-dimensional vectors. Analogous representations of spoken words could be widely used to process speech/audio data for tasks like Automatic Summarization {\cite{kageback-etal-2014-extractive}}, Machine Translation {\cite{DBLP:journals/corr/Jansen17a}}, Named Entity Recognition {\cite{10.1007/978-981-13-9409-6_218}}, Sentiment Analysis {\cite{DBLP:journals/corr/Liu17b}}, Information Retrieval {\cite{DBLP:journals/corr/RekabsazMLH17}}, Speech Recognition {\cite{DBLP:journals/corr/abs-1902-06833}}, Question Answering {\cite{Tapaswi_2016_CVPR}}, etc. Compared to text, much less research has been done on audio-based modeling, primarily due to the lack of large, reliable, clean, and publicly available datasets on which spoken-word language models can be trained. Moreover, unlike textual words, spoken words carry different meanings when uttered in different tones, expressions, or accents, and incorporating these factors greatly increases the difficulty of building such language models. Such models must also cope with different people having different pronunciations, tones, and pauses for exactly the same words.
The proposed model aims at generating syntactically and semantically adequate contextualized vector representations of variable-length audio files (instead of fixed-length audio files with multiple word utterances), where each file corresponds to a single spoken word in a speech. We validate the vector representations by evaluating them on three benchmark word-similarity datasets (SimVerb, WS-SIM, WS-REL). To further increase interpretability, this paper also provides illustrations of the vector space generated by the proposed model. \section{Related Work} A lot of work has been done in the field of NLP to give textual words sound vector representations. Word2Vec \cite{NIPS2013_5021} demonstrated huge improvements in embedding sub-linear relationships into the vector space of words, but it is unable to handle out-of-vocabulary words. Another comparable word representation model is GloVe \cite{pennington-etal-2014-glove}, which fits a giant word co-occurrence matrix built from the corpus. GloVe takes semantics into account and produces vectors of relatively small dimension. Recent advances have made it possible to apply deep learning to transform spoken word segments into fixed-dimensional vectors. {\cite{chung2016audio}} uses fixed-length audio files and passes them through a Sequence-to-Sequence Autoencoder (SA) and a Denoising SA (DSA) to generate word embeddings. They demonstrated that phonetically similar words have close spatial representations in the vector space, but their results fell short of those obtained by GloVe trained on Wikipedia. Following this work, {\cite{chung2017learning}} used 500 hours of speech from multiple speakers divided into fixed audio segments and compared the results with GloVe using 13 different comparison measures.
Both {\cite{chung2016audio}} and {\cite{chung2017learning}} failed to capture spoken words properly due to their use of fixed-length audio segments. {\newcite{9060816}} proposed an Audio2Vec model, built on top of the Word2Vec models (Skip-gram \& CBOW), to reconstruct spectrogram slices from contextual slices and temporal gaps. They showed that Audio2Vec performed better than other existing fully supervised models. \section{Model} \begin{figure*}[ht] \centering \includegraphics[width=0.7\textwidth]{model.jpg} \caption{\small{Proposed Model Architecture}} \label{fig:main_model} \end{figure*} The proposed model uses sequential utterances of words from a speech to learn their corresponding contextualized representations. These learned representations capture the semantic and syntactic properties of the spoken words. The input to the model is a speech \(S\), which is split into individual spoken-word utterances (independent variable-length audio files). The proposed model uses audio spectrograms to represent the audio files of these spoken-word utterances. An audio spectrogram is a visual representation of sound. To obtain spectral representations, all spoken-word utterances are converted into their corresponding spectrograms, which depict the spectral density of a sound \textit{w.r.t.} time (in our case, an utterance). The spoken-word utterance spectrograms are represented by \(W_i\) as shown in equation \ref{equ1}. \begin{equation} \label{equ1} S = [W_1, W_2, \ldots, W_n], \quad n \in \mathbb{N} \end{equation} In the above equation, \(n\) is the total number of spoken words present in a sentence of the speech and \(W_i \in \mathbb{R}^{l_1 \times l_2}\) is a spectrogram, where \(l_1\) is the frequency (pitch/tone) dimension and \(l_2\) is the time dimension. The values in the spectrogram represent the amplitude (energy/loudness) at a particular time and frequency.
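A minimal sketch of how such a magnitude spectrogram \(W_i\) can be computed from a raw waveform: the signal is cut into overlapping Hann-windowed frames, and the magnitude of a (naive, \(O(n^2)\)) DFT of each frame gives one time column of the spectrogram. The frame length and hop size below are illustrative assumptions; the paper does not specify its spectrogram parameters.

```python
import cmath
import math


def magnitude_spectrogram(signal, frame_len=64, hop=32):
    """Return an l1 x l2 magnitude spectrogram as a list of lists.

    Rows (l1) index frequency bins, columns (l2) index time frames.
    """
    # Hann window tapers frame edges to reduce spectral leakage
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / (frame_len - 1))
              for i in range(frame_len)]
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = [signal[start + i] * window[i] for i in range(frame_len)]
        bins = []
        for k in range(frame_len // 2 + 1):  # non-negative frequencies only
            s = sum(frame[n] * cmath.exp(-2j * math.pi * k * n / frame_len)
                    for n in range(frame_len))
            bins.append(abs(s))
        frames.append(bins)
    # transpose so rows are frequency (l1) and columns are time (l2)
    return [list(col) for col in zip(*frames)]
```

A practical implementation would use an FFT-based routine, but the frame/window/magnitude structure is the same.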
Words have different meanings when spoken in different contexts. To capture the context of a spoken word, the proposed model uses a context window of size \(m\): the representation of a target spoken word is learned from the \(m\) spoken words before and after it. This context window slides over the whole speech, with a target spoken word \(W_t\) (where \(1 \leq t \leq n\)) in the middle and \(m\) context spoken words on either side (a total of \(2m\) context words). The context spoken words are denoted \(W_{t+j}\), where \(-m \leq j \leq m\) and \(j \neq 0\). Next, the model passes each pair of a target spoken-word spectrogram \(W_t\) and one of its context spoken-word spectrograms \(W_{t+j}\) into a convolutional autoencoder to learn the contextual representation of the target spoken word. The convolutional autoencoder is composed of two neural networks: an encoder network, represented by \(f_{\phi}\), and a decoder network, represented by \(g_{\theta}\), where \(\phi\) and \(\theta\) are the learnable parameters of the respective networks. Both \(f_{\phi}\) and \(g_{\theta}\) extract the spatial features of the input spectrogram \textit{w.r.t.} the output spectrogram. The target spoken-word spectrogram \(W_t\) is given as input to the encoder network, which outputs a latent representation \(h\). This latent representation is then given as input to the decoder network, i.e.
\begin{equation} \label{equ2} \small{h = f_{\phi}(W_t) = \sigma(W_t \ast \phi)} \end{equation} \begin{equation} \label{equ3} \small{W_{t+j} = g_{\theta}(h) = \sigma(h \ast \theta)} \end{equation} \begin{equation} \label{equ4} \small{W_{t+j} = g_{\theta}(f_{\phi}(W_t))} \end{equation} In equations \ref{equ2} and \ref{equ3}, \(\ast\) denotes the convolution operator and \(\sigma\) is the \textit{Leaky}ReLU activation function. The encoder network \(f_{\phi}\) consists of two convolutional layers on top of the input spectrogram, which extract hierarchical, location-invariant spatial features. The output of the last convolutional layer in \(f_{\phi}\) is flattened and passed to a \(d\)-dimensional dense layer \(h\). This dense layer is the embedding layer, which learns the contextual representation of the spoken word corresponding to the input \(W_t\) (contextualized on the context spoken-word spectrograms). The decoder network takes the embedding layer \(h\) as input and generates a reconstruction \(W_t^r\) by passing \(h\) through a dense layer and two transposed convolutional layers. The \(d\)-dimensional embedding layer \(h\) learns an efficient contextualized representation of the word corresponding to \(W_t\) by minimizing the loss function \(L\) (shown in equation \ref{equ5}). In the equation below, \(N\) is the batch size and \(m\) is the size of the context window.
\begin{equation} \label{equ5} \small{L_{\phi,\theta} = \frac{1}{N}\sum\limits_{t=1}^{N} \left(\frac{1}{2m}\sum\limits_{j=-m;j\neq0}^{m}\Delta_{W_t,W_{t+j}}\right)} \end{equation} where, \begin{equation} \label{equ5_1} \small{\Delta_{W_t,W_{t+j}} = ||g_{\theta}(f_{\phi}(W_t))-W_{t+j}||_2^2} \end{equation} The loss function defined above helps the latent embedding learn the contextual relationship between the target spoken-word spectrogram and its context by computing a reconstruction loss between the reconstruction \(W_t^r\) and the corresponding context spectrograms \(W_{t+j}\). Since a word spoken in different tones has different spectrograms, the model also captures the tone in which a word is uttered in its contextual embedding. In summary, the proposed model incorporates not only context but also tone in the spoken-word representations. \section{Evaluation Setup} \subsection{Dataset} The proposed model uses the Trump speeches (audio and word transcription)\footnote{https://www.kaggle.com/etaifour/trump-speeches-audio-and-word-transcription} dataset for training and testing. This dataset was chosen because it comprises audio files and their corresponding word-split JSON files, and because it contains speeches of a single person (which eliminates the problem of different pronunciations of the same word). The JSON files contain a direct mapping between each spoken word and the interval in which it was spoken. These mappings were used to split the full audio files into multiple audio files, one per spoken word, and the context mapping was used to create input--output pairs for the proposed model. Statistics of the dataset are shown in Table \ref{tab:my-data}.
\begin{table}[ht] \centering \caption{\small{Dataset Statistics}} \label{tab:my-data} \resizebox{0.9\columnwidth}{!}{% \begin{tabular}{c|c|c|c} \hlineB{3} \# Words & \# Sentences & \# Context Mappings & \# Seconds \\ \hline \hline 18.1K & 1K & 72.6K & 12.9K \\ \hlineB{3} \end{tabular}% } \end{table} \subsection{Training Details} The proposed model was evaluated on 10\% of the data; the rest was used for training, of which 10\% served as the validation set. The model was trained for 50 epochs with a mini-batch size of 5, using the Adam optimizer with an initial learning rate of \(0.01\). A context window of size 2 was used for all experiments (due to computational resource limitations). Early stopping with a patience of 5 epochs and dropout with a rate of \(0.7\) were used to avoid over-fitting. The size of the latent representation \(d\) was set to 16, and the filters in the convolutional and de-convolutional layers were of size (4$\times$4). \subsection{Results} The performance of the proposed model was validated by (1) inspecting the vector space and (2) evaluating it on three benchmark word-similarity datasets, comparing the proposed model with text-based language models trained on the textual transcripts. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{v.png} \caption{\small{Vector space generated by the proposed model}} \label{fig:model_pca} \end{figure} To visualize the learned representations, the dimensionality of the audio vectors (16-dimensional) was reduced using principal component analysis (PCA) \cite{hotelling1933analysis} so that the spoken-word representations could be plotted in a 2-dimensional vector space. Figure \ref{fig:model_pca} illustrates the vector space generated by the proposed model. On close-up, it can be seen that similar spoken words are grouped together in the vector space.
For example, spoken words like \textit{big}, \textit{biggest}, and \textit{much} fall in the vicinity of each other. Figure \ref{fig:model_pca} also shows that the same spoken words (\textit{big} \& \textit{biggest}) uttered in different tones lie in close proximity while remaining slightly separated from each other. This demonstrates the capability of the model to capture semantic and syntactic similarities between different spoken words (or the same spoken word in different tones). \begin{table}[ht] \centering \caption{\small{Results Table}} \label{tab:vector} \resizebox{0.8\columnwidth}{!}{% \begin{tabular}{c|c|ccc} \hlineB{3} \multirow{2}{*}{Dataset} & \# Word & \multicolumn{3}{c}{\small{Spearman's rank correlation coefficient $\rho$}} \\ \cline{3-5} & Pairs & \multicolumn{1}{c|}{Our Model} & \multicolumn{1}{c|}{Word2Vec} & GloVe \\ \hline \hline SimVerb & 275 & 0.31 & \textbf{0.32} & 0.28 \\ \cline{1-1} WS-SIM & 33 & \textbf{0.51} & 0.49 & 0.47 \\ \cline{1-1} WS-REL & 53 & 0.23 & \textbf{0.25} & \textbf{0.25} \\ \hlineB{3} \end{tabular}% } \end{table} The spoken-word representations generated by the proposed model were evaluated on three benchmark datasets (\textbf{SimVerb} \cite{gerz-etal-2016-simverb}, \textbf{WS-SIM}, and \textbf{WS-REL} \cite{agirre-etal-2009-study}) that are widely used for computing word similarity/relatedness. The proposed model is compared with the text-based language models Word2Vec \cite{NIPS2013_5021} and GloVe \cite{pennington-etal-2014-glove}. For the proposed model, word similarities were obtained by measuring the cosine similarities between the spoken vector representations of the corresponding words; for Word2Vec \& GloVe, similarities were computed between the corresponding textual word vector representations.
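The evaluation pipeline described above — cosine similarity between embedding vectors, then Spearman's rank correlation against human similarity scores — can be sketched in plain Python. The rank-correlation helper assumes no tied values, which keeps the classic closed-form formula valid.

```python
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def spearman_rho(x, y):
    """Spearman's rank correlation coefficient (no ties assumed)."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    # classic formula: rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

For each word pair, the model's score is the cosine of the two embedding vectors; Spearman's $\rho$ then compares the model's ranking of all pairs against the human ranking.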
Table \ref{tab:vector} reports Spearman's rank correlation coefficient $\rho$ between the human rankings \cite{myers2010research} and those generated by each model. The proposed model was trained on a small dataset (small vocabulary), so it could not generate representations for some of the word pairs present in the three benchmark datasets (the number of word pairs used is also shown in the table). Despite spoken words having different tones/expressions/pauses for the same words depending on the context (in contrast to text), the proposed model was able to perform comparably to the existing text-based language models. \section{Conclusion} This paper introduces an unsupervised model that not only successfully generates semantically and syntactically accurate contextualized representations of variable-length spoken words but also performs adequately on three benchmark word-similarity datasets. The proposed model also demonstrates its capability to capture the tones and expressions of spoken words. To the best of our knowledge, this is the first work that models variable-length spoken words using convolutional autoencoders. In the future, we plan to extend the capabilities of the model to handle different pronunciations/accents by different speakers. \comment{
There are two such types of material: appendices, which can be read, and non-readable supplementary materials, often data or code. Do not include this additional material in the same document as your main paper. Additional material must be submitted as one or more separate files, and must adhere to the same anonymity guidelines as the main paper. The paper must be self-contained: it is optional for reviewers to look at the supplementary material. Papers should not refer, for further detail, to documents, code or data resources that are not available to the reviewers. Refer to Appendix~\ref{sec:appendix} and Appendix~\ref{sec:supplemental} for further information. Workshop chairs may have different rules for allowed length and whether supplemental material is welcome. As always, the respective call for papers is the authoritative source. \section*{Acknowledgments} The acknowledgments should go immediately before the references. Do not number the acknowledgments section. Do not include this section when submitting your paper for review. \\ \noindent \textbf{Preparing References:} \\ Include your own bib file like this: \verb|\bibliographystyle{nlp4MusA_natbib}| \verb|
\section{Introduction} Observations of type Ia supernovae (SNIa) \cite{1.1, 1.2, 1.3} suggest that the universe is currently undergoing an accelerating phase of expansion. Such late-time cosmic evolution is also supported by other observational evidence, such as WMAP \cite{1.4}, X-ray \cite{1.5}, LSS \cite{1.6} and SDSS \cite{1.7} data. Clearly, the standard FLRW model of cosmology, which admits only a decelerating phase of expansion throughout the evolution, cannot explain the late-time cosmic acceleration. Justifying such accelerated expansion of the universe has remained a challenge to cosmologists for over two decades. The fundamental requirement for accelerated expansion is the presence of negative pressure, while the thermodynamic pressure due to baryonic and non-baryonic (CDM) matter is positive definite, $p \ge 0$. The best-known candidate with negative pressure is the cosmological constant ($\Lambda$), which can unanimously resolve the puzzle; the resulting model is dubbed the $\Lambda$CDM model. However, the cosmological constant, which essentially is the vacuum energy density of the universe as calculated by field theorists, is some 120 orders of magnitude larger than the value required to explain late-time cosmic acceleration. Therefore, dynamical models were introduced by invoking one or more exotic field(s). These are called dark energy (DE) models, since neither the cosmological constant nor such fields interact with anything other than gravity. To date, other than the Higgs, which is not responsible for such late-stage accelerated expansion, no scalar has been detected\footnote{There is a recent indication of direct detection of dark energy in the world's most sensitive WIMP detector, XENON1T, located deep beneath the Gran Sasso mountain, Italy. Last year it puzzled scientists by reporting an excess of about 53 recoil electrons. Using a chameleon screening technique, a group of scientists has claimed that these excess electrons might be an outcome of interaction with dark energy (S. Vagnozzi et al., Phys. Rev. D 104, 063023 (2021)).}. An alternative to dark energy models has therefore been invoked, which requires modifying the General Theory of Relativity (GTR) by introducing higher-order curvature invariant terms. Such models are known as ``modified theories of gravity''. A host of such alternatives to dark energy models, viz., ``$F(R)$ gravity'' \cite{1.8,1.9,1.10,1.11,1.12,1.13,1.14,1.15,1.16}, ``$F(G)$ or Gauss-Bonnet gravity'' \cite{1.17,1.18,1.19}, ``F(R) Ho$\check{\mathrm{r}}$ava-Lifshitz gravity'' \cite{1.20,1.21,1.22,1.23}, ``Lovelock gravity'' \cite{1.24,1.25,1.26,1.27}, their combinations \cite{1.28,1.29,1.30}, and even more, appear in the literature. In the present manuscript, our concern is with the Teleparallel theory of gravity, which has drawn a lot of attention in recent years. \\ In analogy with the $F(R)$ theory of gravity, a new modified theory of gravity, namely the so-called $F(T)$ theory, also dubbed ``gravity with torsion'', has recently been proposed to explain the current accelerated expansion without invoking dark energy \cite{1.31,1.32,1.33}. This is essentially a generalized version of the so-called `Teleparallel gravity'. Teleparallelism was first attempted by Einstein \cite{1.34}, who tried to base a unified theory of electromagnetism and gravity on the mathematical structure of distant parallelism. However, the scheme failed. The new Teleparallel gravity is a theory of gravitation based on Weitzenb\"ock spacetime \cite{1.35}, and attributes gravitation to the torsion tensor formed out of the parallel vector fields. It is important to mention that there is no foundational reason to consider torsion-less space-time other than its simplicity. It is therefore worthwhile to test its advantage over GTR, which is our present concern.
For a comprehensive review of $F(T)$ gravity and its cosmological implications, see \cite{1.35a} and references therein. Let us begin with a brief review of the modified Teleparallel gravity theory. The action of $F(T)$ gravity is given by, \be\label{1.1}\mathbb{A} = \int d^4 x \,|e|\, F(T) + S_m,\ee where $|e| = \det(e^{i}_{\mu}) = \sqrt{-g}$, and the units have been chosen so that $c = 16 \pi G = 1$. Teleparallelism uses a vierbein field $\mathbf{e_{i}}(x^{\mu}),~ i = \{0, 1, 2, 3\}$, as the dynamical object, which is an orthonormal basis for the tangent space at each point $x^{\mu}$ of the manifold: $\mathbf{e_{i}}\cdot\mathbf{e_{j}}={\eta}_{ij}$, where ${\eta}_{ij} = \mathrm{diag}(-1,1,1,1)$. Each vector $\mathbf{e_{i}}$ can be described by its components $e^{\mu}_{i},~\mu = \{0, 1, 2, 3\}$, in a coordinate basis, i.e. $\mathbf{e_{i}}=e^{\mu}_{i}\partial_{\mu}$. Here, the Latin indices refer to the tangent space, while the Greek indices label coordinates on the manifold. The metric tensor is obtained from the dual vierbein as $ g_{\mu\nu}(x)=\eta_{ij}e^{i}_{\mu}(x) e^{j}_{\nu}(x)$. In contrast to GTR, which uses the torsion-less Levi-Civita connection, Teleparallelism uses the curvature-less Weitzenb$\ddot{\mathrm{o}}$ck connection \cite{1.35}, whose non-null torsion is \be\label{1.2} T^{\lambda}_{\mu\nu} \equiv e^{\lambda}_{i}[\partial_{\mu}e^{i}_{\nu}-\partial_{\nu}e^{i}_{\mu}].\ee The above tensor encompasses all the information regarding the gravitational field. The Teleparallel equivalent of the General Theory of Relativity (TEGR) Lagrangian is built with the torsion \eqref{1.2}, and its dynamical equations for the vierbein lead to the Einstein equations for the metric.
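As a quick illustration (a symbolic sketch of ours, assuming the sympy library; not part of the original derivation), the relation $g_{\mu\nu}=\eta_{ij}e^{i}_{\mu}e^{j}_{\nu}$ can be verified explicitly for the diagonal vierbein of the spatially flat Robertson-Walker metric used later in \eqref{2.2}:

```python
# Symbolic check (sympy) that g_{mu nu} = eta_ij e^i_mu e^j_nu reproduces the
# spatially flat Robertson-Walker metric for the diagonal vierbein
# e^i_mu = diag(1, a, a, a), with eta_ij = diag(-1, 1, 1, 1).
import sympy as sp

a = sp.symbols('a', positive=True)          # scale factor a(t) at a fixed time
eta = sp.diag(-1, 1, 1, 1)                  # tangent-space metric eta_ij
e = sp.diag(1, a, a, a)                     # vierbein components e^i_mu

g = e.T * eta * e                           # g_{mu nu} = eta_ij e^i_mu e^j_nu
assert g == sp.diag(-1, a**2, a**2, a**2)   # ds^2 = -dt^2 + a^2 dx^2
```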
The Teleparallel Lagrangian is given by \cite{1.36,1.37,1.38}, \be\label{1.3} L_T = {S_{\rho}}^{\mu\nu} {T^{\rho}}_{\mu\nu},\ee where, \be\label{1.4} {S_{\rho}}^{\mu\nu} = \frac{1}{2}[{K^{\mu\nu}}_{\rho}+{\delta}^{\mu}_{\rho}{T^{\theta\nu}}_ {\theta}-{\delta}^{\nu}_{\rho}{T^{\theta\mu}}_{\theta}],\ee while ${K^{\mu\nu}}_{\rho}$ is the contorsion tensor, given by, \be\label{1.5} {K^{\mu\nu}}_{ \rho} = -\frac{1}{2}[{T^{\mu\nu}}_{\rho}-{T^{\nu\mu}}_ {\rho}-{T_{\rho}}^{\mu\nu}],\ee which equals the difference between the Weitzenb$\ddot{\mathrm{o}}$ck and the Levi-Civita connections.\\ The $F(T)$ Teleparallel theory of gravity was primarily introduced to drive inflation by Ferraro and Fiorini \cite{1.39,1.40}. Later, Bengochea and Ferraro \cite{1.41}, Linder \cite{1.42}, and also Myrzakulov \cite{1.42a} proposed to use the $F(T)$ Teleparallel theory of gravity to drive the current accelerated expansion of our universe, as an alternative to dark energy. The theory has thereafter been studied extensively over the last decade. For example, Hamiltonian constraint analysis in $F(T)$ Teleparallel gravity has been performed \cite{1.43,1.44,1.45,1.46}. Further, constraints on $F(T)$ Teleparallel gravity from the latest observational data-sets \cite{1.47,1.48}, analyses of the dynamical behaviour \cite{1.49} and of the cosmic large-scale structure \cite{1.50}, relativistic neutron stars \cite{1.51}, matter bounce \cite{1.52} and perturbations \cite{1.53,1.54}, etc., have also been explored. Additionally, in the $F(T)$ Teleparallel gravity framework, static spherically symmetric solutions \cite{1.55,1.56,1.57}, the validity of Birkhoff's theorem \cite{1.58,1.59,1.60}, Solar system tests \cite{1.61,1.62,1.63}, black hole solutions \cite{1.64,1.65,1.66,1.67}, wormhole solutions \cite{1.68,1.69,1.70} and the equation-of-state (EOS) parameter crossing the phantom divide line \cite{1.71} have also been explored.
On the theoretical side, the Lorentz invariance and the conformal invariance of the $F(T)$ Teleparallel theory have also been investigated \cite{1.72,1.72a,1.73}, and many interesting results emerged in the process. Finally, inflation in the context of $F(T)$ gravity has been explored extensively \cite{Inf1,Inf2,Inf3,Inf4,Inf5,Inf6,Inf7,Inf8,Inf9,Inf10,Inf11,Inf12,Inf13,Inf14,Inf15,Inf16,Inf17}, and in some works the inflationary parameters were matched with experimentally observed data. However, most of these tests were made prior to the currently released data from the Planck collaboration \cite{Planck1, Planck2}. Our present concern is primarily to test inflation and to match the inflationary parameters with the latest data released by the Planck collaboration \cite{Planck1, Planck2}, and also to check whether the model is compatible with the later epochs of cosmic evolution in the matter dominated era. \\ The latest released data from the Planck collaboration \cite{Planck1, Planck2} have dramatically tightened the constraints on the tensor-to-scalar ratio $(r)$, on the slope of the primordial scalar power spectrum, conventionally parameterized by the power-law index $(n_s)$, and on the tensor spectral index $(n_t)$. The combined data (TT, TE, EE + lowE + lensing + BK15 + BAO + Bicep2) constrain $r$ to $r < 0.058$ and set the range of $n_s$ to $n_s = 0.9668 \pm 0.0037$, with a very small tensor spectral index $n_t$, fixed by the single-field slow-roll self-consistency condition. Now, to test inflation, a specific form of $F(T)$ is required. Noether symmetry analyses have been performed extensively in this respect \cite{1.74, 1.75, 1.76, 1.77, 1.78, 1.79}. However, through a private communication it is learnt that none of the forms of $F(T)$, with their associated conserved currents so obtained, satisfies the energy constraint equation of Einstein \cite{NM}.
On the contrary, the outcomes of the reconstruction programme are $F(T)\propto \sqrt T$ for the $\Lambda$CDM model, $F(T)\propto T$ for the pressure-less dust model, and $F(T)\propto (T^2+6\beta T-3{\beta}^2)$ for the stiff fluid \cite{1.42a}, out of which the last form appears to be quite reasonable. In fact, several authors followed the reconstruction mechanism to find $F(T) = \alpha T + \beta T^2$ (apart from some complicated forms), some of which we shall discuss shortly. It is noteworthy that this particular form is the simplest modification, and has also been found through Noether symmetry analysis, although, as mentioned, the associated conserved current is not compatible with the field equations. We therefore choose this particular form, $F(T) = \alpha T + \beta T^2$, by hand for our purpose.\\ Let us briefly review the current status of theoretical studies on inflation in Teleparallel $F(T)$ theory. The authors of \cite{Inf1} studied inflation with a non-minimal gravitational coupling between the torsion scalar and the electromagnetic field, which breaks the conformal invariance. They found generation of a large-scale magnetic field with field strength of the order of $10^{-9}~G$ on the $1~ Mpc$ scale. This is sufficient to account for the large-scale magnetic fields observed in clusters of galaxies through adiabatic compression alone, during the formation of the large-scale structure of the universe; it does not require the dynamo amplification mechanism. In \cite{Inf2}, the authors studied trace-anomaly driven inflation in $T^2$ Teleparallel gravity theory. In particular, they demonstrated that de Sitter inflation can be realized in $T^2$ gravity, and that graceful exit is achieved. In \cite{Inf3}, warm intermediate inflation, $a \propto a_0 e^{At^f}, A > 0,~ 0 < f < 1$, was studied with $F(T) = T +\alpha \sqrt{-T}$ in the presence of a minimally coupled scalar field. However, the plots depict that either $n_s$ or $r$ lies within the current observational limit, but not the two together.
For example, if $n_s \approx 0.96$, then $r > 0.2$ in the first case and $r > 0.1$ in the second. On the contrary, if $r$ is kept within the experimental limit, $r < 0.06$ (say), then $n_s \sim 0.5$ in the first case and $n_s \sim 0.1$ in the second, which are far outside the observed range. Nonetheless, the model admits graceful exit from inflation. In \cite{Inf4}, the authors chose $F(T) = T + \beta T^2$, minimally coupled to a scalar field, and reconstructed the scalar potential as $V(\phi)=A + B e^{-\sqrt{1\over 2n}\phi}$. Although $n_s= 0.9691$ lies within the experimental limit, $r=0.248$ is far from the recently established constraint $r < 0.06$ \cite{Planck1, Planck2}. Further, the model does not admit slow roll. Additionally, it is not clear how the authors obtained $a(t) \propto t^{2\over 3(1 + \omega)}$ (where $\omega$ is the equation-of-state parameter), which matches the Friedmann solution in the matter dominated (radiation and pressure-less dust) eras. Next, the authors of \cite{Inf5} investigated a perfect-fluid description of the slow-roll parameters, and reconstructed $F(T)$. For quite a complicated power-law form of $F(T)$, the slow-roll parameters have been found to be consistent with the Planck data. However, the consideration of a perfect fluid in the very early universe is questionable. The authors of \cite{Inf6} studied inflation considering the power law $F(T) = T^n$ in the presence of a canonical scalar field, while the scale factor was chosen to admit power-law inflation and intermediate inflation. The required potential is $V(\phi) = \phi^m$. A self-interacting quartic potential is found to be consistent with Planck (2015) data. The unanswered question is: how does the scalar slow-roll with a quartic potential? The authors of \cite{Inf7} derived a particular class of $F(T)$ models, which is identified with a flat-like universe.
With a minimally coupled scalar field, double slow-roll inflation is realized, and a quasi-inverse power-law inflation has been found to fit the observed data, admitting graceful exit as well. The authors of \cite{Inf8} reconstructed a complicated form of $F(T)$ and investigated bounce inflation with a canonical scalar field. Although graceful exit is admissible, they found $r = 0.00156$ and a scalar tilt $n_s = 0.997$, exhibiting a nearly scale-invariant power spectrum, which is ruled out by all observations. Following the reconstruction mechanism, the authors of \cite{Inf9} found a highly complicated form of $F(T)$, which reduces to $F(T) = c_1 \sqrt{-T} + c_2$ in vacuum (note that such a form of $F(T)$ makes the action singular). Inflation is studied in the context of unimodular $F(T)$ gravity. Although graceful exit is admissible, the spectral index $n_s \approx 0.98$ exceeds the current experimental limit. In \cite{Inf10}, assuming intermediate inflation, $a(t) = a_0 e^{At^n}$, with $A > 0,~ 0 < n < 1$, for $F(T) = T + f(T)$, the authors found $f(T) = c_1 T^m - {T\over 2(1-m)},~ m = {An\over 2}$, in view of the evolution of perturbations. Although the spectral index $n_s = 0.9644 \pm 0.0049$ lies very much within the experimental limit, the tensor-to-scalar ratio could not be found. Logamediate inflation, $a(t) = a_0 e^{A(\ln t)^\lambda}$, with $A > 0,~ 0 < \lambda < 1$, was studied in \cite{Inf11}, taking into account $F(T) = T_0 \left({T\over T_0}\right)^n$. Quite a nice fit with the observed Planck (2015) TT, TE, EE + lowP data was found for $n = 2, \lambda = 8$. Under the choice $F(T) = \alpha T + \beta T^2$, the authors in \cite{Inf12} studied constant-roll inflation with a minimally coupled inflaton field. A perturbative analysis was also carried out successfully. However, although the authors found $n_s = 0.96$, $r_{min}= 0.08$ crosses the experimental limit.
Considering a non-minimal coupling with a tachyonic field, the authors of \cite{Inf13} found $N = 58$, $n_s = 0.956$ and $r = 0.0061$. Clearly, $n_s$ lies well below the experimental data. The authors of \cite{Inf14} considered a minimally coupled scalar field to explore power-law inflation, the scalar field being responsible for reheating after inflation. The authors obtained an expression for the reheating temperature in terms of the CMB temperature, the spectral index, the power spectrum and the parameters of the model. In another work \cite{Inf15}, taking into account a canonical scalar field non-minimally coupled to the torsion with a Galileon-type self-interaction, an exhaustive study of different slow-roll inflationary scenarios in generalized scalar-torsion gravity was carried out, and excellent agreement with Planck's data was established. Nonetheless, since the authors considered $F(T) = T$, it is the K-essence model of GTR in disguise. The authors of \cite{Inf16} reconstructed $F(T) = T + T^2 - c$ near the type IV finite-time singularity in the Jordan frame. Considering a specific form of the Hubble rate, they confirmed the theoretical $F(T)$ description based on the slow-roll parameters ($n_s = 0.966, r < 0.07$) with Planck and BICEP2/Keck-Array data. Graceful exit is also admissible in the model. Further, the above form of $F(T)$ has been claimed to unify the early- and late-time evolution of the universe. Last but not least, the authors of \cite{Inf17} showed that a Teleparallel theory with a non-minimally coupled Higgs scalar field has no linear scalar perturbations, and therefore cannot give successful inflation, unless the non-minimal coupling functions satisfy a particular relation. On the contrary, if the relation is satisfied, Higgs inflation can give rise to an arbitrarily large tensor-to-scalar ratio $r$.
The results also apply to $F(T)$ theories, as they are scalar-tensor theories written in different field coordinates.\\ In a nutshell, except for the recent work \cite{Inf16}, in which a specific form of the Hubble parameter is chosen, none of the others fits perfectly with the currently released data sets \cite{Planck1, Planck2}. However, what happens in the radiation dominated era has not been exhibited in \cite{Inf16}. In this respect, our motivation is to unify a Hubble-parameter driven late-time acceleration with a scalar-field driven slow-roll inflation. Further, we study the cosmological evolution in the radiation dominated era. It is noticeable that, in almost all of the above mentioned works, inflation is driven by a scalar field, while a form of $F(T)$ has been found under the reconstruction programme, or sometimes following a perturbative analysis. Note that the same field cannot be responsible for accelerating the universe both in the early and in the late stage of cosmological evolution. This means that, if a particular form of $F(T)$ is found responsible for late-time cosmic acceleration via the reconstruction programme (say), then a different field (curvature or a scalar) is required for early inflation. In the context of the purely geometric $F(R)$ theory of gravity, it was therefore suggested to consider an action in the form $A = \int\sqrt{-g}\, d^4x\,[\alpha R + \beta R^2 + \gamma R^{-n}]$ \cite{1.80} or $A = \int\sqrt{-g}\, d^4x\,[\alpha R + \beta R^2 + \gamma R^{2\over 3}]$ \cite{1.81}, so that $R^2$ may be responsible for inflation in the early universe, $R$ in the middle to ensure Friedmann-like matter dominated eras, and $R^{-n}$ or $R^{2\over 3}$ at the late stage of cosmic evolution, to ensure the currently accelerated universe. Likewise, direct coupling of a scalar field to the torsion scalar in Teleparallel gravity has been studied extensively over the last decade (references are available in \cite{Inf17}).
The simplest of these models, considered by several authors, is $F(T) = \alpha T + \beta T^2$, which can produce acceleration at the late stage of cosmic evolution.\\ In view of the above discussion, we shall also consider a minimally coupled scalar field (the inflaton) to drive inflation in the very early universe, and try to fit the inflationary parameters with the currently released data sets \cite{Planck1, Planck2}. In the following section, we cast the field equations in the background of the spatially flat Robertson-Walker metric. Since the vacuum de Sitter solution is realized for an arbitrary form of $F(T)$, we choose a particular form, $F(T) = \alpha T + \beta T^2$, by hand, and write down the field equations in the presence of a minimally coupled scalar field. Next, we show that such a form of $F(T)$ indeed envisages cosmic acceleration in the current pressure-less dust era. In Section 3, we study inflation, driven by the scalar field, under the slow-roll assumption. In Section 4, we find an analytical solution of the field equations in the radiation dominated era. Finally, we conclude in Section 5.
\section{Field equations and cosmological solutions:} In the spatially flat Robertson-Walker (RW) space-time, \be\label{2.1} {ds}^2 = - {dt}^2 + {a^2(t)}\big[dr^2 + r^2 \big(d\theta ^2 + \sin^2{\theta}~d\phi^2\big)\big],\ee the components of the vierbein field are expressed in terms of the cosmological scale factor $a(t)$ as, \be\label{2.2}e^{i}_{\mu}= \mathrm{diag}(1,a(t),a(t),a(t)).\ee If matter couples to the metric in the standard form, then the variation of the action with respect to the vierbein leads to \be\label{2.3} e^{-1}\partial_{\mu}(e S^{\mu\nu}_{i})F(T)_{,T}-e^{\lambda}_{i}T^{\rho}_{\mu\lambda}S^{\nu\mu}_{\rho}F(T)_{,T}+S^{\mu\nu}_{i}\partial_{\mu}(T)F(T)_{,TT}+ \frac{1}{4}e^{\nu}_{i}F(T)=\frac{1}{4}e^{\rho}_{i} \mathbb{T}^{\nu}_{\rho},\ee where the suffix $T$ denotes differentiation with respect to $T$, $S^{\mu\nu}_{i}=e^{\rho}_{i}S^{\mu\nu}_{\rho}$, and $\mathbb{T}_{\mu\nu}$ is the matter energy-momentum tensor. Now, for the spatially flat Robertson-Walker (RW) metric under consideration, equations \eqref{1.2}, \eqref{1.4} and \eqref{1.5} lead to \be\label{2.4} T = {S}^{\rho\mu\nu} {T}_{\rho\mu\nu}=-6\frac{\dot a^2}{a^2}=-6H^2,\ee $H$ being the Hubble parameter. Substituting the vierbein \eqref{2.2} into \eqref{2.3}, one obtains the following field equations in the presence of a barotropic fluid and a minimally coupled scalar field, \be\label{2.5} F +12H^2F_{,T} =\rho + \rho_\phi = \rho + {1\over 2} \dot\phi^2 + V(\phi),\ee \be\label{2.6} 48H^2\dot{H}F_{,TT} - 4(\dot{H}+3H^2)F_{,T}-F = p + p_\phi = p + {1\over 2} \dot\phi^2 - V(\phi).\ee Equation \eqref{2.5}, obtained for $i = 0 = \nu$, is essentially the $(^0_0)$ component of Einstein's equations, while \eqref{2.6} is the field equation for $i = 1 = \nu$. Note that \eqref{2.6} is the only independent $(^i_i)$ component of Einstein's equations. In the above, $p$ and $\rho$ stand for the thermodynamic pressure and the energy density, respectively, of the barotropic fluid under consideration.
Thus, equations \eqref{2.5} and \eqref{2.6} constitute Einstein's cosmological field equations for $F(T)$ gravity in the spatially flat RW metric \eqref{2.1}, under the condition \eqref{2.4}. Let us mention that the above field equations may also be found using the Lagrange multiplier technique, as well as from the scalar-tensor equivalent forms \cite{STe1, STe2}, as in the case of the $F(R)$ theory of gravity. It is trivial to check that GTR may be recovered for $F(T) \propto T$, apart from a boundary term.\\ Now, the field equations in pure $F(T)$ gravity (in the absence of the scalar field) are, \be \label{2.20} F + 4(\dot{H}+3H^2)F_{,T} - 48H^2\dot{H}F_{,TT} = 0,\ee \be \label{2.21} F +12H^2F_{,T} = 0.\ee The above set of equations may be combined to yield, \be\label{2.22} \dot H(12 H^2 F_{,TT} - F_{,T}) = 0.\ee Hence, either the field equations admit a solution in the form $F(T) = F_0\sqrt{T} = i F_0 \sqrt{6}\left({\dot a\over a}\right)$ ($F_0$ being the constant of integration), which is clearly meaningless; or a de Sitter solution $(\dot H = 0,~ a = a_0 e^{\lambda t})$ for an arbitrary form of $F(T)$. As mentioned in the introduction, we associate a scalar field $(\phi)$ to drive inflation, and choose the form \be\label{F} F(T) = \alpha T + \beta T^2,\ee where $\alpha$ has the dimension $\big[M_P^2\big]$ and the constant parameter $\beta$ is dimensionless. Note that the sound speed for such a form of $F(T)$ is: \be c_s^2 = \frac{F_H}{HF_{HH}} =\frac{F_{,T}}{F_{,T}-12H^2 F_{,TT}} = \frac{72\beta H^2-\alpha}{216 \beta H^2-\alpha},\ee and so $0< c_s < 1$ is always ensured, provided $\beta > 0$.
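As a quick symbolic cross-check (a sympy sketch of ours, not part of the original text), subtracting \eqref{2.21} from \eqref{2.20} indeed leaves exactly the combination quoted in \eqref{2.22}, up to an overall factor of $-4$, for an arbitrary $F(T)$:

```python
# Symbolic check (sympy): eq.(2.20) - eq.(2.21) = -4*Hdot*(12 H^2 F_TT - F_T),
# i.e. the vanishing combination quoted in eq. (2.22), for arbitrary F(T).
import sympy as sp

t, T = sp.symbols('t T')
H = sp.Function('H')(t)
Hd = sp.diff(H, t)
F = sp.Function('F')(T)
FT, FTT = sp.diff(F, T), sp.diff(F, T, 2)

eq20 = F + 4*(Hd + 3*H**2)*FT - 48*H**2*Hd*FTT   # eq. (2.20)
eq21 = F + 12*H**2*FT                            # eq. (2.21)

combo = sp.expand(eq20 - eq21)                   # = 4*Hd*FT - 48*H^2*Hd*FTT
assert sp.simplify(combo + 4*Hd*(12*H**2*FTT - FT)) == 0
```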
Now, the field equations in the presence of a barotropic fluid are: \be\label{2.33a} \alpha(2\dot H + 3H^2) - 18\beta H^2(4\dot H + 3H^2) = -{1\over 2}\left[p + {1\over 2}\dot\phi^2 -\ V(\phi)\right],\ee \be\label{2.33b} \alpha H^2 - 18\beta H^4 = {1\over 6}\left[\rho + {1\over 2}\dot\phi^2 +\ V(\phi)\right],\ee \be\label{2.33c} \ddot \phi + 3H\dot \phi + V'(\phi) = 0.\ee In the above, the prime denotes a derivative with respect to $\phi$. For further clarification regarding the above choice, we find the effective state parameter as, \be \label{2.33d} \omega_e = \frac{p + {1\over 2}\dot\phi^2 - V(\phi) - 108\beta H^2 (4\dot H + 3H^2)}{\rho + {1\over 2}\dot\phi^2 + V(\phi) + 108\beta H^4}.\ee In the pressure-less dust era $(p = 0)$, the effective state parameter may therefore remain positive initially, depicting a decelerated expansion; at a later epoch, however, it turns negative and initiates accelerated expansion. The presence of the $H^4$ term with a negative sign in the numerator might also allow the model to cross the phantom divide line, $\omega_e < -1$. This is true even in the absence of the scalar field, if it decays completely (say) in the process of producing particles while oscillating at the end of the inflationary epoch. Clearly, the same $H^4$ term cannot be responsible for driving late-time cosmic acceleration and early inflation simultaneously. Therefore, we need a scalar field to drive inflation at the very early stage of the cosmic evolution. \section{Inflation under slow-roll approximation:} If a single scalar field drives inflation, then the standard slow-roll approximations $|\ddot \phi| \ll 3H\dot \phi$ and $\dot\phi^2 \ll V(\phi)$ hold. As a result, in the vacuum era $(p = 0 = \rho)$ we can express \eqref{2.33b} and \eqref{2.33c} respectively as: \be \label{2.34a} \gamma H^4-6\alpha H^2 +V(\phi) = 0,\ee \be \label{2.34b} 3H\dot\phi + V'(\phi) = 0,\ee where we have redefined $108\beta =\gamma$.
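The reduction of \eqref{2.5} and \eqref{2.6} to \eqref{2.33b} and \eqref{2.33a} for the chosen form \eqref{F} is straightforward algebra; the following symbolic sketch (ours, assuming the sympy library) verifies it:

```python
# Symbolic check (sympy): substituting F = alpha*T + beta*T^2 and T = -6 H^2
# into the field equations (2.5) and (2.6) reproduces eqs. (2.33b) and (2.33a).
import sympy as sp

t = sp.symbols('t')
Ts = sp.symbols('T')
alpha, beta = sp.symbols('alpha beta', positive=True)
H = sp.Function('H')(t)
Hd = sp.diff(H, t)

F = alpha*Ts + beta*Ts**2
FT = sp.diff(F, Ts)
FTT = sp.diff(F, Ts, 2)
sub = {Ts: -6*H**2}                       # torsion scalar, eq. (2.4)

# (0,0) equation (2.5): F + 12 H^2 F_T  ->  6*(alpha*H^2 - 18*beta*H^4)
lhs00 = (F + 12*H**2*FT).subs(sub)
assert sp.simplify(lhs00 - 6*(alpha*H**2 - 18*beta*H**4)) == 0

# (i,i) equation (2.6): 48 H^2 Hdot F_TT - 4*(Hdot + 3 H^2) F_T - F equals
# -2*[alpha*(2 Hdot + 3 H^2) - 18 beta H^2 (4 Hdot + 3 H^2)], i.e. eq. (2.33a)
lhsii = (48*H**2*Hd*FTT - 4*(Hd + 3*H**2)*FT - F).subs(sub)
assert sp.simplify(lhsii + 2*(alpha*(2*Hd + 3*H**2)
                              - 18*beta*H**2*(4*Hd + 3*H**2))) == 0
```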
Let us now solve for $H^2$ from \eqref{2.34a} to obtain, \be\label{2.34c} H^2 = \frac{3\alpha - \sqrt{9\alpha^2 - \gamma V(\phi)}}{\gamma},\ee where we have chosen the negative sign to ensure that the first slow-roll parameter $\epsilon > 0$. Next, we compute the number of e-folds $(N)$ as, \be \label{2.34d} N = \int_{t_i}^{t_f} H dt = {3\over \gamma}\int_{\phi_f}^{\phi_i} \frac{3\alpha-\sqrt{9\alpha^2-\gamma V}}{V'}d\phi.\ee Further, in view of equation \eqref{2.34a} we can compute ${\dot H\over H^2} = -{V'^2\over 12H^4[\sqrt{9\alpha^2-\gamma V}]}$, and as a result the slow-roll parameters $\epsilon$ and $\eta$ may be computed in the following manner \be\label{2.34e} \epsilon = -{\dot H\over H^2} = {\gamma^2V'^2\over 12[\sqrt{9\alpha^2-\gamma V}]\left[3\alpha-\sqrt{9\alpha^2-\gamma V}\right]^2}, ~~\mathrm{and,~~}\eta = 2\alpha\left(V''\over V\right),\ee where $\epsilon \ll 1$ and $|\eta| \ll 1$. The tensor-to-scalar ratio $(r)$, the primordial spectral index of scalar perturbations $(n_s)$, and the tensor spectral index $(n_t)$ may now be expressed as, \be \label{2.34f} r = 16\epsilon,~~~~~n_s = 1-6\epsilon + 2\eta,~~~~~ n_t = -{r\over 8} = -2\epsilon.\ee As mentioned in the introduction, the recently released combined data (TT, TE, EE + lowE + lensing + BK15 + BAO + Bicep2) constrain the inflationary parameters as $r < 0.058$ and $n_s = 0.9668 \pm 0.0037$, with a very small tensor spectral index $n_t$ \cite{Planck1, Planck2}, while the number of e-folds should preferably lie within the range $45 < N < 65$ to solve the horizon and flatness problems. Before choosing a form of the potential, let us mention that for the single-field inflationary model under present consideration, these constraints imply $V'' < 0$, since otherwise, i.e. for $V'' > 0$, $\eta > 0$, and hence any attempt to keep $n_s < 0.97$ makes $r > 0.08$. Next, we should choose $\phi_i$ in such a manner that $\phi_f \approx 1~M_P$ when inflation ends (i.e. $\epsilon = 1$).
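The closed form for $\epsilon$ in \eqref{2.34e} follows from \eqref{2.34b} and \eqref{2.34c} by the chain rule; a short symbolic sketch (ours, using sympy with a generic potential $V(\phi)$) confirms the reduction:

```python
# Symbolic check (sympy) of eq. (2.34e): with H^2 from eq. (2.34c) and the
# slow-roll equation 3*H*phidot = -V'(phi) (eq. 2.34b), epsilon = -Hdot/H^2
# reduces to gamma^2 V'^2 / (12 sqrt(9a^2 - gV) (3a - sqrt(9a^2 - gV))^2).
import sympy as sp

phi = sp.symbols('phi')
alpha, gamma = sp.symbols('alpha gamma', positive=True)
V = sp.Function('V')(phi)
Vp = sp.diff(V, phi)

root = sp.sqrt(9*alpha**2 - gamma*V)
H2 = (3*alpha - root)/gamma               # eq. (2.34c), negative sign chosen
H = sp.sqrt(H2)

phidot = -Vp/(3*H)                        # eq. (2.34b)
Hdot = sp.diff(H2, phi)*phidot/(2*H)      # Hdot = (dH^2/dphi)*phidot/(2H)
eps = -Hdot/H2

eps_closed = gamma**2*Vp**2/(12*root*(3*alpha - root)**2)
assert sp.simplify(eps - eps_closed) == 0
```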
The reason is that a graceful exit from inflation requires $\phi$ to oscillate once it falls below the Planck mass. Let us make two choices of the potential in the following form, \be \label{Vphi}V(\phi) = V_0 - {V_1\over \phi},~~ V(\phi) = V_0 - V_1e^{-b\phi},\ee so that the potential remains almost flat for large $\phi$, and slow roll is admissible. From dimensional analysis, it is clear that since $[H] = [M_P]$, we have $[\alpha] = [M_P^2],~[\phi]=[M_P]$, and $\gamma$ is dimensionless.\\ \subsection{Case-I:} Here we consider the potential in the form: $V(\phi) = V_0 - {V_1\over \phi},~~ \mathrm{so~that}~~~ V' = {V_1\over \phi^2}, ~~\mathrm{and},~~ V'' = -{2V_1\over\phi^3}$. We take $\phi_i = 4 M_P,~2\alpha = 1 M_P^2,~{V_0\over V_1} = 2 M_P^{-1}, ~ \gamma V_1 = 1 M_P^5$, to compute the slow-roll parameters as, \be \eta = -{2\over \phi_i^2\left({V_0\over V_1}\phi_i-1\right)} \approx -0.0179,\ee \be \epsilon = \frac{\gamma^2V_1^2}{12 \phi_i^4\sqrt{2.25-{\gamma V_1\left({V_0\over V_1} -{1\over\phi_i}\right)}}\left[1.5-\sqrt{2.25-{\gamma V_1\left({V_0\over V_1} -{1\over\phi_i}\right)}}\right]^2} = 0.000732258.\ee Therefore, \be r = 16 \epsilon = 0.0117,~~~ n_s = 1 - 6\epsilon + 2\eta = 0.9598,~~~\mathrm{and}~~~n_t = -2\epsilon = -0.00146.\ee Slow roll terminates $(\epsilon = 1)$ at $\phi_f = 0.9 M_P$.
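These numbers are easy to cross-check. The short script below (our own sketch, not part of the original analysis, in units where $M_P = 1$; only the combinations $\gamma V_1$ and ${V_0\over V_1}$ enter) evaluates the slow-roll formulas \eqref{2.34e} at $\phi_i = 4 M_P$ and reproduces the values quoted above and in the first row of Table~1.

```python
import math

# Cross-check of the Case-I slow-roll parameters (a sketch, not from the
# original text), in units where M_P = 1.  Only gamma*V1 and V0/V1 enter.
alpha = 0.5            # 2*alpha = 1 M_P^2
V0_over_V1 = 2.0       # in units of M_P^{-1}
gV1 = 1.0              # gamma*V1 = 1 M_P^5
phi_i = 4.0

gV = gV1 * (V0_over_V1 - 1.0 / phi_i)      # gamma*V(phi_i)
gVp = gV1 / phi_i**2                       # gamma*V'(phi_i), since V' = V1/phi^2

root = math.sqrt((3 * alpha)**2 - gV)
eps = gVp**2 / (12 * root * (3 * alpha - root)**2)   # first slow-roll parameter
eta = -2.0 / (phi_i**2 * (V0_over_V1 * phi_i - 1.0)) # eta = 2*alpha*V''/V

r = 16 * eps
n_s = 1 - 6 * eps + 2 * eta
n_t = -2 * eps
print(eps, eta, r, n_s, n_t)
```

The tiny spread between these values and the table entries is pure rounding of the inputs.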
The number of e-folds $N$ may now be found using \eqref{2.34d} as, \be\begin{split} N =& {9\alpha\over \gamma}\int_{\phi_f}^{\phi_i} {d\phi\over V'} - {3\over \gamma}\int_{\phi_f}^{\phi_i}{\sqrt{9\alpha^2-\gamma V}\over V'}d\phi = {4.5\over \gamma V_1}\int_{\phi_f}^{\phi_i}\phi^2 d\phi -{3\over \gamma V_1}\int_{\phi_f}^{\phi_i}\sqrt{2.25-\gamma V_1\left({V_0\over V_1}-{1\over \phi}\right)}\phi^2 d\phi\\& = 4.5{\phi^3\over 3}\Big|_{0.92}^{4}- 3\int_{0.92}^{4}\left(\sqrt{2.25-2+{1\over \phi}}\right)\phi^2 d\phi = 94.83-1.5\int_{0.92}^{4}\sqrt{\phi^4+4\phi^3}d\phi\\& = 94.83-1.5\times 32.749 \approx 46.\end{split}\ee Clearly, this is an excellent fit with the currently available Planck data set \cite{Planck1, Planck2}. In Table-1, we organize a data set for the inflationary parameters, varying $\phi_i$ in the range $4 M_P\le \phi_i \le 4.35 M_P$, so that the spectral index of scalar perturbation lies within the experimental limit. Note that slow rollover terminates $(\epsilon = 1)$ at $\phi_f =0.9 M_P$ in every case. Further, the number of e-folds found, $46 \le N \le 60$, is sufficient to solve the horizon and flatness problems.\\ \begin{figure} \begin{center} \begin{minipage}[h]{0.47\textwidth} \centering \begin{tabular}{|c|c|c|c|} \hline\hline $\phi_i$ in $M_P$ & $n_s$ & $r$ & $N$ \\ \hline 4.00 & 0.9598 & 0.0117 & 46 \\ \hline 4.05 & 0.9614 & 0.0111 & 48 \\ \hline 4.10 & 0.9630 & 0.0105 & 50 \\ \hline 4.15 & 0.9644 & 0.0100 & 52 \\ \hline 4.20 & 0.9657 & 0.0095 & 54 \\ \hline 4.25 & 0.9670 & 0.0091 & 56 \\ \hline 4.30 & 0.9682 & 0.0086 & 58 \\ \hline 4.35 & 0.9694 & 0.0082 & 60 \\ \hline\hline \end{tabular} \captionof{table}{Data set for the inflationary parameters under the choice $F(T) =\alpha T + \beta T^2$ and $V = V_0 -{V_1\over \phi}$, where $\alpha = {M_P^2\over 2},~{V_0\over V_1} = 2 M_P^{-1}$, $108\beta V_1 = \gamma V_1 = 1 M_P^5$.
Slow rollover ends at $\phi_f = 0.9 M_P$.} \label{tab:table1} \end{minipage} \end{center} \end{figure} Let us now find the energy scale of inflation in view of the relation \eqref{2.34c}. For this purpose, we consider the last data set of Table-1, associated with $N = 60$, for which $\phi_i = 4.35 M_P$. As a result we find \be \label{scale} {H_*}^2 = \frac{1.5 -\sqrt{2.25-\gamma V_1\left({V_0\over V_1}-{1\over \phi_i}\right)}}{\gamma} = {0.807\over \gamma}, ~~\mathrm{or} ~~H_* = \sqrt {0.807\over \gamma},\ee where we have substituted $\alpha = {1\over 2} M_P^2$, ${V_0\over V_1} = 2 M_P^{-1}$ and $\gamma V_1 = 1 {M^5_P}$. Note that $\gamma$ is still arbitrary. Now, the energy scale of inflation in a single scalar field model corresponding to GTR is given by \cite{83} as, \be \label{sf} H_* = 8\times 10^{13}\sqrt{r\over 0.2}~GeV = 1.62\times 10^{13}~GeV = 6.6\times 10^{-6} M_P.\ee In the above, we have used the value of the tensor to scalar ratio $r = 0.0082$ from the last data set of Table-1. Thus, in order that the scale of inflation \eqref{scale} matches the single-field scale of inflation \eqref{sf}, $\gamma$ has to be of the order $\gamma \approx 1.8\times 10^{10}$, i.e.\ $\beta \approx 10^8$. Further, since individually each term has to be of the same order of magnitude, one can compare the two terms on the left hand side of \eqref{2.34a} to find that: \be 6\alpha H_*^2 \approx 1.29 \times 10^{-10} M_P^4,~~ \mathrm{while}~~108\beta H_*^4 = \gamma H_*^4 \approx 10^{-10} M_P^4,\ee provided $\gamma \approx 10^{10}$. It is important to note that we essentially have to fix only two parameters, $\gamma V_1$ and ${V_0\over V_1}$, for a viable inflationary model. However, in view of the above consideration (sub-Planckian scale of inflation), the coupling parameter $\gamma = 108\beta$ is also fixed, and it is found to be consistent with equation \eqref{2.34a}. This result is of course encouraging.
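Both the e-fold count and the inferred value of $\gamma$ are straightforward to reproduce numerically. The following sketch (our own cross-check, in units where $M_P = 1$; the conversion $M_P \approx 2.44\times 10^{18}$~GeV used to express $H_*$ is an assumption) evaluates the integral \eqref{2.34d} with the limits used above and then fixes $\gamma$ by the scale matching:

```python
import math

# Numerical cross-check (a sketch, units M_P = 1) of N from the e-fold
# integral and of the value of gamma fixed by the scale of inflation.
alpha, V0_over_V1, gV1 = 0.5, 2.0, 1.0

def integrand(phi):
    # (3/(gamma*V1)) * [3*alpha - sqrt(9*alpha^2 - gamma*V)] * phi^2,
    # since 1/V' = phi^2/V1 for the Case-I potential.
    gV = gV1 * (V0_over_V1 - 1.0 / phi)
    return (3.0 / gV1) * (3 * alpha - math.sqrt((3 * alpha)**2 - gV)) * phi**2

def simpson(f, a, b, n=2000):          # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

N = simpson(integrand, 0.92, 4.0)      # lower limit as used in the text
print(round(N))

# Scale of inflation for the N = 60 data set (phi_i = 4.35, r = 0.0082):
H_star = 8e13 * math.sqrt(0.0082 / 0.2) / 2.44e18   # in M_P (assumed M_P = 2.44e18 GeV)
gV_i = gV1 * (V0_over_V1 - 1.0 / 4.35)
gamma = (3 * alpha - math.sqrt((3 * alpha)**2 - gV_i)) / H_star**2
print(H_star, gamma, gamma / 108)
```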
In the same manner, one can also estimate the value of $V_1$ and hence $V_0$. Since the potential $V(\phi)$ appearing in \eqref{2.34a} also has to be of the same order of magnitude, \be V(\phi) = V_1\left({V_0\over V_1} - {1\over \phi}\right) = {1.77 V_1} \approx 0.96\times 10^{-10} M_P^4,\ee implying \be V_1 = 10^{-10}~M_P^5, ~\mathrm{and}~ V_0 \approx 2\times 10^{-10}~M_P^4.\ee In this manner all the parameters are fixed once and for all. Finally, we need to check if the model gracefully exits from inflation. In view of the form of the potential, $V(\phi) = V_0 - {V_1\over \phi}$, \eqref{2.33b} may be expressed as, \be \label{ge1} {3H^2\over V_1}-{\gamma H^4\over V_1}={\dot\phi^2\over 2 V_1} + \left({V_0\over V_1}-{1\over\phi}\right).\ee During inflation $\gamma H^4$ and $V_1$ are of the same order of magnitude, while the Hubble parameter varies slowly. At the end of inflation, however, the Hubble rate usually decreases sharply, and $\gamma H^4$ falls much below $V_1$. Hence ${\gamma H^4 \over V_1}$ falls off faster than ${3H^2\over V_1}$, and as a result ${\gamma H^4 \over V_1}$ may be neglected without loss of generality. The above equation then reads, \be \label{ge2} 6H^2=\dot\phi^2 -{2V_1\over\phi} + 2V_0.\ee Clearly, $\phi$ exhibits oscillatory behaviour provided $H$ does. In that case $H$ should not decrease much even at the end of inflation, and the second term ${\gamma H^4\over V_1}$ of \eqref{ge1} cannot be neglected; however, the same result (oscillatory behaviour of $\phi$ as well as $H$) emerges keeping the second term as well. Thus, it is possible that the scalar field oscillates many times over a Hubble time, driving a matter-dominated era at the end of inflation. In the process, a graceful exit from inflation is achieved. Once the scalar field is mostly used up in creating particles and reheating the universe, the oscillation halts and a radiation dominated era initiates. The Hubble parameter $H$ then starts evolving differently.
In a nutshell, the model considered here fits the evolution of the very early universe remarkably well.\\ \subsection{Case-II} In this subsection we consider the potential in the form: $V(\phi) = V_0 - V_1e^{-b\phi}$, so that $V' = b V_1e^{-b\phi}, ~~\mathrm{and},~~ V'' = -{V_1 b^2e^{-b\phi}}$. Fixing $\phi_i = 6 M_P,~2\alpha = 1 M_P^2,~{V_0\over V_1} = 0.867, ~ \gamma V_1 = \sqrt 5 M_P^4 ~\mathrm{and}~ {b=0.45 M_P^{-1}}$, we compute the slow-roll parameters as: \be \eta = -{b^2 e^{-b\phi}\over \left({V_0\over V_1}-e^{-b\phi}\right)}=-0.0170158,\ee \be \epsilon = \frac{\gamma^2V_1^2\left(b e^{-b\phi}\right)^2}{12\sqrt{2.25-{\gamma V_1\left({V_0\over V_1}-{e^{-b\phi}}\right)}}\left[1.5-\sqrt{2.25-{\gamma V_1\left({V_0\over V_1} -{e^{-b\phi}}\right)}}\right]^2}=0.000832993.\ee Therefore, \be r = 16 \epsilon =0.0133279,~~~n_s = 1 - 6\epsilon + 2\eta =0.9609,~~~ \mathrm{and}~~~n_t = -2\epsilon = -0.0017.\ee Further, at the value $\phi_f = 0.94 M_P$, $\epsilon = 1$, and slow rollover halts. The number of e-folds is found from the relation, \be\begin{split} N =& {9\alpha\over \gamma}\int_{\phi_f}^{\phi_i} {d\phi\over V'} - {3\over \gamma}\int_{\phi_f}^{\phi_i}{\sqrt{9\alpha^2-\gamma V}\over V'}d\phi \end{split}=60.\ee Clearly, the fit is again excellent.
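As in Case-I, these values can be cross-checked directly; note that the numerator of $\epsilon$ involves $V'^2$, i.e.\ the square $(b e^{-b\phi})^2$. A minimal sketch (ours) in units $M_P = 1$:

```python
import math

# Cross-check of the Case-II slow-roll parameters (a sketch, units M_P = 1).
alpha = 0.5
V0_over_V1 = 0.867
gV1 = math.sqrt(5)       # gamma*V1 = sqrt(5) M_P^4
b = 0.45
phi_i = 6.0

e = math.exp(-b * phi_i)
gV = gV1 * (V0_over_V1 - e)        # gamma*V(phi_i)
gVp = gV1 * b * e                  # gamma*V'(phi_i), since V' = b*V1*exp(-b*phi)

root = math.sqrt((3 * alpha)**2 - gV)
eps = gVp**2 / (12 * root * (3 * alpha - root)**2)
eta = -b**2 * e / (V0_over_V1 - e)

r = 16 * eps
n_s = 1 - 6 * eps + 2 * eta
n_t = -2 * eps
print(eps, eta, r, n_s, n_t)
```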
In Table-2, we therefore organize a data set for the inflationary parameters, varying $\phi_i$ in the range $6 M_P\le \phi_i \le 6.3 M_P$, so that the spectral index of scalar perturbation lies within the experimental limit.\\ \begin{figure} \begin{center} \begin{minipage}[h]{0.47\textwidth} \centering \begin{tabular}{|c|c|c|c|} \hline\hline $\phi_i$ in $M_P$ & $n_s$ & $r$ & $N$\\ \hline 6.00 & 0.9609 & 0.0133 & 60\\ \hline 6.05 & 0.9620 & 0.0127 & 62 \\ \hline 6.10 & 0.9630 & 0.0121 & 64 \\ \hline 6.15 & 0.9640 & 0.0115 & 66 \\ \hline 6.20 & 0.9650 & 0.0110 & 68 \\ \hline 6.25 & 0.9659 & 0.0105 & 70 \\ \hline 6.30 & 0.9668 & 0.0100 & 72 \\ \hline\hline \end{tabular} \captionof{table}{Data set for the inflationary parameters under the choice $F(T) =\alpha T + \beta T^2$, where $\alpha = {M_P^2\over 2}$, for the potential $V(\phi)=V_0 - V_1e^{-b\phi}$, taking into account, ${V_0\over V_1} = 0.867, ~ 108 \beta V_1 =\gamma V_1 = \sqrt 5 M_P^4 ~\mathrm{and}~ {b=0.45 M_P^{-1}}.$ Slow rollover ends at $\phi_f = 0.94 M_P$.} \label{tab:table2} \end{minipage} \end{center} \end{figure} Let us now find the energy scale of inflation using relation \eqref{2.34c}, from the fourth data set of the above table, associated with $N = 66$, as \be \label{2.34g} {H_*}^2 = \frac{1.5 -\sqrt{2.25-\gamma V_1\left({V_0\over V_1}-{e^{-b\phi_i}}\right)}}{\gamma}=\frac{0.8278}{\gamma },~~\mathrm{or} ~~H_* = \sqrt {0.8278\over \gamma},\ee where we have substituted $\alpha = {1\over 2} M_P^2$, ${V_0\over V_1} = 0.867$, $b = 0.45 M_P^{-1}$, $\phi_i = 6.15 M_P$ and $\gamma V_1 = \sqrt{5} M_P^4$. Note that $\gamma$ remains arbitrary as before. Now, using the relation for single field inflation corresponding to GTR \cite{83} we find \be \label{2.34h} H_* = 8\times 10^{13}\sqrt{r\over 0.2}~GeV \approx 7.89\times10^{-6} M_P,\ee where $r = 0.0115$ corresponds to the data set associated with $N = 66$ of Table-2.
Thus, in order that the scale of inflation \eqref{2.34g} matches the single-field scale of inflation, $\gamma$ has to be of the order $\gamma \approx 1.3\times 10^{10}$, i.e.\ $\beta \approx 10^8$. Note that the order of the dimensionless parameter $\beta$ remains unaltered from Case-I. Since individually each term has to be of the same order of magnitude, one can compare the two terms on the left hand side of \eqref{2.34a} to find, \be 6\alpha H_*^2 \approx 1.9 \times 10^{-10} M_P^4,~~ \mathrm{while}~~108\beta H_*^4 = \gamma H_*^4 \approx 10^{-10} M_P^4,\ee provided $\gamma \approx 10^{10}$. This is essentially a consistency check. Note that we basically had to fix only three parameters, $\gamma V_1$, ${V_0\over V_1}$ and $b$, for a viable inflationary model. However, in view of the above consideration (sub-Planckian scale of inflation), the coupling parameter $\gamma = 108\beta$ is also fixed. In the same manner, one can also fix the numerical value of $V_1$ and hence $V_0$. Since the potential $V(\phi)$ appearing in \eqref{2.34a} must also be of the same order of magnitude, \be V(\phi) = V_1\left({V_0\over V_1} -e^{-b\phi} \right) = 0.804 V_1 \approx 1.39\times 10^{-10} M_P^4,\ee which implies \be V_1 = 10^{-10}~M_P^4,~ \mathrm{and~hence},~ V_0 \approx 0.867\times 10^{-10}~M_P^4.\ee \noindent Now, in view of the potential $V(\phi) = V_0 - V_1e^{-b\phi}$, equation \eqref{2.33b} may be expressed as, \be {3H^2\over V_1}-{\gamma H^4\over V_1}={\dot\phi^2\over 2 V_1} + \left({V_0\over V_1}-{e^{-b\phi}}\right).\ee As before (Case-I), here again one might expect oscillatory behaviour of the scalar field $\phi$, provided $H$ oscillates as well at the end of slow rollover. In the process, a graceful exit from inflation may be achieved.
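Closing Case-II, the scale matching and the order-of-magnitude consistency above are easy to reproduce (a sketch, ours; the conversion $M_P \approx 2.43\times 10^{18}$~GeV is an assumption):

```python
import math

# Consistency check (a sketch) of the Case-II inflationary scale, units M_P = 1.
alpha = 0.5
r = 0.0115                                 # N = 66 row of Table 2
H_star = 8e13 * math.sqrt(r / 0.2) / 2.43e18   # ~7.89e-6 M_P (assumed M_P = 2.43e18 GeV)
gamma = 0.8278 / H_star**2                 # from the scale relation

print(gamma)                               # ~1.3e10, so beta = gamma/108 ~ 1e8
print(6 * alpha * H_star**2)               # ~1.9e-10 M_P^4
print(gamma * H_star**4)                   # same order of magnitude, ~1e-10 M_P^4
```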
\section{Analytical solution in the radiation dominated era.} The model under consideration unifies early scalar-driven inflation with the late-stage accelerating universe driven by the higher-order ($H^4$) term, as already demonstrated in \cite{Inf16} for an almost identical form of $F(T)$. The inflationary parameters fit the latest data sets released by Planck \cite{Planck1, Planck2} remarkably well. Inflation occurs at a sub-Planckian scale and the model admits a graceful exit from inflation too. In the present pressureless dust era, the model might even cross the phantom divide line $(\omega_e = -1)$, which is not excluded by observation. Such an outcome from the well-simplified model $(F(T) = \alpha T + \beta T^2)$ clearly motivates a study of the later stages of cosmological evolution, after the oscillatory scalar field has driven a matter dominated era; the radiation dominated era, to be precise. For this purpose, let us combine the first two field equations \eqref{2.33a} and \eqref{2.33b} to find: \be \label{3.2} {4\over 3}\gamma H^2 \dot H - 4\alpha \dot H = \dot\phi^2 + (\rho + p).\ee Additionally, we have the Bianchi identity, viz. \be\label{BI} \dot \rho + 3 H(\rho + p) = 0.\ee In the radiation dominated era $(p = {1\over 3}\rho)$, equation \eqref{BI} leads to $\rho = {\rho_{r0}\over a^4}$, where the constant $\rho_{r0}$ stands for the present value of the radiation density. Let us now seek a power-law solution for the scale factor in the form $a = a_0 t^n$, with $n < 1$ to ensure the decelerated expansion required for structure formation in the radiation dominated era. As a result, in view of the Bianchi identity, \eqref{2.33b} takes the form: \be\label{3.3} {6\alpha n^2\over t^2} - {\gamma n^4 \over t^4} = {1\over 2}\dot\phi^2 + V(\phi) + {\rho_{r0}\over a_0^4 t^{4n}}.\ee
It is important to mention that if the scalar field is completely used up in the process of particle creation and the potential reduces to a bare cosmological constant, then the above equation \eqref{3.3} is not satisfied, due to the presence of three different powers of $t$ appearing in the denominators. Therefore, we express \eqref{3.3} in the following manner, \be\label{3.4}{1\over 2}\dot \phi^2 + V(\phi) = {6\alpha n^2\over t^2}- {\gamma n^4 \over t^4} -{\rho_{r0}\over a_0^4 t^{4n}},\ee which under time differentiation takes the form, \be\label{3.5} \dot\phi\ddot \phi + \dot\phi V'(\phi) = -{12\alpha n^2\over t^3} + {4\gamma n^4 \over t^5} + {4n\rho_{r0}\over a_0^4 t^{4n+1}}.\ee In view of \eqref{2.33c} and \eqref{3.5}, we therefore find, \be\label{3.6} 3H\dot\phi^2 = {3n\dot\phi^2\over t} = {12\alpha n^2\over t^3} - {4\gamma n^4 \over t^5} - {4n\rho_{r0}\over a_0^4 t^{4n+1}}.\ee Thus, we have, \be\label{3.7} \phi = \phi_0 + \int\left[{4\alpha n\over t^2} - {4\gamma n^3 \over 3 t^4} - {4\rho_{r0}\over 3a_0^4 t^{4n}}\right]^{1\over 2} dt.\ee If we now seek a typical Friedmann-like solution $a \propto \sqrt t$, i.e.\ $n = {1\over 2}$, then the above equation may be integrated to yield, \be\label{3.7a} \phi = \phi_0 + c\ln\left[{c\left(ct + \sqrt{c^2 t^2-d^2}\right)}\right] - {\sqrt{c^2 t^2-d^2}\over t},\ee where $c^2 = 2\big(\alpha - {2\rho_{r0}\over 3a_0^4}\big)$ and $d^2 = {\gamma\over 6}$. Hence, in view of \eqref{3.4}, the potential may be expressed as \be\label{pot} V(t) = \frac {\alpha}{2t^2}+\frac{\gamma}{48t^4}-\frac{\rho_{r0}}{3a^4_{0}t^2}.\ee However, it is impossible to express the potential in closed form in terms of $\phi$. Nonetheless, it is apparent that the potential is in no way similar to either of the two potentials \eqref{Vphi} chosen to study inflation.
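It is straightforward to verify numerically that the closed-form pair \eqref{3.7a}, \eqref{pot} indeed satisfies the constraint \eqref{3.4} for $n = {1\over 2}$. In the sketch below (ours), the parameter values are arbitrary illustrative choices, $\rho$ stands for $\rho_{r0}/a_0^4$, and $\phi_0 = 0$:

```python
import math

# Verify (a sketch) that phi(t) from the closed-form solution and V(t) from
# the reconstructed potential satisfy the energy constraint with n = 1/2.
# Parameter values are arbitrary illustrative choices; rho = rho_r0/a_0^4.
alpha, gamma, rho = 1.0, 0.5, 0.3

c = math.sqrt(2 * (alpha - 2 * rho / 3))
d2 = gamma / 6

def phi(t):
    # phi_0 = 0; valid where c^2 t^2 - d^2 > 0
    S = math.sqrt(c**2 * t**2 - d2)
    return c * math.log(c * (c * t + S)) - S / t

def V(t):
    return alpha / (2 * t**2) + gamma / (48 * t**4) - rho / (3 * t**2)

max_res = 0.0
for t in (1.0, 2.0, 5.0, 10.0):
    h = 1e-6
    phidot = (phi(t + h) - phi(t - h)) / (2 * h)      # numerical derivative
    lhs = 0.5 * phidot**2 + V(t)
    rhs = 6 * alpha * 0.5**2 / t**2 - gamma * 0.5**4 / t**4 - rho / t**2
    max_res = max(max_res, abs(lhs - rhs))
print(max_res)
```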
For example, as a special case, if we choose $c = 0$, equivalently $\rho_{r0} = {3\over 2}\alpha a_0^4$, then \eqref{3.7a} reduces to \be \label{3.8} \phi =\phi_0 -{\sqrt{-d^2}\over t} = \phi_0-\left(\sqrt{-{\gamma \over 6}}\right) t^{-1}.\ee Clearly, we encounter a contradiction, since now a real solution is admissible only if $\gamma < 0$, while we found $\gamma \thickapprox 10^{10}$ for viable inflation. In any case, if we choose $\gamma = -{\gamma_0}^2$, so that $\phi = \phi_0 - \left(\sqrt{\gamma_0^2\over 6}\right) {1\over t}$, then we can use \eqref{3.4} to find \be\label{3.9} V = -{\gamma_0^2\over 48 t^4}, \ee resulting in the following quartic form of $V = V(\phi)$ viz., \be\label{3.10} V(\phi) = -{3\over 4\gamma_0^2} (\phi - \phi_0)^4.\ee Thus, a viable Friedmann-like radiation dominated era requires a quartic potential, with a negative coupling parameter $\beta = \frac{\gamma}{108}$. Such a form of quartic potential is devoid of a flat region, and hence slow rollover is not admissible. Hence, despite the fact that unification of early inflation with late-time cosmic acceleration has been achieved, a viable radiation dominated era in the middle remains obscure with the same form of potential. \section{Concluding remarks.} It is important to mention that $F(R)$ theory of gravity can unify early inflation with late-time accelerated expansion from purely geometric considerations, with a form like $F(R) = \alpha R + \beta R^m + \gamma R^{-n}$, where $m > 0,~n > 0$ \cite{1.80}. However, cosmic evolution in the radiation dominated era is not on par with Friedmann-like evolution, unless one neglects the $R^m$ term, which is not logical at that epoch. On the contrary, a particular form $F(R) = \alpha R + \beta R^2 + \gamma R^{3\over 2}$ was found to encompass cosmic evolution right from the radiation era till date, in a continuous manner \cite{1.81}, while the presence of the $R^2$ term can generate inflation in the very early universe \cite{1.82}.
Thus, unless we obtain anything better, there is no point in considering yet another theory, such as `Teleparallel gravity'. It is noticeable that an arbitrary form of $F(R)$ does not work. This is the reason behind the long search for suitable forms of $F(R)$ over the years, following either Noether symmetry analysis or the reconstruction program.\\ Likewise, in order to study the cosmological consequences of the so-called Teleparallel gravity, a particular form of $F(T)$ is required. Usually, either Noether symmetry is imposed or a reconstruction program is carried out to find such a form. Although a host of $F(T)$ forms has been explored so far, it is learnt that not a single Noether conserved current associated with the available forms of $F(T)$ satisfies the $(^0_0)$ equation of Einstein \cite{NM}. The reconstruction program, however, could find some of these, the simplest being $F(T) = \alpha T + \beta T^2$. Note that $F(T) = \alpha T$ is simply GTR, apart from a total derivative term. Thus the higher powers of $T$ are responsible for late-time cosmic acceleration. Hence, scalar field (inflaton or Higgs) driven inflation has been studied extensively over the last decade, taking into account some of these forms of $F(T)$. Inflationary parameters were matched with the then-released Planck data sets, a graceful exit from inflation was assured, and unification with late-time cosmic acceleration was found in some cases. It is therefore required primarily to match the inflationary parameters with the latest released Planck data set, and thereafter to study the evolution in the radiation-dominated era, which initiates soon after the graceful exit, as the universe enters the hot big-bang era. This was our present motivation.\\ Since a vacuum de-Sitter solution is admissible for an arbitrary form of $F(T)$, we choose the simplest form $F(T) = \alpha T + \beta T^2$, which accommodates late-time cosmic acceleration and allows crossing of the phantom divide line as well.
We then study slow-roll inflation driven by a scalar field, choosing two different types of potentials with flat sections, so that slow roll is applicable. The inflationary parameters, viz.\ the tensor to scalar ratio $(r)$ and the spectral index of scalar perturbation $(n_s)$, fit the recently released data from the Planck collaboration remarkably well, keeping the number of e-folds around $N = 60$. In the process, unification is achieved. However, the potentials responsible for slow roll do not ensure a decelerated Friedmann-like expansion in the radiation dominated era. On the contrary, the quartic potential required for this purpose is devoid of a flat section, and hence slow roll is not admissible.\\ Of course, slow-roll inflation with a quartic potential may be studied, but in more involved theories of gravity, e.g.\ in non-minimally coupled scalar-tensor theories \cite{sami, beh, dalia} or in higher order theories \cite{ranajit1, ranajit2, subhra}, in which an effective potential with a flat section comes into play. It is important to mention that $F(T)$ theories have also given rise to some disputes about an intriguing and essential feature: for example, the action of the theory is not invariant under local Lorentz transformations of the tetrad \cite{1.39, 1.50, 1.72, 1.72a}, and it also suffers from the unresolved pathology of the `Branched Hamiltonian', which we shall discuss in future work.
\section{Introduction} \label{s_intro} Random interlacements were introduced by Sznitman in~\cite{Szn10}, to model the trace of the simple random walk on the discrete torus $\Z_n^d:=\Z^d/n\Z^d$ or the discrete cylinder $\Z\times\Z^{d-1}$, in dimension~$d\geq 3$. Detailed treatments and reviews of recent results can be found in the books~\cite{CT12, DRS14, Szn12book}. Loosely speaking, the model of random interlacements in~$\Z^d$, $d\geq 3$, is a stationary Poissonian soup of bi-infinite simple random walk trajectories on the integer lattice. There is a parameter~$u>0$ entering the intensity measure of the Poisson process: the larger $u$~is, the more trajectories are thrown in. The sites of~$\Z^d$ that are not touched by the trajectories constitute the \emph{vacant set}~$\mathcal{V}^u$, and the union of all trajectories constitutes the interlacement set $\I^u=\Z^d\setminus \mathcal{V}^u$. The random interlacements are constructed simultaneously for all $u>0$ in such a way that $\I^{u_1}\subset \I^{u_2}$ if $u_1<u_2$. In fact, the law of the vacant set at level~$u$ can be uniquely characterized by the following identity: \begin{equation} \label{eq_vacant>3} \IP[A\subset \mathcal{V}^u] = \exp\big(-u \capacity (A)\big), \end{equation} where $\capacity(A)$ is the \emph{capacity} of a finite set~$A\subset\Z^d$. Informally, the capacity measures how ``big'' the set is from the point of view of the walk; see Section~6.5 of~\cite{LawlerLimic} for formal definitions, or Section~\ref{s_def} below. The model of random interlacements naturally has more independence built in than just one random walk on the torus or the cylinder (because on a fixed set one observes traces of \emph{independent} trajectories). Still, the analysis of random interlacements is difficult because of the long-range dependencies present there.
For example, in~$(1.68)$ from~\cite{Szn10} we can see that \begin{equation} \label{1-corr} \Cov(\1{x \in \mathcal{I}^u}, \1{y \in \mathcal{I}^u}) \sim \frac{c_{d}u}{\|x-y\|^{d-2}} \quad\text{ as } \quad \| x-y \| \to \infty, \end{equation} which means that the ``degree of dependence'' decreases polynomially in the distance. Naturally, one is interested in ``decoupling'' the events supported on distant regions; that is, in arguing that they are approximately independent to a certain degree. One possible approach to quantify that degree is the following: given finite sets $A_1,A_2~\subset~\Z^d$ and functions $f_1:\{0,1\}^{A_1} \to [0,1]$ and $f_2:\{0,1\}^{A_2} \to [0,1]$ depending on the interlacement set intersected with $A_1$ and $A_2$ respectively, we have \begin{equation} \label{basic_dec} \Cov_u(f_1,f_2) \leq c_{d} u \frac{\capacity(A_1) \capacity(A_2)}{\dist(A_1,A_2)^{d-2}}, \end{equation} as proved in formula~$(2.15)$ of~\cite{Szn10}, see also $(8.1.1)$ in \cite{DRS14}. However, the polynomial error term in \eqref{basic_dec} can complicate one's life in many applications (and, e.g.\ in the case when the diameters of these sets are of the same order as the distance between them, \eqref{basic_dec} is simply of no use); on the other hand, while \eqref{basic_dec} can be improved to some degree \cite{BGP}, the error term there should always be at least polynomial, as \eqref{1-corr} shows. To circumvent this difficulty, one may first note that usually the ``interesting'' events/functions are \emph{monotone} (i.e., increasing or decreasing). For increasing events, for example, we know that their probabilities increase as the parameter~$u$ increases. Note also that the FKG inequality (see~\cite{Tei09}, Theorem~$3.1$) gives us \begin{equation} \label{main_incr} \IE^u[g_1 g_2] \geq \IE^{u}[g_1] \IE^{u}[g_2], \end{equation} for \emph{any} increasing functions $g_{1,2}$ with finite second moments.
To complement the FKG inequality, we use \emph{sprinkling}, i.e., we slightly change the intensity of random interlacements in order to decrease the error term; this approach was used in \cite{Szn10} and \cite{Szn12}. Then, in particular, in \cite{PopovTeixeira} it was proved that \begin{equation} \label{strong_dec} \IE^u[f_1 f_2] \leq \IE^{(1+\eps)u}[f_1] \IE^{(1+\eps)u}[f_2] + c_d(r+s)^d \exp(-c_d ' \eps^2us^{d-2}); \end{equation} with $f_1:\{0,1\}^{A_1} \to [0,1]$ and $f_2:\{0,1\}^{A_2} \to [0,1]$ both increasing functions of the interlacement set, $r=\min(\diam(A_1),\diam(A_2))$, and $s=\dist(A_1,A_2)$. The same bound was also obtained for decreasing functions. It is important to observe, however, that the decoupling in the above form may not always be useful for one's needs. Intuitively, one is tempted to understand inequalities like \eqref{basic_dec} as ``what happens in one set does not influence a lot what happens in the other set''. Now, consider the following situation. Suppose that on top of the random interlacements we have some additional stochastic process (e.g., a random walk) that ``explores'' the interlacement set in some way. Assume that this process has already explored the interlacements in a given area, revealing a lot of information about it; think, for definiteness, that it simply revealed the interlacement set exactly. The probability of a particular configuration of the interlacement set is usually very small; so, \eqref{basic_dec} (even~\eqref{strong_dec}!) will blow up when one divides by that probability, because of the error term. In fact, at the end of Section~\ref{s_def} we discuss a particular model of the random walk on the interlacement set, where our main results turn out to be useful. This justifies the need for \emph{conditional} decoupling, i.e., showing that, given the configuration on some set, the law of the interlacement configuration on a distant set is still, in some sense, close to the unconditional law.
This is what we do in this paper. To prove our results, the main method we use is a suitable modification (one that allows dealing with conditional probabilities) of the \emph{soft local time} method of~\cite{PopovTeixeira}. We hope that this modification will be useful in other contexts, for instance, for dealing with the decoupling properties of the \emph{loop measures} \cite{CS14}. Another important observation is the following. There are strong connections between random interlacements and the Gaussian free field, see e.g.\ \cite{Szn12book, Szn12ecp}. In particular, there are decoupling inequalities similar to \eqref{basic_dec} and \eqref{strong_dec} for the Gaussian free field as well, see~\cite{PR15}. Notice, however, that the decoupling-with-sprinkling result for the Gaussian free field (Theorem~$1.2$ of \cite{PR15}) is \emph{already} conditional (the unconditional decoupling is obtained as a simple consequence, just by integration). On the other hand, note that the error terms in the conditional decoupling in the main result of this paper (Theorem~\ref{t_main1}) are much worse than those of \eqref{strong_dec}; related to this is the fact that in the conditional setting the minimal distance between the sets that permits the result to work is much bigger. A comparison with the situation for the Gaussian free field suggests that, hopefully, there is still much room for improvement in the conditional decoupling for random interlacements. \section{Definitions, notations and results} \label{s_def} In this section we will introduce the basic definitions, conventions and notation used in this paper. We will then be able to state our main result. We start by stating our convention regarding constants: $c$, $c'$, $c_1$, $c_2$, $c_3$,$\dots$ are always defined as strictly positive constants depending only on the dimension $d$. Constants may also change value from line to line, unless the text explicitly states the contrary.
We let $\| \cdot \|$ and $\| \cdot \|_{\infty}$ denote the Euclidean and $\ell_\infty$ norms in~$\Z^d$ respectively. For $x,y\in\Z^d$, we also let $\dist(x,y)\equiv \|x-y \|$. We say that two vertices $x,y\in\Z^d$ are neighbors when $\|x-y\|=1$, this notion introduces the usual nearest-neighbor graph structure in $\Z^d$. For~$x\in\Z^d$ and~$r\in\R_+$, we define \begin{equation*} B(x,r) :=\big\{ y\in\Z^d;\|y-x\| \leq r \big\}, \end{equation*} the discrete ball in the Euclidean norm centered on $x$ with radius $r$, and \begin{equation*} B_{\infty}(x,r) :=\big\{ y\in\Z^d;\|y-x\|_{\infty} \leq r \big\}, \end{equation*} the discrete ball in the $\ell_\infty$-norm centered on~$x$ with radius~$r$. Given a set $A\subseteq\Z^d$ we denote by \begin{equation*} A^C := \{x\in \Z^d;x\notin A\} \end{equation*} its complement and by \begin{equation*} \partial A := \big\{x\in A;\text{ there exists $y\in A^C$ such that $\|x-y\|=1$}\big\} \end{equation*} its (internal) boundary. For any set $Z$ and any two functions $f,g: Z\mapsto\R$, we write $f(z)\asymp g(z)$ to denote the fact that there exist two strictly positive constants, $c_1$ and $c_2$, such that $c_1 f(z) \leq g(z) \leq c_2 f(z)$ for all $z\in Z$. When $Z$ is equal to $\R$ we say that $f(z)=o(g(z))$ when $\frac{f(z)}{g(z)}$ goes to~$0$ as $z\rightarrow\infty$. Given~$x\in\Z^d$, we let~$\IP_{x}$ denote the probability measure associated with the simple random walk in $\Z^d$ started at $x$. We will also let $(X_k,k\geq 0)$ denote the simple random walk process in $\Z^d$. Given a set $A\subset \Z^d$, we define the entrance time for the set $A$ \begin{equation*} H_{A}:=\inf\big\{ k\geq 0;X_k\in A \big\}. \end{equation*} We also let the hitting time for $A$ be defined as \begin{equation*} \tilde H_{A}:=\inf\big\{ k\geq 1;X_k\in A \big\}. \end{equation*} When $A$ is finite we denote its harmonic measure by \begin{equation*} e_A(x)=\1{x\in A}\IP_x\big[ \tilde H_A=\infty \big]\text{ for $x\in\Z^d$}. 
\end{equation*} We are then able to define the capacity of the set $A$ \begin{equation*} \capacity (A):=\sum_{x\in A} e_A(x), \end{equation*} and the normalized harmonic measure \begin{equation*} \overline{e}_A(x):=e_A(x) \capacity(A)^{-1}. \end{equation*} We now write down the definition of the Green's function for the simple random walk in $\Z^d$: for $x,y\in\Z^d$, we let \begin{equation*} G(x,y):=\sum_{k\geq 0}\IP_{x}\big[ X_k=y \big]. \end{equation*} Theorem~$1.5.4$ of~\cite{LawlerI} provides us with the following estimate on the Green's function: \begin{equation} \label{greenestimate} G(x,y)\asymp \frac{1}{1+\|x-y\|^{d-2}}. \end{equation} Let us briefly discuss the definition of the measure associated with the random interlacements process intersected with a given finite set $A\subset\Z^d$. Assume we have constructed a probability space where, for every $i\geq 1$, there exists a simple random walk process $(X^{(i)}_k,k\geq 0)$ with starting distribution given by $\overline{e}_A(\cdot)$, and such that $(X^{(i)}_k,k\geq 0)$ is independent from~$(X^{(j)}_k,k\geq 0)$ for~$i\neq j$. We also assume that in this space we can construct an independent Poisson process~$(J_u)_{u\geq 0}$ on the positive real line with intensity~$\capacity(A)$. The law of the random interlacements process $(\I^u)_{u\geq 0}$ intersected with the set $A$ can then be characterized by \begin{equation} \label{interlacementsdef} (\I^u\cap A)_{u\geq 0} \eqd \Big(A\cap\bigcup_{i\leq J_u}\bigcup_{k\geq 0}X_k^{(i)} \Big)_{u\geq 0}, \end{equation} as can be seen in \cite{Szn10}, Proposition~$1.3$, or in the paragraph before $(2.6)$ in \cite{CT14}. This definition gives rise to compatible measures in the following sense: Given two finite sets~$K_1\subset K_2\subset\Z^d$, we have that $((\I^u\cap K_2)_{u\geq 0})\cap K_1$ has the same law as $(\I^u\cap K_1)_{u\geq 0}$. To state our main result, we need more definitions. 
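Before turning to these definitions, we record a small numerical illustration of the capacity just introduced (a sketch of our own, not needed in the sequel): for a single site one has $\capacity(\{0\}) = \IP_0\big[\tilde H_{\{0\}}=\infty\big] = G(0,0)^{-1}$, and for $d=3$ the Green's function at the origin can be evaluated from the standard Fourier representation $G(0,0) = (2\pi)^{-3}\int_{[-\pi,\pi]^3}\big(1-\tfrac13\sum_{i}\cos\theta_i\big)^{-1}\,d\theta \approx 1.516$. A crude midpoint-rule evaluation (the grid size below is an arbitrary choice) already comes close:

```python
import math

# Crude midpoint-rule evaluation (a sketch) of G(0,0) for the simple random
# walk on Z^3, via the Fourier representation of the Green's function.
# cap({0}) = P_0[no return to the origin] = 1/G(0,0).
n = 60                                    # grid points per axis (arbitrary)
h = 2 * math.pi / n
cos_pts = [math.cos(-math.pi + (k + 0.5) * h) for k in range(n)]

G = 0.0
for c1 in cos_pts:
    for c2 in cos_pts:
        for c3 in cos_pts:
            G += 1.0 / (1.0 - (c1 + c2 + c3) / 3.0)
G *= h**3 / (2 * math.pi)**3              # normalize by the cell volume
cap_origin = 1.0 / G
print(G, cap_origin)
```

The midpoint grid avoids the integrable singularity at $\theta=0$, which is what makes this crude scheme workable.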
\begin{figure}[ht] \centering \includegraphics[scale = 1]{A1def} \vspace{0.5cm} \caption{Definition of the sets $A_1^{\tiny\Circle}$, $A_2^{\tiny\Circle}$ and $V^{\tiny\Circle}$.} \label{processdeffig1} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale = 1]{A1defsquare} \vspace{0.5cm} \caption{Definition of the sets $A_1^{\tiny\Square}$, $A_2^{\tiny\Square}$ and $V^{\tiny\Square}$.} \label{processdeffig1square} \end{figure} Let $r>0$ be sufficiently large, and let $s:=s(r)>0$, with $s=o(r)$. We define $A_1^{\tiny\Circle}:=A_1^{\tiny\Circle}(r)$ to be the discrete ball of radius $r$, that is \begin{equation*} A_1^{\tiny\Circle}:=\{x_1\in\Z^d;\dist(x_1,0)<r\}. \end{equation*} We also define $A_1^{\tiny\Square}:=A_1^{\tiny\Square}(r,s)$ to be a $d$-dimensional discrete `hypercube' with edge length~$r$ and a smoothed frontier, in the sense that for every point $x_1\in\partial A_1$ there exists a discrete Euclidean ball~$B_{x_1}\subseteq A_1$ of radius $s$ such that $B_{x_1}\cap \partial A_1=\{x_1\}$. More precisely, we let $\mathfrak{H}_{r-s}$ be a discrete $d$-dimensional hypercube with edge length $r-s$ contained in~$\Z^d$ and define \begin{equation*} A_1^{\tiny\Square}:=\{x_1\in\Z^d;\dist(x_1,\mathfrak{H}_{r-s})\leq s\}. \end{equation*} We refer the reader to \cite{PopovTeixeira}, Section~$8$, to see that $A_1^{\tiny\Square}$ possesses the desired properties. Note that, since $s=o(r)$, the diameter of $A_1^{\tiny\Square}$ is of order $r$. We then define $A_2^{\tiny\Circle}:=A_2^{\tiny\Circle}(r,s)$ to be the set of points at distance greater than $2s$ from $A_1^{\tiny\Circle}$: \begin{equation*} A_2^{\tiny\Circle}:=\{x_1\in\Z^d;\dist(x_1,x_2)>2s\text{ for every }x_2\in A_1^{\tiny\Circle}\}.
\end{equation*} We finally define $V^{\tiny\Circle}:=V^{\tiny\Circle}(r,s)$ to be the boundary set \begin{equation*} V^{\tiny\Circle}:=\partial\{x_1\in\Z^d;\dist(x_1,x_2)\leq s\text{ for some }x_2\in A_1^{\tiny\Circle}\}, \end{equation*} separating $A_1^{\tiny\Circle}$ from $A_2^{\tiny\Circle}$. We analogously define $A_2^{\tiny\Square}(r,s)$ and $V^{\tiny\Square}(r,s)$. It will also be useful to define the $d$-dimensional hypercube $\mathfrak{H}_{r+2s}$ of edge length $r+2s$ concentric with~$\mathfrak{H}_{r-s}$, which will essentially be the unsmoothed version of $(A_2^{\tiny\Square})^C$. When there is no risk of confusion, or when the arguments presented work for both balls and smoothed hypercubes (which will often be the case), we will omit the super-indexes~$\tiny\Circle,\tiny\Square$. Since $s=o(r)$, we have \begin{equation*} \capacity(V)=\capacity(A_2)(1+o(1))=\capacity(A_1)(1+o(1)), \end{equation*} and also, by Proposition~$2.2.1$ and equation~$(2.16)$ of \cite{LawlerI}, \begin{equation} \label{asymcapv} \capacity(V)\asymp r^{d-2}. \end{equation} We will now state our main result. Heuristically, it says the following: Let~$s$ be bounded from below by a power of~$r$ with an explicitly given exponent (strictly smaller than~$1$, depending only on the dimension~$d$ and on whether~$A_1$ is a ball or a smoothed hypercube). Let~$A_3$ be a subset of~$A_2$ with finite boundary, that is, $A_3$ is either finite or has finite complement. If we pay a stretched exponentially small price (in~$s$) to guarantee that the interlacements configuration of~$\I^u\cap A_3$ is not too weird, then the distribution of~$\I^u\cap A_1$ conditioned on this configuration is well approximated by the unconditional distribution, with high probability ($1$ minus a stretched exponential function of~$s$).
\begin{figure} \centering \includegraphics[scale = 0.75]{A3def} \vspace{0.5cm} \caption{Our main result says that if the interlacements configuration in a set~$A_3\subseteq A_2$ is not too weird, that is, it does not belong to a set with stretched exponentially small probability (in~$s$, as~$s\rightarrow\infty$), then with high probability ($1$ minus stretched exponential in~$s$) the distribution of the interlacements set intersected with~$A_1$ conditioned on the state of~$\I^u\cap A_3$ can be well approximated by the unconditional distribution.} \label{A3def} \end{figure} \begin{theorem} \label{t_main1} Let the real numbers $b_{A_1^{\tiny\Circle}},b_{A_1^{\tiny\Square}}$ be such that \begin{align} \label{bcircledef} 1\leq b_{A_1^{\tiny\Circle}}<\frac{2d-2}{d},\\ 1\leq b_{A_1^{\tiny\Square}}<\frac{4d-4}{3d-2}\label{bsquaredef}. \end{align} Then, define \begin{align} \label{acircledef} a_{A_1^{\tiny\Circle}}&=2d-2-db_{A_1^{\tiny\Circle}}>0,\\ a_{A_1^{\tiny\Square}}&=4d-4-3db_{A_1^{\tiny\Square}}+2b_{A_1^{\tiny\Square}}>0.\label{asquaredef} \end{align} From now on we will again omit the indexes $\tiny\Circle,\tiny\Square$. Recall that $r$ is of the same order as the diameter of $A_1$, and that $s$ has the same order as the distance between $A_1$ and~$A_2$. Assume $r\asymp s^{b_{A_1}}$ and let $s$ be sufficiently large. Let $\eps>0$ be smaller than $1/4$. Let~$A_3$ be a subset of~$A_2$ such that~$|\partial A_3|<\infty$. Define $\I^u_{A_j}:=\I^u\cap A_j$, for~$j=1,2,3$.
Then there are positive constants~$c,c'$ depending only on the dimension $d$, and a measurable (according to the random interlacements $\sigma$-field) set~$\mathcal{G}\subseteq\{0,1\}^{A_3}$ such that \begin{equation*} \IP^u \big[ \I^u_{A_3}\in\mathcal{G}\big]\geq 1-\exp\Big(-\frac{c'}{2}\eps^2 u s^{a_{A_1}}\Big), \end{equation*} and for any increasing function $f$ on the interlacements set intersected with $A_1$, with $\sup |f| < M$, we have \begin{align} \label{e_conditionaldecoupling2} \big(\IE f( \I^{u(1-\eps)}_{A_1})-cM\exp\big(-c'\eps^2 u s^{a_{A_1}}\big)\big)\1{\I^u_{A_3}\in\mathcal{G}} &\leq \IE \big(f(\I^{u}_{A_1})\mid \I^{u}_{A_3} \big)\1{\I^u_{A_3} \in\mathcal{G}}\nonumber\\ &\leq \big(\IE f( \I^{u(1+\eps)}_{A_1}) +cM\exp\big(-c'\eps^2 u s^{a_{A_1}}\big)\big)\1{\I^u_{A_3}\in\mathcal{G}}. \end{align} \end{theorem} We also obtain a result analogous to Theorem~\ref{t_main1}, but this time we allow the sprinkling factor to be arbitrarily large. This decreases the ``precision'' (in the result below, $\IE f( \I^{u + u'}_{A_1})$ can be very different from~$\IE f( \I^{u}_{A_1})$), but, in compensation, the size of the complement of the ``good'' set as well as the ``error term'' become smaller. \begin{theorem} \label{t_main3} Let $u'>u>0$. We use the same definitions as in Theorem~\ref{t_main1}.
There are positive constants~$c,c'$ depending only on the dimension $d$, and a measurable (according to the random interlacements $\sigma$-field) set~$\mathcal{G}_{u'}\subseteq\{0,1\}^{A_3}$ such that \begin{equation*} \IP^u \big[ \I^u_{A_3}\in\mathcal{G}_{u'}\big] \geq 1-\exp\Big(-c' u' s^{a_{A_1}}\Big), \end{equation*} and for any increasing function $f$ on the interlacements set intersected with $A_1$, with $\sup |f| < M$, we have \begin{equation} \label{e_conditionaldecoupling3} \IE\big(f(\I^{u}_{A_1})\mid \I^{u}_{A_3} \big) \1{\I^u_{A_3}\in\mathcal{G}_{u'}} \leq \big(\IE f( \I^{u + u'}_{A_1})+cM\exp\big(-c' u' s^{a_{A_1}}\big)\big)\1{\I^u_{A_3}\in\mathcal{G}_{u'}}. \end{equation} \end{theorem} \begin{remark} We have to explain why we need to consider $A_3\subset A_2$. Indeed, at first sight it seems that conditioning on a configuration on~$A_3$ does not add generality to our results, since any fixed configuration on~$A_3$ corresponds to a set of configurations on~$A_2$. However, the problem with always setting $A_3=A_2$ is the following: the ``exceptional set''~$\mathcal{G}^c$ will then be supported on the whole of~$A_2$, and this can be inconvenient for applications. For example, assume that we successively apply the conditional decoupling results to a process (such as the one of Section~\ref{s_applic}) that ``explores'' the interlacement environment. If that process has explored only a finite chunk of~$A_2$, we would not be able to say if the configuration is ``good'' (i.e., belongs to~$\mathcal{G}$) by only observing that finite chunk.
This would force us to condition on the (configuration on the) whole~$A_2$, which would mean that a subsequent application of a conditional decoupling may be difficult, since we already ``revealed'' some information about the configuration on a set which is ``too big'' (i.e., when we apply the decoupling result for the next time, the ``new'' $A_1$ may be inside the ``previous'' $A_2$). \end{remark} \begin{remark} In the course of the proof of the above theorems, we actually prove a stronger result: the same conditional decoupling inequality holds true if we replace the sets $\I^u_{A_1}\subset A_1$ and $\I^u_{A_3}\subset A_3$ by sets of \emph{random walk excursions} in $A_1$ and $A_3$ (we also have to replace the function $f$ by an increasing function on the set of excursions). That is, the conditional decoupling continues to work when we replace the ranges of the excursions (which constitute the random interlacements set) by the actual excursions themselves. We chose to state the results in the above manner for the sake of clarity and brevity. Note that this remark also applies to the decoupling obtained by Popov and Teixeira in~\cite{PopovTeixeira}. \end{remark} \begin{remark} The above theorems can be proved in the same way if we replace the smoothed hypercube $A_1^{\tiny \Square}$ by a smoothed version of a box $[0,a_1]\times\dots\times[0,a_d]$, with~$c^{-1}r<a_i<cr$ for all $i=1,\dots,d$, and some constant~$c>1$, and then replace the sets $A_2^{\tiny \Square}$ and~$V^{\tiny \Square}$ accordingly. We chose to prove the theorems for~$A_1^{\tiny \Square}$ only to simplify the notation. We also note that we prove the theorem for both balls and boxes because the error term obtained in the decoupling for balls is much smaller than the error obtained in the decoupling for boxes, but at the same time the decoupling between boxes tends to be more useful because boxes cover the space in a much more efficient manner.
\end{remark} \begin{remark} For $d=3$, the only way to obtain an exponentially small (instead of a \emph{stretched exponentially} small) error term in equations~\eqref{e_conditionaldecoupling2} and~\eqref{e_conditionaldecoupling3} is to allow the distance~$\sim s$ between the sets~$A_1$ and~$A_2$ to be of the same order as the minimal diameter~$\sim r$. \end{remark} Here is an overview of the paper. In Subsection~\ref{s_applic}, we discuss an application of some of our results. In Section~\ref{s_slt} we recall the soft local times technique. In Section~\ref{s_simexc} we show how we simulate the interlacements set~$\I^u_{A_1}$ conditioned on the information given by~$\I^u_{A_2}$ using a suitable version of the soft local times method. Finally, in Section~\ref{s_cd}, we prove the main theorem using a large deviations estimate for the soft local times associated with~$\I^u_{A_1}$. The Appendix is then used to collect and derive the technical estimates we need. \subsection{An application: biased random walk on the interlacement set} \label{s_applic} Let~$G$ be some (possibly random) subset of~$\Z^d$, $d\geq 2$. Fix a parameter~$\beta>0$, which accounts for the bias; also, fix some non-zero vector~$\ell\in\Z^d$. Let us define the \emph{conductances} on the edges of~$\Z^d$ in the following way: \[ \mathcal{C}(x,y) = \begin{cases} e^{\beta(x+y)\cdot \ell}, & \text{if $x,y$ are neighbors and belong to $G$}, \\ 0, & \text{otherwise}, \end{cases} \] and we call the collection of all conductances $\omega = \big\{\mathcal{C}(x,y), x,y\in\Z^d\big\}$ the random environment. Consider a random walk $(X_n, n\geq 0)$ in this environment of conductances; i.e., its transition probabilities are given by \[ P^{\omega}[X_{n+1}=y \mid X_n=x] = \frac{\mathcal{C}(x,y)}{\sum_{z}\mathcal{C}(x,z)} \] (the superscript in~$P^{\omega}$ indicates that we are dealing with the ``quenched'' probabilities, i.e., when the underlying random graph / conductances are already fixed).
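To make the quenched law above concrete, here is a minimal Python sketch of the conductances and transition probabilities on a toy subset of $\Z^2$; the set $G$, the bias $\beta$ and the direction $\ell$ are illustrative choices of ours.

```python
import math

beta = 0.5                              # bias strength (illustrative)
ell = (1, 0)                            # bias direction in Z^2
G = {(0, 0), (1, 0), (1, 1), (2, 1)}    # a small subset standing in for the cluster

def conductance(x, y):
    # C(x,y) = exp(beta * (x+y).ell) on edges with both endpoints in G, else 0.
    if x in G and y in G and sum(abs(a - b) for a, b in zip(x, y)) == 1:
        return math.exp(beta * sum((a + b) * l for a, b, l in zip(x, y, ell)))
    return 0.0

def transition_probs(x):
    # Quenched law: P[X_{n+1} = y | X_n = x] proportional to C(x,y).
    nbrs = [(x[0] + dx, x[1] + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    w = {y: conductance(x, y) for y in nbrs}
    total = sum(w.values())
    return {y: c / total for y, c in w.items() if c > 0}

p = transition_probs((1, 0))
```

Note how the edge pointing in the direction of the drift receives exponentially more weight, which is the mechanism keeping the walk inside a trap.
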
There has been significant interest in this model in recent years, mainly in the case when~$G$ is the infinite cluster of the supercritical Bernoulli percolation model, see e.g.\ \cite{BergerGantertPeres, Szn03, Fribergh}. In particular, one remarkable fact is the following: the walk is ballistic (transient and with positive speed) in the direction of the drift if $\beta>0$ is small enough; however, it moves only sublinearly fast (its displacement is only of order~$t^a$ by time~$t$ with~$a\in(0,1)$, as proved in~\cite{FriberghHammond}) for large values of~$\beta$. In the work~\cite{FriberghPopov} the case $G=\I^u$ was considered. It turned out that in dimension~$d=3$, for \emph{any} value of~$\beta>0$, although still transient in the direction of the drift, the walk is not only sub-ballistic, but also has sub-polynomial speed, in the sense that its distance to the origin grows slower than~$t^\eps$ for any $\eps>0$. This is also in contrast with the result that the walk on~$\I^u$ without any drift is diffusive (so, loosely speaking, its ``speed'' is~$\sqrt{t}$), as shown in~\cite{ProcacciaRosenthalSapozhnikov}. We will not describe all the details of~\cite{FriberghPopov} here, but the main idea is the following. As in the case of the biased walk on the infinite percolation cluster, to prove zero speed one needs to show that the walk frequently gets caught in traps. These traps are ``dead ends'' of the environment looking in the direction of the bias, see Figure~\ref{f_trap}. \begin{figure} \centering \includegraphics{trap} \caption{A trap for the random walk on the interlacement set (on this picture, the bias is directed along the first coordinate vector).
Only the interlacements are shown; the trajectory of the RWRE~$X$ is not present on the picture.} \label{f_trap} \end{figure} When the walk enters such a trap, the bias prevents it from going out, so there is a good chance that the walk will spend quite a lot of time there, and this effectively leads to zero speed. Now, the crucial fact is that, specifically in three dimensions, it is much cheaper to have a trap in the interlacement set than in the (Bernoulli) percolation cluster. Indeed, it is possible to show that the capacity of the dotted set on Figure~\ref{f_trap} is of order $\frac{\ln t}{\ln \ln t}$ for any fixed $\alpha<1$. The formula~\eqref{eq_vacant>3} then shows that having a trap as above has only a subpolynomial (in~$t$) cost; also, it turns out that ``forcing'' a trajectory to create a ``dead end'' as shown on the picture is also not too costly. So, when the walk advances in the direction of the bias, from time to time it will encounter a trap and be trapped. However, to make such an argument rigorous, one has to face the following difficulty. When the walk has already explored some parts of the environment and then comes to an unexplored area, we can no longer use~\eqref{eq_vacant>3} to estimate the probability that there is a trap in front of it, due to the lack of independence. It is here that the conditional decoupling enters the scene: it is possible to use the main results of this paper to show that the probability of having a trap in front of the particle (when it comes to an unexplored area) is not very small. As mentioned above, the detailed argument can be found in~\cite{FriberghPopov}. \section{Soft local times} \label{s_slt} In the present section we describe the technique introduced in \cite{PopovTeixeira}, the so-called Soft Local Times method. This method essentially allows us to simulate any number of random variables taking values in a state space~$\Sigma$ using a realization of a Poisson point process in~$\Sigma\times\R_+$.
Let~$\Sigma$ be a locally compact Polish metric space, and let~$\mathcal{B}(\Sigma)$ be its Borel~$\sigma$-algebra. Let~$\mu$ be a Radon measure over~$\mathcal{B}(\Sigma)$, so that every compact set has finite $\mu$-measure. Such a measure space~$(\Sigma,\mathcal{B}(\Sigma),\mu)$ is the usual setup for the construction of a Poisson point process on~$\Sigma$. We consider the space of Radon point measures in~$\Sigma\times\R_+$ \begin{equation} \label{e_M1} \M = \Big\{\m = \sum_{\lambda \in \Lambda} \delta_{(z_\lambda, v_\lambda)}; z_\lambda \in \Sigma, v_\lambda \in \mathbb{R}_+ \text{ and } \m(K) < \infty \text{ for all compact $K$} \Big\}, \end{equation} endowed with the $\sigma$-algebra generated by the evaluation maps \begin{equation*} \eta\mapsto\eta(D),\quad D\in\mathcal{B}(\Sigma)\otimes\mathcal{B}(\R_+). \end{equation*} We are then able to construct a Poisson point process~$\eta$ in the space $(L,\mathcal{D},\mathbb{Q})$ with intensity measure given by $\mu\otimes\d v$, where~$\d v$ is the Lebesgue measure on~$\R_+$, see \cite{Resnick1}, Proposition~$3.6$ on p.~$130$. The next proposition, originally seen in \cite{PopovTeixeira}, is at the core of the soft local times argument. \begin{proposition} \label{p_simslt} Let $g:\Sigma \to \mathbb{R}_+$ be a measurable function with $\int g(z) \mu(\d z) = 1$. For $\m = \sum_{\lambda \in \Lambda} \delta_{(z_\lambda, v_\lambda)} \in \M$, we define \begin{equation} \xi = \inf \{ t \geq 0; \text{ there exists $\lambda \in \Lambda$ such that $t g(z_\lambda) \geq v_\lambda$}\}.
\end{equation} Then under the law $\mathbb{Q}$ of the Poisson point process~$\m$, \begin{enumerate}[(i)]\addtolength{\itemsep}{2mm}\vspace{2mm} \item \label{e:io} there exists a.s.\ a unique $\hat{\lambda} \in \Lambda$ such that $\xi g(z_{\hat{\lambda}}) = v_{\hat{\lambda}}$, \item \label{e:xiio} $(z_{\hat{\lambda}}, \xi)$ is distributed as $g(z) \mu(\d z) \otimes \Exp(1)$, \item \label{e:mprime} $\m' := \sum_{\lambda \neq \hat{\lambda}} \delta_{(z_\lambda,v_\lambda - \xi g(z_\lambda))}$ has the same law as~$\m$ and is independent of $(\xi, \hat{\lambda})$. \end{enumerate} \end{proposition} The proof is remarkably simple, mainly relying on the independence of a Poisson process in disjoint sets, and can be seen in the original paper. With the above proposition we are able to simulate as many random variables as we want: \begin{figure}[ht] \centering \includegraphics[scale = 1]{softlocaltimesfig1} \vspace{0.5cm} \caption{An example showing the definition below. Under mild conditions we are able to use Proposition~\ref{p_simslt} to simulate a sequence of random variables over $\Sigma$.} \label{softlocaltimesfig1} \end{figure} Let $X_1, X_2,\dots, X_n$ be random variables on $\Sigma$ such that $X_1$'s distribution is absolutely continuous with respect to $\mu$ and, for all $i=2,\dots,n$, the probability measure generated by $X_i$, conditioned on the values taken by $X_1,\dots, X_{i-1}$, is absolutely continuous with respect to $\mu$.
Using the process $\m$ constructed above, we define \begin{equation} \begin{aligned} & g_1:\Sigma\mapsto\R_+\text{, the density function of $X_1$ with respect to $\mu$,}\\ & \xi_{1} := \inf \big\{ t \geq 0; \text{ there exists $\lambda \in \Lambda$ such that $t g_1( z_\lambda) \geq v_\lambda$}\big\}, \\ & G_{1}(z) := \xi_{1} \; g_1(z), \text{ for $z \in \Sigma$,}\\ & (z_{\lambda_1}, v_{\lambda_1}) \text{, the unique pair in $\{(z_\lambda, v_\lambda)\}_{\lambda \in \Lambda}$ with $G_1( z_{\lambda_1}) = v_{\lambda_1}$.} \end{aligned} \end{equation} We now define $g_2:\Sigma\mapsto\R_+$ to be the density of $X_2$ conditioned on the event $\{X_1=z_{\lambda_1}\}$. Using the fact that $\m_1 := \sum_{\lambda \neq {\lambda_1}} \delta_{(z_\lambda,v_\lambda - \xi_1 g_1(z_\lambda))}$ has the same law as $\m$ and is independent from $(\xi_1, {\lambda_1})$ we define \begin{equation} \begin{aligned} & \xi_{2} := \inf \big\{ t \geq 0; \text{ there exists $\lambda \in \Lambda$ such that $t g_2( z_\lambda)+G_1(z_{\lambda}) \geq v_\lambda$}\big\}, \\ & G_{2}(z) := \xi_{2} \; g_2(z)+G_1(z), \text{ for $z \in \Sigma$,}\\ & (z_{\lambda_2}, v_{\lambda_2}) \text{, the unique pair in $\{(z_\lambda, v_\lambda)\}_{\lambda \in \Lambda}$ with $ G_2( z_{\lambda_2}) = v_{\lambda_2}$.} \end{aligned} \end{equation} Then, recursively, for $1\leq k\leq n$ we define $g_k:\Sigma\mapsto\R_+$ to be the density function of $X_k$ conditioned on the event $\{X_1=z_{\lambda_1},\dots,X_{k-1}=z_{\lambda_{k-1}}\}$, \begin{equation} \begin{aligned} & \xi_{k} := \inf \big\{ t \geq 0; \text{ there exists $\lambda \in \Lambda$ such that $t g_k( z_\lambda)+G_{k-1}(z_{\lambda}) \geq v_\lambda$}\big\}, \\ & G_{k}(z) := \xi_{k} \; g_{k}(z)+G_{k-1}(z), \text{ for $z \in \Sigma$,}\\ & (z_{\lambda_k}, v_{\lambda_k}) \text{, the unique pair in $\{(z_\lambda, v_\lambda)\}_{\lambda \in \Lambda}$ with $ G_k( z_{\lambda_k}) = v_{\lambda_k}$.} \end{aligned} \end{equation} We refer to Figure~$\ref{softlocaltimesfig1}$. 
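When $\Sigma$ is finite and $\mu$ is the counting measure, the recursive construction above is straightforward to implement: for each $z\in\Sigma$ the heights $v_\lambda$ form a rate-one Poisson process on $\R_+$, and each step picks the point first touched by the rising curve $t\,g_k(z)+G_{k-1}(z)$. The following Python sketch simulates two dependent random variables this way; the state space and the toy densities $g_1,g_2$ are our own illustrative choices, not objects from the text.

```python
import random

random.seed(7)
Sigma = ["a", "b", "c"]  # finite state space; mu = counting measure

def poisson_points(height=50.0):
    # For each z, the heights v_lambda form a rate-1 Poisson process on R_+.
    pts, v = [], 0.0
    while True:
        v += random.expovariate(1.0)
        if v > height:
            return pts
        pts.append(v)

points = {z: poisson_points() for z in Sigma}

def soft_local_time_step(G, g):
    # One step of the recursion: xi = inf{t >= 0 : t g(z) + G(z) >= v_lambda
    # for some point (z, v_lambda)}; returns (xi, selected state, updated G).
    xi, z_hat, v_hat = min(
        ((v - G[z]) / g[z], z, v)
        for z in Sigma if g[z] > 0
        for v in points[z] if v > G[z]
    )
    points[z_hat].remove(v_hat)  # the selected point is consumed
    return xi, z_hat, {z: G[z] + xi * g[z] for z in Sigma}

# Simulate X_1, X_2 where the conditional density of X_2 depends on X_1.
G = {z: 0.0 for z in Sigma}
g1 = {"a": 0.5, "b": 0.3, "c": 0.2}
xi1, x1, G = soft_local_time_step(G, g1)
g2 = {"a": 0.2, "b": 0.2, "c": 0.6} if x1 == "a" else {"a": 0.6, "b": 0.2, "c": 0.2}
xi2, x2, G = soft_local_time_step(G, g2)
```

By Proposition~\ref{p_simslt}, the selected states have the prescribed joint law and the $\xi_k$ are i.i.d.\ $\Exp(1)$; the final dictionary \texttt{G} is exactly the soft local time $G_2(z)=\xi_1 g_1(z)+\xi_2 g_2(z)$.
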
Using Proposition~\ref{p_simslt} together with the above construction, we are able to state the following proposition: \begin{proposition} \label{p_decslt} The vector $(z_{\lambda_1},\dots,z_{\lambda_n})$ has the same law as $(X_1,\dots,X_n)$. \end{proposition} We call the function $G_n(z)$ the soft local time of the vector $(X_1,\dots,X_n)$ up to time~$n$ with respect to the measure $\mu$, or more usually simply the soft local time. If $T$ is a stopping time with respect to the canonical filtration generated by the variables $X_i$, it is simple to define $G_T(z)$, the soft local time up to time $T$. Note that by controlling the value of the soft local times function we will automatically control the values our random variables take, as the next corollary summarizes: \begin{corollary} \label{c_couplez} For any measurable function $h: \Sigma \to \R_+$ we have, using the same notation as above, \begin{equation} \label{e_couplesingle} \mathbb{Q} \Big[ \{z_{\lambda_1}, \dots, z_{\lambda_T}\} \subseteq \{z_{\lambda}; v_{\lambda} \leq h(z_{\lambda})\} \Big] \geq \mathbb{Q} \big[ G_T(z) \leq h(z), \text{ for $\mu$-a.e. $z \in \Sigma$} \big], \end{equation} for any finite stopping time $T \geq 1$. \end{corollary} \section{Simulating excursions} \label{s_simexc} In this section we will show a way of simulating the intersection of the random interlacements set with a given subset of $\Z^d$ in such a way as to make explicit the dependence of each random walk excursion on its entrance and exit points on the subset. We refer the reader to Figure~\ref{clotheslineslt} for a brief overview of the arguments used in this section. \begin{figure}[ht] \centering \includegraphics[scale = 1]{clotheslineslt} \caption{The figure shows how we will use the soft local times technique to simulate the range of a simple random walk trajectory intersected with~$A_1$.
We first simulate a process of pairs of points $((W_k,Y_k),k\geq 1)$ denoting the entrance at $V$ and exit at $\partial A_2$ of a simple random walk trajectory that starts at $V$. We then use the soft local times method to simulate the pieces of trajectory that lie between each of the pairs $(W_k,Y_k)$.} \label{clotheslineslt} \end{figure} It is clear from \eqref{interlacementsdef} that, in order to simulate the random interlacements set at level $u$ in a bounded subset $K$ of $\Z^d$, we need only pick $N^u_K\eqd\textit{Poisson}(u\capacity(K))$ points in $\partial K$, each point chosen according to the measure~$\overline e_K (\cdot)$, and start a simple random walk from each point. We intend to study $\I^u_{A_1}=\I^u\cap A_1$, showing that this set is not much influenced by the random interlacements set intersected with $A_2$, $\I^u_{A_2}=\I^u\cap A_2$. We will later clarify what we mean by ``influence''. For now, we observe that the only ``information'' $\I^u_{A_1}$ receives from $\I^u_{A_2}$ is the location of the entrance and exit points of the excursions on $\partial A_2$ of the random walks that constitute $\I^u_{A_2}$. Let us begin the work towards our result. We first generate the points of entrance at~$V$ and exit from $A_2^C$ of each excursion on $V$ of a random walk trajectory. These points will be the clothesline onto which we will hang the pieces of trajectory that meet $A_1$; we will do so using the soft local times method. Let us define the successive return and departure times between $V$ and $A_2$.
Given a trajectory that starts at $V$, we define \begin{align} \nonumber D_0 &= 0, \qquad & & R_1 = H_{\partial A_2},\\ D_1 & = H_{\V} \circ \theta_{R_1} + R_1, \qquad & & R_2 = H_{\partial A_2} \circ \theta_{D_1} + D_1,\label{exctimedef}\\ D_2 & = H_{\V} \circ \theta_{R_2} + R_2 \qquad & & \text{and so on.} \nonumber \end{align} We also define the random time \begin{equation} \label{e_tdelta} T_\Delta = \inf \{k \geq 1; R_k = \infty\}, \end{equation} which is almost surely finite, as the walk is transient. Let $(X_n,n\geq 0)$ be the simple random walk with initial distribution given by $\overline e_V (\cdot)$. Let~$\Delta$ be an artificial cemetery state. We construct a random sequence of elements of $(V\times \partial A_2)\cup\{\Delta\}$ in the following way: Conditioned on the event $\{T_{\Delta}=m\}$, we let \begin{align*} \lefteqn{\big((W_1,Y_1),\dots,(W_{m-1},Y_{m-1}),(W_{m},Y_{m}),(W_{m+1},Y_{m+1}),\dots\big)} \phantom{********************}\\ &= \big((X_{D_0},X_{R_1}),\dots,(X_{D_{m-2}},X_{R_{m-1}}),\Delta,\Delta,\dots\big). \end{align*} It is then elementary to prove that the process $((W_k,Y_k))_{k\geq 1}$ inherits the Markov property from the simple random walk. We call $((W_k,Y_k))_{k\geq 1}$ the clothesline process started at~$W_1$. When there is no risk of confusion we will also denote by $\IP_{w_0}$ the probability measure associated with the clothesline process started at a given point $w_0\in V$. \begin{figure}[ht] \centering \includegraphics[scale = .8]{processdeffig2} \vspace{0.5cm} \caption{An example of the process $((W_k,Y_k))_{k\geq 1}$.} \label{processdeffig2} \end{figure} Let us now use the soft local times method to generate the trajectories inside $A_1$, given the entrance and exit points $((W_k,Y_k))_{k\geq 1}$. We first define the underlying space $\Sigma$ where our pieces of trajectories will live. 
We let $\mathcal{K}$ be the set of nearest-neighbor paths in $A_2^C$ with one endpoint in $\partial A_1$ and the other in $V$, \begin{equation} \mathcal{K} := \big\{(x_0,x_1,\dots , x_n);\ n\in\N,\ x_i\in A_2^C\text{ for $1\leq i\leq n$},\ x_0\in\partial A_1,\ x_n\in V\big\}. \end{equation} \begin{figure}[ht] \centering \includegraphics[scale = .8]{processdeffig3} \vspace{0.5cm} \caption{The definition of $\sigma(w,y)$ and $\Xi(w,y)$.} \label{processdeffig3} \end{figure} We introduce yet another artificial state $\Theta$ for reasons that will be made clear in a few moments. We let $\Sigma := \mathcal{K}\cup \{\Theta\} $ and let $\mu$ be a measure on $\Sigma$ defined in the following way: given $A\subseteq \Sigma$, \begin{equation} \mu(A):= \sum_{(x_0,\dots,x_n)\in A}\mathbb{P}_{(x_0,x_n)}[X_0=x_0,\dots,X_n=x_n]+1_{\{\Theta\in A\}}, \end{equation} where $\mathbb{P}_{(x_0,x_n)}$ is the simple random walk measure conditioned on the event that $x_0$ is the walk's initial point and $x_n$ is its last point on $V$ before reaching $\partial A_2$. Notice that~$\mu(\{\Theta\})=1$. Given $(w,y)\in V\times \partial A_2$ we let $\mathbb{P}_{w,y}$ be the measure associated with the simple random walk starting at $w$ conditioned on the event that $y$ is the first point the walk hits in~$\partial A_2$, that is: \begin{equation} \mathbb{P}_{w,y}[\cdot]:=\mathbb{P}_{w}[\,\cdot\mid X_{H_{\partial A_2}}=y]. \end{equation} We want to randomly select (according to the conditional simple random walk measure above) a piece of trajectory in $A_1$ given a starting point in $V$ and an ending point in $\partial A_2$. Given $w\in V$ and $y\in\partial A_2$ we define the random element $\sigma_{w,y}\in\Sigma$ in the following way: \begin{itemize} \item Let $\mathcal{B}_{w,y}$ be a Bernoulli random variable with parameter $\mathbb{P}_{w,y}[H_{\partial A_1}<H_{\partial A_2}]$. \item If $\mathcal{B}_{w,y}=0$ we let $\sigma_{w,y}\equiv\Theta$.
\item If $\mathcal{B}_{w,y}=1$ we let, for $\mathfrak{A}\subseteq\mathcal{K}$: \begin{equation} \mathbb{P}[\sigma_{w,y}\in \mathfrak{A}]=\sum_{(a_0,\dots,a_n)\in \mathfrak{A}}\mathbb{P}_{w,y}\left[\begin{array}{c}X_{H_{A_1}}=a_0,X_{H_{A_1}+1}=a_1,\dots,X_{H_{A_1}+n}=a_n,\\X_k\notin A_1\text{ for every }k=H_{A_1}+n+1,\dots,H_{A_2}\end{array}\right]. \end{equation} \end{itemize} In other words, the random element $\sigma_{w,y}\in\Sigma$ will either be $\Theta$, on the event that a random walk starting at $w$ and exiting at $y$ fails to reach $A_1$, or a simple random walk trajectory~$(x_{0}^{w,y},x_{1}^{w,y},\dots , x_{k(w,y)}^{w,y})\in\mathcal{K}$ distributed so that $x_{0}^{w,y}$ is the first point in $A_1$ after the start at $w$ and $x_{k(w,y)}^{w,y}$ is the last point in $V$ before reaching $y\in\partial A_2$. We then define~$g_{(w,y)}:\Sigma\mapsto\R_+$ to be the $\mu$-density of $\sigma_{w,y}$. We refer to Figure~\ref{processdeffig3}. Given $z=(x_0,\dots,x_n)\in \mathcal{K}$ we denote by $\Xi(z)$ the pair~$(x_0,x_n)$, the path's starting and ending points. We also let~$\Xi(\Theta)=\Theta$ so that $\Xi(z)$ is defined for all~$z\in \Sigma$. For $(w,y)\in V\times \partial A_2$ we define~$\Xi(w,y)$ to be the random element~$\Xi(\sigma_{w,y})$. Let us calculate~$g_{(w,y)}$ using the above notation. For $\mathfrak{A}\subseteq\Sigma$ we want to express the probability $\mathbb{P}[\sigma_{w,y}\in \mathfrak{A}]$ as a~$\mu$-integral over $\mathfrak{A}$.
\begin{equation} \begin{array}{lcl} \mathbb{P}[\sigma_{w,y}\in \mathfrak{A}] & = & \sum_{a\in \mathfrak{A}}\mathbb{P}[\sigma_{w,y}=a]\\ & = & 1_{\{\Theta\in \mathfrak{A}\}}\mathbb{P}_{w,y}[\Xi(w,y)=\Theta]\\ \\ & &+ \sum_{\substack{a\in \mathfrak{A} \\ a\neq \Theta}} \mathbb{P}_{w,y}[\Xi(w,y)=\Xi(a)] \mathbb{P}_{w,y}[a\mid \Xi(w,y)=\Xi(a)]\\ & = & 1_{\{\Theta\in \mathfrak{A}\}}\mathbb{P}_{w,y}[\Xi(w,y)=\Theta]+\sum_{\substack{a\in \mathfrak{A} \\ a\neq \Theta}} \mathbb{P}_{w,y}[\Xi(w,y)=\Xi(a)] \mathbb{P}_{\Xi(a)}[a]\\ & = & \sum_{a\in \mathfrak{A}}\mathbb{P}_{w,y}[\Xi(w,y)=\Xi(a)]\mu(a) \\ & = & \int_\mathfrak{A} \mathbb{P}_{w,y}[\Xi(w,y)=\Xi(z)]\mu(\d z), \end{array} \end{equation} so that $g_{(w,y)}(z)= \mathbb{P}_{w,y}[\Xi(w,y)=\Xi(z)]$. Notice that the function $g_{(w,y)}(z)$ only depends on the pair $\Xi (z)$, the path's initial and ending points. Let $(L,\mathcal{D},\mathbb{Q})$ be the measure space of the Poisson point process on $\Sigma\times\R_{+}$ with intensity measure $\mu\otimes\d v$, where $\d v$ is the Lebesgue measure on $\R_{+}$. A weighted sum of functions $g_{(\cdot,\cdot)}$ indexed by clothesline processes $((W_k,Y_k))_{k\geq 1}$ will be the soft local time used to simulate the pieces of trajectory we need. This way we will be able to simulate the intersection of a simple random walk trajectory with $A_1$. As we have seen in the definition of the random interlacements process, to simulate the interlacements set inside $V$ we need a number $N^u_V\eqd\textit{Poisson}(u\capacity (V))$ of independent random walks. We will need the same number of independent clothesline processes. For this task we will need a much bigger probability space, which is easily defined as a product between the Poisson point process space and an infinite product of independent simple random walk spaces starting on $V$. We call this bigger space the global probability space, and denote by $\GP$ its probability measure, which we will call the `global probability'.
Given a clothesline process $((W_k,Y_k))_{k\geq 1}$, we define the trajectory's soft local time: \begin{equation} \label{sltclothdef} G(z)=\sum_{k=1}^{T_\Delta}\xi_k g_{(W_k,Y_k)}(z). \end{equation} We will also need to consider the soft local time up to a random time $T\leq T_\Delta$: \begin{equation} G_T(z)=\sum_{k=1}^{T}\xi_k g_{(W_k,Y_k)}(z). \end{equation} Analogously, we define for any deterministic time $n\geq 1$ \begin{equation} G_n(z)=\sum_{k=1}^{n}\xi_k g_{(W_k,Y_k)}(z). \end{equation} We denote by $z_k$ the piece of trajectory randomly selected by the $k$-th soft local time,~$G_k$. As we have seen before, in order to simulate the random interlacements set at level~$u$ in~$A_1$, we actually need a number \begin{equation*} N^u_V\eqd\textit{Poisson}(u\capacity (V)) \end{equation*} of random walk trajectories, each started at a point in $V$ distributed as $\overline e_V (\cdot)$. For~$j=1,\dots,N^u_V$ we let~$((W_{k}^j,Y_{k}^j))_{k\geq 1}$ be a clothesline process started at $W_{1}^j$, so that $((W_{k}^j,Y_{k}^j))_{k\geq 1}$ is independent of $((W_{k}^i,Y_{k}^i))_{k\geq 1}$ for $i\neq j$, and so that $W_{1}^j$ is distributed as $\overline e_V (\cdot)$. Let $T_{\Delta}^j$ be the killing time associated with $((W_{k}^j,Y_{k}^j))_{k\geq 1}$. We denote by \begin{equation} \label{e_sltdef1} G^j(z)=\sum_{k=1}^{T_{\Delta}^j}\xi_{k}^j g_{(W_{k}^j,Y_{k}^j)}(z) \end{equation} the soft local time associated with the $j$-th clothesline process. It should be clear from Proposition~\ref{p_decslt} that we can simulate all the random elements $(\sigma_{W_{k}^j,Y_{k}^j})_{j,k\geq 1}$ at the same time using only one realization of a Poisson point process in $\Sigma\times\R_+$. As Corollary~\ref{c_couplez} shows, in order to control the values our random elements take we only need to control the function \begin{equation} \label{e_sltdef2} G^{\Sigma}_{u}(z)=\sum_{j=1}^{N^u_V}G^j(z), \end{equation} the soft local time associated with the whole process.
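Schematically, \eqref{e_sltdef1} and \eqref{e_sltdef2} combine as in the following Python sketch; the clothesline lengths, the densities $g_{(W_k,Y_k)}$ and the value of $\capacity(V)$ are placeholders of ours, since computing the true objects would require the conditional walk measures of this section.

```python
import random

random.seed(3)
u, cap_V = 1.0, 4.0            # intensity u and a placeholder value for cap(V)
Sigma = ["z1", "z2", "theta"]  # abstract stand-ins for excursion classes and Theta

def sample_poisson(rate):
    # Poisson(rate) via exponential inter-arrival times.
    n, t = 0, random.expovariate(1.0)
    while t <= rate:
        n, t = n + 1, t + random.expovariate(1.0)
    return n

def clothesline_slt():
    # Stand-in for G^j: sum over k <= T_Delta of xi_k * g_{(W_k,Y_k)}(z);
    # xi_k ~ Exp(1); the densities g are toy values, not the true ones.
    G = {z: 0.0 for z in Sigma}
    for _ in range(1 + sample_poisson(1.0)):   # placeholder clothesline length
        xi = random.expovariate(1.0)
        w = {z: random.random() for z in Sigma}
        s = sum(w.values())
        for z in Sigma:
            G[z] += xi * w[z] / s              # w[z]/s plays the role of g(z)
    return G

N = sample_poisson(u * cap_V)                  # N^u_V ~ Poisson(u cap(V))
Gs = [clothesline_slt() for _ in range(N)]
G_total = {z: sum(Gj[z] for Gj in Gs) for z in Sigma}  # aggregated soft local time
```

Controlling the aggregated function \texttt{G\_total} (the analogue of $G^\Sigma_u$) is what, via Corollary~\ref{c_couplez}, controls the simulated excursions.
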
With this objective in mind, we now aim at estimating the soft local time's moments. We first show an easier way to express the expectation of $G(z)$. \begin{proposition} \label{t_expslt} Using the same notation as above, we have \begin{equation} \IE (G(z))=\IE\Big(\sum_{k=1}^{T_\Delta}1_{ \{\Xi(X_{D_{k-1}},X_{R_k})=\Xi(z)\}}\Big). \end{equation} \end{proposition} \begin{proof} In fact, \begin{equation} \begin{array}{rcl} \IE (G(z)) & = & \IE\Big(\sum_{k=1}^{T_\Delta}g_{(W_k,Y_k)}(z)\Big) = \IE\Big(\sum_{k=1}^{T_\Delta}\IP_{W_k,Y_k}[\Xi(W_k,Y_k)=\Xi(z)]\Big) \\ & = & \IE\Big(\sum_{k=1}^{T_\Delta}1_{ \{\Xi(W_k,Y_k)=\Xi(z)\}}\Big) = \IE\Big(\sum_{k=1}^{T_\Delta}1_{ \{\Xi(X_{D_{k-1}},X_{R_k})=\Xi(z)\}}\Big). \end{array} \end{equation} \end{proof} The expectation of $G(z)$, for $z\neq\Theta$, is therefore the expected number of excursions on $A_2^C$, with starting and ending points given by $\Xi(z)$, made by a random walk started at $W_1$. It is clear that the same computation works for any starting distribution of $W_1$. Given~$y\in\partial A_2$, we let $\beta_y(\cdot)$ be the hitting measure on $V$ of a simple random walk started at~$y$. We are then able to take $\beta_y(\cdot)$ as the starting distribution of $W_1$. Let then~$\GP_{\beta_y}$ be the global process's measure in which the clothesline process's starting distribution is given by $\beta_y(\cdot)$, and let $\IE_{\beta_y}$ be its associated expectation. Here we must allow the clothesline process to start at the cemetery state~$\Delta$, which encodes the failure of the random walk trajectory started at $y$ to reach $V$. Analogously, we let $\GP_{w_0}$ be the global process's measure with $w_0\in V$ as the clothesline process's starting point, and let $\IE_{w_0}$ be its associated expectation. The next proposition, adapted from Theorem~$4.8$ of \cite{PopovTeixeira}, gives a bound on the second moment of $G(z)$.
\begin{proposition} \label{t_2ndmoment} For any $w_0 \in V$, \begin{equation} \IE_{w_0} \big(G(z) \big)^2 \leq 2 \IE_{w_0} \big(G(z)\big) \big( \sup_{w'\in V} \IE_{w'} G(z) +\sup_{w,y} g_{(w,y)}(z) \big) . \end{equation} \end{proposition} \begin{proof} Recall that the second moment of an $\Exp(1)$ random variable equals $2$. For $z \in \Sigma$ and $n \geq 1$, we write \begin{align*} \IE_{w_0} \big( G_n(z) \big)^2 & = \IE_{w_0} \Big( \sum_{k=1}^n \xi_k g_{(W_k,Y_k)}(z) \Big)^2 \\ & = \IE_{w_0} \Big( \sum_{k=1}^n \xi_k^2 g_{(W_k,Y_k)}^{2}(z) \Big) + \IE_{w_0} \Big(2 \sum_{k < k' \leq n} \xi_k \xi_{k'} g_{(W_k,Y_k)}(z) g_{(W_{k'},Y_{k'})}(z) \Big)\\ & \leq \sum_{k=1}^n \IE\xi_k^2 \sup_{w,y} g_{(w,y)}(z) \IE_{w_0} g_{(W_k,Y_k)}(z) + 2 \sum_{k=1}^{n-1} \sum_{k'=k+1}^{n} \IE_{w_0} \big(g_{(W_k,Y_k)}(z) g_{(W_{k'},Y_{k'})}(z) \big) \\ & \leq 2 \sup_{w,y} g_{(w,y)}(z) \IE_{w_0} G_n(z) + 2 \sum_{k=1}^{n-1} \sum_{k'=k+1}^{n} \IE_{w_0} \big( g_{(W_k,Y_k)}(z) \IE_{w_0} (g_{(W_{k'},Y_{k'})}(z)\mid W_k,Y_k ) \big)\\ & \leq 2 \sup_{w,y} g_{(w,y)}(z) \IE_{w_0} G_n(z) + 2 \sum_{k=1}^{n-1} \IE_{w_0} \Big( g_{(W_k,Y_k)}(z) \IE_{\beta_{Y_{k}}} \Big( \sum_{m=1}^{n-k} g_{(W_m,Y_m)}(z) \Big) \Big)\\ & \leq 2 \sup_{w,y} g_{(w,y)}(z) \IE_{w_0} G_n(z) + 2 \sup_{w'}\IE_{w'} \Big( \sum_{m=1}^{n} g_{(W_m,Y_m)}(z) \Big) \IE_{w_0} \Big(\sum_{k=1}^{n-1} g_{(W_k,Y_k)}(z) \Big)\\ &\leq 2 \IE_{w_0} \big(G_n(z)\big) \big( \sup_{w'} \IE_{w'} G_n(z) +\sup_{w,y} g_{(w,y)}(z) \big), \end{align*} so that the result is proved for time~$n$. Letting~$n$ go to infinity, the result for the stopping time $T_\Delta$ follows from the monotone convergence theorem. \end{proof} For this paper's results, an estimate on the exponential moments of~$G$ will be essential. The next proposition, again adapted from \cite{PopovTeixeira} (Propositions $\ref{t_expmoment}$ and $\ref{t_2ndmoment}$ are proved in the context of Markov chains in the original paper), gives us such an estimate.
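As an aside, the elementary fact used at the start of the proof above, that an $\Exp(1)$ random variable has second moment $2$, amounts to $\int_0^\infty t^2 e^{-t}\,\d t=\Gamma(3)=2$; a throwaway numerical check (illustrative only, not part of the argument):

```python
from math import exp, gamma

# For xi ~ Exp(1), the n-th moment is E[xi^n] = Gamma(n + 1) = n!.
# The second moment is therefore Gamma(3) = 2, the value used in the proof.
second_moment = gamma(3)

# Cross-check by a Riemann sum of t^2 e^{-t} over [0, 50]
# (the tail beyond 50 is negligible).
n = 200_000
h = 50 / n
integral = sum((k * h) ** 2 * exp(-k * h) * h for k in range(1, n + 1))
```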
\begin{proposition} \label{t_expmoment} Given $\hat z \in \Sigma$ and measurable $\Gamma \subset \Sigma$, let \begin{equation} \begin{split} \alpha & = \inf\Big\{\frac{g_{(w,y)}(z')}{g_{(w,y)}({\hat z})}; (w,y) \in V\times\partial A_2, z' \in \Gamma,\hat z \in \mathcal{K}\Big\},\\ N(\Gamma) & = \#\{k \leq T_\Delta; z_k \in \Gamma\} , \text{ and}\\ \ell & \geq \; \smash{\sup_{(w,y) \in V\times\partial A_2}} \; g_{(w,y)}(\hat z). \end{split} \end{equation} Then, for any $v \geq 2$, \begin{equation} \label{e_boundexpslt} \GP[G({\hat z}) \geq v \ell] \leq \GP[G({\hat z}) \geq \ell] \Big( \exp \big\{-\big(\tfrac v2 - 1 \big) \big\} + \sup_{w'}\GP_{w'} \big[ \eta( \Gamma \times [0, \tfrac{1}{2} v \ell\alpha] ) \leq N(\Gamma) \big] \Big) \end{equation} (note that $\eta( \Gamma \times [0, \tfrac{1}{2} v \ell\alpha] )$ is a random variable with distribution $\textnormal{Poisson} \big( \tfrac{1}{2}v \ell \alpha\mu(\Gamma) \big)$). \end{proposition} The number $\alpha=\alpha(\Gamma)$ above gives us a regularity condition: whenever $\alpha$ is uniformly larger than some constant $c>0$, the density function $g_{(w,y)}(\cdot)$ restricted to the subset $\Gamma$ cannot vary too much. We now explain the intuition behind the terms on the right-hand side of~$(\ref{e_boundexpslt})$. The factor $\GP[G({\hat z}) \geq \ell]$ comes from the fact that in order for $G(\hat z)$ to get past $v \ell$, it must first overcome $\ell$. The first summand inside the parentheses corresponds to the probability that the sum $G(\hat z)$ overcomes $\ell$ at the same ``time'' it overcomes $v \ell 2^{-1}$, that is, an overshooting probability. The second summand corresponds to a large deviation estimate: as $v$ grows, $N(\Gamma)$ becomes much smaller than the expected value of~$\eta(\Gamma \times [0, \tfrac{1}{2}v\ell\alpha])$.
\begin{proof} We define the stopping time (with respect to the filtration $\mathcal{F}_n = \sigma((W_k,Y_k), \xi_k, k \leq n))$ \begin{equation} T_\ell = \inf\{ k \geq 1; G_k({\hat z}) \geq \ell\}. \end{equation} For $v \geq 2$, we have \begin{align} \lefteqn{\GP[G ({\hat z}) \geq v \ell]}\phantom{*************************************************}\nonumber \\ \leq \GP \big[T_\ell < \infty, \; G_{T_\ell}({\hat z}) \geq \tfrac v2 \ell \big] + \GP \big[T_\ell < \infty, \; G_{T_\ell}({\hat z}) < \tfrac v2 \ell, \; G({\hat z}) - G_{T_\ell}({\hat z}) > \tfrac v2 \ell \big] \label{e:Qtwoterms} \end{align} (note that $\GP[G({\hat z})\geq \ell] =\GP[T_\ell<\infty]$). We first estimate the first term in the right side of the above inequality. By the memoryless property of the exponential distribution, we have \begin{align} \label{e:Qfirstterm} \nonumber \smash{\sum_{n \geq 1}} \IE & \Big( G_{n-1}({\hat z}) < \ell, \GP \big[ \xi_n g_{(W_n,Y_n)}({\hat z}) > \tfrac v2 \ell - G_{n-1} ({\hat z}) \mid W_{n-1}, Y_{n-1}, G_{n-1} \big] \Big)\\ \nonumber & \leq \sum_{n \geq 1} \IE \Big( G_{n-1}({\hat z}) < \ell, \GP[ \xi_1 g_{(W_n,Y_n)}({\hat z}) > \ell - G_{n-1}] \, \GP \big[\xi_1 g_{(W_n,Y_n)}({\hat z}) > \big( \tfrac v2 -1 \big) \ell \big] \Big)\\ & \leq \GP[T_{\ell} < \infty] \sup_{(w',y')} \GP\big[\xi_1 g_{(w',y')}({\hat z}) > \big( \tfrac v2 -1 \big) \ell \big]\\ \nonumber & \leq \GP[T_{\ell} < \infty] \exp \big\{-\big(\tfrac v2 - 1 \big) \big\}. \end{align} Now, to bound the second term in the right side of~\eqref{e:Qtwoterms}, we write \begin{equation} \label{e:Qsecterm} \begin{split} \IE & \big(T_\ell < \infty, \; G_{T_\ell}(\hat z) < \tfrac v2 \ell, \GP [G({\hat z}) - G_{T_\ell}({\hat z}) > \tfrac v2 \ell \mid G_1, \dots, G_{T_\ell}]\big)\\ & \leq \GP \big[T_\ell < \infty \big] \; \smash{\sup_{w'}} \; \GP_{w'} [G({\hat z}) > \tfrac v2 \ell ]. 
\end{split} \end{equation} Using that for any $z' \in \Sigma$ \begin{equation} G(z') = \sum_{k=1}^{T_\Delta} \xi_k g_{(W_k,Y_k)}(z') \geq \sum_{k=1}^{T_\Delta} \alpha \xi_k g_{(W_k,Y_k)}({\hat z}) \1{\Gamma}(z') = \alpha G({\hat z}) \1{\Gamma}(z'), \end{equation} we obtain \begin{equation} \label{e:GbyPoisson} \begin{split} \GP\big[G({\hat z}) \geq \tfrac v2 \ell \big] & \leq \GP \Big[ G(z') \geq \frac{1}{2 }v \ell\alpha, \text{ for every $z' \in \Gamma$} \Big]\\ & \leq \GP \big[ \eta( \Gamma \times [0, \tfrac{1}{2} v \ell\alpha] ) \leq N(\Gamma) \big]. \end{split} \end{equation} Collecting \eqref{e:Qtwoterms}, \eqref{e:Qfirstterm}, \eqref{e:Qsecterm} and \eqref{e:GbyPoisson} we finish the proof of the result. \end{proof} \section{Conditional decoupling} \label{s_cd} We begin this section by gathering some facts needed for the proof of the main theorem of this paper, starting with an overview of the main argument presented in this section. We will simulate the random interlacements set intersected with $A_1$ in two ways. First, we will simulate $\I_{A_1}^u$ using $G^{\Sigma}_u$, that is, using the soft local times indexed by the clothesline processes. Second, we will construct a set made up of random walk trajectories in $A_1$ in a way similar to the construction of~$\I^u_{A_1}$; the only difference is that the soft local times used in this second construction will be indexed by a given nonrandom sequence $\hat{\zeta}$ of pairs of points belonging to $V\times \partial A_2$. We will denote this second random set by~$\I^u_{A_1\mid\hat{\zeta}}$, and we will show using the soft local times method that~$\I^u_{A_1\mid\hat{\zeta}}$ and~$\I^u_{A_1}$ are usually very similar to each other. We then prove a similar result when the pairs of points that constitute the nonrandom sequence all belong to the boundary of a set contained in~$A_2$.
Throughout this section we will again only differentiate between $A_{1}^{\tiny\Circle}$ and $A_{1}^{\tiny\Square}$ when the need arises. We start by stating the following bound \begin{equation}\label{supprob2} \sup_{\substack{w'\in V \\ y'\in \partial A_2}}\IP_{w',y'}\big{[} \Xi(w',y')=(w_0,y_0) \big{]}\leq c s ^{-2(d-1)} ,\end{equation} whose proof is technical and therefore postponed to Subsection $\ref{s_t1}$ of the appendix. Let $z\in\Sigma$ be such that~$\Xi(z)=(w_0,y_0)$, and let $h:=\dist(w_0,y_0)$. We let $F(w_0,y_0)$ stand for $G(z)$, making explicit the dependence of the soft local time on the endvertices~$\Xi(z)$. We define \begin{equation} \label{e_expecdef} \pi(w_0,y_0):=\IE(F(w_0,y_0)). \end{equation} We define $f_{A_1}(w_0,y_0)$ to be the probability that the simple random walk started at $w_0$ visits $y_0$ before hitting $A_2$. We will prove in the appendix (see Section~$\ref{s_t1}$, propositions~$\ref{p_boundprocircle}$ and~$\ref{p_boundprosquare}$) the following bounds for these probabilities: \begin{itemize} \item[(i)]Given $(w_0,y_0)\in A_1^{\tiny\Circle}\times V^{\tiny\Circle}$, there are constants $c_1,c_2>0$ such that \begin{equation} \label{e_circlebound} c_1\frac{s^{2}}{h^{d}} \leq f_{A_1^{\tiny\Circle}}(w_0,y_0) \leq c_2\frac{s^{2}}{h^{d}}. \end{equation} \item[(ii)]Let $(w_0,y_0)\in A_1^{\tiny\Square}\times V^{\tiny\Square}$, and recall the definition of $\mathfrak{H}_{r+2s}$, the unsmoothed version of $ {A_2^{\tiny\Square}}^C$. Let $\mathfrak{H}^{d-1}_i$; $i=1,\dots,2d$; denote the $(d-1)$-dimensional hyperfaces of~$\mathfrak{H}_{r+2s}$, and let $l^{w_0}_i:=\min \{\dist(w_0,\mathfrak{H}^{d-1}_i),h\}$, and $l^{y_0}_i:=\min \{\dist(y_0,\mathfrak{H}^{d-1}_i),h\}$.
Then there are constants $c_1,c_2>0$ such that \begin{equation} \label{e_squarebound} c_1\frac{l^{w_0}_1\dots l^{w_0}_{2d}}{h^{2d}}\cdot\frac{1}{h^{d-2}}\cdot\frac{l^{y_0}_1\dots l^{y_0}_{2d}}{h^{2d}}\leq f_{A_1^{\tiny\Square}}(w_0,y_0) \leq c_2\frac{l^{w_0}_1\dots l^{w_0}_{2d}}{h^{2d}}\cdot\frac{1}{h^{d-2}}\cdot\frac{l^{y_0}_1\dots l^{y_0}_{2d}}{h^{2d}}. \end{equation} \end{itemize} The following lemma, whose proof we also postpone to the appendix (Section~$\ref{s_t2}$), gives us an estimate on $\pi(w_0,y_0)$. \begin{lemma} \label{l_expecslt} Using the notation defined above we have, for constants $c_1,c_2,c_3,c_4>0$:\\ \begin{itemize} \item[(i)]$c_1\capacity(V)^{-1}s^{-1}f_{A_1}(w_0,y_0)\leq\pi(w_0,y_0)\leq c_2\capacity(V)^{-1}s^{-1} f_{A_1}(w_0,y_0)$, \\ \item[(ii)]$\IE(F(w_0,y_0)^2)\leq c_3 \capacity(V)^{-1}s^{-2d+2}f_{A_1}(w_0,y_0)$. \\ \\ Moreover, since $\dist (w_0,y_0)\geq s$, we have \\ \item[(iii)]$\sup_{w_0,y_0}\pi(w_0,y_0)\leq c_4\capacity(V)^{-1}s^{-(d-1)}$. \end{itemize} \end{lemma} We now provide a large deviation bound for $F(w_0,y_0)$. \begin{lemma} \label{l_expmoment} There are constants $c,c_1,c_2>0$ such that for every $(w_0,y_0)\in V\times \partial A_2$, we have \begin{equation} \GP\big{[} F(w_0,y_0)>v c s^{-2(d-1)} \big{]}\leq c_1 s^{2d-3}f_{A_1}(w_0,y_0)\capacity(V)^{-1}e^{-c_2 v} \end{equation} for any $v\geq 2$ (we can also assume $c_2\leq 1$ without loss of generality). \end{lemma} \begin{proof} In the proof of this particular result it will be important to distinguish between the constants. We will use Proposition $\ref{t_expmoment}$ for $F(w_0,y_0)$, with \begin{equation*} \Gamma_{w_0,y_0} :=\{(w_0 ',y_0 ')\in \partial A_1\times V ;\; \max \{\|w_0 ' - w_0\|,\|y_0 ' - y_0\|\} \leq c_4 s \}, \end{equation*} with $0<c_4<1$ defined in Section~$\ref{s_t3}$ of the appendix.
Using the same notation as in Proposition $\ref{t_expmoment}$, we note that $(\ref{supprob2})$ allows us to take \begin{equation*} \ell= c s^{-2(d-1)}, \end{equation*} and observe that $\mu(\Gamma_{w_0,y_0})\geq c_5 s^{2(d-1)}$ for some constant $c_5>0$. Also, as can be seen in Section~$\ref{s_t3}$ of the appendix, we have \begin{equation*} \alpha \geq c_3>0. \end{equation*} Chebyshev's inequality and Lemma~$\ref{l_expecslt}$ then imply \begin{equation} \label{e_timebound} \GP\big{[} T_\ell < \infty \big{]}\leq\GP\big{[} F(w_0,y_0)>c s^{-2(d-1)} \big{]}\leq\frac{\pi (w_0,y_0)}{c s^{-2(d-1)}} \leq c_1 s^{2d-3}f_{A_1}(w_0,y_0)\capacity (V)^{-1}. \end{equation} We denote by $N(\Gamma_{w_0,y_0})$ the number of times the simple random walk trajectory associated with $F(w_0,y_0)$ makes an excursion $z'\in\Sigma$ on $A_2^C$ such that $\Xi(z')=(w',y')\in\Gamma_{w_0,y_0}$. We also let $\eta_{w_0,y_0}$ stand for the number of points of the Poisson process associated with our soft local times that belong to $\Gamma_{w_0,y_0}\times\big[0,\frac{1}{2}v c c_3 s^{-2(d-1)}\big]$. We note that both definitions are consistent with Proposition~$\ref{t_expmoment}$ and write \begin{equation*} \GP\Big{[} \eta_{w_0,y_0}\leq N(\Gamma_{w_0,y_0}) \Big{]}\leq \GP\Big{[} \eta_{w_0,y_0}\leq \frac{c c_3 c_5 v}{4} \Big{]} + \GP\Big{[} N(\Gamma_{w_0,y_0}) \geq \frac{c c_3 c_5 v}{4} \Big{]}. \end{equation*} We claim that both terms on the right side of the above inequality are exponentially small in $v$. To see why this is true, observe that: \begin{itemize} \item $\eta_{w_0,y_0}$ has Poisson distribution with parameter at least $\frac{c c_3 c_5 v}{2}$, and \item every time the simple random walk associated with $F(w_0,y_0)$ hits $\partial A_2$, with uniform positive probability the walk never reaches $\Gamma_{w_0,y_0}$ again. This way $N(\Gamma_{w_0,y_0})$ is dominated by a Geometric$(c_6)$ random variable, for some constant $c_6<1$.
\end{itemize} Together with $(\ref{e_timebound})$ and Proposition~$\ref{t_expmoment}$, this finishes the proof of the lemma. \end{proof} Let $\Psi_{w_0,y_0}(\lambda)=\IE (e^{\lambda F(w_0,y_0)})$ be the moment generating function of $F(w_0,y_0)$. We are going to use the bounds above to estimate $\Psi_{w_0,y_0}$. It is elementary to check that $e^{t}-1\leq t+t^2$ for $t\in [0,1]$. With this observation in mind, we write for $0\leq \lambda\leq \frac{c_2 s^{2(d-1)}}{2c}$, where $c$ and $c_2$ are the same as in Lemma~$\ref{l_expmoment}$ above: \begin{align} \lefteqn{\Psi_{w_0,y_0}(\lambda)-1 \phantom{******}}\nonumber \\ \nonumber &\phantom{****} = \IE (e^{\lambda F(w_0,y_0)}-1)\1{\lambda F(w_0,y_0)\leq 1} + \IE (e^{\lambda F(w_0,y_0)}-1)\1{\lambda F(w_0,y_0)> 1} \\ \nonumber &\phantom{****} \leq \IE(\lambda F(w_0,y_0)+\lambda^2 F(w_0,y_0)^2)+\IE (e^{\lambda F(w_0,y_0)}-1)\1{\lambda F(w_0,y_0)> 1} \\ \nonumber &\phantom{****} \leq \lambda\pi(w_0,y_0)+c_1\lambda^2\capacity(V)^{-1}s^{-2d+2}f_{A_1}(w_0,y_0)+\IE (e^{\lambda F(w_0,y_0)}-1)\1{\lambda F(w_0,y_0)> 1} \\ \nonumber &\phantom{****} \leq \lambda\pi(w_0,y_0)+c'\lambda^2\capacity(V)^{-1}s^{-2d+2}f_{A_1}(w_0,y_0) +\lambda\int\limits_{\lambda^{-1}}^{\infty}e^{\lambda y}\GP \big{[} F(w_0,y_0)>y \big{]}\d y \\ \nonumber &\phantom{****} \leq \lambda\pi(w_0,y_0)+f_{A_1}(w_0,y_0)\capacity(V)^{-1}\Big(c'\lambda^2s^{-2d+2} +\lambda c's^{2d-3}\int\limits_{\lambda^{-1}}^{\infty} \exp{\Big(\frac{-c_2 s^{2(d-1)}y}{2c}\Big)}\d y \Big) \\ \nonumber &\phantom{****} \leq \lambda\pi(w_0,y_0)+f_{A_1}(w_0,y_0)\capacity(V)^{-1}\Big(c'\lambda^2 s^{-2d+2} + c'\lambda s^{-1}\exp{\Big(\frac{-c_2 s^{2(d-1)}\lambda^{-1}}{2c}\Big)} \Big)\nonumber \\ \label{e_moment1} &\phantom{****}\leq\lambda\pi(w_0,y_0)+c'\lambda^2 \capacity(V)^{-1}s^{-2d+2}f_{A_1}(w_0,y_0), \end{align} where we used Lemma~$\ref{l_expecslt}$ and Lemma~$\ref{l_expmoment}$.
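Both elementary inequalities used in this part of the argument, $e^{t}-1\leq t+t^{2}$ on $[0,1]$ and $e^{-t}-1\leq -t+t^{2}$ for $t\geq 0$, are easy to verify; a throwaway numerical grid check (illustrative only, not part of the proof):

```python
from math import exp

# e^t - 1 <= t + t^2 on [0, 1]: both sides agree to first order at 0,
# and on this interval t^2 absorbs the higher-order terms.
ok_first = all(exp(t) - 1 <= t + t * t + 1e-12
               for t in (k / 1000 for k in range(1001)))

# e^{-t} - 1 <= -t + t^2 for t >= 0: here e^{-t} <= 1 - t + t^2/2
# already suffices; beyond the grid, -t + t^2 >= 0 >= e^{-t} - 1.
ok_second = all(exp(-t) - 1 <= -t + t * t + 1e-12
                for t in (k / 1000 for k in range(10001)))
```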
Now since $e^{-t}-1\leq -t+t^2$ for all $t\geq 0$, we obtain for $\lambda\geq 0$ \begin{equation} \label{e_moment2} \Psi_{w_0,y_0}(-\lambda)-1\leq -\lambda\pi(w_0,y_0)+c\lambda^2\capacity(V)^{-1}s^{-2d+2}f_{A_1}(w_0,y_0), \end{equation} (the large deviation bound of Lemma~$\ref{l_expmoment}$ is not necessary in this case). Observe that if $(\chi_k,k\geq 1)$ are i.i.d.\ random variables with common moment generating function~$\Psi$ and $N$ is an independent Poisson random variable with parameter $\theta$, then \begin{equation*} \IE\exp\big(\lambda\sum\limits_{k=1}^{N}\chi_k \big)=e^{(\theta(\Psi(\lambda)-1))}. \end{equation*} We let $F_k(w_0,y_0)$ denote the soft local time $G^k(z)$ defined in $(\ref{e_sltdef1})$, where $z\in\Sigma$ is such that $\Xi(z)=(w_0,y_0)$. Using Lemma~$\ref{l_expecslt}$ and $(\ref{e_moment1})$, we have, for $N_{\hat u}^{V}\eqd \textit{Poisson}(\hat u\capacity(V))$ and any $\delta>0$ \begin{equation} \label{e_moment3} \begin{array}{rcl} \lefteqn{\GP \big{[} G^{\Sigma}_{\hat u}(z)\geq (1+\delta)\hat u \capacity(V)\pi (w_0,y_0) \big{]}\phantom{********}}\\ &\phantom{********} = & \GP \Big{[} \sum_{k=1}^{N_{\hat u}^{V}} F_k (w_0,y_0)\geq (1+\delta)\hat u \capacity(V)\pi (w_0,y_0) \Big{]} \\ \\ &\phantom{********} \leq & \frac{\IE(\exp \big(\lambda \sum_{k=1}^{N_{\hat u}^{V}} F_k (w_0,y_0)\big))}{\exp\big(\lambda (1+\delta)\hat u \capacity(V)\pi (w_0,y_0)\big)} \\ \\ &\phantom{********} \leq & \exp\big(-\lambda (1+\delta)\hat u \capacity(V)\pi (w_0,y_0)+\hat u\capacity (V)(\Psi_{w_0,y_0}(\lambda)-1)\big) \\ \\ &\phantom{********} \leq & \exp\big(-(\lambda \delta\hat u \capacity(V)\pi (w_0,y_0)-c'\lambda^2\hat u s^{-2d+2}f_{A_1}(w_0,y_0))\big) \\ \\ &\phantom{********} \leq & \exp\big(-(\lambda \delta\hat u c s^{-1} f_{A_1}(w_0,y_0)-c'\lambda^2\hat u s^{-2d+2}f_{A_1}(w_0,y_0))\big) .
\end{array} \end{equation} Analogously, with $(\ref{e_moment2})$ instead of $(\ref{e_moment1})$, we obtain \begin{equation} \label{e_moment4} \GP \big{[} G^{\Sigma}_{\hat u}(z)\leq (1-\delta)\hat u \capacity(V)\pi (w_0,y_0) \big{]} \leq \exp\big(-(\lambda \delta\hat u c s^{-1}-c'\lambda^2\hat u s^{-2d+2})f_{A_1}(w_0,y_0)\big) . \end{equation} We choose $\lambda=c_7\delta s^{2d-3}$ with $c_7$ small enough so that $\lambda\leq\frac{c_2 s^{2(d-1)}}{2c}$, and observe that the bounds for $f_{A_1}(w_0,y_0)$ given in $(\ref{e_circlebound})$ and $(\ref{e_squarebound})$ imply \begin{align*} \inf_{w_0,y_0}f_{A_1^{\tiny\Square}}(w_0,y_0)& \geq c s^{2d} r^{-3d+2},\\ \inf_{w_0,y_0}f_{A_1^{\tiny\Circle}}(w_0,y_0)& \geq c s^{2}r^{-d}. \end{align*} Recall the definition of $b_{A_1^{\tiny\Circle}}$, a number such that \begin{equation*} 1\leq b_{A_1^{\tiny\Circle}}<\frac{2d-2}{d}, \end{equation*} and the definition of $b_{A_1^{\tiny\Square}}$, a number such that \begin{equation*} 1\leq b_{A_1^{\tiny\Square}}<\frac{4d-4}{3d-2}. \end{equation*} Recall that $r\asymp s^{b_{A_1}}$. Then there exist constants $a_{A_1^{\tiny\Circle}}=2d-2-db_{A_1^{\tiny\Circle}}>0$ and $a_{A_1^{\tiny\Square}}=4d-4-3db_{A_1^{\tiny\Square}}+2b_{A_1^{\tiny\Square}}>0$ such that \begin{equation*} \GP \big{[} G^{\Sigma}_{\hat u}(z)\geq (1+\delta)\hat u \capacity(V)\pi (w_0,y_0) \big{]}\leq \exp\big(-c\delta^2 \hat{u} s^{a_{A_1}}\big) . \end{equation*} Using the union bound (note that $ \partial A_1\times V$ has $O(r^{2(d-1)})$ elements), \begin{align} \label{e_moment5} \lefteqn{\GP \big{[} (1-\delta)\hat u \capacity(V)\pi (\Xi(z)) \leq G^{\Sigma}_{\hat u}(z)\leq (1+\delta)\hat u \capacity(V)\pi (\Xi(z))\text{, for all $z\in\mathcal{K}$}\big{]}\geq} \phantom{********************************}\nonumber\\ &\geq 1-cr^{2(d-1)}\exp\big(-c'\delta^2\hat u s^{a_{A_1}}\big). \end{align} Observe that we can suppose $c'\leq 1$ without loss of generality. 
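For completeness, the compound Poisson identity invoked before~$(\ref{e_moment3})$ follows by conditioning on~$N$ and using the independence of the~$\chi_k$: \begin{equation*} \IE\exp\Big(\lambda\sum_{k=1}^{N}\chi_k\Big) =\sum_{n\geq 0}e^{-\theta}\frac{\theta^n}{n!}\, \IE\exp\Big(\lambda\sum_{k=1}^{n}\chi_k\Big) =\sum_{n\geq 0}e^{-\theta}\frac{\theta^n}{n!}\,\Psi(\lambda)^n =e^{\theta(\Psi(\lambda)-1)}. \end{equation*}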
We define the interval \begin{equation*} I_{\hat u,z}^{\delta}:=[(1-\delta)\hat u \capacity(V)\pi (\Xi(z)), (1+\delta)\hat u \capacity(V)\pi (\Xi(z))] \end{equation*} and the event \begin{equation*} D_{\hat u}^{\delta}:=\{ G_{\hat u}^{\Sigma}(z)\in I_{\hat u,z}^{\delta}\text{ for all $z\in\mathcal{K}$} \}. \end{equation*} Using $(\ref{e_moment5})$ and the union bound we obtain, for $\eps>0$ sufficiently small, \begin{equation} \nonumber \GP \Big{[} D_{u}^{\eps/4},D_{u(1-\eps)}^{\eps/4} ,D_{u(1+\eps)}^{\eps/4} \Big{]} \geq 1-cr^{2(d-1)}\exp\big(-c'\eps^2 u s^{a_{A_1}}\big). \end{equation} Since $r\asymp s^{b_{A_1}}$, by replacing the constants $c$ and $c'$ in the above equation we obtain \begin{equation} \label{e_moment6} \GP \Big{[} D_{u}^{\eps/4},D_{u(1-\eps)}^{\eps/4} ,D_{u(1+\eps)}^{\eps/4} \Big{]} \geq 1-c\exp\big(-c'\eps^2 u s^{a_{A_1}}\big). \end{equation} We have just proved that with high probability, the soft local time associated with each of the processes~$\I^u_{A_1}$, $\I^{u(1-\eps)}_{A_1}$ and $\I^{u(1+\eps)}_{A_1}$ stays confined between the graphs of two explicit deterministic functions. This was done while letting the ``information'' given by $\I^{u}_{A_2}$ (namely, the points of entrance at~$V$ and exit at~$\partial A_2$ of the excursions on~$A_1$ of the simple random walk trajectories of the interlacements process at level~$u$) be distributed according to the right law, that is, the law of the clothesline processes. When we ``average'' those points according to these laws we obtain a good concentration for the whole function~$G_{u}^{\Sigma}$, but our goal is to obtain a similar concentration when these points are deterministic. The heuristic argument is that when something happens with high probability in the annealed law, then most of the time it will also happen with high probability in the quenched law. We will introduce some new notation to make this argument rigorous and prove our main theorem.
Given any two finite sets $K_1,K_2\subset\Z^d$, not necessarily disjoint, we want to describe a collection of generalized clothesline processes between~$K_1$ and~$K_2$ associated with the interlacements process at level~$u$. We construct an infinite family $(X^{(j)}_k,k\geq 0)_{0<j<\infty}$ of independent simple random walks with starting point distributed according to the normalized harmonic measure on~$K_1$, as we did in definition~\eqref{interlacementsdef}. We let $\tau^j_0\equiv 0$ and define inductively \begin{equation*} \begin{array}{rl} \tau_{k+1}^j := & 1\{X^{(j)}_{\tau_{k}^j}\in K_1\}\inf\{t>\tau_{k}^j; X^{(j)}_t\in K_2\} \\ &\qquad+ 1\{X^{(j)}_{\tau_{k}^j}\in K_2\} \inf\{t>\tau_{k}^j; X^{(j)}_t\in K_1\}, \end{array} \end{equation*} where $1\{\cdot\}$ denotes the indicator function of an event. We also define the random time \begin{equation*} T_j:=\inf\{k\geq 0;\ \tau_{k+1}^j=\infty\}. \end{equation*} Once again we let $N_u^{K_1}\eqd\textit{Poisson}(u\capacity (K_1))$ be a random variable independent of $(X^{(j)}_k,k\geq 0)_{0<j<\infty}$. We then define the interlacements' clothesline processes between~$K_1$ and~$K_2$ at level~$u$ by \begin{equation*} \mathrm{Cloth}_u(K_1,K_2):=\Big\{\big(X^{(j)}_{\tau^j_k}\big)_{k=0}^{T_j} \Big\}_{j=1}^{N_u^{K_1}}. \end{equation*} When $K_1=V$ and $K_2=\partial A_2$, we have \begin{equation*} \mathrm{Cloth}_u(V,\partial A_2)\eqd\Big\{\big( W^j_k,Y^j_k \big)_{k=1}^{T^j_\Delta} \Big\}_{j=1}^{N_u^{V}}. \end{equation*} We define \begin{equation*} \Big( \mathcal{S}_u(K_1,K_2),\sigma_u(K_1,K_2),\IP^u_{K_1,K_2} \Big) \end{equation*} to be the probability space in which $\mathrm{Cloth}_u(K_1,K_2)$ is defined, and in which $\sigma_u(K_1,K_2)$ is the smallest $\sigma$-field in which $\mathrm{Cloth}_u(K_1,K_2)$ is measurable.
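The alternating hitting times defined above admit a simple finite-path illustration. In the sketch below, the function name and the use of a finite list, whose end mimics the walk escaping to infinity, are assumptions made for illustration only:

```python
def clothesline_points(path, K1, K2):
    """Extract the alternating hitting sequence between K1 and K2 from a
    finite path with path[0] in K1.  The path simply ending plays the
    role of tau_{k+1} = infinity (the walk never returns).  Sketch only.
    """
    K1, K2 = set(K1), set(K2)
    times = [0]
    target = K2  # path[0] lies in K1, so the next stop is the hit of K2
    while True:
        t = times[-1]
        nxt = next((i for i in range(t + 1, len(path)) if path[i] in target),
                   None)
        if nxt is None:  # no further hit: the process is killed here
            break
        times.append(nxt)
        # Alternate: after hitting K2 aim for K1, and vice versa.
        target = K1 if path[nxt] in K2 else K2
    return [path[i] for i in times]


# One-dimensional toy walk with K1 = {0}, K2 = {5}: the walk reaches 5,
# comes back to 0, then wanders off without returning to 5.
walk = [0, 1, 2, 3, 4, 5, 4, 3, 2, 1, 0, 1, 2, 3]
hits = clothesline_points(walk, {0}, {5})  # -> [0, 5, 0]
```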
If $\hat\zeta\in\mathcal{S}_u(V,\partial A_2)$ and $\IP^u_{V,\partial A_2}(\hat\zeta)>0$, then we can write~$\hat\zeta$ as a finite collection of finite sequences of points belonging to~$V$ and~$\partial A_2$ \begin{equation*} \hat{\zeta}:=\big\{ \hat{\zeta}_1,\dots, \hat{\zeta}_K \big\}, \end{equation*} where for each $j=1,\dots,K$; $\hat{\zeta}_j$ is a finite sequence alternating between points of~$V$ and~$\partial A_2$. In other words, $\hat{\zeta}_j$ is a possible realization of a clothesline process. We write \begin{equation*} \hat{\zeta}_j:=\big( \zeta^j_0,\dots,\zeta^j_{n(j)} \big), \end{equation*} where~$n(j)$ is odd, every even entry belongs to~$V$ and every odd entry belongs to~$\partial A_2$. \begin{figure} \centering \includegraphics[scale = 1]{genclothdef} \vspace{0.5cm} \caption{The generalized clothesline process between~$K_1$ and~$K_2$, here represented by the X marks.} \label{genclothdef} \end{figure} We then define the soft local time associated with~$\hat{\zeta}$. Using the same realization of the Poisson point process on~$\Sigma\times\R_+$ defined in Section~$\ref{s_simexc}$, we construct the soft local times \begin{equation*} G^{\hat\zeta_j}(z):=\sum_{k=0}^{\frac{n(j)-1}{2}}\tilde{\xi}_{k}^{j} g_{(\zeta^j_{2k},\zeta^j_{2k+1})}(z), \end{equation*} where $\tilde{\xi}^j_k$ is an exponential random variable defined in the manner of~\eqref{sltclothdef}. We then define \begin{equation*} G^{\hat{\zeta}}(z):=\sum_{j=1}^{K}G^{\hat\zeta_j}(z). \end{equation*} This function should be viewed as a quenched version of the soft local times $G^\Sigma_u$, when the collection of clothesline processes $\{( W^j_k,Y^j_k )_{k=1}^{T^j_\Delta} \}_{j=1}^{N_u^{V}}$ is given by the deterministic element~$\hat{\zeta}$. We denote by $\I^u_{A_1\mid\hat{\zeta}}$ the interlacements process inside~$A_1$ determined by the ranges of the excursions of~$\Sigma$ below~$G^{\hat{\zeta}}$.
$\I^u_{A_1\mid\hat{\zeta}}$ is distributed as the random interlacements process inside~$A_1$ when its associated random walk excursions have entrance points at~$V$ and exit points at~$\partial A_2$ given by~$\hat{\zeta}$. The next proposition shows that, for typical~$\hat{\zeta}$,~$G^{\hat{\zeta}}$ lies between~$G^\Sigma_{u(1-\eps)}$ and~$G^\Sigma_{u(1+\eps)}$ with high probability. \begin{proposition} \label{t_varpi} There exists a set $\mathcal{A}\in\sigma_u(V,\partial A_2)$ such that \begin{equation*} \IP^u_{V,\partial A_2}\big{[} \mathcal{A} \big{]}\geq 1-\exp\Big(-\frac{c'}{2}\eps^2 u s^{a_{A_1}}\Big), \end{equation*} and for all fixed $\hat{\zeta}\in\mathcal{A}$, \begin{align*} \lefteqn{\GP \big{[} G_{u(1-\eps)}^{\Sigma}(z)\leq G^{\hat{\zeta}}(z)\leq G_{u(1+\eps)}^{\Sigma}(z)\text{ for all $z\in\mathcal{K}$} \big{]}}\phantom{*********************}\nonumber\\ &\phantom{*****}\geq 1-c \exp\Big(-\frac{c'}{2}\eps^2 u s^{a_{A_1}}\Big). \end{align*} \begin{proof} Observe that $(\ref{e_moment6})$ implies \begin{align} \lefteqn{\int\GP \big{[} G_{u(1-\eps)}^{\Sigma}(z)\leq G^{\hat{\zeta}}(z)\leq G_{u(1+\eps)}^{\Sigma}(z)\text{ for all $z\in\mathcal{K}$} \big{]}\IP^u_{V,\partial A_2}\big{[}\d \hat{\zeta}\big{]}}\phantom{************************}\nonumber\\ &\phantom{********}\geq 1-c \exp\big(-c'\eps^2 u s^{a_{A_1}}\big)\label{e_moment7}.
\end{align} Let \begin{equation*} \begin{array}{rl}\lefteqn{\mathcal{A}:=\Big\{\hat{\zeta}\in \mathcal{S}_u(V,\partial A_2)\text{ such that: }\GP\big{[} G_{u(1-\eps)}^{\Sigma}(z)\leq G^{\hat{\zeta}}(z)\leq G_{u(1+\eps)}^{\Sigma}(z)\text{ for all $z\in\mathcal{K}$} \big{]} }\phantom{********************}\\ &\phantom{*************}\geq 1-c \exp\Big(-\frac{c'}{2}\eps^2 u s^{a_{A_1}}\Big) \Big\}.\end{array} \end{equation*} \begin{figure}[ht] \centering \includegraphics[scale = .7]{mpsidef} \vspace{0.5cm} \caption{A visual representation of the random element $m_{\hat{\psi}}$.} \label{mpsidef} \end{figure} Then $(\ref{e_moment7})$ implies \begin{align*} \lefteqn{\IP^u_{V,\partial A_2}\big{[}\mathcal{A}\big{]}+\Big(1-c \exp\Big(-\frac{c'}{2}\eps^2 u s^{a_{A_1}}\Big)\Big)\Big(1-\IP^u_{V,\partial A_2}\big{[}\mathcal{A}\big{]}\Big)}\phantom{*****************************}\\ &\geq 1-c \exp\big(-c'\eps^2 u s^{a_{A_1}}\big), \end{align*} so that \begin{equation*} \IP^u_{V,\partial A_2}\big{[}\mathcal{A}\big{]}\geq 1-\exp\Big(-\frac{c'}{2}\eps^2 u s^{a_{A_1}}\Big). \end{equation*} This finishes the proof of the proposition. \end{proof} \end{proposition} \begin{figure}[ht] \centering \includegraphics[scale = 1]{softlocaltimesfig2} \vspace{0.5cm} \caption{When the sequence $\hat{\zeta}$ belongs to a well behaved set $\mathcal{A}$, the decoupling probability is greater than $1-c \exp\Big(-\frac{c'}{2}\eps^2 u s^{a_{A_1}}\Big)$. The symbol~$\varphi$ in the figure stands for the function $u\capacity(V)\pi(\Xi(z))$. The figure shows the decoupling event, where $G_{u(1-\eps)}^{\Sigma}(z)\leq G^{\hat{\zeta}}(z)\leq G_{u(1+\eps)}^{\Sigma}(z)$ for all~$z\in\mathcal{K}$.
} \label{softlocaltimesfig2} \end{figure} Proposition $\ref{t_varpi}$ implies that, for $\hat{\zeta}\in\mathcal{A}$, there exists a process~$(\hat{\I}^u_{A_1}$, $u\geq 0)$ distributed as the random interlacements set intersected with $A_1$, and a coupling $\GP$ such that, for all $\eps>0$ sufficiently small and $r>0$ sufficiently large, we have \begin{equation} \label{e_varpi} \GP\big{[} \hat{\I}^{u(1-\eps)}_{A_1} \subseteq \I^u_{A_1\mid\hat{\zeta}} \subseteq \hat{\I}^{u(1+\eps)}_{A_1} \big{]} \geq 1-c \exp\Big(-\frac{c'}{2}\eps^2 u s^{a_{A_1}}\Big). \end{equation} To complete the proof of our main theorem we need to show that a result similar to Proposition~$\ref{t_varpi}$ remains valid under a different conditioning. Let $A_3\subset A_2$ be such that $|\partial A_3|<\infty$, and write $\I^u_{A_3}:=\I^u\cap A_3$. Then $\mathrm{Cloth}_u(\partial A_3, \partial A_3)$ is well defined. Given $\hat{\psi}\in\mathcal{S}_u(\partial A_3,\partial A_3)$, we define $m_{\hat{\psi}}\equiv m_{\hat{\psi}}(\partial A_3)$ as a random element of $\mathcal{S}_u(V,\partial A_2)$ distributed as~$\mathrm{Cloth}_u(V, \partial A_2)$ conditioned on the event that the entrance and exit points at~$\partial A_3$ of the simple random walk excursions of $\I^u_{A_3}$ are given by~$\hat{\psi}$. We denote by $\I^u_{A_1\mid\hat{\psi}}$ the random interlacements process on~$A_1$ conditioned on the event that~$\mathrm{Cloth}_u(\partial A_3, \partial A_3)$ is equal to the deterministic element~$\hat\psi$. Notice that all ``information'' given by~$\I^u_{A_3}$ to $\I^u\cap A_3^C$ is contained in $\mathrm{Cloth}_u(\partial A_3,\partial A_3)$, that is, conditioned on $\mathrm{Cloth}_u(\partial A_3,\partial A_3)$, $\I^u_{A_3}$ and $\I^u\cap A_3^C$ are independent.
Inequality~\eqref{e_varpi} then implies, for $\hat{\psi}\in\mathcal{S}_u(\partial A_3,\partial A_3)$, \begin{align} \GP\big{[} \hat{\I}^{u(1-\eps)}_{A_1} \subseteq \I^u_{A_1\mid\hat{\psi}} \subseteq \hat{\I}^{u(1+\eps)}_{A_1} \big{]} & =\!\!\!\!\!\!\!\!\! \sum_{\hat{\zeta}\in\mathcal{S}_u(V,\partial A_2)} \!\!\!\!\GP\big{[} \hat{\I}^{u(1-\eps)}_{A_1} \subseteq \I^u_{A_1\mid\hat{\psi}} \subseteq \hat{\I}^{u(1+\eps)}_{A_1} \mid m_{\hat{\psi}}=\hat{\zeta} \big{]}\GP\big{[} m_{\hat{\psi}}=\hat{\zeta} \big{]}\nonumber\\ & = \!\!\!\!\!\!\!\!\! \sum_{\hat{\zeta}\in\mathcal{S}_u(V,\partial A_2)} \!\!\!\!\GP\big{[} \hat{\I}^{u(1-\eps)}_{A_1} \subseteq \I^u_{A_1\mid\hat{\zeta}} \subseteq \hat{\I}^{u(1+\eps)}_{A_1} \big{]}\GP\big{[} m_{\hat{\psi}}=\hat{\zeta} \big{]}\nonumber\\ & \geq \Big(1-c \exp\Big(-\frac{c'}{2}\eps^2 u s^{a_{A_1}}\Big)\Big)\GP\big{[} m_{\hat{\psi}}\in\mathcal{A} \big{]}\label{e_psi}. \end{align} Let $\mathcal{E}$ be the set of all $\hat{\psi}\in\mathcal{S}_u(\partial A_3,\partial A_3)$ such that \begin{equation*} \GP\big{[} m_{\hat{\psi}}\in\mathcal{A}^C \big{]}\geq \sqrt{\IP^u_{V,\partial A_2}\big{[} \mathcal{A}^C\big{]}}. \end{equation*} Since \begin{equation*} \IP^u_{V,\partial A_2}\big{[} \mathcal{A}^C\big{]}=\int\GP\big{[} m_{\hat{\psi}}\in\mathcal{A}^C \big{]}\IP^u_{\partial A_3,\partial A_3}\big{[} \d \hat{\psi}\big{]}\geq\IP^u_{\partial A_3,\partial A_3}\big{[} \mathcal{E} \big{]}\sqrt{\IP^u_{V,\partial A_2}\big{[} \mathcal{A}^C\big{]}}, \end{equation*} we have \begin{equation*} \IP^u_{\partial A_3,\partial A_3}\big{[} \mathcal{E}\big{]}\leq \sqrt{\IP^u_{V,\partial A_2}\big{[} \mathcal{A}^C\big{]}}. 
\end{equation*} We have proved the following theorem, which implies Theorem~\ref{t_main1}: \begin{theorem} \label{t_main2} Using the same notation as above, we have that, for constants $c,c'>0$, there exists a set $\mathcal{G}\in\sigma_u(\partial A_3, \partial A_3)$ such that \begin{equation*} \IP^u_{\partial A_3,\partial A_3}\big{[} \mathcal{G}\big{]}\geq 1-\exp\Big(-\frac{c'}{4}\eps^2 u s^{a_{A_1}}\Big), \end{equation*} and for all $\hat{\psi}\in\mathcal{G}$, \begin{equation} \label{e_psi2} \GP\big{[} \hat{\I}^{u(1-\eps)}_{A_1} \subseteq \I^u_{A_1\mid\hat{\psi}} \subseteq \hat{\I}^{u(1+\eps)}_{A_1} \big{]} \geq 1-c \exp\Big(-\frac{c'}{2}\eps^2 u s^{a_{A_1}}\Big). \end{equation} Moreover, for any increasing function $f$ on the interlacements set intersected with $A_1$, with $\sup |f| < M$, we have \begin{align} \label{e_conditionaldecoupling} \lefteqn{\big(\IE(f( \I^{u(1-\eps)}_{A_1}))-cM\exp\big(-c'\eps^2 u s^{a_{A_1}}\big)\big)\1{\mathcal{G}} \leq \IE(f(\I^{u}_{A_1})\mid \I^{u}_{A_3} )\1{\mathcal{G}}\nonumber} \phantom{*************************} \\ \phantom{*****************}&\leq \big(\IE (f( \I^{u(1+\eps)}_{A_1}))+cM\exp\big(-c'\eps^2 u s^{a_{A_1}}\big)\big)\1{\mathcal{G}}. \end{align} \end{theorem} We finish the section with a brief proof of Theorem~\ref{t_main3}. \begin{proof}[Proof of Theorem~\ref{t_main3}] Note that, on equation~\eqref{e_moment3},~$\delta$ can be any real number greater than~$0$, whereas in equation~\eqref{e_moment4}, we need to have $0<\delta<1$. Recall that $u'>u>0$. 
We have, by substituting the appropriate~$\delta$ in~\eqref{e_moment5} and ignoring the union bound term~$cr^{2d-2}$, \begin{align*} \GP\big[G^\Sigma_{u}(z)< G^\Sigma_{u+u'}(z) \big] &\geq 1-\GP\big[G^\Sigma_{u}(z)> (u+u' 4^{-1})\capacity(V)\pi(\Xi(z)) \big] \\ &\quad -\GP\big[ G^\Sigma_{u+u'}(z) <2^{-1}(u+u')\capacity(V)\pi(\Xi(z)) \big] \\ &\geq 1-\exp\left(-\frac{c}{4}(u+u')s^{a_{A_1}}\right)-\exp\left(-\frac{c}{16} \frac{(u')^2}{u^2} us^{a_{A_1}}\right) \\ &\geq 1-\exp\left(-c' u' s^{a_{A_1}}\right). \end{align*} Now, proceeding in the same manner as we did in the proof of Theorem~\ref{t_main2}, we are able to prove Theorem~\ref{t_main3}. \end{proof}
\section{Conclusion} \label{sec:conclusion} In this paper, we demonstrated that deep neural network based mean opinion score (MOS) estimation of speech signals processed by DNS models can be improved by adding auxiliary supervision on the original distribution of the scores. We demonstrated several ways these extra supervisions can be incorporated, either by integrating the uncertainty (variance of the scores) into a single task loss weighting strategy or directly incorporating the variance or histogram information into a multi-task learning setting. While some of the approaches appear to be more effective than others, it is clear that providing auxiliary supervision will result in better performance than doing single task MOS estimation. This benefit is practically free since during the data curation process (e.g., ITU-P.808~\cite{naderi2020open}) these statistics are typically available but discarded during model training. We also note that direct opinion score prediction seems to consistently generate the best results among all the proposed models. Our results were obtained with limited hyper-parameter search; our multi-task learning setups do not employ any loss balancing techniques~\cite{liu2019loss,kendall2018multi,ganin2014unsupervised} -- often crucial for achieving the best performance. We also opted for a simple convolutional LSTM model as our \textit{backbone} for the simplicity of exposition; combining auxiliary supervision into more sophisticated architectures (e.g. teacher-student model from DNSMOS) has the potential to bring substantial performance benefits. Further investigation is also warranted for a combination between the presented approaches. It would be interesting to see whether the integration of higher-order moments (skewness, kurtosis) into the multi-task learning setup can induce further improvements. 
We would also like to investigate the compatibility of our proposed approaches in more recent speech quality assessment challenges~\cite{dubey2022icassp} and datasets~\cite{reddy2021dnsmos835} where background noise quality labels are also being provided. In the same vein, we wish to also investigate the effect of providing supervision in the form of soft labels regarding the reverberation of the speech signals (e.g. energy ratio C50~\cite{gamper2020blind}, reverberation time $T_{60}$~\cite{gamper2018blind}) in improving the quality of MOS estimation. \section{Dataset and score distribution} \label{sec:dataset} \begin{figure}[!htb] \centering \includegraphics[width=\linewidth]{figs/hist-labels-prob-log-wide-2.png} \caption{Histogram of (a) MOS, (b) Median, (c) Standard deviation, (d) Skewness, (e) Kurtosis of the scores and (f) Number of opinion scores per clip. The last 3 subfigures are in log-scale for better visibility.} \label{fig:hist-labels} \end{figure} The dataset used in our experiment is derived from the Interspeech 2020 Deep Noise Suppression Challenge dataset~\cite{reddy2020interspeech}, obtained using ITU-T P.808~\cite{reddy2020interspeech,naderi2020open}. P.808 is an online crowd-sourcing based highly reproducible subjective testing framework. It has been shown to stack rank noise suppression models with high accuracy when each model is tested as an average over a statistically significant number of clips. In our dataset, 121679 unique files comprising both noisy and clean speech are first processed through 320 unique noise suppression models and model variations. We only take the files that are between 4 and 20 seconds in length and consist of only single-channel 16~kHz samples. The process generates a total of 419836 files in the training set. To allow comparisons with external baselines, we used the test set from DNSMOS~\cite{reddy2021dnsmos} (18K files) for all evaluations. 
The statistics of the training dataset are shown in Figure~\ref{fig:hist-labels}. The ratings of the speech qualities vary between very poor ($\text{MOS}=1$) and excellent ($\text{MOS}=5$) and, as shown in Figures~\ref{fig:hist-labels}(a) and (b), the majority of the MOS ratings are between 2.5 and 4. From Figure~\ref{fig:hist-labels}(c), we can also see that a sizable number of the samples have opinion scores with a standard deviation $\sigma > 1$, indicating a high amount of subjectivity in the opinion scores. The skewness (Fisher-Pearson) of the opinion-score distribution ranges between -1.75 and 1.75, as shown in Figure~\ref{fig:hist-labels}(d). Such high skewness indicates that the median of the opinion scores is often different from the MOS scores. Interestingly, in Figure~\ref{fig:hist-labels}(e), we also notice that the majority of the samples are \textit{platykurtic} -- most of the samples are free from extreme outlier opinion scores. Figure~\ref{fig:hist-labels}(f) shows the number of opinion scores per sample; the majority (75\%) of the samples have 5 opinion scores. \section{Experimental Setup} \subsection{Evaluation Criteria} \label{sec:exp} We use (i) Pearson's correlation coefficient (PCC), (ii) Spearman's rank correlation coefficient (SRCC), (iii) mean absolute error (MAE) and (iv) root mean square error (RMSE) between the predicted MOS scores and the ground truth human ratings to evaluate the performance of our models. Since we are interested in evaluating the performance of a number of DNS models in enhancing the speech quality of the given samples, in addition to calculating the four evaluation metrics on a \textit{per-file} basis, we also group the clips together by the DNS model used to generate them and calculate the evaluation metrics. This way of generating the evaluation metrics is referred to as \textit{stack-ranked} evaluation~\cite{reddy2021dnsmos}.
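The four metrics and the stack-ranked protocol above can be sketched as follows (a minimal NumPy version for illustration; the function names are ours, and ties in the Spearman ranking are broken arbitrarily rather than averaged as a full implementation would do):

```python
import numpy as np

def pearson(x, y):
    """Pearson's correlation coefficient (PCC)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

def spearman(x, y):
    """Spearman's rank correlation (SRCC): PCC computed on the ranks."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(np.asarray(x)), rank(np.asarray(y)))

def mae(pred, true):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(pred, float) - np.asarray(true, float))))

def rmse(pred, true):
    """Root mean square error."""
    return float(np.sqrt(np.mean((np.asarray(pred, float) - np.asarray(true, float)) ** 2)))

def stack_ranked(pred, true, model_ids):
    """Average predictions and labels per DNS model before scoring."""
    ids = np.asarray(model_ids)
    p = np.array([np.mean(np.asarray(pred, float)[ids == m]) for m in np.unique(ids)])
    t = np.array([np.mean(np.asarray(true, float)[ids == m]) for m in np.unique(ids)])
    return pearson(p, t), spearman(p, t), mae(p, t), rmse(p, t)
```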
\section{Introduction} As more people are increasingly working from home and using live telephony and communication applications to collaborate with their peers as well as stay connected to friends and family, retaining and improving speech quality has become a topic of immense importance in industry and academia~\cite{dubey2022icassp,reddy2021icassp,reddy2020interspeech,reddy2021interspeech}. Real-time speech enhancement (SE) solutions~\cite{ephraim1984speech,reddy2017individualized} have been used for decades to improve the perceptual quality of speech. Nowadays they are being replaced by Deep Noise Suppression (DNS)~\cite{fu2017raw,choi2020phase,koyama2020exploring} models due to their flexibility in handling a variety of background noises, room reverberations, and distortions. However, due to the possible wide variety in the training datasets and model architectures, each DNS model often performs noticeably better or worse in dealing with certain kinds of noise compared to other models. Moreover, they can also introduce their own set of artifacts -- ranging from mistaking actual speech for noise and removing it to introducing distortions during the speech reconstruction phase -- all of which can lower the perceptual quality of the speech to the point that an independent listener might prefer the original version of the speech over the noise-suppressed one. In order to properly provision these DNS models for widespread deployment, their performance needs to be evaluated on a large number of noisy and distorted speech samples.
The subjective listening test has been the staple for evaluating perceived speech signal quality~\cite{itu-t-recommendation-p-800}, where multiple users provide judgments on a scale ranging from $1$ to $5$, and usually the average score of all participants over a specific condition (commonly referred to as MOS, i.e., mean opinion score) represents the perceived quality after leveling out individual factors~\cite{moller2011speech}. But given the large number of possible DNS model and noisy sample combinations, such tests would require a huge investment of time and human labor and even then cannot achieve real-time feedback~\cite{avila2016performance}, thus making the process unsustainable for conducting large-scale experiments. Several automated objective instrumental quality measures have been proposed and adopted over the years as an alternative (e.g. PESQ~\cite{itu-t-recommendation-p-862}, POLQA~\cite{itu-t-recommendation-p-863}). However, they were optimized to measure compression artifacts rather than degradation introduced by noise, reverberation, and speech enhancements. These measures are also limited by their need to have access to the original \textit{clean} signals, making the bulk of them \textit{intrusive} and unable to be applied to speech captured in the wild.
However, it is not clear how this approach might generalize to datasets generated via crowd-sourcing based subjective listening tests~\cite{reddy2021dnsmos} that may include hundreds of judges, who may each provide anywhere from one to hundreds of scores. MetricNet~\cite{yu2021metricnet} jointly models MOS estimation with a reconstruction objective of the clean speech signal, to estimate Perceptual Evaluation of Speech Quality (PESQ). The model uses the Wasserstein distance between the ground truth PESQ distribution and the model output as a training objective, where the ground truth distribution is either a simple one-hot vector or a soft target around the true PESQ value. It should be noted that PESQ has been shown to correlate poorly with human rating when used for evaluating speech enhancement models~\cite{reddy2021dnsmos}. Here, we study incorporating the distribution of scores underlying each MOS label for training a speech quality estimation model geared towards evaluating speech enhancement methods. We hypothesize that in addition to the first moment (mean) of the subjective listening scores, providing extra supervision concerning the distribution of the scores (e.g. second-moment/variance or histogram information) may improve model performance and robustness. To test our hypothesis, we develop a number of models that incorporate the (a) variance/standard deviation, (b) median (c) histogram bins of the opinion scores ($1-5$ scale) into the primary regression loss calculation logic of MOS estimation by either (a) direct prediction of these statistics, (b) weighting the MOS estimations by these statistics (c) directly predicting the opinion scores themselves. We develop a convolutional LSTM model as the primary \textit{backbone} and run experiments with different loss functions to align the distributions. 
During our experiments, we found that predicting $5$ opinion scores and then aligning the first and second moments (mean and standard deviation) with the ground truth opinion scores provides the best improvement over vanilla MOS estimation. \section{Proposed model architecture} \label{sec:arch} \subsection{Backbone Model} \label{sec:backbone} \begin{figure}[!htb] \centering \includegraphics[width=.95\linewidth]{figs/model-smaller.png} \caption{Backbone model (single-task MOS estimation) overview. The shape of the features after each operation is shown at the bottom of each layer.} \label{fig:model-overview} \end{figure} The $16$~kHz monaural samples are first pre-processed by an STFT with $512$ samples per frame (i.e., $32$~ms) and a $160$ sample (i.e., $10$~ms) overlap, and thereafter $26$ Mel-frequency bins per frame are extracted. We perform power-to-decibel conversion on the resulting Mel-frequency bins to better align the features with the human perception of sound levels. This results in a $26\times N$ shaped feature matrix per file, where $N$ can be of varying length due to the input audio samples being between $4-20$ seconds long. We utilized a convolutional-LSTM based architecture (referred to as \textit{backbone} henceforth) throughout all of our experiments. We employ $5$ convolutional layers (without any padding) to gradually decrease the size of the feature space before feeding the resultant features to an LSTM layer. The LSTM layer helps to build a fixed-length representation from the variable-length convolutional feature sets. The first convolution layer has a $1 \times 5$ shaped kernel followed by a $1 \times 3$ max-pool operation, which helps to capture the temporal relationship among adjacent input frames. This is followed by two $5 \times 5$ and two $3 \times 3$ shaped convolutional kernels. The first $5 \times 5$ convolution is followed by a $2 \times 2$ max-pool operation to further reduce both the spectral and temporal resolution.
Each of the convolution operations is followed by a $\mathtt{ReLU}$ activation, batch normalization, and dropout regularization (with a dropout probability of $0.1$). The LSTM layer consists of $64$ cells and is followed by a fully-connected layer with $1$ neuron (final prediction). We employed the $\mathtt{Adam}$ optimizer with a batch size of $256$ and an initial learning rate of $0.001$, and a learning rate scheduler which reduces the learning rate by a factor of $0.1$ every $10$ epochs if there is no improvement in the validation metric. The $51300$ parameters ($205$~KB) of the model are trained for up to $100$ epochs. The complete model architecture is shown in Figure~\ref{fig:model-overview}. \subsection{Baselines} \label{sec:baselines} We use two primary baselines for our experiments, which are described below. \myparagraph{MOS Prediction with Convolutional LSTM Backbone} \label{sec:baseline-mos} Our first baseline is the backbone model described in Section~\ref{sec:backbone}, where we train the model using the MOS ground truth only. Every other model proposed in Section~\ref{sec:models} shares the same architecture, but simple modifications are made to accommodate the auxiliary labels and alternative loss functions. \myparagraph{DNSMOS} \label{sec:baseline-dnsmos} The second baseline model is DNSMOS~\cite{reddy2021dnsmos}, a convolutional neural network based multi-stage self-teaching model inspired by continual lifelong learning~\cite{parisi2019continual}. Our primary intention for including this model as a baseline is as a sanity check; we note that comparing DNSMOS with the rest of the models proposed in this paper is not a fair comparison since (a) DNSMOS employs a more sophisticated multi-stage self-teaching architecture compared to our \textit{backbone} model, and (b) we employ 3.5x more audio samples in our training regimen. Nevertheless, we use the same test set from the DNSMOS model to evaluate all proposed models.
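For concreteness, the front-end settings quoted for the backbone fix the shape of the input feature matrix per clip; a minimal sketch (we read the quoted $160$-sample (10~ms) figure as the frame step, which is an assumption, and we omit the Mel filter bank itself):

```python
import numpy as np

SR = 16_000   # sample rate (Hz)
N_FFT = 512   # samples per frame (32 ms)
HOP = 160     # frame step (10 ms), assumed from the quoted overlap figure
N_MELS = 26   # Mel-frequency bins per frame

def n_frames(duration_s):
    """Number of STFT frames for a clip of the given length (no padding)."""
    n_samples = int(duration_s * SR)
    return 1 + (n_samples - N_FFT) // HOP

def power_to_db(power, floor=1e-10):
    """Power-to-decibel conversion applied to the Mel bins."""
    return 10.0 * np.log10(np.maximum(power, floor))

# Feature matrix shape for a clip of T seconds: (N_MELS, n_frames(T));
# clips of 4-20 s give N between roughly 400 and 2000 frames.
```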
\subsection{Models Developed} \label{sec:models} We developed a number of models to incorporate the extra supervision (variance of the scores or histogram information) in addition to the MOS labels. A high variance score is indicative of higher disagreement between the judges, hence the variance ground truth can be a measurement of the confidence of the MOS scores. This confidence of the MOS scores can either be integrated as a weight to the loss function to give higher weight to the confident MOS scores (i.e., low variance) in a single task learning setup or it can be used directly as auxiliary ground truth in a multi-task learning setup. In the same vein, since there are only 5 possible values of the opinion scores (i.e., $1-5$), regardless of the number of opinion scores per sample, the ground truth of the opinion scores can be expressed as a 5-bin histogram and directly used to train the \textit{backbone} model. These approaches have the added flexibility of not requiring a fixed number and order of judges across the whole dataset, and are better suited for datasets collected with crowd-sourcing based approaches such as ITU-T P.808~\cite{reddy2020interspeech,naderi2020open}. \subsubsection{Single Task MOS Estimation with Variance Weighted Loss} \label{sec:single-task-variance} We train the \textit{backbone} model with mini-batch gradient descent and loss is calculated for each sample in the batch before taking a mean across the batch to derive the final loss. However, in this setup, we use the standard deviation ground truth to assign weight to each sample and calculate a weighted loss -- by assigning a higher weight to the samples with lower variance. 
This can be achieved in two primary ways: \myparagraph{Inverse Variance Weighting} This approach is inspired by~\cite{sinha2011statistical}, where the weight of each sample is calculated as $1/(\sigma_i + \delta)$, where $\sigma_i$ is the standard deviation of the sample and $\delta$ is a small constant (e.g., $10^{-3}$) to avoid division by zero. \myparagraph{Linear Variance Weighting} The numerical range of the opinion scores is $1-5$, and the range of the standard deviation is $0-2$. Inverse variance weighting can assign a very high weight to samples with very low variance; as an alternative, we also explore the linear variance weighting strategy. Here, samples with the highest standard deviation ($\sigma=2$) are assigned a weight of 0.1 and samples with the lowest ($\sigma=0$) are assigned a weight of 1; the weight of the remaining samples is linearly interpolated between the two extremes. \subsubsection{Multi-Task Learning} We experimented with several ideas on how to incorporate extra supervision on the distribution of the opinion scores in a multi-task learning setup. They can be categorized as: (i) directly using the variance or median ground truth as the auxiliary label, (ii) calculating a 5-bin histogram of the opinion scores and using that as ground truth, and (iii) predicting opinion scores directly. \myparagraph{MOS + Standard Deviation/Median Prediction} In this setup, an extra regression head is added to the final layer of the backbone model that predicts the standard deviation or median of the opinion scores and is trained with the associated ground truth. \myparagraph{Histogram Prediction} The final layer of the backbone model predicts a 5-bin histogram of the opinion scores and is trained with the associated ground truth calculated from the individual opinion scores in the dataset.
As the number of opinion scores per sample varies between 2 and 30 in our dataset, creating a 5-bin histogram (to account for the 5 distinct values) gives us a consistent way of representing the opinion distribution of all the samples. We experimented with 3 different loss functions to match the histogram distribution with the ground truth: (a) cross-entropy loss, (b) Wasserstein loss~\cite{hou2017squared}, and (c) chi-square~\cite{pele2010quadratic,wang2021chi} loss. The MOS predictions can be derived by taking the weighted average of the bin values. \myparagraph{Direct Opinion Score Prediction} In this setup (shown in Figure~\ref{fig:model-vote-pred}), we designate 5 neurons (since 75\% of the samples have 5 individual opinion scores) in the final layer of the backbone model as a representation of 5 judges and let them predict individual opinion scores. Since we have a variable number of opinion scores per sample and the real judges are not consistent between samples (due to crowd-sourcing), it is not possible to directly compare the predicted and ground truth opinion scores to calculate the loss. Instead, we calculate MOS, standard deviation, median, etc. from the predicted opinion scores and calculate the losses against their respective ground truth from the samples. We experimented with two activation functions: (a) $\mathtt{ReLU}$, and (b) a modified sigmoid (i.e., $1 + 4 \times \mathtt{Sigmoid}(x)$) to keep the predicted values within the $1-5$ range. \begin{figure}[!htb] \centering \includegraphics[width=.95\linewidth]{figs/model-vote-pred.png} \caption{The model used in Direct Opinion Score Prediction.} \label{fig:model-vote-pred} \end{figure} \section{Results} \label{sec:results} \myparagraph{Baseline Sanity Check} \label{sec:prelim} \input{tables/results-spearman} The results of our ablation study are shown in Table~\ref{tab:results}.
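The 5-bin histogram targets and the three distribution-matching losses described in the previous section can be sketched as follows (standard discrete forms of each loss; our training implementation may differ in smoothing and normalisation details, and the chi-square form shown is the symmetric variant):

```python
import numpy as np

def score_histogram(scores):
    """5-bin histogram of integer opinion scores in 1..5, normalised to sum to 1."""
    counts = np.bincount(np.asarray(scores, int), minlength=6)[1:6]
    return counts / counts.sum()

def mos_from_histogram(p):
    """MOS derived as the bin-weighted average of the histogram."""
    return float(np.dot(p, np.arange(1, 6)))

def cross_entropy(p, q, eps=1e-12):
    """Cross-entropy between target histogram p and prediction q."""
    return float(-np.sum(p * np.log(q + eps)))

def wasserstein1(p, q):
    """1-Wasserstein distance between two histograms on the same support."""
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())

def chi_square(p, q, eps=1e-12):
    """Symmetric chi-square distance between two histograms."""
    return float(np.sum((p - q) ** 2 / (p + q + eps)))
```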
Our convolutional LSTM based backbone (model II) achieved similar stack-ranked SRCC to DNSMOS (model I) but shows a 0.16 MAE and 0.13 RMSE improvement. We perform a further inspection of the distribution of the predicted MOS labels generated by these two baselines against the ground truth, which is shown in Figure~\ref{fig:distro-dnsmos-vs-convlstm}. The predictions of DNSMOS are heavily compressed within the 2-4.25 range (note Figure 2(d) of~\cite{reddy2021dnsmos}) while the model II baseline predicts within a broader 1-4.7 range. The differences in model architecture (DNSMOS being more sophisticated) and training set size (model II using 2.5x samples) are the likely causes of such discrepancies, but an in-depth investigation would be required to find the concrete reasons. \begin{figure}[!htb] \centering \includegraphics[width=\columnwidth]{figs/distro-dnsmos-vs-convlstm-wide.png} \caption{Scatter plot of real MOS labels (x-axis) vs predicted MOS labels (y-axis) for DNSMOS (model I) and ConvLSTM Baseline (model II)} \label{fig:distro-dnsmos-vs-convlstm} \end{figure} \myparagraph{Effect of Auxiliary Supervision} \label{sec:effect-aux} In almost every case, providing additional supervision leads to better performance than our model II baseline. Among our single-task experiments, where we employ the variance of the opinion scores to scale the per-sample loss term, the linear variance weighting strategy (IV) improves stack-ranked SRCC by 0.4\% over model II, but inverse variance weighting (III) incurs a 3.83\% drop in the same metric.
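For reference, the two weighting strategies compared in this ablation can be sketched as follows (a minimal version; the underlying per-sample loss is taken to be squared error for illustration, and the helper names are ours):

```python
import numpy as np

def inverse_variance_weights(sigma, delta=1e-3):
    """Weight each sample by 1 / (sigma_i + delta)."""
    return 1.0 / (np.asarray(sigma, float) + delta)

def linear_variance_weights(sigma):
    """Interpolate linearly from weight 1 at sigma=0 to weight 0.1 at sigma=2."""
    return 1.0 - 0.45 * np.asarray(sigma, float)

def weighted_loss(pred, true, weights):
    """Per-sample squared error scaled by the weights, averaged over the batch.

    (The choice of squared error here is illustrative.)"""
    pred, true, w = (np.asarray(a, float) for a in (pred, true, weights))
    return float(np.mean(w * (pred - true) ** 2))
```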
\begin{table}[!htb] \centering \caption{Stack-ranked SRCC per bin for the three histogram prediction models.} \label{tab:histigram-loss} \resizebox{\columnwidth}{!}{% \begin{tabular}{@{}rclllll@{}} \toprule \multicolumn{1}{l}{\multirow{2}{*}{ID}} & \multirow{2}{*}{Histogram Loss} & \multicolumn{5}{c}{Bins} \\ \cmidrule(l){3-7} \multicolumn{1}{l}{} & & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{5} \\ \midrule VII & Cross Entropy & 0.9371 & 0.9351 & 0.5544 & 0.9464 & 0.9431 \\ VIII & Wasserstein & 0.9565 & 0.9548 & 0.631 & 0.9435 & 0.9149 \\ IX & Chi Square & 0.9355 & 0.9343 & 0.6758 & 0.948 & 0.9343 \\ \bottomrule \end{tabular}% } \end{table} Among the three histogram prediction models, the cross-entropy (model VII) and chi-square loss (model IX) variants provide 0.28\% stack ranked SRCC improvement over the Model II baseline. We take a deeper look into them in Table~\ref{tab:histigram-loss}, where we notice that all three models struggle to predict the accurate probability of $\mathtt{Score}=3$ bin, indicated by much lower SRCC compared to other bins. We further compare the ground truth and predictions for model VII in Figure~\ref{fig:his-pred-ce-gt-vs-pred.png} where we notice the model tends to learn a higher value (compared to ground truth) for $\mathtt{Score}=3$ bin. \begin{figure}[!htb] \centering \includegraphics[width=\columnwidth]{figs/his-pred-ce-gt-vs-pred-violin-split.png} \caption{Violin plot~\cite{hintze1998violin} per histogram-bin for ground truth and model VII predictions.} \label{fig:his-pred-ce-gt-vs-pred.png} \end{figure} According to the stack ranked PCC and SRCC metric, predicting MOS and variance score together (model V) results in the top performance improvement (0.66\% and 0.77\% respectively) compared to the model II baseline. 
In the rest of the 6 metrics, however, opinion score prediction with $\mathtt{ReLU}$ activation (model X) and MOS with median score prediction (Model VI) are the top two performing models. Opinion score prediction with $\mathtt{ReLU}$ activation (model X) achieved the highest improvement in RMSE (0.015 per-file, 0.016 stack-ranked) and SRCC (1.02\% per-file, 0.77\% stack-ranked). To further investigate how model X generates the top results, we plot the distributions of the activations from the final 5 neurons of model X in Figure~\ref{fig:vote-pred-distro}. We can notice that the first 3 neurons tend to produce higher scores than the last 2. The last two neurons also produce scores with relatively high variance. \begin{figure}[!htb] \centering \includegraphics[width=\columnwidth]{figs/5-neuron-violin-3.png} \caption{Violin plot~\cite{hintze1998violin} of the final 5 neurons' activations from the opinion score prediction (model X).} \label{fig:vote-pred-distro} \end{figure} \section{Acknowledgments} We would like to thank Sebastian Braun and the rest of the members of the Audio and Acoustics Research Group at Microsoft Research for their valuable feedback; Hari Dubey and Ross Cutler from the IC3-AI team for providing the dataset for the experiments. \bibliographystyle{IEEEtran}
\section{Introduction} In recent years the field of galaxy cluster surveys has been re-energised by the realisation that a well-measured cluster population places strong independent constraints on cosmological models \citep[e.g.][]{boehringer2004, vikhlinin2009, mantz2010, weinberg2013}. As the largest known bound objects, clusters can be used to simultaneously probe the cosmic expansion rate and the gravitational mechanisms responsible for the growth of structure in the Universe. The evolution of the galaxy cluster mass function across cosmological times and the distribution of clusters within the tridimensional large-scale structure are two key observables, since they are readily predicted by theoretical models and simulations. Galaxy cluster cosmology studies start with constructing cluster samples. Fortunately for observers, the hot baryonic gas trapped in galaxy clusters emits large amounts of X-ray photons, in great part due to bremsstrahlung processes. Extended X-ray objects are thus the signpost of deep potential wells, and their X-ray luminosity directly relates to the mass of the dark matter halo in which they reside. Therefore, large surveys of the sky at the high-energy end of the electromagnetic spectrum (i.e.~X-ray) permit a complete census of clusters covering a wide range of masses and redshifts, and survey data themselves can provide estimates of the mass of these objects. This is why large surveys in the X-ray wavelengths aimed at constraining cosmology with galaxy clusters have been developed since the early years of X-ray astronomy \citep{henryarnaud1991, bahcallcen1993, jonesforman1984}, with a notable step-change brought about by the ROSAT all-sky survey \citep{ebeling2000, ikebe2002, reiprichboehringer2002, schuecker2003} and ROSAT serendipitous surveys \citep{rosati1998, romer2000,burke2003, burenin2007} and the support of Chandra and XMM-Newton \citep{pacaud2006, vikhlinin2009, mantz2010, finoguenov2010, clerc2014, pierre2016}. 
The next major advance in the field will be offered by \emph{eROSITA} \cite[extended ROentgen Survey with an Imaging Telescope Array,][]{predehl2014} which will survey the entire sky in the 0.3-10~keV energy range at depths 10 to 30 times deeper than ROSAT. The combination of eROSITA's field of view, angular resolution and sensitivity will lead to the detection of $\sim 100,000$ galaxy clusters down to $[0.5-2]$~keV fluxes of $\sim 3\times 10^{-14}$~ergs\,s$^{-1}$ cm$^{-2}$ and up to redshifts of unity and beyond \cite[see][]{merloni2012,borm2014}. However, X-ray observations alone, in general, are not sufficient to fully assess the nature of the emitting sources and, most importantly, to determine their redshifts. Therefore, optical observations play a critical role in complementing surveys of galaxy clusters in X-rays. Whilst multi-filter optical imaging proves efficient at detecting and characterizing galaxy clusters -- notably through their ubiquitous red sequence \citep[e.g.][]{gladdersyee2000, rykoff2014} --, ultimate confirmation of a galaxy cluster is achieved by optical spectroscopy. Spectroscopic observations of cluster members can be used to disentangle projection effects and substructures from real concentrations, and they also provide the precise redshift of the halo, and therefore lead to precise luminosities and masses once they are combined with X-ray measurements. Obtaining spectroscopic redshifts for galaxy cluster members is recognized as a major bottleneck in X-ray cluster surveys, because of the double need for deep imaging data to select targets, and the deep spectroscopic exposures necessary for redshift determination. The SPIDERS (SPectroscopic IDentification of \emph{eROSITA} Sources) cluster program is specifically designed to overcome this bottleneck. 
It relies on the BOSS spectrograph mounted on the SDSS-2.5m telescope at Apache Point Observatory \citep{gunn2006} to follow-up galaxies detected in the large area of extragalactic sky imaged in $ugriz$ filters by the same telescope. The SDSS/BOSS instrumentation and infrastructure are used in combination with most recent techniques in finding X-ray galaxy clusters and their photometric members, in order to perform an unprecedentedly wide spectroscopic survey of X-ray galaxy clusters. Advanced techniques used in this work include the wavelet filtering of X-ray maps, and the use of a series of matched filters to look for red-sequence galaxies in a multi-variate optical parameter space (colour, position and magnitude). This paper (one of a series of SDSS-IV technical papers), describes the targeting and analysis steps leading to the construction of a large, spectroscopically validated sample of X-ray selected galaxy clusters within SPIDERS. In the preparation phase for \emph{eROSITA}, these samples are drawn from ROSAT and XMM data. Throughout this paper we use a small pilot survey dubbed SEQUELS (Sloan Extended Quasar, ELG, and LRG Survey, a precursor to the main SPIDERS/eBOSS/TDSS program) to illustrate the efficacy of our targeting and analysis approach. We present the first scientific results from the SPIDERS cluster program. The paper is structured as follows. Section~\ref{sect:spiders} is devoted to the general presentation of the SPIDERS cluster survey and the samples of X-ray clusters it is based upon. The selection of targets for the shallower, pre-\emph{eROSITA} phase of the survey is described in Sect.~\ref{sect:targeting}, along with forecasts regarding the outcome of the observations. In Sect.~\ref{sect:analysis} we depict the steps envisaged to transform observations into science-oriented catalogues. We emphasize that these methods are subject to improvements in the course of the survey. 
In Section~\ref{sect:sequels} we present some examples for the science exploitation of the SPIDERS program, using the validated sample of clusters from the SEQUELS pilot survey, that we compare to existing cluster catalogues in Sect.~\ref{sect:mcxc}. We conclude in Sect.~\ref{sect:conclusions}. Unless otherwise stated, we assume a flat $\Lambda$CDM cosmological model with $\Omega_m=0.3$, $\Omega_{\Lambda}=0.7$ and $H_0 = 100 h$~km\,s$^{-1}$\,Mpc$^{-1}$ with $h=0.7$. We define $L_C \equiv L_X/E(z)$, with $E(z)=H(z)/H_0$ and $L_X$ the [0.1-2.4]~keV luminosity of a cluster. \section[]{The SPIDERS cluster program} \label{sect:spiders} \subsection[]{General description} SPIDERS is an observational program, part of the SDSS-IV project \citep{blanton2017}. The primary goal of SPIDERS is to obtain homogeneous and complete spectroscopic follow-up of extragalactic sources, both point-like and extended, using data from X-ray satellites and over the SDSS imaging footprint. Given the nature of these sources, SPIDERS naturally splits into two main components, i.e.~an AGN program and a cluster program. The SPIDERS AGN targeting strategy is described in Dwelly et al. (in prep.), and will collect $\sim$50,000 spectra of ROSAT, XMM and \emph{eROSITA} X-ray AGN. A pilot study for the SPIDERS AGN survey, based around the BOSS follow-up of X-ray selected AGN in the XMM-Newton XMM-XXL field, is presented in \citet[][]{menzel2016}. This paper describes the targeting of X-ray extended sources identified as galaxy cluster candidates. The driving goals of the program are the confirmation of those candidates and the assignment of a precise redshift. This in turn leads to the determination of precise absolute cluster parameters (including X-ray luminosity and mass). 
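These conventions can be made concrete in a short numerical sketch (plain Python, not part of any SPIDERS pipeline), computing $E(z)$ and the rescaled luminosity $L_C$ for the flat $\Lambda$CDM parameters adopted above; the example values are purely illustrative.

```python
import math

# Flat LambdaCDM parameters adopted in this paper
OMEGA_M, OMEGA_L, H0 = 0.3, 0.7, 70.0  # H0 in km/s/Mpc (h = 0.7)

def E(z):
    """Dimensionless Hubble parameter E(z) = H(z)/H0 for a flat LCDM model."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def L_C(L_X, z):
    """Rescaled luminosity L_C = L_X / E(z), with L_X the [0.1-2.4] keV luminosity."""
    return L_X / E(z)

# Illustrative example: a cluster at z = 0.3 with L_X = 1e44 erg/s
print(E(0.3))          # ~1.166
print(L_C(1e44, 0.3))  # ~8.6e43 erg/s
```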
A number of important secondary goals include the estimation of cluster dynamical masses (via line-of-sight velocity dispersion measurements) and the study of the physical interplay between massive dark matter halos, the hot baryonic gas they host, and the galaxies that live therein. From a technical point of view, the SPIDERS cluster program represents a novel approach to galaxy cluster spectroscopic follow-up: the large data volume involved (several thousands of galaxy clusters) demands innovative targeting and analysis strategies. Considering that the ultimate goal envisaged by the program is precision cosmology using galaxy clusters as tracers of the large-scale structure, we require that all the procedures involved undergo careful control and validation. SPIDERS will follow up X-ray extended sources detected in \emph{eROSITA} data in the final years of SDSS-IV. Prior to \emph{eROSITA}'s launch, galaxy clusters identified in the shallower RASS and sparser XMM-Newton data will constitute the bulk of the SPIDERS program. There will be an incremental increase in X-ray sensitivity brought about by each of \emph{eROSITA}'s sky surveys, which will begin to be accumulated after the start of SDSS-IV. Therefore, SPIDERS is planned in three tiers. Tier~0, the shallowest tier, relies mainly on ROSAT data. When successive \emph{eROSITA} catalogues become available (following the cadence of one deeper X-ray catalogue every six months, going from eRASS:1 to eRASS:8), sources in the North Galactic Cap within the German \emph{eROSITA} sky ($180 < l < 360$) will be added to the pool of targets. The planned launch date of \emph{eROSITA} means that Tier~1 and Tier~2 will most likely correspond to eRASS depths 2 and 4, respectively. Fig.~\ref{fig:layout} shows the layout of the surveys in equatorial coordinates.
The area covered in Tier~0 corresponds to the entire eBOSS footprint not covered in Tiers~1 and 2, i.e.~$>5,200~\deg^2$, while Tier~1 and 2 will lie to the South of the eROSITA-DE boundary (the black dashed curve in this figure), with exact footprints dependent on the launch date. We detail in the following sub-sections the samples of galaxy clusters followed-up by SPIDERS in Tier~0, along with expectations regarding the deeper tiers, when \emph{eROSITA} is available. \begin{figure*} \includegraphics[width=\linewidth]{images/eboss_chunks_Hammer-Aitoff_for_Nicolas} \caption{Location of the survey in equatorial coordinates. The blue line is the perimeter of the BOSS optical imaging area within which the eBOSS survey lies. Various distinct regions of sky (chunks) are tiled separately: in this figure chunks {\tt eboss} 1 to 5, 9 and 16 are displayed, approximating the area expected to be covered after 2 years of eBOSS/SPIDERS/TDSS survey operations. Each spectroscopic plate is represented by a circle of diameter $\sim 3 \deg$. The black dashed line indicates the boundary between the eastern and western Galactic hemispheres and delimits the German and Russian halves of the \emph{eROSITA} sky.} \label{fig:layout} \end{figure*} \begin{figure} \includegraphics[width=84mm]{images/Tier-0-1-2-all_dndm_plots.pdf} \caption{Cumulative mass distribution of galaxy clusters (per unit area) in the SPIDERS survey, split into three different tiers. Tier~0 stands for CODEX clusters only. Also shown is the cumulative mass function of all haloes with mass above $10^{12.5} h^{-1}$~M$_{\odot}$ and the mass distribution of \emph{eROSITA} clusters at the end of the 4-year survey (eRASS:8, not part of SPIDERS). 
The flux limits assumed are taken from \citet{merloni2012}.} \label{fig:dndm_tiers} \end{figure} \subsection[]{SPIDERS Tier 0: CODEX and XCLASS} Prior to the delivery of the first cluster catalogues from \emph{eROSITA}, SPIDERS ensures follow-up of galaxy clusters discovered in the RASS (\emph{ROSAT} All-Sky Survey) and in XMM archival data. The two relevant cluster samples are the CODEX catalogue (Finoguenov et al., in prep.) and the XCLASS-RedMapper catalogue \citep{clerc2012b, sadibekova2014}, respectively. Both are based on X-ray detections of galaxy clusters, yet they differ in their characteristics and their construction. Since they conveniently encompass the range of X-ray properties expected from \emph{eROSITA} clusters (see below), they are of particular interest in preparation for the \emph{eROSITA} survey. We provide details on their construction in the following paragraphs, and Table~\ref{table:codex_vs_xclass} summarizes the main characteristics of both samples. Note in particular that the same red-sequence finder was run for both samples. \begin{figure} \includegraphics[width=84mm]{images/Lx-z_plot_MCXCmatched.pdf} \caption{The distribution of SPIDERS cluster candidates in the redshift-X-ray luminosity plane. Grey points represent CODEX clusters with a richness above 10, i.e.~the main pool of targets in SPIDERS. When matched in position to a MCXC cluster, their location is taken from the MCXC meta-catalogue \citep{piffaretti2011} and marked by a darker point. Otherwise, their redshift corresponds to $z_{\lambda}$, as estimated from SDSS photometry, with typical uncertainty $\Delta_z/(1+z) \sim 0.01 - 0.02$; their rest-frame [0.1-2.4]~keV luminosity derives from the ROSAT flux using $z_{\lambda}$. Blue points are the 230 CODEX clusters confirmed in the SEQUELS-DR12 demonstration sample (Sect.~\ref{sect:description_sequels}) with a spectroscopic redshift (typical $\Delta_z/(1+z) \sim 0.001$).
24 MCXC clusters lie within the SPIDERS pilot footprint (Fig.~\ref{fig:pilotareasamples}). ABELL~1361 is within a masked area of the CODEX survey, hence the absence of a match despite its remarkable X-ray brightness.} \label{fig:lxzmcxc} \end{figure} \begin{figure} \includegraphics[width=84mm]{images/Lx-z_plot.pdf} \caption{The distribution of SPIDERS confirmed clusters in the redshift-X-ray luminosity plane. As in Fig.~\ref{fig:lxzmcxc}, grey dots represent the main pool of targets in SPIDERS. Red and blue points are the 230 CODEX clusters confirmed in the SEQUELS-DR12 demonstration sample. The three XCLASS clusters validated as part of the demonstration sample are displayed as green squares. Two low-richness CODEX clusters labelled '(C)' suffer from contamination by a point source in the RASS data, not necessarily linked to the system, which artificially boosts the X-ray luminosity measurement.} \label{fig:lxzdist} \end{figure} \subsubsection[]{The CODEX subsample} CODEX (COnstrain Dark Energy with X-ray clusters, Finoguenov et al., in prep.) is an extensive search for galaxy clusters in ROSAT data, based on the association of RASS photon overdensities with red-sequence galaxies identified in SDSS. It covers the entire SPIDERS/eBOSS footprint and these detections are expected to appear as the brightest, best-characterized cluster sources in future \emph{eROSITA} data. The current study provides the only spectroscopically complete CODEX catalogue down to low richness values. As such, this paper is the first in a series of CODEX catalogue papers. The sample construction is fully detailed in Finoguenov et al. (in prep.); we briefly summarize here the steps leading to the list of cluster candidates. As a first step, RASS data are searched for faint sources using a wavelet-based detection algorithm. The detection threshold is set to 4-$\sigma$.
Sensitivity maps (as in Fig.~\ref{fig:rass_sensitivity}) are created as by-products and help in assessing the completeness of the sample. On average, the 90\% completeness level is achieved for a source delivering 8 X-ray counts, while the 10\% completeness level is reached for sources delivering 4 counts. Given these sensitivity estimates, the number of spurious sources is estimated to be between 500 and 1000 across the entire CODEX area and the number of X-ray AGN amounts to around 20\,000. The RedMapper algorithm \citep{rykoff2014} looks in SDSS imaging data (Data Release 8) for galaxies with similar colours around each faint RASS source, i.e.~for a red-sequence formed by passive galaxies at the same redshift. This in turn provides an estimate for the photometric redshift of the cluster (based on the colours of the galaxies) and an optimized richness estimator. The counterpart having the highest richness is listed for each RASS X-ray source. Given the uncertain position of RASS detections, the red-sequence algorithm is then run to find the optimal cluster centre. The constraint on the centre position is relaxed, to be within $3\arcmin$ from the X-ray position\footnote{The mean and 95th percentile of the RASS faint point-source 1-$\sigma$ positional uncertainty are $\sim 20$ and $\sim 35$ arcsec respectively.}. The newly found red-sequence is used to provide a fresh estimate for the cluster photometric redshift and richness (optical or ``OPT'' quantities: $z_{\lambda, {\rm OPT}}, \lambda_{\rm OPT}$, etc.). In the final step, X-ray properties based on the RASS count-rate and the RedMapper redshift are calculated in optimized apertures (imposing a minimal signal-to-noise threshold of 1.6), assuming a model for the X-ray spectral emissivity. These include the aperture-corrected cluster flux $f_X$ and the $[0.1-2.4]$~keV luminosity $L_X$.
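This last step can be illustrated with a minimal sketch of the flux-to-luminosity conversion, assuming the flat $\Lambda$CDM model of this paper and neglecting the K-correction that the actual pipeline applies through its X-ray spectral emissivity model; all function names below are illustrative, not CODEX pipeline routines.

```python
import math

OMEGA_M, OMEGA_L = 0.3, 0.7
C_KMS, H0 = 299792.458, 70.0  # km/s and km/s/Mpc (h = 0.7)
MPC_CM = 3.0857e24            # centimetres per megaparsec

def E(z):
    """Dimensionless Hubble parameter for a flat LCDM model."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def comoving_distance_mpc(z, n=1000):
    """Comoving distance, trapezoidal integration of (c/H0) dz / E(z)."""
    dz = z / n
    integral = sum(0.5 * (1.0 / E(i * dz) + 1.0 / E((i + 1) * dz)) * dz
                   for i in range(n))
    return (C_KMS / H0) * integral

def luminosity_from_flux(f_x, z):
    """L_X = 4 pi d_L^2 f_X with d_L = (1 + z) x comoving distance (flat space).
    No K-correction is applied here, unlike the actual pipeline."""
    d_l_cm = (1.0 + z) * comoving_distance_mpc(z) * MPC_CM
    return 4.0 * math.pi * d_l_cm ** 2 * f_x

# A source with f_X = 1e-13 erg/s/cm^2 at z = 0.2 yields roughly 1.1e43 erg/s
L = luminosity_from_flux(1e-13, 0.2)
```

The simple trapezoidal integral is accurate to well below a percent at these redshifts; a production analysis would instead use a dedicated cosmology library.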
The average number density of CODEX sources over the BOSS imaging footprint is 0.8~deg$^{-2}$ (for candidates with richness\footnote{By ``richness'' we will refer to the RedMapper richness estimator \citep{rykoff2014}. It correlates with the total cluster mass and equals the sum of the membership probabilities $p_{\rm mem}$ of galaxies within a given system.} $\lambda_{OPT} \geq 10$). Adding those with lower richness brings this number up to 1.0~deg$^{-2}$. However, due to spatial fluctuations in the RASS depth, these numbers vary as a function of sky position. \begin{figure} \includegraphics[width=84mm]{images/rass_sensitivity_finoguenov.pdf} \caption{ROSAT All-Sky Survey (RASS) sensitivity in the CODEX footprint (North Galactic cap). The colour bar indicates the limiting flux in the [0.5-2]~keV band, from red ($10^{-13}$~ergs\,s$^{-1}$\,cm$^{-2}$) to black ($8\times10^{-13}$~ergs\,s$^{-1}$\,cm$^{-2}$).} \label{fig:rass_sensitivity} \end{figure} \subsubsection[]{The XCLASS-RedMapper subsample} XCLASS \citep[XMM CLuster Archive Super Survey,][]{clerc2012b} is a search for galaxy clusters detected in the XMM-Newton archive, based on a robust cluster detection algorithm, developed in the context of the XMM-LSS \citep[e.g.][]{pacaud2006,clerc2014} and XMM-XXL \citep{pierre2016} surveys. Extensive simulations of XMM observations, including realistic instrumental effects and astrophysical source populations, support the construction of a pure sample of extended objects in $[0.5-2]$~keV XMM images (the ``C1'' selection, \citealt{pacaud2006}). Visual screening removes nearby galaxies and detector artefacts, leading to the final catalogue of XCLASS galaxy cluster candidates. The L4SDB\footnote{{http://xmm-lss.in2p3.fr:8080/l4sdb/}} database stores validated detections, along with other useful information related to the X-ray sources (redshifts, flux measurements, etc.).
The XCLASS surveyed area amounts to $\sim 90$~deg$^2$ but, due to its very nature, it is scattered across the extragalactic sky ($|b_{\rm galactic}| > 20 \deg$). All analyzed XMM observations were deliberately truncated to exactly 10~ks exposure so as to provide a survey as uniform as possible in sensitivity. \citet{sadibekova2014} performed the correlation of XCLASS C1 sources with the RedMapper optical cluster catalogue in the regions where the two surveys overlap. A major difference with respect to the CODEX sample is the very reliable cluster X-ray positions (the positional uncertainty amounts to a few arcsec rms) and the secure extended nature of the X-ray detections. Similarly to CODEX, the RedMapper algorithm provides an estimate for the photometric redshift and the optical richness ($\lambda_{XC}$) of the clusters. The SPIDERS sample contains 238 XCLASS clusters securely matched to a RedMapper candidate: i.e. $\lambda_{XC}>20$ and a correlation radius $r_{\rm corr} \leq 3 \arcmin$, or $(5<\lambda_{XC}<20)$ and $r_{\rm corr} \leq 1 \arcmin$. We further added a group of 40 less securely matched sources, having $(5<\lambda_{XC}<20)$ and $1\arcmin \leq r_{\rm corr} \leq 3 \arcmin$. The total number of XCLASS-RedMapper sources across the full SDSS imaging footprint amounts to 278; 84 of them are in common with the CODEX subsample described earlier. Since they are irregularly distributed on the sky, their sky density is quoted over the common overlap area between XMM observations and the imaging footprint and amounts to 3-4 deg$^{-2}$. \begin{table*} \centering \caption{\label{table:codex_vs_xclass} Characteristics of the two samples of X-ray clusters followed up by SPIDERS prior to the launch of eROSITA. Notes: $^{(a)}$: calculated over the area overlapping XMM-Newton observations analyzed in X-CLASS.
$^{(b)}$ only for SEQUELS pilot area, see Sect.~\ref{sect:sequels}.} \begin{tabular}{@{}lcc@{}} \hline & CODEX & XCLASS-RedMapper \\ \hline Number of clusters in SDSS DR8 footprint & 10 415 & 278 \\ Sky distribution & Full SDSS area & Spatially scattered \\ Average candidate density ($\deg^{-2}$) & 0.8 & 3-4$^{(a)}$ \\ Maximal redshift & $\sim 0.6$ & $\sim 0.6$ \\ Minimal richness & 10 (3$^{(b)}$) & 5 \\ \hline X-ray data origin & RASS faint sources & XMM-Newton archival data \\ X-ray selection & 4-$\sigma$ above background & C1 selection (extended sources) \\ Limiting flux in X-rays (0.5--2~keV, units ergs/s/cm$^2$) & $\sim 10^{-13}$ & $\sim 10^{-14}$ \\ X-ray positional accuracy & $\sim 3\arcmin$ & $\lesssim 10\arcsec$ \\ X-ray spatial resolution & $\sim 100 \arcsec$ & $\sim 10-20 \arcsec$ \\ X-ray energy resolution ($\Delta E$ @ 1 keV) & $\sim 450$~eV & $\sim 100$~eV \\ \hline Red-sequence finder & redMaPPer v.5.2 & redMaPPer v.5.2 \\ Optical search & Around each X-ray source & Independent from X-ray sources \\ Optical/X-ray association & Richness cut vs. chance identification & Angular distance criterion + visual checks \\ \hline \end{tabular} \end{table*} \begin{figure*} \begin{tabular}{cc} \includegraphics[width=84mm]{images/2_2338_gri_contours.pdf} & \includegraphics[width=84mm]{images/RMXCLASS-336_gri_contours.pdf} \\ \end{tabular} \caption{SDSS-$gri$ composite images of two SPIDERS clusters with X-ray contours overlaid. \emph{Left:} CODEX cluster (Id: 2\_2338, known as ABELL 661) at R.A.\,=\,8h27m15.5s, Dec\,=\,+$53\degr$8m53s and $z=0.121$; the contours correspond to the ROSAT All-Sky Survey [0.1-2.4]~keV smoothed image. \emph{Right:} bright XCLASS cluster (Id: XC~0062; RX\,J0256.5+0006 in \citealt{romer2000}) at R.A.\,=\,2h56m30.8s, Dec\,=\,+$0\degr$6m3s and $z_{\rm phot}=0.37$ (the SPIDERS-determined redshift of XC~0062 will be published at completion of the survey). The contours correspond to the XMM-Newton [0.5-2]~keV smoothed image.
In both panels, North is up and East is left; the cyan cross indicates the position of the original X-ray detection and the red circle the position of the optical centre. Point-like sources are easily distinguishable in XMM data (small-scale overdensities in the right panel). } \label{fig:example_codex_xclass} \end{figure*} \subsection[]{\emph{eROSITA} survey: eRASS samples} Although this paper focuses mainly on the targeting of Tier~0 samples, namely CODEX and XCLASS, we forecast our target budget for the future Tiers 1 \& 2 in SPIDERS. These forecasts are based on pre-launch assumptions about the number and nature of \emph{eROSITA} clusters \citep{merloni2012}. A simple model, subject to the current uncertainty concerning the in-flight performance of the instrument and the actual physics of the population it will uncover, helps in deriving rough numbers and adjusting the targeting strategy. We modeled the galaxy cluster mass distribution using the \citet{tinker2008} halo mass function and converted masses ($M_{200c}$) to X-ray temperatures and luminosities using scaling relations. The \emph{eROSITA} selection is modeled with a lower cut in soft-band flux, representative of the selection relevant to each tier ($f_{\rm lim} = 1.2$ and $0.8 \times 10^{-13}$~ergs\,s$^{-1}$\,cm$^{-2}$ for Tiers 1 and 2, respectively). Integrating the resulting filtered mass distribution provided the curves shown in Fig.~\ref{fig:dndm_tiers}. The galaxy population within clusters was simulated by means of galaxy luminosity functions parametrized as a function of cluster mass and redshift \citep{popesso2005,hansen2009}. We folded a spectral energy distribution template representative of passive galaxies \citep{maraston2009} into the SDSS filter set. Flux losses due to the finite $2\arcsec$ fiber aperture were accounted for by assuming a size-magnitude relation \citep{bernardi2007} and a typical $1.4\arcsec$ seeing (see details in \citealt{zhang2016}).
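Schematically, the flux-limited halo selection entering these forecasts can be sketched as below. The luminosity--mass scaling relation uses placeholder normalisation and slopes (not the values adopted in this work); only the structure of the tier-dependent flux cut is meant to be illustrative.

```python
import math

# Flat LCDM as in this paper; scaling-relation parameters are placeholders.
OMEGA_M, OMEGA_L = 0.3, 0.7
C_KMS, H0 = 299792.458, 70.0
MPC_CM = 3.0857e24

def E(z):
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def luminosity_distance_cm(z, n=1000):
    """d_L = (1+z) x comoving distance, trapezoidal integration (flat universe)."""
    dz = z / n
    dc = sum(0.5 * (1.0 / E(i * dz) + 1.0 / E((i + 1) * dz)) * dz for i in range(n))
    return (1.0 + z) * (C_KMS / H0) * dc * MPC_CM

def lx_from_mass(m200, z):
    """Illustrative L-M relation: L_X = L0 (M/M0)^alpha E(z)^beta.
    Normalisation and slopes are placeholder values, not those of the paper."""
    L0, M0, ALPHA, BETA = 1.0e44, 3.0e14, 1.6, 7.0 / 3.0
    return L0 * (m200 / M0) ** ALPHA * E(z) ** BETA

def passes_flux_limit(m200, z, f_lim):
    """Model the eROSITA selection as a lower cut in soft-band flux."""
    flux = lx_from_mass(m200, z) / (4.0 * math.pi * luminosity_distance_cm(z) ** 2)
    return flux >= f_lim

# Tier-dependent limits from the text: 1.2e-13 (Tier 1) and 0.8e-13 (Tier 2)
print(passes_flux_limit(3.0e14, 0.1, 1.2e-13))  # massive nearby cluster: True
```

Integrating the mass function over the haloes passing this cut, at each redshift, yields curves analogous to those of Fig.~\ref{fig:dndm_tiers}.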
We then applied a photometric selection $17 < i(2 \arcsec) < 21.2$ representative of the SPIDERS target selection (see Sect.~\ref{sect:targeting}) and excluded galaxies whose photometric properties correspond to BOSS galaxy targets \citep[the LOWZ and CMASS selections;][]{bolton2012}. Finally, a cluster radius-dependent sampling factor was set to account for fiber collisions. The resulting redshift distribution of targetable galaxies not already targeted in BOSS is shown in Fig.~\ref{fig:dndz_galaxies_erass}, along with the densities of targets for each layer of the \emph{eROSITA} survey. These numbers are indicative and are refined within the Tier 0 phase of SPIDERS. Based on those calculations, SPIDERS (in its Tier 1 and 2 phases) will confirm 90\% of $z \lesssim 0.6$ \emph{eROSITA} clusters by obtaining (at least) 3 spectroscopic redshifts per system (including those known from previous SDSS observations). SPIDERS will also raise the number of spectroscopic members per cluster virial radius to 10 for 50\% of the $z<0.5$ clusters and to 20 for 20\% of the $z<0.5$ clusters. \begin{figure} \includegraphics[width=84mm]{images/eRASS1-4_dndzgalaxy_HDU1_plots.pdf} \caption{Redshift distribution of all targetable galaxies in the \emph{eROSITA} era of the SPIDERS survey. It includes a model for the cluster number density combined with galaxy luminosity functions and roughly accounts for the loss of flux in fibers and for the minimum fiber separation. These numbers are indicative and need refinement in the course of the Tier-0 phase of the survey.} \label{fig:dndz_galaxies_erass} \end{figure} \subsection[]{The SEQUELS pilot program} \label{sect:description_sequels} SEQUELS \citep[The Sloan Extended Quasar, ELG, and LRG Survey,][their App.~A.3]{alam2015} is an ancillary program within BOSS (SDSS Data Release 12) and served as a pilot survey for the eBOSS, SPIDERS and TDSS programs in SDSS-IV \citep{dawson2016}.
Its initial footprint consists of the rectangle $(120 \leq RA \leq 210)$ and $(45 \leq DEC \leq 60)$. Only 300~deg$^2$ of this area were observed (corresponding to 66 plates) as part of Data Release 12 \citep[DR12,][and Fig.~\ref{fig:layout}]{alam2015}. As a preparation for the SDSS-IV SPIDERS cluster follow-up program, SEQUELS contains a number of targets assigned to SPIDERS clusters. While the parent cluster samples are the same as for SPIDERS Tier~0 (i.e.~CODEX and XCLASS), the target selection differs slightly and is in general broader in SEQUELS (see Sect.~\ref{sect:targeting}). Throughout this work, we illustrate our envisaged analysis procedures with results extracted from the pilot SPIDERS program in SEQUELS DR12. Note that 51 SEQUELS plates (about $166 \deg^2$) are observed in the course of the eBOSS survey (post-DR12) and therefore the targeting strategy for those differs slightly from the main SPIDERS survey. These objects are not considered in the following ``pilot sample''. \section[]{Targeting strategy} \label{sect:targeting} This section details the steps followed in preparing the target lists in the first phase of SPIDERS (Tier 0). The main difference with respect to conventional multi-object spectroscopic observations of galaxy clusters is the ensemble treatment of the entire pool of targets. Because the exact set of targets is only known after the eBOSS tiling algorithm has run \citep{dawson2016} and accommodated the various target classes within eBOSS, we worked out a scheme for assigning priorities to potential targets, aimed at optimizing the primary science goal, namely the number of spectroscopically confirmed clusters. \subsection[]{Target selection and prioritization\label{sect:targetsel}} \subsubsection[]{The CODEX and XCLASS red-sequences} To each galaxy cluster candidate we attach a list of potential member galaxies detected over the SDSS imaging data, which form the likely red sequence of a cluster in the SDSS passbands.
Specifically, the redMaPPer algorithm assigns to each galaxy near a cluster a probability $p_{\rm mem} \in [0,1]$ \citep{rykoff2014} that it actually is a cluster member, based on its magnitude, colours and position relative to the cluster centre. This makes it possible to rank galaxies by membership probability within each cluster, down to $p_{\rm mem}=0.05$, a ranking that we convert into a targeting priority, as described in the following for CODEX and XCLASS targets. \subsubsection[]{CODEX clusters} The entire CODEX red-sequence member catalogue comprises 312564 objects over the entire BOSS footprint. Among them, 3797 formally belong to two or more parent clusters: however, since the membership catalogue includes objects down to membership probability $p_{\rm mem}=5\%$, this amount is not necessarily indicative of projection effects: it actually includes galaxies with low probability in one of their parent clusters, as well as galaxies that belong to several clusters on valid physical grounds (mergers). In order to maximize the redshift determination efficiency of the targets \citep[e.g.][]{bolton2012}, only red-sequence candidates with $17.0 < {\tt FIBER2MAG\_I} < 21.2$ are considered for targeting. The algorithm starts with the richest cluster in the sample (as defined by $\lambda_{OPT}$) and iteratively proceeds by decreasing richness. It assigns to each member an integer {\tt TARGETSELECTED} indicating its rank in the red-sequence. Members with a spectroscopic redshift determined from past SDSS/BOSS observations, with values satisfying {\tt SPECPRIMARY == 1} and {\tt ZWARNING == 0}, were identified and removed from the initial list. Targets already assigned a rank within a higher-richness cluster keep the {\tt TARGETSELECTED} flag they were assigned previously. The priority flag for each target is computed based on a combination of {\tt TARGETSELECTED} and the cluster richness $\lambda_{OPT}$.
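The iterative rank assignment just described can be sketched in a few lines. The data structures and field names below are simplified stand-ins for the actual catalogue columns, and the final conversion of rank and richness into a priority flag is not reproduced here.

```python
def assign_ranks(clusters, members):
    """Assign each targetable red-sequence member an integer rank,
    mimicking the TARGETSELECTED flag (field names are illustrative).

    clusters: list of dicts with 'id' and 'richness'
    members:  list of dicts with 'cluster_id', 'galaxy_id', 'p_mem',
              'fiber2mag_i' and 'has_good_specz'
    """
    ranks = {}
    # Start with the richest cluster and proceed by decreasing richness
    for cl in sorted(clusters, key=lambda c: c['richness'], reverse=True):
        rank = 0
        in_cluster = [m for m in members if m['cluster_id'] == cl['id']]
        for m in sorted(in_cluster, key=lambda m: m['p_mem'], reverse=True):
            if m['has_good_specz']:
                continue  # redshift already secured by past SDSS/BOSS spectra
            if not (17.0 < m['fiber2mag_i'] < 21.2):
                continue  # outside the reliable redshift-determination window
            rank += 1
            # a galaxy already ranked within a richer cluster keeps that rank
            ranks.setdefault(m['galaxy_id'], rank)
    return ranks
```

A galaxy belonging to two clusters is thus ranked once, in its richest parent, exactly as in the prescription above.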
Fig.~\ref{fig:prio_rank} displays the relation between the priority flag and the galaxy rank in the red-sequence, as a function of cluster richness. A low priority flag indicates high targeting priority. We ensured the 3 highest-probability objects in the red-sequence are prioritized regardless of the cluster richness, in order to maximize the number of confirmed clusters in the sample. This simple scheme ensures higher prioritization of rich clusters and galaxies relevant to cluster confirmation. A final step applies a hard cut to the priority flag at the value of 80, except in the {\tt eboss3} chunk where this threshold is set at 33 (see App.~\ref{app:eboss3}). This change is motivated by the higher density of RASS faint sources in this area, which lies close to the deep polar region of the X-ray survey. \begin{figure} \includegraphics[width=84mm]{images/priority-rank_plot.pdf} \caption{Scheme applied to assign a priority flag to red-sequence members in a cluster depending on its richness $\lambda_{OPT}$ (the dotted line is the one-to-one relation). Members in rich clusters (lower curve, plain black) are assigned lower priority flags, ensuring the highest fraction of targeted objects. The red-dashed line stands for very poor clusters ($\lambda_{OPT} < 10$). These candidates are not included in SPIDERS but considered in the SEQUELS selection (Sect.~\ref{sect:targeting}).} \label{fig:prio_rank} \end{figure} \subsubsection[]{XCLASS-RedMapper clusters} XCLASS-RedMapper clusters are targeted in a way very similar to CODEX clusters, with two exceptions: no cluster richness-based selection is applied; and the conversion from {\tt TARGETSELECTED} to the actual priority flag is computed regardless of the richness and follows the lowest (plain black) curve in Fig.~\ref{fig:prio_rank}, i.e.~all XCLASS clusters are treated equivalently to rich CODEX clusters.
These two differences stem from the secure galaxy cluster nature of these objects (bona-fide extended sources in X-ray, compare both panels of Fig.~\ref{fig:example_codex_xclass}) and ensure higher internal prioritization of the overall less numerous XCLASS targets in the SPIDERS survey. \subsection[]{Tiling forecasts\label{sect:tiling}} The pool of targets along with the priority flag is submitted to the eBOSS tiling algorithm. Given their relative sparsity (less than $10 \deg^{-2}$), and because the high-level requirement for SPIDERS is a high completeness level in spectroscopic confirmation of clusters, SPIDERS cluster targets are assigned first among other eBOSS targets. Fig.~\ref{fig:eboss1_tilingforecasts} shows the expected number of spectroscopic redshifts of red-sequence galaxies per CODEX cluster in the {\tt eboss1} chunk, based on the plate tiling. Fig.~\ref{fig:eboss3_tilingforecasts} shows the equivalent for the {\tt eboss3} chunk, which is obtained with a lower priority threshold due to the increased depth of RASS in this area of sky (App.~\ref{app:eboss3}). Within the {\tt eboss1} area, the number of spectroscopic redshifts in the red-sequence increases from 2 (median) prior to observations to about 10 (median) after observation, while in the {\tt eboss3} area this number amounts to 8. Fig.~\ref{fig:eboss13_targetdistributionredshift} shows the photometric redshift distribution of SPIDERS\_RASS\_CLUS targets in both chunks, i.e. the photometric redshift of the galaxy cluster they are attached to. Noticeably, SPIDERS will increase the fraction of cluster members with redshifts up to $z\sim 0.6$. The deeper X-ray data in chunk {\tt eboss3} enable the targeting of more distant clusters. \begin{figure} \includegraphics[width=84mm]{images/targetdistribution_eboss1.pdf} \caption{Number of clusters that will have more than $n$ spectroscopic members in their red-sequence after full observation of the {\tt eboss1} chunk.
The dot-dashed curve shows the situation prior to the start of SPIDERS (with a median of 2 spectroscopic redshifts per cluster), while the solid red curve is a forecast based on eBOSS tiling results (median of 10 spectroscopic redshifts per cluster, including possible interlopers). These median numbers change from 2 to 4 when considering only high-redshift candidates (thin lines, as compared to thick lines representing objects at all redshifts).} \label{fig:eboss1_tilingforecasts} \end{figure} \begin{figure} \includegraphics[width=84mm]{images/target_distribution_redshift_eboss1-3.pdf} \caption{Redshift distribution of SPIDERS\_RASS\_CLUS targets in all clusters in chunk {\tt eboss1} (656 systems) and chunk {\tt eboss3} (943 systems). Targets submitted to the fiber allocation algorithm are shown with a black line, tiled targets with a blue line. The redshift distribution of red-sequence members with a redshift prior to SPIDERS observations is shown as a red dashed line.} \label{fig:eboss13_targetdistributionredshift} \end{figure} Figure~\ref{fig:eboss1_targetdistance} shows (for the {\tt eboss1} chunk) the pairwise separation between targets submitted to the eBOSS tiling algorithm, and the separation between cluster targets that will be assigned a fibre. Targets separated by less than the fiber collision radius (collided targets) cannot both be observed on a single plate. This explains the jump in the histogram of tiled targets at $62\arcsec$ separation. Multiple plate overlaps resolve a fraction of those collisions and enable access to smaller separations, hence a non-zero completeness for the collided set of SPIDERS targets. Fig.~\ref{fig:eboss1_targetsky} shows the distribution of targets relative to their parent cluster centre. A substantial fraction of central galaxies already have a redshift determined from previous observations, and SPIDERS will observe almost all remaining cluster central galaxies.
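The collision criterion can be made explicit with a short sketch: pairs of targets separated by less than $62\arcsec$ cannot both receive a fiber on a single plate. The haversine-based separation below is a generic implementation, not the actual eBOSS tiling code.

```python
import math

FIBER_COLLISION_ARCSEC = 62.0  # eBOSS fiber collision radius

def angular_separation_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation between two sky positions (degrees in, arcsec out),
    using the haversine formula, which is numerically stable at small angles."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    sin_ddec = math.sin((dec2 - dec1) / 2.0)
    sin_dra = math.sin((ra2 - ra1) / 2.0)
    a = sin_ddec ** 2 + math.cos(dec1) * math.cos(dec2) * sin_dra ** 2
    return math.degrees(2.0 * math.asin(math.sqrt(a))) * 3600.0

def collided_pairs(targets):
    """Return index pairs of targets closer than the fiber collision radius;
    such pairs cannot both be observed on a single plate."""
    pairs = []
    for i in range(len(targets)):
        for j in range(i + 1, len(targets)):
            sep = angular_separation_arcsec(*targets[i], *targets[j])
            if sep < FIBER_COLLISION_ARCSEC:
                pairs.append((i, j))
    return pairs

# Hypothetical targets (RA, Dec in degrees): only the first pair collides
print(collided_pairs([(10.0, 0.0), (10.01, 0.0), (10.05, 0.0)]))  # [(0, 1)]
```

Plate overlaps relax this constraint: a collided pair can be split between two overlapping plates, which is why the completeness below $62\arcsec$ is non-zero rather than strictly null.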
Because of the fiber collisions, a noticeable dip in completeness is expected at clustercentric distances $\lesssim 1\arcmin$, and the completeness increases with increasing radial distance. \begin{figure} \includegraphics[width=84mm]{images/targetdistances_eboss1.pdf} \caption{Histograms of pairwise separations ($2\arcsec$ bins) between SPIDERS submitted targets (black curve) and between tiled targets in SPIDERS clusters (lower curve); the top panel shows the target allocation completeness. The inset shows a zoom over the first $\sim 80\arcsec$ and the dashed vertical line indicates the eBOSS fiber collision radius ($62\arcsec$).} \label{fig:eboss1_targetdistance} \end{figure} \begin{figure*} \begin{tabular}{c} \includegraphics[width=\linewidth]{images/target_distribution_sky_plot_5-zmin0.0-0.3_eboss1.pdf} \\ \includegraphics[width=\linewidth]{images/target_distribution_sky_plot_6-zmin0.3-0.8_eboss1.pdf} \end{tabular} \caption{Sky distribution of SPIDERS\_RASS\_CLUS targets in all clusters in chunk {\tt eboss1} (656 systems), displayed relative to the cluster ``optical'' centre (RA\_OPT, DEC\_OPT) and split in two photometric redshift bins (upper and lower row). The left panel shows the distribution of submitted (black), tiled (blue) targets and those with a redshift known prior to SPIDERS (red). The right panel shows those distributions as a function of distance to the cluster centre. \emph{``Would have z''} corresponds to the sum of submitted and known-redshift targets. \emph{``Will have z''} corresponds to the sum of tiled and known-redshift targets.} \label{fig:eboss1_targetsky} \end{figure*} \subsection[]{Illustration: the SEQUELS-DR12 pilot} \label{sect:sequels-pres} A total of 918 CODEX and 28 XCLASS-RedMapper clusters lie within the rectangle $(120 \leq RA \leq 210)$ and $(45 \leq DEC \leq 60)$, the target selection area for SEQUELS (Sect.~\ref{sect:description_sequels}).
The selection of galaxy cluster candidates in the SEQUELS pilot survey slightly differs from the main SPIDERS selection. All CODEX clusters have been considered regardless of their richness, and a richness cut $\lambda_{XC} \geq 20$ has been applied to select XCLASS-RedMapper clusters. The galaxy targeting strategy in SEQUELS is similar to, but not identical to, the final SPIDERS targeting algorithm. The two differences are: \begin{itemize} \item a cut in {\tt FIBER2MAG\_I} set at 21.0 instead of 21.2 \item a target list being trimmed at a priority $\leq 50$ instead of $\leq 80$ ($\leq 33$ in the case of the {\tt eboss3} chunk) \end{itemize} We show in Fig.~\ref{fig:sequels_clusters} the sky distribution of CODEX clusters in the SEQUELS area. Those having at least one new redshift from SEQUELS observations are shown with colours. Within SDSS Data Release 12, 230 of them are completely observed (i.e.~all tiled targets have been acquired) and 121 are pending completion. Throughout this paper we will illustrate the procedure envisaged to build the SPIDERS cluster catalogue using this sample of 351 galaxy clusters in the framework of Data Release 12. As shown in Fig.~\ref{fig:lxzdist}, this illustration sample is representative of the complete SPIDERS sample. \begin{figure} \includegraphics[width=84mm]{images/radecCODEX_unobs_partialobs_fullobs.pdf} \caption{Sky distribution of SPIDERS/CODEX clusters in the SEQUELS pilot area. Each point is one CODEX cluster (918 objects, no cut on richness). Coloured points correspond to clusters within (at least) one SEQUELS-DR12 plate. Blue are "fully observed" clusters (230/918): the number of obtained SEQUELS-DR12 spectra equals the number of tiled targets.
Red are "partially observed" (121/918) and missing redshifts will be obtained in the course of the SPIDERS survey.} \label{fig:sequels_clusters} \end{figure} \section[]{Analysis steps: from SPIDERS spectra to cluster properties} \label{sect:analysis} SPIDERS observations deliver spectra of sources identified as red-sequence members in CODEX and XCLASS X-ray clusters. This section describes the steps needed to reach the primary goals of SPIDERS, namely confirmation and redshift determination of X-ray selected galaxy clusters. This procedure is illustrated throughout this section with results from the SEQUELS-DR12 sample of CODEX candidate clusters, made of 351 objects in total. The algorithms presented here are prototypical and adapted to this illustration sample. They will benefit from developments in the course of the SPIDERS survey. \subsection[]{Data reduction} SEQUELS-DR12 data are processed identically to BOSS data, namely using the {\tt idlspec2d} routines \citep{dawson2016}. Redshifts and classifications of sources are obtained after fitting a set of templates to the reduced spectra \citep{bolton2012}. Fits excluding quasar templates ("{\tt \_NOQSO}" values) provide reliable redshifts for targets known to be galaxies. The final SPIDERS data reduction and spectral classification will rely on the eBOSS improved pipeline developments and be backward compatible with previous BOSS data. \subsection[]{Spectra and redshift collection} In contrast to other target classes in BOSS and eBOSS, each SPIDERS cluster is a collection of spectroscopic targets (potential cluster members), rather than a proper target itself. Redshifts are collected in the vicinity of a (candidate) cluster, and listed while keeping track of relevant associated information (magnitude, photometric and spectroscopic flags, etc.). Fig.~\ref{fig:example_image_cluster} and Fig.~\ref{fig:example_image_cluster_inset} display 3-colour $gri$ images of a CODEX cluster observed and confirmed in SEQUELS-DR12.
The overlays correspond to: the CODEX \emph{cluster} catalogue (large cyan circle), the CODEX \emph{cluster photometric member} catalogue (red-sequence members, small cyan circles), the list of \emph{submitted targets} (gold circles), the list of \emph{tiled targets} (magenta triangles) and \emph{spectroscopic redshifts} from SDSS (red and orange squares, the latter correspond to SDSS data up to DR11). If a target was observed multiple times over the course of the SDSS programs, the eBOSS {\tt SPECPRIMARY} flag is considered and higher priority is given to higher signal-to-noise spectra. For the SEQUELS-DR12 sample that is used as an illustrative example in this paper, only galaxies identified as members of the cluster red-sequence ($p_{\rm mem} \geq 5\%$) are taken into account. Since these targets are all galaxies, the "{\tt NOQSO}" values are considered (i.e.~{\tt Z\_NOQSO, CLASS\_NOQSO, ZWARNING\_NOQSO}), except in the case of redshifts whose origin is SDSS-I/II, for which we use the standard {\tt Z, CLASS, ZWARNING} values. The final SPIDERS data collection procedure will improve on specific points, in particular by investigating the benefits of including galaxies with a spectroscopic redshift excluded from the RedMapper red-sequence (because of the colour selection, or cluster-centric distance cuts, etc.). While this may prove advantageous in confirming the cluster redshift, their selection is more heterogeneous and more difficult to trace back. \begin{figure*} \includegraphics[width=\linewidth]{images/1_4601_gri_regions_light.pdf} \caption{$12\arcmin \times 12\arcmin$ $(g,r,i)$ composite image of CODEX 1\_4601 at R.A.\,=\,12h22m05.3s, Dec\,=\,+$45\degr$18m36s, $z_{\lambda, OPT}=0.27$, $\lambda_{OPT}=47.2$, as observed and confirmed by SPIDERS at $z_{spec}=0.2630 \pm 0.0009$ (see Fig.~\ref{fig:example_diagnostic}). The 2~arcmin large cyan circle is centred on the cluster "optical" centre.
The original X-ray position is marked by the thick blue cross ($\sim 1$ arcmin to the north-west). Cyan circles indicate red-sequence members, numbers below correspond to their {\tt fiber2mag\_i} magnitudes and membership probability $p_{\rm mem}$. Gold circles are SEQUELS submitted targets, the number in parentheses is the priority (1 stands for SEQUELS AGN, SPIDERS\_RASS\_AGN). Orange boxes are SDSS spectroscopic redshifts. Text above indicates the best-fit type (GAL, QSO, STA for galaxy, quasar and star templates) with associated {\tt Z}/{\tt ZWARNING}. The "NOQSO" fits are preferred when available (i.e.~for all SDSS-BOSS data). Green boxes are targets with the {\tt EBOSS\_TARGET0} bit set to 21, i.e.~SPIDERS\_RASS\_CLUS targets. Tiled targets in SEQUELS appear as magenta triangles. The large yellow circle shows the $R_{200c} = 1.3$~Mpc radius of the cluster as derived from X-ray data. } \label{fig:example_image_cluster} \end{figure*} \begin{figure} \includegraphics[width=84mm]{images/1_4601_gri_regions_zoom.pdf} \caption{Zoom over the central part of Fig.~\ref{fig:example_image_cluster}, centred on the cluster "optical" position. The large cyan circle has a radius 2~arcmin (see legend of Fig.~\ref{fig:example_image_cluster} for explanations on symbols.)} \label{fig:example_image_cluster_inset} \end{figure} \subsection[]{Automatic membership} Most surveys of X-ray galaxy clusters rely on a final validation by one or several trained astronomers based on spectroscopic redshifts of individual galaxies \citep[e.g.][for recent applications]{guzzo2009,adami2011}. This is needed in order to disentangle dubious cases, carefully inspect interlopers and members and classify the reliability of the cluster redshift. Limited manpower restricts the number of SPIDERS galaxy clusters that can be visually screened; however, this can be alleviated by running an automatic procedure in the first place.
This algorithm must be able to separate the secure and easy cases, only requiring quick eyeballing, from the more difficult ones demanding deeper inspection. In the former case, the automatic procedure must be able to address the membership of red-sequence galaxies. In order to account for the variety of cluster masses, physical states, richnesses and redshifts in the sample, we decided to adopt a broad approach, preparing for the visual inspection of every individual cluster. In the following, we demonstrate its main features and its applicability with the SEQUELS-DR12 sample. Our procedure runs on each galaxy cluster individually, based on the list of red-sequence members associated with a spectroscopic redshift (see above). The bi-weight average \citep[][]{beers1990} of those $N_{zspec,0}$ redshifts provides the starting point (first guess) of an iterative clipping procedure. It performs an initial rejection of members with velocity offsets greater than $5000$~km/s (relative to this first-guess mean redshift). The bi-weight average of the resulting $N_{zspec,1}$ potential members is computed. An estimate of the velocity dispersion \citep[][]{beers1990} is also computed and results from the bi-weight variance (if $N_{zspec,1} \geq 15$) or the gapper estimator (if $N_{zspec,1} < 15$). Objects lying further away than 3 times the velocity dispersion from the average velocity are rejected ("3-$\sigma$ clipping"). This procedure is iterated until convergence, or stops after 10 steps. The remaining objects are called members ($N_{zspec,k} = N_{\rm mem}$, $k \leq 10$). In the course of the iterative procedure described above, several cases may arise: \begin{itemize} \item $N_{zspec,0}<3$: the cluster is left for visual inspection \item $N_{zspec,1}=0$, i.e.~the initial 5000 km/s clipping rejected all members: the procedure stops, a flag is issued.
This may correspond to the case in which groups of galaxies are too far from each other in velocity space, for instance in case of several distinct structures along the line of sight. \item $0<N_{zspec,i}<3$, i.e.~only 1 or 2 members are left after $i$ steps: the iteration process stops and returns the member list without estimating either the mean or the velocity dispersion. \item $N_{zspec,i}=0$, i.e.~no member is left after $i$ steps: a flag is issued indicating failure of the $\sigma$-clipping method. \item $N_{zspec,k} \geq 3$: the process successfully converges, a cluster redshift is estimated from the biweight-average of the $N_{\rm mem}$ galaxies and the biweight-variance (or gapper estimator if $N_{\rm mem}<15$) serves as an estimate for the velocity dispersion. \end{itemize} Fig.~\ref{fig:example_diagnostic} provides a detailed illustration of the output of the procedure in a successful case, extracted from the SEQUELS-DR12 sample. For this cluster, one of the 21 members of the red-sequence with a spectroscopic redshift was flagged as an interloper; it has $p_{\rm mem}=0.85$. The cluster spectroscopic and photometric redshifts are compatible within their $1 \sigma$ uncertainty. We note the presence of a Seyfert 1 galaxy located $\sim 1\arcmin$ to the west of the cluster core, possibly contaminating the X-ray emission of the galaxy cluster (as hinted by the ROSAT soft X-ray contours). The rest-frame velocity of this object relative to the cluster redshift is above 4000~km/s, which justifies its exclusion from the dynamical analysis of the cluster. We discuss and model X-ray AGN contamination later in this study (see App.~\ref{app:selbias}).
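As an illustration only, the iterative procedure above can be sketched as follows. The function names are hypothetical, the bi-weight location and gapper estimators of \citet{beers1990} are implemented in simplified form (the actual analysis uses ROSTAT), and the special handling of the $0<N_{zspec,i}<3$ case is omitted:

```python
import numpy as np

C_KMS = 2.99792458e5  # speed of light [km/s]

def biweight_location(z, c=6.0):
    """Robust bi-weight average of redshifts (simplified, after Beers et al. 1990)."""
    z = np.asarray(z, dtype=float)
    med = np.median(z)
    mad = np.median(np.abs(z - med))
    if mad == 0:
        return med
    u = (z - med) / (c * mad)
    good = np.abs(u) < 1
    num = np.sum((z[good] - med) * (1 - u[good] ** 2) ** 2)
    den = np.sum((1 - u[good] ** 2) ** 2)
    return med + num / den

def gapper_sigma(v):
    """Gapper estimate of the velocity dispersion, suited to small samples."""
    v = np.sort(np.asarray(v, dtype=float))
    n = len(v)
    i = np.arange(1, n)
    gaps = np.diff(v)
    return np.sqrt(np.pi) / (n * (n - 1)) * np.sum(i * (n - i) * gaps)

def cluster_membership(z, v_cut=5000.0, n_sigma=3.0, max_iter=10):
    """First-guess bi-weight redshift, initial |v| < 5000 km/s rejection,
    then iterated 3-sigma clipping (up to max_iter steps)."""
    z = np.asarray(z, dtype=float)
    if len(z) < 3:
        return None  # N_zspec,0 < 3: left for visual inspection
    z_cl = biweight_location(z)
    v = C_KMS * (z - z_cl) / (1.0 + z_cl)
    members = np.abs(v) < v_cut
    for _ in range(max_iter):
        if members.sum() < 3:
            return None  # too few members left: flag for inspection
        z_cl = biweight_location(z[members])
        v = C_KMS * (z - z_cl) / (1.0 + z_cl)
        sigma = gapper_sigma(v[members])  # (bi-weight variance if N >= 15)
        new = members & (np.abs(v) < n_sigma * sigma)
        if np.array_equal(new, members):
            break  # converged
        members = new
    return z_cl, sigma, members
```

On a toy cluster at $z \simeq 0.25$ with two interlopers at $z=0.18$ and $z=0.32$, the initial 5000~km/s cut removes both interlopers and the clipping then converges on the core members.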
\begin{figure*} \includegraphics[width=\linewidth]{images/zspecplot_1_4601_2015-02-16-DR12_paper_NOQSO.pdf} \caption{Example diagnostic plots output by the automated membership procedure, for one particular cluster in the SEQUELS-DR12 sample (1\_4601, $z_{\lambda, OPT}=0.27$, $\lambda_{OPT}=47.2$, see Fig.~\ref{fig:example_image_cluster} and~\ref{fig:example_image_cluster_inset}). \emph{{\bf Top left} -- "redshift/probability plot":} offset of all red-sequence members with spectroscopic redshifts (21 objects, black diamonds) relative to the RedMapper photometric redshift ($z_{\lambda,OPT}$, applicable to all members). Error bars (often too small to be visible) are displayed for each data point. The x-axis shows $\ln(p_{\rm mem})$, the logarithmic membership probability of each red-sequence member, as computed by RedMapper. The photo-z uncertainty is represented by the two horizontal dotted lines. Blue squares are selected spectroscopic members (20 objects). A grey bar indicates the initial 5000 km/s selected range before iterative clipping. The blue horizontal bar represents the spectroscopic redshift value (bi-weight average) calculated with the spectro-members. \emph{{\bf Top right} -- "sky location of members":} All spectroscopic red-sequence members (21 objects) are displayed in a $12 \times 12$~arcmin projected map. Blue symbols are identified members. The centre of the map corresponds to the optical cluster centre. \emph{{\bf Bottom left} -- "velocity-distance plot":} this plot considers only identified members (20 objects). The cluster redshift (bi-weight average) is taken as a reference and velocity offsets are indicated on the y-axis. Error bars on the individual velocities are represented as vertical lines (invisible in this figure). The x-axis displays the projected distance to the cluster centre. The blue horizontal line shows the zero offset, dashed lines show the velocity dispersion value ($\pm 1 \sigma$). The colour-code indicates blue-/redshifted objects.
The bi-weight average redshift ZBIWT and the bootstrap uncertainty on this value are indicated in the panel. The velocity dispersion estimate (Bi-weight variance if $N_{spec} \geq 15$, Gapper estimate if $N_{spec}<15$) is given along with its uncertainty (see text). \emph{{\bf Bottom right} -- "sky projection of spectroscopic members":} similar to the above panel, but only for selected spectroscopic members and reproducing the colour code (blue-/redshifted objects). Circles indicate projected physical distances to the cluster centre.} \label{fig:example_diagnostic} \end{figure*} The automatic procedure delivers a redshift for 219 out of the 351 candidates with $N_{\rm mem} \geq 3$. We note that our choice for an initial 5000~km/s rejection criterion is more inclusive than other studies relying instead on a lower threshold, usually 3000~km/s. We checked that changing to this value provides similar results, except in cases requiring human decision. For 194 systems, the final cluster redshifts agree within $10^{-3}$ relative difference. The remaining systems are complex or poor, and are either discarded or refined during the visual confirmation (as described in the next section). \subsection[]{Manual steps and refinements} Validation of the galaxy cluster and final assessment of its redshift are achieved through visual screening of the outcome of the automatic procedure. This process should allow a number of refinements inaccessible to algorithms. In particular, the inspection of individual galaxy spectra may refine or discard the result of the eBOSS fitting algorithm, based on e.g.~the knowledge of the cluster photometric redshift and the probability $p_{\rm mem}$ that the object belongs to the cluster. The object can therefore be added or removed from the list upon which the cluster validation is performed.
Inclusion or removal of members as well as particular weights given to members (e.g.~depending on their $p_{\rm mem}$ value, or in the case of a BCG) help in deciding the validation status and mean redshift of the cluster. Line-of-sight projection effects not disentangled by the photometric membership algorithm can also be identified and split into several components. Finally, a comment can be set by the inspector. We anticipate such inspection to be collaborative; final decisions should be taken based on the judgement of independent inspectors. We illustrate the validation process with the SEQUELS-DR12 sample of 351 clusters. Since these clusters will be re-inspected within the complete SPIDERS Tier-0 survey with more redshifts, only one inspector participated in this exercise. Table~\ref{tab:visual_result} shows that a large fraction of the algorithm decisions are confirmed by visual screening, while 10 candidates were split into multiple distinct components, 5 were discarded and 15 promoted. In 30 cases no spectroscopic redshifts were found in the red-sequence, leaving the cluster status as non-validated (these clusters are mostly high-redshift $z \gtrsim 0.6$ candidates whose members are too faint to be spectroscopically observed). Fig.~\ref{fig:visual_zzplot} displays the same result as a function of the automatic and final cluster spectroscopic redshift. \begin{table} \centering \caption{\label{tab:visual_result}Result of the visual inspection of 351 CODEX cluster candidates in SEQUELS-DR12, split according to the outcome of the automatic algorithm. Note that only 230/351 candidates are completely observed within SEQUELS-DR12, i.e.~all of their tiled targets have been acquired within the program; their statistics are indicated in parentheses.
"Non validated" does not necessarily mean that no cluster exists; it may result from too low a number of spectra in the red-sequence.} \begin{tabular}{@{}lcc@{}} \hline {\bf Auto-validation status:}& {\bf Validated} & {\bf Pending} \\ \hline \hline Visual inspection status: & & \\ - Single-component, validated & 205 (119) & 15 (11) \\ - 2-component split & 9 (7) & 1 (1) \\ - Non validated & 5 (2) & 86 (65) \\ - No spec-z (non validated) & - & 30 (26) \\ \hline \end{tabular} \end{table} \begin{figure} \includegraphics[width=84mm]{images/compa_visu_2015-02-16-DR12.pdf} \caption{Result of the visual inspection (see Table~\ref{tab:visual_result}) as a function of the cluster spectroscopic redshift, shown with outputs from the automatic procedure (x-axis) and of the visual procedure (y-axis). Green points (and error bars) stand for clusters initially validated by the algorithm and confirmed by visual inspection (205). Blue triangles are clusters recovered by visual inspection (15) and orange triangles are systems split in 2 components (9 initially validated as single-component and 1 initially discarded). Red diamonds are candidates initially validated and discarded by visual inspection (5).} \label{fig:visual_zzplot} \end{figure} \subsection[]{Redshift and velocity dispersion estimates} \label{sect:zveldisp} \subsubsection[]{Cluster redshift estimates} The final cluster redshift estimate (hereafter $z_{\rm BIWT}$, or simply $z$) is based on the bi-weight average \citep[][]{beers1990} of all red-sequence galaxies selected as cluster members, in the cases where $N_{\rm mem} = 3$ or more members are identified. In cases with only 1 or 2 members, the redshift is set manually, typically equal to that of the BCG. When $N_{\rm mem} \geq 3$, the statistical uncertainty $\Delta_z$ of the cluster redshift is computed by bootstrap resampling of the $N_{\rm mem}$ velocities.
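The bootstrap computation of $\Delta_z$ can be sketched as follows. The helper functions are hypothetical (not the actual pipeline code), and for brevity the simple mean stands in for the bi-weight average; any robust estimator can be passed as \texttt{estimator}:

```python
import numpy as np

C_KMS = 2.99792458e5  # speed of light [km/s]

def bootstrap_dz(z_members, estimator=np.mean, n_boot=1000, seed=0):
    """Bootstrap uncertainty on the cluster redshift: resample the N_mem
    member redshifts with replacement, re-estimate the cluster redshift
    each time, and take the standard deviation of the results."""
    rng = np.random.default_rng(seed)
    z_members = np.asarray(z_members, dtype=float)
    n = len(z_members)
    boots = [estimator(rng.choice(z_members, size=n, replace=True))
             for _ in range(n_boot)]
    return float(np.std(boots))

def standard_dz(z_cluster, sigma_v, n_mem):
    """Common alternative estimate Delta_z = sigma (1+z) / (c sqrt(N_mem)),
    with sigma_v the velocity dispersion in km/s (cf. Ruel et al. 2014)."""
    return sigma_v * (1.0 + z_cluster) / (C_KMS * np.sqrt(n_mem))
```

For a cluster at $z=0.3$ with $\sigma_v = 600$~km/s and $N_{\rm mem}=20$, both routes give $\Delta_z \approx 6 \times 10^{-4}$, i.e.~$\Delta_z/(1+z) \lesssim 10^{-3}$.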
Fig.~\ref{fig:compa_deltazestimates} compares these uncertainties with a more common estimator \citep[see e.g.][their Eq.~4]{ruel2014}, involving the standard deviation of velocities ($\sigma$): \begin{equation} \label{eq:ruelzerr} \Delta_{z} {\rm (standard)} = \frac{1}{c} \frac{\sigma \cdot (1+z)}{\sqrt{N_{\rm mem}}} \end{equation} with $\sigma$ given by the bi-weight variance estimator if $N_{\rm mem} \geq 15$ and by the gapper estimator otherwise (see Sect.~\ref{sect:zveldisp}). The two estimates are in good agreement with each other. In almost all cases, the bootstrap technique provides slightly more conservative uncertainty estimates than the standard one and we consider the former as our baseline redshift error. \begin{figure} \includegraphics[width=84mm]{images/compa_boot_normal.pdf} \caption{Comparison of statistical uncertainties on the cluster spectroscopic redshift. The x-axis shows the bootstrap error (our baseline value throughout this work) while the y-axis shows the result from Eq.~\ref{eq:ruelzerr}, involving an estimate of the standard deviation of velocities. The dashed line shows equality, dotted lines represent a factor 2 between plotted quantities.} \label{fig:compa_deltazestimates} \end{figure} The typical cluster redshift statistical uncertainty is $\Delta_z/(1+z) \lesssim 10^{-3}$, a factor 10 lower than the typical cluster photometric redshift error, with a median number of 10 members. Figure~\ref{fig:zrm_vs_zspec} compares the photometric and spectroscopic redshift estimates for each of the validated clusters: the very good agreement between them is not surprising \citep[e.g.][]{rykoff2014}, although this comparison emphasizes a noticeable improvement brought by spectroscopic redshifts at $z \geq 0.2 - 0.3$, both in terms of accuracy and precision. The theoretical quantity of interest for the SPIDERS clusters is the redshift of the halo in which the X-ray gas and the galaxies are hosted. 
An uncertainty of $\Delta_z = 10^{-3}$ on the redshift of an object at $z=0.3$ corresponds to a velocity offset of 230~km/s, hence a few times smaller than the typical velocity dispersion of a galaxy cluster. It also corresponds to $\sim 4$~Mpc comoving radial distance, hence slightly larger than the typical size of a galaxy cluster. The statistical uncertainties on cluster redshifts are shown in Fig.~\ref{fig:zerr_vs_x}, as a function of redshift, cluster richness and cluster X-ray luminosity, and colour-coded by the number of members $N_{\rm mem}$ entering their computation. As expected, higher values of $N_{\rm mem}$ lead to lower uncertainties on the redshift estimates, which favours low-redshift clusters. Because of the various selection effects involved in detecting clusters (flux-limit in X-rays, red-sequence in optical), trends in the redshift uncertainty versus richness or luminosity do not appear clearly. \begin{figure} \includegraphics[width=84mm]{images/zRM_vs_zspec_splitnmembers.pdf} \caption{Comparison between the photometric redshift $z_{\lambda}$ output of RedMapper and the spectroscopic redshift of the 230 CODEX clusters validated by visual inspection in SEQUELS-DR12. The spectroscopic redshift error is the bootstrap uncertainty. The colour code reflects the number of spectroscopic members retained in validating the cluster. The histogram in the inset shows the distribution of spectroscopic redshifts, with the same colour coding.} \label{fig:zrm_vs_zspec} \end{figure} \begin{figure*} \includegraphics[width=\linewidth]{images/booterror_vs_all.pdf} \caption{Distribution of statistical errors $\Delta_z/(1+z)$ in the cluster spectroscopic redshift among the SEQUELS-DR12 sample of validated clusters. These uncertainties are estimated through bootstrap resampling of the $N_{\rm mem}$ redshifts identified as cluster members.
In particular they do not include additional sources of uncertainties due to potential systematic effects, e.g.~presence of substructures, inhomogeneous sampling, etc. The right panel is a histogram of sources binned by redshift uncertainty.} \label{fig:zerr_vs_x} \end{figure*} \subsubsection[]{Radial velocity dispersions} Once cluster members are identified, one estimates their line-of-sight velocities $v_i$, defined as \citep{danese1980}: \begin{equation} \frac{v_i}{c} = \frac{z_i - z_{\rm BIWT}}{1+z_{\rm BIWT}} \end{equation} We use two of the most common estimators for the dispersion of velocities, namely the "gapper" ($\sigma_{\rm GAP}$) and the "bi-weight variance" ($\sigma_{\rm BWT}^2$). We refer to \citet{beers1990} for details of their computation and the algorithm\footnote{We used a Fortran version of ROSTAT adapted to our purposes.} used for these calculations. We refer the reader to \citet[][]{ruel2014} for a discussion of measurements of velocity dispersions in the regime of low numbers of spectroscopic members, in the context of galaxy clusters selected by Sunyaev-Zeldovich effect in the South Pole Telescope data. Both estimators are computed for each cluster, although it is clear that a high enough number of members must enter the derivation to ensure robust measurements. Fig.~\ref{fig:gapper_vs_biwt} demonstrates the good agreement between the two measurements provided that $N_{\rm mem} \geq 15$. While the majority of $10 \leq N_{\rm mem} < 15$ clusters also lie on the one-to-one line in this figure, a number of them stand as outliers, possibly impacted by the presence of interlopers or substructures in their list of spectroscopic members. Evaluating the uncertainties and biases linked to cluster velocity dispersion measurements performed with a small number of spectroscopic members is a rather complex task.
It requires in particular an understanding of the selection and sampling processes leading to the list of members entering the catalogue. Ideally, one would want to design end-to-end simulations reproducing all of the steps described above, from the cluster selection in X-rays down to the calculation of velocity dispersions. Such procedures are feasible, for instance by combining N-body simulations and semi-analytical models \citep[e.g.][]{biviano2006,saro2013}. An alternative, simpler approach consists of resampling dense observations of clusters with high numbers of spectroscopic members and well-determined velocity dispersions $(\sigma^{true}_{\rm BWT})$, ensuring the target sampling reproduces that of SPIDERS. We follow this approach in the present work, bearing in mind the opportunities for further, more detailed developments. We resample the observations of the HIFLUGCS sample of clusters \citep{zhang2011}, imposing a limiting magnitude and a minimal fiber distance corresponding to SPIDERS observations and accounting for the mass and redshift distribution of clusters in SPIDERS. The corresponding procedure is fully described in \citet{zhang2016}. It leads for each HIFLUGCS cluster to 500 resampled realizations; each realization leads in turn to an estimate of an observed $\sigma^{obs}_{\rm BWT}$. Grouping results by the number of members $N_{\rm mem}$ remaining after resampling, we derive the average value and the spread in $\sigma^{obs}/\sigma^{true}$. This calculation therefore provides a baseline for the bias-correction and 1-$\sigma$ uncertainty on individual velocity dispersion measurements. We note that the HIFLUGCS galaxy redshift catalogue is a rather clean member galaxy input catalogue, likely almost free of interlopers. Uncertainties derived from the scatter in the down-sampling thus do not account for the effect of interlopers.
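The estimators and the down-sampling exercise can be sketched as follows. These are illustrative stand-ins only: a simplified bi-weight scale after \citet{beers1990}, and a plain random down-sampling that ignores the magnitude limit and fiber distance cuts applied in the actual procedure:

```python
import numpy as np

C_KMS = 2.99792458e5  # speed of light [km/s]

def los_velocities(z, z_cl):
    """Rest-frame line-of-sight velocities v_i = c (z_i - z_cl) / (1 + z_cl)."""
    return C_KMS * (np.asarray(z, dtype=float) - z_cl) / (1.0 + z_cl)

def biweight_scale(v, c=9.0):
    """Bi-weight 'sample variance' sigma_BWT (simplified, after Beers et al. 1990)."""
    v = np.asarray(v, dtype=float)
    n = len(v)
    med = np.median(v)
    mad = np.median(np.abs(v - med))
    u = (v - med) / (c * mad)
    good = np.abs(u) < 1
    num = np.sum((v[good] - med) ** 2 * (1 - u[good] ** 2) ** 4)
    den = np.sum((1 - u[good] ** 2) * (1 - 5 * u[good] ** 2))
    return np.sqrt(n * num) / abs(den)

def downsampling_bias(v_true, sigma_true, n_mem, n_real=500, seed=0):
    """Draw n_mem members (without replacement) n_real times from a densely
    sampled cluster, re-measure sigma each time, and return the mean and
    scatter of sigma_obs / sigma_true."""
    rng = np.random.default_rng(seed)
    ratios = np.array([biweight_scale(rng.choice(v_true, n_mem, replace=False))
                       / sigma_true for _ in range(n_real)])
    return ratios.mean(), ratios.std()
```

Grouping the ratios by $N_{\rm mem}$, as done with the HIFLUGCS realizations, then yields the bias-correction curve and its 1-$\sigma$ envelope.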
\begin{figure} \includegraphics[width=84mm]{images/gap-bwt_plot.pdf} \caption{Comparison of velocity dispersion estimates obtained from the Gapper method ($\sigma_{\rm GAP}$) and the biweight sample variance ($\sigma_{\rm BWT}$) for individual CODEX clusters in the SEQUELS-DR12 sample. Only systems validated with more than 10 and 15 spectroscopic members are displayed. There is good agreement between the two estimators, although outliers are present, indicative of badly determined velocity dispersions due to, e.g.~substructure or presence of interlopers (the number of members is indicated in brackets).} \label{fig:gapper_vs_biwt} \end{figure} \subsection[]{Catalogue production} \label{sect:catalogue_prod} The updated, accurate cluster spectroscopic redshifts serve as input to a new computation of X-ray cluster properties. For the CODEX subsample, this procedure follows the same route as when starting from photometric redshifts ($z_{\lambda}$). Details on the procedure can be found in \citet{mirkazemi2015}: assuming a cosmological model, ROSAT fluxes are converted into rest-frame $[0.1-2.4]$~keV luminosities and scaling relations allow an estimate of the cluster mass and typical radii $R_{500}$ and $R_{200}$. The typical uncertainty on the luminosities of CODEX clusters amounts to $\sim 35$\%, as computed from the Poissonian fluctuation of number counts in ROSAT data. As an illustration, Fig.~\ref{fig:lxzdist} highlights the position of the SEQUELS-DR12 confirmed clusters in the luminosity-redshift plot, along with the corresponding error bars. The XCLASS galaxy clusters benefit from high-quality X-ray data, thanks to the exquisite spatial and spectral resolution of XMM: the angular point-spread function FWHM is around $10-20 \arcsec$, depending on the off-axis angle of the cluster, and the spectral line spread function FWHM is around 100~eV at 1~keV energy.
X-ray surface-brightness profiles and spectra are therefore the primary observables from which cluster physical properties are derived. In addition to $\lesssim 10\%$ accurate bolometric luminosities, surface brightness-averaged temperatures of C1 clusters can be measured with relatively good accuracy ($\sim 15$\%), depending on the actual cluster temperature, the number of counts collected by the instruments and the uncertainties in background subtraction \citep[e.g.][]{clerc2014}. The \emph{eROSITA} data will be similar to the XMM data, although with a spatial resolution $\sim 1.5-2$ times lower. The methodology to compute X-ray cluster properties by combining SPIDERS spectroscopic redshifts and \emph{eROSITA} data is expected to lie between that of XCLASS clusters and CODEX clusters. \section[]{Results from SEQUELS-DR12 sample} \label{sect:sequels} Throughout this paper we illustrated the SPIDERS targeting strategy and plans for data analysis by means of the SEQUELS-DR12 pilot sample. We now elaborate on the use of such a sample of spectroscopically confirmed clusters, and present possible science applications with the perspective of the much larger, upcoming, SPIDERS sample. \subsection[]{Catalogue presentation} \subsubsection[]{The SPIDERS-CODEX clusters} The SEQUELS-DR12 sample consists of 230 validated CODEX systems, out of an initial set of 351 CODEX candidate clusters within the SEQUELS footprint \citep[][]{alam2015}. Among those 230 clusters, 137 are fully observed within SEQUELS-DR12 (i.e.~all tiled targets have received a fiber). Aside from the differences in target selection outlined in Sect.~\ref{sect:targeting}, this subsample offers a representative view of the expected, $\sim 20$ times larger, entire SPIDERS sample of clusters. Half of the validated clusters have more than 7 spectroscopic members (8 for completed clusters); this number increases with decreasing redshift, as shown in Fig.~\ref{fig:nmem_dist}.
Fig.~\ref{fig:areacurve} illustrates the X-ray sensitivity of the CODEX survey integrated over the footprint of the area considered for the present catalogue. The median sensitivity in the [0.5-2]~keV band is $\sim 10^{-13}$\,ergs\,s$^{-1}$\,cm$^{-2}$. \begin{figure} \includegraphics[width=84mm]{images/spiderspilot_areacurve.pdf} \caption{Effective area curve of the CODEX cluster survey, calculated over the footprint of the SPIDERS pilot area, expressed as a function of the X-ray flux sensitivity.} \label{fig:areacurve} \end{figure} The redshift distribution of clusters in bins of $\Delta_z = 0.04$ is shown in Fig.~\ref{fig:zrm_vs_zspec} and peaks at $z \sim 0.2$. A deficit is observed in one bin around $z=0.3$, which we attribute to a mixture of selection effects involving different redshift dependencies of the X-ray sensitivity and RedMapper efficiency, to the pre-existence of numerous redshifts in SDSS pre-SPIDERS data peaking below and above $z \sim 0.3$, and to sample variance. The X-ray properties of the 230 validated clusters were computed according to the updated redshift value, starting from the ROSAT counts \citep[e.g.][]{mirkazemi2015}. Their $L_X-z$ distribution is displayed in Figure~\ref{fig:lxzdist}. A caveat in the computation of X-ray properties relates to the 10 clusters split into two components after visual inspection. Only one X-ray detection is associated with the original CODEX candidate, and current data do not allow us to assign a flux to each of the components. In this work we considered only the primary component as the source of the X-ray emission and therefore discarded the 10 secondary components from the catalogue. Two points in Fig.~\ref{fig:lxzdist} are labelled with '(C)', indicating likely contamination of the X-ray measurement by a (possibly unrelated) point-source in RASS data. These clusters are optically poor ($\lambda_{\rm OPT} < 10$), hence are among the least reliable sources in the CODEX sample.
This class of sources is not targeted in the main SPIDERS program. \begin{figure*} \includegraphics[width=\linewidth]{images/nmem_complete_cumulplot.pdf} \caption{Cumulative distribution of CODEX clusters validated in SEQUELS-DR12, as a function of their number of spectroscopic members $N_{\rm mem}$. The right panel is for the case when all tiled targets within a cluster have been observed (completed observations) and is therefore more representative of the final outcome of SPIDERS. Coloured lines represent cases where redshift cuts are applied, as indicated in the legend together with the total number of objects. Dotted lines indicate the median number of spectroscopic members, and correspond to: 7.5/8.7 (all), 4.8/4.0 ($z>0.4$), 6.2/5.9 ($0.2<z<0.4$) and 11.7/13.0 ($z<0.2$) for all/completed clusters respectively.} \label{fig:nmem_dist} \end{figure*} \subsubsection[]{The SPIDERS-XCLASS clusters} The SEQUELS-DR12 sample contains three XCLASS-RedMapper clusters, as listed in Table~\ref{tab:xclassrmdr12}, two of them completely observed and ID~5117 still awaiting completion. One of the clusters (ID~157) is also found in the CODEX subsample. However, the higher quality of the X-ray data allows its flux, luminosity and temperature to be measured with much greater accuracy than the RASS permits. This system is in fact better known as Abell~851 (Table~\ref{tab:mcxc_matches}). The SPIDERS redshift, luminosity and $R_{500}$ values agree with those found in the literature \citep{piffaretti2011}. Our XMM-derived gas temperature is similar to the value of $T_{X,{\rm all}}= 5.7 \pm 0.5$~keV reported in \citet{mahdavi2013}. Our velocity dispersion estimate computed from 18 SPIDERS spectroscopic members is in agreement with the value of $\sigma_{v}=1067_{-96}^{+89}$~km/s derived by \citet{girardi2001} using 55 members, and with that of \citet{oemler2009}, $\sigma_{v}=1287$~km/s, using 101 members.
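For reference, the gapper and biweight velocity-dispersion estimators used throughout this work (listed in the catalogue as {\tt SCREEN\_CLUVDISP\_GAP} and {\tt SCREEN\_CLUVDISP\_BWT}) can be sketched in a few lines of Python. This is an illustrative implementation of the standard robust estimators, not the SPIDERS pipeline code:

```python
import numpy as np

def gapper_sigma(v):
    """Gapper velocity dispersion: sqrt(pi)/(n(n-1)) * sum_i i(n-i) g_i,
    with g_i the gaps between order statistics.  Robust for small samples."""
    v = np.sort(np.asarray(v, dtype=float))
    n = v.size
    gaps = np.diff(v)                                    # g_i = v_(i+1) - v_(i)
    weights = np.arange(1, n) * np.arange(n - 1, 0, -1)  # w_i = i * (n - i)
    return np.sqrt(np.pi) / (n * (n - 1)) * np.sum(weights * gaps)

def biweight_sigma(v, c=9.0):
    """Square root of the biweight sample variance (Tukey's biweight scale),
    down-weighting galaxies further than c * MAD from the median."""
    v = np.asarray(v, dtype=float)
    n = v.size
    med = np.median(v)
    u = (v - med) / (c * np.median(np.abs(v - med)))
    m = np.abs(u) < 1.0                                  # outliers get zero weight
    num = np.sum((v[m] - med) ** 2 * (1.0 - u[m] ** 2) ** 4)
    den = np.abs(np.sum((1.0 - u[m] ** 2) * (1.0 - 5.0 * u[m] ** 2)))
    return np.sqrt(n * num) / den

# Example: 20 member velocities drawn from a sigma = 870 km/s Gaussian
rng = np.random.default_rng(0)
members = rng.normal(0.0, 870.0, size=20)
print(gapper_sigma(members), biweight_sigma(members))
```

Both estimators converge to the Gaussian $\sigma$ for well-sampled systems while staying insensitive to a few outlying velocities; the catalogue then retains one of the two per system as {\tt SCREEN\_CLUVDISPBEST}.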
After masking point sources, XMM MOS1, MOS2 and PN spectra were extracted in the $[0.3-10]$~keV energy range and analysed with \textsc{XSpec} \citep{arnaudxspec}. An \textsc{APEC} model was fit to measure the cluster temperatures $T_X$, fixing the element abundance to $0.3\,Z_{\odot}$. The results shown in Table~\ref{tab:xclassrmdr12} involve a scaling relation linking $T_X$ and $R_{500c}$ \citep[][]{sun2009}, found by iteratively recomputing the temperature within the aperture. Fluxes were extracted on $[0.5-2]$~keV XMM images following a growth curve analysis \citep[][]{reiprichboehringer2002, suhada2012, clerc2014, pacaud2016} and converted into rest-frame $[0.1-2.4]$~keV luminosities assuming the best-fit APEC spectral model found earlier. This analysis, summarized in Table~\ref{tab:xclassrmdr12}, illustrates the gain in information brought by the XCLASS subsample of SPIDERS clusters (originating from XMM data) in comparison with the CODEX subsample (originating from the shallower RASS data). For instance, the relative uncertainty on the CODEX $[0.1-2.4]$~keV luminosity of ID~157 is $\sim 25\%$, while that of the XCLASS measurement is only a few per cent. However, given the low number of XCLASS clusters within the SEQUELS-DR12 demonstration sample, we do not consider them further and defer the interpretation of the full SPIDERS/XCLASS sample to a future study. \begin{table*} \centering \caption{\label{tab:xclassrmdr12}The XCLASS-RedMapper clusters validated in SEQUELS-DR12. Their absorbed fluxes, luminosities (in the rest-frame $[0.1-2.4]$~keV band) and temperatures are derived from XMM data and computed within the $[0-R_{500c}]$ radial range. The radius $R_{500c}$ is derived via a $T_X-R_{500c}$ scaling relation \citep{sun2009}. Line-of-sight velocity dispersions and uncertainties are estimated as described in Sect.~\ref{sect:zveldisp}. The IDs in the first two columns refer to the RedMapper (RM) and XCLASS (XC) catalogues respectively.
($^{*}$: is present in the SPIDERS-CODEX subsample and also known as Abell~851.)} \begin{tabular}{@{}ccccccccccc@{}} \hline RM & XC & R.A. & Dec & $z_{\rm spec}$ & $N_{\rm mem}$ & $\sigma$ & $f_X^{[0.5-2]}$ & $L_X^{[0.1-2.4]}$ & $T_X$ & $R_{500c}$ \\ ID & ID & (J2000) & (J2000) & & & km/s & ($10^{-14}$~ergs/s/cm$^2$) & ($10^{43}$~ergs/s) & (keV) & (Mpc) \\ \hline \hline 5117 & 1288 & 122.586 & 48.347 & $0.534 \pm 0.002$ & 11 & $1060 \pm 250$ & $28.7 \pm 0.9$ & $41.7 \pm 1.3$ & $4.8^{+0.5}_{-0.4}$ & 0.84 \\ 157$^{*}$ & 1678 & 145.754 & 46.992 & $0.408 \pm 0.002$ & 18 & $1270 \pm 210$ & $56.2 \pm 0.6$ & $45.9 \pm 0.5$ & $5.3 \pm 0.1$ & 0.93 \\ 15756 & 1451 & 170.746 & 46.988 & $0.478 \pm 0.004$ & 6 & - & $3.3 \pm 0.4$ & $3.9 \pm 0.5$ & $3.0_{-0.8}^{+1.6}$ & 0.67 \\ \hline \end{tabular} \end{table*} \subsubsection[]{The pilot sample catalogue} We provide in Table~\ref{tab:recap_samples} a condensed summary of the samples and catalogues discussed in this paper. The list of 230 SPIDERS/CODEX clusters is available online\footnote{https://data.sdss.org/sas/dr13/eboss/spiders/analysis/catCluster-SPIDERS\_RASS\_CLUS-v1.0.fits}. The contents of the columns in the catalogue are summarized in Table~\ref{tab:cat_cols}. Column names starting with {\tt SCREEN} result from visual inspection of the system. The luminosity and cluster radius are computed according to the cluster redshift assigned after visual inspection. Note the presence of 10 additional entries in this catalogue, flagged with {\tt COMPONENT} set to 2, corresponding to putative groups along the line of sight of a given cluster. \begin{table} \centering \caption{\label{tab:recap_samples}Summary of the number of objects in the different survey areas mentioned in this work. The number of candidates ("cand.") and validated ("val.") clusters in each sample are shown.
"MCXC" refers to the compilation of X-ray detected galaxy clusters by \citet{piffaretti2011}, as detailed in Sect.~\ref{sect:mcxc}.} \begin{tabular}{@{}lcccc@{}} \hline Area: & \multicolumn{2}{c}{BOSS imaging} & \multicolumn{2}{c}{SEQUELS-DR12} \\ \hline & cand. & val. & cand. & val. \\ \hline \hline SPIDERS/CODEX & 10\,415 & - & 351 & 230 \\ SPIDERS/XCLASS & 278 & - & 7 & 3 \\ MCXC & - & 718 & - & 24 \\ \hline \end{tabular} \end{table} \begin{table*} \centering \caption{\label{tab:cat_cols}Description of the columns entering the catalogue of validated SPIDERS/CODEX clusters (230 entries) in the SEQUELS-DR12 pilot area. The full table is available online (see text).} \begin{tabular}{@{}lllr@{}} \hline Column & Unit & Description & Example \\ \hline {\tt CLUS\_ID} & & SPIDERS/CODEX identification number & 1\_4601 \\ {\tt COMPONENT} & & Component index of the system & 1 \\ {\tt RA} & deg & CODEX X-ray detection right ascension (J2000) & 185.497 \\ {\tt DEC } & deg & CODEX X-ray detection declination (J2000) & 45.310 \\ {\tt RA\_OPT} & deg & CODEX optical detection right ascension (J2000) & 185.522 \\ {\tt DEC\_OPT} & deg & CODEX optical detection declination (J2000) & 45.404 \\ {\tt LAMBDA\_CHISQ\_OPT} & & Richness ($\lambda_{\rm OPT}$) of the CODEX optical detection & 47.2 \\ {\tt Z\_LAMBDA} & & Photometric redshift ($z_{\lambda}$) of the CODEX optical detection & 0.266 \\ {\tt Z\_LAMBDA\_ERR} & & Uncertainty on {\tt Z\_LAMBDA} & 0.009 \\ {\tt NMEM} & & Number of objects in the CODEX red sequence ($p_{\rm mem} > 5\%$) & 64 \\ {\tt NOKZ} & & Number of red-sequence members with a spectroscopic redshift & 21 \\ {\tt SCREEN\_CLUZSPEC} & & Galaxy cluster redshift & 0.2630 \\ {\tt SCREEN\_CLUZSPECBOOT} & & Bootstrap uncertainty on {\tt SCREEN\_CLUZSPEC} & 0.0009 \\ {\tt SCREEN\_CLUVDISP\_GAP} & km/s & Gapper estimate of the cluster velocity dispersion & 869.8 \\ {\tt SCREEN\_CLUVDISP\_BWT} & km/s & Square root of the biweight variance velocity dispersion & 868.0 \\ {\tt 
SCREEN\_CLUVDISPTYPE} & & Type of the "best" velocity dispersion (gapper or bi-weight) & SIG-BWT \\ {\tt SCREEN\_CLUVDISPBEST} & km/s & Value of the "best" velocity dispersion & 868.0 \\ {\tt SCREEN\_DAZSPEC} & Mpc & Angular diameter distance computed at $z=$ {\tt SCREEN\_CLUZSPEC} & 836.9 \\ {\tt SCREEN\_NMEMBERS} & & Number of red-sequence members retained as cluster members & 20 \\ {\tt SCREEN\_STATUS} & & Validation status of the cluster assigned by the visual inspector & validated \\ {\tt LX0124} & ergs/s & Luminosity in the rest-frame 0.1-2.4 keV band in $R_{500c}$ & $1.3\times10^{44}$ \\ {\tt ELX} & ergs/s & Uncertainty on {\tt LX0124} & $0.4\times10^{44}$ \\ {\tt R200C\_DEG} & deg & Apparent $R_{200c}$ radius of the galaxy cluster & 0.093 \\ {\tt FLUX052} & ergs/s/cm2 & Galaxy cluster X-ray flux in the 0.5-2.0 keV band & $4.1\times10^{-13}$\\ {\tt EFLUX052} & ergs/s/cm2 & Uncertainty on {\tt FLUX052} & $1.3\times10^{-13}$ \\ {\tt MCXC} & & Identifier in the MCXC catalogue \citep{piffaretti2011}, if present & n/a \\ {\tt ANAME} & & Alternative name in \citet{piffaretti2011}, if present & n/a \\ \hline \end{tabular} \end{table*} \subsection[]{Cluster $L_X-\sigma$ relation from individual measurements} Fig.~\ref{fig:lx_vs_vdisp} shows the distribution of SPIDERS/CODEX clusters in the $L_C$-$\sigma_{\rm BWT}$ plane, where $\sigma_{\rm BWT}$ is computed using the biweight sample variance estimate. Only 39 clusters with more than 15 spectroscopic members are considered here. The raw $\sigma_{\rm BWT}$ was corrected for its expected bias, according to the model described in Sect.~\ref{sect:zveldisp}. This model is also used to assign error bars to the velocity dispersion measurements, based on the number of members within each system. Fig.~\ref{fig:lx_vs_vdisp} also shows the scaling relation derived from the HIFLUGCS cluster sample.
This relation was derived from observations of 62 low-redshift clusters, with much denser spectroscopic coverage \citep[][]{zhang2011} than the current SPIDERS sample. For this comparison we considered the core-included luminosity-velocity dispersion relation. Considering the intrinsic scatter in the $L_X-\sigma$ relation (dotted lines in the figure), there is a satisfactory agreement between the position of these points and the HIFLUGCS scaling relations. We computed the best-fit power-law using the BCES bisector method\footnote{We are thankful to C.~Sif\'on for making the Python implementation of the BCES algorithm available at http://home.strw.leidenuniv.nl/\~{}sifon/pycorner/bces/.} \citep{akritas1996}, as a rough indicator of the overall trend in our sample. For this exercise, we fitted constants $A$ and $B$, defined such that: \begin{equation} \log_{10} \left( \frac{\sigma_{\rm BWT}}{700 ~{\rm km\,s}^{-1}} \right) = A + B \, \log_{10} \left( \frac{L_X \, E(z)^{-1}}{10^{44} ~{\rm ergs\,s}^{-1}} \right) \end{equation} The consistency between the best-fit power-law and our reference HIFLUGCS relation is encouraging. Proper derivation of scaling relations between X-ray quantities and velocity dispersions, relying on a fully consistent statistical treatment and including covariances and selection effects, will constitute a major task once the SPIDERS sample of clusters grows in size. \begin{figure} \includegraphics[width=84mm]{images/sigbwt-LX_plot.pdf} \caption{Individual SPIDERS-CODEX clusters in the $L_C-\sigma$ plane. Points represent CODEX clusters validated in the SEQUELS-DR12 demonstration sample with more than 15 spectroscopic members. The raw biweight variance calculations are indicated with light triangles, the bias-corrected values with squares, together with the uncertainty (see text). The plain and dotted red lines show the BCES fit to the bias-corrected values and 1-$\sigma$ uncertainty range.
The solid line corresponds to the scaling relation from \citet[][]{zhang2011} and is not fitted to the data. A typical 0.3\,dex intrinsic dispersion is shown as dotted lines.} \label{fig:lx_vs_vdisp} \end{figure} \subsection[]{Cluster $L_X-\sigma$ relation from stacked velocity-distance diagrams} In this section, we investigate how stacking together clusters of similar properties can enhance the statistical power in determining scaling relations between those properties and average velocity dispersion measurements. This method is used \citep[e.g.][]{carlberg1997,biviano2009,rines2013, munari2013} when the number of spectroscopic members per cluster is low and does not allow accurate individual velocity dispersion measurements. \citet{becker2007} in particular accurately measured the relation between optical richness and velocity dispersion of optically selected galaxy clusters up to $z=0.3$ by stacking systems in richness and redshift. Our approach here is similar and uses X-ray luminosity instead of richness. \subsubsection[]{An adaptive $L_X-z$ space binning} We first selected the 108 clusters with at least 8 members. This threshold ensures that the uncertainty on the cluster rest velocity is $\lesssim 200$~km/s for a typical 500~km/s velocity dispersion cluster (Eq.~\ref{eq:ruelzerr}). We split the sample into three redshift slices, namely $[0.03-0.26]$, $[0.26-0.50]$ and $[0.50-0.73]$. Each of them is subdivided into a number of $L_C$ bins, according to an adaptive procedure. Starting from the highest luminosity, each bin is enlarged until the clusters it contains bring $N_{\rm bin} > 150$ galaxies within $\pm 4000$~km\,s$^{-1}$ of their own cluster rest velocity, and we ensure that the size of a bin in luminosity exceeds $\Delta \ln[L_C] > 0.35$. This value is indeed comparable to the typical uncertainty in a CODEX cluster luminosity (see Sect.~\ref{sect:catalogue_prod}).
An additional constraint was added to the adaptive binning algorithm, such that each bin contains at least 50~\% of the number of galaxies $\lambda_{\rm scal}(L_X^{\rm cen})$ expected\footnote{$\lambda_{\rm scal}(L_X)$ was estimated from \citet[][their Eq.~29]{rykoff2012}.} to belong to a single cluster at the centre bin luminosity. This last requirement ensures that each "stacked" cluster contains a high enough number of galaxies, thus avoiding biases in the resulting velocity dispersions \citep[see e.g.][their Fig.~12]{zhang2011}. Considering all clusters within a bin, red-sequence members with a spectroscopic redshift were assembled into phase-space diagrams, as shown in Fig.~\ref{fig:stacked-phase-space} for the specific bin $0.03<z<0.26$ and $0.75 \times 10^{44} ~{\rm ergs\,s}^{-1} < L_C < 1.1 \times 10^{44} ~{\rm ergs\,s}^{-1}$. In this particular example, 14 clusters are stacked together and 220 spectroscopic galaxies contribute to the stack (corresponding to the black crosses). To produce such stacked diagrams, the projected distance of each member is scaled by $R_{200c}$ of its host cluster, as estimated from the X-ray data\footnote{Following the scheme described in Sect.~\ref{sect:catalogue_prod}, this involves a scaling relation $L_X \rightarrow M_{200c} \rightarrow R_{200c}$.}. The cluster centre was chosen to be the optical centre, as derived by the RedMapper algorithm for each CODEX cluster. Individual galaxy velocities were rescaled by their parent cluster $v_{200} = \sqrt{G\,M_{200c}/R_{200c}}$ so as to provide normalized velocities $v/v_{200}$. Finally, each stack is assigned a typical X-ray luminosity and a representative $\langle v_{200} \rangle$ by taking the error-weighted averages of the luminosities and $v_{200}$ values of all clusters in the stack.
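The per-galaxy rescaling described above amounts to a few lines of code. The sketch below is purely illustrative (the function name and the unit convention for $G$ are ours); masses in $M_{\odot}$ and radii in Mpc yield $v_{200}$ in km\,s$^{-1}$:

```python
import numpy as np

G = 4.301e-9  # gravitational constant in km^2 s^-2 Mpc / Msun

def normalise_member(r_proj_mpc, v_off_kms, m200c_msun, r200c_mpc):
    """Map a member galaxy into the stacking plane (R_proj/R200c, v/v200),
    with v200 = sqrt(G * M200c / R200c) of its host cluster."""
    v200 = np.sqrt(G * m200c_msun / r200c_mpc)  # circular velocity in km/s
    return r_proj_mpc / r200c_mpc, v_off_kms / v200

# Example: a 1e14 Msun cluster with R200c = 1 Mpc has v200 ~ 656 km/s
r_n, v_n = normalise_member(0.5, 650.0, 1.0e14, 1.0)
```

Applying this mapping to every spectroscopic member before co-adding the clusters is what makes systems of different masses stackable in a single diagram.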
\begin{figure} \includegraphics[width=84mm]{images/nicephasespaceplot_file-6.pdf} \caption{Stacked phase-space diagram of 14 SPIDERS clusters with $0.03<z<0.26$ and $0.75 < L_C/ (10^{44} ~{\rm ergs\,s}^{-1}) < 1.1$. Black crosses represent all red-sequence members with a spectroscopic redshift, within $\pm 4000$~km\,s$^{-1}$ of their parent cluster redshift. Coloured symbols show galaxies selected by three cleaning techniques, as indicated in legend (iterative clipping, caustic method, clipping with $L_X-\sigma$ prior).} \label{fig:stacked-phase-space} \end{figure} \begin{table*} \centering \caption{\label{tab:stacked_bins}Velocity dispersion results from the stacked phase-space analysis. The 108 SPIDERS galaxy clusters are binned according to their redshift in $[z_{min}, z_{max}]$ and their luminosity $L_C$ in $[L_{min}, L_{max}]$, expressed in units $10^{44}$~ergs/s. The bins contain $N_{clu}$ clusters, and $N_{bin}$ galaxies contribute initially to each stack. The velocity dispersions obtained after selection from three different techniques are computed using the $\sigma_{\rm BWT}$ estimator.} \begin{tabular}{@{}ccccccc|cc|cc|cc@{}} \hline \multicolumn{7}{r}{Membership method:} & \multicolumn{2}{c}{Iterative clipping} & \multicolumn{2}{c}{Caustic} &\multicolumn{2}{c}{$\sigma(L_X)$ clipping} \\ \hline &&&&&&&&&&\\ ID & $z_{min}$ & $z_{max}$ & $L_{min}$ & $L_{max}$ & $N_{\rm clu}$ & $N_{\rm bin}$ & $N_{\rm sel}$ & $\sigma_{\rm BWT}$ & $N_{\rm sel}$ & $\sigma_{\rm BWT}$ & $N_{\rm sel}$ & $\sigma_{\rm BWT}$ \\ & & & & & & & & (km/s) & & (km/s) & & (km/s) \\ \hline \hline 1 & 0.03 & 0.26 & 0.02 & 0.2 & 10 & 131 & 123 & $330 \pm 27$ & 120 & $313 \pm 24$ & 123 & $330 \pm 27$ \\ 2 & 0.03 & 0.26 & 0.2 & 0.3 & 12 & 188 & 171 & $503 \pm 34$ & 162 & $444 \pm 26$ & 171 & $503 \pm 34$ \\ 4 & 0.03 & 0.26 & 0.3 & 0.45 & 15 & 218 & 206 & $557 \pm 30$ & 196 & $499 \pm 24$ & 206 & $557 \pm 30$ \\ 5 & 0.03 & 0.26 & 0.45 & 0.75 & 12 & 156 & 145 & $508 \pm 41$ & 143 & $491 \pm 38$ & 
145 & $508 \pm 41$ \\ 6 & 0.03 & 0.26 & 0.75 & 1.1 & 14 & 220 & 202 & $546 \pm 25$ & 201 & $541 \pm 24$ & 205 & $568 \pm 30$ \\ 0 & 0.03 & 0.26 & 1.1 & 2 & 12 & 182 & 171 & $695 \pm 50$ & 159 & $582 \pm 34$ & 171 & $695 \pm 50$ \\ 3 & 0.03 & 0.26 & 2 & 20 & 7 & 116 & 114 & $871 \pm 65$ & 108 & $779 \pm 52$ & 114 & $871 \pm 65$ \\ &&&&&&&&&&\\ 8 & 0.26 & 0.50 & 0.09 & 0.92 & 2 & 16 & 15 & $355 \pm 107$ & - & - & 16 & $374 \pm 155$ \\ 9 & 0.26 & 0.50 & 0.92 & 2 & 14 & 172 & 161 & $561 \pm 39$ & 134 & $392 \pm 24$ & 164 & $585 \pm 44$ \\ 7 & 0.26 & 0.50 & 2 & 20 & 8 & 97 & 96 & $1011 \pm 80$ & 72 & $686 \pm 50$ & 95 & $985 \pm 72$ \\ &&&&&&&&&&\\ 10 & 0.50 & 0.73 & 2 & 20 & 2 & 39 & 23 & $1061 \pm 184$ & 12 & $773 \pm 179$ & 23 & $1061 \pm 184$ \\ \hline \end{tabular} \end{table*} \subsubsection[]{Member identification in stacked diagrams} Although stacked diagrams are pre-filtered such as to contain only red-sequence members within $4000$~km/s of their parent cluster, they still contain a fraction of potential interlopers. We investigate three methods to clean stacked diagrams and converge to a more precise membership, within the limitations of our present catalogue: \begin{enumerate} \item The first method is very similar to the one used previously for individual cluster velocity dispersions. It relies on an iterative 3-$\sigma$ clipping technique using the bi-weight average and bi-weight variance as estimates of the centre and velocity dispersion of the stacked clusters. Only members at $R_{\rm proj}/R_{200c} < 1$ are considered in this analysis. \item The second method relies on the identification of the caustic \citep{diaferio1999} in each diagram, implemented similarly to \citet{zhang2011}. The caustic is a characteristic shape in the phase-space diagrams; it isolates interlopers from virialized members in a cluster. It effectively makes full use of the two-dimensional structure of the diagrams.
\item The third method starts by estimating the expected velocity dispersion $\sigma_{\rm exp}$ of a galaxy cluster of luminosity $L_X^{\rm cen}$ using the scaling relations of \citet{zhang2011}. Galaxies with offset velocities larger than $3 \times \sigma_{\rm exp}$ are excluded. \end{enumerate} Results are illustrated in Fig.~\ref{fig:stacked-phase-space}, where 202/220, 201/220 and 205/220 members were selected by each of the respective methods, with the most stringent selection originating from the caustic identification. \subsubsection[]{Velocity dispersions from stacked diagrams} Considering only members identified by one of the three 'cleaning' methods, two numerical estimators of the velocity dispersion and their respective uncertainties are derived. In both cases only members within a projected radius less than $R_{200c}$ enter the computation. The first method computes the bi-weight variance $\sigma_{\rm BWT}$ of the selected members, and the uncertainty is based on 1000 bootstrap resamplings of the data. The second method is similar to \citet{rozo2015}. It is based on maximizing the likelihood: \begin{equation} \mathcal{L} = \prod_{i} \left[ p G(v_i ; 0,\sigma)+(1-p) \frac{1}{2 v_m} \right] \end{equation} with $v_i$ the velocities of individual members, $G(x ; \mu, \sigma)$ the Gaussian function of mean $\mu$ and standard deviation $\sigma$. Here $v_m$ is the maximal velocity, i.e.~$3 \, \sigma_{\rm BWT}$, $4 \langle v_{200} \rangle$ and $3\, \sigma_{\rm exp}$ for each of the cleaning methods (i), (ii) and (iii) respectively. The parameters $\sigma_{\rm gauss}$ and $p$ that maximize $\mathcal{L}$ are found using the {\scshape Amoeba} algorithm and 1000 bootstrap resamplings are performed to estimate the uncertainty on $\sigma_{\rm gauss}$. Combining the three 'cleaning' methods with the two estimators leads to 6 estimates of the velocity dispersion for a given stacked phase-space diagram.
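The Gaussian-plus-uniform likelihood above can be maximized with any downhill-simplex routine; the minimal sketch below uses SciPy's Nelder-Mead implementation in place of {\scshape Amoeba}. Variable names are our own and the bootstrap loop is omitted for brevity:

```python
import numpy as np
from scipy.optimize import minimize

def fit_sigma_gauss(v, v_m):
    """Maximize L = prod_i [ p G(v_i; 0, sigma) + (1 - p) / (2 v_m) ]
    over (sigma, p); returns the best-fit velocity dispersion and the
    Gaussian mixing fraction."""
    v = np.asarray(v, dtype=float)

    def neg_log_like(theta):
        sigma, p = theta
        if sigma <= 0.0 or not (0.0 < p <= 1.0):
            return np.inf  # keep the simplex inside the physical domain
        gauss = np.exp(-0.5 * (v / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)
        return -np.sum(np.log(p * gauss + (1.0 - p) / (2.0 * v_m)))

    res = minimize(neg_log_like, x0=[np.std(v), 0.9], method="Nelder-Mead")
    return res.x[0], res.x[1]

# Example: 90% Gaussian members (sigma = 550 km/s) plus 10% uniform interlopers
rng = np.random.default_rng(2)
v = np.concatenate([rng.normal(0.0, 550.0, 180), rng.uniform(-3000, 3000, 20)])
sigma_gauss, p = fit_sigma_gauss(v, v_m=3000.0)
```

The uniform term absorbs residual interlopers left over by the cleaning step, so $\sigma_{\rm gauss}$ is less inflated by them than a plain standard deviation would be.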
\subsubsection[]{The $L_X - \sigma$ relation of stacked SPIDERS clusters} The values of the velocity dispersion in each $(z, L_C)$ bin are reported in Fig.~\ref{fig:compa_stack_sigestimates} (bin numbering listed in Table~\ref{tab:stacked_bins}). The externally derived scaling relation superimposed to guide the eye is taken from \citet[][]{zhang2011}, who fitted the $L_X-\sigma$ relation to individual, bright X-ray clusters in the HIFLUGCS sample. This relation is the same as the black solid line in Fig.~\ref{fig:lx_vs_vdisp}. The dotted lines represent the typical intrinsic scatter ($\sim 0.3$ dex) in this relation. We defer quantitative measurements and a thorough assessment of the stacked $L_X-\sigma$ relation to further studies that will rely on the entire SPIDERS sample of galaxy clusters and a detailed treatment of numerical simulations. We note at this stage a broad agreement between the location of the data points and our reference $L_X-\sigma$ relation. Our results differ according to the combination of cleaning and fitting method employed. The clipping-based method may lead to more complete but less clean member sampling; the prior-based method is very similar and possibly introduces some degree of auto-correlation. In the present work, the caustic method filters out more members than the clipping- and prior-based techniques do, and it provides lower velocity dispersion values, hence higher deviations from the fiducial scaling relation (central column in Fig.~\ref{fig:compa_stack_sigestimates}). Simulations \citep[e.g.][]{serra2013} indicate that the caustic method better distinguishes cluster members from interlopers than other methods; however, small number statistics impact the precision of the determination of the amplitude of the caustic and thus the caustic mass distribution. A lower number of members tends to provide a slightly reduced amplitude, which causes underestimation of the total mass.
Moreover, since the caustic-based filtering makes full use of the projected radius information enclosed in phase-space diagrams, one expects an increased sensitivity of this method to centering uncertainties, to uncertainties in the computation of the normalizing $R_{200c}$ and to sparsity in the 2-dimensional diagrams. Further studies based on numerical simulations tailored to SPIDERS stacks will assess the absolute and relative performances of the methods when combining higher-quality X-ray data (\emph{eROSITA} data) with the entire, larger SPIDERS dataset. Interestingly, data points corresponding to the 'medium-redshift' bin ($z \in [0.26,0.50]$) deviate from this relation at low-$\sigma$ values, and do so more strongly than 'low-redshift' data points. Part of this deviation can be attributed to sample selection effects and Eddington bias. In App.~\ref{app:selbias} we describe a modeling of X-ray selection biases and their impact on scaling relations, by comparing the measured cluster luminosities with those expected from the underlying halo mass distribution. As shown in Fig.~\ref{fig:massmatrix}, Eddington bias makes low-mass systems (equivalently, low-velocity-dispersion systems) appear more luminous on average, and the effect increases with redshift, in agreement with the trend seen in this analysis. The results shown in App.~\ref{app:selbias} assume perfect association of the optical spectra to the X-ray emitting intra-cluster gas. Studying the reliability of such identification, as well as possible contaminants to the X-ray (due for instance to the increased presence of X-ray AGN in group-like halos) and optical signals, is beyond the scope of this paper. These additional sources of bias, likely to dominate in the low-count/low-richness regime, need to be addressed with further simulations.
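The Eddington bias invoked above is straightforward to reproduce with a toy Monte Carlo: draw halos from a steep mass function, scatter their luminosities around a power-law $L-M$ relation, and apply a luminosity cut mimicking the flux limit. All numbers below (slope, normalisation, 0.3~dex scatter, cuts) are illustrative choices, not the model of App.~\ref{app:selbias}:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Toy mass function dn/dM ~ M^-3 on [1e13, 1e15] Msun (inverse-CDF sampling)
a, b = 1e13, 1e15
u = rng.random(n)
mass = (a ** -2 - u * (a ** -2 - b ** -2)) ** -0.5

# Toy L-M relation with 0.3 dex log-normal scatter
log_l_true = 44.0 + 1.6 * (np.log10(mass) - 14.5)
log_l_obs = log_l_true + rng.normal(0.0, 0.3, n)

def mean_up_scatter(log_l_lim, log_m_max=13.8):
    """Mean <log L_obs - log L_true> of the *selected* low-mass clusters."""
    sel = (log_l_obs > log_l_lim) & (np.log10(mass) < log_m_max)
    return np.mean(log_l_obs[sel] - log_l_true[sel])

bias_near = mean_up_scatter(42.6)  # shallow effective L-limit ("low z")
bias_far = mean_up_scatter(43.2)   # deeper effective L-limit ("higher z")
# bias_far > bias_near > 0: selected low-mass systems look over-luminous,
# and increasingly so as the luminosity limit rises with redshift.
```

Because many more low-mass halos exist than high-mass ones, scatter preferentially promotes them across the detection threshold, mimicking the low-$\sigma$ deviations seen in the medium-redshift bin.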
\begin{figure*} \includegraphics[width=\linewidth]{images/nice_lsigmaplot.pdf} \caption{The radial velocity dispersion ($\sigma_{\rm LOS}$) versus X-ray luminosity ($L_X$ in [0.1-2.4]~keV band) as drawn from the stacked phase-space analysis. Six different methods are used to extract a velocity dispersion estimate from phase-space diagrams binned in the $(z, L_C)$ plane, as shown in Fig.~\ref{fig:stacked-phase-space}. Columns from left to right correspond to the three different cleaning techniques: iterative $\sigma$-clipping, caustic identification and $\sigma(L_X)$-clipping. The top row corresponds to the bi-weight variance estimate $\sigma_{\rm BWT}$, while the bottom row corresponds to the Gaussian-fit estimate $\sigma_{\rm gauss}$ (see text). Red numbers refer to the bin ID as listed in Table~\ref{tab:stacked_bins}. Plain, dashed and dotted lines are identical in each panel, and correspond to the scaling relations derived by \citet[][]{zhang2011}. Colours encode the redshift binning used for this analysis.} \label{fig:compa_stack_sigestimates} \end{figure*} \section[]{Comparison with previous X-ray cluster catalogues} \label{sect:mcxc} \begin{figure*} \includegraphics[width=\linewidth]{images/samples_pilotarea.pdf} \caption{Distribution in equatorial coordinates of the main samples of objects discussed in this work. The grey shaded area corresponds to the X-ray flux sensitivity of the ROSAT All-sky survey in the footprint of the SPIDERS pilot area. Galaxy clusters from the MCXC catalogue \citep{piffaretti2011} are displayed and those not matching any validated SPIDERS clusters in this area are highlighted in red (6 objects, see text).} \label{fig:pilotareasamples} \end{figure*} We compare our work to the objects extracted from the MCXC compilation of catalogues \citep{piffaretti2011}, which contains most of the ROSAT-based samples, including serendipitous detections from deep pointed observations.
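Catalogue comparisons of this kind rest on a simple positional association (closest counterpart within a fixed angular radius). The sketch below is our own illustrative implementation using the haversine separation, not the exact matching code used here:

```python
import numpy as np

def angular_sep_arcmin(ra1, dec1, ra2, dec2):
    """Great-circle separation in arcmin between positions given in degrees
    (haversine formula, numerically stable at small separations)."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    s = (np.sin((dec2 - dec1) / 2.0) ** 2
         + np.cos(dec1) * np.cos(dec2) * np.sin((ra2 - ra1) / 2.0) ** 2)
    return np.degrees(2.0 * np.arcsin(np.sqrt(s))) * 60.0

def closest_match(ra, dec, ra_cat, dec_cat, radius_arcmin=3.0):
    """Index of the closest catalogue source within the radius, or -1."""
    sep = angular_sep_arcmin(ra, dec, np.asarray(ra_cat), np.asarray(dec_cat))
    i = int(np.argmin(sep))
    return i if sep[i] <= radius_arcmin else -1
```

A 3 arcmin radius is comfortable here given the ROSAT positional uncertainties, while remaining small compared to typical cluster separations on the sky.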
Fig.~\ref{fig:pilotareasamples} shows the distribution on sky of the samples discussed in this paper. We find 18 matches between the SPIDERS pilot sample and the MCXC database within 3\,arcmin of the CODEX X-ray position. Their properties are summarized in Table~\ref{tab:mcxc_matches} and compared with values extracted from the MCXC compilation. Five systems exhibit a richness lower than 20 in the overlap between the two catalogues. Fig.~\ref{fig:compazmcxc} compares the redshift values in both catalogues and demonstrates their good agreement. All but two agree within 1000~km\,s$^{-1}$ of the MCXC redshift; the other two agree within 3000~km\,s$^{-1}$. Within the survey footprint, 6 MCXC clusters could not be matched to a SPIDERS validated cluster (Table~\ref{tab:mcxc_nonmatches}). Five of them have values of luminosity and redshift (Fig.~\ref{fig:lxzmcxc}) consistent with sources below or at the edge of the CODEX X-ray detectability -- these are sources detected in deep ROSAT pointed observations \citep{vikhlinin1998,mullis2003}. In particular, MCXC~J0921.2+4528 at $z=0.315$ is the brightest of these five and shows a flux significant at a 1.3-$\sigma$ level only (in the ROSAT All-sky survey) at a value of $1.2\times10^{-13}$~ergs\,s$^{-1}$\,cm$^{-2}$, hence it lies in the largely incomplete part of the sensitivity range. The sixth source is Abell~1361 and is located within a masked region of the ROSAT all-sky data used as input of the CODEX X-ray finding algorithm (a degree north of the sensitivity dip visible in Fig.~\ref{fig:pilotareasamples} at R.A. $\sim$11h40m and Dec. $\sim +45\deg$). \begin{figure} \includegraphics[width=84mm]{images/compa_zSPIzMCXC.pdf} \caption{Comparison of redshift values for the 18 matches between the SPIDERS pilot validated sample presented in this work and the MCXC meta-catalogue \citep{piffaretti2011}.
The figures in parentheses indicate the number of spectroscopic members entering the computation of the cluster redshift, error bars indicate the SPIDERS redshift uncertainty. Lines indicate the velocity offset at the comparison redshift.} \label{fig:compazmcxc} \end{figure} \begin{table*} \centering \caption{\label{tab:mcxc_matches}Comparison of the 18 galaxy clusters in common between the MCXC compilation \citep{piffaretti2011} and the SPIDERS-DR12 pilot sample presented in this work. Luminosities $L_{500}$ are expressed within the $R_{500}$ radius taken from each catalogue, in the [0.1-2.4]~keV band. $^*$: see also Table~\ref{tab:xclassrmdr12}.} \begin{tabular}{@{}cccccccc@{}} \hline SPIDERS & MCXC & Alternative & $z_{\rm spec}$ & $z_{\rm lit}$ & $L_{500}$ & $L_{500}$ & $\lambda_{\rm OPT}$ \\ ID & ID & name & (SPIDERS) & (MCXC) & (SPIDERS) & (MCXC) &\\ & & & & & $10^{44}$\,ergs\,s$^{-1}$ & $10^{44}$\,ergs\,s$^{-1}$ & \\ \hline 1\_2952 & J1053.7+4929 & & $0.141 \pm 0.001$ & 0.140 & $0.5 \pm 0.1$ & 1.7 & 12.2 \\ 1\_4189 & J0921.1+4538 & 3C 219 & $0.175 \pm 0.001$ & 0.175 & $1.5 \pm 0.3$ & 1.4 & 13.4 \\ 2\_2449 & J0907.8+4936 & VV 196 & $0.0351 \pm 0.0003$ & 0.035 & $0.14 \pm 0.03$ & 0.1 & 17.1 \\ 1\_2848 & J1013.6+4933 & VMF98 87 & $0.1330 \pm 0.0003$ & 0.133 & $0.2 \pm 0.1$ & 0.3 & 17.7 \\ 1\_4021 & J0822.1+4705 & A0646 & $0.1262 \pm 0.0009$ & 0.130 & $3.5 \pm 0.3$ & 3.0 & 19.0 \\ 1\_2788 & J1025.0+4750 & A1003 & $0.0627 \pm 0.0005$ & 0.052 & $0.21 \pm 0.04$ & 0.1 & 24.4 \\ 1\_4240 & J0958.3+4702 & & $0.390 \pm 0.002$ & 0.390 & $3.0 \pm 0.8$ & 1.9 & 28.6 \\ 2\_4405 & J1351.7+4622 & & $0.0632 \pm 0.0003$ & 0.062 & $0.25 \pm 0.04$ & 0.3 & 31.4 \\ 1\_1172 & J0759.7+5400 & Zw 0755.8+5408 & $0.1026 \pm 0.0006$ & 0.103 & $1.0 \pm 0.1$ & 1.1 & 34.9 \\ 1\_1198 & J0819.9+5634 & VMF98 50 & $0.272 \pm 0.002$ & 0.260 & $1.7 \pm 0.8$ & 0.9 & 37.1 \\ 2\_3671 & J0804.3+4646 & A0616 & $0.185 \pm 0.001$ & 0.187 & $1.1 \pm 0.3$ & 1.4 & 53.1 \\ 2\_3682 & J0805.7+4541 & A0620 &
$0.133 \pm 0.001$ & 0.135 & $1.2 \pm 0.2$ & 0.9 & 64.3 \\ 2\_2602 & J1023.6+4907 & A0990 & $0.141 \pm 0.001$ & 0.144 & $3.6 \pm 0.3$ & 3.9 & 72.5 \\ 2\_4317 & J1313.1+4616 & A1697 & $0.1813 \pm 0.0006$ & 0.183 & $1.5 \pm 0.3$ & 2.6 & 82.5 \\ 1\_3111 & J1229.0+4737 & A1550 & $0.258 \pm 0.001$ & 0.254 & $3.1 \pm 0.6$ & 3.3 & 108.4 \\ 2\_4315 & J1306.9+4633 & A1682 & $0.224 \pm 0.001$ & 0.226 & $3.6 \pm 0.5$ & 5.1 & 123.9 \\ 2\_3664 & J0825.5+4707 & A0655 & $0.1271 \pm 0.0006$ & 0.127 & $1.8 \pm 0.3$ & 2.8 & 131.8 \\ 1\_4241$^{*}$ & J0943.1+4659 & A0851 & $0.409 \pm 0.002$ & 0.407 & $4.7 \pm 1.2$ & 4.9 & 148.8 \\ \hline \end{tabular} \end{table*} \begin{table} \centering \caption{\label{tab:mcxc_nonmatches}The 6 galaxy clusters found in the MCXC compilation \citep{piffaretti2011} within the footprint of the SPIDERS-DR12 pilot area, but not present in the SPIDERS-DR12 pilot sample. The last 5 entries correspond to faint clusters detected in deep ROSAT pointed observations and are therefore unseen in the shallower RASS data (see text).} \begin{tabular}{@{}cccc@{}} \hline MCXC & Alternative & $z_{\rm lit}$ & $L_{500}$ \\ ID & name & (MCXC) & (MCXC)\\ & & & $10^{44}$\,ergs\,s$^{-1}$ \\ \hline J1143.5+4623 & A1361 & 0.117 & 2.8 \\ J1256.6+4715 & VMF98 129 & 0.404 & 0.5 \\ J0818.9+5654 & VMF98 48 & 0.260 & 0.3 \\ J0820.4+5645 & VMF98 51 & 0.043 & 0.02 \\ J0921.2+4528 & VMF98 70 & 0.315 & 1.0 \\ J1056.2+4933 & VMF98 94 & 0.199 & 0.2 \\ \hline \end{tabular} \end{table} \section[]{Conclusions} \label{sect:conclusions} This paper introduces the SPIDERS spectroscopic follow-up of X-ray galaxy clusters with particular emphasis on the selection of targets. The galaxy cluster component in SDSS-IV/SPIDERS will obtain optical spectra of 40\,000-50\,000 galaxies identified as potential members of 5\,000 to 6\,000 massive (from $10^{13.5}$ to $10^{15.5} M_{\odot}$, peaking at $\sim 10^{14.5} M_{\odot}$) X-ray galaxy clusters in the Northern hemisphere, out to redshift 0.6 and beyond.
This massive observational effort will bring the average number of galaxy spectra within their respective red-sequences from 2 to 10, thereby allowing precise calculation of galaxy cluster redshifts, relying on a median number of 8 member galaxies per system. Until the launch of the \emph{eROSITA} satellite (2017), the observed sample of X-ray galaxy clusters originates from the ROSAT all-sky survey and from XMM-Newton archival observations. The target selection heavily relies on the RedMapper algorithm, which is able to assign membership probabilities to galaxies photometrically identified as red-sequence members across the SDSS imaging data. The \emph{eROSITA} all-sky survey will complement this preliminary observational tier by bringing denser samples than ROSAT and more detailed X-ray information over the entire surveyed area. The achieved spectral quality will allow secure redshift measurements of the targeted red galaxies (up to $i=21.2$ in $2\arcsec$ aperture), relying particularly on the extensive developments achieved for the BOSS and eBOSS surveys: observation planning and realization, processing pipelines, infrastructure, databases and analysis tools. A number of steps are envisaged to construct reliable catalogues of X-ray validated clusters with redshift, by assigning membership of galaxies within their parent clusters. These procedures will mix automated algorithms -- to treat the bulk of the dataset in the most efficient way -- and visual screening -- to address peculiar cases, especially in the low-member regime. Throughout this paper, the feasibility of this programme was demonstrated based on a pilot sample of 230 galaxy clusters. All were confirmed with spectroscopic data, providing accurate redshifts at the $\Delta_z/(1+z) \sim 0.001$ level. We highlighted the difficulties implied by reduced X-ray information for the poorer systems (projection effects, ambiguous associations, etc.).
Better X-ray data are required for these low-mass, high-redshift systems, as already provided by XMM or by \emph{eROSITA} in the future. The SPIDERS cluster follow-up programme is essential to achieve the cosmological analysis of the mass function and three-dimensional distribution of X-ray galaxy clusters. Indeed, precise redshift information enables precise determination of cluster X-ray properties (luminosity, temperature, gas mass, etc.) related to the host halo mass, and accurate localization of these objects in the cosmic web. Moreover, a wealth of additional science cases will be addressed via the SPIDERS survey. Among them, we have shown that dynamical mass estimates are accessible for a subset of the clusters (in this paper through the radial velocity dispersion proxy), despite the low number of spectroscopic members per individual system. In particular, the stacking of X-ray clusters offers a promising avenue for the study of average properties in such a large sample. Specifically, the results of our pilot study could establish that the radial velocity dispersions correlate with the X-ray luminosity of (stacked) clusters in a similar way to local galaxy clusters. The methods introduced in this paper are meant to evolve during the course of the survey, most likely including the most recent, state-of-the-art techniques. The quality and number of galaxy spectra within or along the line of sight of galaxy clusters will be exploited to address several science topics, ranging from galaxy formation and evolution to properties of the intergalactic medium. Finally, besides the exceptional dataset provided by the programme and its predicted science outcome, SPIDERS is already starting to pave the way for future, large-area, spectroscopic surveys.
In particular, the 4MOST instrument on the ESO-VISTA telescope \citep{dejong2014} will follow-up \emph{eROSITA} clusters in the Southern hemisphere in the early 2020s and will largely benefit from the science and technical developments pursued within SPIDERS in SDSS-IV. \section*{Acknowledgments} We thank the referee for useful discussion that helped in improving the content of this paper. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatory of China, New Mexico State University, New York University, University of Notre Dame, Observat\'ario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, 
University of Wisconsin, Vanderbilt University, and Yale University. Y.Y.Z acknowledges support by the German BMWI through the Verbundforschung under grant 50\,OR\,1506. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is {http://www.sdss3.org/}.
\section{Introduction} Pair production in strong background fields has been one of the most important issues in theoretical physics since the computation of the one-loop effective action in a constant electromagnetic field by Heisenberg and Euler\,\cite{HeisenbergEuler} and Schwinger\,\cite{Schwinger} and the discovery of the black hole radiation by Hawking\,\cite{Hawking}. The virtual pairs from vacuum fluctuations are separated into real pairs by the strong electric field in the Schwinger mechanism and by the causal horizon of the black hole in the Hawking radiation, as summarized in Table 1. The pair production is accompanied by the vacuum polarization, that is, the real part of the nonperturbative effective action. In quantum electrodynamics (QED), for instance, the mean number of pairs or the vacuum persistence (twice the imaginary part of the effective action) is closely related to the pole structure of the vacuum polarization. In the in-out formalism based on the Schwinger variational principle, the effective action is the scattering matrix amplitude between the in- and the out-vacua, which can be manifestly realized by the Bogoliubov transformation method\,\cite{DeWitt}. In this talk, we revisit the new approach to the vacuum polarization and the Hawking radiation of a Schwarzschild black hole in analogy with the Heisenberg-Euler and Schwinger effective action in QED\,\cite{KimHwang11}. Though it results from quantum field theory at one-loop, not from quantum gravity, the nonperturbative effective action may still shed light on quantum aspects of black holes.
\begin{table} \caption{\label{Tab1} Strong Field Physics: Analogy between QED and Black Hole} \begin{tabular}{lll} & {\bf Strong QED} & {\bf Black Hole}\\ \hline External agent & Electric field & Event horizon \\ Pair production & Schwinger mechanism & Hawking radiation\\ Nonperturbative action & Vacuum polarization & Stress tensor \\ \hline \end{tabular} \end{table} \section{Schwinger Mechanism and Effective Action} The vacuum polarization and the pair production have been systematically studied in spinor QED by Heisenberg and Euler and in scalar as well as spinor QED by Schwinger. The vacuum polarization may be written as\,\cite{Schwinger} \begin{eqnarray} \label{QED vac} {\cal L}_{\rm eff} = (-1)^{2 \sigma} \frac{(1 + 2 \sigma)}{2} \frac{qE}{2 \pi} \int \frac{d^2 {\bf k}_{\perp}}{(2\pi)^2} \, {\cal P} \int_{0}^{\infty} \frac{ds}{s} \exp \Bigl(- \frac{m^2 + {\bf k}_{\perp}^2}{2qE} s \Bigr) \nonumber\\ \times \Bigl[\frac{\cos^{2 \sigma} (s/2)}{\sin(s/2)} - \frac{2}{s} + (-1)^{2 \sigma} \frac{1 - \sigma}{6}s \Bigr], \label{qed act} \end{eqnarray} where $\sigma = 0$ for scalar QED and $\sigma = 1/2$ for spinor QED. The vacuum persistence, twice the sum of residues at simple poles of the vacuum polarization, is given by \begin{eqnarray} \label{QED per} 2 {\rm Im} {\cal L}_{\rm eff} = (-1)^{2 \sigma} \frac{(1 + 2 \sigma) (qE)}{2 \pi} \int \frac{d^2 {\bf k}_{\perp}}{(2\pi)^2} \ln \Bigl( 1 + (-1)^{2 \sigma} {\cal N}_{\bf k} \Bigr), \label{qed per} \end{eqnarray} where the mean number of produced pairs and the inverse temperature from the Unruh effect\,\cite{HwangKim09} are \begin{eqnarray} {\cal N}_{\bf k} = e^{- \beta (\frac{{\bf k}_{\perp}^2}{2m} + \frac{m}{2})}, \quad \beta = \frac{2 \pi}{(qE/m)}. \end{eqnarray} The inversion of spin-statistics has been argued in the vacuum polarization\,\cite{MGR,LabunRafelski} and in the vacuum persistence\,\cite{HwangKim09}, but its physical origin and meaning have not yet been understood.
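As a simple numerical illustration of the mean pair number ${\cal N}_{\bf k}$ above (a minimal sketch in natural units with $m = q = 1$; the field values below are arbitrary choices), one can evaluate the exponential suppression of the yield below the critical field directly:

```python
import math

def mean_pair_number(k_perp, E, m=1.0, q=1.0):
    """Mean number of produced pairs per transverse mode,
    N_k = exp(-beta (k_perp^2/(2m) + m/2)), with the Unruh-like
    inverse temperature beta = 2*pi / (qE/m)."""
    beta = 2.0 * math.pi * m / (q * E)
    return math.exp(-beta * (k_perp**2 / (2.0 * m) + m / 2.0))

# At k_perp = 0 this reduces to exp(-pi m^2/(qE)) = exp(-pi E_c/E),
# the nonperturbative suppression below the critical field E_c = m^2/q.
for E in (1.0, 0.5, 0.1):          # E in units of the critical field
    print(E, mean_pair_number(0.0, E))
```

Even at the critical field itself the yield per mode is only $e^{-\pi} \simeq 0.04$, which illustrates why the Schwinger mechanism is far out of reach of ordinary laboratory field strengths.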
The Schwinger limit is the critical strength for $e^{-}e^{+}$ pair production, $E_c = m^2/|e| = 1.3 \times 10^{16} \, {\rm V/cm}$. In the in-out formalism the Schwinger variational principle leads to the effective action\,\cite{DeWitt} \begin{eqnarray} e^{iW} = e^{i \int d^D x \sqrt{-g} {\cal L}_{\rm eff}} = \langle 0, {\rm out} \vert 0, {\rm in} \rangle. \label{W} \end{eqnarray} The effective action (\ref{W}) is equivalent to summing the Feynman diagrams in Figure 1. The pair production necessarily makes the effective action complex since $\vert 0, {\rm out} \rangle \neq \vert 0, {\rm in} \rangle$. Further, the vacuum persistence and the mean number of produced pairs are related through \begin{eqnarray} e^{- 2 {\rm Im} W} = \vert \langle 0, {\rm out} \vert 0, {\rm in} \rangle \vert^2, \quad 2 {\rm Im} W = (-1)^{2 \sigma} VT \sum_{\bf k} \ln [1 +(-1)^{2 \sigma} {\cal N}_{\bf k}]. \label{vac per} \end{eqnarray} In the above $2 {\rm Im} W/(VT) = 2 {\rm Im} {\cal L}_{\rm eff}$ is the decay-rate of the in-vacuum per unit volume and per unit time and for a small pair-production rate, $2 {\rm Im} {\cal L}_{\rm eff} \simeq \sum_{\bf k} {\cal N}_{\bf k}$. \begin{figure}[h] \begin{center} \includegraphics[width=5.5cm,height=3.5cm]{feynman-diagram.eps} \caption{One-loop diagrams: the internal loop denotes a charged particle and the external legs (wave lines) denote the background photons and/or gravitons.} \label{fig_1} \end{center} \end{figure} Recently Kim, Lee and Yoon have further developed the in-out formalism and introduced the gamma-function regularization ($\Gamma$-regularization)\,\cite{KLY08,KLY10a,Kim11a}. The zero-temperature effective action for bosons and fermions is given by \begin{eqnarray} \label{eff ac} \frac{W}{VT} = {\cal L}_{\rm eff} = (-1)^{2 \sigma} \sum_{\bf k} \ln \alpha_{\bf k}^*. 
\label{eff act} \end{eqnarray} Here $\alpha_{\bf k}$ is the Bogoliubov coefficient between the out- and the in-vacua for each quantum number ${\bf k}$ \begin{eqnarray} \hat{a}_{{\rm out}, {\bf k}} = \alpha_{\bf k} \hat{a}_{{\rm in}, {\bf k}} + \beta_{\bf k} \hat{a}^{\dagger}_{{\rm in}, {\bf k}}, \end{eqnarray} and the coefficients satisfy the relation from the spin-statistics theorem \begin{eqnarray} |\alpha_{\bf k}|^2 + (-1)^{2 \sigma} |\beta_{\bf k}|^2 =1. \end{eqnarray} The mean number of produced pairs in (\ref{vac per}) is given by ${\cal N}_{\bf k} = |\beta_{\bf k}|^2$. In a constant electric field, the Bogoliubov coefficient may be found from the spin-diagonal component of the Dirac or the Klein-Gordon equation \begin{eqnarray} \alpha_{\bf k} = \frac{\sqrt{2 \pi}}{\Gamma (-p)} e^{- i (p \pm 1) \frac{\pi}{2}}, \quad p = - \frac{1}{2} \mp \frac{i}{2 \pi} {\cal S}_{\bf k}, \end{eqnarray} where the upper (lower) sign is from the time-dependent (Coulomb) gauge and ${\cal S}_{\bf k} = (m^2 + {\bf k}^2_{\perp} - 2 i \sigma qE)/(2qE)$ is the instanton action\,\cite{KLY10a}. Table 2 summarizes the background fields for which the pair production and/or the effective actions are known. The in-out formalism has proved to be a consistent and computationally powerful method for the effective action and/or the pair production for an electromagnetic field in a curved spacetime such as de Sitter (dS) space or anti-de Sitter (AdS) space. Since the Bogoliubov coefficients can be derived from the exact solution of the field equation, it is expected that the effective action may be found when the background field and/or the spacetime have a certain symmetry leading to an exact solution. For instance, the Dirac or the Klein-Gordon equation in a constant electric field has the spectrum generating algebra $SU(1,1)$ and dS and AdS spaces have the maximal symmetry of the given dimensions.
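The statistics-dependent signs in the vacuum persistence relation (\ref{vac per}) can be made concrete with a short numerical sketch (the mode occupation below is an arbitrary illustrative value): for a single mode, the boson and fermion contributions to $2\,{\rm Im}\,W$ are $+\ln(1+{\cal N})$ and $-\ln(1-{\cal N})$, and both reduce to ${\cal N}_{\bf k}$ itself for a small pair yield.

```python
import math

def persistence_per_mode(N, sigma):
    """One mode's contribution to 2 Im W: (-1)^(2 sigma) ln(1 + (-1)^(2 sigma) N),
    i.e. +ln(1+N) for bosons (sigma = 0) and -ln(1-N) for fermions (sigma = 1/2)."""
    s = (-1) ** int(2 * sigma)
    return s * math.log(1.0 + s * N)

N = 1e-3                                   # illustrative mean pair number
boson = persistence_per_mode(N, 0)
fermion = persistence_per_mode(N, 0.5)
print(boson, fermion)                      # both close to N: 2 Im L ~ sum_k N_k
```

The multi-pair corrections push the boson contribution slightly below ${\cal N}$ and the fermion contribution slightly above it, in accordance with the sign structure of Eq. (\ref{vac per}).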
\begin{table} \caption{\label{Tab2} Exact Effective Action and/or Pair Production} \begin{tabular}{lll} {\bf Background Fields} & {\bf EA} and {\bf PP} & {\bf Reference}\\ \hline Constant EM-field & EA and PP & Heisenberg-Euler\,\cite{HeisenbergEuler} \\ & & Schwinger\,\cite{Schwinger} \\ Sauter-type E-field & PP & Nikishov\,\cite{Nikishov} \\ Sauter-type E-field & EA and PP & Dunne-Hall\,\cite{DunneHall}\\ & & Kim-Lee-Yoon\,\cite{KLY08,KLY10a}\\ E-field in dS and AdS space & PP & Kim-Page\,\cite{KimPage08}\\ & & Kim-Hwang-Wang\,\cite{KHW}\\ dS space & EA and PP & Kim\,\cite{Kim10-dS}\\ \hline EA: effective action & PP: pair production & \end{tabular} \end{table} \section{Vacuum Polarization and Hawking Radiation} The Hawking radiation of bosons and fermions from a charged rotating black hole is given by\,\cite{Page} \begin{eqnarray} N_{J} (\omega) = \frac{1 - |R_J|^2}{ e^{\beta (\omega - m \Omega_H - q \Phi_H)} + (-1)^{2 \sigma }}, \quad \beta = \frac{1}{k_{\rm B} T_{\rm H}}, \quad T_{\rm H} = \frac{\kappa}{2 \pi}. \end{eqnarray} Here $R_J$ is the amplification factor, $\Omega_H$ the angular velocity of the horizon, $\Phi_H$ the electric potential and $\kappa$ the surface gravity on the event horizon. In the case of zero amplification factor, the vacuum persistence is \begin{eqnarray} 2 {\rm Im} W = - (-1)^{2 \sigma } \sum_J \ln (1 -(-1)^{2 \sigma} e^{- \beta (\omega - m \Omega_H - q \Phi_H) } ). \label{bh per} \end{eqnarray} Note the change of sign in contrast to the QED case. A four-dimensional Schwarzschild black hole with mass $M$ has the inverse temperature $\beta = 8 \pi M$. Denoting $J = \{\omega, l, m, p \}$, with the spherical harmonic indices $l, m$, the polarization $p$ and the energy $\omega$, the Bogoliubov coefficients for a massless boson field are found\,\cite{DeWitt} \begin{eqnarray} \alpha_{J} = A_{J} e^{2 \pi M \omega} \Gamma (1 + i 4M \omega), \quad \beta_{J} = - A_{J} e^{-2 \pi M \omega} \Gamma (1 + i 4M \omega).
\end{eqnarray} Now the effective action (\ref{eff act}) takes the form \begin{eqnarray} W = i (8 \pi M) \sum_{l} (2l+1) (2p+1) \int \frac{d \omega}{2 \pi} \ln \Gamma(1 - i 4M \omega). \end{eqnarray} Employing the $\Gamma$-regularization, we find the effective action per unit horizon area\,\cite{KimHwang11} \begin{eqnarray} {\cal L}_{\rm eff} = - \frac{1}{16 \pi M} \sum_{l} (2l+1) (2p+1) \int \frac{d \omega}{2 \pi}\, {\cal P} \int_{0}^{\infty} \frac{ds}{s} e^{- 4M \omega s} \Bigl[\frac{\cos(s/2)}{\sin(s/2)} - \frac{2}{s} \Bigr]. \label{bh act} \end{eqnarray} It is remarkable that the effective action (\ref{bh act}) and the vacuum persistence (\ref{bh per}) have the same form as (\ref{qed act}) and (\ref{qed per}) of spinor QED in a constant electric field. The vacuum persistence quantifies the decay rate of the vacuum due to the Schwinger mechanism or the Hawking radiation. Further, it is known that the trace anomalies explain the vacuum persistence, that is, the Schwinger mechanism and the Hawking radiation. In fact, the vacuum persistence for bosons per unit horizon area\,\cite{KimHwang11} \begin{eqnarray} 2 {\rm Im} {\cal L}_{\rm eff} = \sum_{l} (2l+1) (2p+1) \frac{\pi}{12} \frac{1}{\beta^2}, \end{eqnarray} is equal to the total flux from the gravitational anomalies\,\cite{RobinsonWilczek}. \section{Conclusion} We have presented the one-loop effective action for QED in a constant electric field and the Hawking radiation of a Schwarzschild black hole in the in-out formalism. It consists of the vacuum polarization and the vacuum persistence responsible for pair production. The prominent feature of the nonperturbative effective action for a Schwarzschild black hole is that it shares many features in common with the spinor QED effective action in a constant electric field.
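For orientation, the inverse temperature $\beta = 8\pi M$ used above translates, once ordinary units are restored, into the familiar Hawking temperature $T_{\rm H} = \hbar c^3/(8\pi G M k_{\rm B})$; the following sketch (standard SI constants; the solar mass value is approximate and used only for illustration) gives its scale:

```python
import math

# SI constants; the solar mass value is an approximate illustrative input
hbar, c, G, k_B = 1.054571817e-34, 2.99792458e8, 6.67430e-11, 1.380649e-23
M_sun = 1.989e30

def hawking_temperature(M_kg):
    """Restore units in beta = 8 pi M: T_H = hbar c^3 / (8 pi G M k_B)."""
    return hbar * c**3 / (8.0 * math.pi * G * M_kg * k_B)

print(hawking_temperature(M_sun))   # ~6e-8 K for a solar-mass black hole
```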
There remain a few questions to be further pursued: firstly, to find the local effective action outside the horizon, secondly, to investigate the amplification (grey body) factor, and thirdly, to find the effective action at two-loop and higher loops. Still another interesting question is the Schwinger effect in a Reissner-Nordstr\"{o}m black hole. Finally, the origin of the spin-statistics inversion in QED, in contrast to gravity, calls for further study\,\cite{KimHwang11,HwangKim09,MGR,LabunRafelski}. \section{Acknowledgements} The author would like to thank W-Y.~Pauchy Hwang, Hyun Kyu Lee and Yongsung Yoon for early collaborations and Eun Ju Kang for drawing the figure. Participation in ICGAC10 was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2011-0002-520). This work was supported in part by National Science Council Grant (NSC 100-2811-M-002-012), Taiwan.
\section{Introduction} Brownian motion with purely time dependent drift and diffusion is ubiquitous in geophysical, environmental and biophysical processes. One can identify numerous geophysical and environmental processes which occur under the crucial effect of external time dependent and random forcing, e.g., the change between the snow-storage and snow-melt phases \cite{a,b}, outbreaks of water-borne diseases \cite{c,d}, the life cycle of tidal communities \cite{e,f,g}, and many more. Stochastic models with time dependent drift and diffusion terms are extensively used in the study of neuroscience \cite{h,i,j}. One of the most useful tools to tackle such stochastic processes is the Fokker-Planck formalism \cite{k,l}. In this formalism, different realizations of a system are described in terms of a probability density for the system to be in a given state at a certain instant, and the theoretical description of such 1-D diffusive motion is governed by : \begin{equation} \frac{\partial P(x,t)}{\partial t}=-\mu(t)\frac{\partial P(x,t)}{\partial x}+D(t)\frac{\partial^2 P(x,t)}{\partial x^2}. \end{equation} In this respect several interesting questions of wide interest can be raised, such as (i) the probability of finding the system in a certain domain at a certain instant (survival probability), (ii) the pdf $P(t_f|x_0)$ of the time at which the system exits a certain domain for the first time (known as the first passage time $t_f$), starting from the initial point $x_0$, (iii) the pdf $P(M)$ of the maximum value of a BM process before its first passage time, and (iv) the joint probability distribution $P(M, t_m)$ of the maximum value $M$ and its occurrence time $t_m$ before the first passage time of the BM process.\\ \indent All the above mentioned PDFs are calculated and discussed for simple Wiener and Ornstein-Uhlenbeck processes \cite{l,m,n} as well as in the context of DNA breathing dynamics \cite{malay}. But all these discussions are based on constant drift and diffusion terms.
However, the extension to time dependent drift and diffusion terms is not straightforward. This is mainly because the system has broken both space and time homogeneity. Several attempts have been made to study BM processes with purely time dependent drift and diffusion terms. One of the main works on BM with time dependent drift concerns barrierless electronic reactions in solutions \cite{amj1,amj2,amj3}. Generalizing the Oster-Nishijima model \cite{onj} to the low viscosity limit or the inertial limit, the authors observed a strong dependence of the decay rate on friction and temperature even in the absence of the barrier, which agrees well with numerical simulation of the full Langevin equation \cite{amj2}. A series of works on stochastic resonance for time dependent sinusoidal drift is analyzed in Refs. \cite{amj4,amj5,amj6,amj7}. The first passage time statistics for a Wiener process with an exponential time dependent drift term are analyzed in the context of neuron dynamics in Refs. \cite{uph1,uph2}. Also, recent studies of DNA unzipping under periodic forcing need to be mentioned \cite{sanjay,alex,swan}. Recently, Molini {\it et al.} \cite{molini} made a study of BM with purely time dependent drift and diffusion terms.\\ \indent In this work, we extend the above mentioned works \cite{l,m,n,malay,molini} by incorporating several PDFs of Brownian motion, i.e., $P(A|x_0)$, $P(M)$ and $P(M, t_m)$, for a BM with purely time dependent drift and diffusion terms. One of the main objectives of this work is to incorporate inertial effects in the study of Brownian functionals; to the best of our knowledge, this is the first attempt to incorporate inertial effects in first passage studies, which is one of the important unsolved problems. The other objective of this work is to advocate the use of the recently studied backward Fokker-Planck (BFP) method \cite{snm1} and the path decomposition (PD) method \cite{snm2}.
Both the BFP and PD methods are based on the Feynman-Kac formalism \cite{kac}. Both techniques have been used extensively in studying many aspects of classical Brownian motion, as well as for exploring different problems in computer science and astronomy \cite{snm1,snm3,snm4}. Here, for the first time, we apply these elegant methods to study the Brownian functionals of a BM with purely time dependent drift and diffusion. Unlike the standard FP treatment \cite{q,r,s}, which yields distribution functions directly, in the BFP method we derive and solve differential equations for the Laplace transforms of various Brownian functionals. On the other hand, we can utilize the PD method to calculate the distribution functions of interest by splitting a representative path of the dynamics into parts and weighting each part appropriately. This decomposition is justified by the Markovian property of the dynamics.\\ \indent The paper is organized as follows. In section II, we discuss our BM process model with purely time dependent drift and diffusion terms. Then we discuss several distribution functions of interest and their relevance. The BFP and PD methods are briefly explained. In Sec. III, we introduce several PDFs for a BM with power law time dependent drift and diffusion terms. We illustrate these results with the example of fresh water availability in summer in the snowmelt dominated regime, with power law time dependent drift and diffusion terms. We conclude our paper in section IV. \section{Model, Methods and Measures} \subsection{Model} We are interested in those kinds of problems where time-dependent random forcing is predominant. Hence, the Fokker-Planck description of such problems can be made through Eq. (1).
The associated stochastic differential equation for the state variable x(t) is given by : \begin{equation} dx(t)=\mu(t)dt+D(t)dW(t), \end{equation} where $\mu(t)$ is the purely time dependent drift term, $D(t)$ denotes the diffusion term, and $W(t)$ is a Wiener process with Gaussian distribution. The Wiener process is an idealized statistical description that applies to many physical systems \cite{l,m,von}. One of the most elegant theoretical methods to tackle such stochastic processes is the Fokker-Planck (FP) formalism \cite{l,m,von}. In this formalism, one can describe different realizations of a system by the probability density. One can find the system in a given state at a certain time, and the corresponding diffusion equation describes its temporal evolution. Several interesting questions related to such stochastic systems are of wide interest in several areas \cite{l,m,von}. One of the main interests in this field is to find the probability that the system remains in a certain domain at a given instant and the distribution of the moment at which the system escapes it for the first time. Due to the stochastic nature of the system, different realizations of the system leave a certain domain at different times, and it is natural to consider the statistical properties of this random variable. Other interesting questions related to such first passage statistics are (i) finding the probability density $P(A|x_0)$ of the area under a path, (ii) the probability density $P(M)$ of the maximum size, and (iii) the joint probability density $P(M,t_m)$ of the maximum size and its occurrence time $t_m$.\\ \subsection{Methods} In one dimension, first passage problems are basically formulated by considering a state variable which evolves stochastically according to a given law in its phase space. We are mainly concerned with the instant when the variable leaves a certain domain for the first time.
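The first passage quantities listed above can also be explored by direct numerical integration of Eq. (2). The sketch below uses the Euler-Maruyama scheme; the particular power-law forms of $\mu(t)$ and $D(t)$ and all parameter values are hypothetical choices for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_times(x0, mu, D, dt=1e-3, t_max=20.0, n_paths=2000):
    """Euler-Maruyama integration of dx = mu(t) dt + D(t) dW, Eq. (2),
    recording the first time each path reaches the origin."""
    x = np.full(n_paths, float(x0))
    alive = np.ones(n_paths, dtype=bool)
    fpt = np.full(n_paths, np.nan)
    for k in range(int(t_max / dt)):
        t = k * dt
        x[alive] += mu(t) * dt + D(t) * rng.normal(0.0, np.sqrt(dt), alive.sum())
        hit = alive & (x <= 0.0)
        fpt[hit] = (k + 1) * dt
        alive &= ~hit
        if not alive.any():
            break
    return fpt

# Hypothetical power-law drift and diffusion (the class considered in Sec. III):
fpt = first_passage_times(1.0, mu=lambda t: -0.8 * t**0.5,
                          D=lambda t: 0.5 * (1.0 + t)**0.25)
absorbed = np.isfinite(fpt)
print(absorbed.mean(), np.median(fpt[absorbed]))
```

Histogramming the recorded first passage times gives a direct Monte Carlo estimate of $P(t_f|x_0)$ against which the analytical results can be compared.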
To deal with such problems, a number of methods or approaches have been described in Refs. \cite{l,m,snm1,von}. Here, we describe two elegant methods: (i) the backward Fokker-Planck (BFP) method and (ii) the path decomposition (PD) method. \subsubsection{Backward Fokker-Planck Method (BFP)} Following Ref. \cite{snm1}, we can introduce a general description to compute the PDF of a Brownian functional in a time interval $\lbrack0,t_f\rbrack$, where $t_f$ is the first passage time of the process. Thus, one can introduce a functional to calculate different statistical properties of a Brownian functional : \begin{equation} T=\int_{0}^{t_f}U(x(\tau))d\tau, \end{equation} where $x(\tau)$ is a Brownian path which follows the differential Eq. (2); it starts at $x_0$ at time $\tau=0$ and continues up to $\tau=t_f$. Here, $U(x(\tau))$ is a specified function of the path and its form depends on the quantity we are interested in calculating. For example, if we are interested in calculating the first passage time, one should choose $U(x(\tau))=1$. On the other hand, for the area distribution one should consider $U(x(\tau))=x(\tau)$. One can easily understand that $T$ is a random variable which can take different values for different Brownian paths. The main goal is to calculate the probability distribution $P(T,t_f|x_0)$. Now, one may note that the random variable $T$ can only be positive for our choice of $U(x(\tau))$. Thus, one may consider the Laplace transform of the distribution $P(T|x_0)$ : \begin{eqnarray} Q(x_0,p)&=&\int_{0}^{\infty}dT P(T|x_0)\exp(-pT)\nonumber \\ &=& <\exp(-p\int_{0}^{t_f}U(x(\tau))d\tau)>. \end{eqnarray} Here, the angular bracket denotes the average over all possible paths starting at $x_0$ at $\tau=0$ and ending at the first time they cross the origin. For simplicity, we will drop the variable p in the function $Q(x_0,p)$ in the rest of our paper. Now, to derive a differential equation for $Q(x_0)$, we follow the method described in Ref. \cite{snm1}.
Thus, we split the interval $\lbrack 0,t_f \rbrack $ into two parts. During the first interval $\lbrack 0,\Delta\tau\rbrack$, the path starts from $x_0$ and propagates up to $x_0 + \Delta x$. In the second interval $\lbrack \Delta \tau,t_f \rbrack$, the path starts at $x_0 + \Delta x $ and ends at 0 at time $t_f$. Here, $\Delta\tau$ is a fixed, infinitesimally small time interval. To leading order in $\Delta\tau$, we obtain : $\int_{0}^{t_f}U(x(\tau ))d\tau \simeq U(x_0)\Delta\tau +\int_{\Delta\tau}^{t_f} U(x)d\tau$. As a result, one obtains from Eq. (4) : \begin{eqnarray} Q(x_0) &\simeq & \exp(-pU(x_0)\Delta\tau)< Q(x_0 + \Delta x)>_{\Delta x}\nonumber \\ &\simeq & (1-pU(x_0)\Delta\tau)<Q(x_0 + \Delta x)>_{\Delta x}. \end{eqnarray} Here, the angular bracket denotes the average over all possible realizations of $\Delta x$. Now, one can obtain from the dynamical equation for a free Langevin particle, i.e. from $\frac{dx}{dt}=\xi(t)$, that $\Delta x=\xi(0)\Delta \tau$. Now, expanding $Q(x_0 + \Delta x)$ in powers of $\Delta\tau$, and taking the averages over the noise by using the facts $<\xi(0)> = 0$ and $<\xi^2(0)> = 1/\Delta\tau$ as $\Delta\tau\rightarrow 0$, one obtains, to lowest order in $\Delta\tau$, the ordinary differential equation : \begin{equation} \frac{1}{2}\frac{d^2Q(x_0)}{dx_0^2}-pU(x_0)Q(x_0)=0. \end{equation} {\it Boundary Conditions:} Equation (6) is valid in the regime $x_0\in \lbrack 0,\infty\rbrack$ with the following boundary conditions : (i) as the initial position $x_0\rightarrow 0$, the first passage time vanishes, which gives us $Q(x_0=0)=1$; (ii) on the other hand, as $x_0\rightarrow \infty$, the first passage time diverges, which results in $Q(x_0\rightarrow\infty)=0$.\\ \indent Thus, our scheme will be as follows. We can solve the differential Eq. (6), termed the BFP equation \cite{snm1}.
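As a concrete check of this scheme, for the simplest choice $U(x)=1$ (the first passage time itself) Eq. (6) with the above boundary conditions gives $Q(x_0)=e^{-\sqrt{2p}\,x_0}$, whose inverse Laplace transform is the classical first-passage density $P(t_f|x_0)= \frac{x_0}{\sqrt{2\pi t_f^3}}\, e^{-x_0^2/2t_f}$ of a free Brownian particle. The short numerical sketch below (the quadrature grid is an arbitrary choice) verifies this transform pair:

```python
import numpy as np

x0, p = 1.0, 0.5

# Levy-Smirnov first-passage density of a free Brownian particle started at x0
t = np.logspace(-4, 3, 200001)
density = x0 / np.sqrt(2.0 * np.pi * t**3) * np.exp(-x0**2 / (2.0 * t))

# Laplace transform by trapezoidal quadrature on the log-spaced grid
f = np.exp(-p * t) * density
laplace = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))

print(laplace, np.exp(-np.sqrt(2.0 * p) * x0))   # both ~0.3679
```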
Solving Eq. (6) with the appropriate boundary conditions mentioned above provides the Laplace transformed pdfs of various quantities, which are determined by the choice of U(x). Now, inverting the Laplace transform with respect to p, one can obtain the desired pdf $P(T|x_0)$. On the other hand, the standard Fokker-Planck method adopted in Refs. \cite{l,m} yields the distribution function $P(x,t)$ directly. Thus, these two approaches are distinct, providing complementary information. \subsubsection{The path decomposition method (PD)} The basic principle of the PD method is very simple. Since the motion in Eq. (2) is Markovian, one can break a typical path into two parts, so that the weight of the whole path is the product of the weights of the two split parts \cite{snm1}. This allows one to compute the joint probability distribution $P(M,t_m)$ of the maximum size $M$ and the occurrence time $t_m$ at which this maximum occurs before first passage. Now, integrating over M, one can obtain the marginal distribution $P(t_m)$. The basic procedure is to compute $P(M,t_m)$ by splitting a typical path into two parts, before and after $t_m$. Here, $W_L$ and $W_R$ are the weights of the path before and after $t_m$. As a matter of fact, the total weight W of the whole path is : \begin{equation} W = W_L \times W_R \end{equation} On the left-hand side of $t_m$, the path propagates from $x_0$ at $t = 0$ to $M - \epsilon$ at $t = t_m$, without attaining the value $0$ or $M$ during the interval $\lbrack0,t_m\rbrack$ \cite{snm2}. Now, the weight $W_L$ can be determined by using a path-integral treatment based on the Feynman-Kac formalism. Let us denote by $q(x_0)$ the probability that the motion described by Eq. (2) exits the interval $\lbrack 0,M \rbrack$ for the first time through the origin. Thus, $q(x_0)$ is the cumulative probability that the maximum before the first-passage time is $\leqslant M$.
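For a driftless process with unit noise strength, the exit probability $q(x_0)$ introduced here takes the classical gambler's-ruin form $q(x_0) = 1 - x_0/M$, as derived below; a brief Monte Carlo sketch (step size, seed and path count are arbitrary choices) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(1)

def exit_through_origin(x0, M, dt=1e-3, n_paths=20000, max_steps=200000):
    """Monte Carlo estimate of q(x0): the probability that a driftless
    Brownian path started at x0 leaves [0, M] through the origin."""
    x = np.full(n_paths, float(x0))
    alive = np.ones(n_paths, dtype=bool)
    hit_zero = np.zeros(n_paths, dtype=bool)
    for _ in range(max_steps):
        x[alive] += rng.normal(0.0, np.sqrt(dt), alive.sum())
        hit_zero |= alive & (x <= 0.0)
        alive &= (x > 0.0) & (x < M)
        if not alive.any():
            break
    return hit_zero.mean()

q_mc = exit_through_origin(0.3, 1.0)
print(q_mc, 1.0 - 0.3 / 1.0)   # Monte Carlo vs the gambler's-ruin value 0.7
```

The small residual discrepancy comes from discrete-time monitoring of the barriers and shrinks as the step size is reduced.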
It is known that this function satisfies two boundary conditions: (i) $q(0) = 1$ and (ii) $q(M) = 0$. Let us consider a function $\phi_{\Delta\tau}(\Delta x)$ which gives us the distribution function of a small displacement $\Delta x$ in an infinitesimal time $\Delta \tau\rightarrow 0$. Now, using the Markovian property of the dynamics (2), one can show that : \begin{equation} q(x_0)=\int q(x_0+\Delta x)\phi_{\Delta\tau}(\Delta x)d(\Delta x). \end{equation} Now, making a Taylor expansion of $q(x_0+\Delta x)$, averaging over $\Delta x= \xi(0)\Delta \tau$, and using $<\xi(0)>=0$ and $<\xi^2(0)> = 1/\Delta\tau$, we obtain to leading order in $\Delta \tau$ \begin{equation} \frac{d^2q(x_0)}{dx_0^2}=0. \end{equation} Now, solving the above equation with the help of the above-mentioned boundary conditions, one can obtain : \begin{equation} q(x_0)=1-\frac{x_0}{M}. \end{equation} Now, differentiating $q(x_0)$ with respect to $M$, we obtain : \begin{equation} P(M)=\frac{x_0}{M^2}. \end{equation} Now, $W_R$ is obtained as \begin{equation} W_R=q(M-\epsilon)=\frac{\epsilon}{M}. \end{equation} On the other hand, the weight $W_L$ can be obtained from the fact that the white noise is Gaussian, so that the probability of a path is given by : \begin{equation} P\lbrack \lbrace x(\tau)\rbrace\rbrack \propto \exp \Big\lbrack -\frac{1}{2}\int_{0}^{t}d\tau \Big(\frac{dx}{d\tau}\Big)^2\Big\rbrack.
\end{equation} The weight $W_L$ is then obtained as a sum over contributions from all possible paths: \begin{eqnarray} W_L &\propto & \int_{x(0)=x_0}^{x(t_m)=M-\epsilon}{\mathcal{D}}x(\tau)\exp \Big\lbrack -\frac{1}{2}\int_{0}^{t_m}d\tau \Big(\frac{dx}{d\tau}\Big)^2\Big\rbrack \nonumber \\ &\times & \prod_{\tau=0}^{t_m}\theta\lbrack x(\tau)\rbrack \prod_{\tau=0}^{t_m}\theta\lbrack M- x(\tau)\rbrack, \end{eqnarray} where the products of step functions $\prod_{\tau=0}^{t_m}\theta\lbrack x(\tau)\rbrack$ and $\prod_{\tau=0}^{t_m}\theta\lbrack M-x(\tau)\rbrack$ enforce the requirement that the path crosses neither the level $0$ nor the level $M$ for times between $0$ and $t_m$. Following Feynman-Kac \cite{kac}, the path integral can be identified with the propagator $<M-\epsilon|e^{-\hat{H}t_m}|x_0>$, corresponding to the quantum Hamiltonian $\hat{H}$ of a single particle of unit mass, \begin{equation} \hat{H}=-\frac{1}{2}\frac{d^2}{dx^2}+V(x), \end{equation} with potential energy $V(x)=0$ for $0<x<M$ and $V(x)=\infty$ for $x=0$ and $x=M$. Note that the infinite potential energy at $x=0$ and at $x=M$ enforces the requirement that the path never crosses either level. Finally, \begin{equation} W_L=\sum_{n=1}^{\infty}e^{-E_nt_m}\psi_n(M-\epsilon)\psi_n(x_0), \end{equation} where $\psi_n(x)$ and $E_n$ are the eigenfunctions and eigenenergies, respectively, of the Hamiltonian $\hat{H}$. \subsection{Measures} Our primary focus is on several first-passage Brownian functionals of physical relevance. We consider the following quantities and explore their pdfs, in the context of the physical phenomenon of snowmelt dynamics and fresh-water availability in summer.\\ (i){\it First passage time or lifetime of the stochastic process}: The first-passage time pdf $P(t_f |x_0)$, i.e., the pdf of the time of first touching the origin starting from initial size $x_0$, provides information about the lifetime of the stochastic process.
A related quantity is the survival probability $C(x_0,t) = 1 - \int_{0}^{t} P(t_f|x_0)dt_f$ of the process. The survival probability is an experimentally measurable quantity; for example, in the context of DNA breathing dynamics, $C(x_0,t)$ can be inferred from experiments by measuring fluorescence correlations of a tagged DNA \cite{bonnet1,bonnet2}. In snowmelt dynamics, our key stochastic variable is the total potential water availability, $H$ (in terms of water equivalent from both snow and rainfall). Thus, the survival probability $C(H_0,t)$ for a given initial snow water equivalent $H_0$ and the pdf of the first-passage time $P(t_f|H_0)$ are useful quantities that offer important information about the timing between the melting of snow and fresh-water availability in summer under different climatic scenarios. \\ (ii){\it Area under a path:} For a typical path described by Eq. (2), one can define the area under the path before the first-passage time as $A =\int_{0}^{t_f} x(t)dt$. The quantity of interest is its pdf $P(A|x_0)$ for an initial value $x_0$. This quantity provides a measure of the effectiveness of the corresponding stochastic process. For example, for the snowmelt process, $P(A|H_0)$ gives information about the average total snow water equivalent with initial value $H_0$. While the first-passage time distribution provides information about the lifetime, it does not contain any hint of the average total water equivalent before full melting. Quantities (i) and (ii) can be calculated by following the BFP method discussed in Sec. IIB-1.\\ (iii){\it Maximum size M:} Another proposed measure for quantifying the reactivity of the process is the distribution of the maximum size before the first-passage time, $P(M)$. Consider again the snowmelt process.
The pdf $P(M)$ provides information about the maximum total available fresh-water equivalent before the total melting of the snow.\\ (iv){\it Maximum size M and the corresponding time $t_m$:} The joint probability distribution function $P(M,t_m)$ can be investigated by following the PD method, which is based on the Feynman-Kac formalism \cite{kac} (see Sec. IIB-2). Using this pdf, one can further calculate the distribution function $P(t_m)$ of the time at which the process attains its maximum size before hitting the origin. This latter pdf is of interest because it provides information about the (average) time of occurrence of the biggest size before hitting the origin. \section{Snowmelt dynamics} Snowmelt is one of the main sources of fresh water for many regions of the world, and the snowmelt process is very sensitive to temperature and precipitation fluctuations \cite{barnett1,barnett2}. Snow dynamics basically consists of two phases: (a) an accumulation phase, in which the snow water equivalent (i.e., the amount of liquid water that would be obtained by the total and instantaneous melting of the entire snowmass) rises to its seasonal maximum $H_0$, and (b) a depletion phase, in which the whole snowpack gradually decreases (releasing the stored water content) due to temperature fluctuations. A detailed description of such complex dynamics requires many physical parameters. Here we build a simplified stochastic model that describes the total water equivalent from both snow and rainfall during the melting season, as driven by both precipitation (solid-to-liquid transition) and increasing air temperature. To keep the stochastic model simple, we consider the total potential water availability (in terms of water equivalent) as the main stochastic variable, and we neglect other effects connected with snow percolation, metamorphism, etc. \cite{rango}.
The predominant factors governing fresh-water availability in the warm season are increasing air temperature and liquid precipitation. Accordingly, we assume that the melting phase can be described by a power-law time-dependent drift directed towards the total melting of the snowpack, while positive and negative exponents of the power-law diffusion represent precipitation events and pure melting periods, respectively. Following the ``degree-day'' approach with time-varying melting-rate coefficients, one can assume that the melting process is described by a linear function of time \cite{rango}. Considering a power-law form for the drift and diffusion during the melting season, the dynamics of the total water equivalent from both snow melting and precipitation at a given point in space can reasonably be described by the Langevin equation \cite{bras}: \begin{equation} dH=-\mu(t)dt + D(t)dW(t), \end{equation} where the drift $\mu(t)=kt^{\alpha}$ represents accumulation or depletion with rate constant $k$, and the diffusion amplitude is $D(t)=\sqrt{2kt^{\alpha}}$; both the rainfall and snowmelt contributions are included in $H$. Here we assume that the drift and the diffusion follow the same power law with exponent $\alpha$. This is a reasonable assumption in the sense that snowmelt is most predominant in the summer, i.e., the process is expected to increase its variability during the warm season \cite{molini}. The initial value of the snow water equivalent (SWE), $H_0$, is the accumulated snow during the cold season.\\ \indent The Fokker-Planck equation corresponding to Eq. (17) is \begin{equation} \dfrac{\partial p(H,t|H_{0})}{\partial t}=\mu(t)\dfrac{\partial p(H,t|H_{0})}{\partial H}+ \dfrac{D^{2}(t)}{2}\dfrac{\partial^{2}p(H,t|H_{0})}{\partial H^{2}}.
\end{equation} We now use the following transformations to go from $(H,t)$ to $(z,\tau)$ space: \begin{equation} \tau=\int \dfrac{D^{2}(t)}{2} dt + B, \end{equation} and \begin{equation} z=H+\int\mu(t)dt +C. \end{equation} Using these transformations, Eq. (18) reduces to a constant-coefficient free diffusion equation: \begin{equation} \frac{\partial p(z,\tau)}{\partial \tau}=\frac{1}{2}\frac{\partial^2p(z,\tau)}{\partial z^2}, \end{equation} \figOne \subsection{PDF of first passage time: $P(t_f|H_0)$} Using the backward Fokker-Planck method, one obtains the BFP equation \begin{equation} \dfrac{1}{2}\dfrac{d^{2}Q(z_{0},\tau)}{d z_{0}^{2}} -pU(z_{0})Q(z_{0}) = 0. \end{equation} Substituting $U(z_{0})= 1$ in Eq. (22), we obtain \begin{equation} \dfrac{1}{2}\dfrac{d^{2}Q}{d z_{0}^{2}} -pQ(z_{0}) = 0. \end{equation} The general solution of Eq. (23) is \begin{equation} Q(z_{0})=e^{-\sqrt{2p} z_{0}}. \end{equation} Inverting the Laplace transform with respect to $p$ gives the pdf of the first-passage time $\tau_f$: \begin{equation} P(\tau_{f}|z_{0})=\dfrac{z_{0}}{\sqrt{2\pi}}\dfrac{e^{-z_{0}^{2}/2\tau_{f}}}{\tau_{f}^{3/2}}. \end{equation} Transforming back to the original variables $H$ and $t$ by using Eqs. (19) and (20), we get \begin{eqnarray} P(t_{f}|H_{0})&=&\dfrac{D^{2}(t_f)}{2}\dfrac{1}{\sqrt{2\pi}}\dfrac{[H_{0}+\int_{0}^{t_{f}}\mu(t)dt]}{\Big[\int_{0}^{t_{f}}\frac{D^{2}(t)}{2}dt\Big]^{3/2}}\nonumber \\ &\times&\exp\bigg[-\dfrac{(H_{0}+\int_{0}^{t_{f}}\mu(t)dt)^2}{2\int_{0}^{t_{f}}\frac{D^{2}(t)}{2}dt} \bigg]. \end{eqnarray} Let us consider two different cases for the time-dependent drift and diffusion.\\ {\it Case 1: Unbiased power-law time-dependent diffusion}, i.e., $D(t)=\sqrt{2kt^{\alpha}}$, $\alpha>-1$, and $\mu(t)=0$. Substituting $\mu(t)$ and $D(t)$ in Eq. (26), we obtain \begin{equation} P(t_{f}|H_{0})=\dfrac{kt_{f}^{\alpha}}{\sqrt{2\pi}}\dfrac{H_{0}}{\Big[k\dfrac{t_f^{\alpha+1}}{\alpha+1}\Big]^{3/2}} \exp \Big[-\dfrac{H_{0}^2}{2k\dfrac{t_f^{\alpha+1}}{\alpha+1}}\Big]. \end{equation} {\it Case 2: Proportional power-law diffusion and drift}, i.e., $\mu(t)=qkt^{\alpha}$ and $D(t)=\sqrt{2kt^{\alpha}}$; the first-passage time distribution is then \begin{eqnarray} &&P(t_{f}|H_{0})= \dfrac{\Big(H_{0}+\dfrac{qkt_{f}^{\alpha+1}}{\alpha+1}\Big)(1+\alpha)^{3/2}}{\sqrt{2\pi k}\,t_{f}^{(3+\alpha)/2}} \nonumber \\ &\times& \exp\Big[-\dfrac{t_{f}^{-(\alpha+1)}(kqt_{f}^{\alpha+1}+H_{0}+\alpha H_{0})^2}{2k(\alpha+1)}\Big] \end{eqnarray} \subsection{PDF of area till the first passage time: $P(A|H_0)$} While $P(t_f|H_0)$ supplies important information about the time of melting and summer fresh-water availability, the pdf $P(A|H_0)$ supplies useful information about the total summer fresh-water availability under different climatic conditions.\\ We can compute the distribution of $A$, i.e., $P(A|H_0)$, by substituting $U(z_{0})=z_{0}$ in Eq. (22): \begin{equation} \dfrac{d^{2}Q}{d z_{0}^{2}} -2p z_{0}Q(z_{0}) = 0. \end{equation} The general solution of Eq. (29) is \begin{equation} Q(z_{0})=A_{1} Ai(2^{1/3}p^{1/3}z_{0})+B_{1}Bi(2^{1/3}p^{1/3}z_{0}), \end{equation} where $Ai(z)$ and $Bi(z)$ are the Airy functions.
Now, applying the boundary conditions\\ 1. $Q(z_{0})=0$ when $z_{0}\rightarrow \infty$,\\ 2. $Q(z_{0})=1$ when $z_{0}\rightarrow 0$,\\ we obtain \begin{equation} Q(z_{0})=3^{2/3}\Gamma(2/3)Ai(2^{1/3}p^{1/3}z_{0}). \end{equation} Taking the inverse Laplace transform, \begin{equation} P(A(\tau)|z_{0})=\dfrac{2^{1/3}}{3^{2/3}\Gamma(1/3)}\dfrac{z_{0}}{A(\tau)^{4/3}}\exp\Big[-\dfrac{2z_{0}^3}{9A(\tau)}\Big]. \end{equation} Transforming back to the original variables by using Eqs. (19) and (20), we obtain \begin{eqnarray} P[A(t)|H_{0}]&=&\dfrac{D^{2}(t)}{2}\dfrac{2^{1/3}}{3^{2/3}\Gamma(1/3)}\dfrac{H_{0}+\int_{0}^{t_{f}}\mu(t)dt}{[A(t)]^{4/3}}\nonumber \\ &&\exp \Big[-\dfrac{2[H_{0}+\int_{0}^{t_{f}}\mu(t)dt]^3}{9A(t)}\Big]. \end{eqnarray} {\it Case 1: Unbiased power-law time-dependent diffusion.}\\ In this case one considers $D(t)=\sqrt{2kt^{\alpha}}$, $\alpha>-1$, and $\mu(t)=0$. Substituting these values of $D(t)$ and $\mu(t)$ in Eq. (33), we obtain the pdf of the area till $t_f$: \begin{eqnarray} P(A|H_{0})&=&(kt_{f}^{\alpha})\dfrac{2^{1/3}}{3^{2/3}\Gamma(1/3)}\dfrac{H_{0}}{[A(t)]^{4/3}}\nonumber \\ && \times \exp\Big[-\dfrac{2H_{0}^3}{9A(t)}\Big]. \end{eqnarray} \figThree {\it Case 2: Proportional power-law diffusion and drift.}\\ In this case one considers $\mu(t)=qkt^{\alpha}$ and $D(t)=\sqrt{2kt^{\alpha}}$; the pdf of the area till the first-passage time is then \begin{eqnarray} P(A|H_{0})&=&\dfrac{2^{1/3}}{3^{2/3}\Gamma(1/3)}\dfrac{(kt_{f}^{\alpha})[(\alpha +1)H_{0}+qkt_{f}^{\alpha+1}]}{(\alpha+1)[A(t)]^{4/3}}\nonumber \\ &&\exp\Big[-\dfrac{2(\alpha H_{0}+H_{0}+qkt_{f}^{\alpha+1})^{3}}{9(\alpha+1)^3A(t)}\Big] \end{eqnarray} \subsection{Joint probability distribution of the maximum and its occurrence time before first passage: $P(M,t_m)$} The joint probability distribution of the maximum and the time of its occurrence before the first-passage time, $P(M,t_m)$, provides important information about
the maximum available fresh-water equivalent in summer, as well as its exact timing. In that sense it is one of the important quantities to study. Following the path decomposition method discussed in Section IIB-2, as well as in Ref. \cite{snm2}, we can obtain exact expressions for the joint probability distribution $P(M,t_m)$ for the two power-law cases.\\ {\it Case 1: Unbiased power-law time-dependent diffusion.}\\ In this case one considers $D(t)=\sqrt{2kt^{\alpha}}$, $\alpha>-1$, and $\mu(t)=0$. The joint probability distribution $P(M,t_m)$ is then given by \begin{eqnarray} P(M,t_{m})&= &(kt_{m}^{\alpha})\dfrac{\pi}{M^3} \sum_{n=1}^{\infty}(-1)^{n+1}n \sin\Big(\dfrac{n\pi H_{0}}{M}\Big)\nonumber \\ &&\exp\Big(-\dfrac{n^2 \pi^2}{2M^2}k\dfrac{t_{m}^{\alpha+1}}{\alpha+1}\Big) \end{eqnarray} {\it Case 2: Proportional power-law diffusion and drift.}\\ In this case one considers $\mu(t)=qkt^{\alpha}$ and $D(t)=\sqrt{2kt^{\alpha}}$; the joint probability distribution $P(M,t_m)$ is then given by \begin{eqnarray} P(M,t_{m})&= &(kt_{m}^{\alpha})\dfrac{\pi}{M^3} \sum_{n=1}^{\infty}(-1)^{n+1}n \sin\Big(\dfrac{n\pi }{M}\Big(H_{0}+\dfrac{qkt_{m}^{\alpha+1}}{\alpha+1}\Big)\Big)\nonumber \\ && \exp\Big(-\dfrac{n^2 \pi^2}{2M^2}k\dfrac{t_{m}^{\alpha+1}}{\alpha+1}\Big) \end{eqnarray} Since the full joint probability distribution is difficult to visualise, we focus below on the marginal distribution $P(t_m)$.
\subsection{Marginal distribution: $P(t_m)$} The marginal distribution is given by \begin{equation} P(\tau_{m})=\int_{z_{0}}^{\infty}dM\, P(M,\tau_{m}). \end{equation} Inserting the expression for $P(M,\tau_m)$ in the transformed variables, one obtains \begin{eqnarray} &&P(\tau_{m}) = \int_{z_{0}}^{\infty}dM\dfrac{\pi}{M^3} \sum_{n=1}^{\infty}(-1)^{n+1}n \sin\Big(\dfrac{n\pi z_{0}}{M}\Big)\nonumber \\ &&\times \exp\Big(-\dfrac{n^2 \pi^2}{2M^2}\tau_{m}\Big)\nonumber \\ &&= \sum_{n=1}^{\infty}(-1)^{n+1}n \pi\int_{z_{0}}^{\infty}\dfrac{dM}{M^3} \sin\Big(\dfrac{n\pi z_{0}}{M}\Big) \exp\Big(-\dfrac{n^2 \pi^2}{2M^2}\tau_{m}\Big). \nonumber \\ \end{eqnarray} Substituting $u=\dfrac{n\pi z_{0}}{M}$, so that $du=-\dfrac{n\pi z_{0}}{M^2}dM$, one can show that \begin{equation} P(\tau_{m})=\dfrac{1}{\pi \tau_{m}}\sum_{n=1}^{\infty}\dfrac{(-1)^{n+1}}{n}\int_{0}^{n\pi}du~ \cos(u)\exp\bigg(-\dfrac{u^2}{2z_{0}^2}\tau_{m}\bigg). \end{equation} {\it Case I: Large-$\tau_{m}$ asymptote ($\tau_{m}\gg z_{0}^2$)}\\ In this limit the Gaussian factor cuts the integral off well before $u=n\pi$, so the upper limit can be extended to infinity; using $\sum_{n=1}^{\infty}(-1)^{n+1}/n=\ln 2$, we obtain \begin{equation} P(\tau_{m})=\dfrac{\ln 2}{\sqrt{2\pi}}\dfrac{z_{0}}{\tau_{m}^{3/2}}. \end{equation} Transforming back to $(H,t)$ space, we obtain \begin{equation} P(t_{m})=\dfrac{D^{2}(t_m)}{2}\dfrac{\ln 2}{\sqrt{2\pi}}\dfrac{[H_{0}+\int_{0}^{t_{m}}\mu(t)dt]}{\Big[\int_{0}^{t_{m}}\frac{D^{2}(t)}{2}dt\Big]^{3/2}}. \end{equation} {\it (a) Unbiased diffusion, $D(t)=\sqrt{2kt^\alpha}$:}\\ In this case, we obtain \begin{equation} P(t_{m})=\dfrac{\ln 2}{\sqrt{2\pi k}}\dfrac{H_{0}(\alpha+1)^{3/2}}{t_{m}^{(\alpha+3)/2}}. \end{equation} {\it (b) Proportional power-law drift and diffusion, $\mu(t)=qkt^\alpha$ and $D(t)=\sqrt{2kt^\alpha}$:}\\ In this case, the marginal distribution is given by \begin{equation} P(t_{m})=\dfrac{\ln 2}{\sqrt{2\pi k}}\dfrac{(\alpha+1)^{1/2}[(\alpha+1)H_{0}+qkt_{m}^{\alpha+1}]}{t_{m}^{(\alpha+3)/2}}. \end{equation} {\it Case II: Small-$t_{m}$ asymptote} \\ In this limit $\tau_{m}\ll z_{0}^2$.
Taking the Laplace transform of $P(M,\tau_{m})$ with respect to $\tau_m$ gives \begin{equation} \int_{0}^{\infty}d\tau_{m}~e^{-s\tau_{m}}P(M,\tau_{m})=\dfrac{\sinh(z_{0}\sqrt{2s})}{M~\sinh(M\sqrt{2s})}. \end{equation} When $s$ is much larger than both $z_{0}^{-2}$ and $M^{-2}$, we get \begin{equation} \int_{0}^{\infty}d\tau_{m}~e^{-s\tau_{m}}P(M,\tau_{m})\approx \dfrac{e^{-\sqrt{2s}(M-z_{0})}}{M}. \end{equation} \figFive \figSix Taking the inverse Laplace transform, \begin{equation} P(M,\tau_{m})\approx \dfrac{\tau_{m}^{-3/2}}{\sqrt{2\pi}}\dfrac{(M-z_{0})}{M}e^{-\dfrac{(M-z_{0})^2}{2\tau_{m}}}. \end{equation} Integrating the above equation over $M$ in the limit $\tau_{m}\ll z_{0}^2$, we get \begin{equation} P(\tau_{m})\approx \dfrac{1}{z_{0}\sqrt{2\pi \tau_{m}}}. \end{equation} Transforming back to the $(H,t)$ variables, we obtain \begin{equation} P(t_{m})=\dfrac{D^{2}(t_m)}{2}\dfrac{1}{\sqrt{2\pi}}\dfrac{1}{[H_{0}+\int_{0}^{t_{m}}\mu(t)dt]}\dfrac{1}{\bigg[\int_{0}^{t_{m}}\frac{D^{2}(t)}{2}dt \bigg]^{1/2}}. \end{equation} {\it Unbiased power-law diffusion, $D(t)=\sqrt{2kt^\alpha}$:}\\ \begin{equation} P(t_{m})=\bigg(\dfrac{k(\alpha+1)}{2\pi}\bigg)^{1/2}\dfrac{1}{H_{0}}\dfrac{t_{m}^{\alpha}}{(t_{m}^{\alpha+1})^{1/2}}. \end{equation} {\it Proportional power-law time-dependent drift and diffusion, $\mu(t)=qkt^\alpha$ and $D(t)=\sqrt{2kt^\alpha}$:}\\ \begin{equation} P(t_{m})=(\alpha+1)^{3/2}\sqrt{\dfrac{k}{2\pi}}\dfrac{t_{m}^{\alpha}}{[(\alpha+1)H_{0}+qkt_{m}^{\alpha+1}]}\dfrac{1}{(t_{m}^{\alpha+1})^{1/2}}. \end{equation} \section{Conclusions} In this work, we have analyzed the probability distribution functions of several Brownian functionals associated with a stochastic model for the total fresh-water availability in mountain regions, incorporating the temperature effect, snow accumulation and precipitation through a power-law time-dependent drift $\mu(t)=qkt^{\alpha}$ and diffusion $D(t)=\sqrt{2kt^{\alpha}}$.
Based on the backward Fokker-Planck method discussed in Ref.\cite{snm1}, we derived (i) the first-passage time distribution $P(t_f|x_0)$, providing information about the lifetime of the stochastic process, (ii) the distribution $P(A|x_0)$ of the area $A$ covered by the random walk till the first-passage time, measuring the reactivity of stochastic processes, and (iii) the distribution $P(M)$ of the maximum size $M$ before the first-passage time; (iv) the joint probability distribution $P(M,t_m)$ of the maximum size $M$ and the time $t_m$ of its occurrence before the first-passage time was also obtained by employing the Feynman-Kac path-integral formulation. The advantage of the elegant methods adopted here is that they produce results on various functionals by making proper choices of a single term in a parent differential equation with appropriate boundary conditions. We are at present studying these functionals for a Brownian particle with inertia. If the initial velocity is assumed to be zero, the problem is easily tractable; however, for a Gibbsian distribution of the initial velocity the problem is really challenging, and work is in progress along this line \cite{malay1}.\\ This study is also helpful in analyzing the effect of periodic forcing in DNA unzipping \cite{sanjay}, or the effect of a terahertz field on DNA breathing dynamics \cite{alex,swan}. In the context of the integrate-and-fire model with sinusoidal modulation of neuron dynamics, the membrane voltage $V(t)$ is the stochastic variable under sinusoidal stimulus. In this context, $P(t_f|V_0)$ and $C(V_0,t)$ will provide important information about the timing of the firing of a neuron after reaching the threshold voltage $V_{th}$ from an initial value $V_0$ \cite{malay2}. \begin{acknowledgments} MB acknowledges the financial support of IIT Bhubaneswar through seed money project SP0045. AMJ thanks DST, India for the award of a J C Bose national fellowship. \end{acknowledgments}
\section{Introduction} With the rise of edge computing, FPGA vendors have been releasing and marketing CPU\texttt{+}FPGA SoCs as the ideal solution for this domain. As edge devices are often specialised for a single task in a constrained environment, it is advantageous to build dedicated hardware to improve performance and energy efficiency. FPGAs offer the advantage of targeted hardware without losing the ability to adapt the platform to changes (e.g., security updates), while being more efficient than a pure software solution. As \ac{HLS} matures~\cite{XilinxInc.2020VivadoSynthesis}, it becomes a more attractive approach to creating efficient high-performance accelerators for FPGA devices. \ac{ML} algorithms are a prime candidate for acceleration at the edge, but their computational requirements exceed the capabilities of many embedded devices. Inference at the edge is a problem being addressed by many works, but training at the edge still faces hurdles to adoption despite its clear benefits. In the field of \acp{DT}, many algorithms are incompatible with devices of this class due to memory constraints. ID3~\cite{Quinlan1983LearningGames} and derivatives such as C4.5 and C5.0 require that the entire training dataset be present in memory for training. Incremental learning algorithms such as ID5~\cite{UTGOFF1988ID5:ID3}, ID5R~\cite{Utgoff1989IMPROVEDLEARNING} and ITI~\cite{Utgoff1997DecisionRestructuring} do allow for ongoing learning from streaming data, but store the dataset samples within the tree. Hoeffding Trees~\cite{Domingos2000MiningStreams} are incremental learning trees that are more suitable for embedded scenarios for two reasons: they asymptotically guarantee the same classification as traditional batch learners, and they store information about the distribution of samples statistically rather than storing the samples themselves, which drastically reduces memory requirements, especially for large datasets.
In this work, we present a flexible C/C\texttt{++} \ac{HLS} implementation of a Hoeffding Tree variant tailored for use in FPGAs, originally proposed by Lin et al.~\cite{Lin2019TowardsFPGA}. Their work built on an earlier variant in which the storage of the statistical data of the sampling distribution of the original Hoeffding Tree was replaced by a Gaussian approximation~\cite{Pfahringer2008HandlingTrees}. Lin et al. replace this approximation with quantile estimation using asymmetric signum functions~\cite{Althoff2017AnTesting}. The result is a larger memory footprint but a reduction in computational requirements, while achieving similar results. Since their implementation is written in Verilog, its applicability is limited to circuit synthesis, e.g., for FPGAs. By using \ac{HLS}, an implementation can be created that is equally suitable for CPU and FPGA. The contributions of this work are as follows: \begin{itemize} \item A generic, template-based C/C++ implementation of the Hoeffding Tree classifier as per Lin et al. \cite{Lin2019TowardsFPGA}, but suited for \ac{HLS}. \item Functional validation of the implementation through software execution, and post-synthesis onto a Xilinx ZCU102 development board. \item Experimental evaluation of the memory requirements of the tree object as a function of template parameters. \item Experimental evaluation of FPGA resource requirements and execution time of the synthesised training and inference methods as a function of template parameters. \end{itemize} \section{HLS Hoeffding Tree Implementation} A decision tree is a type of machine learning model used either for classification or regression. A decision tree performs sequential binary decisions over an incoming vector of features, and a classification is computed when a leaf node is reached. During training, leaf nodes are added to the tree based on a splitting criterion, which separates the data into two regions at every tree junction.
A Hoeffding tree is a type of decision tree where the splitting criterion is the Hoeffding bound, shown in Equation \ref{eq1}. The tree performs learning and inference by relying on a property of the Hoeffding bound that guarantees that the best splitting point is chosen. If a gain function $G$ is to be maximised, then given $G(X)$ and $G(Y)$ (X and Y being the attributes that generate the highest and second highest values of $G$), if $G(X)-G(Y)>\varepsilon$ then the Hoeffding bound guarantees that with probability $1-\delta$, X is the best attribute to split on. $R$ represents the range of the attributes and $N$ the number of samples on a node. \begin{equation} \varepsilon = \sqrt{\frac{R^2\ln(1/\delta)}{2N}} \label{eq1} \end{equation} \smallskip Compared with other criteria, the Hoeffding bound has two appealing characteristics: it allows for online incremental learning and growth of the tree that asymptotically tends towards the results provided by batch learners, and it is independent of the probability distribution of the data sampling. The Hoeffding tree thus allows for continuous learning and node splitting over a potentially infinite (e.g., streaming) number of samples \cite{Domingos2000MiningStreams}. FPGAs have been intensively studied for decision tree implementations, as a tree structure maps efficiently to specialised hardware. In conjunction with other optimisations, decision trees in FPGAs have been shown to outperform CPU and GPU solutions \cite{Barbareschi2021AdvancingStudy}. Lin et al. \cite{Lin2019TowardsFPGA} demonstrate speedups of up to 1500x for an RTL implementation of the Hoeffding tree versus a 2.6GHz processor. Our aim is to explore a higher abstraction level via HLS, providing greater applicability, while evaluating the attainable performance. We implemented the tree as a C\texttt{++} class template. The template parameters include the maximum number of nodes in the tree, the feature size, and the floating-point precision.
The class contains the training and inference methods which are synthesised to hardware. At runtime, the C\texttt{++} tree object can be manipulated in software and passed as an argument to the training/inference method, as summarised in Figure \ref{fig:software}. This allows for the instantiation of several tree objects in memory (with different template parameters if desired). Trees with the same template parameters can be processed by the same synthesised circuit. Since the functions can also be invoked in software, training or inference can be dynamically partitioned based on which device performs better for either task, as a function of the tree parameters. This also means that if the FPGA is occupied processing a tree object, other trees can be evaluated via software without the need for a blocking wait. Finally, evaluation of multiple trees is possible by a combination of software and hardware invocations, by deploying multiple instances of the hardware kernel, or by time-multiplexing a single hardware kernel (as explained below). Any of these cases allows for arbitrary runtime tree ensembles. This evaluation is currently future work. \begin{figure}[t] \centerline{\includegraphics[width=0.9\linewidth]{software2.pdf}} \caption{Software and hardware architecture of the Hoeffding Tree implementation; the training and inference kernels are shared by multiple tree objects} \label{fig:software} \end{figure} The Xilinx Vitis HLS flow enforces an OpenCL model for kernel invocation. The implemented kernel, \texttt{krnl\_Tree}, receives four arguments: a \texttt{HoeffdingTree} object as mentioned, an array of samples, an array of output classifications, and the size of these arrays. In this model, a large overhead penalty would occur for invocations with a single sample, due to the data transfer time.
A practical application of the kernel design could be, e.g., in the sensor domain, where the tree could continuously sample fused data from multiple sensors (i.e., multiple attributes) without processor intervention, avoiding transfer overheads. Alternatively, streaming samples can be accumulated until a sufficiently large number is held to mitigate this overhead. This does not mean that the tree behaves as a batch learner, as one sample is processed per \emph{infer-then-train} step. Inference on an incremental learning decision tree cannot be easily parallelised, as the model changes and evolves with every training sample that arrives. This restricts the pipeline to dealing with one sample at a time, sequentially. The sample structure contains information about whether it should be used for training purposes or only for inference. Thus, as the kernel loops through the sample array, it executes either the \texttt{train} or the \texttt{infer} method of the tree object accordingly. The results are placed in the output data structure. The OpenCL API allows for fine-grained control of how these arguments are passed to the kernels, each argument being a separate buffer with persistent storage. Thus, trees can be transferred to FPGA memory once and not retrieved between executions of the kernels. With this mechanism, a tree object can reside in memory while only new samples are transferred in, and the model can be retrieved in a final stage. Conversely, the samples themselves may remain in memory, and trees can be freely exchanged. This is one strategy for the construction of the tree ensembles mentioned previously. Trees can reuse the same kernel instance via time-multiplexing, or through concurrent instantiation of several copies of \texttt{krnl\_Tree}. In either case, the same read-only sample buffer can be assigned to all trees, thus significantly reducing overhead and preventing data duplication. For brevity, the evaluation of ensembles is out of the scope of this paper.
\section{Experimental Evaluation} We performed the following experiments: we evaluated the resource utilisation of a single synthesised tree for a range of values of the feature size and number of classes; we evaluated the training and inference time of a single tree in hardware, versus the ARM CPU, for several synthetic clustering datasets (varying the number of points, clusters, and features); and we evaluated the classification accuracy and execution time of a single tree for UCI's Bank and Covertype datasets. \subsection{Resource Utilisation} Table \ref{tab:resources} presents various configurations of the kernel, tailored for datasets of different dimensions (D), with different numbers of classes (K), numbers of samples (N) and maximum numbers of nodes (Nd). The purpose is to determine the effect of these parameters on FPGA resource utilisation. As expected, the parameter N has no effect on resource utilisation, as samples cannot be processed in parallel. Increasing the feature size and the number of classes results in an increase in resource usage. This is due to the highly sequential nature of the generated kernel, which also explains why the performance of this kernel on training tasks is poor compared to the CPU. This CPU advantage is less surprising when considered in the context of its 11-fold advantage in clock speed. Current \ac{HLS} tools cannot automatically parallelise sequential code, and without hardware design expertise to optimise the design, the implementation will be far from optimal. We believe that further parallelisation can be achieved even within a single tree, through inner loop unrolling or memory partitioning. One interesting result is the kernel's operating frequency, which remains unchanged for all configurations. Looking deeper into the cause of this phenomenon, one finds that the bottleneck is the sorting of a sample down from the root node to the appropriate leaf node.
This sequential operation also prevents the kernel from being pipelined. \begin{table*}[htb] \centering \caption{N, D, K and Nd effects on FPGA Resource Utilisation} \begin{tabular}{crrrrrrrr} \toprule Nodes & 100 & 100 & 100 & 1000 & 100 & 100 & 100 & 1000 \\ K & 5 & 5 & 10 & 5 & 5 & 5 & 10 & 5 \\ D & 3 & 100 & 3 & 3 & 3 & 100 & 3 & 3 \\ N & 40k & 40k & 40k & 40k & 500k & 500k & 500k & 500k \\ \midrule LUT & 23304 (8.6\%) & 20567 (7.6\%) & 23776 (8.8\%) & 24351 (9.0\%) & 23304 (8.6\%) & 20567 (7.6\%) & 23776 (8.8\%) & 24351 (9.0\%) \\ \midrule LUTRAM & 1395 (1.0\%) & 1179 (0.8\%) & 1399 (1.0\%) & 1397 (1.0\%) & 1395 (1.0\%) & 1179 (0.8\%) & 1399 (1.0\%) & 1397 (1.0\%) \\ \midrule FF & 35682 (6.6\%) & 29775 (5.5\%) & 36374 (6.7\%) & 36336 (6.7\%) & 35682 (6.6\%) & 29775 (5.5\%) & 36374 (6.7\%) & 36336 (6.7\%) \\ \midrule BRAM & 12 (1.3\%) & 9.5 (1.0\%) & 12 (1.3\%) & 12 (1.3\%) & 12 (1.3\%) & 9.5 (1.0\%) & 12 (1.3\%) & 12 (1.3\%) \\ \midrule DSP & 23 (0.9\%) & 25 (1.0\%) & 25 (1.0\%) & 25 (1.0\%) & 23 (0.9\%) & 25 (1.0\%) & 25 (1.0\%) & 25 (1.0\%) \\ \midrule BUFG & 13 (3.2\%) & 13 (3.2\%) & 13 (3.2\%) & 13 (3.2\%) & 13 (3.2\%) & 13 (3.2\%) & 13 (3.2\%) & 13 (3.2\%) \\ \midrule MMCM & 1 (25.0\%) & 1 (25.0\%) & 1 (25.0\%) & 1 (25.0\%) & 1 (25.0\%) & 1 (25.0\%) & 1 (25.0\%) & 1 (25.0\%) \\ \midrule Freq. (MHz) & 103.6 & 103.6 & 103.6 & 103.6 & 103.6 & 103.6 & 103.6 & 103.6 \\ \bottomrule \end{tabular} \label{tab:resources} \end{table*} \begin{figure}[h] \centerline{\includegraphics[width=\linewidth]{testBarsEPSgrey8log_1}} \caption{Size of Tree objects in bytes for Nd, D and K. Each bar in every grouping depicts a tree with a maximum number of nodes from $2^0$ to $2^7$.} \label{fig:bytes} \end{figure} \subsection{Performance} These results were obtained by feeding the tree with datasets of K clusters in a D-dimensional space, consisting of N points. For these experimental runs, the entire dataset is transferred in a single operation to the FPGA's memory.
\begin{figure}[h] \centerline{\includegraphics[width=\linewidth]{treeCopy.pdf}} \caption{Illustrative visualisation of the tree model derived from the UCI Covertype dataset. The tree was only allowed to grow to 5 nodes.} \label{fig:viz} \end{figure} Looking at the first four rows of Table \ref{tab:benchmarks} (D=3), it can be observed that for a 3-dimensional dataset, regardless of the bundle size, the ARM CPU in the ZCU102 SoC significantly outperforms the FPGA implementation in both the training and inference tasks. Moreover, the performance gap between the two implementations grows with the number of samples processed, indicating that the kernel is slower, per iteration, than the pure software solution. Regarding the last four rows of Table \ref{tab:benchmarks} (D=100), the ARM CPU still outperforms the FPGA kernel in training, but by a smaller margin, and one that does not appear to grow with the added number of samples. On the inference task with this larger dataset, the FPGA outperforms the ARM processor by 8.3\texttimes. Table \ref{tab:uci_datasets} presents benchmarks on two of the UCI datasets used by Lin et al. \cite{Lin2019TowardsFPGA}. The same tree parameters were used ($\delta=0.001$, $\lambda=0.01$, $\tau=0.05$, $n_{min}=200$, $n_{pt}=10$, $n_{quantiles}=16$, $Nd=2047$), with one being of special relevance: Nd (maximum number of nodes). A significant slowdown occurred: with the increased number of nodes, the sequential tree traversal lengthens. Our HLS implementation achieves comparable accuracy for \emph{Bank}, although the performance for \emph{Covertype} is inferior; Lin et al. \cite{Lin2019TowardsFPGA} report 89.30\% and 72.51\%, respectively. We believe a difference in calculation precision between the CPU and the FPGA caused the degradation, despite the use of 32-bit floating-point data types on both devices.
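For reference, the split decision governed by the tree parameters listed above ($\delta$ and $\tau$ in particular) rests on the Hoeffding bound. The sketch below follows the standard formulation from the Hoeffding Tree literature and is illustrative, not a transcription of our HLS kernel.

```python
import math

def hoeffding_bound(R, delta, n):
    """Hoeffding bound: eps = sqrt(R^2 * ln(1/delta) / (2n)), where R is the
    range of the split heuristic (log2(K) for information gain, K classes)
    and n is the number of samples observed at the leaf."""
    return math.sqrt((R * R * math.log(1.0 / delta)) / (2.0 * n))

def should_split(g_best, g_second, n, n_classes, delta=0.001, tau=0.05):
    """Split when the gain difference between the two best attributes beats
    the bound, or when the bound falls below the tie-break threshold tau."""
    eps = hoeffding_bound(math.log2(n_classes), delta, n)
    return (g_best - g_second) > eps or eps < tau
```

With $\delta=0.001$ and $n_{min}=200$ samples, the bound for a 5-class problem is roughly $0.3$, so only clearly superior attributes trigger a split until more samples accumulate.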
\begin{table}[htb] \centering \caption{Training and inference times for four synthetic clustering datasets, for the ARM CPU (1.2 GHz) and the FPGA (103 MHz)} \begin{tabular}{ccccrrr} \toprule K & D & N & Task & ARM CPU & FPGA & Speedup \\ \midrule \multirow{8}{*}[-2.2em]{5} & \multirow{4}{*}[-1em]{3} & \multirow{2}{*}[-0.4em]{40k} & Training & 207 ms & 1,990 ms & 0.10\texttimes \\ \cmidrule{4-7} & & & Inference & 151 ms & 462 ms & 0.33\texttimes \\ \cmidrule{3-7} & & \multirow{2}{*}[-0.4em]{500k} & Training & 2,983 ms & 30,933 ms & 0.10\texttimes \\ \cmidrule{4-7} & & & Inference & 2,260 ms & 11,442 ms & 0.20\texttimes \\ \cmidrule{2-7} & \multirow{4}{*}[-1em]{100} & \multirow{2}{*}[-0.4em]{40k} & Training & 6,028 ms & 51,648 ms & 0.12\texttimes \\ \cmidrule{4-7} & & & Inference & 3,924 ms & 469 ms & 8.37\texttimes \\ \cmidrule{3-7} & & \multirow{2}{*}[-0.4em]{500k} & Training & 75,763 ms & 651,775 ms & 0.12\texttimes \\ \cmidrule{4-7} & & & Inference & 49,495 ms & 11,494 ms & 4.31\texttimes \\ \bottomrule \end{tabular} \label{tab:benchmarks} \end{table} \begin{table}[htb] \centering \caption{Training time and Accuracy (Acc.) for Covertype and Bank datasets, for the ARM CPU (1.2 GHz) and the FPGA (103 MHz)} \begin{tabular}{crr|rrr} \toprule & \multicolumn{2}{c}{ARM CPU} & \multicolumn{3}{c}{FPGA} \\ \cmidrule{2-6} & Acc. & Time & Acc. & Time & Speedup \\ \midrule Bank & 88.3\% & 202 ms & 88.3\% & 8,525 ms & 0.02\texttimes \\ \midrule Covertype & 72.2\% & 9,712 ms & 63.7\% & 374,600 ms & 0.03\texttimes \\ \bottomrule \end{tabular} \label{tab:uci_datasets} \end{table} \section{Related Work} Kulaga et al. \cite{Kuaga2014FPGAHls} present an \ac{HLS} decision tree ensemble solution for inference tasks. The achieved results are competitive in performance with the ARM core present in the tested SoC. However, the design is highly dependent on the number of trees and their corresponding depths, as a change in ensemble parameters requires re-tuning multiple pragmas.
As we have also seen, an unavoidable sequential portion of the algorithm is the sorting of samples through the tree structure. Unlike in our approach, the number of trees in an ensemble is hardcoded into the synthesised kernel. In contrast, by having one or more synthesised training/inference methods (for different hyper-parameters), we can deploy \emph{N} instances of such circuits and process a runtime-allocated number of trees. As previously stated, the work in this paper builds on the work of Lin et al. \cite{Lin2019TowardsFPGA}. However, their implementation is closed-source and done in Verilog, which excludes native execution on CPUs. Also, as the work was developed for a datacenter-class FPGA device, the implementation is very resource-intensive and thus not suitable for small devices such as the ones used in embedded systems. InAccel\footnote{\emph{InAccel, 2019, XGBoost Exact Updater IP core, } https://github.com/inaccel/xgboost} provides an HLS implementation of the XGBoost learning algorithm, which is also based on decision trees. For a dataset of 65k points, 5 classes, and 128 features, the training time is 2.7 seconds. This is significantly faster than our performance for similarly sized datasets, but InAccel's implementation targets server-grade FPGA accelerator boards (including multi-board setups), while we target the embedded domain. Nevertheless, it demonstrates the potential of HLS FPGA acceleration of decision tree algorithms, given expert optimisation of the code for HLS. \section{Conclusions} We presented a flexible and scalable implementation of a Hoeffding Tree compatible with HLS tools\footnote{https://github.com/Sleepy105/Hoeffding-Tree/tree/fpt21}. We performed a functional validation of the tree design, against software execution, by on-chip implementation on a Xilinx ZCU102.
We provide an evaluation of the design's resource usage for multiple template parameter values (i.e., maximum tree size, number of sample attributes, number of clusters, and number of dataset samples), as well as its execution time versus an ARM Cortex-A53 processor. The resource requirements of the tree do not scale significantly with problem size, although further HLS optimisations such as unrolling remain unexplored. Even so, we outperform the ARM by 8.3\texttimes\ on the largest dataset for the inference task, while being 8.6\texttimes\ slower during training. As future work, we envision the use of tree ensembles, and the partitioning of training and inference tasks between software and hardware based on problem size. \section*{Acknowledgments} This work was supported by the PEPCC project (PTDC\slash EEI-HAC\slash 30848\slash 2017), financed by Fundação para a Ciência e Tecnologia (FCT). \bibliographystyle{IEEEtran}
\section{Introduction} Cardiac diffusion tensor imaging is growing as a novel imaging modality, as it is capable of interrogating the microstructure of the beating heart without invasive surgery and without the use of any contrast agent \cite{mori_principles_2006}, making it accessible for patients with reduced kidney functionality \cite{schlaudecker_gadolinium-associated_2009} or for frequent scans. In clinical research studies, cardiac DTI has been shown to be useful in phenotyping several cardiomyopathies, such as hypertrophic cardiomyopathy (HCM) and dilated cardiomyopathy (DCM), by quantitatively analysing the microstructural organisation and orientation of cardiomyocytes within the myocardium. As cardiac DTI is becoming more and more studied, the use of deep learning-based (DL) approaches applied to it is similarly increasing \cite{phipps_accelerated_2021,schlemper_stochastic_2018,ferreira_automating_2020,ferreira_accelerating_nodate,karimi_diffusion_2022,weine_synthetically_2022,cao_cs-gan_2020,tanzer_faster_2022}. Most of the recent work in the field, however, suffers from a common crucial shortcoming: in order to quickly show the potential of deep learning for improving cardiac DTI, many publications do not go beyond applying some well-known general-purpose architecture to the data they have available, often ignoring inherent properties of the acquisition method. As an example, in much of the recently published work, we find a widespread use of out-of-the-box models such as the popular U-Net \cite{ronneberger_u-net_2015}, which is a 2D model designed for real-valued data. MRI data, on the other hand, is complex by definition, and the subject of the acquisition is often 3D rather than 2D, making the standard U-Net ill-fitting for the task. In this work, we compare the effect of making relatively small architectural changes to the popular U-Net model when applied to a general image-to-image task in DTI.
Specifically, we choose the task of removing artefacts from cardiac DTI images acquired with an SMS protocol, and we compare the classic 2D magnitude-only U-Net with 3D and complex versions of the same model, with the goal of providing future researchers with a better starting point for their experimental work. To this end, we also make the code for all of our tested models available on GitHub\footnote{\url{https://github.com/Michael-Tanzer/architectures-tanzer-stacom22}}. \section{Background} \subsection{Cardiac diffusion tensor imaging} Diffusion tensor imaging measures the diffusion of water molecules for every voxel in the imaged tissue and approximates it as a 3D tensor. As the free diffusion of water in the tissue is constrained by the shape of the cardiac muscle microstructure, studying such tensors has been shown to give us information related to the shape and orientation of the cardiomyocytes in the imaged tissue. The cardiac diffusion tensor information is commonly visualised and quantified through four per-voxel metric maps: Mean Diffusivity (MD), which quantifies the total diffusion in the voxel (higher corresponds to more diffusion); Fractional Anisotropy (FA), which quantifies the level of organisation of the tissue (higher corresponds to a higher organisation); and Helix Angle (HA) and Second Eigenvector (E2) Angle (E2A), which quantify the 3D orientation and shape of the tissue in the voxel \cite{basser1995inferring,kung2011presence}. These maps have been shown to be a promising tool for phenotyping many cardiac pathologies in a clinical setting \cite{bihan_diffusion_2001,niellesvallespin_cardiac_2020,nielles-vallespin_assessment_2017}. \subsection{Simultaneous multi-slice acquisition (SMS)} Simultaneous multi-slice techniques have been used with great success to reduce the acquisition time in brain diffusion tensor imaging (DTI) \cite{setsompop_improving_2012}.
SMS uses a multi-band excitation pulse to simultaneously excite several 2D slices within the imaged tissue. In SMS, each receiver coil collects a single frequency-domain image for all the excited slices; the received signal is a weighted sum of the signals that would be emitted by exciting each slice individually. By making use of the redundant information from the receiver coils that surround the tissue we want to image, we can then separate the signal from the excited slices using modified versions of the GRAPPA \cite{griswold_generalized_2002} and SENSE \cite{pruessmann_sense_1999} algorithms used for in-plane acceleration. Unfortunately, as the information is often insufficient and as the problem is not fully characterised, the slice-separation algorithms introduce artefacts in the separated slices \cite{barth_simultaneous_2016}. These are often referred to as inter-slice leakage artefacts, as they arise when information from one slice erroneously ends up on a different slice. Because of the spatial disposition of the MRI coils, the leakage between two slices is more evident when the slices are closer in the imaged tissue. \subsection{Deep learning in DTI} As cardiac DTI grows in popularity, much work has been published aiming to improve the acquisition quality or to shorten its long scan times. Among others, Ferreira et al. \cite{ferreira_accelerating_nodate}, T\"{a}nzer et al. \cite{tanzer_faster_2022}, and Phipps et al. \cite{phipps_accelerated_2021} reduced the number of repetitions needed to overcome the low SNR by using a de-noising framework to restore high image quality from less acquired data. Schlemper et al. \cite{schlemper_stochastic_2018} applied a cascade of convolutional neural networks to fill in k-space entries acquired with a compressed sensing protocol, further reducing the scan times. Ferreira et al.
\cite{ferreira_automating_2020} propose a U-Net for the segmentation of the left ventricle, automating part of the DTI maps computation; Cao et al. \cite{cao_cs-gan_2020} show how a GAN model can be used as a de-aliasing model for DTI; and the work of Tian et al. \cite{tian_deep_2020} on self-supervised DTI de-noising is also based on a modified U-Net model. Most of the examples reported above, despite all being great contributions to the field of deep-learning-accelerated cardiac DTI, have a 2D real-valued U-Net model at their core. This shows that the choice of data type and dimensionality is not the top priority of many influential publications. \section{Methods} \subsection{Complex neural networks} As MRI data is inherently complex, we explore the possibility of training using complex data. Traditionally, there are two main ways to achieve this: separating the complex data into non-complex components, or using a model that performs complex operations. The former is the more straightforward approach: the complex data is split into its real and imaginary components, or into magnitude and phase, and a real-valued model is trained using the split data. This can be done in multiple ways: either by treating the components as different channels, or by training two separate models, one for each component. In our comparison we split the data into magnitude and phase, as they are more meaningful in the physics of MRI, and we use them as separate input channels to a unified model. The latter option requires more care: while some operations naturally extend to the complex domain, others do not and need to be re-designed. We report the main changes to the used operators in Table \ref{tab:complex-nns}. In the table, $z = a + ib$ where $i = \sqrt{-1}$, \textit{c} is the convolution layer, and \textit{tc} is the transpose convolution layer. \begin{table}[tbh] \centering \footnotesize \begin{tabular}{llll} \toprule \multicolumn{2}{c}{Operation} & Naive imp.
& Complex equivalent \\ \midrule \midrule Multiplication & Inner product $\ $ & $\sum_i w_i z_i$ & \checkmark \\ & Convolution & $\operatorname{c}(a+ib)$ & $ \operatorname{c}_r (a) - \operatorname{c}_i(b) + i ( \operatorname{c}_i (a) + \operatorname{c}_r (b))$ \\ & Trans. conv. & $ \operatorname{tc}(a+ib)$ & $ \operatorname{tc}_r (a) + \operatorname{tc}_i(b) + i ( \operatorname{tc}_i (a) - \operatorname{tc}_r (b))$ \\ \midrule Activation & Sigmoid & $\frac{1}{1+e^{-(a + ib)}}$ & $\operatorname{sigmoid}(a) + i\operatorname{sigmoid}(b)$ \\ & ReLU & $\max(0, a + ib)$ & \begin{tabular}[c]{@{}l@{}}$\left\{\begin{array}{ll} (|z|+b) \frac{z}{|z|} & \text { if }|z|+b \geq 0 \\ 0 & \text { if }|z|+b<0\\ \end{array}\right.$\end{tabular} \\ & Dropout & $\operatorname{DO}(a) + i \operatorname{DO}(b)\ \ $ & $\operatorname{DO}(a + ib)$ \\ \midrule Pooling & Max & $\max_i z_i$ & $z_k$ where $\operatorname{argmax}_k\{|z_k|\}$ \\ & Average & $\frac{1}{N} \sum_i z_i$ & \checkmark \\ \midrule Normalisation $\ $ & Batch norm & $\mathbf{\hat{z}} = \frac{\mathbf{z} - \mathbb{E}[\mathbf{z}]}{\sqrt{\mathbb{V}[\mathbf{z}] + \epsilon}}$ & \checkmark\\ \midrule Loss & Euclidean & $|| Y - Z ||^2_2$ & $|| \operatorname{abs}(Y) - \operatorname{abs}(Z) ||_2^2$ \\ & L1 & $|| Y - Z ||_1$ & $|| \operatorname{abs}(Y) - \operatorname{abs}(Z) ||_1$ \\ \bottomrule\\ \end{tabular} \caption{Neural network operators and their complex counterparts. Operations that do not need substantial modification to extend to the complex domain are marked with a checkmark.} \label{tab:complex-nns} \end{table} Once we have defined these basic operations, we can build a model that uses them and is therefore fit to process complex numbers. The advantage of this approach is that the model makes use of the properties of complex numbers during training, instead of letting the network learn a relationship between the input channels, as in the previous method.
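The complex convolution rule in Table \ref{tab:complex-nns} can be realised with four real-valued convolutions, two per output component. The NumPy sketch below checks this identity for a 1D signal; it is illustrative (the models in this work use 2D layers, and the function name is ours).

```python
import numpy as np

def complex_conv1d(z, w):
    """Complex convolution built from real convolutions, as in the table:
    c(a + ib) = c_r(a) - c_i(b) + i * (c_i(a) + c_r(b)),
    where c_r / c_i denote convolution with the real / imaginary kernel."""
    a, b = z.real, z.imag
    wr, wi = w.real, w.imag
    real = np.convolve(a, wr) - np.convolve(b, wi)
    imag = np.convolve(a, wi) + np.convolve(b, wr)
    return real + 1j * imag
```

The result matches convolving the complex arrays directly, which is exactly why a complex layer can be implemented on top of standard real-valued convolution primitives.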
\subsection{Experimental setting} In order to provide a better understanding of the effect of using a 3D or complex model, we compare all combinations of architectures obtained by modifying the following properties: \begin{itemize} \item 2D vs 3D: whether the model uses 2D operations or 3D operations. \item Magnitude vs complex data: whether we train the model with the magnitude component alone or with the full complex representation of the data. Notice that ``complex data" is further split into 1. fully complex models that use complex operations and 2. models that keep standard operations but use the phase data as a separate input channel. We refer to magnitude-only models as ``Mag", fully complex models as ``Comp", and magnitude-and-phase models as ``MagPhs" for brevity. \item All slices vs individual slices: whether the model is trained to correct all SMS-acquired slices simultaneously or a single slice at a time. When all the slices are used and the model is 3D, the slices are arranged in the third spatial dimension, while for 2D models the slices are treated as image channels. \end{itemize} This comparison results in 12 combinations of dimensionality (2 types), data (3 types), and input data (2 types). When a single model is referred to, we often use a shorthand: for instance, a 2D fully-complex model trained on all the SMS slices is shortened to ``2D-All-Comp". All the models were trained for 200 epochs with the Adam optimiser \cite{kingma_adam_2017}, a learning rate of 0.0003 that was lowered by a factor of 10 after 100 epochs, a batch size of 16, mean absolute error loss, and residual learning. The data was padded and normalised in the range 0 to 1 and then randomly augmented with random rotation and random vertical and horizontal flipping. All the results were computed on the test set for the epoch in which the validation MAE was lowest.
In order to ensure consistency, we kept the model size and architecture as fixed as possible by choosing a number of parameters, 3 million, and a general architecture (U-Net with 5 layers with a doubling number of channels in each encoding layer), and subsequently adjusting the initial number of channels. The models have the following starting number of channels: \begin{itemize} \item 2D Mag and MagPhs: 28 \item 3D Mag and MagPhs: 16 \item 2D Complex: 20 \item 3D Complex: 11 \end{itemize} The data used for the training came from 31 ex-vivo swine hearts and was acquired with an SMS factor equal to 2 and a distance factor equal to 400\%. Each heart was scanned in multiple locations to cover as much of the volume as possible. The ground-truth images were obtained by scanning each heart again with the same protocol but no SMS acceleration. This results in around 43,000 2D complex slices for training, 1200 slices for validation and 1200 slices for testing. \subsection{Results evaluation} When working with DTI data, there are two main components we are interested in evaluating: the acquired images and the DTI maps derived from them. The latter are particularly important as they are the main tools a clinician would use in a clinical setting. When evaluating the artefact-removal results of our models, we therefore need to take both into account. \noindent To evaluate the image quality we use mostly standard, widespread metrics: \begin{itemize} \item Mean Absolute Error (MAE) ↓: $\frac{1}{nm}\sum_{i=1}^n \sum_{j=1}^m \left| X^{(i, j)} - Y^{(i, j)} \right|$ where $X$ and $Y$ are the predicted and target images, respectively. For complex images the MAE is computed with respect to their magnitude, and for MagPhs the phase information is not taken into account.
\item Peak Signal to Noise Ratio (PSNR) ↑: $10 \log_{10}\left(\frac{\operatorname{MAX}_X^2}{\operatorname{MSE}}\right)$ where $\operatorname{MAX}_X$ is the maximum pixel intensity value across the image and $\operatorname{MSE}$ is the mean squared error between the predicted image and the target image. In the case of complex images we compute the PSNR on their magnitude, and for the MagPhs case we discard the phase data. \item Structural Similarity Index (SSIM) ↑: SSIM measures the perceived image degradation based on the loss of structure between the output and target images. It is computed as follows: \begin{equation} \operatorname{SSIM}(X, Y)=\frac{\left(2 \mu_{X} \mu_{Y}+c_{1}\right)\left(2 \sigma_{XY}+c_{2}\right)}{\left(\mu_{X}^{2}+\mu_{Y}^{2}+c_{1}\right)\left(\sigma_{X}^{2}+\sigma_{Y}^{2}+c_{2}\right)} \end{equation} Where $\mu_I$ represents the mean over $I$, $\sigma_I$ the standard deviation over $I$, $\sigma_{IJ}$ the covariance, and $c_1$ and $c_2$ are fixed scalars used for numerical reasons. \end{itemize} When analysing the DTI maps, we need to distinguish between scalar maps (MD and FA) and angular maps (HA and E2A). While for scalar maps we can use the MAE (↓) as an error metric, angular maps are defined in the range $[-90^\circ, 90^\circ)$ and wrap around at the two extrema of this range. For these maps we instead use the Mean Angle Absolute Error (MAAE, ↓) defined below: \begin{equation*} \operatorname{MAAE}(X, Y) = \frac{1}{NM} \sum_{i=1}^{N} \sum_{j=1}^{M} \begin{cases} \left|X^{(i,j)} - Y^{(i,j)}\right|, & \text{if } \left|X^{(i,j)} - Y^{(i,j)}\right| < 90^\circ\\ 180^\circ - \left|X^{(i,j)} - Y^{(i,j)}\right|, & \text{otherwise} \end{cases} \end{equation*} Moreover, as the DTI maps are not well-defined for the background voxels, we only consider the metrics computed on voxels belonging to the cardiac tissue. All the results are reported as the median over the per-slice means and its inter-quartile range, as \textit{median [iqr]}.
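The wrap-around handling in the MAAE definition above can be implemented compactly. The following NumPy sketch is an equivalent vectorised form, shown for illustration rather than as our evaluation code.

```python
import numpy as np

def maae(x, y):
    """Mean Angle Absolute Error for angular maps defined on [-90, 90) deg.
    Absolute differences above 90 degrees wrap around: |d| -> 180 - |d|."""
    d = np.abs(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return np.mean(np.where(d < 90.0, d, 180.0 - d))
```

For example, angles of $-85^\circ$ and $85^\circ$ are only $10^\circ$ apart once the wrap-around is taken into account, not $170^\circ$.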
When we perform a statistical significance test we use the Wilcoxon rank test with $P=0.05$. \section{Results} In Tables \ref{tab:numerical-results-maps} and \ref{tab:numerical-results-images} we report the performance metrics for the output images and for the DTI maps on the test set for all our models. The values of MD have been scaled by $10^5$ and the values of FA by $10^2$ to improve readability. In the table we mark in bold the best result across all models and we underline the second-best result. The values are also colour-coded from green (best result) to red (worst result) through yellow (median result) on a per-metric basis to better compare the models at first glance. \begin{table}[tbh] \footnotesize \centering \begin{tabular}{@{}l|rrrrrrrr@{}} \toprule & \multicolumn{2}{c}{HA} & \multicolumn{2}{c}{E2A} & \multicolumn{2}{c}{MD ($\times10^5$)} & \multicolumn{2}{c}{FA ($\times10^2$)} \\ \cmidrule(l){2-9} \multirow{-2}{*}{Run name} & Median & IQR & Median & IQR & Median & IQR & Median & IQR \\ \midrule 2D-ALL-ABS & \cellcolor[HTML]{E0E5B6}16.9 & 5.0 & \cellcolor[HTML]{E8E9BC}26.1 & 4.8 & \cellcolor[HTML]{F4EDC4}5.44 & 1.51 & \cellcolor[HTML]{FFCDB1}5.95 & 1.19 \\ 2D-ALL-Comp & \cellcolor[HTML]{FFE8C5}18.0 & 4.6 & \cellcolor[HTML]{F2EDC3}26.4 & 4.7 & \cellcolor[HTML]{FBF0C9}5.50 & 0.97 & \cellcolor[HTML]{A9D08E}\ftextbf{5.41} & 1.33 \\ 2D-ALL-MagPhs & \cellcolor[HTML]{DCE4B2}16.7 & 2.6 & \cellcolor[HTML]{A9D08E}\ftextbf{24.5}& 4.4 & \cellcolor[HTML]{FFEFCA}5.55 & 1.03 & \cellcolor[HTML]{FFEDC9}5.86 & 0.40 \\ 3D-ALL-ABS & \cellcolor[HTML]{E1E6B6}16.9 & 4.3 & \cellcolor[HTML]{F4EDC4}26.4 & 3.5 & \cellcolor[HTML]{FFAA98}5.98 & 1.26 & \cellcolor[HTML]{FF6565}6.24 & 0.59 \\ 3D-ALL-Comp & \cellcolor[HTML]{FF6565}19.4 & 4.8 & \cellcolor[HTML]{FF6E6B}27.8 & 5.1 & \cellcolor[HTML]{FF897F}6.19 & 1.47 & \cellcolor[HTML]{FFB09C}6.03 & 3.26 \\ 3D-ALL-MagPhs & \cellcolor[HTML]{A9D08E}\ftextbf{15.0} & 3.8 & \cellcolor[HTML]{D1DFAA}{\ul 25.5} & 4.8 & 
\cellcolor[HTML]{B1D394}{\ul 4.89} & 0.67 & \cellcolor[HTML]{B2D395}{\ul 5.46} & 1.35 \\ 2D-Single-ABS & \cellcolor[HTML]{D3E0AC}{\ul 16.4} & 4.2 & \cellcolor[HTML]{D1DFAA}{\ul 25.5} & 5.5 & \cellcolor[HTML]{DEE5B4}5.26 & 1.28 & \cellcolor[HTML]{FCF0C9}5.83 & 0.51 \\ 2D-Single-Comp & \cellcolor[HTML]{FFAF9B}18.6 & 4.7 & \cellcolor[HTML]{FFAE9A}27.3 & 5.0 & \cellcolor[HTML]{D9E3B1}5.22 & 0.95 & \cellcolor[HTML]{D4E1AD}5.63 & 2.25 \\ 2D-Single-MagPhs$\ \ $ & \cellcolor[HTML]{FF9487}18.9 & 3.8 & \cellcolor[HTML]{FFB59F}27.2 & 4.7 & \cellcolor[HTML]{FFD5B7}5.71 & 1.17 & \cellcolor[HTML]{FFD4B6}5.93 & 0.37 \\ 3D-Single-ABS & \cellcolor[HTML]{FBF0C9}17.8 & 3.1 & \cellcolor[HTML]{FFD2B5}27.0 & 5.4 & \cellcolor[HTML]{FF6565}6.41 & 1.92 & \cellcolor[HTML]{EEEBBF}5.76 & 0.34 \\ 3D-Single-Comp & \cellcolor[HTML]{FF9487}18.9 & 5.2 & \cellcolor[HTML]{FFCBB0}27.0 & 5.3 & \cellcolor[HTML]{A9D08E}\ftextbf{4.82}& 0.96 & \cellcolor[HTML]{B6D597}5.48 & 1.12 \\ 3D-Single-MagPhs & \cellcolor[HTML]{FFC8AD}18.4 & 3.2 & \cellcolor[HTML]{FF6565}27.9 & 6.0 & \cellcolor[HTML]{FF7A75}6.28 & 1.88 & \cellcolor[HTML]{FFE6C3}5.88 & 0.72 \\ \bottomrule \multicolumn{5}{c}{} \end{tabular} \caption{Numerical results related to the DTI maps computed from the AI-processed data. We report MAE for MD and FA and MAAE for HA and E2A. 
In bold and underlined, respectively, the best and second-best results for each metric.} \label{tab:numerical-results-maps} \end{table} \begin{table}[tbh] \footnotesize \centering \begin{tabular}{@{}l|rrrrrr@{}} \toprule & \multicolumn{2}{c}{MAE ($\times10^3$)} & \multicolumn{2}{c}{PSNR} & \multicolumn{2}{c}{SSIM} \\ \cmidrule(l){2-7} \multirow{-2}{*}{Run name} & Median & IQR & Median & IQR & Median & IQR \\ \midrule 2D-ALL-ABS & \cellcolor[HTML]{FFF2CC}1.75 & 0.73 & \cellcolor[HTML]{BED89D}{\ul 37.2} & 3.56 & \cellcolor[HTML]{FFF1CB}0.917 & 0.021 \\ 2D-ALL-Comp & \cellcolor[HTML]{FFF1CB}1.82 & 0.78 & \cellcolor[HTML]{FFF1CB}36.6 & 3.26 & \cellcolor[HTML]{FFEEC9}0.911 & 0.026 \\ 2D-ALL-MagPhs & \cellcolor[HTML]{FFF0CB}1.84 & 0.78 & \cellcolor[HTML]{A9D08E}\ftextbf{37.4}& 2.85 & \cellcolor[HTML]{DFE6B5}0.921 & 0.020 \\ 3D-ALL-ABS & \cellcolor[HTML]{F9EFC7}1.74 & 0.62 & \cellcolor[HTML]{DBE4B2}37.0 & 2.80 & \cellcolor[HTML]{EDEBBF}0.920 & 0.018 \\ 3D-ALL-Comp & \cellcolor[HTML]{FF6565}7.15 & 3.92 & \cellcolor[HTML]{FF9E8F}31.1 & 1.41 & \cellcolor[HTML]{FF6565}0.618 & 0.012 \\ 3D-ALL-MagPhs & \cellcolor[HTML]{DAE3B1}1.71 & 0.71 & \cellcolor[HTML]{FFD8B9}34.9 & 3.94 & \cellcolor[HTML]{CCDEA8}0.922 & 0.020 \\ 2D-Single-ABS & \cellcolor[HTML]{BDD89C}1.68 & 0.76 & \cellcolor[HTML]{C0D99F}37.2 & 4.52 & \cellcolor[HTML]{BFD99E}{\ul 0.923} & 0.022 \\ 2D-Single-Comp & \cellcolor[HTML]{FFEBC7}2.04 & 0.75 & \cellcolor[HTML]{FFECC7}36.2 & 3.26 & \cellcolor[HTML]{FFE5C3}0.892 & 0.024 \\ 2D-Single-MagPhs$\ \ $ & \cellcolor[HTML]{AED291}{\ul 1.67} & 0.75 & \cellcolor[HTML]{FDF2CB}36.6 & 5.31 & \cellcolor[HTML]{A9D08E}\ftextbf{0.925}& 0.022 \\ 3D-Single-ABS & \cellcolor[HTML]{A9D08E}\ftextbf{1.66} & 0.72 & \cellcolor[HTML]{FF6565}27.2 & 5.46 & \cellcolor[HTML]{FFEDC8}0.909 & 0.028 \\ 3D-Single-Comp & \cellcolor[HTML]{FF978A}5.24 & 4.19 & \cellcolor[HTML]{FFEBC7}36.2 & 3.37 & \cellcolor[HTML]{FFE4C2}0.889 & 0.020 \\ 3D-Single-MagPhs & \cellcolor[HTML]{D3E0AC}1.70 & 0.77 & 
\cellcolor[HTML]{CCDEA7}37.1 & 3.83 & \cellcolor[HTML]{C0D99F}{\ul 0.923} & 0.026 \\ \bottomrule \multicolumn{5}{c}{} \end{tabular} \caption{Numerical results related to the artefact-removal output of our proposed AI models. The results here refer to the images produced by our model. In bold and underlined, respectively, the best and second-best results for each metric.} \label{tab:numerical-results-images} \end{table} We also visually report an example chosen from the test set in Figure \ref{fig:images}. \begin{figure}[tbh] \centering \includegraphics[width=0.85\textwidth]{images_comparison.drawio.pdf} \caption{DTI maps for an example case from the test set.} \label{fig:images} \end{figure} \section{Discussion} Analysing Tables \ref{tab:numerical-results-maps} and \ref{tab:numerical-results-images}, we can see how simpler 2D models that only use magnitude data seem to be extremely stable and easy to train, resulting in very good performance overall. Moreover, these models were also faster to train compared to 3D or complex models. 2D MagPhs models also performed remarkably well given how little the architecture differs from the 2D real-valued model. Fully complex and 3D models are significantly slower to train on average due to the higher number of operations needed. Moreover, they are also associated with worse performance, especially for the DTI map results. There is an important point to be made about the worse performance of the more advanced models: as we aimed to keep the number of parameters fixed, we reduced the number of channels for each layer in the more advanced models, reducing their effective capacity and therefore negatively affecting their performance. It can be hypothesised that these more advanced models would perform better given a higher number of fixed parameters and a longer training time.
Moreover, as our dataset was acquired with an SMS factor equal to 2, the 3D architectures have little additional spatial information to exploit when removing the artefacts from the slices. If the acceleration factor were higher, it is likely that better exploiting the 3D spatial information would produce better performance. Visually, from Figure \ref{fig:images}, we can notice how the AI-derived maps all show extremely similar flaws regardless of the model used (e.g. the top-left quadrant of the HA maps), suggesting that 1. the information learned by the models is similar, and 2. almost none of the models is able to overcome the incorrect information present in the SMS version of the images used as input. \section{Conclusion} As cardiac DTI comes closer and closer to being a clinical reality, deep learning is also becoming a vital tool in alleviating some of its downsides, such as long scan times and low SNR. In the rush associated with a newly emerging field, many authors prioritise proof-of-concept work to showcase innovative ideas over exploring and comparing known and common options. In this work we lay out the basis of a comparison between input types and model dimensionality, and we show how, despite our initial assumption, given a fixed number of parameters and a reasonable training time, 2D models vastly outperform their 3D counterparts and complex-valued networks are not preferable. On the other hand, the seemingly naive use of separate input channels for magnitude and phase data yields better performance than discarding the phase information, as is commonly done. As advice for future studies, we suggest the use of 2D models and, if available, the use of phase information together with the magnitude information as a starting point for model development. \clearpage \bibliographystyle{splncs04}
\section{Introduction} Internet Service Providers (ISPs) recognize Network Function Virtualization (NFV) as a key concept for reducing capital and operational expenditures. In NFV, service provisioning is achieved by concatenating Virtual Network Functions (VNFs) in a specific sequence order, defined as Service Function Chains (SFCs). The placement of VNFs is a well-known problem in the community, which can follow different optimization objectives, such as network load balancing and end-to-end delay. Once VNFs are deployed in the network, the dynamic traffic demand patterns require either reallocation or scaling of VNFs to pursue different objectives. Moreover, part of the workload may need to be migrated to the cloud due to, for instance, non-optimal deployments or insufficient resources within the physical servers of the ISP. The migration and replication of VNFs is a problem widely studied from different perspectives to date. All studies show that, when performing migrations at runtime, the active flows need to be rerouted, causing service disruptions. The use of replications, on the other hand, requires extra server resources, due to virtualization overhead, and extra network resources, due to state synchronization tasks. From an ISP-centric point of view, the use of third-party clouds for a possible migration or replication of VNFs has an impact not only on the performance of the system, but also on the monetary costs for the ISP when using third-party cloud services. For these reasons, accurate prediction of future resource utilization or traffic demand values is key for ISPs to proactively allocate their resources. We propose to study how traffic forecasting can help reduce the number of migrations and replications in ISPs, as well as the related placements in third-party clouds.
We formulate the placement problem as a Mixed-Integer Linear Programming (MILP) model and solve the placement in two phases, the latter one focused on migrations and replications in order to better understand their effects. We analyze and compare three scenarios for the VNF migrations and replications based on: (i) the currently observed traffic demands only, (ii) a specific maximum traffic demand value observed in the past, or (iii) predicted traffic values. In the latter case, we specifically use Long Short-Term Memory (LSTM) networks for traffic prediction. The placement model also considers the impact of migrations on the service delays due to service interruptions, and the impact of replications on the network and server resource utilization due to virtual machine (VM) overhead and synchronization traffic. Since the MILP model cannot be used as an online solution, we propose a greedy algorithm for that purpose and analyze its performance. The rest of the paper is organized as follows. Section II presents related work and our contribution. Section III describes the reference scenario. Section IV formulates the optimization model. Section V describes the online heuristic approaches. Section VI analyzes the performance of the model and heuristics, and Section VII concludes the paper. \section{Related Work and Our Contribution} \subsection{VNF placement, migrations and replications} A significant amount of previous work has focused on the placement of virtual resources for VNFs \cite{Laghrissi2019}, especially on variants of the joint optimization placement problem with different objectives. For instance, in \cite{Tajiki2017}, a resource allocation solution is proposed for optimizing energy efficiency while considering delay, network and server utilization. \cite{Basta2017} proposed models to find the optimal dimensioning and resource allocation with latency constraints in mobile networks.
\cite{Qu_2017} studied how to optimize the VNF placement and traffic routing while considering reliability and end-to-end delays. In \cite{Golkarifard_2021}, the authors propose to solve a joint decision problem when placing VNFs considering multiple real-world aspects in order to deal with highly varying traffic requests. Within the placement problem topic, migrations and replications of VNFs are known as specific sub-problems that need to be solved in the context of resource and service management. Regarding migrations, since VNFs commonly run over VMs, there is the possibility of migrating VMs entirely \cite{Xia2016} or migrating only the internal states of VNFs \cite{Xia2016a} to new VMs. In this regard, while the interruption and rerouting of active flows is possible \cite{Gember-Jacobson2014}, there is always a service downtime duration that will vary depending on the path latencies \cite{Taleb2019}. Some authors, e.g., \cite{Cziva2018}, propose a dynamic placement scheduler to minimize the end-to-end latencies when performing migrations. In \cite{Eramo2017}, a trade-off was found between the power consumption and QoS degradation to determine whether a migration is appropriate, in order to minimize its negative impact due to the service interruptions. On the other hand, replications have been primarily used to provide service reliability \cite{michael2016, Engelmann2018}, whereby minimization of the number of required replicas \cite{Ding2017} is one of the main objectives. In addition, replications need to be studied in the context of the reduction of end-to-end service delays \cite{Yuan2020}, load balancing on the network links \cite{Carpio2017a} or on the servers \cite{Carpio2017b}. Studies combining both migrations and replications have also been carried out, e.g., \cite{Huang2018}, where a balance between the number of migrations and replications is proposed in order to maximize the network throughput and minimize the delay.
In our previous work \cite{Carpio2018}, we proposed an optimization method to derive a trade-off between migrations and replications while improving server and network load balancing and QoS. Unlike migrations, replications need to consider the impact of state synchronization traffic between VNFs, which is an important issue that adds considerable traffic overhead in the network \cite{Alharbi2019}. \subsection{Traffic forecasting and VNF resource requirement predictions} While NFV provides network operators more flexibility to instantiate VNFs at runtime, the dynamic change of network states due to the highly variant traffic load at the edge requires prediction mechanisms to proactively adapt the placement of VNFs accordingly. To address this issue, two approaches have been proposed: one predicts the resources that VNFs will require based on their past utilization \cite{Mijumbi_2016}, while the other uses traffic forecasting techniques to calculate how many resources the VNFs will need to serve that traffic \cite{Rahman_2018}. In both cases, the methods rely either on statistical analysis of time series or on machine learning. Examples of statistical analysis can be found, for instance, in \cite{Yao_2020}, where the authors introduce a mechanism based on Fourier series to determine upcoming demands to perform online VNF scaling. In \cite{Sun_2016}, the authors also use Fourier series with the same purpose but, in this case, with the objective of reducing blocking probability. A slightly different approach in this area is proposed in \cite{Tang_2019}, where a linear-regression-based method is used to predict traffic and to scale VNFs in order to improve service availability. Yet another example, \cite{Qu_2020}, uses a fractional Brownian motion (fBm) traffic model to learn traffic parameters in order to predict time-varying VNF resource demand.
Most of the recent work in this area, however, includes machine-learning-based methods. In the area of predicting resource requirements, \cite{Mijumbi2017a} uses Feedforward Neural Networks (FNNs) to predict future requirements of VNFs based on their past utilization and the influence of neighboring VNFs. With a similar objective, \cite{Shi2015} uses a Bayesian learning approach to learn from historical resource usage data of VNFs and predict future resource reliability. Another example, \cite{Kim_2019}, uses a specific type of Recurrent Neural Network (RNN) based on attention and embedding techniques jointly with a Long Short-Term Memory (LSTM) model to predict the CPU utilization of VNFs with high accuracy. For traffic forecasting with ML, \cite{Alawe_2018} uses both RNNs and Deep Neural Networks (DNNs) to forecast traffic changes and shows that these methods can improve delay when provisioning new resources to VNFs, as compared to threshold-based methods. One of the main objectives of traffic prediction is to determine when to scale VNFs, as discussed in \cite{Subramanya_2019}, which proposes the use of a Multilayer Perceptron (MLP) to predict the required number of VNFs in relation to the network traffic and to scale the deployment of VNFs accordingly. \subsection{Our Contribution} So far, we lack studies on how traffic prediction can be used to minimize migrations and replications of VNFs. To this end, we study how traffic forecasting can help reduce the number of migrations and replications of VNFs by optimizing their placement in a proactive manner. This is motivated especially by three previously mentioned studies, \cite{Golkarifard_2021}, \cite{Alawe_2018} and \cite{Subramanya_2019}, which showed the need to consider highly varying traffic requests when placing VNFs in 5G networks and the role that traffic forecasting plays in the placement and scaling of VNFs.
We analyze this problem from an ISP point of view by using a generic multipath-based MILP model comparing three scenarios: (i) VNFs are placed considering only the currently observed traffic demands, (ii) VNFs are placed considering 80\% of the maximum traffic demand value, and (iii) VNFs are placed considering predicted traffic values. For traffic forecasting, we use an LSTM model, which has proven to be one of the most accurate methods for time series forecasting problems. The placement model also considers the impact that migrations of VNFs have on the service delays due to service interruptions, considering individual delays for each traffic demand on a per-path basis. Regarding replications, we consider their impact on the network and server resource utilization due to VM overhead and the synchronization traffic used for maintaining states. Additionally, we propose a greedy algorithm as an online solution for the MILP model and compare it to basic random- and first-fit approaches. Finally, we contribute by showing that traffic prediction can reduce the number of migrations when enough resources are available to allocate replicas, while also reducing the utilization of the cloud. \section{Reference Scenario} \label{sec_3} We assume that an ISP owns the network infrastructure close to the end users, where it installs small groups of servers for the NFV infrastructure. We also assume that the ISP uses the cloud as a third party to offload VNFs when, for instance, its own infrastructure cannot deploy new VNFs. Our model follows a two-phase optimization process in order to study the impact that migrations and replications of VNFs have on the ISP network while minimizing the utilization of the cloud.
\subsection{Optimization Scenarios and assumptions} \label{opt_scenarios} Since our approach to optimization is carried out from the point of view of an ISP that owns the physical server infrastructure, given a certain network topology with a certain number of servers located in network nodes, we assume that all nodes of that topology have direct links to a third-party cloud server. The resource utilization of the links connecting to the cloud and of the cloud servers is not considered in the analysis, but the geographic location of the cloud servers is, because of its impact on service delay. The optimization is divided into two phases. During the first one, at a certain time step $t$, the model minimizes the placement of VNFs in the cloud, so that the ISP network is utilized as much as possible, and also minimizes the number of VNF replicas. After that, a second placement is carried out at time $t + \Delta t$ while considering the initial placement of VNFs obtained during the first phase. In this case, minimizing the migration of VNFs from the first placement is also added to the objective, together with the minimization of replications and cloud VNFs. Since the traffic demands, and, therefore, the amount of resources allocated by VNFs, vary over time, during the first phase at time $t$ a certain traffic bandwidth is considered which is different from the one considered during the second phase after $\Delta t$. The main objective is, therefore, to study how migrations and replications can be minimized in the network while at the same time also reducing the usage of the cloud. This is done while comparing three different scenarios for the optimization during the first phase: i) considering the currently observed traffic demands at time $t$, ii) considering 80\% of the maximum value the traffic demands can have, and iii) considering the predicted traffic demands at time $t + \Delta t$.
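The difference between the three scenarios lies only in which bandwidth value is assumed for each traffic demand during the first placement. A minimal sketch (the function and scenario names are ours, not from the model):

```python
def phase_one_demand(scenario, observed_t, demand_max, predicted_t_plus_dt):
    """Bandwidth assumed for one traffic demand during the first placement.

    scenario: 'observed', 'max80' or 'predicted', mirroring cases
    i)-iii) above; the names are illustrative, not from the paper.
    """
    if scenario == "observed":
        return observed_t                 # case i): current traffic at time t
    if scenario == "max80":
        return 0.8 * demand_max           # case ii): 80% of the maximum value
    if scenario == "predicted":
        return predicted_t_plus_dt        # case iii): forecast for t + dt
    raise ValueError(f"unknown scenario: {scenario}")
```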
For the sake of simplicity, we consider that a VNF instance maps 1:1 to a VM, where some server resources are reserved for the VM independently of the processed traffic. We define the end-to-end service delay as the sum of the propagation delay (time for the data to travel through the fiber), the processing delay (time for the VNF to process the data) and the service interruption delays caused by migrations. These delays will be explained in detail in the next section; however, let us shortly focus on the migration process in order to better understand its impact on the service delay. We assume a migration occurs when a VNF is reallocated to a new location while there are still active flows being served. So, we omit here the case of cold migrations. Most of the migration process occurs without affecting the delay perceived by the end user since, before performing a migration, a new VNF instance is deployed in a new location and its state is synchronized with the old instance. However, we consider that there is always a short interruption of the active flows to switch over to the new VNF \cite{Taleb2019}. In this sense, the service delay can be interpreted as a worst-case delay. In our model, we consider a multipath-based approach where every SFC can use multiple paths, whereby each path can exhibit different delays because different links and VNFs are traversed. On the other hand, we make use of replications to address scalability, but without introducing delays, since the replication process does not stop active flows. We do, however, consider the synchronization traffic between replicas in order to keep their states synchronized, as we detail later.
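The worst-case service delay just described can be sketched as follows; this is an illustrative simplification in which a migration adds the downtime once per affected demand, and units and values are arbitrary:

```python
def service_delay(link_delays, proc_delays, migrated, downtime):
    """Worst-case end-to-end delay of one traffic demand on one path.

    link_delays: propagation delays of the traversed links
    proc_delays: processing delays at the traversed VNFs
    migrated:    True if a traversed VNF was migrated, which adds the
                 service-interruption downtime once (a simplifying
                 assumption of this sketch, not the full model)
    """
    delay = sum(link_delays) + sum(proc_delays)
    if migrated:
        delay += downtime
    return delay
```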
\begin{figure}[!t] \centering \subfloat[First placement]{\includegraphics[width=0.8\columnwidth]{model_init}% \label{model_init}} \hfil \subfloat[Migration during second placement]{\includegraphics[width=0.8\columnwidth]{model_mgr}% \label{model_mgr}} \hfil \subfloat[Replication during second placement]{\includegraphics[width=0.8\columnwidth]{model_rep}% \label{model_rep}} \caption{Examples of different possible scenarios for the model} \label{models} \end{figure} \subsection{Migrations and replications} To better understand the model, let us now illustrate an example (shown in Fig. \ref{model_init}) of an SFC providing service to traffic demands $\lambda_1$ and $\lambda_2$ with two chained VNFs, $v_1$ and $v_2$, instantiated in server $x_1$, from node $n_1$, and server $x_2$, from node $n_2$, respectively. Depending on the functionality, every VNF can be of a different type $t$; however, for simplification, in this example we assume all VNFs are of the same type $t$, so they all require the same amount of resources. The service delay is calculated as the sum of propagation delays, processing delays and service interruption delays. As an example, assuming $D_{l}$ is the propagation delay of a link $l$ and $d_{x,v}^{\text{pro}}(\lambda)$ is the processing delay experienced by a traffic demand $\lambda$ traversing a VNF $v$ on a server $x$, then the delay for traffic demand $\lambda_1$ using that specific path $p$ is $\hat{d}_{p}^{\lambda_1} = D_{l_1} + d_{x_1,v_1}^{\text{pro}}(\lambda_1) + d_{x_2,v_2}^{\text{pro}}(\lambda_1)$. In this phase, which is taken as the initial placement for the second phase, we do not consider delays caused by service interruptions, since there are no migrations yet. For the second phase, the traffic demands change, so the current VNFs in the network can either be migrated or replicated. An example is shown in Fig. \ref{model_mgr}, where VNF $v_2$ is migrated from server $x_2$ to server $x_3$.
From the delay point of view, since a service interruption occurs because the active flows are stopped, an additional delay is added to the resulting service delay. Another example is shown in Fig. \ref{model_rep}, where, instead of migrating, the VNF $v_2$ is replicated into server $x_3$ and only traffic demand $\lambda_2$ is routed to the new replica location. In this case, synchronization traffic is added between both instances of VNF $v_2$ to keep their states synchronized. \subsection{Traffic demand model and time series forecasting} \label{traffic_model} We assume that every source-destination pair of nodes within the ISP network generates a certain number of traffic demands with specific bandwidth. The traffic demand data samples are generated using a lognormal distribution with a time-varying mean and variance, which simulates the behavior of common traffic patterns in the internet \cite{}. The time-varying mean values are obtained using a superposition of sinusoidal functions, i.e.: \begin{equation} \label{traffic_equation} y(t) = \alpha + \sum_{k=1}^n \beta_k \cdot \sin(\omega_k t + \phi_k) \end{equation} , where $\alpha$ is a constant offset, $\beta_k$, $\omega_k$ and $\phi_k$ are the amplitude, angular frequency and phase of the $k$\textsuperscript{th} component, and $n$ is the number of frequency components, in our case equal to 2. We generate 24 data samples per period, simulating one day. An example of a resulting function is shown in Fig. \ref{fig:traffic_gen}. In the first scenario, during the first placement the VNFs are allocated based on the observed traffic at that specific time step. In the second scenario, the VNFs are allocated assuming the demand values are at 80\% of the maximum traffic demand value instead of considering the real observed values. This makes this the most conservative case, since overprovisioning of resources will occur in most cases.
In the third scenario, the VNFs are allocated considering the predicted traffic demand values after $\Delta t$ instead of the observed ones. Then, the resulting placement from the three scenarios during this first placement is used as initial condition for the optimization of the second phase, where in all cases only the the real observed values are considered. For the last scenario, a time series forecasting problem is modelled where a certain number of periods $D-1$ are used for training and one period for evaluation. We specifically use one LSTM network for every traffic demand with input and output sizes of 1 unit and 8 units in a hidden layer. The model uses Rectified Linear Unit (ReLU) as the activation function and is fit with Adam optimizer and optimized using the mean squared error (\emph{mse}) loss function. The batch size for the model is 4 and the validation data is 10\% of the total. The number of epochs is not constrained, instead an early stopping function is used with a minimum delta of 0.001 and a patience of 10 epochs. Specific parameters are later described during the evaluation of the model. 
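The traffic generation of Eq. (\ref{traffic_equation}) and the preparation of the resulting series for a 1-in/1-out LSTM can be sketched as follows. The numeric constants ($\alpha$, $\beta_k$, $\omega_k$, $\phi_k$ and the lognormal $\sigma$) are illustrative choices of ours; the text only fixes $n = 2$ and 24 samples per simulated day:

```python
import numpy as np

def traffic_mean(t, alpha=1.0, betas=(0.4, 0.2),
                 omegas=(2 * np.pi / 24, 4 * np.pi / 24), phis=(0.0, 1.0)):
    """Time-varying mean y(t) from Eq. (2) with n = 2 components."""
    return alpha + sum(b * np.sin(w * t + p)
                       for b, w, p in zip(betas, omegas, phis))

def generate_demand(days, rng):
    """Lognormal samples whose mean follows the sinusoidal profile."""
    t = np.arange(days * 24)              # 24 samples per simulated day
    mu = traffic_mean(t)                  # strictly positive by construction
    return rng.lognormal(mean=np.log(mu), sigma=0.1)

def to_supervised(series):
    """Window a series into (x_t, x_{t+1}) pairs for a 1-in/1-out LSTM."""
    x = series[:-1].reshape(-1, 1, 1)     # (samples, timesteps, features)
    y = series[1:]
    return x, y
```

The `(samples, timesteps, features)` shape is the usual input layout expected by LSTM layers in common deep-learning frameworks.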
\begin{figure}[!t] \centering \includegraphics[width=0.7\columnwidth]{traffic_gen} \caption{Normalized traffic demand example} \label{fig:traffic_gen} \end{figure} \newcommand{n \in \mathbb{N}}{n \in \mathbb{N}} \newcommand{m \in \mathbb{N}}{m \in \mathbb{N}} \newcommand{n \in \mathbb{N}_p}{n \in \mathbb{N}_p} \newcommand{m}{m} \newcommand{\m \in \mathbb{N}_p}{m \in \mathbb{N}_p} \newcommand{y}{y} \newcommand{x \in \mathbb{X}}{x \in \mathbb{X}} \newcommand{x \in \mathbb{X}_C}{x \in \mathbb{X}_C} \newcommand{\y \in \mathbb{X}}{y \in \mathbb{X}} \newcommand{x \in \mathbb{X}_n}{x \in \mathbb{X}_n} \newcommand{\y \in \mathbb{X}_m}{y \in \mathbb{X}_m} \newcommand{x \in \mathbb{X}_p}{x \in \mathbb{X}_p} \newcommand{\ell \in \mathbb{L}}{\ell \in \mathbb{L}} \newcommand{p \in \mathbb{P}}{p \in \mathbb{P}} \newcommand{p \in \mathbb{P}_s}{p \in \mathbb{P}_s} \newcommand{s \in \mathbb{S}}{s \in \mathbb{S}} \newcommand{\lambda \in \Lambda}{\lambda \in \Lambda} \newcommand{\lambda \in \Lambda_s}{\lambda \in \Lambda_s} \newcommand{\lambda' \in \Lambda_s}{\lambda' \in \Lambda_s} \newcommand{v \in \mathbb{V}_s}{v \in \mathbb{V}_s} \newcommand{t \in \mathbb{T}}{t \in \mathbb{T}} \newcommand{y \in \mathbb{Y}}{y \in \mathbb{Y}} \newcommand{T_{p}^\ell}{T_{p}^\ell} \newcommand{T_{p}^{n, m}}{T_{p}^{n, m}} \newcommand{\Gamma_{t(v)}^\mathrm{pro}}{\Gamma_{t(v)}^\mathrm{pro}} \newcommand{\Gamma_{t(v)}^\mathrm{syn}}{\Gamma_{t(v)}^\mathrm{syn}} \newcommand{\Theta_{t(v)}^s}{\Theta_{t(v)}^s} \newcommand{\vartheta}{\vartheta} \newcommand{R_{t(v)}}{R_{t(v)}} \newcommand{D_\ell}{D_\ell} \newcommand{D_s^{\mathrm{max}}}{D_s^{\mathrm{max}}} \newcommand{\hat{D}_{s}^\mathrm{{max}}}{\hat{D}_{s}^\mathrm{{max}}} \newcommand{D_{t(v)}^\mathrm{proq}}{D_{t(v)}^\mathrm{proq}} \newcommand{D_{t(v)}^\mathrm{prox}}{D_{t(v)}^\mathrm{prox}} \newcommand{D_{t(v)}^\mathrm{pro,max}}{D_{t(v)}^\mathrm{pro,max}} \newcommand{D_{t(v)}^\mathrm{pro\_x,min}}{D_{t(v)}^\mathrm{pro\_x,min}} 
\newcommand{D^\mathrm{dwt}}{D^\mathrm{dwt}} \newcommand{C_x^\mathrm{max}}{C_x^\mathrm{max}} \newcommand{C_{\ell}^\mathrm{max}}{C_{\ell}^\mathrm{max}} \newcommand{C_{x, t(v)}^\mathrm{proq,max}}{C_{x, t(v)}^\mathrm{proq,max}} \newcommand{E_{i}}{E_{i}} \newcommand{ \alpha_{u}}{ \alpha_{u}} \newcommand{K_x}{K_x} \newcommand{K_{t(v)}}{K_{t(v)}} \newcommand{\rho}{\rho} \newcommand{x_n}{x_n} \newcommand{z_{p}^s}{z_{p}^s} \newcommand{z_{p}^{\lambda,s}}{z_{p}^{\lambda,s}} \newcommand{z_{p}^{\lambda',s}}{z_{p}^{\lambda',s}} \newcommand{f_x}{f_x} \newcommand{f_x^{v,s}}{f_x^{v,s}} \newcommand{f_\y^{v,s}}{f_y^{v,s}} \newcommand{F_x^{v,s}}{F_x^{v,s}} \newcommand{f_{x,\lambda}^{v,s}}{f_{x,\lambda}^{v,s}} \newcommand{f_{x,\lambda'}^{v,s}}{f_{x,\lambda'}^{v,s}} \newcommand{ f_{\y, \lambda}^{(v-1),s}}{ f_{y, \lambda}^{(v-1),s}} \newcommand{ g_{x, \y}^{v,s}}{ g_{x, y}^{v,s}} \newcommand{h_{p}^{v,s}}{h_{p}^{v,s}} \newcommand{k_\ell}{k_\ell} \newcommand{k_x}{k_x} \newcommand{k_v^s}{k_v^s} \newcommand{q_p^{\lambda, s}}{q_p^{\lambda, s}} \newcommand{\hat{q}_p^{\lambda, s}}{\hat{q}_p^{\lambda, s}} \newcommand{\hat{\hat{q}}_p^{\lambda, s}}{\hat{\hat{q}}_p^{\lambda, s}} \newcommand{y_p^{\lambda, s}}{y_p^{\lambda, s}} \newcommand{u_\ell}{u_\ell} \newcommand{u_x}{u_x} \newcommand{d_p^{s}}{d_p^{s}} \newcommand{d_p^{\lambda,s}}{d_p^{\lambda,s}} \newcommand{\hat{d}_p^{\lambda,s}}{\hat{d}_p^{\lambda,s}} \newcommand{d_{x, v, s}^{\mathrm{pro}}}{d_{x, v, s}^{\mathrm{pro}}} \newcommand{d_{x, v, s}^{\mathrm{proq}}}{d_{x, v, s}^{\mathrm{proq}}} \newcommand{d_{x, v, s}^{\mathrm{prox}}}{d_{x, v, s}^{\mathrm{prox}}} \newcommand{d_{s}^{\mathrm{dwt}}}{d_{s}^{\mathrm{dwt}}} \newcommand{d_{x, \lambda}^{v,s}}{d_{x, \lambda}^{v,s}} \newcommand{\Lambda_x^{v,s}}{\Lambda_x^{v,s}} \begin{table}[!h] \caption{Parameters and variables notation} \label{notation} \centering \begin{tabular}{>{\centering\arraybackslash}p{0.18\columnwidth} p{0.74\columnwidth}@{}} \toprule \textbf{Param.} & \textbf{Meaning} \\ \midrule 
$\mathbb{N}$ & set of nodes: $\mathbb{N} = \{1,...,N\}$, $n \in \mathbb{N}$. \\ $\mathbb{X}$ & set of servers: $\mathbb{X} = \{1,...,X\}$, $x \in \mathbb{X}$. \\ $\mathbb{L}$ & set of links: $\mathbb{L} = \{1,...,L\}$, $\ell \in \mathbb{L}$. \\ $\mathbb{P}$ & set of admissible paths: $\mathbb{P} = \{1,...,P\}$, $p \in \mathbb{P}$. \\ $\mathbb{S}$ & set of SFCs: $\mathbb{S} = \{1,...,S\}$, $s \in \mathbb{S}$. \\ $\mathbb{T}$ & set of VNF types: $\mathbb{T} = \{1,...,T\}$, $t \in \mathbb{T}$. \\ $\mathbb{V}_s$ & ordered set, $v \in \mathbb{V}_s$ is the $v$\textsuperscript{th} VNF in set $\mathbb{V}_s$. \\ $\Lambda$ & set of traffic demands: $\Lambda = \{1,...,\Lambda\}$, $\lambda \in \Lambda$. \\ $\Lambda_s \subseteq \Lambda$ & subset of traffic demands $\lambda \in \Lambda_s$ for SFC $s \in \mathbb{S}$. \\ $\mathbb{N}_p \subseteq \mathbb{N}$ & subset of ordered nodes in path $p \in \mathbb{P}$. \\ $\mathbb{X}_n \subseteq \mathbb{X}$ & subset of servers attached to node $n \in \mathbb{N}$. \\ $\mathbb{X}_p \subseteq \mathbb{X}$ & subset of ordered servers in path $p \in \mathbb{P}$. \\ $\mathbb{X}_C \subseteq \mathbb{X}$ & subset of servers located at the cloud. \\ $\mathbb{P}_s \subseteq \mathbb{P}$ & subset of admissible paths $p \in \mathbb{P}_s$ for $s \in \mathbb{S}$. \\ $T_{p}^\ell, T_{p}^{n, m}$ & binary, 1 if path $p \in \mathbb{P}$ traverses link $\ell \in \mathbb{L}$, and 1 if it connects nodes $n \in \mathbb{N}$ and $m \in \mathbb{N}$ as the source and destination nodes of the path, respectively. \\ $\Gamma_{t(v)}^\mathrm{pro}, \Gamma_{t(v)}^\mathrm{syn}$ & continuous, load ratio of a VNF of type $t \in \mathbb{T}$ and traffic ratio for synchronization traffic between two VNFs of type $t \in \mathbb{T}$, respectively. \\ $\Theta_{t(v)}^s$ & integer, overhead for VNF $v \in \mathbb{V}_s$ of type $t \in \mathbb{T}$. \\ $C_{\ell}^\mathrm{max}, C_x^\mathrm{max}$ & integers, maximum capacity of link $\ell \in \mathbb{L}$ and of server $x \in \mathbb{X}$, respectively.
\\ $C_{x, t(v)}^\mathrm{proq,max}$ & integer, maximum processing capacity that can be assigned by a server $x$ to a VNF of type $t$ \\ $D_\ell$ & continuous, propagation delay of link $\ell \in \mathbb{L}$. \\ $D_s^{\mathrm{max}}, D^\mathrm{dwt}$ & continuous, max. service delay of a SFC $s \in \mathbb{S}$ and service downtime duration caused by a migration, respectively. \\ $D_{t(v)}^\mathrm{pro,max}$ & continuous, maximum allowed processing delay for a VNF of type $t$. \\ $D_{t(v)}^\mathrm{proq}, D_{t(v)}^\mathrm{prox}$ & continuous, delay of a VNF $v$ of type $t$ due to queues and processing, respectively. \\ \toprule \textbf{Vars.} & \textbf{Meaning} \\ \midrule $z_{p}^s$ & binary, 1 if SFC $s$ uses path $p \in \mathbb{P}_s$. \\ $z_{p}^{\lambda,s}$ & binary, 1 if traffic demand $\lambda$ from SFC $s$ uses path $p \in \mathbb{P}_s$. \\ $f_x$ & binary, 1 if server $x$ is used, 0 otherwise. \\ $f_x^{v,s}$ & binary, 1 if VNF $v \in \mathbb{V}_s$ from SFC $s$ is allocated at server $x \in \mathbb{X}$, 0 otherwise. \\ $f_{x,\lambda}^{v,s}$ & binary, 1 if VNF $v \in \mathbb{V}_s$ from SFC $s$ is used at server $x \in \mathbb{X}$ by traffic demand $\lambda$, 0 otherwise. \\ $h_{p}^{v,s}$ & binary, 1 if VNF $v \in \mathbb{V}_s$ from SFC $s$ uses path $p \in \mathbb{P}$ for state synchronization, 0 otherwise. \\ $d_p^{\lambda,s}$ & continuous, service delay of a traffic demand $\lambda$ in path $p$. \\ $u_\ell$, $u_x$ & continuous, utilization of a link $\ell \in \mathbb{L}$ and server $x \in \mathbb{X}$, respectively. \\ \bottomrule \end{tabular} \end{table} \section{Problem Formulation} \label{LP_models} We model the network as $\mathbb{G}=(\mathbb{N} \cup \mathbb{X}, \mathbb{L})$ where $\mathbb{N} = \{1,...,N\}$ is a set of nodes, $\mathbb{X} = \{1,...,X\}$ is a set of servers and $\mathbb{L} = \{1,...,L\}$ is a set of directed links. Specifically, $\mathbb{X}_n$ is a subset of servers $x \in \mathbb{X}$ attached to node $n \in \mathbb{N}$. 
We denote the set of all SFCs as $\mathbb{S} = \{1,...,S\}$, where a specific SFC $s \in \mathbb{S}$ is an ordered set of VNFs $\mathbb{V}_s = \{1,...,V_s\}$, each VNF being of type $t$, $t \in \mathbb{T}$, $\mathbb{T} = \{1,...,T\}$, and $v \in \mathbb{V}_s$ is the $v$\textsuperscript{th} VNF in set $\mathbb{V}_s$. Table \ref{notation} summarizes the notation. It should be noted that the model is written such that it can be efficiently used in optimization solvers. For instance, the big-M method is avoided when possible or its value is minimized in order to avoid numerical issues with the solver. \subsection{Objective Function} We define the joint optimization problem as the minimization of the weighted sum of the number of migrations, replications and VNFs placed in the cloud, i.e., \begin{subequations} \label{obj_func} \begin{align} \text{\emph{minimize}}: \quad & \sum_{s \in \mathbb{S}} \sum_{v \in \mathbb{V}_s} \bigg[ W_m \sum_{x \in \mathbb{X}} F_x^{v,s} (1 - f_x^{v,s}) \\ & + W_r \Big[ \big(\sum_{x \in \mathbb{X}} f_x^{v,s} \big) - 1 \Big] \bigg] \\ & + W_c \sum_{x \in \mathbb{X}_C} \sum_{s \in \mathbb{S}} \sum_{v \in \mathbb{V}_s} f_x^{v,s} \end{align} \end{subequations} , where the variable $f_x^{v,s}$ specifies if a VNF $v$ from service chain $s$ is allocated in server $x$. Since the optimization process follows two different phases, after the first placement we take the values of the variables $f_x^{v,s}$ and convert them into the input parameters $F_x^{v,s}$ for the next placement step, i.e. \begin{equation} \label{initialsolutionmapping} \forall s \in \mathbb{S}, \forall v \in \mathbb{V}_s, \forall x \in \mathbb{X}: f_x^{v,s} \Rightarrow F_x^{v,s} \end{equation} The parameter $F_x^{v,s}$ determines if a VNF $v$ of a service chain $s$ was placed on server $x$ during the initial placement.
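The counting performed by (\ref{obj_func}) can be illustrated with a small numeric sanity check for a single SFC; this is an illustrative sketch with toy placement matrices, not part of the model itself, and the weights $W_m$, $W_r$, $W_c$ are applied outside:

```python
import numpy as np

def objective_terms(f, F, cloud_servers):
    """Counts behind the three terms of the objective, for one SFC.

    f, F: binary arrays of shape (V, X); f is the new placement and
    F the initial one.  cloud_servers: indices of the cloud subset X_C.
    """
    migrations = int((F * (1 - f)).sum())       # VNFs no longer where they were
    replications = int(f.sum() - f.shape[0])    # instances beyond one per VNF
    in_cloud = int(f[:, cloud_servers].sum())   # instances on cloud servers
    return migrations, replications, in_cloud
```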
In this way, the first term of equation (\ref{obj_func}) counts the number of migrations, the second term counts the number of replications and the third term counts the number of functions allocated in cloud servers (here only the $\mathbb{X}_C$ subset is considered). We next follow up with the definition of constraints. \subsection{General Constraints} The general constraints are related to the traffic routing, the VNF placement and the mapping between VNFs and paths. \subsubsection{Routing} For a given network, $\mathbb{P}_s$ is the input set of all pre-calculated paths for SFC $s$. The binary variable $z_{p}^{\lambda,s}=1$ indicates that a traffic demand $\lambda \in \Lambda_s$ of the SFC $s$ is using path $p \in \mathbb{P}_s$. The first routing constraint specifies that each traffic demand $\lambda \in \Lambda_s$ from SFC $s \in \mathbb{S}$ has to use only one path $p \in \mathbb{P}_s$, i.e.: \begin{equation} \label{onePathPerDemand} \forall s \in \mathbb{S}, \forall \lambda \in \Lambda_s: \sum_{p \in \mathbb{P}_s} z_{p}^{\lambda,s} = 1 \end{equation} Then, the next constraint takes the activated paths from the variable $z_{p}^{\lambda,s}$ and activates the path for a certain SFC $s$: \begin{equation} \label{activatePathForService} \forall s \in \mathbb{S}, \forall p \in \mathbb{P}_s, \forall \lambda \in \Lambda_s: z_{p}^{\lambda,s} \leq z_{p}^s \leq \sum_{\lambda' \in \Lambda_s} z_{p}^{\lambda',s} \end{equation} The left side forces $z_{p}^s$ to be 1 when at least one traffic demand is using path $p$, whereas the right side forces $z_{p}^s$ to be 0 when no traffic demand $\lambda$ is using path $p$. \subsubsection{VNF placement} VNF placement is modeled using the binary variable $f_{x,\lambda}^{v,s}$, which takes value 1 only if VNF $v$ from SFC $s$ is allocated at server $x \in \mathbb{X}$ and used by traffic demand $ \lambda \in \Lambda_s$.
Similar to (\ref{onePathPerDemand}), the next constraint defines that each traffic demand $\lambda \in \Lambda_s$ from SFC $s \in \mathbb{S}$ traverses every VNF $v \in \mathbb{V}_s$ in only one specific server $x \in \mathbb{X}$: \begin{equation} \label{oneFunctionPerDemand} \forall s \in \mathbb{S}, \forall v \in \mathbb{V}_s, \forall \lambda \in \Lambda_s: \sum_{x \in \mathbb{X}} f_{x,\lambda}^{v,s} = 1 \end{equation} Then, similarly to (\ref{activatePathForService}), the next constraint takes the activated VNFs for each traffic demand from the variable $f_{x,\lambda}^{v,s}$ and activates the VNF for a certain SFC $s$ as follows: \begin{equation} \label{mappingFunctionsWithDemands} \begin{split} \forall s \in \mathbb{S}, \forall v \in \mathbb{V}_s, \forall x \in \mathbb{X}, \forall \lambda \in \Lambda_s: \\ f_{x,\lambda}^{v,s} \leq f_x^{v,s} \leq \!\!\!\! \sum_{\lambda' \in \Lambda_s} f_{x,\lambda'}^{v,s} \end{split} \end{equation} , where the left side forces $f_x^{v,s}$ to be 1 when at least one traffic demand $\lambda \in \Lambda_s$ is using VNF $v \in \mathbb{V}_s$ at server $x \in \mathbb{X}$, and the right side forces $f_x^{v,s}$ to be 0 when no traffic demand is using that specific VNF $v$ on server $x$. Likewise, we determine whether a server is being used by constraining the variable $f_x$ as: \begin{equation} \label{used-server} \forall x \in \mathbb{X}: \frac{1}{|\mathbb{S}||\mathbb{V}_s|} \sum_{s \in \mathbb{S}} \sum_{v \in \mathbb{V}_s} f_x^{v,s} \leq f_x \leq \sum_{s \in \mathbb{S}} \sum_{v \in \mathbb{V}_s} f_x^{v,s} \text{ ,} \end{equation} where $f_x$ is 1 if at least one VNF from any SFC is allocated at server $x \in \mathbb{X}$, and 0 otherwise. \subsubsection{Mapping VNFs to paths} The next equations map the activated VNFs to the activated paths defined in the previous constraints.
The first one defines how many times a VNF can be replicated: \begin{equation} \label{pathsConstrainedByFunctions} \forall s \in \mathbb{S}, \forall v \in \mathbb{V}_s: \sum_{x \in \mathbb{X}} f_x^{v,s} \leq R_{t(v)} \sum_{p \in \mathbb{P}_s} z_{p}^s + 1 - R_{t(v)} \end{equation} , where $R_{t(v)}$ specifies if a certain VNF $v$ of type $t$ is replicable. When $R_{t(v)}$ is 0, the total number of activated VNFs $v \in \mathbb{V}_s$ from SFC $s \in \mathbb{S}$ is $\sum_{x \in \mathbb{X}} f_x^{v,s} \leq 1$. If the VNF is replicable, the maximum number of replicas is limited by the total number of activated paths $\sum_{p \in \mathbb{P}_s} z_{p}^s$ for that specific SFC $s$. The next constraint activates the VNFs on the activated paths: \begin{equation} \label{functionPlacement} \forall s \in \mathbb{S}, \forall p \in \mathbb{P}_s, \forall \lambda \in \Lambda_s , \forall v \in \mathbb{V}_s: z_{p}^{\lambda,s} \leq \sum_{x \in \mathbb{X}_p} f_{x,\lambda}^{v,s} \end{equation} If the variable $z_{p}^{\lambda,s}$ is activated, then every VNF $v \in \mathbb{V}_s$ from SFC $s \in \mathbb{S}$ has to be activated in some server $x \in \mathbb{X}_p$ from the path $p \in \mathbb{P}$ for a specific traffic demand $\lambda$. When $z_{p}^{\lambda,s}$ is deactivated, then no VNFs can be placed for that specific traffic demand. The last general constraint ensures that all VNFs $v \in \mathbb{V}_s$ of a specific SFC $s$ are traversed by every traffic demand $\lambda \in \Lambda_s$ in the given order, i.e.: \begin{equation} \label{functionSequenceOrder} \begin{split} \forall s \in \mathbb{S}, \forall \lambda \in \Lambda_s, \forall p \in \mathbb{P}_s, \forall v \in \mathbb{V}_s, \forall n, m \in \mathbb{N}: \\ \Bigg( \sum_{m = 1}^{n} \sum_{\y \in \mathbb{X}_m} \! f_{\y, \lambda}^{(v-1),s} \!\! \Bigg) - \! \!\! \sum_{x \in \mathbb{X}_n} \!\! f_{x,\lambda}^{v,s} \geq z_{p}^{\lambda,s} \! - 1 \left\{ \begin{array}{ll} 1< v \leq |\mathbb{V}_s | \\ n \neq m \end{array} \right.
\end{split} \end{equation} where the variable $z_{p}^{\lambda,s}$ activates the ordering constraint (left side) when it is 1 and deactivates it otherwise. Then, if path $p \in \mathbb{P}$ is activated, the ordering is checked for every traffic demand $\lambda \in \Lambda_s$ individually by using the variable $f_{x,\lambda}^{v,s}$. Hence, for every traffic demand $\lambda$ of SFC $s$, the $v$\textsuperscript{th} VNF is allocated at server $x \in \mathbb{X}_n$ only if the previous $(v - 1)$\textsuperscript{th} VNF is allocated at some server $\y \in \mathbb{X}_m$, where $m$ is the i\textsuperscript{th} node from 1 until $n$ traversed by path $p$. It should be noted that the correct sequence of VNFs relies on the correct sequence of the subsets of servers, i.e., $x \in \mathbb{X}_n$. This assumes that the correct sequence of VNFs inside these subsets is organized by the local routing, which may be located at the node $n$ or at a local switch not modeled in detail. \subsection{Traffic and Performance Constraints} \subsubsection{Synchronization traffic} When replicating VNFs, the state shared between the original and its replicas has to be kept synchronized in order to be resilient against VNF failures and to avoid the loss of information. For this reason, we consider that when a VNF is replicated, the synchronization traffic generated between the replicas and the original has to be taken into account as well. The amount of state synchronization traffic depends on the state space and its time dynamics, where it is assumed that each VNF has full knowledge of the state of all its instances used to implement the VNF $v \in \mathbb{V}_s$. Let us assume that this amount is proportional to the total traffic offered to the SFC, weighted by a synchronization ratio $\Gamma_{t(v)}^\mathrm{syn}$ that depends on the type of VNF $t$.
In summary, the directional traffic from a VNF to its replica is given by $\Gamma_{t(v)}^\mathrm{syn} | \Lambda_s | $, and its routing should be optimized within the network. To determine whether the same VNF $v \in \mathbb{V}_s$ from SFC $s$ is placed in two different servers $x \in \mathbb{X}$ and $\y \in \mathbb{X}$, we define: \begin{equation} \label{sync-traffic} \begin{split} \forall s \in \mathbb{S}, \forall v \in \mathbb{V}_s, \forall x \in \mathbb{X}, \forall \y \in \mathbb{X}: \\ g_{x, \y}^{v,s} = f_x^{v,s} f_\y^{v,s} \text{, \quad for } y \! \neq \! x \end{split} \end{equation} where the variable $ g_{x, \y}^{v,s}$ is 1 only when both variables $f_x^{v,s}$ and $f_\y^{v,s}$ are also 1, and 0 otherwise. In this way, this variable indicates whether two different servers host the same VNF, which means that the model is allocating a replica. We use the well-known linearization method when multiplying two binary variables. In case $ g_{x, \y}^{v,s} = 1$, we need to carry the synchronization traffic from server $x$ to $y$ by selecting only one predefined path between them, i.e.: \begin{equation} \label{hpvs} \begin{split} \forall s \in \mathbb{S}, \forall v \in \mathbb{V}_s, \forall n, m \in \mathbb{N}, \forall x \in \mathbb{X}_n, \forall \y \in \mathbb{X}_m: \\ g_{x, \y}^{v,s} \leq \sum_{p \in \mathbb{P}} h_{p}^{v,s} \cdot T_{p}^{n, m} \leq 1 \text {,\quad for } n \neq m \end{split} \end{equation} \begin{equation} \label{hpvs_2} \begin{split} \forall s \in \mathbb{S}, \forall v \in \mathbb{V}_s, \forall n, m \in \mathbb{N}: \\ \sum_{p \in \mathbb{P}} h_{p}^{v,s} \cdot T_{p}^{n, m} \leq \sum_{x \in \mathbb{X}_n} \sum_{\y \in \mathbb{X}_m} g_{x, \y}^{v,s} \text {,\quad for } n \neq m \end{split} \end{equation} where the constant $ T_{p}^{n, m} = 1$ indicates that a path $ p \in \mathbb{P}$ exists which connects servers $ x \in \mathbb{X}_n$ and $ \y \in \mathbb{X}_m$ using the shortest path between nodes $n$ and $m$.
The right side of \eqref{hpvs} guarantees that only one path $p \in \mathbb{P}$ is selected by variable $h_{p}^{v,s}$. Moreover, \eqref{hpvs_2} guarantees that this path is only used if at least one $ g_{x, \y}^{v,s}$ is 1. Note that $h_{p}^{v,s}$ is a binary variable used for every VNF $v$ of SFC $s$. \subsubsection{Link and server utilization} The utilization of a link is calculated as follows: \begin{equation} \label{linkutil} \begin{split} \forall \ell \in \mathbb{L}: u_\ell = \frac{1}{C_{\ell}^\mathrm{max}} \sum_{s \in \mathbb{S}} \sum_{p \in \mathbb{P}_s} \sum_{\lambda \in \Lambda_s} \lambda \cdot T_{p}^\ell \cdot z_{p}^{\lambda,s} + \\ \frac{1}{C_{\ell}^\mathrm{max}} \sum_{p \in \mathbb{P}} T_{p}^\ell \sum_{s \in \mathbb{S}} \sum_{v \in \mathbb{V}_s} \Gamma_{t(v)}^\mathrm{syn} \cdot |\Lambda_s| \cdot h_{p}^{v,s} \leq 1 \text{ ,} \end{split} \end{equation} where $\lambda \cdot T_{p}^\ell$ adds the traffic demands from SFC $s \in \mathbb{S}$ when a path $p \in \mathbb{P}_s$ traverses the link $\ell \in \mathbb{L}$. Then, the variable $z_{p}^{\lambda,s}$ specifies whether the traffic demand $\lambda$ from SFC $s$ is using path $p$. The second term is the sum of the extra traffic generated by the state synchronization between VNFs $v \in \mathbb{V}_s$ from SFC $s$, which is proportional to its total traffic $|\Lambda_s|$ multiplied by the synchronization traffic ratio $\Gamma_{t(v)}^\mathrm{syn}$ of the VNF of type $t$. This traffic is only added if the variable $h_{p}^{v,s}$ is 1, which indicates that path $p \in \mathbb{P}$ is used for synchronization by a VNF $v$ from SFC $s$, and the link $ \ell \in \mathbb{L}$ belongs to this path. Both summation terms are divided by the maximum link capacity $C_{\ell}^\mathrm{max}$ to restrict the utilization.
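The ``well-known linearization method'' invoked for \eqref{sync-traffic} replaces the product $f_x^{v,s} f_\y^{v,s}$ of two binaries by three linear inequalities. A minimal sketch (the function name is ours) that enumerates the feasible values and confirms they pin the product uniquely:

```python
def linearized_and(f1, f2):
    """Standard linearization of the binary product g = f1 * f2, as used
    to implement eq. (sync-traffic):
        g <= f1,   g <= f2,   g >= f1 + f2 - 1
    For binary f1, f2 these inequalities admit exactly one value of g,
    namely the product itself."""
    feasible = [g for g in (0, 1)
                if g <= f1 and g <= f2 and g >= f1 + f2 - 1]
    assert len(feasible) == 1, "the linearization pins g uniquely"
    return feasible[0]
```

In a MILP solver the same three inequalities are added as constraints; the enumeration here merely checks that they leave no slack for a binary $g$.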
The processing load of a server is derived as \begin{equation} \label{server_load} \forall x \in \mathbb{X}: \gamma_x = \sum_{s \in \mathbb{S}} \sum_{v \in \mathbb{V}_s} \Big( \Gamma_{t(v)}^\mathrm{pro} \sum_{\lambda \in \Lambda_s} \lambda \cdot f_{x,\lambda}^{v,s} + \Theta_{t(v)}^s \cdot f_x^{v,s} \Big) \end{equation} where the first term sums the traffic $\lambda \in \Lambda_s$ that is using the VNF $v \in \mathbb{V}_s$ from SFC $s \in \mathbb{S}$ at server $x \in \mathbb{X}$, as determined by the variable $f_{x,\lambda}^{v,s}$, multiplied by the processing load ratio $\Gamma_{t(v)}^\mathrm{pro}$ of the VNF of type $t$. The second term adds the overhead generated by the VM in which the VNF runs and is only added when the variable $f_x^{v,s}$ determines that this VNF is placed on server $x$. The utilization then follows as \begin{equation} \label{serverutil} \forall x \in \mathbb{X}: u_x = \frac{\gamma_x}{C_x^\mathrm{max}} \leq 1 \text{ ,} \end{equation} where $C_x^\mathrm{max}$ is the maximum processing capacity. \subsubsection{Service delay} \label{service_delay} Every service has a maximum allowed delay $D_s^{\mathrm{max}}$ specified in the SLA; if it is exceeded, penalty costs are applied. In our model, for simplicity, we take into account the propagation delay due to the traversed links, the processing delay that every VNF requires in the servers and, where applicable, the downtime delays caused by the interruption of the service during the migrations of VNFs.
\emph{Processing delay}: The processing delay $d_{x, v, s}^{\mathrm{pro}}$ of a VNF $v$ in a server $x$ consists of two components: $d_{x, v, s}^{\mathrm{proq}}$, which depends on the amount of traffic being processed by the specific VNF, and $d_{x, v, s}^{\mathrm{prox}}$, which is related to the VNF type and the total server load $u_x$, given as \begin{subequations} \label{processing_delay_equations} \begin{equation} \forall s \in \mathbb{S}, \forall v \in \mathbb{V}_s, \forall x \in \mathbb{X}_p: d_{x, v, s}^{\mathrm{pro}} = d_{x, v, s}^{\mathrm{proq}} + d_{x, v, s}^{\mathrm{prox}} \end{equation} \begin{equation} d_{x, v, s}^{\mathrm{proq}} = D_{t(v)}^\mathrm{proq} \frac{ \Gamma_{t(v)}^\mathrm{pro} \cdot \sum_{\lambda \in \Lambda_s} f_{x,\lambda}^{v,s} \cdot \lambda}{C_{x, t(v)}^\mathrm{proq,max}} \label{processing_delay_equations_B} \end{equation} \begin{equation} d_{x, v, s}^{\mathrm{prox}} = D_{t(v)}^\mathrm{pro\_x,min} \cdot f_x^{v,s} + D_{t(v)}^\mathrm{prox} \cdot u_x \label{processing_delay_equations_C} \end{equation} \end{subequations} In \eqref{processing_delay_equations_B}, the numerator of $d_{x, v, s}^{\mathrm{proq}}$ determines the total processing load assigned to the VNF of type $t$, which is controlled by the variables $f_{x,\lambda}^{v,s}$. Thus, if the assigned processing load is equal to $C_{x, t(v)}^\mathrm{proq,max}$, the VNF adds the processing delay $D_{t(v)}^\mathrm{proq}$. The second delay term, given in \eqref{processing_delay_equations_C}, adds the load-independent minimum delay associated with the use of this VNF type, and a delay part that increases with the server utilization. As a consequence, the processing delay $d_{x, v, s}^{\mathrm{pro}}(\vec{ \lambda})$ depends on the server $x$ and the used VNF type, and increases linearly with increasing traffic. Furthermore, the dependency on all traffic demands is denoted by the vector $\vec{ \lambda}$, which is omitted for simplicity in \eqref{processing_delay_equations}.
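As a numeric illustration, the delay model of \eqref{processing_delay_equations} can be evaluated directly for a given placement and server utilization $u_x$. The sketch below uses the paper's delay constants but hypothetical loads; the function and its argument names are ours:

```python
def processing_delay(d_proq_ref, gamma_pro, assigned_demands, c_proq_max,
                     d_prox_min, d_prox_ref, f_x, u_x):
    """Evaluate d_pro = d_proq + d_prox following eqs. (13b)-(13c):
    d_proq scales the reference delay d_proq_ref with the fraction of the
    VNF's processing capacity occupied by the assigned traffic; d_prox
    adds the load-independent minimum plus a part growing with u_x."""
    d_proq = d_proq_ref * (gamma_pro * sum(assigned_demands)) / c_proq_max
    d_prox = d_prox_min * f_x + d_prox_ref * u_x
    return d_proq + d_prox

# Example: delay constants from the evaluation (in ms), assumed loads.
d = processing_delay(d_proq_ref=3.0, gamma_pro=0.5, assigned_demands=[20, 30],
                     c_proq_max=100.0, d_prox_min=2.0, d_prox_ref=5.0,
                     f_x=1, u_x=0.4)
```

With these assumed values, the queueing part contributes $3 \cdot 0.25 = 0.75$ ms and the server part $2 + 5 \cdot 0.4 = 4$ ms.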
\emph{Downtime duration}: If a VNF $v$ of SFC $s$ has to be migrated, we assume an interruption of the service with duration $D^\mathrm{dwt}$. Thus, the total service downtime considers the migration of all VNFs in that SFC, which yields the following constraint: \begin{equation} \label{migration_delay_equations} \forall s \in \mathbb{S}: d_{s}^{\mathrm{dwt}} = D^\mathrm{dwt} \sum_{x \in \mathbb{X}} \sum_{v \in \mathbb{V}_s} F_x^{v,s} (1 - f_x^{v,s}) \end{equation} where the parameter $F_x^{v,s}$ determines whether a VNF $v$ was placed on server $x$ during the first placement. Thus, if a VNF migrates to another server $y \neq x$, the variable $f_x^{v,s}$ is equal to zero and the service downtime $D^\mathrm{dwt}$ has to be taken into account. \emph{Total delay}: Because the model allows different traffic demands per service to be assigned to different paths, we define an individual end-to-end delay $\hat{d}_p^{\lambda,s}$ for every traffic demand, as follows: \begin{equation} \label{exact_service_delay} \begin{split} \forall s \in \mathbb{S}, \forall \lambda \in \Lambda_s, \forall p \in \mathbb{P}_s: \\ \hat{d}_p^{\lambda,s} = \sum_{\ell \in \mathbb{L}} D_\ell \cdot T_{p}^\ell + \sum_{x \in \mathbb{X}_p} \sum_{v \in \mathbb{V}_s} d_{x, v, s}^{\mathrm{pro}}(\vec{\lambda}) \cdot f_{x,\lambda}^{v,s} + d_{s}^{\mathrm{dwt}} \end{split} \end{equation} The first term is the propagation delay, where $D_\ell$ is the delay of the link $\ell$, and $T_{p}^\ell$ specifies whether the link $\ell$ is traversed by path $p \in \mathbb{P}_s$. The second term adds the processing delays caused by all VNFs of the SFC placed on the servers $x \in \mathbb{X}_p$, where the variable $f_{x,\lambda}^{v,s}$ ensures that the demand $\lambda$ is processed at a specific server $x$. Finally, the third term is the total downtime duration due to the migrations of that service chain.
It should be noted that the second term of \eqref{exact_service_delay} includes a nonlinear relation between the binary variable $f^{v,s}_{x, \lambda}$ and the delay variable $d_{x, v, s}^{\mathrm{pro}}$, which also depends on all decision variables $f^{v{'},s{'}}_{x, \lambda{'}}$. To resolve this, we introduce a new delay variable $d_{x, \lambda}^{v,s}$, which is bounded as follows: \begin{equation} \label{new-variable} d_{x, v, s}^{\mathrm{pro}} - D_{t(v)}^\mathrm{pro,max}(1 - f_{x,\lambda}^{v,s}) \leq d_{x, \lambda}^{v,s} \leq D_{t(v)}^\mathrm{pro,max} \cdot f_{x,\lambda}^{v,s} \end{equation} If the VNF is selected at server $x$ by $f_{x,\lambda}^{v,s}=1$, the variable is lower bounded by the exact delay $d_{x, v, s}^{\mathrm{pro}}$ and upper bounded by the maximum VNF delay $D_{t(v)}^\mathrm{pro,max}$. Since $d_{x, v, s}^{\mathrm{pro}} \leq d_{x, \lambda}^{v,s} \leq D_{t(v)}^\mathrm{pro,max}$, the specific delay of a VNF can be restricted. If the VNF is not selected, i.e., $f_{x,\lambda}^{v,s}=0$, the variable takes the value $d_{x, \lambda}^{v,s}=0$, since the constant $D_{t(v)}^\mathrm{pro,max}$ makes the left side of \eqref{new-variable} negative. Hence, the end-to-end delay is mapped to an upper and lower bounded variable $d_p^{\lambda,s}$ given as \begin{equation} \label{total_service_delay} \begin{split} \forall s \in \mathbb{S}, \forall \lambda \in \Lambda_s, \forall p \in \mathbb{P}_s: \\ d_p^{\lambda,s} = \sum_{\ell \in \mathbb{L}} D_\ell \cdot T_{p}^\ell + \sum_{x \in \mathbb{X}_p} \sum_{v \in \mathbb{V}_s} d_{x, \lambda}^{v,s} + d_{s}^{\mathrm{dwt}} \quad \text{,} \end{split} \end{equation} in which the bounding feature is used in the optimization scenarios described next.
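The bounds in \eqref{new-variable} follow the standard big-M pattern for linearizing the product of a binary and a bounded continuous variable. A small sketch (function name is ours) that evaluates the two bounds for both values of the binary:

```python
def big_m_bounds(d_pro, f, d_max):
    """Bounds imposed on the auxiliary variable d' by eq. (new-variable):
        d_pro - d_max * (1 - f) <= d' <= d_max * f
    With f = 1 the lower bound becomes the exact delay d_pro; with f = 0
    the upper bound is 0, forcing d' = 0 for a nonnegative d'
    (assuming 0 <= d_pro <= d_max)."""
    lower = d_pro - d_max * (1 - f)
    upper = d_max * f
    return lower, upper
```

The minimization objective then pushes $d'$ onto its lower bound, so the selected-VNF case recovers the exact delay.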
\section{Online Heuristic Approaches} Since the presented model is a MILP optimization problem and such models are known to be NP-hard \cite{Bulut2015}, in this section we propose a greedy algorithm as an online solution, as well as First-Fit and Random-Fit algorithms for comparison purposes. \subsection{First-Fit and Random-Fit algorithms} Both \emph{First-Fit} (FF) and \emph{Random-Fit} (RF) algorithms are described in Algorithm \ref{algorithm:ff_rf}. While both approaches share most of the code, the \emph{FF\_RF} parameter specifies whether the code runs FF or RF. The process starts with a loop in which every demand from every SFC is considered (line \ref{main_loop_FFRF}). The first step is to retrieve all the paths that have enough link resources to carry traffic demand $\lambda$ and that connect the source and destination nodes (line \ref{admissible_paths_FFRF}). These paths are saved into $\mathbb{P}_s'$, from which one admissible path $p$ is selected: the first one for FF, or a random one for RF (line \ref{choose_path_FFRF}). At this point, we make sure that this path offers enough server resources to allocate all the VNFs of SFC $s$. Then, for every VNF $v$ of SFC $s$ on that path (line \ref{for_functions_FFRF}), we select the servers for allocation. First, we retrieve all servers with enough free capacity to allocate the VNF $v$ and to serve demand $\lambda$ (line \ref{available_servers_FFRF}), and then we choose the first available server for FF or a random one for RF (line \ref{choose_server_FFRF}). Note that, to satisfy the VNF ordering (see constraint (\ref{functionSequenceOrder})), the procedure \emph{chooseServer} only returns a valid server located after the previously allocated VNF and before the next allocated VNF on the path.
While in the FF case line \ref{choose_path_FFRF} already ensures that there will always be a server on which to allocate the next VNF of the chain, in the RF case we make sure here (line \ref{choose_server_FFRF}) that, after the randomly selected server, there is still room to allocate the remaining VNFs of the chain on subsequent servers of the path; otherwise, another server is selected instead. In line \ref{add_function_to_server_FFRF}, we assign the demand and the VNF to the server (i.e., equations (\ref{oneFunctionPerDemand}) and (\ref{mappingFunctionsWithDemands})). After all the VNFs have been placed, the next step is to route traffic demand $\lambda$ over path $p$ (line \ref{route_demand_to_path_FFRF}) and, finally, to add the synchronization traffic for the service chain (line \ref{add_synch_traffic_FFRF}). \begin{algorithm}[!t] \caption{First-Fit/Random-Fit: \textit{main(FF\_RF)}} \begin{algorithmic}[1] \For{$s \in \mathbb{S}$, $\lambda \in \Lambda_s$} \label{main_loop_FFRF} \State $\mathbb{P}_s' \gets$ getAdmissiblePaths($s$, $\lambda$) \label{admissible_paths_FFRF} \State $p \gets$ choosePath(FF\_RF, $\mathbb{P}_s'$) \label{choose_path_FFRF} \For{$v \in \mathbb{V}_s$} \label{for_functions_FFRF} \State $\mathbb{X}_p' \gets$ getAvailableServers($s$, $\lambda$, $v$, $p$) \label{available_servers_FFRF} \State $x \gets$ chooseServer(FF\_RF, $\mathbb{X}_p'$) \label{choose_server_FFRF} \State addVNFToServer($s$, $v$, $\lambda$, $x$) \label{add_function_to_server_FFRF} \EndFor \State routeDemandToPath($s$, $p$, $\lambda$) \label{route_demand_to_path_FFRF} \State addSynchronizationTraffic($s$) \label{add_synch_traffic_FFRF} \EndFor \end{algorithmic} \label{algorithm:ff_rf} \end{algorithm} \subsection{Greedy algorithm} The main function of the greedy algorithm is described in Algorithm \ref{algorithm:greedy_main}. The procedure starts with the natural ordering of the SFCs by their total traffic demand value (line \ref{order_services_GRD}).
This is done in order to first allocate the services with the lowest impact on resource utilization, so as to avoid creating bottlenecks in servers and links during the first phases of the allocation. Then, the algorithm iterates over each service (line \ref{for_services_GRD}) and over each traffic demand of that service (line \ref{for_demands_GRD}). For each traffic demand, we first retrieve all paths with enough free link resources into $\mathbb{P}_s'$ (line \ref{admissible_paths_GRD}). Then, inside a loop over all retrieved paths, we choose a path $p$ (line \ref{choose_path_GRD}, details explained later). This is done to ensure that, in case a path cannot be used for allocating all VNFs, the algorithm tries the next one. Once the path is selected, we start with the placement of all VNFs on it. First, all the available servers for a specific VNF $v$ on path $p$ are retrieved into variable $\mathbb{X}_p'$ (line \ref{available_servers_GRD}), then we choose one server $x$ for that specific VNF in line \ref{choose_server_GRD} (this procedure is explained later) and place the VNF (line \ref{map_function_to_server_GRD}). In case the VNF has already been placed by another demand of the same service, the demand is associated with that VNF instead. Finally, after all VNFs are placed, we map the demand onto the path (line \ref{map_demand_to_path_GRD}).
Finally, as in the previous case, the synchronization traffic for that service is added (line \ref{add_synch_traffic_GRD}). \begin{algorithm}[!t] \caption{Greedy: \textit{main()}} \begin{algorithmic}[1] \State $\mathbb{S}'$ = orderServicesByTotalDemandValue($\mathbb{S}$) \label{order_services_GRD} \For{$s \in \mathbb{S}'$} \label{for_services_GRD} \For{$\lambda \in \Lambda_s$} \label{for_demands_GRD} \State $\mathbb{P}_s' \gets$ getAdmissiblePaths($s$, $\lambda$) \label{admissible_paths_GRD} \For{$p \in \mathbb{P}_s'$} \State $p \gets$ choosePath($s$, $\lambda$, $\mathbb{P}_s'$) \label{choose_path_GRD} \Comment{go to Alg. \ref{algorithm:greedy_choosePath}} \For{$v \in \mathbb{V}_s$} \State $\mathbb{X}_p' \gets$ getAvailableServers($s$, $\lambda$, $p$, $v$) \label{available_servers_GRD} \State $x \gets$ chooseServer($s$, $\lambda$, $p$, $v$, $\mathbb{X}_p'$) \label{choose_server_GRD} \Comment{go to Alg. \ref{algorithm:greedy_chooseServer}} \State mapVNFToServer($v$ , $x$) \label{map_function_to_server_GRD} \EndFor \State mapDemandToPath($s$, $p$, $\lambda$) \label{map_demand_to_path_GRD} \EndFor \EndFor \State addSynchronizationTraffic($s$) \label{add_synch_traffic_GRD} \EndFor \end{algorithmic} \label{algorithm:greedy_main} \end{algorithm} \begin{algorithm}[!t] \caption{Greedy: \textit{choosePath($s$, $\lambda$, $\mathbb{P}_s'$)}} \begin{algorithmic}[1] \State $p \gets$ getUsedPathDemandInitPlacement($s$, $\lambda$, $\mathbb{P}_s'$) \label{get_used_path_for_demand_init_GRD} \If{$p$} \Return $p$ \EndIf \State $p \gets$ getUsedPathInitialPlacement($s$, $\mathbb{P}_s'$) \label{get_used_path_init_GRD} \If{$p$} \Return $p$ \EndIf \State $p \gets$ getUsedPathForSFC($s$, $\mathbb{P}_s'$) \label{get_used_path_for_service_GRD} \If{$p$} \Return $p$ \EndIf \State \Return getPathWithShortestDelay($s$, $\lambda$, $\mathbb{P}_s'$) \label{get_path_shortest_delay_GRD} \end{algorithmic} \label{algorithm:greedy_choosePath} \end{algorithm} \begin{algorithm}[!t] \caption{Greedy:
\textit{chooseServer($s$, $\lambda$, $p$, $v$, $\mathbb{X}_p'$, A)}} \begin{algorithmic}[1] \State $\mathbb{X}_p' \gets$ removeServersPreviousVNFs($\mathbb{X}_p'$) \label{remove_servers_previous_vnfs_GRD} \State $\mathbb{X}_p' \gets$ removeServersNextVNFs($\mathbb{X}_p'$) \label{remove_servers_next_vnfs_GRD} \State $c \gets$ getCloudServer($\mathbb{X}_p'$) \label{get_cloud_server_GRD} \State $x \gets$ getUsedServerDemandInitialPlace($s$, $v$, $\lambda$, $\mathbb{X}_p'$) \label{get_used_server_demand_init_GRD} \State checkPosition(x, c, A) \Comment{go to line \ref{check_position}} \label{check_position_1_GRD} \State $x \gets$ getUsedServerInitialPlacement($s$, $v$, $\mathbb{X}_p'$) \label{get_used_server_init_GRD} \State checkPosition(x, c, A) \Comment{go to line \ref{check_position}} \label{check_position_2_GRD} \State $x \gets$ getUsedServerForSFC($s$, $v$, $\mathbb{X}_p'$) \label{get_used_server_GRD} \State checkPosition(x, c, A) \Comment{go to line \ref{check_position}} \label{check_position_3_GRD} \If{!A} \label{check_last_try_GRD} \Return null \Else \ \Return $\mathbb{X}_p'$[0] \label{get_first_server_GRD} \EndIf \Procedure{checkPosition}{$x$, $c$, A} \label{check_position} \If{$x$ != null} \label{check_x_valid_GRD} \If{$A$ \textbf{\&} $c$ \textbf{\&} indexOf($x$) < indexOf($c$)} \label{check_x_c_1_GRD} \State \Return x \ElsIf{$A$ \textbf{\&} $c$ \textbf{\&} indexOf($x$) > indexOf($c$)} \label{check_x_c_2_GRD} \State \Return c \Else \ \Return $x$ \label{check_x_c_3_GRD} \EndIf \EndIf \EndProcedure \end{algorithmic} \label{algorithm:greedy_chooseServer} \end{algorithm} When selecting a path for a specific traffic demand in line \ref{choose_path_GRD}, the procedure described in Algorithm \ref{algorithm:greedy_choosePath} is executed. 
This procedure executes the following methods in this specific order: return an already used path for the same demand $\lambda$ during the initial placement (line \ref{get_used_path_for_demand_init_GRD}), return any used path for SFC $s$ during the initial placement (line \ref{get_used_path_init_GRD}), return any used path for SFC $s$ (line \ref{get_used_path_for_service_GRD}), or return the path with the shortest delay (line \ref{get_path_shortest_delay_GRD}). If one method does not return a path, the next one is executed. Going back to Algorithm \ref{algorithm:greedy_main}, when choosing a server for a specific VNF in line \ref{choose_server_GRD}, the procedure described in Algorithm \ref{algorithm:greedy_chooseServer} is executed. At this point, we first remove from the set $\mathbb{X}_p'$ the servers that have already allocated VNFs before/after the current VNF in the path (lines \ref{remove_servers_previous_vnfs_GRD} and \ref{remove_servers_next_vnfs_GRD}), in order to satisfy the sequence order constraint (\ref{functionSequenceOrder}). Then, we proceed with the selection of a server from the remaining ones. Here, in case it exists, we first retrieve the cloud server $c$ of the path (line \ref{get_cloud_server_GRD}). Then, we retrieve into $x$ a server already used for VNF $v$ and demand $\lambda$ during the initial placement (line \ref{get_used_server_demand_init_GRD}). In line \ref{check_position_1_GRD}, we check the position of that server in the path, where the procedure is specified in line \ref{check_position}. This procedure receives the server $x$, the cloud server $c$ in case it exists, and the boolean variable $A$, which specifies whether this is the last attempt in terms of remaining available paths. The procedure first checks whether $x$ is valid (line \ref{check_x_valid_GRD}); otherwise, it finishes.
If $x$ is valid, we return $x$ when this is the last attempt ($A$), a cloud server exists in the path, and the index of $x$ is lower than the index of $c$ in the array. If this condition does not apply, we continue with the next condition in line \ref{check_x_c_2_GRD}, which instead checks whether $x$ is located after the cloud server in the array; in that case, the cloud server is returned. If none of the previous conditions applies, $x$ is returned in line \ref{check_x_c_3_GRD}. This procedure essentially guarantees that in all cases there is a location where VNFs can be placed, namely the cloud server, while choosing it only as a last resort. Continuing with line \ref{get_used_server_init_GRD}, we similarly try to retrieve a server used during the initial placement for service $s$, regardless of the traffic demand, and perform the same procedure as in the previous case (line \ref{check_position_2_GRD}). While the first method tries to reuse the exact same server as in the initial placement in order to avoid a migration, here we try to use a server already used by some other demand of the same service during the initial placement in order to avoid a replication. Similarly, the next case in line \ref{get_used_server_GRD} retrieves a server already used by the same service, regardless of whether it stems from the initial placement or was allocated during the current placement. Here again we try to avoid an unnecessary replication, and as before we check the position of the returned server in line \ref{check_position_3_GRD}. If none of the previous methods returned a valid server, we return null in line \ref{check_last_try_GRD} in order to later try the next available path, in case this is not the last path. If it is the last path, we simply return the first available server in the set (line \ref{get_first_server_GRD}).
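The fallback chain of Algorithm \ref{algorithm:greedy_choosePath} can be sketched compactly. The following Python function (names and data layout are our assumptions, not the paper's implementation) tries each preference level in order and falls back to the shortest-delay admissible path:

```python
def choose_path(admissible, init_path_of_demand, init_paths_of_sfc,
                current_paths_of_sfc, delay_of):
    """Preference order of Algorithm 3 (choosePath): (1) the path this
    demand used in the initial placement, (2) any path the SFC used
    initially, (3) any path the SFC currently uses, (4) the admissible
    path with the shortest delay. Each argument except `admissible` and
    `delay_of` is a (possibly empty) list of candidate path ids."""
    for preferred in (init_path_of_demand, init_paths_of_sfc,
                      current_paths_of_sfc):
        for p in preferred:
            if p in admissible:
                return p                 # first preference that is admissible
    return min(admissible, key=delay_of)  # last resort: shortest delay
```

The ordering encodes the same goal as the text: reuse initial-placement decisions first (avoiding migrations), then any current path of the SFC (avoiding replications), and only then optimize delay.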
\subsubsection{Computational complexity} Analyzing the complexity from bottom to top, and denoting by $V_{L}$ the length of the longest SFC, Algorithm \ref{algorithm:greedy_chooseServer} is in the order of $\Theta = O(V_{L} \cdot |\mathbb{X}|)$. Algorithm \ref{algorithm:greedy_choosePath} is in the order of $\Theta' = O(P_{S})$, where $P_{S}$ is the number of paths per SFC. The complexity of Algorithm \ref{algorithm:greedy_main} follows from the complexities of Algorithms \ref{algorithm:greedy_choosePath} and \ref{algorithm:greedy_chooseServer}, and from the complexity of adding the synchronization traffic (line \ref{add_synch_traffic_GRD}), which is in the order of $\Theta'' = O(V_{L} \cdot |\mathbb{X}|^2 \cdot |\mathbb{P}|)$. Denoting by $L_{P}$ the length of the longest path, the complexity of the entire Algorithm \ref{algorithm:greedy_main} is in the order of $O(|\mathbb{S}|^2 + |\Lambda| \cdot L_{P} \cdot P_{S} \cdot [\Theta' + V_{L} \cdot \Theta] \cdot \Theta'')$, which can be simplified to $O(|\mathbb{S}|^2 + |\Lambda| \cdot L_{P} \cdot V_{L} \cdot |\mathbb{X}|^2 \cdot |\mathbb{P}| \cdot [P_{S}^2 + V_{L}^2 \cdot |\mathbb{X}| \cdot P_{S}])$.
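To make the heuristics above concrete, the following is a minimal, self-contained sketch of the First-Fit inner loop of Algorithm \ref{algorithm:ff_rf}. The capacity model, data structures, and the assumption that consecutive VNFs may share a server are simplifications of ours; the point is only the placement pattern: walk along the path and never step backwards, so the chain order of constraint (\ref{functionSequenceOrder}) is preserved.

```python
def first_fit_place(path, capacity, vnf_loads):
    """Place the VNFs of one chain on `path` (list of server ids) with
    First-Fit. `capacity` maps server id -> residual capacity and is
    updated in place; `vnf_loads` lists the load each VNF adds.
    Returns the chosen server per VNF, preserving the chain order by
    never moving to an earlier server on the path."""
    placement = []
    start = 0                       # first path position still allowed
    for load in vnf_loads:
        for i in range(start, len(path)):
            if capacity[path[i]] >= load:
                capacity[path[i]] -= load
                placement.append(path[i])
                start = i           # next VNF at this server or later
                break
        else:
            raise RuntimeError("path cannot host the remaining VNFs")
    return placement
```

Random-Fit differs only in drawing the server index at random among the feasible positions instead of taking the first one.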
\begin{figure}[!t] \centering \subfloat[N7 network]{\includegraphics[width=0.40\columnwidth]{N7}% \label{fig:N7}} \hfil \subfloat[N45 network]{\includegraphics[width=0.60\columnwidth]{N45}% \label{fig:N45}} \caption{Network topologies used in the performance evaluation.} \end{figure} \begin{figure}[!t] \centering \subfloat[RMSE]{\includegraphics[width=0.24\textwidth]{rmse}% \label{fig:rmse}} \hfil \subfloat[Training time]{\includegraphics[width=0.236\textwidth]{time}% \label{fig:time}} \hfil \subfloat[Validation for 1 period]{\includegraphics[width=0.237\textwidth]{test_1-days}% \label{fig:test_1-days}} \hfil \subfloat[Validation for 50 periods]{\includegraphics[width=0.237\textwidth]{test_50-days} \label{fig:test_50-days}} \caption{Traffic prediction model results} \label{fig:traffic_prediction} \end{figure} \section{Performance evaluation} We use the MILP model, implemented with the Gurobi Optimizer, to evaluate a smaller-size network N7 (7 nodes, 20 directed links with 500 units of capacity each, see Fig. \ref{fig:N7}), and the heuristics for a larger-size network N45 (45 nodes, 140 directed links with 1000 units of capacity each, Fig. \ref{fig:N45}). In N7, every node is equipped with one server, whereas in N45 there are 8 servers per node. In both networks, we assume that all nodes can establish on-demand connectivity to a third-party cloud server, whose geographic location is determined by the closest common locations used by cloud providers. Thus, in N7 the geographic locations are chosen regionally, namely the area of Braunschweig (Germany) for the network and the area of Frankfurt for the cloud server. For N45, we use a modified version of the Palmetto network in South Carolina, USA, with the cloud server in North Virginia, USA. The propagation delay is calculated from the distance between nodes, derived from their latitude and longitude with the Haversine formula, assuming a propagation speed of 2/3 of the speed of light.
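This delay computation can be sketched as follows (the city coordinates in the example are approximate and used only for illustration):

```python
import math

def propagation_delay_ms(lat1, lon1, lat2, lon2):
    """Great-circle distance via the Haversine formula, converted into a
    propagation delay assuming signals travel at 2/3 of the speed of
    light, as in the evaluation setup."""
    r_earth_km = 6371.0                      # mean Earth radius
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    dist_km = 2 * r_earth_km * math.asin(math.sqrt(a))
    speed_km_s = (2.0 / 3.0) * 299792.458    # 2/3 of c, in km/s
    return dist_km / speed_km_s * 1000.0     # milliseconds

# Approximate Braunschweig -> Frankfurt one-way propagation delay:
delay = propagation_delay_ms(52.27, 10.52, 50.11, 8.68)
```

For the roughly 270 km between the two areas this yields a one-way delay on the order of 1-2 ms, consistent with the sub-5 ms round-trip times reported below.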
We thereby assume that the links used to connect to the third-party cloud have sufficient capacity for any demand and therefore do not impact the analysis of the server utilization. For each source-destination pair of nodes, 3 paths are pre-computed that do not traverse the cloud node, plus 1 additional path that does. Also, 2 additional paths per node are computed for the synchronization traffic between possible VNFs allocated in the cloud and in the network. The path computation is carried out in this way to make sure that the model has enough freedom to allocate all SFCs in the network and that there is at least one admissible path per SFC to allocate VNFs in the cloud. We assume that every source-destination pair of nodes (except the cloud node) instantiates independent SFCs with variable length from 1 to 10 VNFs, depending on the scenario. The processing load of a certain VNF is calculated from the total amount of traffic processed by the VNF multiplied by a random load ratio ($\Gamma_{t(v)}^\mathrm{pro}$) between 1\% and 100\%. Additionally, an overhead ($\Theta_{t(v)}^s$) is calculated as a random percentage between 1\% and 10\% of the processing load \cite{Reddy2014}. The synchronization traffic between VNFs ($\Gamma_{t(v)}^\mathrm{syn}$) is calculated as 10\% of the processing load of the VNF. The delay parameters per VNF, already explained in Section \ref{service_delay}, are specified using typical values as follows: $D_{t(v)}^\mathrm{proq} = 3~ms$, $D_{t(v)}^\mathrm{prox} =5~ms$, $D_{t(v)}^\mathrm{pro\_x,min} = 2~ms$ and $D_{t(v)}^\mathrm{pro,max} = 10~ms$. In the networks studied, the service delay of all SFCs is constrained to $D_s^{\mathrm{max}} = 400~ms$. The round trip time is, for both networks, always shorter than 5~ms, which leads to a service downtime of duration $D^\mathrm{dwt} = 27.5~ms$ when performing a migration in the worst case scenario \cite{Taleb2019}.
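A minimal sketch of how these per-VNF evaluation parameters could be sampled; only the ranges come from the text, while the helper itself and the uniform sampling are our assumptions:

```python
import random

def sample_vnf_parameters(seed=0):
    """Sample the per-VNF ratios used in the evaluation setup:
    load ratio Gamma_pro in [1%, 100%], VM overhead Theta as a random
    fraction in [1%, 10%] of the processing load, and a fixed
    synchronization ratio Gamma_syn of 10%."""
    rng = random.Random(seed)
    gamma_pro = rng.uniform(0.01, 1.00)       # processing load ratio
    theta_fraction = rng.uniform(0.01, 0.10)  # overhead, fraction of load
    gamma_syn = 0.10                          # synchronization traffic ratio
    return gamma_pro, theta_fraction, gamma_syn
```

Fixing the seed makes the sampled scenario reproducible across runs, which matters when comparing the MILP against the heuristics on identical inputs.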
Two types of results are produced: (i) one setting all SFCs to a certain length while all servers have the same capacity, and (ii) one setting all servers to a certain capacity while all SFCs have a random length. The reason for this is to see independently the effects that SFC lengths and server capacities have on the network. In case (i), the server capacities are set to 1000 units for N7 and 2000 units for N45, and the SFC lengths are increased in steps from 1 to 10. In case (ii), the server capacities vary from 250 to 3000 units and every SFC is of random length between 1 and 10. \begin{figure}[!t] \centering \includegraphics[width=1.0\columnwidth]{diagram}% \caption{Optimization scenarios} \label{fig:diagram} \end{figure} \subsection{Optimization scenarios} We assume that every source-destination pair of nodes generates between 1 and 3 traffic flows, with the traffic demand per flow set to a random value between 1 and 100 traffic units. For each traffic demand, 24 values are generated in one time period following a lognormal distribution with time-varying mean and variance, as explained in Section \ref{traffic_model}. For the time series forecasting, one LSTM network is created and trained for each traffic flow over a certain number of periods, and then evaluated for 1 time period. To determine the optimum number of required training periods, the model has been tested using 1 to 1000 training periods. The resulting RMSE is shown in Fig. \ref{fig:rmse}, which shows that beyond 50 training periods the performance no longer improves. However, the training time continues to increase with the number of training periods, as expected, see Fig. \ref{fig:time}. Taking 1 period as the worst case and 50 as the best case, Fig. \ref{fig:test_1-days} and Fig. \ref{fig:test_50-days} show the predicted and observed normalized traffic demand values over time during the evaluation period, respectively.
Here, we can see how the number of training periods impacts the accuracy of the model. To illustrate the computation time involved, we show the results obtained using the CPU of a machine with an Intel Core i7-6700 and 32 GB of RAM. The total computation time considering all traffic demands is $\approx$7 minutes in N7 when training for 1 period, and $\approx$12 minutes when training for 50 periods. For N45, it takes in total $\approx$13 hours for 50 training periods. While the total computation time can be reduced by using GPUs or by training models in parallel, it should be noted that the network size needs to be considered when using predictions. \begin{figure}[!t] \centering \subfloat[Variable SFC length]{\includegraphics[width=0.50\columnwidth]{7nodes_sfclen_MGR_REP_CLOUD_objval_LP}% \label{fig:7nodes_sfclen_MGR_REP_CLOUD_objval_LP}} \subfloat[Variable server capacity]{\includegraphics[width=0.50\columnwidth]{7nodes_servercap_MGR_REP_CLOUD_objval_LP}% \label{fig:7nodes_servercap_MGR_REP_CLOUD_objval_LP}} \caption{Objective function value for \texttt{obsv}, \texttt{over} and \texttt{pred} scenarios in the N7 network using the \texttt{MILP} model.} \label{fig:7nodes_MGR_REP_CLOUD_objval_LP} \end{figure} \begin{figure}[!t] \centering \subfloat[Variable SFC length]{\includegraphics[width=0.50\columnwidth]{7nodes_sfclen_MGR_REP_CLOUD_pred_objval}% \label{fig:7nodes_sfclen_MGR_REP_CLOUD_pred_objval}} \hfil \subfloat[Variable server capacity]{\includegraphics[width=0.50\columnwidth]{7nodes_servercap_MGR_REP_CLOUD_pred_objval}% \label{fig:7nodes_servercap_MGR_REP_CLOUD_pred_objval}} \caption{Objective function values of \texttt{RF}, \texttt{FF}, \texttt{GRD} and \texttt{MILP} for the \texttt{pred} scenario in the N7 network.} \label{fig:7nodes_MGR_REP_CLOUD_pred_objval} \end{figure} From the generated traffic demand values produced for the evaluation period, three optimization scenarios are derived based on which values are considered during the
first placement: i) observed values (\texttt{obsv}), ii) 80\% of the maximum individual traffic demand values, which corresponds to overprovisioning (\texttt{over}), and iii) predicted values (\texttt{pred}). After the first placement, the second placement is carried out considering the location of the VNFs during the first placement, as explained in equation \eqref{initialsolutionmapping}, and considering the new traffic demand values after a time shift of $\Delta t$ within the set of traffic demand values (see Fig. \ref{fig:test_50-days}). In our case, the first time step for the first placement is taken randomly from the first 18 time values and $\Delta t$ is set to 6 time periods. Hence, in the first scenario, \texttt{obsv}, only the currently observed values at time $t$ are considered for the placement of VNFs. In the second scenario, \texttt{over}, the observed values are ignored, and instead the VNFs are placed assuming the traffic is always at 80\% of the maximum traffic demand value. The third scenario places VNFs considering the predicted traffic values after $\Delta t$. Fig. \ref{fig:diagram} illustrates the optimization process. The second placement uses the first placement as input, and it optimizes the placement again by considering the real monitored and observed traffic demand values. The first placement is carried out using either the \texttt{MILP} model in N7, or the greedy algorithm (\texttt{GRD}) in N45. In all cases, the objective is to allocate VNFs while minimizing the number of replications and the number of virtual functions placed in the cloud. In the first placement, there are no migrations from any previous step to consider. The second placement is carried out using the \texttt{MILP} model in N7 and all heuristics in both networks, in all cases with the same objective: minimizing the number of migrations, replications and cloud VNFs.
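The three scenarios differ only in which demand value drives the first placement; this selection can be sketched as follows (a minimal sketch, with names of our own choosing):

```python
def demand_for_placement(scenario, observed_now, history_max, predicted):
    """Return the traffic value used for the first VNF placement.

    scenario: 'obsv' uses the currently observed demand,
              'over' overprovisions at 80% of the historical maximum,
              'pred' uses the forecast for t + delta_t.
    """
    if scenario == 'obsv':
        return observed_now
    if scenario == 'over':
        return 0.8 * history_max
    if scenario == 'pred':
        return predicted
    raise ValueError('unknown scenario: %s' % scenario)
```

For example, with a current demand of 40 units, a historical maximum of 100, and a forecast of 55, the three scenarios would place VNFs for 40, 80 and 55 units, respectively.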
Finally, for the remainder of the paper, we show the results obtained from the second placement, while using the three scenarios during the first placement, as described. \begin{figure}[!t] \centering \subfloat[Variable SFC length]{\includegraphics[width=0.50\columnwidth]{palmetto_sfclen_MGR_REP_CLOUD_objval_GRD}% \label{fig:palmetto_sfclen_MGR_REP_CLOUD_objval_GRD}} \subfloat[Variable server capacity]{\includegraphics[width=0.50\columnwidth]{palmetto_servercap_MGR_REP_CLOUD_objval_GRD}% \label{fig:palmetto_servercap_MGR_REP_CLOUD_objval_GRD}} \caption{Objective function value for \texttt{obsv}, \texttt{over} and \texttt{pred} scenarios in the N45 network using the \texttt{GRD} algorithm.} \label{fig:palmetto_MGR_REP_CLOUD_objval_GRD} \end{figure} \begin{figure}[!t] \centering \subfloat[Variable SFC length]{\includegraphics[width=0.50\columnwidth]{palmetto_sfclen_MGR_REP_CLOUD_pred_objval}% \label{fig:palmetto_sfclen_MGR_REP_CLOUD_pred_objval}} \hfil \subfloat[Variable server capacity]{\includegraphics[width=0.50\columnwidth]{palmetto_servercap_MGR_REP_CLOUD_pred_objval}% \label{fig:palmetto_servercap_MGR_REP_CLOUD_pred_objval}} \caption{Objective function values for \texttt{RF}, \texttt{FF} and \texttt{GRD} for the \texttt{pred} scenario in the N45 network.} \label{fig:palmetto_MGR_REP_CLOUD_pred_objval} \end{figure} \subsection{Objective function} Since the objective function (equation \eqref{obj_func}) is a joint optimization of three different weighted terms, we first show the results when minimizing all terms, i.e., with all three weights $W_m$, $W_r$ and $W_c$ equal to 1. Fig. \ref{fig:7nodes_MGR_REP_CLOUD_objval_LP} shows the objective value for the three scenarios \texttt{obsv}, \texttt{over} and \texttt{pred} when varying the SFC lengths and when varying the server capacities in N7. It should be noted that some zero values for certain SFC lengths or server capacities are omitted from the plots for clarity.
We can observe that \texttt{pred} outperforms the other two cases. Between \texttt{over} and \texttt{obsv}, when the servers are overloaded the \texttt{over} case performs slightly better than \texttt{obsv}, as expected, due to the overprovisioning factor. Before analyzing the three scenarios in the larger N45 network, let us first compare the heuristics to the \texttt{MILP} model in N7. Fig. \ref{fig:7nodes_MGR_REP_CLOUD_pred_objval} shows again the objective values for the \texttt{pred} scenario, but now comparing the \texttt{MILP} model with the heuristic algorithms \texttt{RF}, \texttt{FF} and \texttt{GRD}. Here we can see that both \texttt{RF} and \texttt{FF} are far from the optimal solution, with \texttt{RF} slightly better than \texttt{FF} in most cases. When using the greedy algorithm for the N45 network, we compare again the three scenarios \texttt{obsv}, \texttt{over} and \texttt{pred} in Fig. \ref{fig:palmetto_MGR_REP_CLOUD_objval_GRD}. Here, we can see a clearer difference between the three cases, with the \texttt{pred} scenario again showing a clear advantage over the other two. This case also better illustrates how the \texttt{over} case outperforms the \texttt{obsv} case mostly when the servers are overloaded, confirming what was only faintly visible in the N7 network. From Fig. \ref{fig:palmetto_MGR_REP_CLOUD_pred_objval} we can compare \texttt{RF}, \texttt{FF} and \texttt{GRD}, in this case for the N45 network. Unlike in N7, here \texttt{FF} outperforms \texttt{RF} in all cases. We observe a trend of \texttt{FF} performing better the more spare capacity the network and servers have, and its values become comparable to those of the \texttt{GRD} algorithm, which nevertheless always performs better.
\begin{figure}[!t] \centering \subfloat[Minimizing migrations]{\includegraphics[width=0.50\columnwidth]{7nodes_sfclen_LP_MGR_repcld}% \label{fig:7nodes_sfclen_LP_MGR_repcld}} \hfil \subfloat[Minimizing replications]{\includegraphics[width=0.50\columnwidth]{7nodes_sfclen_LP_REP_mgrcld}% \label{fig:7nodes_sfclen_LP_REP_mgrcld}} \hfil \subfloat[Minimizing cloud VNFs]{\includegraphics[width=0.50\columnwidth]{7nodes_sfclen_LP_CLOUD_mgrrep}% \label{fig:7nodes_sfclen_LP_CLOUD_mgrrep}} \caption{Number of migrations, replications and cloud VNFs for different SFC lengths in the N7 network using the \texttt{MILP} model.} \label{fig:7nodes_sfclen_objval_LP} \end{figure} \begin{figure}[!t] \centering \subfloat[Number of migrations]{\includegraphics[width=0.50\columnwidth]{palmetto_servercap_GRD_MGR_REP_CLOUD_mgr}% \label{fig:palmetto_servercap_GRD_MGR_REP_CLOUD_mgr}} \hfil \subfloat[Number of replications]{\includegraphics[width=0.50\columnwidth]{palmetto_servercap_GRD_MGR_REP_CLOUD_rep}% \label{fig:palmetto_servercap_GRD_MGR_REP_CLOUD_rep}} \hfil \subfloat[Number of cloud VNFs]{\includegraphics[width=0.50\columnwidth]{palmetto_servercap_GRD_MGR_REP_CLOUD_cld}% \label{fig:palmetto_servercap_GRD_MGR_REP_CLOUD_cld}} \caption{Number of migrations, replications and cloud VNFs for different server capacities in the N45 network using the \texttt{GRD} algorithm.} \label{fig:palmetto_servercap_GRD_MGR_REP_CLOUD} \end{figure} \subsection{Migrations, Replications and Cloud VNFs} In order to better see how the model behaves when minimizing only one of the terms, we set one weight (i.e., $W_m$, $W_r$ or $W_c$) equal to 1, and the others close to 0, in such a way that the sum of all secondary terms lies within the interval $[0,1)$. By doing so, we limit the freedom of the model while ensuring there is no impact on the main term, whose value is always a positive integer. In this regard, Fig.
\ref{fig:7nodes_sfclen_LP_MGR_repcld} shows the results in terms of the number of replications (\texttt{rep}) and the number of cloud VNFs (\texttt{cld}) when minimizing the number of migrations, for the three scenarios \texttt{obsv}, \texttt{over} and \texttt{pred} and different SFC lengths in N7. By looking at \texttt{over-rep} and \texttt{over-cld}, we see how overprovisioning does not allocate replicas and places more functions in the cloud compared to the other cases. In comparison, the \texttt{obsv} case allocates fewer functions in the cloud at the expense of deploying a considerable number of replicas. The \texttt{pred} case can be seen as a trade-off solution, as it allocates considerably fewer VNFs in the cloud than \texttt{over}, independently of the SFC length, and fewer than \texttt{obsv} mostly when the servers are overloaded with long SFCs. In terms of replicas, the \texttt{pred} case requires far fewer resources than the \texttt{obsv} case in almost all cases. When minimizing the number of replications, see Fig. \ref{fig:7nodes_sfclen_LP_REP_mgrcld}, the difference between \texttt{pred} and \texttt{obsv} in terms of allocations in the cloud is much smaller, but \texttt{pred} still reduces the number of migrations independently of the SFC length. Here the \texttt{over} case behaves quite similarly to \texttt{pred} in the number of migrations, but instead requires allocating more cloud VNFs. When minimizing the number of functions in the cloud, see Fig. \ref{fig:7nodes_sfclen_LP_CLOUD_mgrrep}, we see how \texttt{pred} requires far fewer migrations than the other two cases, with no remarkable difference regarding replications. To individually see the number of migrations, replications and cloud VNFs with no influence from the weights (i.e., all terms with the same weight), we now study the N45 network. Fig.
\ref{fig:palmetto_servercap_GRD_MGR_REP_CLOUD_mgr} shows how the \texttt{obsv} case requires many more migrations than the other cases, except when the servers are either heavily overloaded or heavily underloaded, where the values become closer to the \texttt{over} case. On the other hand, the \texttt{pred} case requires the same number of migrations as \texttt{over} when the servers are overloaded, and improves when there are enough free resources available. In Fig. \ref{fig:palmetto_servercap_GRD_MGR_REP_CLOUD_rep}, regarding the number of replications, we see that there is not much difference between \texttt{pred} and \texttt{obsv}, but the \texttt{over} case requires significantly fewer replications, except in the cases where the servers are either heavily overloaded or heavily underloaded. This effect can be explained by the fact that when there are no available resources in the servers, the model cannot perform replications, and when there are more than enough available resources, the model avoids replications that are not essential. When looking at Fig. \ref{fig:palmetto_servercap_GRD_MGR_REP_CLOUD_cld}, we see that there is almost no difference between \texttt{obsv} and \texttt{pred}, but the \texttt{over} case allocates considerably more cloud VNFs than the other two cases.
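The single-term weighting scheme used in these experiments (one dominant weight equal to 1, secondary weights small enough that their terms can never sum to 1) can be sketched as follows, under the assumption that upper bounds on the secondary counts are known:

```python
def secondary_weights(upper_bounds):
    """Return a weight for each secondary term so that the weighted
    secondary sum always lies in [0, 1), given upper bounds on the counts."""
    eps = 1.0 / (sum(upper_bounds) + 1.0)
    return [eps] * len(upper_bounds)
```

For instance, with $W_m = 1$ and at most 50 replications and 30 cloud VNFs, each secondary weight is $1/81$, so the secondary sum is at most $80/81 < 1$ and can never override the integer-valued main term.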
\begin{figure}[!t] \centering \subfloat[Average link utilization]{\includegraphics[width=0.50\columnwidth]{7nodes_servercap_LP_MGR_REP_CLOUD_lu}% \label{fig:7nodes_servercap_LP_MGR_REP_CLOUD_lu}} \hfil \subfloat[Average server utilization]{\includegraphics[width=0.50\columnwidth]{7nodes_servercap_LP_MGR_REP_CLOUD_xu}% \label{fig:7nodes_servercap_LP_MGR_REP_CLOUD_xu}} \hfil \subfloat[Average service delay]{\includegraphics[width=0.50\columnwidth]{7nodes_servercap_LP_MGR_REP_CLOUD_sd}% \label{fig:7nodes_servercap_LP_MGR_REP_CLOUD_sd}} \caption{Resource utilization and service delays for different server capacities in the N7 network} \label{fig:7nodes_servercap_resources} \end{figure} \subsection{Resource Utilization and Service Delay} To show the difference between the three scenarios, Fig. \ref{fig:7nodes_servercap_LP_MGR_REP_CLOUD_lu}, Fig. \ref{fig:7nodes_servercap_LP_MGR_REP_CLOUD_xu} and Fig. \ref{fig:7nodes_servercap_LP_MGR_REP_CLOUD_sd} show the average link utilization, server utilization and service delay, respectively, versus a varying server capacity for N7. For both link and server utilization, the link capacity connecting to the cloud and the cloud servers are not considered. Here, in most cases when the network is not overloaded, the \texttt{over} case has slightly lower link utilization than the other cases: since this case allocates more cloud VNFs, the edge network is less utilized, and since fewer replicas are used, less synchronization traffic is added to the network. Between the \texttt{pred} and \texttt{obsv} cases, the former has slightly lower link utilization in some specific cases. This difference disappears when looking at the server utilization, where only the \texttt{over} case has lower utilization, for the same reason as before.
When comparing the three cases for the average service delay, we notice that \texttt{over} has the lowest delay, even though it generally allocates more cloud VNFs, as we have seen before, and so incurs larger propagation delay. However, this case performs fewer migrations than the other cases, and therefore there is less penalty due to service interruptions. When comparing \texttt{pred} with \texttt{obsv}, we see that \texttt{pred} has lower service delay, since fewer migrations are required. Fig. \ref{fig:palmetto_servercap_resources} shows the same results, but this time for the N45 network. Here, we can better see the lower link utilization of the \texttt{over} case compared with the other two. This is again due to the fact that overprovisioning results in higher usage of the cloud, so the network is less utilized. This is also confirmed when looking at the average server utilization, where the \texttt{pred} and \texttt{obsv} cases make full use of all server resources at the edge before using the cloud, contrary to the \texttt{over} case. The most interesting case concerns the service delay, where we can see how the \texttt{pred} case is able to outperform \texttt{over} when the servers are not overloaded, since the number of migrations is much lower, as seen in Fig. \ref{fig:palmetto_servercap_GRD_MGR_REP_CLOUD_mgr}.
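The delay comparison above hinges on each migration adding the downtime penalty $D^\mathrm{dwt}$ to the affected service. A toy sketch, under the simplifying assumption that the delay terms simply add (the real model of Section \ref{service_delay} is richer):

```python
def service_delay(path_delay_ms, processing_delay_ms, migrations,
                  downtime_ms=27.5):
    """Toy additive model: propagation plus processing delay, plus the
    worst-case downtime D_dwt = 27.5 ms for each migration performed."""
    return path_delay_ms + processing_delay_ms + migrations * downtime_ms
```

With the constraint $D_s^{\mathrm{max}} = 400~ms$, every avoided migration leaves 27.5 ms of extra headroom, which is why scenarios with fewer migrations can afford longer propagation paths toward the cloud.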
\begin{figure}[!t] \centering \subfloat[Average link utilization]{\includegraphics[width=0.50\columnwidth]{palmetto_servercap_GRD_MGR_REP_CLOUD_lu}% \label{fig:palmetto_servercap_GRD_MGR_REP_CLOUD_lu}} \hfil \subfloat[Average server utilization]{\includegraphics[width=0.50\columnwidth]{palmetto_servercap_GRD_MGR_REP_CLOUD_xu}% \label{fig:palmetto_servercap_GRD_MGR_REP_CLOUD_xu}} \hfil \subfloat[Average service delay]{\includegraphics[width=0.50\columnwidth]{palmetto_servercap_GRD_MGR_REP_CLOUD_sd}% \label{fig:palmetto_servercap_GRD_MGR_REP_CLOUD_sd}} \caption{Resource utilization and service delays for different server capacities in the N45 network} \label{fig:palmetto_servercap_resources} \end{figure} \subsection{Discussion and remarks} From the three scenarios analyzed, we observe that in all cases predicting the traffic demands helps reduce the overall number of migrations, replications and usage of the cloud. More specifically, the overprovisioning case generally requires fewer replications than the other two cases, but requires as many migrations as the prediction case when the network is overloaded, and considerably more when the network is underloaded. Because overprovisioning does not consider the fluctuations of traffic, it can, in the best case, match the real traffic and, in the worst case, provision excessive resources in advance, which results in greater usage of the cloud than in the other two cases. Placing VNFs considering only the observed traffic results in a similar total amount of resources as with prediction, since there is not much difference in the number of replications and usage of the cloud, but it requires significantly more migrations to accommodate future demands. In summary, when using traffic prediction, the number of migrations can be reduced by up to 45\% when there are enough available resources to allocate replicas, compared to the other cases studied.
This comes at the expense of using replications and cloud placements, as much as in the observed-traffic case. Compared to the overprovisioning case, that statement remains true, and the usage of the cloud is also reduced, at the cost of allocating almost double the number of replications. However, for traffic prediction to successfully help with this problem, a certain number of training periods is required per independent traffic demand in the network, which can demand considerable computational resources and computation time for larger networks. \section{Conclusions} We studied the problem of optimal placement of VNFs from an ISP point of view, when minimizing migrations and replications. We proposed a traffic forecasting model using LSTM networks and used it to place VNFs according to the predicted traffic demands. We proposed an offline MILP model as well as an online greedy algorithm for the placement optimization problem. We compared three scenarios, considering: (i) only the currently observed traffic demands, (ii) overprovisioning at 80\% of the maximum past value of each traffic demand, or (iii) the predicted traffic values based on history. We showed that with traffic prediction, the number of migrations can be reduced by up to 45\% when there are enough available resources to allocate replicas. This also results in less usage of third-party clouds as compared to capacity overprovisioning. While overprovisioning can be a valid solution when unexpected traffic peaks appear, resulting in temporarily higher usage of the cloud, traffic prediction can minimize the need for it by anticipating a proper placement and replication inside the network. The usage of LSTM networks, however, requires non-negligible training time and computational resources, which also needs to be taken into consideration. \printcredits \bibliographystyle{cas-model2-names}
\section{Introduction} \label{sec:intro} The work of Mostow and Prasad implies that every finite volume hyperbolic $3$-manifold admits a unique hyperbolic structure, up to isometry \cite{Pr}, \cite{Mos}. Thus, geometric invariants of a hyperbolic manifold, such as volume and geodesic lengths, are also topological invariants. It is natural to ask: how effective can such invariants be at distinguishing hyperbolic $3$-manifolds? Furthermore, how do these invariants interact with one another? In this paper, we will study how mutations along \emph{hyperelliptic surfaces} inside of a hyperbolic $3$-manifold affect such invariants. A hyperelliptic surface $F$ is a surface admitting a \emph{hyperelliptic involution}: an order two automorphism of $F$ which fixes every isotopy class of curves in $F$. A \emph{mutation} along a hyperelliptic surface $F$ in a hyperbolic $3$-manifold $M$ is an operation where we cut $M$ along $F$, and then reglue by a hyperelliptic involution $\mu$ of $F$, often producing a new $3$-manifold, $M^{\mu}$. While a mutation can often change the global topology of a manifold, the action is subtle enough that many geometric, quantum, and classical invariants are preserved under mutation; see \cite{DGST} for details. In particular, Ruberman showed that mutating hyperbolic $3$-manifolds along incompressible, $\partial$-incompressible surfaces preserves hyperbolicity and volume in \cite{Ru}. Here, we investigate under which conditions such mutations preserve the smallest $n$ values of the length spectrum, the \emph{initial length spectrum}. The \emph{length spectrum} of a manifold, $M$, is the set of all lengths of closed geodesics in $M$ counted with multiplicities. We will also consider the \emph{complex length spectrum} of $M$: the set of all complex lengths of closed geodesics in $M$ counted with multiplicities.
Given a closed geodesic $\gamma \subset M$, the \emph{complex length} of $\gamma$ is the complex number $\ell_{\mathbb{C}}(\gamma) = \ell(\gamma) + i \theta$, where $\ell(\gamma)$ denotes the length of $\gamma$ and $\theta$ is the angle of rotation incurred by traveling once around $\gamma$. Throughout this paper, any surface will be connected, orientable, and of finite complexity, unless stated otherwise. Any hyperbolic $3$-manifold $M$ will have finite volume and be connected, complete, and orientable. Our investigation requires a surface that we mutate along to be a \emph{least area surface in $M$}, or a close variant, to be defined later. \begin{defn}[Least Area Surface in $M$] \label{defn:LA} Let $F \subset M$ be a properly and smoothly embedded surface in a Riemannian $3$-manifold $M$. Then $F$ is called a \emph{least area surface} if $F$ minimizes area in its homotopy class. \end{defn} Least area surfaces inside of $3$-manifolds are well studied objects. Schoen--Yau \cite{ScYa} showed that incompressible surfaces inside closed $3$-manifolds can always be homotoped to smoothly immersed least area surfaces. Freedman--Hass--Scott \cite{FHS} showed that this resulting immersion is an embedding. Ruberman expanded this analysis to noncompact surfaces in noncompact hyperbolic $3$-manifolds in \cite{Ru}, where he provided conditions for the existence, uniqueness, and embeddedness of least area surfaces in a hyperbolic $3$-manifold. The following theorem gives three possible properties of a hyperbolic $3$-manifold $M$ that can help determine the topology and geometry of $\gamma \cap F \subset M$, where $\gamma$ is a closed geodesic and $F$ is an incompressible surface. These properties are the maximal embedded tube radius $r$ of a neighborhood of $\gamma$, denoted $T_{r}(\gamma)$, the length of $\gamma$, denoted $\ell(\gamma)$, and the \emph{normalized length} of a Dehn filling, which we describe in Definition \ref{defn:NL}.
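We record here the standard relation between complex length and holonomy, which underlies the length comparisons in this paper: if $\rho: \pi_{1}(M) \rightarrow \text{PSL}(2, \mathbb{C})$ is the discrete faithful representation and $\gamma$ is a closed geodesic, then
\[
\operatorname{tr}\big(\rho(\gamma)\big) \; = \; \pm 2\cosh\left(\frac{\ell_{\mathbb{C}}(\gamma)}{2}\right) \; = \; \pm 2\cosh\left(\frac{\ell(\gamma) + i \theta}{2}\right),
\]
so the complex length of $\gamma$ is determined, up to the sign ambiguity in $\text{PSL}(2, \mathbb{C})$, by the trace of $\rho(\gamma)$.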
By a closed curve $n \cdot \gamma$, we mean a simple closed curve that is in the homotopy class of $[n \cdot \gamma] \in \pi_{1}(\partial T_{r}(\gamma))$. We can now state this result. \begin{thm} \label{thm:main} Let $M$ be a hyperbolic manifold with $F \subset M$ an embedded surface that is incompressible and $\partial$-incompressible with $\left|\chi(F) \right| \leq 2$. Let $\gamma \subset M$ be a closed geodesic with embedded tubular radius $r$. Assume \begin{enumerate} \item $r > 2 \ln(1 + \sqrt{2}) $, or \item $\ell(\gamma) < 0.015$, or \item $\gamma$ is the core of a solid torus added by Dehn filling $N \cong M \setminus \gamma$ along a slope of normalized length $\widehat{L} \geq 14.90$. \end{enumerate} Then $\gamma$ can be isotoped disjoint from $F$. Furthermore, if $F$ is embedded in least area form, then either $\gamma \cap F = \emptyset$ without any isotopy or $n \cdot \gamma$ is isotopic into $F$ for some $n \in \mathbb{N}$. \end{thm} A few remarks about this theorem: \begin{itemize} \item This theorem is both a topological and a geometrical statement about $\gamma \cap F$. Only the topological statement is necessary for showing that mutation preserves the initial length spectrum; see Theorem \ref{cor:syspreserved}. \item This theorem is stated in full generality in Theorem \ref{thm:gammasep}, where no constraints are made on the Euler characteristic. We mainly care about $\left|\chi(F) \right| \leq 2$ because the surfaces we will consider in our main result (Theorem \ref{cor:systole_and_vol}) are all \textit{Conway spheres}, i.e., $4$-punctured spheres inside of knot complements. \item Theorem \ref{thm:gammasep} is stated in terms of \textit{almost least area surfaces}, which generalize least area surfaces; see Definition \ref{def:ALAS}. \item $(2)$ implies $(1)$ by the work of Meyerhoff stated in Theorem \ref{thm:Collar_lemma}. $(3)$ implies $(1)$ by the work of Hodgson and Kerckhoff \cite{HoKe}, \cite{HoKe2} on cone deformations.
Furthermore, a version of $(3)$ that implies $(2)$ exists, but we must adjust the lower bound on normalized length to be $\widehat{L} \geq 20.76$. \item $(3)$ can be stated in terms of Dehn filling multiple curves; see Corollary \ref{cor:disjointgeo}. \end{itemize} The proof of Theorem \ref{thm:main} relies on both the topology and geometry of $F \cap T_{r}(\gamma)$, where $T_{r}(\gamma)$ is the embedded tubular neighborhood of radius $r$ around $\gamma$. Since $F$ is incompressible and $\partial$-incompressible, $F$ can be isotoped into almost least area form by Theorem \ref{thm:LAsurfaces}. As a result, components of $F \cap T_{r}(\gamma)$ must be disks or annuli. If a component of $F \cap T_{r}(\gamma)$ that intersects $\gamma$ is a disk, $D_{r}$, then we work to get an area contradiction. Specifically, if $r$ is sufficiently large, then the area of $D_{r}$ inside of this neighborhood will be too big, and so, $\gamma$ must be disjoint from $F$ in this case. As mentioned in the remarks, conditions $(2)$ and $(3)$ each imply $(1)$, so all of our cases rely on a sufficiently large tube radius in the end. If a component of $F \cap T_{r}(\gamma)$ that intersects $\gamma$ is an annulus, $A_{r}$, then this annulus must be parallel to the boundary torus $\partial T_{r}(\gamma)$. Here, $\gamma$ can be isotoped disjoint from $A_{r}$, and more generally, isotoped disjoint from $F$. The following theorem tells us when the initial length spectrum is preserved under mutation. \begin{thm} \label{cor:syspreserved} Let $F \subset M$ be a properly embedded surface that is incompressible, $\partial$-incompressible, and admits a hyperelliptic involution $\mu$. Suppose that $M$ has $n$ geodesics shorter than some constant $L < 0.015$. Then $M$ and $M^{\mu}$ have (at least) the same $n$ initial values of their respective (complex) length spectra. \end{thm} Under these hypotheses, any sufficiently short geodesic $\gamma$ in $M$ can be isotoped disjoint from $F$.
After this isotopy, if we mutate $M$ along $(F, \mu)$ to obtain $M^{\mu}$, then there will also be a closed curve in $M^{\mu}$ corresponding with $\gamma$. We just need to analyze the representations $\rho: \pi_{1}(M) \rightarrow \text{PSL}(2, \mathbb{C})$ and $\rho_{\mu}: \pi_{1}(M^{\mu}) \rightarrow \text{PSL}(2, \mathbb{C})$ to see that $[\gamma]$, as an element of either $\pi_{1}(M)$ or $\pi_{1}(M^{\mu})$, has the same representation (up to conjugacy) in $\text{PSL}(2, \mathbb{C})$, and so, the same (complex) length associated to it in either case. Note that Theorem \ref{cor:syspreserved} only relies on the topological statement from Theorem \ref{thm:main}. In fact, any $\gamma$ that can be homotoped disjoint from $F$ will be preserved under mutation since we only need to consider $\gamma$ as a representative of an element of $\pi_{1}(M)$ or $\pi_{1}(M^{\mu})$; this follows from Theorem \ref{thm:mutationrep} and Lemma \ref{lemma:Fgroups}. This theorem gives us a tool to produce non-isometric hyperbolic $3$-manifolds that have at least the same initial length spectrum. Over the past $35$ years, there have been a number of constructions for producing non-isometric hyperbolic $3$-manifolds that are \textit{iso-length spectral}, i.e., have the same length spectrum. Vign\'{e}ras in \cite{Vi} used arithmetic techniques to produce the first known constructions of such manifolds. Sunada developed a general method for constructing iso-length spectral manifolds \cite{Su}, which helped him produce many iso-length spectral, non-isometric Riemann surfaces. This technique produces covers of a manifold $M$ that are iso-length spectral by finding certain group theoretic conditions on subgroups of $\pi_{1}(M)$. We will refer to any such group theoretic construction for producing covers that have either the same length spectrum or some variation of this as a \textit{Sunada-type construction}. Since Sunada's original work, many Sunada-type constructions have been developed. 
These constructions often have very interesting relations to volume. McReynolds uses a Sunada-type construction in \cite{McR} to build arbitrarily large sets of closed, iso-length spectral, non-isometric hyperbolic manifolds. Furthermore, the size of these sets grows super-polynomially as a function of volume. In contrast, Leininger--McReynolds--Neumann--Reid in \cite{LMNR} also use a Sunada-type construction to show that for any closed hyperbolic $3$-manifold $M$, there exist infinitely many covers $\left\{M_{j}, N_{j}\right\}$ of $M$, such that the length sets of these pairs are equal but $\frac{vol(M_{j})}{vol(N_{j})} \rightarrow \infty$. Here, the \textit{length set} of a manifold is the set of all lengths of closed geodesics counted without multiplicities. Thus, volume can behave drastically differently for hyperbolic $3$-manifolds that are iso-length spectral as compared with hyperbolic $3$-manifolds with the same length set. All of the constructions mentioned above produce \textit{commensurable manifolds}, that is, manifolds that share a common finite-sheeted cover. Sunada-type constructions will always produce commensurable manifolds since they involve taking covers of a common manifold and commensurability is an equivalence relation. On the other hand, the work of Reid \cite{Re} and Chinburg--Hamilton--Long--Reid \cite{ChHaLoRe} shows that iso-length spectral, non-isometric \underline{arithmetic} hyperbolic $3$-manifolds are \textit{always} commensurable. To date, all known examples of iso-length spectral, non-isometric hyperbolic $3$-manifolds are commensurable. This raises the following question: \begin{question} \label{q:spectral} Do there exist incommensurable iso-length spectral hyperbolic $3$-manifolds? \end{question} Here, we construct large families of mutant pretzel knot complements which have the same initial (complex) length spectrum, the same volume, and are pairwise incommensurable.
Our construction does not use arithmetic methods or a Sunada-type construction, but rather, the simple cut and paste operation of mutating along Conway spheres. This work is highlighted in our main theorem below. See Section \ref{sec:RT_and_PK} for the definition of a pretzel knot. \begin{thm} \label{cor:systole_and_vol} For each $n \in \mathbb{N}$, $n \geq 2$, there exist $\frac{(2n)!}{2}$ non-isometric hyperbolic pretzel knot complements that differ by mutation, $\left\{M_{2n+1}^{\sigma}\right\}$, such that these manifolds: \begin{itemize} \item have the same $2n+1$ shortest geodesic (complex) lengths, \item are pairwise incommensurable, \item have the same volume, and \item $\left(\frac{2n-1}{2}\right)v_{\mathrm{oct}} \leq vol(M^{\sigma}_{2n+1}) \leq \left(4n+2\right)v_{\mathrm{oct}}$, where $v_{\mathrm{oct}} \left(\approx 3.6638\right)$ is the volume of a regular ideal octahedron. \end{itemize} \end{thm} Theorem \ref{cor:systole_and_vol} provides an answer to a weak form of Question \ref{q:spectral}. While these mutant pretzel knot complements have the same initial length spectrum, we doubt that any of them are actually iso-length spectral. Almost all sufficiently long geodesics in one of these pretzel knot complements have homotopically essential intersections with all of the Conway spheres. Thus, their corresponding geodesic lengths should be changed by mutation. The fact that these hyperbolic pretzel knot complements are pairwise incommensurable comes from the following theorem. See Section \ref{sec:commensurablity} for full details. \begin{thm} \label{thm:incom} Let $n \geq 2$ and let $q_{1}, \ldots, q_{2n+1}$ be integers such that only $q_{1}$ is even, $q_{i} \neq q_{j}$ for $i \neq j$, and all $q_{i}$ are sufficiently large. Then the complement of the hyperbolic pretzel knot $K \left( \frac{1}{q_{1}}, \frac{1}{q_{2}}, \ldots, \frac{1}{q_{2n+1}} \right)$ is the only knot complement in its commensurability class. 
In particular, any two of these hyperbolic pretzel knot complements are incommensurable. \end{thm} Proving that a particular knot complement is the only knot complement in its commensurability class is generally not an easy task. Only two large classes of knot complements are known to have this property. Reid and Walsh in \cite{ReWa} have shown that hyperbolic $2$-bridge knot complements are the only knot complements in their respective commensurability classes, and similarly, Macasieb and Mattman in \cite{MM} have shown this for the complements of hyperbolic pretzel knots of the form $K\left( \frac{1}{-2}, \frac{1}{3}, \frac{1}{n} \right)$, $n \in \mathbb{Z} \setminus \left\{7\right\}$. Usually the hardest part of this work is showing that these knot complements have no \textit{hidden symmetries}, that is, these knot complements are not irregular covers of orbifolds. We are able to rule out hidden symmetries by analyzing the cusp shapes of certain \textit{untwisted augmented links} (see Section \ref{sec:GEOofUAL}) that we Dehn fill along to obtain our pretzel knot complements. Now, let us outline the rest of this paper. In Section \ref{subsec:MR}, we prove the monotonicity of the mass ratio for least area disks in $\mathbb{H}^{3}$. This result helps give a lower bound on the area of a least area disk inside a ball in $\mathbb{H}^{3}$. Section \ref{sec:LA_surfaces_and_geo} gives the proof of Theorem \ref{thm:main} and states this result in its full generality. This section is broken down into subsections, each dealing with one of the conditions to be satisfied for Theorem \ref{thm:main}. Section \ref{subsec:symmsurf_and_mut} gives the proof of Theorem \ref{cor:syspreserved} and a number of corollaries to this theorem. In Section \ref{sec:RT_and_PK}, we construct and describe our class of hyperbolic pretzel knots which are mutants of one another. 
We also highlight a theorem from our past work \cite{Mi} that describes how many of these mutant pretzel knot complements are non-isometric and have the same volume. In Section \ref{sec:GEOofUAL}, we analyze the geometry of our pretzel knots by realizing them as Dehn fillings of untwisted augmented links, whose complements have a very simple polyhedral decomposition. In particular, this analysis allows us to put a lower bound on the normalized lengths of the Dehn fillings performed to obtain our pretzel knot complements, and also, helps determine the cusp shapes of the pretzel knots themselves. In Section \ref{sec:commensurablity}, we prove that these knots are pairwise incommensurable. In Section \ref{sec:mutations_sys}, we apply Theorem \ref{cor:syspreserved} to show that the pretzel knot complements in our class have the same initial length spectrum. We also give an application to closed hyperbolic $3$-manifolds with the same initial length spectrum. Putting all these results together gives Theorem \ref{cor:systole_and_vol} in Section \ref{sec:mutations_sys}. We are grateful to David Futer for his help and guidance with this project. We thank Frank Morgan for directing us to the monotonicity of the mass ratio result found in his book \cite{Mo}. We thank Jessica Purcell for providing useful comments and help with understanding cone deformations. Finally, we thank the referees for making numerous helpful comments. \section{Monotonicity of the mass ratio for least area disks in $\mathbb{H}^{3}$} \label{subsec:MR} Throughout this section, $\ell(-)$ will denote hyperbolic length, and $B(a,r) \subset \mathbb{H}^{3}$ will denote a ball of radius $r$ centered at $a$. Also, $A(-)$ will denote the area that a smoothly immersed surface inherits from a hyperbolic $3$-manifold by pulling back the hyperbolic metric. Here, we establish a useful result for least area disks in $\mathbb{H}^{3}$.
\begin{defn}[Least Area Disk] \label{defn:LA2} Let $D \subset M$ be a properly and smoothly embedded disk in a Riemannian $3$-manifold $M$. Let $c$ be a simple closed curve in $M$ such that $\partial D =c$. Then $D$ is called a \emph{least area disk} in $M$, if $D$ minimizes area amongst all properly and smoothly immersed disks with boundary $c$. \end{defn} The compactness theorem in \cite[Theorem 5.5]{Mo} guarantees that this infimum is always realized for disks in $\mathbb{R}^{n}$. Furthermore, the regularity theorem in \cite[Theorem 8.1]{Mo} says such an area minimizing disk is smooth and embedded in its interior. Similar results hold for disks in $\mathbb{H}^{n}$. The following definition will be useful for analyzing least area disks in $\mathbb{H}^{3}$. \begin{defn}[Mass Ratio and Density] Let $a \in \mathbb{H}^{3}$ and consider $A(D \cap B(a,r))$, the area of a disk inside a ball. Define the \emph{mass ratio} to be \begin{center} $\Theta (D, a, r) = \frac{A(D \cap B(a,r))}{4\pi\sinh^{2}(\frac{r}{2})}$. \end{center} Define the \emph{density} of $D$ at $a$ to be \begin{center} $\Theta (D,a) = \lim_{r \rightarrow 0} \Theta (D, a, r)$. \end{center} \end{defn} A few comments about the above definition are in order. First, $4\pi\sinh^{2}(\frac{r}{2})$ is the area of a totally geodesic disk of radius $r$ in $\mathbb{H}^{n}$. Also, for smoothly immersed surfaces, $\Theta (D,a) \geq 1$ at any point $a \in D$. For an embedded surface we actually have $\Theta (D,a) = 1$. If $D$ is not embedded at a point $a \in D$, then restricting to a subset $D'$ of $D$ so that $D' \cap B(a,r)$ is embedded only decreases the numerator of the mass ratio. See \cite[Chapter 2]{Mo} for more on densities. The monotonicity of the mass ratio was proved in the Euclidean setting by Federer \cite{Fe} and a proof can also be found in Morgan \cite[Theorem $9.3$]{Mo}. Here, we obtain a similar result in $\mathbb{H}^{3}$ using the same techniques as the proof given in Morgan.
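Before proving monotonicity, the following illustrative computation (added here as a sanity check; it is immediate from the definition) shows that a totally geodesic disk realizes the mass ratio exactly. If $D$ is a totally geodesic disk and $a \in \mathring D$ is its center, then $D \cap B(a,r)$ is itself a totally geodesic disk of radius $r$ whenever $0 < r < d(a, \partial D)$, so

```latex
% Sanity check: a totally geodesic disk centered at a has constant mass ratio.
\Theta (D, a, r)
  = \frac{A(D \cap B(a,r))}{4\pi\sinh^{2}(\frac{r}{2})}
  = \frac{4\pi\sinh^{2}(\frac{r}{2})}{4\pi\sinh^{2}(\frac{r}{2})}
  = 1 \qquad \text{for all } 0 < r < d(a, \partial D).
```

In particular, $\Theta (D,a) = 1$ in this case, and the mass ratio is (weakly) monotone in $r$, consistent with the theorem below.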
\begin{thm} \label{thm:Monotonicity_of_MR} Let $D$ be a least area disk in $\mathbb{H}^{3}$. Let $a \in \mathring D \subset \mathbb{H}^{3}$. Then for $0 < r< d(a, \partial D)$, the mass ratio $\Theta (D, a, r)$ is a monotonically increasing function of $r$. \end{thm} To prove this theorem, we need the following basic fact in hyperbolic trigonometry: \begin{lemma} \label{lemma:hyptrig} $\frac{\sinh(\frac{r}{2})}{\cosh(\frac{r}{2})} = \frac{\cosh(r)-1}{\sinh(r)}$, for $r >0$. \end{lemma} \begin{proof} This is a simple algebraic exercise that requires a few identities: \begin{center} $\frac{\sinh(\frac{r}{2})}{\cosh(\frac{r}{2})} = \sqrt{\frac{\cosh(r)-1}{\cosh(r)+1}} = \frac{\cosh(r)-1}{\sqrt{\cosh^{2}(r)-1}} = \frac{\cosh(r)-1}{\sinh(r)}$. \end{center} The first equality comes from well-known half-angle formulas. The rest of the equalities come from algebraic manipulations and the fact that $1 = \cosh^{2}(r) - \sinh^{2}(r)$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:Monotonicity_of_MR}] For $0 < r< d(a, \partial D)$, let $f(r)$ denote $A(D \cap B(a,r))$. Obviously, $f$ is monotonically increasing, which implies that $f'(r)$ exists almost everywhere. Set $\gamma_{r} = \partial (D \cap B(a,r))$. Now, we have that \begin{center} $(1)$ \; \; $ \ell(\gamma_{r}) \leq f'(r)$, \end{center} which is the ``co-area formula'' from \cite[Lemma 2.2]{HS}. This inequality holds whenever $\gamma_{r}$ is a $1$-manifold, i.e., whenever $D$ intersects $\partial B(a,r)$ transversely. Since $D$ is area-minimizing, $A(D \cap B(a,r)) \leq A(C)$, where $C$ is the cone over $\gamma_{r}$ to $a$. \begin{figure}[ht] \includegraphics[scale=0.65]{Cone_in_Ball.eps} \caption{The hyperbolic cone $C$ over $\gamma_{r}$ to $a$ in the upper half-space model of $\mathbb{H}^{3}$.} \label{Cone_in_Ball} \end{figure} \textbf{Claim:} $A(C) = \ell(\gamma_{r}) \frac{\cosh(r) - 1}{\sinh(r)}$. \\ Let $\gamma$ be the projection of $\gamma_{r}$ to the unit tangent sphere centered at $a$. 
Our area form is $dA = ds dR$, where $dR$ is the change in radius of a hyperbolic sphere and $ds = \sinh(R) d\theta$ is arc length on a sphere of radius $R$. The area form on $C$ is inherited from geodesic polar coordinates in $\mathbb{H}^{3}$. We have that \begin{center} $A(C) = \int_{0}^{r} \int_{0}^{\ell(\gamma_{R})} ds dR = \int_{0}^{r} \int_{0}^{\ell(\gamma_{R})} \sinh(R) d\theta dR = \int_{\gamma} d\theta \int_{0}^{r} \sinh(R) dR = \ell(\gamma)(\cosh(r) - 1)$. \end{center} In order to rescale to make $A(C)$ a function of $\ell(\gamma_{r})$, we use the fact that $\ell(\gamma_{r}) = \int_{\gamma}\sinh(r) d\theta = \ell(\gamma) \sinh(r)$ to get that $A(C) = \ell(\gamma_{r}) \frac{\cosh(r)-1}{\sinh(r)}$. \\ Putting (1) together with the previous claim and Lemma \ref{lemma:hyptrig} gives: \begin{center} $f(r) \leq A(C) = \ell(\gamma_{r}) \frac{\cosh(r) - 1}{\sinh(r)} \leq f'(r) \frac{\cosh(r) - 1}{\sinh(r)} = f'(r) \frac{\sinh(\frac{r}{2})}{\cosh(\frac{r}{2})}$. \end{center} Consequently, \begin{center} $\frac{d}{dr} \left[4\pi \Theta (D, a, r)\right] = \frac{d}{dr}\left[f(r) \sinh^{-2}(\frac{r}{2}) \right] = \frac{f'(r)}{\sinh^{2}(\frac{r}{2})} - \frac{f(r)\cosh(\frac{r}{2})}{\sinh^{3}(\frac{r}{2})} = \frac{\cosh(\frac{r}{2})}{\sinh^{3}(\frac{r}{2})} \left[ f'(r)\frac{\sinh(\frac{r}{2})}{\cosh(\frac{r}{2})} - f(r) \right] \geq 0$ \end{center} since $\frac{\cosh(\frac{r}{2})}{\sinh^{3}(\frac{r}{2})} \geq 0$ for any $r > 0$. \end{proof} The following corollary will play a pivotal role in Section \ref{sec:LA_surfaces_and_geo}. \begin{cor} \label{cor2} Suppose $D \subset \mathbb{H}^{3}$ is a least area disk and $a \in \mathring D$. Then $A(D \cap B(a,r)) \geq 4\pi\sinh^{2}(\frac{r}{2})$, for any $r$, $0 < r \leq d(a, \partial D)$. \end{cor} \begin{proof} Let $D \subset \mathbb{H}^{3}$ be a least area disk and $a \in \mathring D \subset \mathbb{H}^{3}$.
Since $\Theta (D, a, r)$ is increasing with $r$, we have that: \begin{center} $\Theta (D,a) =\lim_{t \rightarrow 0} \Theta (D, a, t) \leq \Theta (D,a,r) =\frac{A(D \cap B(a,r))}{4\pi\sinh^{2}(\frac{r}{2})}$, \end{center} for any $0 < r < d(a, \partial D)$. By continuity of the area function, we can extend this up to $r = d(a, \partial D)$. Now, since $D$ is smoothly immersed, $\Theta(D,a) \geq 1$ for all $a \in \mathring D$. By the above, we have that $A(D \cap B(a,r)) \geq 4\pi\sinh^{2}(\frac{r}{2})$, for any $0 < r \leq d(a, \partial D)$, as desired. \end{proof} \section{Least area surfaces and short geodesics in hyperbolic $3$-manifolds} \label{sec:LA_surfaces_and_geo} First, let us set some notation. Let $M$ be a hyperbolic $3$-manifold. The universal cover of $M$ is $\mathbb{H}^{3}$, and there exists a covering map $\rho: \mathbb{H}^{3} \rightarrow M$. Let $T_{r}(\gamma)$ denote an embedded tubular neighborhood of radius $r$ about a closed geodesic $\gamma \subset M$. The geodesic $\gamma$ lifts to a geodesic $\tilde{\gamma}$ in $\mathbb{H}^{3}$, and we will assume that the endpoints of $\tilde{\gamma}$ are $0$ and $\infty$. Let $T_{r}(\tilde{\gamma})$ be the tubular neighborhood of radius $r$ about $\tilde{\gamma}$ in $\mathbb{H}^{3}$. Let $F$ be a surface in $M$ realized by the map $\varphi: S \rightarrow F$. Suppose $\gamma \cap F \neq \emptyset$, and say $p_{0} = \varphi(s_{0}) \in \gamma \cap F \subset M$. Let $\tilde{S}$ be the universal cover of $S$, and denote by $\rho_{1}$ the covering map $\rho_{1}: \tilde{S} \rightarrow S$. Let $\tilde{s_{0}} \in \tilde{S}$ be a point with $\rho_{1}(\tilde{s_{0}}) = s_{0}$ and let $\tilde{\varphi}: \tilde{S} \rightarrow \mathbb{H}^{3}$ be a lift of $\varphi$ such that $\tilde{p_{0}} = \tilde{\varphi}(\tilde{s_{0}})$ is a point in $\tilde{\gamma}$. We have the following commutative diagram.
\begin{center} $\begin{CD} (\tilde{S}, \tilde{s_{0}}) @> \tilde{\varphi} >> (\mathbb{H}^{3}, \tilde{p_{0}})\\ @VV\rho_{1}V @VV\rho V\\ (S, s_{0}) @>\varphi>> (M, p_{0}) \end{CD}$ \end{center} The focus of the following subsections is to prove a number of propositions that can tell us when $\gamma$ can be isotoped disjoint from $F$ based on a variety of geometric and topological properties. Specifically, we will be interested in the tube radius of $\gamma$, the length of $\gamma$, and particular Dehn filling slopes. We will then use these conditions to show when the initial length spectrum can be preserved under mutation. We will always be working with an \textit{almost least area surface} $F$ that is incompressible and $\partial$-incompressible in a hyperbolic $3$-manifold $M$. The existence and embeddedness of such surfaces is provided by the following result of Ruberman. First, we define an almost least area surface. \begin{defn}[Almost Least Area Surface in $M$] \label{def:ALAS} A properly and smoothly embedded surface $F$ in a Riemannian $3$-manifold $M$ is called \textit{almost least area} if $F$ is either a least area surface (as given in Definition \ref{defn:LA}), or is the boundary of an $\epsilon$-neighborhood of a one-sided embedded least area surface $F'$. \end{defn} \textbf{Remark:} Theorems about almost least area surfaces hold for all $\epsilon$ sufficiently small. \newline For the rest of Section \ref{sec:LA_surfaces_and_geo}, we will assume that any surface $F \subset M$ is a properly and smoothly embedded surface inside of a hyperbolic $3$-manifold $M$. \begin{thm} \cite[Theorem 1.6]{Ru} \label{thm:LAsurfaces} Let $F \subset M$ be a surface that is incompressible and $\partial$-incompressible. Then $F$ can be properly isotoped to an almost least area surface. 
\end{thm} \subsection{Least area surfaces and the tube radius of $\gamma$} \label{subsec:LA_surfaces} The following proposition tells us that a closed geodesic $\gamma$ can be isotoped disjoint from an incompressible surface, if $\gamma$ has a sufficiently large embedded tubular radius. This fact can also be shown using \cite[Lemma 4.3]{FP2}. However, here we provide additional geometric information about $\gamma \cap F$, when $F$ is in almost least area form. Recall that by a closed curve $n \cdot \gamma$, we mean a simple closed curve that is in the homotopy class of $[n \cdot \gamma] \in \pi_{1}(\partial T_{r}(\gamma))$. \begin{prop} \label{thm:LA_surface_disjoint} Let $\gamma \subset M$ be a closed geodesic with embedded tubular radius $r$, and let $F$ be a surface in $M$ that is incompressible and $\partial$-incompressible. Set $h(x) = 2\sinh^{-1}(\sqrt{\frac{x}{2}})$. Assume $r> h( \left|\chi(F) \right| )$. Then $\gamma$ can be isotoped disjoint from $F$. Furthermore, if $F$ is in almost least area form, then either $\gamma \cap F = \emptyset$ without any isotopy or $n \cdot \gamma$ is isotopic into $F$ for some $n \in \mathbb{N}$. In particular, if $\left|\chi(F) \right| \leq 2$, then our result holds whenever $r > 2 \ln(1 + \sqrt{2}) $. \end{prop} In order to prove this proposition, we will need the following lemma, which gives a lower bound on the area of a least area disk inside a tubular neighborhood of a geodesic. \begin{lemma} \label{lemma:LA_ surface_in_tube} Let $\gamma \subset M$ be a closed geodesic with embedded tubular neighborhood $T_{r}(\gamma)$. Suppose $D_{r}$ is a least area disk in $M$ such that $\gamma \cap D_{r} \neq \emptyset$ and $ \partial D_{r} \subset \partial T_{r}(\gamma)$. Then $A(D_{r} \cap T_{r}(\gamma)) \geq 4\pi\sinh^{2}(\frac{r}{2})$.
\end{lemma} \begin{proof} Since $\pi_{1}(D_{r})$ is trivial, $D_{r}$ lifts isometrically to a disk $\tilde{D_{r}} \subset T_{r}(\tilde{\gamma}) \subset \mathbb{H}^{3}$, with $\partial \tilde{D_{r}} \subset \partial T_{r}(\tilde{\gamma})$ and $\tilde{p_{0}} \in \tilde{D_{r}} \cap \tilde{\gamma}$. Since $D_{r}$ is least area and $D_{r}$ lifts isometrically to $\tilde{D_{r}}$, $\tilde{D_{r}}$ is a least area disk in $\mathbb{H}^{3}$ for the boundary curve $c = \partial \tilde{D_{r}}$. See Figure \ref{Disk_in_Tube}. Since $\partial \tilde{D_{r}} \subset \partial T_{r}(\tilde{\gamma})$ and $\tilde{p_{0}} \in \tilde{\gamma}$, every point of $\partial \tilde{D_{r}}$ is at distance at least $r$ from $\tilde{p_{0}}$, so Corollary \ref{cor2} applies and gives $A(\tilde{D_{r}} \cap B(\tilde{p_{0}}, r)) \geq 4\pi\sinh^{2}(\frac{r}{2})$. Therefore, \begin{center} $A(D_{r} \cap T_{r}(\gamma)) = A(\tilde{D_{r}}) \geq A(\tilde{D_{r}} \cap B(\tilde{p_{0}}, r)) \geq 4\pi\sinh^{2}(\frac{r}{2})$, \end{center} as desired. \end{proof} \begin{figure}[ht] \includegraphics[scale=0.65]{Disk_in_tube.eps} \caption{The lift of a disk $D_{r}$ to $\mathbb{H}^{3}$ in the upper half-space model.} \label{Disk_in_Tube} \end{figure} \begin{proof}[Proof of Proposition \ref{thm:LA_surface_disjoint}] Assume that $F$ has been isotoped to an (embedded) almost least area surface, as provided by Theorem \ref{thm:LAsurfaces}. Set $F_{r}= F \cap T_{r}(\gamma)$. We will always choose $r$ so that $F$ intersects $\partial(T_{r}(\gamma))$ transversely. By Sard's Theorem, this will hold for almost every $r$. Assume that $\gamma \cap F \neq \emptyset$. \textbf{Claim:} $F_{r}$ is incompressible in $T_{r}(\gamma)$, and consequently, each component of $F_{r}$ is a disk or annulus. Suppose that $F_{r}$ is compressible in $T_{r}(\gamma)$. Then there exists a disk $D' \subset T_{r}(\gamma)$ with $\partial D' \subset F_{r}$, but $\partial D'$ does not bound a disk in $F_{r}$. Since $F$ is incompressible in $M$, $\partial D'$ bounds a disk $D$ in $F$. We claim that the interior of $D$ must lie completely outside of $T_{r}(\gamma)$.
If the interior of $D$ intersects the interior of $T_{r}(\gamma)$, then $F_{r}$ would have at least two boundary components on $\partial T_{r}(\gamma)$ (one coming from $\partial D$ and one coming from the interior of $D$ intersecting $\partial T_{r}(\gamma)$). These boundary components obviously bound disks lying on $\partial T_{r}(\gamma)$ in $M$. Since $F$ is incompressible in $M$, these boundary components of $F_{r}$ must bound disks in $F$. This gives a contradiction since there would be an annulus that is a subset of $F$ connecting these two boundary components, and so, such disks could not exist. So, $D$ must lie completely outside of $T_{r}(\gamma)$. Lift $D$ isometrically to a disk $\tilde{D} \subset \mathbb{H}^{3}$ with $\partial \tilde{D} \subset {T}_{r}(\tilde{\gamma})$. Here, the interior of $\tilde{D}$ must also lie completely outside of ${T}_{r}(\tilde{\gamma})$. Now, $\tilde{D}$ can be homotoped to a disk (keeping $\partial \tilde{D}$ fixed) that lies in ${T}_{r}(\tilde{\gamma})$ via a nearest point projection map. We claim that this homotopy is area-decreasing. For this, we give $\mathbb{H}^{3}$ coordinates $(\rho, \theta, h)$, where $\rho \in \left( 0, \infty \right) $, $ \theta \in \left[ 0, 2\pi \right]$, and $h \in \mathbb{R}$. A point in $\mathbb{H}^{3}$ with coordinates $(\rho, \theta, h)$ has distance $\rho$ to the point on $\tilde{\gamma}$ at signed distance $h$ from $(0,0,1)$, and $\theta$ is the polar angle coordinate of its projection to the $(x,y)$-plane. A direct computation shows that \begin{center} $(\rho, \theta, h) = e^{h}(\tanh \rho \cos \theta, \tanh \rho \sin \theta, \sech \rho)$ \end{center} pulls back the hyperbolic metric on the upper half-space model to the diagonal metric with respective diagonal entries $1$, $\sinh^{2} \rho$, and $\cosh^{2} \rho$. The nearest point projection to ${T}_{r}(\tilde{\gamma})$ in these coordinates is given by $(\rho, \theta, h) \rightarrow (r, \theta, h)$, for $ \rho \geq r$.
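In these coordinates the area decrease can be seen explicitly (an added elaboration; it uses only the diagonal metric just computed). In orthonormal frames for the metric $\mathrm{diag}(1, \sinh^{2}\rho, \cosh^{2}\rho)$, the differential of the projection $P(\rho, \theta, h) = (r, \theta, h)$, defined for $\rho \geq r$, has singular values

```latex
% Singular values of dP in orthonormal frames for diag(1, sinh^2(rho), cosh^2(rho)):
% dP kills the radial direction and contracts the other two directions.
0, \qquad \frac{\sinh r}{\sinh \rho} \leq 1, \qquad \frac{\cosh r}{\cosh \rho} \leq 1
\qquad (\rho \geq r),
```

so $P$ scales the area form of any surface by at most the product of the two largest singular values, which is at most $1$, with strict decrease wherever $\rho > r$.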
A direct computation shows that this projection reduces the area form of $\tilde{D}$ pointwise. Thus, projecting $\tilde{D}$ onto $\partial {T}_{r}(\tilde{\gamma})$ will give an area-decreasing homotopy. Projecting this homotopy down to $M$ yields an area-decreasing homotopy of $F$, which is a contradiction if $F$ is a least area surface. If $F$ is an $\epsilon$-neighborhood of a one-sided embedded least area surface $F'$, then we choose $\epsilon$ sufficiently small so that the strict area inequality we get from this homotopy still holds as a (possibly non-strict) inequality for $F$. Thus, $F_{r}$ is incompressible in $T_{r}(\gamma)$. The only incompressible surfaces with boundary that can be inside of $T_{r}(\gamma)$ are essential disks and annuli. We will now consider the two possibilities for the geometry of $\gamma \cap F$, when $F$ is in almost least area form. \textbf{Case 1:} A component of $F_{r}$ is a disk that intersects $\gamma$. \\ Say $D_{r}$ is a disk component of $F_{r}$ that intersects $\gamma$. If $F$ is in least area form, then we have the following area inequality: \begin{center} $2\pi \left| \chi(F) \right| \geq A(F) > A(F_{r}) \geq A(D_{r} \cap T_{r}(\gamma)) \geq 4\pi\sinh^{2}(\frac{r}{2})$. \end{center} The first inequality comes from the Gauss-Bonnet Theorem, combined with properties of minimal surfaces (see Futer--Purcell \cite[Lemma $3.7$]{FP2}). The last inequality comes from Lemma \ref{lemma:LA_ surface_in_tube}. Note that we have a strict inequality for a least area surface, and by taking $\epsilon$ sufficiently small, the inequality $2\pi \left| \chi(F) \right| \geq 4\pi\sinh^{2}(\frac{r}{2})$ still holds if $F$ has been homotoped from a least area surface to an $\epsilon$-neighborhood of a one-sided least area surface. This gives us that $\sqrt{\frac{\left| \chi(F) \right|}{2}} \geq \sinh({\frac{r}{2}})$. Recall that $\sinh^{-1}(y) = \ln( y + \sqrt{y^{2}+1})$ and that $\sinh(x)$ is an increasing function.
Thus, \begin{center} $h(\left| \chi(F) \right|) = 2 \sinh^{-1}(\sqrt{\frac{\left| \chi(F) \right|}{2}}) = 2 \ln ( \sqrt{\frac{\left| \chi(F) \right|}{2}} + \sqrt{\frac{\left| \chi(F) \right|}{2} +1} ) \geq r.$ \end{center} So, if $\gamma$ has a large enough embedded tubular radius, we will have a contradiction, specifically, if $r > h(\left| \chi(F) \right|)$. In particular, if $\left|\chi(F) \right| \leq 2$, then $r > 2 \ln(1 + \sqrt{2}) = h(2) \geq h( \left|\chi(F) \right| )$ will provide the necessary area contradiction, and so, $\gamma \cap F = \emptyset$. \textbf{Case 2:} Every component of $F_{r}$ that intersects $\gamma$ is an annulus. \\ Suppose $A_{r}$ is an annulus component of $F_{r}$ that intersects $\gamma$. In this case, the inclusion map $i: A_{r} \rightarrow T_{r}(\gamma)$ induces an injective homomorphism $i_{\ast}: \pi_{1} ( A_{r}) \hookrightarrow \pi_{1} ( T_{r}(\gamma) )$ with $[\alpha] \mapsto [n \cdot \gamma]$ for some $n \in \mathbb{N}$, where $[\alpha]$ is the homotopy class of the core of the annulus $A_{r}$. Now, $[\alpha]$ can be represented by a curve $\alpha$ on a component of $\partial A_{r}$, with $A_{r}$ providing the isotopy between the core and the boundary component. Since $\partial A_{r} \subset \partial T_{r}(\gamma)$, $\alpha$ is isotopic into the boundary torus $\partial T_{r}(\gamma)$, providing a closed curve of the form $n \cdot \gamma$ on $\partial T_{r}(\gamma)$. Finally, we show that our topological statement holds, that is, $\gamma$ can be isotoped disjoint from $F$ in both cases. Obviously, if $\gamma \cap F = \emptyset$, then no isotopy needs to take place. So, suppose $n \cdot \gamma$ is isotopic into $F$. The proof of Case 2 explains the topology of such a situation. Specifically, the annuli $\left\{A_{r}^{i}\right\}_{i=1}^{n}$ are boundary parallel to $\partial T_{r}(\gamma)$, and so, could be isotoped disjoint from $\gamma$.
If $F_{r}$ consists of multiple annuli that intersect $\gamma$, then we start by isotoping the outermost annuli to the boundary and proceed inward. Equivalently, we could keep $\left\{A_{r}^{i}\right\}_{i=1}^{n}$ fixed (since it is part of our least area surface $F$) and isotope $\gamma$ so that this closed curve is disjoint from $\left\{A_{r}^{i}\right\}_{i=1}^{n}$, and more generally, disjoint from $F$. \end{proof} It is important to note that Case 2 of Proposition \ref{thm:LA_surface_disjoint} is certainly a possibility and can be an obstruction to a useful lower bound estimate on $A(F)$. Techniques similar to the proof of Theorem \ref{thm:Monotonicity_of_MR} can be used to find a lower bound for $A(F \cap T_{r}(\gamma))$ when every component is an annulus, but the lower bound is of the form $C_{0} \cdot \ell(\gamma) \cdot \sinh(r)$, where $C_{0} >0$ is a constant. It is possible to put a hyperbolic metric on a given surface $F$ so that a specific geodesic is arbitrarily short and has an embedded collar of area $2 \ell(\gamma) \cdot \sinh(r)$. So, if $\ell(\gamma)$ is sufficiently short and $\gamma$ actually lies on $F$, then the quantity $C_{0} \cdot \ell(\gamma) \cdot \sinh(r)$ could be too small to be useful for our purposes. \subsection{Least area surfaces and the length of $\gamma$} \label{subsec:LA_surfaces_length} Next, we will examine when $\gamma$ can be isotoped disjoint from $F$ based on the length of $\gamma$. To do this, we will need to use the Collar Lemma, which essentially says that the shorter the length of a closed geodesic in a hyperbolic $3$-manifold, the larger the embedded tubular neighborhood of that geodesic. The following quantitative version of the Collar Lemma comes from Meyerhoff \cite{Me}: \begin{thm}[Collar Lemma] \label{thm:Collar_lemma} Let $\gamma \subset M$ be a closed geodesic in a hyperbolic $3$-manifold with (real) length $\ell(\gamma)$.
Suppose $\ell(\gamma) < \frac{\sqrt{3}}{4\pi} \left[\ln ( \sqrt{2} + 1)\right]^{2} \approx 0.107$. Then there exists an embedded tubular neighborhood around $\gamma$ whose radius $r$ satisfies \begin{center} $\sinh^{2}(r) = \frac{1}{2} \left(\frac{\sqrt{1-2k(\ell(\gamma))}}{k(\ell(\gamma))} - 1\right)$ where $k(x) = \cosh \left( \sqrt{ \frac{4 \pi x}{\sqrt{3}}} \right) - 1$. \end{center} \end{thm} \begin{prop} \label{thm:LA_surface_disjoint_corelength} Let $\gamma \subset M$ be a closed geodesic, and let $F$ be a surface in $M$ that is incompressible and $\partial$-incompressible. Set $g(x) = 2x^{2}+4x+1$. Assume $\frac{\sqrt{1-2k(\ell(\gamma))}}{k(\ell(\gamma))} > g( \left| \chi(F) \right|)$. Then $\gamma$ can be isotoped disjoint from $F$. Furthermore, if $F$ is in almost least area form, then either $\gamma \cap F = \emptyset$ without any isotopy or $n \cdot \gamma$ is isotopic into $F$ for some $n \in \mathbb{N}$. In particular, if $\left|\chi(F) \right| \leq 2$, our result holds whenever $\ell(\gamma) < 0.015$. \end{prop} \begin{proof} We will use the Collar Lemma to show that if $\ell(\gamma)$ is sufficiently small, then the tube radius $r$ is sufficiently large. Then Proposition \ref{thm:LA_surface_disjoint} will give us the desired result. So, we need to see when $r > h(\left| \chi(F) \right|) = 2 \sinh^{-1}(\sqrt{\frac{\left| \chi(F) \right|}{2}})$. Assume that $\ell(\gamma) < 0.107$, so the Collar Lemma applies. Then we have $\sinh^{2}(r) = \frac{1}{2} \left(\frac{\sqrt{1-2k}}{k} - 1\right)$ where $k(\ell(\gamma)) = \cosh \left( \sqrt{ \frac{4 \pi \ell(\gamma)}{\sqrt{3}}} \right) - 1$. Now, $k(\ell(\gamma))$ is an increasing function on $0 < \ell(\gamma) < \infty$ with $k(\ell(\gamma)) \rightarrow 0$ as $\ell(\gamma) \rightarrow 0$, while $\frac{1}{2} \left(\frac{\sqrt{1-2k}}{k} - 1\right)$ is a decreasing function ($0 < k \leq \frac{1}{2}$), which tends to $\infty$ as $k \rightarrow 0$.
So, as $\ell(\gamma) \rightarrow 0$, $\sinh^{2}(r) = \frac{1}{2} \left(\frac{\sqrt{1-2k}}{k} - 1\right) \rightarrow \infty$. Specifically, we need the following inequality to hold: \begin{eqnarray*} r = \sinh^{-1}\left(\sqrt{\frac{1}{2}\left(\frac{\sqrt{1-2k}}{k}-1\right)}\right) & > & 2 \sinh^{-1}(\sqrt{\frac{\left| \chi(F) \right|}{2}}),\\ \frac{1}{2} (\frac{\sqrt{1-2k}}{k}-1) & > & \sinh^{2}( 2 \sinh^{-1}(\sqrt{\frac{\left| \chi(F) \right|}{2}})), \\ \frac{\sqrt{1-2k}}{k} & > & 2 \sinh^{2}( 2 \sinh^{-1}(\sqrt{\frac{\left| \chi(F) \right|}{2}}))+1. \end{eqnarray*} Note that \begin{eqnarray*} 2 \sinh^{2}( 2 \sinh^{-1}(\sqrt{\frac{\left| \chi(F) \right|}{2}}))+1 & = & 2\sinh^{2}(\sinh^{-1}(2\sqrt{\frac{\left| \chi(F) \right|}{2}}\sqrt{\frac{\left| \chi(F) \right|}{2}+1}))+1 \\ & = & 2(2\sqrt{\frac{\left| \chi(F) \right|}{2}}\sqrt{\frac{\left| \chi(F) \right|}{2}+1})^{2}+1 \\ & = & 2\left| \chi(F) \right|^{2} + 4\left| \chi(F) \right| +1 \\ & = & g( \left| \chi(F) \right|). \end{eqnarray*} For the case when $\left|\chi(F) \right| \leq 2$, we just need to check when the inequality \begin{center} $ \left(\frac{\sqrt{1-2k(\ell(\gamma))}}{k(\ell(\gamma))} \right) > g(2) = 17$ \end{center} is satisfied. This occurs when $\ell(\gamma) < 0.015$ (indeed, $k(0.015) \approx 0.0549$ and $\frac{\sqrt{1 - 2(0.0549)}}{0.0549} \approx 17.2 > 17$), giving the desired result. \end{proof} \subsection{Least area surfaces and Dehn filling slopes} \label{subsec:LA_surfaces_Dehn_filling} Now, we would like to examine the geometry and topology of $\gamma \cap F$ based on certain Dehn filling slopes. In order to do this, we need to go over some background on Dehn fillings. Given a hyperbolic $3$-manifold $M$ with a cusp corresponding to a torus boundary on $\partial M$, we choose a basis $\left\langle m,l \right\rangle$ for the fundamental group of the torus. After this choice of basis, we can form the manifold $M\left(p,q\right)$ obtained by doing a $\left(p,q \right)$-Dehn surgery on the cusp, where $\left(p,q \right)$ is a coprime pair of integers.
A \emph{$\left(p,q \right)$-Dehn surgery} maps the boundary of the meridian disk to $s = pm+ql$. Similarly, we can form the manifold $M\left((p_{1},q_{1}), \dots, (p_{k}, q_{k})\right)$ by performing a $\left(p_{i},q_{i}\right)$-Dehn surgery on the $i^{th}$ cusp of $M$, for each $i$, $ 1 \leq i \leq k$. Thurston showed that $M\left((p_{1},q_{1}), \dots, (p_{k}, q_{k})\right)$ is in fact a hyperbolic $3$-manifold for all $\left((p_{1},q_{1}), \dots, (p_{k}, q_{k}) \right)$ near $\left(\infty,\dots, \infty\right)$; see \cite{Th}. Following Thurston's work, many people developed techniques to more explicitly understand the change in geometry under Dehn surgery. The work of Hodgson and Kerckhoff \cite{HoKe2}, \cite{HoKe} shows that if the \textit{normalized lengths} of the slopes on which Dehn fillings are performed are sufficiently large, then it is possible to give explicit bounds on the geometry of the filled manifold. Their work will be helpful for us to determine when core geodesics (coming from Dehn filling) can be isotoped disjoint from incompressible surfaces inside of $M\left((p_{1},q_{1}), \dots, (p_{k}, q_{k})\right)$. We now define normalized length. \begin{defn}[Normalized Length] \label{defn:NL} Given a Euclidean torus $T$, the \emph{normalized length of a slope $s = pm + ql$} is defined to be: \begin{center} $\widehat{L}(s) = \widehat{L}((p,q))= \frac{\text{Length}((p,q))}{\sqrt{\text{Area}(T)}}$, \end{center} where Length($(p,q)$) is defined to be the length of a geodesic representative of $s$ on $T$. If we are considering multiple slopes, $\left\{s_{i}\right\}_{i=1}^{k}$, then define $\widehat{L}$ by the equation $\frac{1}{\widehat{L}^{2}} = \sum_{i=1}^{k} \frac{1}{\widehat{L}(s_{i})^{2}}$. \end{defn} Note that normalized length is scale invariant and well-defined for cusps of $M$. We now introduce some functions and terminology needed to understand certain results we will use from \cite{HoKe}.
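Before doing so, we illustrate Definition \ref{defn:NL} with a concrete example (added here, with hypothetical numbers). Suppose the cusp torus $T$ is rectangular, so that the geodesic representatives of $m$ and $l$ are orthogonal, of lengths $a$ and $b$, and $\text{Area}(T) = ab$. Then

```latex
% Normalized length of the slope s = pm + ql on a rectangular cusp torus
% with orthogonal translations of lengths a and b (so Area(T) = ab):
\widehat{L}((p,q)) = \frac{\text{Length}((p,q))}{\sqrt{\text{Area}(T)}}
                   = \frac{\sqrt{p^{2}a^{2} + q^{2}b^{2}}}{\sqrt{ab}}.
% For instance, with a = 1 and b = 2, the slope (7,1) has
% normalized length sqrt(49 + 4)/sqrt(2) = sqrt(53/2), roughly 5.15.
```

Rescaling the metric on $T$ by $\lambda > 0$ multiplies both $\text{Length}((p,q))$ and $\sqrt{\text{Area}(T)}$ by $\lambda$, which makes the scale invariance noted above concrete.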
For the rest of this section, $M$ and $N$ will denote hyperbolic $3$-manifolds such that $M = N\left((p_{1},q_{1}), \dots, (p_{k}, q_{k})\right)$. Each of these Dehn fillings produces a solid torus in $M$ whose core geodesic will be denoted by $\gamma_{i}$. We will use $r_{i}$ to denote the maximal embedded tube radius of $\gamma_{i}$. Section $5.1$ of \cite{HoKe} defines the \textit{visual area} of the boundary of such an embedded tube and observes that it is equal to $\ell(\gamma_{i})\alpha_{i}$, where $\alpha_{i}$ is the cone angle around $\gamma_{i}$ (see above (25) on page 1068 there). Since $M$ is a manifold, its \textit{total visual area}, i.e., the sum of the visual areas of all tube boundaries, is $A = 2\pi \sum_{i=1}^{k} \ell(\gamma_{i})$. The following two theorems come from \cite{HoKe}. The first relates the normalized lengths to the tube radii of the core geodesics resulting from Dehn filling, and the second relates these normalized lengths to the total visual area. The functions $f(z)$, $A(z)$, and $I(z)$ used in these theorems are given below. Also, $f(z)$ is formula $43$ on page $1080$ of \cite{HoKe}, and $A(z)$ is given on page $1080$ of \cite{HoKe} (though it is defined in terms of a function $H(z)$ given on page $1079$). \begin{itemize} \item $f(z) = 3.3957(1-z) \exp (-\int_{1}^{z} F(w) dw)$, where $F(w) = \frac{-(1+4w+6w^{2}+w^{4})}{(w+1)(1+w^{2})^{2}}$, \item $A(z) = \frac{3.3957z(1-z^{2})}{1+z^{2}}$, \item $I(z) = \frac{(2\pi)^{2}}{f(z)}$. \end{itemize} \begin{thm}\cite{HoKe} \label{thm:NLgivesradius} Suppose that $M$ is obtained from $N$ by Dehn filling along slopes whose normalized lengths satisfy $\widehat{L} > 7.5832$. If $\widehat{L}^{2} \geq I(z)$, then the tube radius $r_{i}$ of each $\gamma_{i}$ stays larger than $R = \tanh^{-1}(z)$. \end{thm} Theorem \ref{thm:NLgivesradius} is a slightly different version of Theorem $5.7$ from \cite{HoKe}.
In \cite[Theorem 5.7]{HoKe}, the conclusion states that the tube radius of each $\gamma_{i}$ stays larger than a fixed radius $R_{0} = \tanh(\frac{1}{\sqrt{3}})$. In our version, the tube radius parameter $R$ is not fixed, but rather, a lower bound for it is given in terms of $z$. The two paragraphs preceding Theorem $5.7$ in \cite{HoKe} justify this change. \begin{thm}\cite[Theorem 5.12]{HoKe} \label{thm:NLgiveslength} Suppose that $M$ is obtained from $N$ by Dehn filling along slopes whose normalized lengths satisfy $\widehat{L} > 7.5832$. Then the total visual area $A$ satisfies $A \leq A(z)$, where the variable $z$ is determined by $f(z) = \frac{(2\pi)^{2}}{\widehat{L}^{2}}$. \end{thm} The following proposition explicitly relates the normalized length of Dehn fillings to the geometry of the resulting core geodesics. \begin{prop} \label{thm:LA_surface_disjoint_conedef} Suppose $M = N\left((p_{1},q_{1}), \dots, (p_{k}, q_{k})\right)$. Let $\left\{\gamma_{i}\right\}_{i=1}^{k} \subset M$ denote the set of closed geodesics which come from the cores of the solid tori obtained from Dehn filling cusps of $N$, and let $r_{i}$ denote the maximal embedded tube radius of $\gamma_{i}$. \begin{itemize} \item If for each $i = 1, \dots, k$ we have $\widehat{L}((p_{i}, q_{i})) \geq 14.90\sqrt{k}$, then $r_{i} > 2 \ln(1 + \sqrt{2})$. \item If for each $i = 1, \dots, k$ we have $\widehat{L}((p_{i}, q_{i})) \geq 20.76\sqrt{k}$, then $\ell(\gamma_{i}) < 0.015$. \end{itemize} \end{prop} \begin{proof} For the first bullet, we use Theorem \ref{thm:NLgivesradius} to guarantee each tube radius $r_{i}$ is sufficiently large by making each normalized length $\widehat{L}((p_{i}, q_{i}))$ sufficiently large. Specifically, we require $\widehat{L}^{2} \geq I(z)$ with $z = \tanh(2 \ln(1 + \sqrt{2}))$ to guarantee that the tube radius of each $\gamma_{i}$ is larger than $2 \ln(1+\sqrt{2})$.
Since for each $i = 1, \dots, k$ we have $\widehat{L}((p_{i}, q_{i})) \geq 14.90\sqrt{k}$, it follows that \begin{center} $\frac{1}{\widehat{L}^{2}} = \sum_{i=1}^{k} \frac{1}{\widehat{L}((p_{i}, q_{i}))^{2}} \leq (k) (\frac{1}{14.90 \sqrt{k}})^{2} = \frac{1}{222.01}$. \end{center} Thus, $\widehat{L}^{2} \geq 222.01$. Doing the necessary algebra reveals that $222.01 \geq I(z)$ when $z = \tanh(2 \ln(1 + \sqrt{2}))$, giving the desired result. Now we consider the second bullet. For the filled manifold $M$, we have that the total visual area $A = 2\pi \sum_{i=1}^{k} \ell(\gamma_{i})$. In our case, we want each $\ell(\gamma_{i}) < 0.015$, which will certainly be true if $\sum_{i=1}^{k} \ell(\gamma_{i}) < 0.015$. Thus, if $A \leq 2\pi(0.015)$, then each geodesic $\gamma_{i}$ will be sufficiently short. By Theorem \ref{thm:NLgiveslength}, we know that $A \leq A(z) = \frac{3.3957z(1-z^{2})}{1+z^{2}}$, where the variable $z$ is determined by the equation $f(z) = \frac{(2\pi)^{2}}{\widehat{L}^{2}}$. Thus, we need to choose our $\widehat{L}((p_{i},q_{i}))$ sufficiently large so that $z$ satisfies $A(z) \leq 2\pi(0.015)$. Doing some algebra yields the following. \begin{center} $\widehat{L} = \sqrt{\frac{(2\pi)^{2}}{f(z)}} = \sqrt{\frac{(2\pi)^{2} \exp (\int_{1}^{z} F(w) dw)}{3.3957(1-z)}}$. \end{center} Choosing each $\widehat{L}((p_{i}, q_{i})) \geq 20.76\sqrt{k}$ results in $A(z) \leq 2\pi(0.015)$, as needed. \end{proof} Either of these conditions will guarantee that any such core geodesic $\gamma_{i}$ can be isotoped disjoint from an incompressible surface $F$ with $\left|\chi(F) \right| \leq 2$. This comes from combining Proposition \ref{thm:LA_surface_disjoint_conedef} with Proposition \ref{thm:LA_surface_disjoint} in the first case and Proposition \ref{thm:LA_surface_disjoint_corelength} in the second case, respectively.
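For concreteness, the value of $z$ used in the proof of Proposition \ref{thm:LA_surface_disjoint_conedef} admits a closed form. Since $\tanh(\ln(1 + \sqrt{2})) = \frac{1}{\sqrt{2}}$, the addition formula for $\tanh$ gives \begin{center} $z = \tanh(2 \ln(1 + \sqrt{2})) = \frac{2 \tanh(\ln(1 + \sqrt{2}))}{1 + \tanh^{2}(\ln(1 + \sqrt{2}))} = \frac{2\sqrt{2}}{3} \approx 0.9428$, \end{center} so the inequality $222.01 \geq I(z)$ used above amounts to checking that $f\left(\frac{2\sqrt{2}}{3}\right) \geq \frac{(2\pi)^{2}}{222.01}$.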
However, while the lower bound on normalized length is smaller for the first bullet, in certain applications we will want to guarantee not only that our geodesics can be isotoped disjoint from $F$, but also that these geodesics are sufficiently short. This is why we include the second condition. These results are summarized in Corollary \ref{cor:disjointgeo} in the next section. \subsection{Summary of conditions} \label{subsec:summary} We now summarize the conditions under which $\gamma$ can be isotoped disjoint from $F$. This will be used in the proof of Theorem \ref{thm:mutationrep} and its corollaries. \begin{thm} \label{thm:gammasep} Let $M$ be a hyperbolic manifold with $F \subset M$ a surface that is incompressible and $\partial$-incompressible. Let $\gamma \subset M$ be a closed geodesic with embedded tubular radius $r$. Assume \begin{enumerate} \item $r> h( \left|\chi(F) \right| )$, or \item $\frac{\sqrt{1-2k(\ell(\gamma))}}{k(\ell(\gamma))} > g( \left| \chi(F) \right|)$. \end{enumerate} Then $\gamma$ can be isotoped disjoint from $F$. Furthermore, if $F$ is in almost least area form, then either $\gamma \cap F = \emptyset$ without any isotopy or $n \cdot \gamma$ is isotopic into $F$ for some $n \in \mathbb{N}$. \end{thm} \begin{proof} Combine Proposition \ref{thm:LA_surface_disjoint} and Proposition \ref{thm:LA_surface_disjoint_corelength}. \end{proof} Plugging in $\left|\chi(F) \right| \leq 2$ gives the following immediate corollary. \begin{cor} \label{cor:disjointexplicit} Let $M$ be a hyperbolic manifold with $F \subset M$ a surface that is incompressible and $\partial$-incompressible with $\left|\chi(F) \right| \leq 2$. Let $\gamma \subset M$ be a closed geodesic with embedded tubular radius $r$. Assume \begin{enumerate} \item $r> 2 \ln (1 + \sqrt{2})$, or \item $\ell(\gamma) < 0.015$. \end{enumerate} Then $\gamma$ can be isotoped disjoint from $F$.
Furthermore, if $F$ is in almost least area form, then either $\gamma \cap F = \emptyset$ without any isotopy or $n \cdot \gamma$ is isotopic into $F$ for some $n \in \mathbb{N}$. \end{cor} For our applications, we will mainly be concerned with closed geodesics that are the core geodesics coming from Dehn fillings and surfaces $F_{i}$ with $\left|\chi(F_{i}) \right| \leq 2$. Thus, the following corollary, which comes from combining Corollary \ref{cor:disjointexplicit} with Proposition \ref{thm:LA_surface_disjoint_conedef}, will be useful. \begin{cor} \label{cor:disjointgeo} Suppose $M = N\left((p_{1},q_{1}), \dots, (p_{k}, q_{k})\right)$ and let $F \subset M$ be a surface that is incompressible and $\partial$-incompressible with $\left|\chi(F) \right| \leq 2$. Let $\left\{\gamma_{i}\right\}_{i=1}^{k} \subset M$ denote the core geodesics coming from Dehn filling cusps of $N$, each with embedded tube radius $r_{i}$. \begin{enumerate} \item If for each $i = 1, \dots, k$ we have that $\widehat{L}((p_{i}, q_{i})) \geq 14.90\sqrt{k}$, then each $\gamma_{i}$ can be isotoped disjoint from $F$ and each $r_{i} > 2 \ln(1 + \sqrt{2})$. \item If for each $i = 1, \dots, k$ we have that $\widehat{L}((p_{i}, q_{i})) \geq 20.76\sqrt{k}$, then in addition each $\ell(\gamma_{i}) < 0.015$. \end{enumerate} Furthermore, if $F$ is in almost least area form, then either $\gamma_{i} \cap F = \emptyset$ without any isotopy or $n \cdot \gamma_{i}$ is isotopic into $F$ for some $n \in \mathbb{N}$. \end{cor} Combining the results from this section gives a proof of Theorem \ref{thm:main} from the introduction. \begin{proof}[Proof of Theorem \ref{thm:main}] Corollary \ref{cor:disjointexplicit} takes care of the first two cases of Theorem \ref{thm:main}, while Corollary \ref{cor:disjointgeo} takes care of the third case by considering Dehn filling a single cusp.
\end{proof} \section{Hyperelliptic surfaces and mutations that preserve geodesics} \label{subsec:symmsurf_and_mut} In this section, we will prove that mutating along hyperelliptic surfaces inside hyperbolic $3$-manifolds preserves the initial (complex) length spectrum. In what follows, let $S_{g,n}$ denote a surface of genus $g$ with $n$ boundary components. Recall that a hyperelliptic surface $S$ is a surface that admits at least one non-trivial involution $\mu$ of $S$ so that $\mu$ fixes every isotopy class of curves in $S$. Note that the surfaces $S_{2,0}$, $S_{1,2}$, $S_{1,1}$, $S_{0,3}$, and $S_{0,4}$ are always hyperelliptic, regardless of their hyperbolic structures. Also, these are all surfaces with Euler characteristic $-1$ or $-2$. For our constructions in Section \ref{sec:RT_and_PK}, we will examine $4$-punctured spheres that arise in hyperbolic knot complements. An $S_{0,4}$ in a knot complement is called a \emph{Conway sphere}. \begin{figure}[ht] \includegraphics[scale=0.50]{Conwaysphere.eps} \caption{A standard Conway sphere.} \label{Conwaysphere} \end{figure} For a Conway sphere there are three hyperelliptic (orientation preserving) involutions, given by $180^{\circ}$ rotations about the $x$-axis, $y$-axis, and $z$-axis, as shown in figure \ref{Conwaysphere}. \begin{defn}[Mutation] \label{def:Mutation} A \emph{mutation} along a hyperelliptic surface $S$ in a $3$-manifold $M$ is the process of cutting $M$ along $S$ and then regluing by one of the nontrivial involutions of $S$ to obtain the $3$-manifold $M^{\mu}$. If $K$ is a knot in $\mathbb{S}^{3}$ with a Conway sphere $S$, then cutting $(\mathbb{S}^{3}, K)$ along $(S, S \cap K)$ and regluing by a mutation, $\mu$, yields a knot $K^{\mu} \subset \mathbb{S}^{3}$. \end{defn} Corollary \ref{cor:disjointexplicit} will help us determine a lower bound on the number of geodesic lengths preserved under mutation.
To do this, we first need to see how representations of $\pi_{1}(M)$ and $\pi_{1}(M^{\mu})$ are related as amalgamated products and HNN-extensions along representations of $\pi_{1}(F)$. In fact, Kuessner in \cite{Ku} gives a different proof of Ruberman's result about mutations and volume that uses these decompositions of representations of $\pi_{1}(M)$ and $\pi_{1}(M^{\mu})$ along with the Maskit combination theorem and homological arguments. The following theorem due to Ruberman characterizes an essential feature of a hyperelliptic surface $(F, \mu)$. \begin{thm}\cite[Theorem $2.2$]{Ru} \label{thm:mutationrep} Let $(F, \mu)$ be a hyperelliptic surface, and let $\rho_{F}: \pi_{1}(F) \rightarrow \text{PSL}(2, \mathbb{C})$ be a discrete and faithful representation taking cusps of $F$ to parabolics. Then there exists $\beta \in \text{PSL}(2, \mathbb{C})$ such that $\rho_{F} \mu_{\ast} = \beta \rho_{F} \beta^{-1}$. \end{thm} Geometrically, this means that a hyperelliptic involution acts as a rigid motion of a fundamental domain for $\rho_{F}(\pi_{1}(F))$ in $\mathbb{H}^{3}$. In what follows, suppose that $M = \mathbb{H}^{3} / \Gamma$ where $\Gamma$ is the Kleinian group corresponding to the representation $\rho: \pi_{1}(M) \rightarrow \text{PSL}(2, \mathbb{C})$. In addition, assume that $(F, \mu)$ is a hyperelliptic surface inside of $M$, and mutation along $F$ produces $M^{\mu}$. If $F$ is separating in $M$, then assume cutting along $F$ decomposes $M$ into two pieces, $M_{a}$ and $M_{b}$. If $F$ is non-separating, then assume cutting along $F$ decomposes $M$ into $N$ where $\partial N = F_{1} \cup F_{2}$. Here, $F_{1}$ and $F_{2}$ are copies of $F$ and $M$ is the quotient of $N$ under some homeomorphism $\psi: F_{1} \rightarrow F_{2}$.
Also, assume that $\Gamma_{a}$, $\Gamma_{b}$, $\Gamma_{F}$, and $\Gamma_{N}$ are Kleinian subgroups of $\Gamma$ that are isomorphic to $\pi_{1}(M_{a})$, $\pi_{1}(M_{b})$, $\pi_{1}(F)$, and $\pi_{1}(N)$, respectively, with these isomorphisms coming from restricting $\rho:\pi_{1}(M) \rightarrow \text{PSL}(2, \mathbb{C})$. The previous paragraph tells us that $\Gamma = \left\langle \Gamma_{a}, \Gamma_{b} \right\rangle \cong \Gamma_{a} \ast_{\Gamma_{F}} \Gamma_{b}$ when $F$ is separating, and $\Gamma = \left\langle \Gamma_{N} , \gamma \right\rangle \cong \Gamma_{N} \ast_{\gamma}$, where $\gamma g \gamma^{-1} = \psi_{\ast}(g)$ for $g$ in the subgroup $\Gamma_{1}$ of $\Gamma_{N}$, when $F$ is non-separating. The following lemma shows that we also get a decomposition of $\Gamma^{\mu}$ in terms of $\Gamma_{a}$ and $\Gamma_{b}$. A similar lemma is given by Kuessner in \cite[Proposition 3.1]{Ku}. In the following lemma and theorem, we use $=$ to denote equality of Kleinian groups and $\cong$ to denote an abstract group isomorphism. \begin{lemma} \label{lemma:Fgroups} Let $F \subset M$ be a properly embedded surface that is incompressible, $\partial$-incompressible, and admits a hyperelliptic involution $\mu$. If $F$ is separating, then there exists $\beta \in \text{PSL}(2, \mathbb{C})$ such that \begin{center} $\Gamma = \left\langle \Gamma_{a}, \Gamma_{b} \right\rangle \cong \Gamma_{a} \ast_{\Gamma_{F}} \Gamma_{b}$ and $\Gamma^{\mu} = \left\langle \Gamma_{a}, \beta \Gamma_{b} \beta^{-1} \right\rangle \cong \Gamma_{a} \ast_{\Gamma_{F}} \beta \Gamma_{b} \beta^{-1}$.
\end{center} If $F$ is non-separating, then there exists $\beta \in \text{PSL}(2, \mathbb{C})$ such that \begin{center} $\Gamma = \left\langle \Gamma_{N} , \gamma \right\rangle \cong \Gamma_{N} \ast_{\gamma}$ and $\Gamma^{\mu} = \left\langle \Gamma_{N} , \gamma \beta \right\rangle \cong \Gamma_{N} \ast_{\gamma \beta}$, \end{center} where $\gamma g \gamma^{-1} = \psi_{\ast}(g)$ for $g$ in the subgroup $\Gamma_{1}$ of $\Gamma_{N}$ uniformizing $\pi_{1}(F)$ and $\beta$ normalizes $\Gamma_{1}$ with $\beta g \beta^{-1} = \mu_{\ast}(g)$. \begin{flushleft} In both cases, $\Gamma^{\mu}$ is discrete and $M^{\mu}$ is homeomorphic to $\mathbb{H}^{3} / \Gamma^{\mu}$. \end{flushleft} \end{lemma} \textbf{Remark:} In Kuessner's version of this statement, he assumes that the surface $F$ is not a virtual fiber. However, after the proof of \cite[Proposition 3.1]{Ku}, Kuessner suggests a slight variation of his proof that removes this requirement. Here, we make no such requirement of $F$ and prove the more general case by following Kuessner's suggestion to utilize the least area surface machinery that Ruberman develops in \cite{Ru}. \begin{proof} Here, we give a proof of the case when $F$ is separating. The non-separating case is similar. Since $F$ is incompressible in $M$, $F$ is also incompressible in $M_{a}$ and $M_{b}$. Thus, the inclusion maps $i: F \rightarrow M_{a}$ and $j: F \rightarrow M_{b}$ induce monomorphisms $i_{\ast}: \pi_{1}(F) \rightarrow \pi_{1}(M_{a})$ and $j_{\ast}: \pi_{1}(F) \rightarrow \pi_{1}(M_{b})$, respectively. Let $\rho_{a}$ denote the restriction of $\rho$ to $\pi_{1}(M_{a})$, and similarly, let $\rho_{F}$ denote the restriction of $\rho$ to $\pi_{1}(F)$. Then the map $f_{1}: \Gamma_{F} \rightarrow \Gamma_{a}$ defined by $f_{1} = \rho_{a} i_{\ast} \rho_{F}^{-1}$ is a well-defined monomorphism.
Similarly, we have a monomorphism $f_{2}: \Gamma_{F} \rightarrow \Gamma_{b}$, defined by $f_{2} = \rho_{b} j_{\ast} \rho_{F}^{-1}$, where $\rho_{b}$ denotes the restriction of $\rho$ to $\pi_{1}(M_{b})$. This tells us that $\Gamma \cong \Gamma_{a} \ast_{\Gamma_{F}} \Gamma_{b} \cong (\Gamma_{a} \ast \Gamma_{b}) / N$, where $N$ is the normal subgroup of $\Gamma_{a} \ast \Gamma_{b}$ generated by elements of the form $f_{1}(h)f_{2}(h)^{-1}$, for all $h \in \Gamma_{F}$. Now, $M^{\mu}$ is also constructed by cutting $M$ along $F$, and then gluing the pieces $M_{a}$ and $M_{b}$ back together along $F$. However, we now rotate one of these pieces, say $M_{b}$, by the hyperelliptic involution $\mu$ before gluing it back to $M_{a}$ along $F$. Theorem \ref{thm:mutationrep} provides the existence of some $\beta \in \text{PSL}(2, \mathbb{C})$ such that $\rho_{F} \mu_{\ast} = \beta \rho_{F} \beta^{-1}$. Let $f_{\beta}: \Gamma_{b} \xrightarrow{\sim} \beta\Gamma_{b}\beta^{-1}$ be the map that conjugates by $\beta$. This gives us a well-defined monomorphism $f_{3}: \Gamma_{F} \rightarrow \beta \Gamma_{b} \beta^{-1}$ defined by $f_{3} = f_{\beta} f_{2}$. First, we will show that $\Gamma^{\mu} \cong \Gamma_{a} \ast_{\Gamma_{F}} \beta \Gamma_{b} \beta^{-1} \cong (\Gamma_{a} \ast \beta \Gamma_{b} \beta^{-1}) / K$, where $K$ is the normal subgroup generated by elements of the form $f_{1}(h)f_{3}(h)^{-1}$ for all $h \in \Gamma_{F}$. This group isomorphism will follow from the Maskit combination theorem \cite[VII.A.10]{Mas}. Assume that $F$ is isotopic to its least-area representative; the case where $F$ double covers a least-area representative is left to the reader. Ruberman's Theorem \ref{thm:mutationrep} implies that the element $\beta$ such that $\rho_{F} \mu_{\ast} = \beta \rho_{F} \beta^{-1}$ induces an isometric involution $\tilde{\tau}$ of the cover $M_{F} \rightarrow M$ corresponding to $\pi_{1}(F)$.
In the proof of \cite[Theorem 1.3]{Ru}, Ruberman shows that a least-area representative of $F$ lifts to an embedding $\hat{F}$ in $M_{F}$. Furthermore, $\hat{F}$ is invariant under $\tilde{\tau}$, and so, the preimage $\tilde{F}$ of $\hat{F}$ in $\mathbb{H}^{3}$ is $\beta$-invariant. Since $\tilde{F}$ is a properly embedded plane in $\mathbb{H}^{3}$, we have that $\mathbb{H}^{3} \setminus \tilde{F}$ decomposes into two (non-empty) $3$-balls, $B_{a}$ and $B_{b}$. We claim that $B_{a}$ and $B_{b}$ comprise a proper interactive pair of sets (in the sense of \cite[VII.A]{Mas}) for $\Gamma_{a}$ and $\beta \Gamma_{b} \beta^{-1}$. Here, we can follow the same argument as Kuessner in \cite[Proposition 3.1]{Ku}, but replace the subsets $B_{1}$ and $B_{2}$ of $\partial_{\infty} \mathbb{H}^{3}$ with $B_{a}$ and $B_{b}$. The Maskit combination theorem then implies that $\Gamma^{\mu} = \left\langle \Gamma_{a}, \beta \Gamma_{b} \beta^{-1} \right\rangle \cong \Gamma_{a} \ast_{\Gamma_{F}} \beta \Gamma_{b} \beta^{-1}$. The fact that $\Gamma^{\mu}$ is discrete follows from the argument in \cite[VII.C.4]{Mas}. Finally, we claim that $M^{\mu}$ is homeomorphic to $\mathbb{H}^{3} / \Gamma^{\mu}$. By applying van Kampen's Theorem, we have that $\pi_{1}M^{\mu} \cong \pi_{1}M_{a} \ast_{\pi_{1}F} \pi_{1}M_{b}$, where the respective inclusions of $\pi_{1}F$ are given by $i_{\ast}$ and $j_{\ast}\mu_{\ast}$. This gives an isomorphism $\rho^{\mu}: \pi_{1}M^{\mu} \rightarrow \Gamma^{\mu}$ defined on $\pi_{1}M_{a}$ by $\rho$ and on $\pi_{1}M_{b}$ by $\beta \rho \beta^{-1}$, as desired. \end{proof} By combining the previous lemma with Corollary \ref{cor:disjointexplicit} and Corollary \ref{cor:disjointgeo}, we can now give a number of scenarios for which mutation preserves a portion of the (complex) length spectrum.
In what follows, set $G_{L}(M) = \left\{ \gamma \subset M : \gamma \hspace{0.05in} \text{is a closed geodesic and} \hspace{0.05in} \ell(\gamma) < L \right\}$, that is, $G_{L}(M)$ denotes the geodesics in $M$ that make up the initial length spectrum up to a cut-off length of $L$. \begin{thm} \label{cor:syspreserved2} Let $F \subset M$ be a surface that is incompressible, $\partial$-incompressible, and admits a hyperelliptic involution $\mu$. Then for any $L < 0.015$, $G_{L}(M)$ is in bijective correspondence with $G_{L}(M^{\mu})$. In particular, if $M$ has $n$ geodesics shorter than $L$, then $M$ and $M^{\mu}$ have at least the same $n$ initial values of their respective complex length spectra. \end{thm} \begin{proof} Since there are $n$ geodesics shorter than $L$, let $\left\{\gamma_{i}\right\}_{i=1}^{n} = G_{L}(M)$. By Corollary \ref{cor:disjointexplicit}, we can isotope any such $\gamma_{i}$ disjoint from $F$, and assume we have performed these isotopies. We claim that for each $\gamma_{i}$, mutation along $F$ will produce a closed geodesic $\gamma_{i}^{\mu}$ in $M^{\mu}$, such that $\ell_{\mathbb{C}}(\gamma_{i}) = \ell_{\mathbb{C}}(\gamma_{i}^{\mu})$. In what follows, we abuse notation and let $\gamma_{i}$ refer to multiple representatives from the homotopy class $[\gamma_{i}] \in \pi_{1}(M)$, and not just the geodesic representative. Similarly for $[\gamma_{i}^{\mu}] \in \pi_{1}(M^{\mu})$. \textbf{Proof of claim:} First, suppose that $F$ separates $M$. By Lemma \ref{lemma:Fgroups}, we have that $\Gamma = \left\langle \Gamma_{a}, \Gamma_{b} \right\rangle$ and $\Gamma^{\mu} = \left\langle \Gamma_{a}, \beta \Gamma_{b} \beta^{-1} \right\rangle$, for some $\beta \in \text{PSL}(2, \mathbb{C})$. Since any $\gamma_{i} \subset M$ has been isotoped disjoint from $F$, either $\gamma_{i} \subset M_{a}$ or $\gamma_{i} \subset M_{b}$. Without loss of generality, assume $[\gamma_{i}] \in \pi_{1}(M_{a})$, i.e., $\gamma_{i}$ now lies in $M_{a}$.
The class $[\gamma_{i}] \in \pi_{1}(M)$ has a unique (complex) length associated to it, $\ell_{\mathbb{C}}(\gamma_{i})$, coming from the representation $\rho: \pi_{1}(M) \xrightarrow{\sim} \Gamma = \left\langle \Gamma_{a}, \Gamma_{b} \right\rangle \subset \text{PSL}(2, \mathbb{C})$. This (complex) length is determined by the trace of its representation. Specifically, $\cosh(\frac{\ell_{\mathbb{C}}(\gamma)}{2}) = \pm \frac{\mathrm{tr}(\gamma)}{2}$, where $\mathrm{tr}(\gamma)$ denotes the trace of the representation of $\gamma$. Since we have isotoped $\gamma_{i}$ disjoint from $F$, mutating along $F$ to obtain $M^{\mu}$ will produce a corresponding homotopy class $[\gamma_{i}^{\mu}] \in \pi_{1}(M^{\mu})$. Similarly, $[\gamma_{i}^{\mu}] \in \pi_{1}(M^{\mu})$ also has a unique (complex) length associated to it, coming from $\rho_{\mu}: \pi_{1}(M^{\mu}) \xrightarrow{\sim} \Gamma^{\mu} = \left\langle \Gamma_{a}, \beta \Gamma_{b} \beta^{-1} \right\rangle \subset \text{PSL}(2, \mathbb{C})$. Thus, $[\gamma_{i}]$ and $[\gamma_{i}^{\mu}]$ have the same representation in $\text{PSL}(2, \mathbb{C})$ since $\rho$ and $\rho_{\mu}$ agree on $\pi_{1}(M_{a})$. So, the same complex length is associated to $\gamma_{i}$ and $\gamma_{i}^{\mu}$, as desired. Note that if some $\gamma_{i}$ was isotoped into $M_{b}$ instead, then the representations of $\gamma_{i}$ and $\gamma_{i}^{\mu}$ into $\text{PSL}(2, \mathbb{C})$ would be conjugate to one another. Since trace is preserved by conjugation, the corresponding complex lengths will still be preserved too. If $F$ is non-separating in $M$, then Lemma \ref{lemma:Fgroups} gives us that $\Gamma = \left\langle \Gamma_{N} , \gamma \right\rangle$ and $\Gamma^{\mu} = \left\langle \Gamma_{N} , \gamma \beta \right\rangle$.
Since each $\gamma_{i}$ has been isotoped disjoint from $F$, we once again have that $[\gamma_{i}] \in \pi_{1}(N) \subset \pi_{1}(M)$ and $[\gamma_{i}^{\mu}] \in \pi_{1}(N) \subset \pi_{1}(M^{\mu})$ have the same representation in $\text{PSL}(2, \mathbb{C})$ (up to conjugation), and so, the same complex length associated to them. It remains to show that there exists a bijective correspondence between $G_{L}(M)$ and $G_{L}(M^{\mu})$, since this will imply that the $n$ shortest geodesics in $M^{\mu}$ are exactly the set $\left\{\gamma_{i}^{\mu}\right\}_{i=1}^{n}$ coming from mutating the set $\left\{\gamma_{i}\right\}_{i=1}^{n} = G_{L}(M)$. Let $f : G_{L}(M) \rightarrow G_{L}(M^{\mu})$ be the function defined by $f(\gamma_{i}) = \gamma_{i}^{\mu}$, for each $\gamma_{i} \in G_{L}(M)$. This map is clearly one-to-one: if $\gamma_{i}^{\mu} = f(\gamma_{i}) = f(\gamma_{j}) = \gamma_{j}^{\mu}$, then mutating $M^{\mu}$ along $(F, \mu)$ to obtain $M$ implies $\gamma_{i} = \gamma_{j}$. Now, suppose $f$ is not onto, so there exists some $\gamma^{\mu} \in G_{L}(M^{\mu})$ such that $\gamma^{\mu} \notin \left\{\gamma_{i}^{\mu}\right\}_{i=1}^{n}$. Mutate $M^{\mu}$ by $(F, \mu)$ to obtain $M$. Since $\gamma^{\mu} \in G_{L}(M^{\mu})$, $\ell(\gamma^{\mu}) < L < 0.015$, which implies that $\gamma^{\mu}$ can be isotoped disjoint from $F$. As the proof of our claim shows, this implies that there is a corresponding closed geodesic $\gamma \subset M$ with the same complex length as $\gamma^{\mu}$. However, then $\ell(\gamma) < L$, i.e., $\gamma \in G_{L}(M)$, which is a contradiction. Thus, $f$ gives a bijective correspondence between $G_{L}(M)$ and $G_{L}(M^{\mu})$, as desired. \end{proof} \textbf{Remark:} This theorem uses the length condition from Corollary \ref{cor:disjointexplicit} to determine when $M$ and its mutant $M^{\mu}$ have the same initial length spectra. We also get corollaries (highlighted below) using this proof, based upon the tube radius condition and the normalized length condition.
However, with the tube radius condition, we cannot guarantee that these common geodesic lengths are the shortest ones in the length spectra of $M$ and $M^{\mu}$, since there can exist geodesics with a very large embedded tube radius that are not very short. Thus, we can only say that a portion of these length spectra are the same, not necessarily the initial length spectra. Fortunately, for the normalized length condition, we can still get a corollary that determines when $M$ and $M^{\mu}$ have the same initial length spectra, by using the second condition in Corollary \ref{cor:disjointgeo}. \begin{cor} \label{cor:syspreservedr} Let $F \subset M$ be a surface that is incompressible, $\partial$-incompressible, and admits a hyperelliptic involution $\mu$. Suppose that $M$ has exactly $n$ geodesics with embedded tubular radius larger than some constant $R > 2 \ln(1 + \sqrt{2})$. Then $M$ and $M^{\mu}$ have at least $n$ common values in their respective (complex) length spectra. \end{cor} \begin{cor} \label{cor:syspreservednl} Let $F \subset M$ be a surface that is incompressible, $\partial$-incompressible, and admits a hyperelliptic involution $\mu$. Suppose that $M$ has exactly $n$ geodesics that are the core geodesics coming from Dehn filling a hyperbolic $3$-manifold $N$. Let $\widehat{L}(s_{i})$ denote the normalized slope length of the $i^{th}$ Dehn filling. \begin{itemize} \item If $\widehat{L}(s_{i}) \geq 14.90\sqrt{n}$ for each $i$, $1 \leq i \leq n$, then the (complex) lengths of the $n$ core geodesics of the filling tori lie in the set of geodesic lengths preserved under mutation. \item If $\widehat{L}(s_{i}) \geq 20.76\sqrt{n}$ for each $i$, $1 \leq i \leq n$, then $M$ and $M^{\mu}$ have at least the same $n$ initial values of their respective (complex) length spectra.
\end{itemize} \end{cor} \textbf{Remark:} Theorem \ref{cor:syspreserved2}, and so, the second part of Corollary \ref{cor:syspreservednl}, both require geodesics of length less than $0.015$ in order to get a lower bound on how much of the initial length spectrum is preserved under mutation. The work of Meyerhoff \cite{Me} shows that the Margulis constant for the thick-thin decomposition in dimension $3$ is at least $0.104$. Thus, the geodesics corresponding to the initial length spectrum guaranteed to be preserved under mutation are all contained in the thin parts of these manifolds. Specifically, they must all be cores of solid tori and possibly multiples of these cores. In general, many more geodesics are preserved under mutation. Lemma \ref{lemma:Fgroups} implies that every element of the non-elementary groups $\Gamma_{a}$ and $\Gamma_{b}$, or $\Gamma_{N}$ in the non-separating case, maintains its complex length under mutation. This includes any geodesics that can be homotoped disjoint from the mutation surface. \section{Hyperbolic Pretzel Knots: $\left\{ K_{2n+1} \right\} _{n=2}^{\infty}$} \label{sec:RT_and_PK} Here, we construct a specific class of pretzel knots, $\left\{ K_{2n+1} \right\} _{n=2}^{\infty}$. We will be able to show that for each $n \geq 2$, $K_{2n+1}$ generates a large number of mutant pretzel knots whose complements all have the same volume and initial length spectrum. This section describes pretzel links, their classification, and the basic properties of $\left\{ K_{2n+1} \right\} _{n=2}^{\infty}$. \subsection{Pretzel Links} \label{subsec:PL} We shall describe vertical tangles and see how they can be used to construct pretzel links. Afterwards, we will give a simple classification of pretzel links. \begin{defn}[Pretzel link] The \emph{vertical tangles}, denoted by $\frac{1}{n}$, are made of $n$ vertical half-twists, $n \in \mathbb{Z}$, as depicted in figure \ref{verticaltangles}. 
A \emph{pretzel link}, denoted $K\left( \frac{1}{q_{1}}, \frac{1}{q_{2}},\ldots, \frac{1}{q_{n}} \right)$, is defined to be the link constructed by connecting $n$ vertical tangles in a cyclic fashion, reading clockwise, with the $i^{th}$-tangle associated with the fraction $\frac{1}{q_{i}}$. \end{defn} \begin{figure}[ht] \includegraphics[scale=0.65]{verticaltangles.eps} \caption{Some of the vertical tangles with their associated fractions.} \label{verticaltangles} \end{figure} The link $K$ in figure \ref{augmentedpretzel} is the pretzel link $K = K( \frac{1}{4}, \frac{1}{7}, \frac{1}{9})$. Note that each vertical tangle corresponds to a \textit{twist region} in a knot diagram of a pretzel link. Twist regions are defined at the beginning of Section \ref{subsubsec:AL_and_DF}. Now, we state the classification of pretzel links, which is a special case of the classification of Montesinos links. The classification of Montesinos links was originally proved by Bonahon in 1979 \cite{Bo}, and another proof was given by Boileau and Siebenmann in 1980 \cite{BS}. A proof similar to the one done by Boileau and Siebenmann can be found in \cite[Theorem $12.29$]{BZ}. Here, we state the theorem solely in terms of pretzel links. \begin{thm} \cite{Bo} \label{thm:Bo} The pretzel links $K\left( \frac{1}{q_{1}}, \frac{1}{q_{2}},\ldots, \frac{1}{q_{n}} \right)$ with $n \geq 3$ and $\sum_{j=1}^{n}\frac{1}{q_{j}} \leq n-2$, are classified by the ordered set of fractions $\left(\frac{1}{q_{1}}, \ldots, \frac{1}{q_{n}} \right)$ up to the action of the dihedral group generated by cyclic permutations and reversal of order. \end{thm} \subsection{Our Construction} \label{subsec:pretzelknots} Consider the pretzel link $K_{2n+1} = K \left( \frac{1}{q_{1}}, \frac{1}{q_{2}}, \ldots, \frac{1}{q_{2n+1}} \right)$, where each $q_{i} > 6$, $q_{1}$ is even, each $q_{i}$ is odd for $i >1$, and $q_{i} \neq q_{j}$ for $i \neq j$.
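For instance, taking $n = 2$ and $\left(q_{1}, q_{2}, q_{3}, q_{4}, q_{5}\right) = \left(8, 7, 9, 11, 13\right)$ yields the pretzel knot $K_{5} = K \left( \frac{1}{8}, \frac{1}{7}, \frac{1}{9}, \frac{1}{11}, \frac{1}{13} \right)$: here $q_{1} = 8$ is even, the remaining $q_{i}$ are odd, each $q_{i} > 6$, and the $q_{i}$ are pairwise distinct.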
We will always work with the diagram of $K_{2n+1}$ that is depicted below in figure \ref{figure6}. Each $R_{i}$ in this diagram of $K_{2n+1}$ represents a twist region in which the vertical tangle $\frac{1}{q_{i}}$ takes place. For $n \geq 2$, $K_{2n+1}$ has the properties listed below; details can be found in \cite{Mi}. Though our construction here is slightly different, it still retains all of the same key properties. \begin{enumerate} \item Each $K_{2n+1}$ is a hyperbolic knot (a link with a single component). \item This diagram of $K_{2n+1}$ is alternating. \item This diagram of $K_{2n+1}$ is prime and twist-reduced (definitions can be found in \cite{FP}). \end{enumerate} We insist that our constructions be knots because, by the Gordon--Luecke Theorem \cite{GL}, knots are determined by their complements. \begin{figure}[ht] \includegraphics[scale=0.55]{pretzel_construction.eps} \caption{The pretzel knot $K_{2n+1}$. Each twist region $R_{i}$ contains a vertical tangle with $q_{i}$ positive crossings.} \label{figure6} \end{figure} \subsection{Mutations of $K_{2n+1}$ that preserve volume} \label{sec:Mutations} In this subsection, we will see how mutations can be useful for preserving the volume of a large class of hyperbolic $3$-manifolds $\left\{M_{2n+1}^{\sigma}\right\}$, with $M_{2n+1}^{\sigma}= \mathbb{S}^{3} \setminus K_{2n+1}^{\sigma}$. Here, $K^{\sigma}_{2n+1}$ is one of our hyperbolic pretzel knots constructed in Section \ref{subsec:pretzelknots}, and the superscript $\sigma$ signifies a combination of mutations along Conway spheres, which we will now describe. Given a $K_{2n+1}$, consider the set $\left\{ (S_{a}, \sigma_{a}) \right\}_{a=1}^{2n}$ where $S_{a}$ is a Conway sphere that encloses only $R_{a}$ and $R_{a+1}$ on one side, and $\sigma_{a}$ is the mutation along $S_{a}$ which rotates about the $y$-axis.
On one of our pretzel knots, such a mutation $\sigma_{a}$ interchanges the vertical tangles $R_{a}$ and $R_{a+1}$, as depicted in figure \ref{SAmutated}. In terms of our pretzel knot vector, such a mutation just switches $\frac{1}{q_{a}}$ and $\frac{1}{q_{a+1}}$. \begin{figure}[ht] \includegraphics[scale=0.50]{SAmutated.eps} \caption{Mutation along the Conway sphere $S_{a}$.} \label{SAmutated} \end{figure} In \cite{Mi}, we used the following theorem proved by Ruberman to construct many hyperbolic knot complements with the same volume. \begin{thm} \cite[Theorem $1.3$]{Ru} \label{thm:Ru1} Let $\mu$ be any mutation of an incompressible and $\partial$-incompressible hyperelliptic surface in a hyperbolic $3$-manifold $M$. Then $M^{\mu}$ is also hyperbolic, and $vol(M^{\mu}) = vol(M)$. \end{thm} Ruberman's proof of this theorem requires the hyperelliptic surface $S$ to be isotoped into least area form in order to perform a volume-preserving mutation of a hyperbolic $3$-manifold $M$ along $S$. This fact will be crucial, considering the conditions for Theorem \ref{thm:gammasep}. By the proof of \cite[Theorem $2$]{Mi}, for a given $M_{2n+1} = \mathbb{S}^{3} \setminus K_{2n+1}$, performing combinations of mutations along the collection $\left\{ (S_{a}, \sigma_{a}) \right\}_{a=1}^{2n}$ produces a large number of non-isometric hyperbolic knot complements with the same volume, and this number grows as $n$ increases. Specifically, we have: \begin{thm} \cite{Mi} \label{thm:volume} For each $n \in \mathbb{N}$, $n>2$, there exist $\frac{(2n)!}{2}$ distinct hyperbolic pretzel knots, $\left\{K_{2n+1}^{\sigma}\right\}$, obtained from each other via mutations along the Conway spheres $\left\{ (S_{a}, \sigma_{a}) \right\}$.
Furthermore, for each such $n$, \begin{itemize} \item their knot complements have the same volumes, and \item $\left(\frac{2n-1}{2}\right)v_{\mathrm{oct}} \leq vol(M^{\sigma}_{2n+1}) \leq \left(4n+2\right)v_{\mathrm{oct}}$, where $v_{\mathrm{oct}} \left(\approx 3.6638\right)$ is the volume of a regular ideal octahedron. \end{itemize} \end{thm} \section{The Geometry of Untwisted Augmented Links} \label{sec:GEOofUAL} The goal of this section is to better understand the geometry and topology of our pretzel knots by realizing them as Dehn fillings of untwisted augmented links. Recall that $K_{2n+1} = K \left( \frac{1}{q_{1}}, \frac{1}{q_{2}}, \ldots, \frac{1}{q_{2n+1}} \right)$ with $q_{1}$ even, while the rest are odd and distinct. We can realize each $K_{2n+1}$ as a Dehn surgery along specific components of a hyperbolic link $L_{2n+1}$. We want to find a lower bound on the normalized length of the Dehn filling slopes along each of these components in order to apply Corollary \ref{cor:syspreservednl}. We can also understand the cusp shape of $K_{2n+1}$ by first studying the cusp shape of this knot as a component of $L_{2n+1}$. This will be used to determine that these knots are pairwise incommensurable in Section \ref{sec:commensurablity}. The following analysis will help us determine the properties we are interested in. \subsection{Augmented Links} \label{subsubsec:AL_and_DF} First, we will go over some basic properties of knots. We usually visualize a knot by its \emph{diagram}. A diagram of a knot can be viewed as a $4$-valent planar graph $G$, with over-under crossing information at each vertex. Here, we will need to consider the number of \textit{twist regions} in a given diagram. A twist region of a knot diagram is a maximal string of bigons arranged from end to end. A single crossing adjacent to no bigons is also a twist region. We also care about the amount of twisting done in each twist region.
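As a concrete instance of these definitions, the diagrams we work with realize them as follows; this is only a restatement of what figure \ref{figure6} already depicts.

```latex
% Twist regions in the standard diagram of K_{2n+1} (figure \ref{figure6}):
% each vertical tangle R_i is a single twist region, so the diagram has
% exactly 2n+1 twist regions, and R_i is a string of q_i crossings, i.e.,
% a maximal string of q_i - 1 bigons arranged end to end.
\[
K_{2n+1} = K\!\left(\tfrac{1}{q_{1}}, \ldots, \tfrac{1}{q_{2n+1}}\right)
\quad\longrightarrow\quad
\text{twist regions } R_{1}, \ldots, R_{2n+1}, \quad
q_{i} \text{ crossings in } R_{i}.
\]
```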
We describe the amount of twisting in terms of half twists and full twists. A \textit{half twist} of a twist region of a diagram consists of a single crossing of two strands. A \textit{full twist} consists of two half twists. Now, we can define augmented links, which were introduced by Adams \cite{Ad} and have been studied extensively by Futer and Purcell in \cite{FP} and Purcell in \cite{Pu3}, \cite{Pu2}. For an introduction to augmented links, we suggest first reading \cite{Pu4}. \begin{figure}[h] \includegraphics[scale=0.60]{augmented_pretzel.eps} \caption{Diagrams of a knot $K$ with three twist regions, the augmented link $L'$ with three crossing circles, the untwisted augmented link $L$, and the flat augmented link $J$.} \label{augmentedpretzel} \end{figure} \begin{defn}[Augmented Links] \label{def:AL} Given a diagram of a knot or link $K$, insert a simple closed curve encircling each twist region. This gives a diagram for a new link $L'$, which is the \textit{augmented link} obtained from $K$. Obtain a new link $L$ by removing all full twists from each twist region in the diagram of $L'$. We shall refer to the link $L$ as the \textit{untwisted augmented link}. Each twist region now has either no crossings or a single crossing. If we remove all of the remaining single crossings from the twist regions, then we form the \textit{flat augmented link}, $J$. \end{defn} The top two diagrams in figure \ref{augmentedpretzel} show a link $K$ with three twist regions and then the corresponding augmented link $L'$. The bottom two diagrams of figure \ref{augmentedpretzel} show the corresponding untwisted augmented link $L$ and flat augmented link $J$. The simple closed curves inserted to augment $K$ are called \textit{crossing circles}. The untwisted augmented link $L$ has a diagram consisting of crossing circle components bounding components from the link. 
Near each crossing circle, the link component is embedded in the projection plane if the corresponding twist region contained only full twists. Otherwise, there is a single half twist. $L$ is made up of two types of components: the crossing circles and the other components coming from the original link $K$. We shall refer to these other components as the \textit{knot components} of $L$. When $K$ is a knot, there is a single knot component in $L$, which will be the case for our work. The $3$-manifolds $\mathbb{S}^{3} \setminus L$ and $\mathbb{S}^{3} \setminus L'$ are actually homeomorphic. Performing the appropriate number of full twists along the punctured disk bounded by a crossing circle and then regluing this disk gives a homeomorphism between the link exteriors. Thus, if either $\mathbb{S}^{3} \setminus L$ or $\mathbb{S}^{3} \setminus L'$ is hyperbolic, then Mostow--Prasad rigidity implies that the two manifolds are isometric. Next, we shall examine the polyhedral decompositions of certain untwisted augmented links. We will do this by first examining such structures on the corresponding flat augmented links, which are almost the same, but easier to initially analyze. \subsection{Ideal Polyhedral Decompositions of Untwisted Augmented Links} \label{subsec:PDofUAL} The polyhedral decompositions of untwisted augmented link complements have been thoroughly described in \cite{FP}. This polyhedral decomposition was first described by Agol and Thurston in the appendix of \cite{La}, and many of its essential properties are highlighted in the following theorem. \begin{thm} \label{thm:polyprops} Let $L$ be the untwisted augmented link corresponding to a link $K$. Assume the given diagram of $K$ is prime, twist-reduced, and $K$ has at least two twist regions. Then $\mathbb{S}^{3} \setminus L$ has the following properties: \begin{enumerate} \item $\mathbb{S}^{3} \setminus L$ has a complete hyperbolic structure.
\item This hyperbolic $3$-manifold decomposes into two identical ideal, totally geodesic polyhedra, $I$ and $I'$, all of whose dihedral angles are $\frac{\pi}{2}$. \item The faces of $I$ and $I'$ can be checkerboard colored, shaded and white. \item Shaded faces come in pairs on each polyhedron, and they are constructed by peeling apart half of a single $2$-punctured disc bounded by a crossing circle. \item White faces come from portions of the projection plane bounded by knot strands. \end{enumerate} \end{thm} Here, we will briefly describe this decomposition and the resulting circle packings, with emphasis on our untwisted augmented link complements, $N_{2n+1} = \mathbb{S}^{3} \setminus L_{2n+1}$. We direct the reader to \cite[Sections 6 and 7]{Pu2} for more details on cusp shape analysis of untwisted augmented link complements. \begin{figure}[ht] \includegraphics[scale=0.50]{Flat_and_Untwisted.eps} \caption{The untwisted augmented link $L_{2n+1}$ and the flat augmented link $J_{2n+1}$.} \label{flatanduntwisted} \end{figure} First, consider $\mathbb{S}^{3} \setminus J_{2n+1}$, where $J_{2n+1}$ is the flat augmented link, whose diagram is shown in figure \ref{flatanduntwisted}. In the diagram of $J_{2n+1}$, the knot strands all lie on the projection plane. To subdivide $\mathbb{S}^{3} \setminus J_{2n+1}$ into polyhedra, first slice it along the projection plane, cutting $\mathbb{S}^{3}$ into two identical $3$-balls. These identical polyhedra are transformed into ideal polyhedra by collapsing strands of $J_{2n+1}$ to ideal vertices. These ideal polyhedra have two types of faces: shaded faces and white faces, described in the above theorem. To go from an ideal polyhedral decomposition of $\mathbb{S}^{3} \setminus J_{2n+1}$ to one for $\mathbb{S}^{3} \setminus L_{2n+1}$, we just have to introduce a half-twist into our gluing at each shaded face where a crossing circle bounds a single twist. 
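For our links, this untwisting bookkeeping can be made explicit; the following sketch only restates counts that reappear when the surgery slopes are computed in Section \ref{subsec:NLonC}.

```latex
% Untwisting bookkeeping for L_{2n+1}: recall q_1 is even and
% q_2, ..., q_{2n+1} are odd.
%  - At R_1, all q_1/2 full twists are removed and no crossing remains, so
%    the two shaded faces for this crossing circle are glued without a
%    half-twist.
%  - At R_i (i >= 2), (q_i - 1)/2 full twists are removed and a single half
%    twist remains, so these shaded faces are glued with a half-twist.
\[
\text{full twists removed at } R_{i} \;=\;
\begin{cases}
\frac{q_{1}}{2} & i = 1 \quad (q_{1} \text{ even}),\\[1mm]
\frac{q_{i}-1}{2} & 2 \leq i \leq 2n+1 \quad (q_{i} \text{ odd}).
\end{cases}
\]
```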
Depicted in figure \ref{polydecomp} below is an ideal polyhedral decomposition of the flat augmented link, $J_{2n+1}$. \begin{figure}[h] \includegraphics[scale=0.60]{polydecomp.eps} \caption{The polyhedral decomposition of $J_{2n+1}$.} \label{polydecomp} \end{figure} In \cite[Section 6]{Pu2}, Purcell describes a circle packing associated to the white faces of the polyhedra (and a dual circle packing associated to the shaded faces). Figure \ref{pretzelcuspshape} depicts the circle packing coming from the white faces of the polyhedral decomposition of $J_{2n+1}$. \begin{figure}[h] \includegraphics[scale=0.60]{PretzelCuspShape.eps} \caption{The resulting circle packing for $J_{2n+1}$.} \label{pretzelcuspshape} \end{figure} The decomposition of $\mathbb{S}^{3} \setminus L_{2n+1}$ is determined by this circle packing. First, slice off half-spaces bounded by geodesic hemispheres in $\mathbb{H}^{3}$ corresponding to each circle in the circle packing. These give the geodesic white faces of the polyhedron. The shaded faces are obtained by slicing off hemispheres in $\mathbb{H}^{3}$ corresponding to each circle of the dual circle packing. Finally, we just need to make sure we glue up most of the shaded faces with a half-twist. Only the two shaded faces corresponding to the first twist region are glued up without a half-twist. A careful analysis of this polyhedral decomposition also leads to a canonical method for cusp expansion. Given any $\mathbb{S}^{3} \setminus L_{2n+1}$, we have $2n+2$ cusps corresponding to the $2n+2$ components of the link $L_{2n+1}$ (the $2n+1$ crossing circles and the single knot component). We initially start with disjoint horoball neighborhoods of these cusps and then follow the expansion instructions described in \cite[Section $3$]{FP}: given any ordering of the cusps, expand them one at a time until a horoball neighborhood $C$ meets another horoball neighborhood or $C$ meets the \textit{midpoint} of some edge of our polyhedral decomposition.
See \cite[Definition $3.6$]{FP} for the definition of midpoint in this context. This choice of cusp expansion results in the following theorem, which is now stated in terms of our untwisted augmented link complements. \begin{thm}\cite{FP} \label{thm:cuspexpansion} Given any $\mathbb{S}^{3} \setminus L_{2n+1}$, expand the cusps as described above. This results in a unique horoball packing where each boundary horosphere of a horoball neighborhood meets the midpoint of every edge asymptotic to its ideal point. \end{thm} The fact that this cusp expansion is unique will be essential for analyzing cusp neighborhoods and horoball packings in Section \ref{subsec:NLonC} and in Proposition \ref{prop:no_rigid}. \subsection{Normalized Lengths on Cusps} \label{subsec:NLonC} For this section, we will specialize our analysis to just our pretzel knot complements $M_{2n+1} = \mathbb{S}^{3} \setminus K_{2n+1}$ which result from Dehn filling the $2n+1$ crossing circles, $\left\{ C_{i} \right\}_{i=1}^{2n+1}$, of $N_{2n+1} = \mathbb{S}^{3} \setminus L_{2n+1}$. Recall that $K_{2n+1}$ has $2n+1$ twist regions with $q_{i}$ crossings in the $i^{th}$ twist region, and in $L_{2n+1}$, exactly $2n$ of these crossing circles enclose a single crossing since $2n$ of our $q_{i}$ are odd. To apply Corollary \ref{cor:syspreservednl}, we will need to examine normalized lengths of particular slopes on the cusps in $N_{2n+1}$ corresponding to crossing circles. In \cite[Proposition 6.5]{Pu2}, Purcell gives the general case for providing bounds on the normalized lengths $\widehat{L}(s_{i})$ of Dehn filling crossing circles of an untwisted augmented link. In the general case, re-inserting $q_{i}$ crossings gives $\widehat{L}(s_{i}) \geq \sqrt{q_{i}}$. By restricting to untwisted augmented links corresponding to hyperbolic pretzel knots, we are able to provide a substantial improvement on this bound, highlighted in the proposition given below.
\begin{prop} \label{prop:NL} On the cusps of $N_{2n+1}$ corresponding to crossing circles, we have the following normalized lengths: Let $s_{i}$ be the slope such that Dehn filling $N_{2n+1}$ along $s_{i}$ re-inserts the $q_{i}-1$ or $q_{i}$ crossings at that twist region. Then $\widehat{L}\left(s_{i}\right) \geq \sqrt{\frac{(2n-1)(1+q_{i}^{2})}{4n}}$. In particular, if $n \geq 2$, we have that $\widehat{L}(s_{i}) \geq \sqrt{ \frac{3(1+q_{i}^{2})}{8}} $. \end{prop} \begin{proof} Pictured in figure \ref{pretzelcuspshape} is a circle packing for $J_{2n+1}$ coming from the white faces. There also exists a circle packing for the shaded faces, which is dual to the circle packing coming from the white faces. These two circle packings also determine the same circle packings for $L_{2n+1}$ since the only difference between $L_{2n+1}$ and $J_{2n+1}$ is how the two ideal polyhedra are glued together. Much of what follows in the next two paragraphs is done in \cite[Sections $2$ and $3$]{FP}. In their work, the cusp shapes are analyzed with respect to any augmented link, while we will specialize to our $L_{2n+1}$. First, let us recall our polyhedra obtained in Section \ref{subsec:PDofUAL}. Each cusp will be tiled by rectangles given by the intersection of the cusp with the totally geodesic white and shaded faces of the polyhedra. Two opposite sides of each of these rectangles come from the intersection of the cusp with shaded faces of the polyhedra (corresponding with the $2$-punctured disc in the diagram of $L_{2n+1}$), and the other two sides from white faces. Call these sides shaded sides and white sides, respectively. We can make an appropriate choice of cusp neighborhoods as in Theorem \ref{thm:cuspexpansion}. This allows us to consider the geometry of our rectangles tiling a cusp. Our crossing circle cusp is tiled by two rectangles, each rectangle corresponding with a vertex in one of the polyhedra. 
In terms of our circle packing of $\mathbb{S}^{2}$, this vertex corresponds with a point of tangency of two circles. Consider the point of tangency given by $P_{i} \cap P_{i+1}$, which corresponds to one of the two identical rectangles making up the crossing circle cusp $C_{i+1}$. By the rotational symmetry of the circle packing in figure \ref{pretzelcuspshape}, all of these rectangles (along with their circle packings) are in fact isometric. Thus, taking a step along a shaded side will be the same for any such rectangle, and similarly for stepping along a white side. Let $s$ represent taking one step along a shaded side and $w$ represent taking one step along a white side. Each torus cusp, $T$, has universal cover $\tilde{T} = \mathbb{R}^{2}$. $\tilde{T}$ contains a rectangular lattice coming from the white and shaded faces of our polyhedron. We let $(s,w)$ be our choice of basis for this $\mathbb{Z}^{2}$ lattice. Now, we shall examine the normalized length in terms of our longitudes and meridians of the cusps corresponding to crossing circles. Lemma $2.6$ from \cite{FP} tells us that the meridian is given by $w \pm s$ when there is a half-twist, and the meridian is $w$ without the half-twist. In either case, the longitude is given by $2s$. When $q_{i}$ is odd, $\frac{q_{i} - 1}{2}$ full twists were removed in constructing $L_{2n+1}$, so the surgery slope for the $i^{th}$ crossing circle will be $(1, \frac{q_{i} - 1}{2})$. Thus, the slope $s_{i}$ is given by $(w \pm s) \pm \frac{q_{i}-1}{2} (2s) = w \pm q_{i}s$, when $q_{i}$ is odd. For the single even $q_{i}$, the surgery slope is $(1, \frac{q_{i}}{2})$ and the slope is given by $w \pm \frac{q_{i}}{2} (2s) = w \pm q_{i}s$; see \cite[Theorem 2.7]{FP}. In either case, the normalized length of $s_{i}$ is: \begin{center} $\widehat{L}(s_{i}) = \frac{ \sqrt{ \ell(w)^{2} + q_{i}^{2}\ell(s)^{2} } } {\sqrt{ 2\ell(w)\ell(s)}}$.
\end{center} Here, $\ell(w)$ and $\ell(s)$ denote the lengths of $w$ and $s$, respectively, on our choice of cusp neighborhoods. To bound the normalized length, we need to bound $\ell(w)$ and $\ell(s)$. We shall use our circle packing to obtain such bounds. Consider the tangency given by $P_{i} \cap P_{i+1}$, which corresponds to one of the two rectangles making up our cusp. Note that $P_{i}$ is also tangent to circles $P_{i-1}$, $A$, and $B$, while $P_{i+1}$ is also tangent to $P_{i+2}$, $A$, and $B$ ($i$ values taken mod $2n+1$). Apply a M\"{o}bius transformation taking $P_{i} \cap P_{i+1}$ to infinity. This takes the two tangent circles $P_{i}$ and $P_{i+1}$ to parallel lines, as in figure \ref{pretzelcuspshapes2}. This also gives the similarity structure of the rectangle under consideration. Our choice of cusp neighborhoods results in $\ell(s) = 1$. This makes the circles $A$ and $B$ lying under the dashed lines in figure \ref{pretzelcuspshapes2} have diameter $1$. Since circles in our circle packing cannot overlap, this forces $\ell(w) \geq 1$. Note that the dashed lines come from our dual circle packing corresponding to shaded faces. \begin{figure}[h] \includegraphics[scale=0.75]{PretzelCuspShapes2.eps} \caption{The cusp shape of one of the rectangles tiling our crossing circle cusp. This rectangle is determined by sending the tangency point $P_{i} \cap P_{i+1}$ to $\infty$.} \label{pretzelcuspshapes2} \end{figure} Now, we just need to find an upper bound for $\ell(w)$. Again, consider figure \ref{pretzelcuspshapes2}. Since $P_{j}$ is tangent to $A$, $B$, $P_{j-1}$, and $P_{j+1}$ for $1 \leq j \leq 2n+1$, all the circles $P_{1}, \dots, P_{i-1}, P_{i+2}, \dots, P_{2n+1}$ lie in between our parallel lines and in between $A$ and $B$, stacked together as depicted in figure \ref{pretzelcuspshapes2} to meet our tangency conditions.
Notice that this circle packing of one of these rectangles has two lines of symmetry: the line $l_{w}$ going through the two $w$ sides in their respective midpoints, and the line $l_{s}$ going through the two $s$ sides in their respective midpoints. $l_{w}$ is a translate of $s$ and $l_{s}$ is a translate of $w$. Reflecting across either of these lines preserves our circle packing. In particular, $l_{w}$ must intersect each $P_{j}$, $j \neq i, i+1$ in a diameter. Let $D(P_{j})$ denote the diameter of circle $P_{j}$. Then $\displaystyle\sum\limits_{j \neq i, i+1} D(P_{j}) = \ell(s) = 1$. Next, the fact that we have symmetries about both $l_{s}$ and $l_{w}$ and an odd number of $P_{j}$ packed in between $A$ and $B$ implies that one of our $P_{j}$'s is centered at $l_{s} \cap l_{w}$. Call this circle $P_{j}^{\ast}$. Note that $l_{s}$ intersects $A$, $B$, and $P_{j}^{\ast}$ in their respective centers. Thus, $\ell(w) = \ell(l_{s}) = \frac{D(A)}{2} + \frac{D(B)}{2} + D(P_{j}^{\ast}) = 1 + D(P_{j}^{\ast})$. Now, we claim that $P_{j}^{\ast}$ has the minimal diameter amongst $P_{j}$, $j \neq i, i+1$. This follows from our tangency conditions: each such $P_{j}$ must be tangent to both $A$ and $B$. The diameter of $P_{j}^{\ast}$ obviously minimizes the distance between $A$ and $B$. For any other $P_{j}$, consider the line $l_{j}$ in $P_{j}$ from $P_{j} \cap A$ to $P_{j} \cap B$. Then we have that $D(P_{j}^{\ast}) \leq \ell(l_{j}) \leq D(P_{j})$. The first inequality holds because $D(P_{j}^{\ast})$ minimizes distance from $A$ to $B$, while the second inequality is obviously true for any circle. So, $D(P_{j}^{\ast})$ must be the smallest such diameter. Finally, we have $1 = \ell(s) = \displaystyle\sum\limits_{j \neq i, i+1} D(P_{j}) \geq \displaystyle\sum\limits_{j \neq i, i+1} D(P_{j}^{\ast}) = (2n-1)D(P_{j}^{\ast})$, which implies that $D(P_{j}^{\ast}) \leq \frac{1}{2n-1}$.
This helps give us the desired upper bound on $\ell(w)$: \begin{center} $\ell(w) = \ell(l_{s}) = 1 + D(P_{j}^{\ast}) \leq 1 + \frac{1}{2n-1} = \frac{2n}{2n-1}$. \end{center} With these bounds, we have that \begin{center} $\widehat{L}(s_{i}) = \frac{ \sqrt{ \ell(w)^{2} + q_{i}^{2}\ell(s)^{2} } } {\sqrt{ 2\ell(w)\ell(s)}} = \frac{ \sqrt{ \ell(w)^{2} + q_{i}^{2} } }{ \sqrt{2\ell(w)}} \geq \frac { \sqrt {1+q_{i}^{2} } } {\sqrt{2\ell(w)}} \geq \frac { \sqrt {1+q_{i}^{2} } } {\sqrt{\frac{4n}{2n-1}}} = \sqrt{\frac{(2n-1)(1+q_{i}^{2})}{4n}}$. \end{center} In particular, if $n \geq 2$, we have that $\widehat{L}(s_{i}) \geq \sqrt{ \frac{3(1+q_{i}^{2})}{8}} $. \end{proof} We will also need to analyze the cusp shape of the one cusp $C$ corresponding to the knot component of $L_{2n+1}$. Such an analysis will play an important role in determining that our knot complements are not commensurable with one another. We will see that the tiling of the cusp $C$ by rectangles which come from truncating certain vertices of our ideal polyhedral decomposition has a number of nice properties, highlighted in the following proposition. \begin{prop} \label{prop:Knotcusp} Let $C$ be the cusp corresponding to the knot component of $L_{2n+1}$. This cusp has the following properties: \begin{enumerate} \item There are $4(2n+1)$ rectangles tiling this cusp, half of which come from each ideal polyhedron. \item This cusp shape is rectangular (and not a square). \item All of these rectangles, along with their circle packings, are isometric to one another. \end{enumerate} \end{prop} \begin{proof} Theorem \ref{thm:cuspexpansion} gives us an appropriate choice of cusp neighborhoods, which allows us to fix the geometry of our cusp $C$. $\textbf{(1):}$ Consider the ideal polyhedral decomposition in figure \ref{polydecomp} for $J_{2n+1}$. There are $2n+1$ disks corresponding to crossing circles, and we peel each of these disks apart to obtain $2(2n+1)$ shaded faces on each polyhedron.
For each shaded face, there are two vertices corresponding to rectangles that tile the knot component cusp $C$; specifically, the two vertices meeting $A$ or the two vertices meeting $B$, depending on the face. Since each of these vertices is shared by exactly two shaded faces, we obtain $2(2n+1)$ rectangles from each polyhedron, or $4(2n+1)$ total such rectangles. $L_{2n+1}$ admits the same polyhedral decomposition as $J_{2n+1}$; the only difference is that the half-twists may change how the shaded faces are glued together. $\textbf{(2):}$ This holds if there are no half-twists under any of the crossing circles, as in $J_{2n+1}$; see \cite[Section 2]{FP}. However, $L_{2n+1}$ has $2n$ half-twists in its diagram. A half-twist shifts the gluing of the rectangles making up the cusp. Since $K$ is a knot, it must go through each crossing circle twice, and so, it will pass through an even number of half-twists. Thus, from Lemma $2.6$ in \cite{FP}, the fundamental domain for this torus is given by the meridian $2s$ and the longitude $2(2n+1)w + 2ks$, for some integer $k$. By a change of basis, we can see that this cusp shape is once again rectangular. Note that this fundamental domain is not a square since $\ell(s) = 1$ and $1 < \ell(w) < 2$. $\textbf{(3):}$ Consider the circle packing depicted in figure \ref{pretzelcuspshape}. The rectangles tiling our cusp $C$ come from mapping $P_{i} \cap A$ to $\infty$ or mapping $P_{i} \cap B$ to $\infty$ for $i = 1, \ldots, 2n+1$. By the rotational symmetry of this circle packing, any $P_{i} \cap A$ and $P_{j} \cap A$ will determine isometric rectangles, and similarly for $P_{i} \cap B$ and $P_{j} \cap B$. In fact, $P_{i} \cap A$ and $P_{j} \cap B$ will also determine isometric rectangles. The circle packings of these rectangles are exactly the same except the roles of $A$ and $B$ have been switched; see figure \ref{pretzelcuspshapes3}.
We can also see that $P_{i} \cap A$ and $P_{j} \cap B$ determine isometric rectangles by considering the reflection through the circle running through $P_{i} \cap P_{j}$. This reflection gives a symmetry exchanging $A$ and $B$. \end{proof} \begin{figure}[h] \includegraphics[scale=0.75]{PretzelCuspShapes3.eps} \caption{The cusp shape of any one of the rectangles tiling our knot cusp $C$.} \label{pretzelcuspshapes3} \end{figure} Without loss of generality, we will assume any such rectangle coming from the tiling of our knot cusp looks like the one depicted in figure \ref{pretzelcuspshapes3}, i.e., we assume $P_{i} \cap A$ is mapped to $\infty$. \begin{lemma} \label{lemma:rectanglesize} Let $R$ be any rectangle from the tiling of $C$. Let $P_{j}^{\ast}$ be the smallest of the circles $P_{j}$ in the circle packing of this rectangle. Then for all $n \geq 2$, the circle packing of $R$ has the following size bounds: \begin{enumerate} \item $\ell(s) = 1$ and $1 < \ell(w) < 2$, \item $\frac{n-2}{n-1} < D(B) < 1$, \item $D(B) > \frac{1}{2}$, \item $D(P_{j}^{\ast}) < \frac{1}{n-1}$. \end{enumerate} \end{lemma} \begin{proof} As before, our choice of cusp neighborhood results in $\ell(s) = 1$. Then $D(P_{1}) = D(P_{3}) = 1$. We will assume our rectangle is the one depicted in figure \ref{pretzelcuspshapes3}. By part $3$ of Proposition \ref{prop:Knotcusp}, all such rectangles tiling our cusp, along with their circle packings, are isometric to this one, up to relabelling. First, we claim that for any $L_{2n+1}$, $1 < \ell(w) < 2$. The lower bound follows from the fact that $D(P_{1}) = D(P_{3}) = 1$, and $P_{1}$ and $P_{3}$ cannot be tangent to one another. If $\ell(w) > 2$, then $D(B) > 1$ in order to be tangent to both $P_{1}$ and $P_{3}$. However, since $\ell(s) = 1$, $B$ would not be tangent to $A$ and $P_{2}$. If $\ell(w) = 2$, then $D(B) = 1$ in order to meet its tangency conditions.
Since $\ell(s) = 1$, $B$ must separate our rectangle into two parts, one to the right of $B$ and one to the left of $B$. This violates the tangency conditions of the $P_{j}$, for $j = 4 , \dots, 2n+1$. So, $1 < \ell(w) < 2$ and $D(B) < 1$. Take the vector $w$ and translate it vertically so it intersects $P_{j}^{\ast}$ in its center. This line will intersect all the $P_{j}$ in some segment $\ell(P_{j})$, which must be at least as large as $D(P_{j}^{\ast})$. This can easily be seen by translating $P_{j}^{\ast}$ horizontally along this line so that its point of tangency with $A$ is $P_{j} \cap A$. Note that there are exactly $2n-2$ circles $\left\{ P_{j} \right\}_{j=4}^{2n+1}$ packed under $B$. This gives the following inequality: \begin{center} $2 > \ell(w) > \displaystyle\sum\limits_{j=4}^{2n+1} \ell(P_{j}) \geq \displaystyle\sum\limits_{j = 4}^{2n+1} D(P_{j}^{\ast}) = (2n-2)D(P_{j}^{\ast})$. \end{center} This gives us that $D(P_{j}^{\ast}) < \frac{2}{2n-2} = \frac{1}{n-1}$. Now, for any such $j$, $D(B) + D(P_{j}) \geq 1$. Combining with the previous result, we have that \begin{center} $D(B) \geq 1 - D(P_{j}^{\ast}) > 1 - \frac{1}{n-1} = \frac{n-2}{n-1}$, \end{center} as desired. Finally, we need to show that $D(B) > \frac{1}{2}$. This is already true if $n>2$ since $\frac{n-2}{n-1} < D(B)$. So, assume $n=2$, which means there are exactly two circles, $P_{4}$ and $P_{5}$, packed under $B$. Suppose $D(B) \leq \frac{1}{2}$. Then $D(P_{4}) > \frac{1}{2}$ since $D(B) + D(P_{4}) > 1$. Also, $\ell(w) \leq D(B) + \frac{D(P_{1})}{2} + \frac{D(P_{3})}{2} = \frac{3}{2}$. Take the vector $w$ and translate it vertically so that it intersects $P_{4}$ in its center, and take the vector $s$ and translate it horizontally so that it intersects $P_{3}$ in its center. We shall still refer to the translates of these vectors as $w$ and $s$, respectively. Now consider the right triangle with vertices at the center of $P_{3}$, $w \cap s$, and the left endpoint of $P_{4} \cap w$.
The hypotenuse, $c$, of this triangle has length at least $\frac{1}{2}$ since $\frac{D(P_{3})}{2} = \frac{1}{2}$. The height, $a$, has length less than $\frac{1}{4}$ since $\frac{D(P_{3})}{2} = \frac{1}{2}$ and $\frac{D(P_{4})}{2} \geq \frac{1}{4}$. The base, $b$, has length less than $\frac{1}{4}$ since $\frac{\ell(w)}{2} \leq \frac{3}{4}$ and $D(P_{4}) > \frac{1}{2}$. This gives us that $\frac{1}{4} \leq \ell(c)^{2} = \ell(a)^{2} + \ell(b)^{2} < \frac{1}{16} + \frac{1}{16} = \frac{1}{8}$, which is a contradiction. Thus, $D(B) > \frac{1}{2}$. \end{proof} \section{Commensurability classes of hyperbolic pretzel knot complements} \label{sec:commensurablity} Recall that two hyperbolic $3$-manifolds $M_{1} = \mathbb{H}^{3} / \Gamma_{1}$ and $M_{2} = \mathbb{H}^{3} / \Gamma_{2}$ are called \textit{commensurable} if they share a common finite-sheeted cover. In terms of fundamental groups, this definition is equivalent to $\Gamma_{1}$ and a conjugate of $\Gamma_{2}$ in PSL$(2, \mathbb{C})$ sharing some finite index subgroup. The \textit{commensurability class} of a hyperbolic $3$-manifold $M$ is the set of all $3$-manifolds commensurable with $M$. We are interested in the case when $M = \mathbb{S}^{3} \setminus K$, where $K$ is a hyperbolic knot. It is conjectured in \cite{ReWa} that there are at most three knot complements in the commensurability class of a hyperbolic knot complement. In particular, Reid and Walsh show that if $K$ is a hyperbolic $2$-bridge knot, then $M$ is the only knot complement in its commensurability class. Their work provides criteria for checking whether or not a hyperbolic knot complement is the only knot complement in its commensurability class. Specifically, we have the following theorem coming from Reid and Walsh's work in \cite[Section 5]{ReWa}; this version of the theorem can be found at the beginning of \cite{MM}. \begin{thm} \label{thm:comm_knots} Let $K$ be a hyperbolic knot in $\mathbb{S}^{3}$.
If $K$ admits no hidden symmetries, has no lens space surgery, and admits either no symmetries or else only a strong inversion and no other symmetries, then $\mathbb{S}^{3} \setminus K$ is the only knot complement in its commensurability class. \end{thm} Macasieb and Mattman use this criterion in \cite{MM} to show that for any hyperbolic pretzel knot of the form $K\left( \frac{1}{-2}, \frac{1}{3}, \frac{1}{n} \right)$, $n \in \mathbb{Z} \setminus \left\{7\right\}$, its knot complement $\mathbb{S}^{3} \setminus K\left( \frac{1}{-2}, \frac{1}{3}, \frac{1}{n} \right)$ is the only knot complement in its commensurability class. The main challenge in their work was showing that these knots admit no \textit{hidden symmetries}. \begin{defn} \label{defn:hiddensym} Let $\Gamma$ be a finite co-volume Kleinian group. The \textit{normalizer} of $\Gamma$ is \begin{center} $N(\Gamma) = \left\{ g \in \text{PSL}(2, \mathbb{C}) : g\Gamma g^{-1} = \Gamma \right\}$. \end{center} The \textit{commensurator} of $\Gamma$ is \begin{center} $C(\Gamma) = \left\{ g \in \text{PSL}(2, \mathbb{C}) : \left|\Gamma : \Gamma \cap g\Gamma g^{-1} \right| < \infty \ \text{and} \ \left|g \Gamma g^{-1} : \Gamma \cap g\Gamma g^{-1} \right| < \infty \right\}$. \end{center} If $N(\Gamma)$ is strictly smaller than $C(\Gamma)$, then $\Gamma$ and $\mathbb{H}^{3} / \Gamma$ are said to have \textit{hidden symmetries}. If $\mathbb{H}^{3} / \Gamma \cong \mathbb{S}^{3} \setminus K$, then we also say that $K$ admits hidden symmetries. \end{defn} Here, we would also like to apply Reid and Walsh's criterion to show that our hyperbolic pretzel knot complements are the only knot complements in their respective commensurability classes. The following proposition immediately takes care of symmetries and lens space surgeries.
Given a knot $K \subset \mathbb{S}^{3}$, $K$ admits a \textit{strong inversion} if there exists an involution $t$ of $(\mathbb{S}^{3}, K)$ such that the fixed point set of $t$ intersects the knot in exactly two points. \begin{prop} \label{prop:no_surg_or_sym} Let $M = \mathbb{S}^{3} \setminus K$, where $K = K\left( \frac{1}{q_{1}}, \frac{1}{q_{2}},\ldots, \frac{1}{q_{n}} \right)$ is a hyperbolic pretzel knot with all $q_{i}$ distinct, exactly one $q_{i}$ even, and $K \neq K\left( \frac{1}{-2}, \frac{1}{3}, \frac{1}{7} \right)$. Then $M$ admits no lens space surgeries, and a strong inversion is its only symmetry. In particular, any $M_{2n+1}^{\sigma}$ admits no lens space surgeries, and a strong inversion is its only symmetry. \end{prop} \begin{proof} All pretzel knots admitting lens space surgeries have been classified by Ichihara and Jong in \cite{IJ}, and this classification is also implied by the work of Lidman and Moore in \cite{LiMo}. Both works show that the only hyperbolic pretzel knot that admits any lens space surgeries is $K\left( \frac{1}{-2}, \frac{1}{3}, \frac{1}{7} \right)$. To deal with symmetries, we first note that the work of Boileau and Zimmermann \cite{BoZi} implies that Sym$(\mathbb{S}^{3}, K) = \mathbb{Z}_{2}$. It is easy to see that the one non-trivial symmetry of any $K$ is a strong inversion. Consider the knot diagram of $K_{2n+1}^{\sigma}$ as shown in figure \ref{figure6}. Recall that exactly one twist region $R_{i}$ has an even number of crossings. Consider the involution of our knot in $\mathbb{S}^{3}$ whose axis cuts directly through the middle of all of our twist regions. The axis of this involution will intersect $K_{2n+1}^{\sigma}$ in exactly two points, always inside the one twist region with an even number of crossings. In the other twist regions, this axis will miss the knot, passing in between two strands at a crossing. This process for finding the strong inversion generalizes to any pretzel knot $K$ with exactly one $q_{i}$ even.
\end{proof} It remains to rule out hidden symmetries. In \cite{MM}, Macasieb and Mattman do this by arguing that the invariant trace field of any $K\left( \frac{1}{-2}, \frac{1}{3}, \frac{1}{n} \right)$ has neither $\mathbb{Q}(i)$ nor $\mathbb{Q}(\sqrt{-3})$ as a subfield; this criterion for the existence of hidden symmetries is supplied by Neumann and Reid \cite{NeRe}. Here, we instead use a geometric approach to show that our knots do not admit hidden symmetries, relying on another criterion of Neumann and Reid \cite{NeRe}, stated below. \begin{prop} \cite[Proposition 9.1]{NeRe} \label{lemma:no_hidden_sym} Let $\mathbb{H}^{3} / \Gamma$ be a hyperbolic knot complement which is not the figure-$8$ knot complement. Then $\mathbb{H}^{3} / \Gamma$ admits hidden symmetries if and only if $\mathbb{H}^{3}/ C(\Gamma)$ has a rigid Euclidean cusp cross-section. \end{prop} The rigid Euclidean orbifolds are $\mathbb{S}^{2}(2,4,4)$, $\mathbb{S}^{2}(3,3,3)$, and $\mathbb{S}^{2}(2,3,6)$, so named because their moduli spaces are trivial. The following proposition will imply that our hyperbolic pretzel knot complements do not admit hidden symmetries, and so, they are the only knot complements in their respective commensurability classes. In what follows, $\mathbb{H}^{3} = \left\{(x,y,z) \mid z>0\right\}$. \begin{prop} \label{prop:no_rigid} For all $n \geq 2$ and $q_{i}$ sufficiently large, the hyperbolic knot complement $M = \mathbb{S}^{3} \setminus K = N_{2n+1}\left( (1, q_{1}) , \dots, (1, q_{2n+1}) \right)$ admits no hidden symmetries. \end{prop} \begin{proof} We will show that any such hyperbolic knot complement does not cover a $3$-orbifold that admits a rigid cusp $2$-orbifold, and so, by Proposition \ref{lemma:no_hidden_sym}, these knot complements admit no hidden symmetries.
First, we shall analyze the cusp of $N_{2n+1}$ corresponding to the knot component of $L_{2n+1}$, and then expand this analysis to the cusp shape of any such $M$. In particular, we will prove that this cusp of $N_{2n+1}$ does not cover any rigid $2$-orbifold. This is accomplished by showing that the horoball packing corresponding to this cusp does not admit an order three or order four rotational symmetry. Then, by taking sufficiently long Dehn surgeries along all of the crossing circles of $L_{2n+1}$, we can make sure that the cusp of $M$ also does not cover any rigid $2$-orbifold. Throughout this proof, let $C$ denote the cusp of $N_{2n+1}$ that corresponds to the knot component of $L_{2n+1}$. Lift to $\mathbb{H}^{3}$ so that one of the lifts of the cusp $C$ is a horoball centered at $\infty$, denoted $H_{\infty}$. There will be a collection of disjoint horoballs in $\mathbb{H}^{3}$ associated with each cusp in $N_{2n+1}$. We expand our horoballs according to the procedure given by Theorem \ref{thm:cuspexpansion}. Specifically, we pick an order for our cusps, and expand the horoball neighborhood of a cusp until it either meets another horoball or meets the \textit{midpoint} of some edge of one of the polyhedra; see \cite[Definition $3.6$]{FP}. This procedure allows us to expand $H_{\infty}$ to height $z=1$, since any other horoballs will have diameter at most $1$ under these expansion instructions; see \cite[Theorem $3.8$]{FP}. We shall refer to a horoball of diameter $1$ as a \textit{maximal horoball}. This procedure from \cite[Theorem $3.8$]{FP} results in maximal horoballs sitting at each vertex of a rectangle tiling our cusp cross-section $C$. \begin{figure}[h] \includegraphics[scale=0.60]{cusptiling.eps} \caption{The cusp tiling of a cross-section of $C$.
The red circles denote the shadows of maximal horoballs from $C$, and the green circles denote the shadows of maximal horoballs from crossing circles.} \label{cusptiling} \end{figure} By Proposition \ref{prop:Knotcusp} and Lemma \ref{lemma:rectanglesize}, the cusp cross-section of $C$ is tiled by a collection of rectangles in a very particular fashion. All of these rectangles have the same dimensions: $\ell(s)$ by $\ell(w)$, with $\ell(s) =1$ and $1 < \ell(w) < 2$. Furthermore, the circle packing for each of these rectangles is exactly the same. These $4(2n+1)$ rectangles are glued together to form a $2 \times 2(2n+1)$ block of rectangles. Expand this tiling of the cusp cross-section to cover the entire plane. From our view at $\infty$, we will see the shadow of a maximal horoball centered at each vertex. Specifically, each of the $2n+1$ crossing disks gives three vertices, two of which correspond to horoballs coming from our cusp $C$. In terms of our $2 \times 2(2n+1)$ block of rectangles, the vertices along the middle row correspond to maximal horoballs of our crossing circles. Vertices along the top and bottom rows of the block correspond to maximal horoballs from $C$. We claim that these are in fact the only maximal horoballs of $C$. See figure \ref{cusptiling} for a diagram showing the maximal horoballs of $C$. \begin{figure}[h] \includegraphics[scale=0.60]{PretzelCuspShapes4.eps} \caption{The local picture of our cusp tiling of a cross-section of $C$. The red circles denote the shadows of maximal horoballs from $C$, and the green circles denote the shadows of maximal horoballs from crossing circles.} \label{pretzelcuspshapes4} \end{figure} Our circle packing analysis of the rectangles tiling $C$ from Lemma \ref{lemma:rectanglesize} will help us prove this claim. Figure \ref{pretzelcuspshapes4} shows two adjacent rectangles coming from the tiling of $C$, along with their circle packings.
This figure also includes the shadows of the maximal horoballs located at vertices. See figure \ref{pretzelcuspshapes3} for a picture of one of these rectangles without the horoball shadows. Suppose there exists another maximal horoball of $C$, call it $H$. We know $H$ cannot intersect the other maximal horoballs, except possibly at points of tangency. Also, $H$ must be centered at a point either outside of the circles or on the boundary of one of the circles from our circle packing, since in constructing our link complement we cut away hemispheres bounded by these circles. On our cusp cross-section of $C$, there are two lines of symmetry that will be useful here: the line $A$ and the line $l_{w}$, which cuts through the vector $w$ at its midpoint. Our horoball packing admits reflective symmetries about both of these lines. We shall now consider two cases. \textbf{Case $1$:} $H$ is centered along $l_{w}$. Since the center of $H$ cannot be contained in $B$, $H$ is either centered at $x_{0} = P_{2} \cap B$ or at some $y$ that lies below $B$ and above $A$ on $l_{w}$. First, suppose $H$ is centered at $x_{0}$. Since $\ell(w) <2$ and there are maximal horoballs at the corners of any such rectangle, $H$ cannot be maximal. Now, suppose $H$ is centered at some $y$ as described above. By applying the reflection along $A$, $H$ will get mapped to another maximal horoball. For $n \geq 2$, we know that $D(B) > \frac{1}{2}$ by Lemma \ref{lemma:rectanglesize}. Thus, for $n \geq 2$, the distance from the center of $H$ to $l_{w} \cap A$ is less than $\frac{1}{2}$. In this case, $H$ will overlap with its image. In order to meet our tangency conditions, $H$ must map to itself. This implies that $H$ is centered at $y_{0} = l_{w} \cap A$. Once again, since $\ell(w) <2$ and there are maximal horoballs at the corners of any such rectangle, $H$ cannot be maximal. \textbf{Case $2$:} Assume $H$ is not centered along $l_{w}$.
Then the reflection along $l_{w}$ maps $H$ to some other maximal horoball, $H'$. Now, if $H'$ and $H$ intersect, it must be at a point of tangency. So, both $H$ and $H'$ must each be centered a distance of at least $\frac{1}{2}$ from $l_{w}$. This implies that the center of $H$ is at most a distance $\frac{1}{2}$ from the $s$ side of the rectangle closest to $H$. Also, since $\ell(s)=1$, the center of $H$ will be at most a distance $\frac{1}{2}$ from a $w$ side of a rectangle. Therefore, the center of $H$ will be at most a distance of $\sqrt{(\frac{1}{2})^{2} + (\frac{1}{2})^{2}} = \frac{1}{\sqrt{2}} < 1$ away from a corner of a rectangle, which is also the center of a maximal horoball. This implies that $H$ will overlap with a maximal horoball at one of the corners, which cannot happen. Thus, the only maximal horoballs of $C$ occur at the corners of our rectangles as specified above. We now claim that the horoball packing corresponding to the cusp $C$ of $N_{2n+1}$ does not admit an order three or order four rotational symmetry. Such symmetries fail to exist because of the shape of our rectangles. Pick any maximal horoball $H$ of $C$ such that $H \neq H_{\infty}$. The distance from the center of $H$ to the center of any other maximal horoball of $C$ in the $s$ direction is an integer multiple of $2 \ell(s) = 2$, and the distance from the center of $H$ to the center of any other maximal horoball of $C$ in the $w$ direction is an integer multiple of $\ell(w)$, where $\ell(w) < 2$. Next, note that the distance across the diagonal of the $2s \times w$ rectangle from the center of $H$ to the center of another maximal horoball of $C$ is $\sqrt{(2\ell(s))^{2}+ \ell(w)^{2}} = \sqrt{4 + \ell(w)^{2}} > \sqrt{5}> 2$ since $\ell(w) > 1$. This implies that the two closest maximal horoballs of $C$ are a distance $\ell(w)$ away in the $w$ direction (one to the left of $H$ and one to the right of $H$).
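The elementary distance estimates just used are easy to check numerically. The following sketch is a sanity check only (not part of the proof): it verifies that the Case $2$ corner distance $1/\sqrt{2}$ is below $1$, and that for any $1 < \ell(w) < 2$ the diagonal spacing $\sqrt{4 + \ell(w)^{2}}$ exceeds $2$, so the two closest maximal horoballs always lie in the $w$ direction.

```python
import math

# Case 2: a horoball centered within 1/2 of both an s side and a w side lies
# within 1/sqrt(2) < 1 of a rectangle corner, so it would overlap the
# maximal horoball (diameter 1) centered at that corner.
corner_dist = math.sqrt(0.5**2 + 0.5**2)
assert corner_dist < 1

# String argument: centers of maximal horoballs of C are separated by
# 2*l(s) = 2 in the s direction and by l(w) in the w direction; the diagonal
# of the 2s x w cell is sqrt(4 + l(w)^2) > sqrt(5) > 2 whenever l(w) > 1.
for lw in [1.001, 1.5, 1.999]:  # sample values in the allowed range (1, 2)
    assert math.sqrt(4 + lw**2) > math.sqrt(5) > 2 > lw
print("closest maximal horoballs lie in the w direction for all 1 < l(w) < 2")
```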
This gives an infinite string of pairwise closest maximal horoballs all centered on the same line: take any $H \neq H_{\infty}$; each translate of $H$ by $n \cdot \ell(w)$, $n \in \mathbb{Z}$, determines another horoball in this string; see figure \ref{cusptiling}. Any rotational symmetry would have to map a string of pairwise closest maximal horoballs to another string of pairwise closest maximal horoballs. Thus, the only possible rotational symmetry would be an order two symmetry, where each such string maps back to itself. So, the horoball packing of $C$ does not admit an order three or order four rotational symmetry. Thus, this cusp does not cover a $2$-orbifold that has an order three or order four cone point. But any rigid cusp $2$-orbifold has an order three or order four cone point. Therefore, $C$ does not cover any rigid cusp $2$-orbifold. Since the cusp cross-section of $N_{2n+1}$ corresponding to the knot component of $L_{2n+1}$ does not admit order three or order four rotational symmetries, we can now show that the cusp cross-section of $M$ also does not have these symmetries. This is made possible by taking sufficiently long Dehn fillings along the crossing circles. As $q_{i} \rightarrow \infty$, any such $M$ converges to $N_{2n+1}$ in the geometric topology. Convergence in the geometric topology implies that we can fix a compact subset of $\mathbb{H}^{3}$, and the geometry of our horoball packing of $C'$ (the cusp of $M$ corresponding to the knot $K$) can be made sufficiently close to the geometry of $C$ on this compact subset, by choosing $q_{i}$ sufficiently large. So, consider the set of maximal horoballs $H_{1}, \dots, H_{k}$ of $C$ that intersect some fixed fundamental domain for the stabilizer of $\infty$. We claim that for sufficiently small $\delta$, we can choose $q_{i}$ large enough so that each such $H_{i}$ has diameter and center $\delta$-close to a corresponding horoball $H_{i}'$ in $C'$.
Both the maximal horoball packing for $C$ and the maximal horoball packing for $C'$ are invariant under rank-two groups of translations. Let $g_{i}$ be a translation whose action on $C$ is determined by $H_{i} = g_{i}(H_{\infty})$. Specifically, $g_{i}$ is the translation determined by $g_{i}(p) = H_{i} \cap H_{\infty}$ for a point $p$ of intersection between $H_{\infty}$ and a fixed maximal horoball of $C$. Then there exists a $g_{i}'$ that acts on $C'$ such that $g_{i}'(p) \rightarrow g_{i}(p)$ as $q_{i} \rightarrow \infty$, which implies that the center and diameter of some $H_{i}'$ approach those of $H_{i}$. We refer to the horoballs $H_{1}', \dots, H_{k}'$ as \textit{almost maximal horoballs}. Now we can show that $C'$ lacks any order three or order four rotational symmetries by using the same type of argument we used for $C$. For $C$, we had infinite strings of pairwise closest maximal horoballs, with each string centered on a horizontal line. For $C'$, we get finite strings (since we are working over a compact domain) of pairwise closest almost maximal horoballs. These horoballs might not be centered on horizontal lines, but instead are within a sufficiently small $\epsilon$ of being so centered. If anything, this will only further break any possible symmetries. Any rotational symmetry would have to map a string of pairwise closest almost maximal horoballs to another string of pairwise closest almost maximal horoballs. Again, the only possible rotational symmetry would be an order two symmetry. Thus, the single cusp of $M$ cannot cover a rigid $2$-orbifold, and so, $M$ does not admit hidden symmetries. \end{proof} Combining Proposition \ref{prop:no_rigid} with Proposition \ref{prop:no_surg_or_sym} shows that we have covered the three criteria in Reid and Walsh's theorem.
This gives the following theorem, which applies to our pretzel knots $K_{2n+1} = K \left( \frac{1}{q_{1}}, \frac{1}{q_{2}}, \ldots, \frac{1}{q_{2n+1}} \right)$, if we assume that all $q_{i}$ are sufficiently large. \begin{thm} \label{thm:non_commensurable} Let $n \geq 2$ and let $q_{1}, \ldots, q_{2n+1}$ be integers such that only $q_{1}$ is even, $q_{i} \neq q_{j}$ for $i \neq j$, and all $q_{i}$ are sufficiently large. Then the complement of the hyperbolic pretzel knot $K \left( \frac{1}{q_{1}}, \frac{1}{q_{2}}, \ldots, \frac{1}{q_{2n+1}} \right)$ is the only knot complement in its commensurability class. In particular, any two of these hyperbolic pretzel knot complements are incommensurable. \end{thm} The work of Schwartz \cite[Theorem 1.1]{Sch} tells us that two cusped hyperbolic $3$-manifolds are commensurable if and only if their fundamental groups are quasi-isometric. This immediately gives the following corollary. \begin{cor} \label{cor:notQI} If two pretzel knot complements as described in Theorem \ref{thm:non_commensurable} are non-isometric, then they do not have quasi-isometric fundamental groups. \end{cor} \section{Mutations and short geodesics coming from Dehn fillings} \label{sec:mutations_sys} In this section, we shall analyze the behavior of short geodesics in the set of knot complements $\left\{M_{2n+1}^{\sigma}\right\}$. If there is enough vertical twisting in each twist region, i.e., if each $q_{i}$ is sufficiently large, then we can easily determine which geodesics are the shortest. This analysis is possible by realizing our pretzel knot complements as Dehn surgeries along untwisted augmented link complements. We shall also see that if each $q_{i}$ is sufficiently large, then the initial length spectrum is actually preserved under mutation, and so, we will be able to generate a large class of hyperbolic knot complements with both the same volume and the same initial length spectrum.
Here, we also give an application to closed hyperbolic $3$-manifolds that come from Dehn filling $M_{2n+1}^{\sigma}$ along $K_{2n+1}^{\sigma}$. For each $n \in \mathbb{N}$, $n \geq 2$, these sets of closed manifolds will have the same volume and the same initial length spectrum. We end this section by raising some questions about the effectiveness of geometric invariants of hyperbolic $3$-manifolds. \subsection{Mutations of $K_{2n+1}$ with the same initial length spectrum} \label{subsec:mutations_of_K} Given the untwisted augmented link complement $N_{2n+1} = \mathbb{S}^{3} \setminus L_{2n+1}$, we form $M_{2n+1} = \mathbb{S}^{3} \setminus K_{2n+1}$ by performing Dehn surgeries $(1, \frac{q_{i} - 1}{2})$ along $2n$ of the crossing circle cusps, and one Dehn surgery $(1 , \frac{q_{1}}{2})$ along the crossing circle cusp not enclosing a half-twist, i.e., \begin{center} $M_{2n+1} = N_{2n+1} \left( (1, \frac{q_{1}}{2}), (1, \frac{q_{2} - 1}{2}), \dots, (1, \frac{q_{2n+1} - 1}{2}) \right)$. \end{center} Similarly, any mutation $M^{\sigma}_{2n+1}$ is obtained by performing the same Dehn surgeries on $N_{2n+1}$, just with some of the surgery coefficients permuted. We now show that if each $q_{i}$ is sufficiently large, then the core geodesics in $M_{2n+1}$ are sufficiently short, and so, they are preserved under mutation. \begin{thm} \label{thm:sys_preserved} Let $\left\{ \gamma_{i}^{\sigma} \right\}_{i=1}^{2n+1}$ be the $2n+1$ geodesics in $M_{2n+1}^{\sigma}$ that came from Dehn filling the crossing circles of $N_{2n+1}$. For each $n \in \mathbb{N}$, there exists a constant $Q = Q(n) = \sqrt{(20.76)^{2}\frac{(2n+1)(4n)}{2n-1}-1}$, such that if each $q_{i} \geq Q$, then $\left\{ \gamma_{i}^{\sigma} \right\}_{i=1}^{2n+1}$ make up at least $2n+1$ of the shortest geodesics in their respective hyperbolic $3$-manifolds, and every $M_{2n+1}^{\sigma}$ has at least the same $2n+1$ shortest (complex) geodesic lengths.
\end{thm} \begin{proof} Given $M_{2n+1}$, we must show that the result holds for a mutation $\sigma_{a}$ along $S_{a}$, and the general result will follow by induction. By Proposition \ref{prop:NL}, we know that the normalized length of the $i^{th}$ filling slope satisfies $\widehat{L}\left(s_{i}\right) \geq \sqrt{\frac{(2n-1)(1+q_{i}^{2})}{4n}}$. If each $\widehat{L}(s_{i}) \geq 20.76\sqrt{2n+1}$, then Corollary \ref{cor:syspreservednl} tells us that $M$ and $M^{\sigma_{a}}$ have (at least) the same $2n+1$ shortest (complex) geodesic lengths, and (at least) a portion of the initial length spectrum is given by $\left\{\ell_{\mathbb{C}}(\gamma_{i})\right\}_{i=1}^{2n+1} = \left\{\ell_{\mathbb{C}}(\gamma_{i}^{\sigma_{a}})\right\}_{i=1}^{2n+1}$. Thus, we just need to solve $\sqrt{\frac{(2n-1)(1+q_{i}^{2})}{4n}} \geq 20.76\sqrt{2n+1}$ for $q_{i}$ to determine $Q$. \end{proof} The following theorem comes from combining Theorem \ref{thm:sys_preserved}, Theorem \ref{thm:non_commensurable}, and Theorem \ref{thm:volume}, and requires all $q_{i}$ to be chosen sufficiently large. This theorem shows that there are large classes of geometrically similar pretzel knots -- they have non-isometric knot complements, but a large number of their geometric invariants are the same. \begin{thm} \label{thm:similarpretzels} For each $n \in \mathbb{N}$, $n\geq 2$, there exist $\frac{(2n)!}{2}$ non-isometric hyperbolic pretzel knot complements, $\left\{M_{2n+1}^{\sigma}\right\}$, such that these manifolds: \begin{itemize} \item have the same $2n+1$ shortest geodesic (complex) lengths, \item are pairwise incommensurable, \item have the same volume, and \item $\left(\frac{2n-1}{2}\right)v_{\mathrm{oct}} \leq vol(M^{\sigma}_{2n+1}) \leq \left(4n+2\right)v_{\mathrm{oct}}$, where $v_{\mathrm{oct}} \left(\approx 3.6638\right)$ is the volume of a regular ideal octahedron.
\end{itemize} \end{thm} \subsection{Closed hyperbolic $3$-manifolds with the same volume and initial length spectrum} \label{subsec:closed} Let $M = \mathbb{S}^{3} \setminus K$ and let $M(p,q)$ denote the closed manifold obtained by performing a $(p,q)$-Dehn surgery along the knot $K$. In \cite[Theorem 3]{Mi}, we show that for each $n \in \mathbb{N}$, $n \geq 2$, and for $(p,q)$ sufficiently large, $M_{2n+1}^{\sigma}(p,q)$ and $M_{2n+1}^{\sigma'}(p,q)$ have the same volume and are non-isometric closed hyperbolic $3$-manifolds, whenever $M_{2n+1}^{\sigma}$ and $M_{2n+1}^{\sigma'}$ are non-isometric. This proof relies on another result of Ruberman's \cite[Theorem 5.5]{Ru} which shows that corresponding Dehn surgeries on a hyperbolic knot $K$ and its mutant $K^{\mu}$ will often result in manifolds with the same volume. Specifically, this happens when a Conway sphere and its mutation are \textit{unlinked}. \begin{defn}[Unlinked] \label{def:Unlinked} Let $K$ be a knot in $S^{3}$ admitting a Conway sphere $S$. Observe that a specific choice of a mutation $\mu$ gives a pair of $S^{0}$'s on the knot such that each $S^{0}$ is preserved by $\mu$. We say that $\mu$ and $S$ are \textit{unlinked} if these $S^{0}$'s are unlinked on $K$. \end{defn} Being unlinked allows one to tube together the boundary components of a Conway sphere that are interchanged by $\mu$ to create a closed surface of genus two, which we call $S'$. $S'$ is also a hyperelliptic surface, and its involution is the same as the involution $\mu$ of our Conway sphere. Dehn surgeries on $\mathbb{S}^{3} \setminus K$ and its mutant $\mathbb{S}^{3} \setminus K^{\mu}$ differ by mutating along this closed surface. Thus, Ruberman's result for preserving volume will apply to these closed manifolds. Combining our work in \cite{Mi} with Theorem \ref{cor:syspreserved2} gives the following.
\begin{thm} \label{thm:closedmanifolds} For each $n \in \mathbb{N}$, $n \geq 2$, and any $(p,q)$ sufficiently large, there exist $\frac{(2n-1)!}{2}$ non-isometric closed hyperbolic $3$-manifolds $\left\{M_{2n+1}^{\sigma}(p,q)\right\}$ such that these manifolds: \begin{itemize} \item have the same $2n+2$ shortest (complex) geodesic lengths, \item have the same volumes, and \item $vol(M^{\sigma}_{2n+1}(p,q)) < (4n+2)v_{oct}$. \end{itemize} \end{thm} \begin{proof} In \cite{Mi}, we constructed our $K_{2n+1}$ so that all Conway spheres in $\left\{ (S_{a}, \sigma_{a}) \right\}_{a=1}^{2n}$ are unlinked. However, here we have slightly modified this construction of each $K_{2n+1}$. Specifically, we now have one twist region with an even number of twists in $K_{2n+1}$. As a result, $(S_{1}, \sigma_{1})$ is not unlinked. Thus, we will only mutate along the other Conway spheres: $\left\{ (S_{a}, \sigma_{a}) \right\}_{a=2}^{2n}$. These combinations of mutations create $\frac{(2n)!}{2(2n)} = \frac{(2n-1)!}{2}$ non-isometric, hyperbolic pretzel knots; see \cite[Theorem $2$]{Mi} for more details. Let $\sigma$ and $\sigma'$ be any combination of mutations along our unlinked Conway spheres resulting in non-isometric knot complements. Now, $M_{2n+1}^{\sigma}(p,q)$ and $M_{2n+1}^{\sigma'}(p,q)$ have the same volume by Ruberman's work. In \cite[Theorem $3$]{Mi}, we show that $M_{2n+1}^{\sigma}(p,q)$ and $M_{2n+1}^{\sigma'}(p,q)$ are non-isometric by choosing $(p,q)$ sufficiently large so that the core geodesics resulting from this Dehn filling are the systoles of their respective manifolds. This comes from the work of Neumann--Zagier \cite{NZ}. So, for $(p,q)$ sufficiently large, any $M_{2n+1}^{\sigma}(p,q)$ will have at least $2n+2$ closed geodesics shorter than a constant $L< 0.015$. $2n+1$ of these geodesics come from Dehn filling our crossing circles of $L_{2n+1}$, and the systole comes from the subsequent Dehn filling of the knot component.
We can apply Corollary \ref{cor:syspreservednl} to these closed manifolds to show that they have the same $2n+2$ shortest geodesic lengths. The upper bound on volume follows from the proof of \cite[Theorem $3$]{Mi}. \end{proof} \subsection{Closing Remarks} \label{subsec:CR} The fact that the manifolds $\left\{M_{2n+1}^{\sigma}\right\}$ are constructed by mutating knot complements that are pairwise incommensurable sharply contrasts any of the known constructions for building large classes of hyperbolic $3$-manifolds that are iso-length spectral. However, we only know that our mutant knot complements have the same initial length spectra. Based on experimental evidence from SnapPy, the author doubts that any of these manifolds actually are iso-length spectral. It would be interesting to know if this mutation process could be used to produce iso-length spectral hyperbolic $3$-manifolds that are incommensurable. In addition, there is a general recipe for our type of construction and we did not necessarily need to use pretzel knots. In order to construct a large number of non-isometric hyperbolic manifolds with the same volume and the same initial length spectrum, you need the following key ingredients. \begin{itemize} \item An initial hyperbolic $3$-manifold $M$ with: \begin{itemize} \item a large number of hyperelliptic surfaces in $M$ to mutate along to create the set of manifolds $\left\{M^{\sigma}\right\}$, and \item a way to determine your shortest geodesics in $M$ and make sure they are sufficiently short, i.e., realize them as the cores of sufficiently long Dehn fillings. \end{itemize} \item A simple method to determine how much double counting you are doing, i.e., a method to determine if any $M^{\sigma}$ and $M^{\sigma'}$ are isometric or not. \end{itemize} Given this recipe, you want to maximize the number of hyperelliptic surfaces in $M$ to mutate along and maximize the number of sufficiently short geodesics, while minimizing the double counting. 
It would be interesting to examine how well we did with maximizing and minimizing these parameters. Such an examination leads us to consider the function $N(v,s)$, which counts the number of hyperbolic $3$-manifolds with the same volume $v$ and the same $s$ shortest geodesic lengths. We can also consider the restriction of this counting function to specific classes of hyperbolic $3$-manifolds. Let $N_{K}(v,s)$ denote the restriction of $N(v,s)$ to hyperbolic knot complements and $N_{Cl}(v,s)$ denote the restriction of $N(v,s)$ to closed hyperbolic $3$-manifolds. An immediate corollary of Theorem \ref{thm:similarpretzels} and Theorem \ref{thm:closedmanifolds} gives the following lower bounds on the growth rates of $N_{K}(v,s)$ and $N_{Cl}(v,s)$ as functions of $v$. The proof of this corollary is the same as the proof of \cite[Theorem $1$]{Mi}, except we can now take the short geodesic lengths into account. \begin{cor} \label{cor:growth} There are sequences $\left\{(v_{n}, s_{n})\right\}$ and $\left\{(x_{n}, t_{n})\right\}$ with $(v_{n}, s_{n}),(x_{n}, t_{n}) \rightarrow (\infty, \infty)$ such that \begin{center} $N_{K}(v_{n}, s_{n}) \geq (v_{n})^{(\frac{v_{n}}{8})}$ and $N_{Cl}(x_{n}, t_{n}) \geq \left(x_{n} \right)^{\left(\frac{x_{n}}{8} \right)}$ \end{center} for all $n \gg 0$. \end{cor} This corollary tells us that the counting function $N(v,s)$ grows at least factorially fast with $v$, and immediately raises some questions. \begin{question} Can a Sunada-type construction or an arithmetic method be applied to also show that $N(v,s)$ grows at least factorially fast with $v$? Also, are there sequences $\left\{(v_{n}, s_{n})\right\}$ with $v_{n} \rightarrow \infty$ such that $N(v_{n}, s_{n})$ grows faster than factorially with $v_{n}$? \end{question} It would be interesting to find a construction realizing a growth rate faster than the one given in Corollary \ref{cor:growth} or to show that a factorial growth rate is actually the best we can do.
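For concreteness, the quantities appearing in this section are easy to tabulate. The following sketch (illustrative only) evaluates the twisting threshold $Q(n)$ from Theorem \ref{thm:sys_preserved} together with the counts $\frac{(2n)!}{2}$ and $\frac{(2n-1)!}{2}$ of non-isometric manifolds from Theorems \ref{thm:similarpretzels} and \ref{thm:closedmanifolds} for small $n$.

```python
import math

def Q(n):
    # Twisting threshold from Theorem thm:sys_preserved:
    # Q(n) = sqrt((20.76)^2 * (2n+1)(4n)/(2n-1) - 1)
    return math.sqrt((20.76 ** 2) * (2 * n + 1) * (4 * n) / (2 * n - 1) - 1)

for n in range(2, 5):
    knots = math.factorial(2 * n) // 2        # pairwise incommensurable knot complements
    closed = math.factorial(2 * n - 1) // 2   # closed manifolds after (p, q) Dehn filling
    print(f"n={n}: Q(n) ~ {Q(n):.1f}, {knots} knot complements, {closed} closed manifolds")
```

Already for $n = 2$ the threshold is $Q(2) \approx 75.8$, so "sufficiently large" twisting is modest, while the number of knot complements grows factorially with $n$.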
\bibliographystyle{hamsplain}
\section{Introduction} Interactions of electrons with ions and atoms are atomic processes of fundamental nature. Electron-impact ionization plays a significant role not only in many fields of physics, but also in other sciences. Single ionization is usually the strongest among the various ionization processes, but multiple ionization is important in environments with an abundance of energetic electrons. Compared with other multiple ionization processes, double ionization (DI) has the largest impact on the ionization state distribution. Direct and indirect processes are responsible for the formation of the charge state of the resulting ion with two removed electrons. The indirect process is determined by ionization-autoionization (IA), while the direct process occurs due to the simultaneous ionization of two electrons in the target ion. From the theoretical point of view, the many-body problem with a few outgoing electrons in the vacuum has to be solved in the latter case. DI has been widely studied for the light elements theoretically and experimentally \cite{1991sscp_54_13_muller, 2003pr_374_91_berakdar, 2008aamop_55_293_muller, 2012epjd_66_1_colgan}. Many developed theoretical methods \cite{1973jpb_6_270_tweed, 1999jpb_32_5047_kheifets, 2000jpb_33_4323_defrance, 2004pra_70_032705_pindzola} show a good agreement with experimental measurements for two-electron systems. The time-dependent close-coupling approach has been used to analyze more complex systems with more than two electrons \cite{2009jpb_42_215204_pindzola, 2010jpb_43_105204_pindzola, 2011jpb_44_105202_pindzola}. However, those calculations are cumbersome even for light atoms. From the perspective of the classical approach, the mechanisms of DI were identified by Gryzinski \cite{1965pr_138_A336_gryzinski}. Unfortunately, this approach, for one reason or another, failed to provide a good agreement with measurements in most cases.
The main aim of our paper is to show that direct double ionization (DDI) can be investigated as a sequence of a few processes which take place in the atomic system. To demonstrate the possibilities of our approach, we have performed calculations of DI cross-sections for the light ions: $O^{1+}$, $O^{2+}$, $O^{3+}$, $C^{1+}$, and $Ar^{2+}$. In addition, DI cross-sections have been studied in $W^{5+}$ and $W^{25+}$ ions using the Unresolved Transition Array (UTA) approach. First of all, we consider an ensemble of ions or atoms which undergo collisions with electrons of energy $\varepsilon_0$. Some of the ions of the ensemble are excited to higher levels while the others reach the next ionization stage after electron impact. After the first collision with electrons, the populations of ions in various levels are different because the cross-sections of the electron-impact excitation and ionization processes to these levels are also different. We assume that after the first ionization process from $i$ level to $j$ level with cross-section $\sigma^{CI}_{ij}(\varepsilon_{0})$ the population of the final level is $p_{j}$. An additional electron can be removed by scattered or ejected electrons. The probability of removing the additional electron from the $nl$ shell, when the atomic system undergoes a transition from $j$ level to $f$ level, can be expressed by $ \sigma^{CI}_{jf} (\varepsilon_{1})/(4 \pi \bar{r}^{2}_{nl}) $ \cite{1965pr_138_A336_gryzinski}. Here, $\varepsilon_{1}$ is the energy of the scattered or ejected electron, which removes the additional electron, and $\bar{r}_{nl}$ is the average distance among the electrons in the $nl$ shell. Assuming that the density of electrons in the shell is uniform, we can write $\bar{R}_{nl} \approx \bar{r}_{nl} N_{nl}^{1/3}$, where $\bar{R}_{nl}$ is the mean distance of the electrons from the nucleus and $N_{nl}$ is the number of electrons in the $nl$ shell.
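Substituting $\bar{r}_{nl} \approx \bar{R}_{nl}/N_{nl}^{1/3}$ into $\sigma^{CI}/(4 \pi \bar{r}^{2}_{nl})$ gives the geometric factor $\sigma^{CI} N_{nl}^{2/3}/(4 \pi \bar{R}^{2}_{nl})$ used below. A minimal sketch of this removal probability follows; the numerical inputs are purely illustrative (hypothetical), not values used in our calculations.

```python
import math

def removal_probability(sigma_ci_cm2, R_nl_cm, N_nl):
    """Probability that a scattered/ejected electron removes a further
    electron from the nl shell: sigma * N^(2/3) / (4*pi*R^2), obtained by
    substituting r_bar ~ R_nl / N_nl^(1/3) for a uniform electron density."""
    return sigma_ci_cm2 * N_nl ** (2.0 / 3.0) / (4.0 * math.pi * R_nl_cm ** 2)

# Illustrative (hypothetical) numbers only: a 1e-17 cm^2 ionization
# cross-section, a shell radius of 0.5e-8 cm, and 4 equivalent electrons.
p = removal_probability(1e-17, 0.5e-8, 4)
print(f"removal probability ~ {p:.3e}")
```

Note that the factor is dimensionless, since the cross-section (cm$^2$) is divided by an area (cm$^2$), as a probability must be.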
Thus, the equation for DDI from $i$ level to $f$ level through the ionization-ionization (II) path can be written as \begin{equation} \sigma^{DDI-II}_{if}(\varepsilon_{0}) = \sum_{j} \sigma^{CI}_{ij} (\varepsilon_{0}) p_{j} \frac{ \sigma^{CI}_{jf} (\varepsilon_{1}) (N_{nl})^{2/3}}{4 \pi \bar{R}^{2}_{nl}} . \label{II} \end{equation} Additional significant paths of DDI go through excitation-ionization-ionization (EII) and ionization-excitation-ionization (IEI) processes. For the EII process, the DDI cross-section can be expressed by the equation: \[ \sigma^{DDI-EII}_{if}(\varepsilon_{0}) = \sum_{kj} \sigma^{CE}_{ik} (\varepsilon_{0}) p_{k} \] \begin{equation} \times \frac{ \sigma^{CI}_{kj}(\varepsilon_{1}) (N_{nl})^{2/3}}{4 \pi \bar{R}^{2}_{nl}} \frac{ \sigma^{CI}_{jf}(\varepsilon_{2}) (N_{n'l'})^{2/3}}{4 \pi \bar{R}^{2}_{n'l'}} . \label{EII} \end{equation} Here, $p_{k}$ stands for the population of the excited $k$ state of the initial ion, $\varepsilon_{1} = \varepsilon_{0} - \Delta E_{ik}$, where $\Delta E_{ik}$ is the transition energy, and $\varepsilon_{2}$ is the energy of the scattered or ejected electron. In a similar way, the DDI cross-section for the IEI process can be expressed by the equation: \[ \sigma^{DDI-IEI}_{if}(\varepsilon_{0}) = \sum_{kj} \sigma^{CI}_{ik} (\varepsilon_{0}) p_{k} \] \begin{equation} \times \frac{ \sigma^{CE}_{kj}(\varepsilon_{1}) (N_{nl})^{2/3}}{4 \pi \bar{R}^{2}_{nl}} \frac{ \sigma^{CI}_{jf}(\varepsilon_{2}) (N_{n'l'})^{2/3}}{4 \pi \bar{R}^{2}_{n'l'}} . \label{IEI} \end{equation} It should be noted that the main difference between our method and the approach proposed by Gryzinski \cite{1965pr_138_A336_gryzinski} is the population of levels included in Eqs. (\ref{II}), (\ref{EII}), and (\ref{IEI}). Moreover, additional processes not previously described, such as EII and IEI, are also determined.
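Computationally, Eq. (\ref{II}) is a population-weighted sum over intermediate levels of products of single-ionization cross-sections with the geometric factor $N_{nl}^{2/3}/(4 \pi \bar{R}^{2}_{nl})$. The following sketch illustrates this combination; the cross-sections and shell parameters are hypothetical placeholders, whereas in practice level-resolved distorted-wave data would be supplied.

```python
import math

def ddi_ii(sigma_ij, populations, sigma_jf, N_nl, R_nl):
    """Direct double ionization via the ionization-ionization path, Eq. (II):
    sum over intermediate levels j of the first-step ionization cross-section,
    weighted by the level population p_j and by the probability that the
    scattered or ejected electron removes a second nl-shell electron."""
    geom = N_nl ** (2.0 / 3.0) / (4.0 * math.pi * R_nl ** 2)
    return sum(s1 * p * s2 * geom
               for s1, p, s2 in zip(sigma_ij, populations, sigma_jf))

# Hypothetical inputs (cm^2 and cm) for two intermediate levels j.
sigma_ij = [2.0e-17, 1.0e-17]   # ionization i -> j at energy e0
pops     = [0.7, 0.3]           # populations p_j of the intermediate levels
sigma_jf = [0.8e-17, 0.5e-17]   # ionization j -> f at energy e1
print(f"sigma_DDI-II ~ {ddi_ii(sigma_ij, pops, sigma_jf, 4, 0.5e-8):.3e} cm^2")
```

Eqs. (\ref{EII}) and (\ref{IEI}) would extend this to a double sum over $k$ and $j$ with one excitation and two ionization (or two geometric) factors.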
Flexible Atomic Code (FAC) \cite{2008cjp_86_675_Gu} is employed to obtain autoionization transition probabilities as well as electron-impact excitation and single-ionization cross-sections in the distorted wave approximation. The largest uncertainties in the calculation of DDI cross-sections using Eqs. (\ref{II}), (\ref{EII}), and (\ref{IEI}) come from the calculation of ionization cross-sections within the distorted wave approximation, since it is not clear which mean configuration, i.e. that of the ionizing or of the ionized ion, has to be applied \cite{2008cjp_86_675_Gu}. Furthermore, previous studies demonstrate that the incident and scattered electron continuum orbitals can be evaluated in the potential of the ionizing \cite{1981pra_24_1278_younger} or the ionized \cite{1991jpb_24_l405_botero} ion. Therefore, we have checked which approach gives a better agreement with experimental measurements for single and double ionization cross-sections and retained only those calculations. \begin{figure} \includegraphics[scale=0.3]{o1o3dw5lev5.eps}% \caption{\label{o1} Electron-impact DI cross-sections for $O^{1+}$. $\mathrm{DDI}^{1}$ stands for DDI cross-sections when scattered and ejected electrons share the excess energy, $\mathrm{DDI}^{2}$ - one of the electrons takes all the available energy after ionization, $\mathrm{DI}$ - sum of $\mathrm{DDI}^{1}$ and $\mathrm{IA}$ parts. See explanations in the text for the other processes. Experiment: solid circles \cite{1999ps_1999_285_westermann}, open circles \cite{1994jpb_27_2383_zambra}.} \end{figure} The theoretical electron-impact DI cross-sections along with the experimental values for the $O^{1+}$ ion are displayed in Fig. \ref{o1}. The calculated DDI cross-sections correspond to two cases of energy distribution between the scattered and ejected electrons. In one case, the excess energy is taken by one of the electrons participating in the collision. Only this electron participates in the further process which results in DI.
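The two limiting assumptions for the post-collision energy distribution ($\mathrm{DDI}^{1}$: the electrons share the excess energy equally; $\mathrm{DDI}^{2}$: one electron takes it all) can be summarized in a small sketch; the numerical values below are illustrative only:

```python
# The two limiting energy distributions after an ionizing collision with
# incident energy eps0 and ionization threshold I (illustrative values only):
#   "shared"        - scattered and ejected electrons split the excess equally,
#   "one_takes_all" - a single electron carries the whole excess energy.
def secondary_energies(eps0, I):
    excess = eps0 - I
    return {"shared": excess / 2.0, "one_takes_all": excess}

energies = secondary_energies(eps0=100.0, I=30.0)
```

The energy entering $\sigma^{CI}_{jf}(\varepsilon_{1})$ in the second step is then either the shared or the full excess energy, which is what distinguishes the $\mathrm{DDI}^{1}$ and $\mathrm{DDI}^{2}$ curves in the figures.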
In the other case, the ejected and scattered electrons share the excess energy. Further collisions of one of the two available electrons with any target electrons can lead to the removal of the additional electron from the system. The theoretical cross-sections for the case when the electrons share the excess energy agree quite well with the experimental values \cite{1999ps_1999_285_westermann} obtained with the Giessen electron-ion crossed-beam set-up \cite{1989jpb_22_1241_tinschert}. The later measurements by Zambra {\it et al.} \cite{1994jpb_27_2383_zambra} show about 20 \% smaller values than \cite{1999ps_1999_285_westermann}. This can be explained by different metastable fractions of the $O^{1+}$ ion being present in the ion beams. Our data correspond to ionization from the ground level. Calculations show that contributions from the higher levels of the $2s^{2} 2p^{3}$ configuration can either decrease or increase the total cross-section. It is worth noting that in the high-energy limit the electrons tend to share the excess energy equally after the collision. On the other hand, at lower energies of the incident electron one of the electrons acquires a large part of the excess energy. The differences between the theoretical and experimental data could therefore be removed by analyzing the electron energy distribution after impact ionization. As can be seen from Fig. \ref{o1}, the contribution from the II process dominates over the contributions from the IEI and EII processes. The cross-sections from the IEI process are about 50 \% larger than those from the EII process. This can be explained by the fact that the initial ionization process is relatively stronger than excitation. Calculations show that the relative population of the $2s^{2}2p^{2}$ configuration is equal to 30 \% and the population of the $2s^{1}2p^{3}$ configuration amounts to 10 \% at an electron energy of 300 eV.
Electron-impact excitation gives a relative population of 33 \% for the $2s^{1}2p^{4}$ configuration at the same electron energy. The cross-sections show a well-distinguished two-maxima structure in the case of the $O^{2+}$ ion (Fig. \ref{o2}). The theoretical cross-sections correspond to ionization from the excited $2p_{0.5}2p_{1.5} (J=2)$ level of the ion. DDI cross-sections from the other levels of the ground configuration give smaller or larger values. On the other hand, the indirect part is not strongly affected by the choice of level used to calculate the cross-sections. At low incident electron energies, a better agreement with the experimental cross-sections is obtained if, after the ionizing impact, one of the electrons takes all the excess energy. Again, the largest contribution to DDI comes from the II process. However, the contribution of the IEI process is relatively larger compared to the $O^{1+}$ case. The analysis of the populations of configurations after the first collision, which leads to excitation or ionization, shows that the largest flux goes to the excited configurations of the initial ion. There are some differences between the theoretical and experimental data at electron energies where the direct and indirect processes start to overlap. This disagreement can be explained by an additional contribution from the excited $2s2p^{3}$ configuration. Our calculations show that the cross-section maximum of the direct part for this configuration reaches $3\cdot 10^{-19}$ cm$^{2}$ at 360 eV electron energy. \begin{figure} \includegraphics[scale=0.3]{o2o4dw5lev8.eps}% \caption{\label{o2} Same as Fig. \ref{o1} but for $O^{2+}$. } \end{figure} For the $O^{3+}$ ion, the indirect part of the DI cross-section dominates over the direct part (Fig. \ref{o3}). The calculations correspond to ionization from the lowest level of the first excited $2s^{1}2p^{1}$ configuration. The contribution of the ground configuration to the direct part is about 40 \% smaller.
At low impact energies, a better agreement of the theoretical values with the experimental ones is obtained when one of the electrons takes all the available energy. The difference between the theoretical and experimental cross-sections can be related to a different electron energy distribution after the collision. \begin{figure} \includegraphics[scale=0.3]{o3o5dw52s12p2lev7.eps}% \caption{\label{o3} Same as Fig. \ref{o1} but for $O^{3+}$. } \end{figure} In the case of the $C^{1+}$ ion, a two-maxima structure is also seen in the DI cross-sections (Fig. \ref{c1}). We present cross-sections for the lowest level of the first excited $2s^{1}2p^{2}$ configuration. The cross-sections from the ground level have slightly lower values compared with ionization from the level of the excited configuration. On the other hand, the cross-sections from the other two long-lived levels of the $2s^{1}2p^{2}$ configuration are higher or lower than those from the lowest level of the configuration. \begin{figure} \includegraphics[scale=0.3]{c1c3dw4lev62s12p2lev7.eps}% \caption{\label{c1} Same as Fig. \ref{o1} but for $C^{1+}$. } \end{figure} The ground configuration of the $Ar^{2+}$ ion has filled shells up to $3s$, with the valence electrons in the $3p$ shell. The calculated electron-impact DI cross-sections from the excited $3p_{1.5}^2 (J=0)$ level of the ground configuration are displayed in Fig. \ref{ar2} along with the experimental values \cite{1989jpb_22_1241_tinschert}. A better agreement with the experiment at lower electron energies is obtained if one assumes that one of the electrons takes all the available energy after ionization. Sharing of the excess energy starts to dominate near the peak of the DDI cross-section. However, many long-lived levels of the first excited $3p^{3}3d$ configuration have to be studied to find out the metastable fraction in the ion source. Encouraged by the obtained results, we also investigated DI cross-sections for the $W^{5+}$ and $W^{25+}$ ions using the UTA approach.
Comparison with the experimental data for the $W^{5+}$ ion shows the same tendencies in the distribution of electron energies as for the other studied ions: at low energies, one of the electrons takes the main part of the available collision energy. However, a large number of long-lived levels of the $4f^{13}5d^{2}$, $5p^{5}5d^{2}$, and $6s$ configurations have to be analyzed in order to estimate their contribution to the DI cross-sections. The DDI cross-sections for the $W^{25+}$ ion are two orders of magnitude lower than the contribution from the IA process. This confirms that the influence of DDI is very small for highly charged ions. \begin{figure} \includegraphics[scale=0.3]{ar2ar4nelexydw5lev4.eps}% \caption{\label{ar2} Same as Fig. \ref{o1} but for $Ar^{2+}$. Solid circles - experiment \cite{1989jpb_22_1241_tinschert}.} \end{figure} To conclude, we have developed a method that treats the electron-impact DDI process as a sequence of two- and three-step processes. The excitation and ionization processes following the collision of the incident electron with the target are studied to obtain the level populations for the subsequent steps. We have demonstrated that the method can easily be applied to complex ions. Much work still needs to be done in analyzing the distribution of electron energies after the first ionization process. This research was funded by the European Social Fund under the Global Grant Measure (No.: VP1-3.1-\v{S}MM-07-K-02-015). Part of the computations were performed on resources at the High Performance Computing Center ``HPC Sauletekis'' at the Vilnius University Faculty of Physics.
2,877,628,090,917
arxiv
\section{Introduction} In the past few years modeling crowd behavior has become a very active field of applied mathematics. Beyond their importance in real life applications, these modeling problems serve as basic ideas to understand many other phenomena coming for example from biology (cell migration, tumor growth, pattern formations in animal populations, etc.), particle physics and economics. A first non-exhaustive list of references for these problems is \cite{Cha1, Col, Cos, CriPicTos, Dog, Helb1, Helb3, Hug1, Hug2, crowd1}. A very natural question in all these models is the problem of congestion phenomenon: in many practical situations, very high quantities of individuals could try to occupy the same spot, which could be impossible, or lead to strong negative effects on the motion, because of natural limitations on the crowd density. These phenomena have been studied by using different models, which could be either ``microscopic'' (based on ODEs on the motion of a high number of agents) or ``macroscopic'' (describing the agents via their density and velocity, typically with Eulerian formalism). Let us concentrate on the macroscopic models, where the density $\rho$ plays a crucial role. These very same models can be characterized either by ``soft congestion'' effects (i.e. the higher the density the slower the motion), or by ``hard congestion'' (i.e. an abrupt threshold effect: if the density touches a certain maximal value, the motion is strongly affected, while nothing happens for smaller values of the density). See \cite{MauRouSan1} for comparison between the different classes of models. This last class of models, due to the discontinuity in the congestion effects, presents new mathematical difficulties, which cannot be analyzed with the usual techniques from conservation laws (or, more generally, evolution PDEs) used for soft congestion. 
A very powerful tool to attack macroscopic hard-congestion problems is the theory of optimal transportation (see \cite{villani,OTAM}), as we can see in \cite{MauRouSan2, MauRouSan1, aude_phd, xedp}. In this framework, the density of the agents solves a continuity equation (with velocity field taking into account the congestion effects), and can be seen as a curve in the Wasserstein space. Our aim in this paper is to endow the macroscopic hard congestion models of \cite{MauRouSan2, MauRouSan1, aude_phd, xedp} with diffusion effects. In other words, we will study an evolution equation where particles \begin{itemize} \item have a spontaneous velocity field $u_t(x)$ which depends on time and on their position, and is the velocity they would follow in the absence of the other particles, \item must adapt their velocity to the existence of an incompressibility constraint which prevents the density from exceeding a given threshold, \item are subject to some diffusion effect. \end{itemize} This can be considered as a model for a crowd where a part of the motion of each agent is driven by a Brownian motion. Implementing this new element into the existing models could give a better approximation of reality: as usual when one adds a stochastic component, this can be a (very) rough approximation of unpredictable effects which are not already handled by the model, and this could work well when dealing with large populations. Anyway, we do not want to discuss here the validity of this hard-congestion model and we are mainly concerned with its mathematical analysis. In particular, we will consider existence and regularity estimates, while we do not treat the uniqueness issue. Uniqueness is considered in a recent work of the first author in collaboration with S. Di Marino, see \cite{DiMMes}, and one can observe that the insertion of diffusion dramatically simplifies the picture as far as uniqueness is concerned.
We also underline that one of the goals of the current paper (and of \cite{DiMMes}) is to better ``prepare'' these hard congestion crowd motion models for a possible analysis in the framework of Mean Field Games (see \cite{lasry1, lasry2, lasry3}, and also \cite{modest}). These MFG models usually involve a stochastic term, also implying regularizing effects, that are useful in the mathematical analysis of the corresponding PDEs. \subsection{The existing first order models in the light of \cite{MauRouSan2,MauRouSan1}} Some macroscopic models for crowd motion with density constraints and ``hard congestion'' effects were studied in \cite{MauRouSan1} and \cite{MauRouSan2}. We briefly present them as follows: \begin{itemize} \item The density of the population in a bounded (convex) domain $\O\subset\mathbb{R}^d$ is described by a probability measure $\rho\in\P(\O).$ The initial density $\rho_0\in{\mathcal P}(\Omega)$ evolves in time, and $\rho_t$ denotes its value at each time $t\in[0,T]$. \item The spontaneous velocity field of the population is a given time-dependent field, denoted by $u_t.$ It represents the velocity that each individual would like to follow in the absence of the others. Ignoring the density constraint, this would give rise to the continuity equation $\partial_t\rho_t+\nabla\cdot\left(\rho_t u_t\right)=0$. We observe that in the original work \cite{MauRouSan2} the vector field $u_t(x)$ was taken of the form $-\nabla D(x)$ (independent of time and of gradient form) but we try here to be more general (see \cite{aude_phd} where the non-gradient case is studied under some stronger regularity assumptions). \item The set of admissible densities will be denoted by ${\mathcal K}:=\{\rho\in\P(\O):\rho\le1\}.$ In order to guarantee that ${\mathcal K}$ is neither empty nor trivial, we suppose $|\O|>1$. 
\item The set of admissible velocity fields with respect to the density $\rho$ is characterized by the sign of the divergence of the velocity field on the saturated zone. We need to suppose also that all admissible velocity fields are such that no mass exits the domain. So formally we set $$\rm{adm}(\rho):=\left\{v:\O\to\mathbb{R}^d:\nabla\cdot v\ge 0\ {\rm{on}}\ \{\rho=1\}\ {\rm{and}}\ v\cdot n\le 0\ {\rm{on}}\ \partial\Omega\right\}.$$ \item We consider the projection operator $P$ in $L^2(\mathcal{L}^d)$: $$\displaystyle P_{\rm{adm}(\rho)}[u]\in{\rm{argmin}}_{v\in\rm{adm}(\rho)}\int_\Omega|u-v|^2\,{\rm d} x.$$ Note that we could have used the Hilbert space $L^2(\rho)$ instead of $L^2(\mathcal{L}^d)$: this would be more natural in this kind of evolution equations, as $L^2(\rho)$ is interpreted in a standard way as the tangent space to the Wasserstein space $\mathcal{W}_2(\Omega)$. Yet, these two projections turn out to be the same in this case, as the only relevant zone is $\{\rho=1\}$. This is just formal, and would require more rigorous definitions (in particular of the divergence constraint in $\rm{adm}(\rho)$, see below). Anyway, to clarify, we choose to use the $L^2(\mathcal{L}^d)$-projection: in this way the vector fields are considered as defined Lebesgue-a.e. on the whole $\Omega$ (and not only on $\{\rho>0\}$) and the dependence of the projected vector field on $\rho$ only passes through the set $\rm{adm}(\rho)$. \item Finally we solve the following modified continuity equation for $\rho$ \begin{equation}\label{continuity} \partial_t\rho_t+\nabla\cdot\left(\rho_t P_{\rm{adm}(\rho_t)}[u_t]\right)=0, \end{equation} where the main point is that $\rho$ is advected by a vector field, compatible with the constraints, which is the closest to the spontaneous one.
\end{itemize} The problem in solving Equation \eqref{continuity} is that the projected field has very low regularity: it is a priori only $L^2$ in $x$, and it does not depend smoothly on $\rho$ either (since a density $1$ and a density $1-\varepsilon$ give very different projection operators). By the way, its divergence is not well-defined either. To handle this issue we need to redefine the set of admissible velocities by duality. Taking a test function $p\in H^1(\Omega),\ p\ge 0\ {\rm{a.e.}}$, we obtain by the integration-by-parts equality $$\int_\Omega v\cdot\nabla p\,{\rm d} x=- \int_{\Omega}(\nabla\cdot v) p\,{\rm d} x+\int_{\partial\Omega}p v\cdot n\,{\rm d}{\mathcal H}^{d-1}(x).$$ For vector fields $v$ which do not let mass go through the boundary $\partial\Omega$ we have (in an a.e. sense) $v\cdot n=0$. This leads to the following definition $$\rm{adm}(\rho)=\left\{v\in L^2(\Omega;\mathbb{R}^d):\int_\O v\cdot\nabla p\ \,{\rm d} x\le 0,\ \forall p\in H^1(\O),p\ge0, p(1-\rho)=0\ {\rm{a.e.}}\right\},$$ (indeed, for smooth vector field with vanishing normal component on the boundary, this is equivalent to imposing $\nabla\cdot v\geq 0$ on the set $\{\rho=1\}$). Now, if we set $$\rm{press}(\rho):=\left\{p\in H^1(\Omega):p\ge0,\ p(1-\rho)=0\ {\rm{a.e.}}\right\},$$ we observe that, by definition, $\rm{adm}(\rho)$ and $\nabla \rm{press}(\rho)$ are two convex cones which are dual to each other in $L^2(\Omega;\mathbb{R}^d)$. Hence we always have a unique orthogonal decomposition \begin{equation}\label{decomposition} u=v+\nabla p,\quad v\in \rm{adm}(\rho);\;p\in \rm{press}(\rho),\quad \int_\O v\cdot\nabla p\,{\rm d} x=0. \end{equation} In this decomposition (as it is the case every time we decompose on two dual convex cones), $v=P_{\rm{adm}(\rho)}[u]$. These will be our mathematical definitions for $\rm{adm}(\rho)$ and for the projection onto this cone. 
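The decomposition \eqref{decomposition} is an instance of Moreau's decomposition along two mutually polar convex cones. A finite-dimensional toy illustration in Python, with the nonnegative orthant standing in for $\rm{adm}(\rho)$ (an assumption made purely for illustration; it is not the actual cone of admissible velocities):

```python
# Moreau decomposition along two mutually polar convex cones in R^n, with the
# nonnegative orthant C as a toy stand-in for adm(rho) (NOT the actual cone):
# every u splits uniquely as u = P_C(u) + P_{C^o}(u) with orthogonal parts,
# mirroring u = v + grad p with v in adm(rho) and grad p in grad(press(rho)).
def decompose(u):
    v = [max(x, 0.0) for x in u]    # projection onto C
    w = [min(x, 0.0) for x in u]    # projection onto the polar cone C^o
    return v, w

u = [1.5, -0.7, 0.0, -2.0, 3.0]
v, w = decompose(u)
ok_sum = all(abs(a - (b + c)) < 1e-12 for a, b, c in zip(u, v, w))
ok_orth = abs(sum(b * c for b, c in zip(v, w))) < 1e-12
```

Each coordinate lies in exactly one of the two parts, which makes the decomposition orthogonal, in analogy with the condition $\int_\O v\cdot\nabla p\,{\rm d} x=0$ above.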
Via this approach (introducing the new variable $p$ and using its characterization from the previous line), for a given desired velocity field $u:[0,T]\times\Omega\to\mathbb{R}^d,$ the continuity equation \eqref{continuity} can be rewritten as a system for the pair of variables $(\rho,p)$ which is \begin{equation}\label{syst with press} \left\{ \begin{array}{ll} \partial_t\rho_t+\nabla\cdot\left(\rho_t(u_t-\nabla p_t)\right)=0, &{\rm{in}}\;\; [0,T]\times\Omega,\\ p\ge0,\ \rho\leq 1,\ p(1-\rho)=0,&{\rm{in}}\;\;[0,T]\times\Omega,\\ \rho_t(u_t-\nabla p_t)\cdot n=0,&{\rm{on}}\;\;[0,T]\times\partial\Omega. \end{array} \right. \end{equation} This system is endowed with the initial condition $\rho(0,x)=\rho_0(x)$ ($\rho_0\in {\mathcal K}$). As far as the spatial boundary $\partial\Omega$ is concerned, we put no-flux boundary conditions to preserve the mass in $\Omega$. Note that in the above system we dropped the condition $ \int (u_t-\nabla p_t)\cdot\nabla p_t=0$, as it is a consequence of the system \eqref{syst with press} itself. Informally, this can be seen in the following way: for an arbitrary $p_0\in\rm{press}(\rho_{t_0})$, we have that $t\mapsto \int_\O p_0\rho_t$ is maximal at $t=t_0$ (where it is equal to $\int_\O p_0$). Differentiating this quantity w.r.t. $t$ at $t=t_0$, using the equation \eqref{syst with press}, we get the desired orthogonality condition at $t=t_0$. For a rigorous proof of this fact (which holds for a.e. $t_0$), we refer to Proposition 4.7 in \cite{DMS}.
With the ingredients that we introduced so far, we will modify the Fokker-Planck equation $\partial_t\rho_t-\Delta\rho_t+\nabla\cdot\left(\rho_t u_t\right)=0$ in order to take into account the density constraint $\rho_t\leq 1$. Assuming enough regularity for the velocity field $u$, we observe that the Fokker-Planck equation is derived from a motion given by the SODE $\,{\rm d} X_t=u_t(X_t)\,{\rm d} t+\sqrt{2}\,{\rm d} B_t$ (where $B_t$ is the standard $d$-dimensional Brownian motion), but is macroscopically represented by the advection of the density $\rho_t$ by the vector field $-\nabla \rho_t/\rho_t+u_t$. Projecting onto the set of admissible velocities raises a natural question: should we project only $u_t$, and {\it then} apply the diffusion, or project the whole vector field, including $-\nabla \rho_t/\rho_t$? But this is not a real issue, since, at least formally, $\nabla\rho_t/\rho_t=0$ on the saturated set $\{\rho_t=1\}$ and $P_{\rm{adm}(\rho_t)}[-\nabla\rho_t/\rho_t+u_t]=P_{\rm{adm}(\rho_t)}[-\nabla\rho_t/\rho_t]+P_{\rm{adm}(\rho_t)}[u_t]=0+P_{\rm{adm}(\rho_t)}[u_t]$. Rigorously, this corresponds to the fact that the heat kernel preserves the constraint $\rho\leq 1$. As a consequence, we consider the modified Fokker-Planck type equation \begin{equation}\label{fokker} \partial_t\rho_t-\Delta\rho_t+\nabla\cdot\left(\rho_t P_{\rm{adm}(\rho_t)}[u_t]\right)=0 \end{equation} which can also be written equivalently for the variables $(\rho,p)$ as \begin{equation}\label{fokker2} \left\{ \begin{array}{ll} \partial_t\rho_t-\Delta\rho_t+\nabla\cdot\left(\rho_t(u_t-\nabla p_t)\right)=0, &{\rm{in}}\;\;[0,T]\times\Omega,\\ p\ge0,\ \rho\leq 1, \ p(1-\rho)=0,&{\rm{in}}\;\;[0,T]\times\Omega. \end{array} \right. \end{equation} As usual, these equations are complemented by no-flux boundary conditions and by an initial datum $ \rho(0,x)=\rho_0(x)$.
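At the level of trajectories, the unconstrained dynamics corresponds to the SODE $\,{\rm d} X_t=u_t(X_t)\,{\rm d} t+\sqrt{2}\,{\rm d} B_t$ recalled above. A minimal Euler--Maruyama sketch in one dimension, with a hypothetical drift and with reflection at the endpoints standing in for the no-flux boundary condition (the density constraint and the pressure are deliberately ignored here):

```python
# Euler-Maruyama sketch of dX = u(X) dt + sqrt(2) dB on the interval [0, 1],
# with a hypothetical drift u and reflection at the endpoints in place of the
# no-flux boundary condition.  The constraint rho <= 1 is ignored: this
# illustrates only the unconstrained Fokker-Planck dynamics.
import math
import random

def simulate(u, x0=0.5, T=1.0, n=1000, seed=0):
    rng = random.Random(seed)
    dt = T / n
    x = x0
    for _ in range(n):
        x += u(x) * dt + math.sqrt(2.0 * dt) * rng.gauss(0.0, 1.0)
        while x < 0.0 or x > 1.0:        # reflect back into [0, 1]
            x = -x if x < 0.0 else 2.0 - x
    return x

drift = lambda x: -(x - 0.5)             # hypothetical drift toward the center
final = simulate(drift)
```

Simulating many such trajectories and histogramming the endpoints would approximate the solution of the unconstrained Fokker--Planck equation with no-flux boundary conditions.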
Roughly speaking, we can consider that this equation describes the law of a motion where each agent solves the stochastic differential equation $$\,{\rm d} X_t=(u_t(X_t)-\nabla p_t(X_t))\,{\rm d} t+\sqrt{2}\,{\rm d} B_t.$$ This last statement is just formal and there are several issues in defining an SODE like this: indeed, the pressure variable is also an unknown, and globally depends on the law $\rho_t$ of $X_t$. Hence, if we wanted to see this evolution as a superposition of individual motions, each agent should somehow predict the evolution of the pressure in order to solve his own equation. This is reminiscent of some notions from the stochastic control formulation of Mean-Field Games, as introduced by J.-M. Lasry and P.-L. Lions, even if here there are no strategic issues for the players. For MFG with density constraints, we refer to \cite{CarMesSan, MesSil, modest}. However, in this paper we will not consider any microscopic or individual problem, but only study the parabolic PDE \eqref{fokker2}. \subsection{Structure of the paper and main results} The main goal of the paper is to provide an existence result, with some extra estimates, for the Fokker-Planck equation \eqref{fokker2} via time discretization, using the so-called splitting method (the two main ingredients of the equation, i.e. the advection with diffusion on one hand, and the density constraint on the other hand, are treated one after the other). In Section \ref{sec:2} we will collect some preliminary results, including what we need from optimal transport and from the previous works about density-constrained crowd motion, in particular on the projection operator onto the set ${\mathcal K}$. In Section \ref{sec:main} we will provide the existence result we aim at, by a splitting scheme and some entropy bounds; the solution will be a curve of measures in $AC^2([0,T];\mathcal W_2(\O))$ (absolutely continuous curves with square-integrable speed).
In Section \ref{sec:bv} we will make use of $BV$ estimates to justify that the solution we just built is also $\mathrm{Lip}([0,T];\mathcal W_1(\O))$ and satisfies a global $BV$ bound $\|\rho_t\|_{BV}\leq C$ (provided $\rho_0\in BV$): this requires to combine $BV$ estimates on the Fokker-Planck equation (which are available depending on the regularity of the vector field $u$) with $BV$ estimates on the projection operator on ${\mathcal K}$ (which have been recently proven in \cite{gafb}). Section \ref{sec:5} presents a short review of alternative approaches, all discretized in time, but based either on gradient-flow techniques (the JKO scheme, see \cite{jko}) or on different splitting methods. Finally, in the Appendix \ref{sec:app} we detail the $BV$ estimates on the Fokker-Planck equation (without any density constraint) that we could find; this seems to be a delicate matter, interesting in itself, and we are not aware of the sharp assumptions on the vector field $u$ to guarantee the $BV$ estimate that we need. \section{Preliminaries}\label{sec:2} \subsection{Basic definitions and general facts on optimal transport} Here we collect some tools from the theory of optimal transportation, Wasserstein spaces, its dynamical formulation, etc. which will be used later on. We set our problem either in a compact convex domain $\Omega\subset\mathbb{R}^d$ with smooth boundary or in the $d-$dimensional flat torus $\Omega:=\mathbb{T}^d$ (even if we will not adapt all our notations to the torus case). We refer to \cite{villani,OTAM} for more details. 
Given two probability measures $\mu,\nu\in {\mathcal P}(\Omega)$ and for $p\ge1$ we define the usual Wasserstein metric by means of the Monge-Kantorovich optimal transportation problem $$W_p(\mu,\nu):=\inf\left\{ \int_{\Omega\times\Omega}|x-y|^p\,{\rm d}\gamma(x,y)\;:\;\gamma\in\Pi(\mu,\nu)\right\}^{\frac1p},$$ where $\Pi(\mu,\nu):=\{\gamma\in{\mathcal P}(\Omega\times\Omega):\;\; (\pi^x)_\#\gamma=\mu,\; (\pi^y)_\#\gamma=\nu\}$ and $\pi^x$ and $\pi^y$ denote the canonical projections from $\Omega\times\Omega$ onto $\Omega.$ This quantity happens to be a distance on ${\mathcal P}(\Omega)$ which metrizes the weak-$*$ convergence of probability measures; we denote by $\mathcal W_p(\Omega):=({\mathcal P}(\Omega),W_p),$ i.e. the space of probabilities on $\Omega$ endowed with this distance. Moreover, in the quadratic case $p=2$ and under the assumption $\mu\ll{\mathcal L}^d$ (the $d-$dimensional Lebesgue measure on $\Omega$) in the late 80's Y. Brenier showed (see \cite{brenier1, brenier2}) that actually the optimal $\overline\gamma$ in the above problem (the existence of which is obtained simply by the direct method of calculus of variations) is induced by a map, which is the gradient of a convex function, i.e. there exists $S:\O\to\O$ and $\psi:\Omega\to\mathbb{R}$ convex such that $S=\nabla \psi$ and $\overline\gamma:=(\rm{id},S)_\#\mu.$ The function $\psi$ is obtained as $\displaystyle\psi(x)=\mbox{{\small $\frac 12$}} |x|^2-\varphi(x)$, where $\varphi$ is the so-called Kantorovich potential for the transport from $\mu$ to $\nu$, and is characterized as the solution of a dual problem that we will not develop here. In this way, the optimal transport map $S$ can also be written as $S(x)=x-\nabla\varphi(x)$. Later in the 90's R. 
McCann (see \cite{mccann}) introduced a notion of interpolation between probability measures: the curve $\mu_t:=\left( (T-t)x+ty\right)_\#\overline\gamma,$ for $t\in[0,T]$ ($T>0$ is given), gives a constant speed geodesic in the Wasserstein space connecting $\mu_0:=\mu$ and $\mu_T:=\nu.$ Based on this notion of interpolation in 2000 J.-D. Benamou and Y. Brenier used some ideas from fluid mechanics to give a dynamical formulation to the Monge-Kantorovich problem (see \cite{BB}). They showed that $$\frac{1}{pT^{p-1}}W_p^p(\mu,\nu)=\inf\left\{{\mathcal B}_p(E,\mu)\; : \; \partial_t\mu+\nabla\cdot E=0,\; \mu_0=\mu,\; \mu_T=\nu \right\}.$$ Here ${\mathcal B}_p$ is a functional defined on pairs $(E,\mu)$, where $E$ is a $d$-dimensional vector measure on $[0,T]\times \O$ and $\mu=(\mu_t)_t$ is a Borel-measurable family of probability measures on $\O$. This functional is defined to be finite only if $E=E_t\otimes \,{\rm d} t$ (i.e. it is induced by a measurable family of vector measures on $\O$: $\int_{[0,T]\times\O}\xi(t,x)\cdot \,{\rm d} E(t,x)=\int_0^T\,{\rm d} t\int_\O \xi(t,x)\cdot \,{\rm d} E_t(x)$ for all test functions $\xi\in C^0([0,T]\times\O;\mathbb{R}^d)$) and in this case it is defined through $$ {\mathcal B}_p(E,\mu):=\left\{ \begin{array}{ll} \displaystyle\int_0^T\int_\Omega \frac{1}{p}\left|v_t\right|^p\,{\rm d} \mu_t(x)\,{\rm d} t, &{\rm{if}}\ E_t=v_t\cdot\mu_t\,\\ +\infty, & {\rm{otherwise}}. \end{array} \right. $$ It is well-known that ${\mathcal B}_p$ is jointly convex and l.s.c. w.r.t the weak-$*$ convergence of measures (see Section 5.3.1 in \cite{OTAM}) and that, if $\partial_t \mu+\nabla\cdot E=0$, then ${\mathcal B}_p(E,\mu)<+\infty$ implies that $t\mapsto \mu_t$ is a curve in $AC^{p}([0,T];\mathcal W_p(\Omega))$\footnote{Here $AC^p([0,T];\mathcal W_p(\Omega))$ denotes the class of absolutely continuous curves in $\mathcal W_p(\Omega)$ with metric derivative in $L^p$. See the connection with the functional ${\mathcal B}_p$.}. 
In particular it is a continuous curve and the initial and final conditions on $\mu_0$ and $\mu_T$ are well-defined. Coming back to curves in Wasserstein spaces, it is well known (see \cite{ags} or Section 5.3 in \cite{OTAM}) that for any distributional solution $\mu_t$ (being a continuous curve in $\mathcal W_p(\Omega)$) of the continuity equation $\partial_t\mu+\div E=0$ with $E_t=v_t\cdot\mu_t$, we have the relations $$|\mu'|_{W_p}(t)\le\|v_t\|_{L^p_{\mu_t}}\;\;\;{\rm{and}}\;\;\; W_p(\mu_t,\mu_s)\le\int_s^t|\mu'|_{W_p}(\tau)\,{\rm d}\tau,$$ where we denoted by $|\mu'|_{W_p}(t)$ the metric derivative w.r.t. $W_p$ of the curve $\mu_t$ (see for instance \cite{AmbTil} for general notions about curves in metric spaces and their metric derivative). For curves $\mu_t$ that are geodesics in $\mathcal W_p(\Omega)$ we have the equality $$W_p(\mu_0,\mu_1)=\int_0^1|\mu'|_{W_p}(t)\,{\rm d} t=\int_0^1\|v_t\|_{L^p_{\mu_t}}\,{\rm d} t.$$ The last equality is in fact the Benamou-Brenier formula with the optimal velocity field $v_t$ being the density of the optimal $E_t$ w.r.t. the optimal $\mu_t.$ This optimal velocity field $v_t$ can be computed as $v_t:=(S-\rm{id})\circ (S_t)^{-1}$, where $S_t:=(1-t)\rm{id}+tS$ is the transport in McCann's interpolation (we assume here that the initial measure $\mu_0$ is absolutely continuous, so that we can use transport maps instead of plans). This expression can be obtained if we consider that in this interpolation particles move with constant speed $S(x)-x$, but $x$ represents here a Lagrangian coordinate, and not an Eulerian one: if we want to know the velocity at time $t$ at a given point, we have to find out first the original position of the particle passing through that point at that time. 
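In the one-dimensional case, $W_p$ between two empirical measures with $n$ equally weighted atoms reduces to matching sorted samples, $W_p^p=\frac1n\sum_i |x_{(i)}-y_{(i)}|^p$. A minimal Python sketch of this special case (it does not cover general measures):

```python
# In 1D, W_p between two empirical measures with n equally weighted atoms is
# obtained by matching sorted samples: W_p^p = (1/n) * sum |x_(i) - y_(i)|^p.
def wasserstein_1d(xs, ys, p=2):
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return (sum(abs(a - b) ** p for a, b in zip(xs, ys)) / len(xs)) ** (1.0 / p)

# translating a measure by c moves it by exactly c in every W_p
mu = [0.0, 1.0, 2.0, 3.0]
nu = [x + 0.5 for x in mu]
dist = wasserstein_1d(mu, nu)
```

The monotone matching of sorted samples is exactly the 1D optimal plan, so this computes the distance, not just an upper bound.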
In the sequel we will also need the notion of entropy of a probability density, and for any probability measure $\varrho\in{\mathcal P}(\Omega)$ we define it as $$\displaystyle{\mathcal E}(\varrho):=\left\{ \begin{array}{ll} \displaystyle\int_\Omega\varrho(x)\log\varrho(x)\,{\rm d} x, & \rm{if}\ \varrho\ll{\mathcal L}^d,\\ +\infty, & \rm{otherwise}. \end{array} \right.$$ We recall that this functional is l.s.c. and geodesically convex in $\mathcal W_2(\Omega)$. As we will be mainly working with absolutely continuous probability measures (w.r.t. Lebesgue), we often identify measures with their densities. \subsection{Projection problems in Wasserstein spaces}\label{subsec:proj} Our analysis strongly relies on the projection operator $P_{\mathcal K}$ in the sense of $W_2.$ Here ${\mathcal K}:=\{\rho\in\P(\O):\rho\le1\}$ and $$P_{\mathcal K}[\mu]:=\rm{argmin}_{\rho\in{\mathcal K}}\;\frac12 W_2^2(\mu,\rho).$$ We recall (see \cite{MauRouSan2,xedp} and \cite{gafb}) the main properties of the projection operator $P_{\mathcal K}$. \begin{itemize} \item Since $\Omega$ is compact, for any probability measure $\mu$, the minimizer in $\min_{\rho\in{\mathcal K}}\frac12 W_2^2(\mu,\rho)$ exists and is unique, and the operator $P_{\mathcal K}$ is continuous (it is even $C^{0,1/2}$ for the $W_2$ distance). \item The projection $P_{\mathcal K}[\mu]$ saturates the constraint $\rho\leq 1$ in the sense that for any $\mu\in{\mathcal P}(\Omega)$ there exists a measurable set $B\subseteq\Omega$ such that $P_{\mathcal K}[\mu]=\mathbbm{1}_B+\mu^{\rm{ac}}\mathbbm{1}_{B^c},$ where $\mu^{\rm{ac}}$ is the absolutely continuous part of $\mu$. \item The projection is characterized in terms of a pressure field, in the sense that $\rho=P_{\mathcal K}[\mu]$ if and only if there exists a Lipschitz function $p\geq 0$, with $p(1-\rho)=0$, and such that the optimal transport map $S$ from $\rho$ to $\mu$ is given by $S:=\rm{id}-\nabla\varphi=\rm{id}+\nabla p$.
\item There is (as proven in \cite{gafb}) a quantified $BV$ estimate: if $\mu\in BV$ (in the sense that it is absolutely continuous and that its density belongs to $BV(\Omega)$), then $P_{\mathcal K}[\mu]$ is also $BV$ and $$TV(P_{\mathcal K}[\mu],\Omega)\le TV(\mu,\Omega).$$ \end{itemize} This last $BV$ estimate will be crucial in Section \ref{sec:bv}, and it is important to have it in this very form (other estimates of the form $TV(P_{\mathcal K}[\mu],\Omega)\le aTV(\mu,\Omega)+b$ would not be as useful as this one, as they cannot be easily iterated). \section{Existence via a splitting-up type algorithm {\it \textbf{ (Main Scheme)}}}\label{sec:main} Similarly to the approach in \cite{MauRouSan1} (see the algorithm (13) and Theorem 3.5) for a general, non-gradient, vector field, we will build a theoretical algorithm, after time-discretization, to produce a solution of \eqref{fokker2}. Let us remark that splitting-type methods have been widely used in other contexts as well, see for instance the paper \cite{CleMaas} which deals with splitting methods for Fokker-Planck equations and for more general gradient flows in metric and Wasserstein spaces, or \cite{Laborde} where a splitting-like approach is used to attack PDEs which are not gradient flows but ``perturbations'' of gradient flows. In this section the spontaneous velocity field is a general vector field $u:[0,T]\times\O\to\mathbb{R}^d$ (not necessarily a gradient), which depends also on time. The only assumption we require on $u$ is the following: \begin{equation}\label{hyp:U} u\in L^\infty([0,T]\times\Omega;\mathbb{R}^d).\tag{U} \end{equation} We will work on a time interval $[0,T]$ and in a bounded convex domain $\Omega\subset\mathbb{R}^d$ (the case of the flat torus is even simpler and we will not discuss it in detail). We consider $\rho_0\in{\mathcal P}^{\rm{ac}}(\Omega)$ to be given, which represents the initial density of the population, and we suppose $\rho_0\in{\mathcal K}$.
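Before describing the scheme, let us observe that the projection $P_{\mathcal K}$, which will be used at every step, can be experimented with in dimension one: there, $W_2$ is the $L^2$ distance between quantile functions, and the constraint $\rho\le1$ means that the quantile function $Q$ satisfies $Q(s_2)-Q(s_1)\ge s_2-s_1$, so that projecting amounts to an isotonic regression of $Q_\mu-{\rm id}$. This reformulation is not used in the paper; the sketch below is ours and assumes $\Omega$ is an interval large enough that the constraint $Q\in\Omega$ is inactive.

```python
import numpy as np

def pava(y):
    """L^2 projection of a sequence onto nondecreasing sequences
    (pool-adjacent-violators algorithm)."""
    vals, wts = [], []                      # block means and block sizes
    for v in y:
        vals.append(float(v)); wts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-2] + wts[-1]
            vals[-2] = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / w
            wts[-2] = w
            del vals[-1], wts[-1]
    return np.repeat(vals, wts)

# mu has density 2 on [0, 1/2], i.e. quantile Q_mu(s) = s/2, violating rho <= 1
n = 2000
s = (np.arange(n) + 0.5) / n                # quantile grid on (0, 1)
Q = pava(0.5 * s - s) + s                   # quantile function of P_K[mu]
# the projected density is at most 1: consecutive quantiles are >= 1/n apart
assert np.all(np.diff(Q) >= 1.0 / n - 1e-12)
# here the projection is the indicator of [-1/4, 3/4]: the mass spreads symmetrically
assert abs(Q[0] - (-0.25)) < 1e-3 and abs(Q[-1] - 0.75) < 1e-3
```

In this example the saturation property is visible in its simplest form: the projected density equals $1$ on the whole set where the constraint binds.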
\subsection{Splitting using the Fokker-Planck equation} Let us consider the following scheme. \smallskip \begin{minipage}{8.1cm} {\textbf{Main scheme:}} Let $\tau>0$ be a small time step with $N:=\lfloor T/\tau\rfloor.$ Let us set $\rho_0^\tau:=\rho_0$ and for every $k\in\{0,\dots,N\}$ we define $\rho_{k+1}^\tau$ from $\rho_k^\tau$ in the following way. First we solve \begin{equation}\label{FP-basic} \left\{ \begin{array}{l} \partial_t\varrho_t -\Delta\varrho_t+\nabla\cdot(\varrho_t u_{t+k\tau})=0, \ t\in]0,\tau],\\ \varrho_{0}=\rho_k^\tau, \end{array} \right. \end{equation} equipped with the no-flux boundary condition ($(\nabla\varrho_t-\varrho_t u_{t+k\tau})\cdot n =0$ a.e. on $\partial\Omega$) and set $\rho_{k+1}^\tau=P_{\mathcal K}[\tilde{\rho}_{k+1}^\tau],$ where $\tilde{\rho}_{k+1}^\tau=\varrho_\tau.$ See Figure \ref{cxc} on the right. \end{minipage} \begin{minipage}{6.5cm} \begin{tikzpicture}[scale=0.6] \draw[blue] (0,0) node {$\bullet$} node [left]{$\rho^\tau_k$}; \draw[blue,->,>=stealth] (0.5,0.3) -- (7.5,5.5) node[midway,above,sloped] {$\partial_t\varrho_t-\Delta\varrho_t+\nabla\cdot\left(\varrho_t u_{t+k\tau}\right)=0$}; \draw[blue] (8,6) node {$\bullet$} node [above]{$\tilde\rho^\tau_{k+1}=\varrho_{\tau}$}; \draw[blue,->,>=stealth] (9.7,0.3) -- (8.15,5.2) node[midway,above,sloped] {$\rm{id}+\tau\nabla p_{k+1}^\tau$}; \draw[blue] (10,0) node {$\bullet$} node [right]{$\rho^\tau_{k+1}$}; \end{tikzpicture} \captionof{figure}{One time step}\label{cxc} \end{minipage} \smallskip Let us remark first that by classical results on parabolic equations (see for instance \cite{lady}), since $u$ satisfies the assumption \eqref{hyp:U}, problem \eqref{FP-basic} admits a unique distributional solution. The above algorithm means the following: first follow the Fokker-Planck equation, ignoring the density constraint, for a time $\tau$, then project.
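For readers who wish to experiment, the first half of a time step, i.e. \eqref{FP-basic} with no-flux boundary conditions, can be discretized in one dimension by a crude finite-volume scheme as follows (the grid, the drift $u(x)=\sin(\pi x)$ and the explicit Euler stepping are our own illustrative choices, not part of the paper's construction; the projection $P_{\mathcal K}$ would then be applied to the resulting density):

```python
import numpy as np

# One Fokker-Planck substep on [0,1]: d_t rho = d_x(d_x rho - rho u),
# with zero flux at both endpoints, discretized by finite volumes.
n, tau = 200, 1e-2
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
rho = np.ones(n)                          # rho_k^tau, already in K
u = np.sin(np.pi * x)                     # illustrative drift, u = 0 at the boundary
dt = 0.2 * dx**2                          # CFL-type restriction for explicit Euler
t = 0.0
while t < tau:
    # flux rho*u - d_x rho at interior interfaces; zero flux at the boundary
    F = np.zeros(n + 1)
    F[1:-1] = 0.5 * (rho[:-1] * u[:-1] + rho[1:] * u[1:]) - (rho[1:] - rho[:-1]) / dx
    rho = rho - dt / dx * (F[1:] - F[:-1])
    t += dt
assert abs(np.sum(rho) * dx - 1.0) < 1e-10   # no-flux => mass is conserved
assert rho.max() > 1.0                       # the constraint rho <= 1 is violated,
                                             # which is why the projection is needed
```

The telescoping of the interface fluxes conserves the total mass exactly, mirroring the conservation of mass used repeatedly below.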
In order to state and prove the convergence of the scheme, we need to define some suitable interpolations of the discrete sequence of densities that we have just introduced. {\it First interpolation.} We define the following curves of densities, velocities and momenta constructed with the help of the $\rho_k^\tau$'s. First set $$ \rho^\tau_t:=\left\{ \begin{array}{ll} \varrho_{2(t-k\tau)}, & {\rm{if}}\ t\in\left[k\tau,(k+1/2)\tau\right[,\\ \left(\rm{id}+2((k+1)\tau-t)\nabla p_{k+1}^\tau\right)_\#\rho_{k+1}^\tau, & {\rm{if}}\ t\in\left[(k+1/2)\tau,(k+1)\tau\right[, \end{array} \right. $$ where $\varrho_t$ is the solution of the Fokker-Planck equation \eqref{FP-basic} with initial datum $\rho_k^\tau$ and $\nabla p_{k+1}^\tau$ arises from the projection of $\tilde\rho_{k+1}^\tau,$ more precisely $(\rm{id}+\tau\nabla p_{k+1}^\tau)$ is the optimal transport from $\rho_{k+1}^\tau$ to $\tilde\rho_{k+1}^\tau.$ In other words, we are fitting the two steps of our algorithm into a single time interval of length $\tau$: first we follow the FP equation \eqref{FP-basic} at double speed, then we interpolate between the measure we reached and its projection, following the geodesic between them. This geodesic is easily described as an image measure of $\rho_{k+1}^\tau$ through McCann's interpolation. By the construction it is clear that $\rho^\tau_t$ is a continuous curve in ${\mathcal P}(\Omega)$ for $t\in[0,T].$ We now define a family of time-dependent vector fields through $$ v^\tau_t:=\left\{ \begin{array}{ll} -2\frac{\nabla{\varrho_{2(t-k\tau)}}}{\varrho_{2(t-k\tau)}}+2u_{2t-k\tau}, & {\rm{if}}\ t\in\left[k\tau,(k+1/2)\tau\right[,\\ -2\nabla p_{k+1}^\tau\circ(\rm{id}+2((k+1)\tau-t)\nabla p_{k+1}^\tau)^{-1}, & {\rm{if}}\ t\in\left[(k+1/2)\tau,(k+1)\tau\right[, \end{array} \right. $$ and, finally, we simply define the curve of momenta as $E_t^\tau:=\rho_t^\tau v_t^\tau.$ {\it Second interpolation.} We define another interpolation as follows.
Set $$\tilde\rho_t^\tau:=\varrho_{t-k\tau},\;\;\; {\rm{if}}\ t\in[k\tau,(k+1)\tau[,$$ where $\varrho_t$ is (again) the solution of the Fokker-Planck equation \eqref{FP-basic} on the time interval $[0,\tau]$ with initial datum $\rho_k^\tau.$ Here we do not double its speed. We define the curve of velocities $$\tilde v_t^\tau:=-\frac{\nabla{\varrho_{t-k\tau}}}{\varrho_{t-k\tau}}+u_{t},\;\;\; {\rm{if}}\ t\in\left[k\tau,(k+1)\tau\right[,$$ and we build the curve of momenta by $\tilde E_t^\tau:=\tilde\rho_t^\tau \tilde v_t^\tau.$ {\it Third interpolation.} For each $\tau$, we also define piecewise constant curves, \begin{eqnarray*}\hat\rho_t^\tau:=\rho_{k+1}^\tau,\;\;\; &&{\rm{if}}\ t\in[k\tau,(k+1)\tau[,\\ \hat v_t^\tau:=\nabla p_{k+1}^\tau,\;\;\; &&{\rm{if}}\ t\in[k\tau,(k+1)\tau[,\end{eqnarray*} and $\hat E_t^\tau:=\hat\rho_t^\tau\hat v_t^\tau.$ We remark that $p_{k+1}^\tau(1-\rho_{k+1}^\tau)=0,$ hence the curve of momenta is just $$\hat E_t^\tau:=\nabla p_{k+1}^\tau,\;\;\; {\rm{if}}\ t\in[k\tau,(k+1)\tau[.$$ Mind the differences in the construction of $\rho_t^\tau,$ $\tilde\rho_t^\tau$ and $\hat\rho_t^\tau$ (hence in the construction of $v_t^\tau$, $\tilde v_t^\tau$ and $\hat v_t^\tau$ and $E_t^\tau$, $\tilde E_t^\tau$ and $\hat E_t^\tau$): 1) the first one is continuous in time for the weak-* convergence, while the second and third ones are not; 2) in the first construction we have taken into account the projection operator explicitly, while in the second one we see it just in an indirect manner (via the `jumps' occurring at every time of the form $t=k\tau$). The third interpolation is piece-wise constant, and at every time it satisfies the density constraint; 3) in the first interpolation the pair $(\rho^\tau,E^\tau)$ solves the continuity equation, while in the other two they do not. This is not astonishing, as the continuity equation characterizes continuous curves in $\mathcal W_2(\O)$. 
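The convergence analysis below hinges on the fact that the entropy ${\mathcal E}$ is dissipated along the Fokker-Planck flow appearing in these interpolations. As a quick numerical sanity check (ours, in the pure-diffusion case $u=0$, on a 1D grid of our choosing), the entropy is indeed nonincreasing along a discrete heat flow with no-flux boundary conditions:

```python
import numpy as np

# Discrete heat flow (u = 0) on [0,1] with zero-flux boundary conditions:
# the entropy E(rho) = int rho log rho decreases, since dE/dt = -int |grad rho|^2/rho.
n = 400
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
rho = 1.0 + 0.5 * np.cos(np.pi * x)          # positive density with mass 1
dt = 0.2 * dx**2

def entropy(r):
    return np.sum(r * np.log(r)) * dx

for _ in range(500):
    e_old = entropy(rho)
    F = np.zeros(n + 1)
    F[1:-1] = -(rho[1:] - rho[:-1]) / dx      # heat flux, zero at the boundary
    rho = rho - dt / dx * (F[1:] - F[:-1])
    assert entropy(rho) <= e_old + 1e-12      # entropy is nonincreasing
```

The monotonicity is not an accident of the discretization: each explicit step is an averaging (doubly stochastic) operation, which cannot increase the sum of the convex function $t\mapsto t\log t$.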
In order to prove the convergence of the scheme above, we will obtain uniform $AC^2([0,T];\mathcal W_2(\Omega))$ bounds for the curves $\rho^\tau.$ A key observation here is that the metric derivative (w.r.t. $W_2$) of the solution of the Fokker-Planck equation is comparable with the time differential of the entropy functional along the same solution (see Lemma \ref{entropy-FP}). Now we state the main theorem of this section. \begin{theorem}\label{convergence} Let $\rho_0\in{\mathcal K}$ and $u$ be a given desired velocity field satisfying \eqref{hyp:U}. Let us consider the interpolations introduced above. Then there exists a continuous curve $[0,T]\ni t\mapsto \rho_t\in\mathcal W_2(\Omega)$ and some vector measures $E,\tilde E, \hat E\in{\mathfrak M}([0,T]\times\Omega)^d$ such that the curves $\rho^\tau,\tilde\rho^\tau,\hat\rho^\tau$ converge uniformly in $\mathcal W_2(\Omega)$ to $\rho$ and $$ E^\tau\stackrel{*}{\rightharpoonup} E,\quad \tilde E^\tau\stackrel{*}{\rightharpoonup} \tilde E,\quad \hat E^\tau\stackrel{*}{\rightharpoonup} \hat E,\;\; {\rm{in}}\;\; {\mathfrak M}([0,T]\times\Omega)^d,\;\; {\rm{as}}\ \tau\to0. $$ Moreover $E=\tilde E-\hat E$ and for a.e. $t\in[0,T]$ there exist time-dependent measurable vector fields $v_t,\tilde v_t,\hat v_t$ such that \begin{itemize} \item[(1)] $E=\rho v,\;\; \tilde E=\rho\tilde v,\;\; \hat E=\rho\hat v,$ \item[(2)] $\displaystyle\int_0^T\left(\|v_t\|_{L^2_{\rho_t}}^2+\|\tilde v_t\|_{L^2_{\rho_t}}^2+ \|\hat v_t\|_{L^2_{\rho_t}}^2\right)\,{\rm d} t<+\infty,$ \item[(3)] $v_t=\tilde v_t-\hat v_t\ \rho_t-{\rm{a.e.}},\;\; \tilde E_t=\rho_t u_t-\nabla\rho_t\;\; {\rm{and}}\;\; \hat v_t=\nabla p_t,\ \rho_t-{\rm{a.e.}},$ \end{itemize} where $p\in L^2([0,T];H^1(\Omega)),$ $p\geq 0$ and $p(1-\rho)=0$ a.e. in $[0,T]\times\Omega$.
As a consequence, the pair $(\rho,p)$ is a weak solution of the problem \begin{equation}\label{FP-advanced} \left\{ \begin{array}{ll} \partial_t\rho_t-\Delta\rho_t+\nabla\cdot\left(\rho_t(u_t-\nabla p_t)\right)=0, & {\rm{in}}\ [0,T]\times\Omega,\\ p_t\ge 0,\ \rho_t\leq 1,\ p_t(1-\rho_t)=0, & {\rm{in}}\ [0,T]\times\Omega,\\ \left(\nabla\rho_t-\rho_t(u_t-\nabla p_t)\right)\cdot n = 0, & {\rm{on}}\ [0,T]\times\partial\Omega,\\ \rho(0,\cdot)=\rho_0. \end{array} \right. \end{equation} \end{theorem} To prove this theorem we will use the following tools. \begin{lemma}\label{entropy-FP} Let us consider a solution $\varrho_t$ of the Fokker-Planck equation on $[0,T]\times\Omega$ with the velocity field $u$ satisfying \eqref{hyp:U} and with no-flux boundary conditions on $[0,T]\times\partial\Omega$. Then for any time interval $]a,b[$ we have the following estimate \begin{equation}\label{estim-velocity} \frac{1}{2}\int_a^b\int_{\Omega}\left|-\frac{\nabla\varrho_t}{\varrho_t}+u_{t}\right|^2\varrho_t\,{\rm d} x\,{\rm d} t\le{\mathcal E}(\varrho_a)-{\mathcal E}(\varrho_b)+\frac{1}{2}\int_a^b\int_\Omega|u_{t}|^2\varrho_t\,{\rm d} x\,{\rm d} t. \end{equation} In particular this implies \begin{equation}\label{tool2} \frac{1}{2}\int_a^b|\varrho'_t|^2_{W_2}\,{\rm d} t\le{\mathcal E}(\varrho_a)-{\mathcal E}(\varrho_b)+\frac{1}{2}\int_a^b\int_\Omega|u_{t}|^2\varrho_t\,{\rm d} x\,{\rm d} t, \end{equation} where $|\varrho_t'|_{W_2}$ denotes the metric derivative of the curve $t\mapsto\varrho_t\in\mathcal W_2(\Omega)$. \end{lemma} \begin{proof} To prove this inequality, we will first make computations in the case where both $u$ and $\varrho$ are smooth, and $\varrho$ is bounded from below by a positive constant.
In this case we can write \begin{align*} \frac{\,{\rm d}}{\,{\rm d} t}{\mathcal E}(\varrho_t)&=\int_\Omega(\log\varrho_t+1)\partial_t\varrho_t\,{\rm d} x=\int_\Omega \log\varrho_t(\Delta\varrho_t-\nabla\cdot(\varrho_t u_{t}))\,{\rm d} x\\ &=\int_\Omega\left(-\frac{|\nabla\varrho_t|^2}{\varrho_t}+u_{t}\cdot\nabla\varrho_t\right)\,{\rm d} x, \end{align*} where we used the conservation of mass (i.e. $\int_\Omega \partial_t\varrho_t\,{\rm d} x=0$) and the boundary conditions in the integration by parts. We now compare this with \begin{eqnarray*} \frac 12\int_{\Omega}\left|-\frac{\nabla\varrho_t}{\varrho_t}+u_{t}\right|^2\varrho_t\,{\rm d} x-\frac12\int_\Omega|u_{t}|^2\varrho_t\,{\rm d} x&=&\int_\Omega\left(\frac12\frac{|\nabla\varrho_t|^2}{\varrho_t}-\nabla\varrho_t\cdot u_{t}\right)\,{\rm d} x\\&\le&\int_\Omega\left(\frac{|\nabla\varrho_t|^2}{\varrho_t}-\nabla\varrho_t\cdot u_{t}\right)\,{\rm d} x =-\frac{\,{\rm d}}{\,{\rm d} t}{\mathcal E}(\varrho_t). \end{eqnarray*} This provides the first part of the statement, i.e. \eqref{estim-velocity}. If we combine this with the fact that the metric derivative of the curve $t\mapsto\varrho_t$ is always less than or equal to the $L^2_{\varrho_t}$ norm of the velocity field in the continuity equation, we also get $$\frac12|\varrho'_t|^2_{W_2}-\frac12\int_\Omega|u_{t}|^2\varrho_t\le-\frac{\,{\rm d}}{\,{\rm d} t}{\mathcal E}(\varrho_t),$$ and hence \eqref{tool2}. In order to prove the same estimates without artificial smoothness and lower bound assumptions, we can argue by approximation. We approximate the density $\varrho_a$ by smooth and strictly positive densities $\varrho^k_a$ (by convolution, so that we guarantee in particular ${\mathcal E}(\varrho_a^k)\to{\mathcal E}(\varrho_a)$), and the vector field $u$ with smooth vector fields $u^k$ (strongly in $L^4([a,b]\times\Omega)$, keeping the $L^\infty$ bound). If we call $\varrho^k$ the corresponding solution of the Fokker-Planck equation, it satisfies \eqref{estim-velocity}.
This implies a uniform bound (w.r.t. $k$) for $\sqrt{\varrho^k}$ in $L^2([a,b];H^1(\Omega))$, and hence a uniform bound on $\varrho^k$ in $L^2([a,b]\times\Omega)$. From these bounds and the uniqueness of the solution of the Fokker-Planck equation with $L^\infty$ drift we deduce $\varrho^k\to\varrho$. The semicontinuity of the left-hand side in \eqref{estim-velocity} and of the entropy term at $t=b$, together with the convergence of the entropy at $t=a$ and the convergence $\int_a^b\int_\O |u^k|^2\varrho^k\,{\rm d} x\,{\rm d} t\to\int_a^b\int_\O |u|^2\varrho\,{\rm d} x\,{\rm d} t$ (because we have a product of weak and strong convergence in $L^2$) allow us to pass \eqref{estim-velocity} to the limit. \end{proof} \begin{corollary}\label{entropy_small} From the inequality \eqref{tool2} we deduce that $${\mathcal E}(\varrho_b)-{\mathcal E}(\varrho_a)\le\frac{1}{2}\int_a^b\int_\Omega|u_{t}|^2\varrho_t\,{\rm d} x\,{\rm d} t,$$ hence in particular for $u$ satisfying \eqref{hyp:U} we have $${\mathcal E}(\varrho_b)-{\mathcal E}(\varrho_a)\le\frac12 \|u\|^2_{L^\infty}(b-a).$$ As a consequence, if $\varrho_a\le 1$ (so that ${\mathcal E}(\varrho_a)\le 0$), then we have $${\mathcal E}(\varrho_b)\le\frac12 \|u\|^2_{L^\infty}(b-a).$$ The same estimate can be applied to the curve $\tilde\rho^\tau$, with $a=k\tau$ and $b\in ]k\tau,(k+1)\tau[$, thus obtaining ${\mathcal E}(\tilde\rho^\tau_t)\leq C\tau$ for every $t$. \end{corollary} \begin{lemma}\label{proj_entropy} For any $\rho\in{\mathcal P}(\Omega)$ we have ${\mathcal E}\left(P_{\mathcal K}[\rho]\right)\le{\mathcal E}(\rho).$ \end{lemma} \begin{proof} We can assume $\rho\ll{\mathcal L}^d$, otherwise the claim is straightforward.
As we pointed out in Section \ref{subsec:proj}, we know that there exists a measurable set $B\subseteq\Omega$ such that $$P_{\mathcal K}[\rho]=\mathbbm{1}_B+\rho\mathbbm{1}_{B^c}.$$ Hence it is enough to prove that $$\int_B\rho\log\rho\,{\rm d} x\ge 0=\int_B P_{\mathcal K}[\rho]\log P_{\mathcal K}[\rho]\,{\rm d} x,$$ as the entropies on $B^c$ coincide. As the mass of $\rho$ and $P_{\mathcal K}[\rho]$ is the same on the whole $\Omega$, and they coincide on $B^c$, we have $\displaystyle\int_B\rho(x)\,{\rm d} x=\int_B P_{\mathcal K}[\rho]\,{\rm d} x=|B|$. Then, by Jensen's inequality we have $$\frac{1}{|B|}\int_B\rho\log\rho\,{\rm d} x\ge \left(\frac{1}{|B|}\int_B\rho\,{\rm d} x\right)\log\left(\frac{1}{|B|}\int_B\rho\,{\rm d} x\right)=0.$$ The entropy decay follows. \end{proof} To analyze the pressure field we will need the following result. \begin{lemma}\label{pressure} Let $\{p^\tau\}_{\tau>0}$ be a bounded sequence in $L^2([0,T]; H^1(\Omega))$ and $\{\rho^\tau\}_{\tau>0}$ a sequence of piecewise constant curves valued in $\mathcal W_2(\Omega)$, which satisfy $W_2(\rho^\tau(a),\rho^\tau(b))\leq C\sqrt{b-a+\tau}$ for all $a<b\in [0,T]$ for a fixed constant $C$. Suppose that $$p^\tau\geq 0,\; p^\tau(1- \rho^\tau)=0,\;\rho^\tau\leq 1,$$ and that $$p^\tau\rightharpoonup p\; {\rm{weakly\ in}}\; L^2([0,T]; H^1(\Omega))\;\;{\rm{and}}\;\; \rho^\tau\to\rho\; {\rm{uniformly\ in\ }}\mathcal W_2(\Omega).$$ Then $p(1-\rho)=0$ a.e. in $[0,T]\times\Omega$. \end{lemma} The proof of this result is the same as in Step 3 of Section 3.2 of \cite{MauRouSan2} (see also \cite{aude_phd} and Lemma 4.6 in \cite{DMS}). We omit it in order not to overburden the paper. The reader can note the strong connection with the classical Aubin-Lions lemma \cite{Aubin}, applied to the compact injection of $L^2$ into $H^{-1}$. Indeed, from the weak convergence of $p^\tau$ in $L^2([0,T]; H^1(\Omega))$, we just need to provide strong convergence of $\rho^\tau$ in $L^2([0,T]; H^{-1}(\Omega))$. 
If instead of the quasi-H\"older assumption of the above lemma we suppose a uniform bound of $\{\rho^\tau\}_\tau$ in $AC^2([0,T];\mathcal W_2(\O))$ (which is not so different), then the statement can indeed be deduced from the Aubin-Lions lemma. Indeed, the sequence $\{\rho^\tau\}$ is bounded in $L^\infty([0,T]; L^2(\Omega))$ and its time-derivative would be bounded in $L^2([0,T]; H^{-1}(\Omega))$. This strongly depends on the fact that the $H^{-1}$ distance can be controlled by the $W_2$ distance as soon as the measures have uniformly bounded densities (see \cite{loeper,MauRouSan2}), a tool which is also crucial in the proofs in \cite{MauRouSan2,aude_phd,DMS}. Then, the Aubin-Lions lemma guarantees compactness in $C^0([0,T];H^{-1}(\O))$, which is more than what we need. \begin{lemma}\label{tools} Let us consider the previously defined interpolations. Then we have the following facts. \begin{itemize} \item[{$\rm{(i)}$}] For every $\tau>0$ and $k$ we have $$\max\left\{W_2^2(\rho_k^\tau,\tilde\rho_{k+1}^\tau), W_2^2(\rho_{k}^\tau,\rho_{k+1}^\tau)\right\}\le \tau C\left({\mathcal E}(\rho_k^\tau)-{\mathcal E}(\rho_{k+1}^\tau)\right)+C\tau^2,$$ where $C>0$ only depends on $\|u\|_{L^\infty}.$ \item[(ii)] There exists a constant $C$, only depending on $\rho_0$ and $\|u\|_{L^\infty}$, such that ${\mathcal B}_2(E^\tau,\rho^\tau)\le C$, ${\mathcal B}_2(\tilde E^\tau,\tilde\rho^\tau)\le C$ and ${\mathcal B}_2(\hat E^\tau,\hat\rho^\tau)\le C$. \item[(iii)] For the curve $[0,T]\ni t\mapsto\rho_t^\tau$ we have that $$\int_0^T|(\rho_t^\tau)'|^2_{W_2}\,{\rm d} t\le C,$$ for a $C>0$ independent of $\tau$. Here we denoted by $|(\rho_t^\tau)'|_{W_2}$ the metric derivative of the curve $\rho^\tau$ at $t$ in $\mathcal W_2$. In particular, we have a uniform H\"older bound on $\rho^\tau$: $W_2(\rho^\tau(a),\rho^\tau(b))\leq C\sqrt{b-a}$ for every $b>a$.
\item[(iv)] $E^\tau,\tilde E^\tau, \hat E^\tau$ are uniformly bounded sequences in ${\mathfrak M}([0,T]\times\Omega)^d.$ \end{itemize} \end{lemma} \begin{proof} ${\rm{(i)}}$ First, by the triangle inequality and by the fact that $\rho_{k+1}^\tau=P_{\mathcal K}[\tilde{\rho}_{k+1}^\tau]$ is the closest point of ${\mathcal K}$ to $\tilde{\rho}_{k+1}^\tau$ (while $\rho_k^\tau\in{\mathcal K}$), we have that \begin{equation}\label{tool1} W_2(\rho_k^\tau,\rho_{k+1}^\tau)\le W_2(\rho_k^\tau,\tilde{\rho}_{k+1}^\tau)+W_2(\tilde{\rho}_{k+1}^\tau,\rho_{k+1}^\tau)\le2W_2(\rho_k^\tau,\tilde{\rho}_{k+1}^\tau). \end{equation} We use (as before) the notation $\varrho_t,$ $t\in[0,\tau]$ for the solution of the Fokker-Planck equation \eqref{FP-basic} with initial datum $\rho_k^\tau,$ in particular we have $\varrho_\tau=\tilde\rho_{k+1}^\tau.$ Since $\varrho_0=\rho_k^\tau$ and $\varrho_\tau=\tilde\rho_{k+1}^\tau$, using $\displaystyle W_2(\rho_k^\tau,\tilde\rho_{k+1}^\tau)\le\int_0^\tau|\varrho_t'|_{W_2}\,{\rm d} t$, the Cauchy-Schwarz inequality and \eqref{tool2} from Lemma \ref{entropy-FP}, we have \begin{align*} W_2^2(\rho_k^\tau,\tilde{\rho}_{k+1}^\tau)&\le\left(\tau^{\frac12}\left(\int_0^\tau|\varrho_t'|^2_{W_2}\,{\rm d} t\right)^{\frac12}\right)^2\le 2\tau\left({\mathcal E}(\varrho_0)-{\mathcal E}(\varrho_\tau)\right)+\tau\int_0^\tau\int_\Omega|u_{k\tau+t}|^2\varrho_t\,{\rm d} x\,{\rm d} t\\ &\le 2\tau\left({\mathcal E}(\rho_k^\tau)-{\mathcal E}(\tilde\rho_{k+1}^\tau)\right)+C\tau^2\le 2\tau\left({\mathcal E}(\rho_k^\tau)-{\mathcal E}(\rho_{k+1}^\tau)\right)+C\tau^2, \end{align*} where $C>0$ is a constant depending just on $\|u\|_{L^\infty}$. We also used the fact that ${\mathcal E}(\rho_{k+1}^\tau)\le{\mathcal E}(\tilde\rho_{k+1}^\tau)$, a consequence of Lemma \ref{proj_entropy}.
Now by means of \eqref{tool1} we obtain $$W_2^2(\rho_k^\tau,\rho_{k+1}^\tau)\le \tau C\left({\mathcal E}(\rho_k^\tau)-{\mathcal E}(\rho_{k+1}^\tau)\right)+C\tau^2.$$ ${\rm{(ii)}}$ We use Lemma \ref{entropy-FP} on the intervals of type $[k\tau,(k+1/2)\tau[$ and the fact that on each interval of type $[(k+1/2)\tau,(k+1)\tau[$ the curve $\rho_t^\tau$ is a constant speed geodesic. In particular, on these intervals we have $$|(\rho^\tau)'|_{W_2}=\|v^\tau_t\|_{L^2_{\rho^\tau_t}}=2\|\nabla p_{k+1}^\tau\|_{L^2_{\rho_{k+1}^\tau}}=\frac{2}{\tau}W_2(\rho_{k+1}^\tau,\tilde\rho_{k+1}^\tau).$$ On the other hand we also have $$\tau^2\|\nabla p_{k+1}^\tau\|_{L^2_{\rho_{k+1}^\tau}}^2=W_2^2(\rho_{k+1}^\tau,\tilde\rho_{k+1}^\tau)\le W_2^2(\rho_{k}^\tau,\tilde\rho_{k+1}^\tau)\le \tau C\left({\mathcal E}(\rho_k^\tau)-{\mathcal E}(\rho_{k+1}^\tau)\right)+C\tau^2.$$ Hence we obtain \begin{align*} &\int_{k\tau}^{(k+1)\tau}\|v^\tau_t\|^2_{L^2(\rho^\tau_t)}\,{\rm d} t\\ &=\int_{k\tau}^{(k+1/2)\tau}\int_\Omega 4\left|-\frac{\nabla\varrho_{2(t-k\tau)}}{\varrho_{2(t-k\tau)}}+u_{2t-k\tau}\right|^2\varrho_{2(t-k\tau)}(x)\,{\rm d} x\,{\rm d} t+4\int_{(k+1/2)\tau}^{(k+1)\tau}\int_\Omega|\nabla p_{k+1}^\tau|^2\rho_{k+1}^\tau\,{\rm d} x\,{\rm d} t\\ &\le C\left({\mathcal E}(\rho_k^\tau)-{\mathcal E}(\rho_{k+1}^\tau)\right)+C\tau + 2\tau\|\nabla p_{k+1}^\tau\|^2_{L^2_{\rho_{k+1}^\tau}}\\ &\le C\left({\mathcal E}(\rho_k^\tau)-{\mathcal E}(\rho_{k+1}^\tau)\right)+C\tau. \end{align*} Hence by adding up we obtain $${\mathcal B}_2(E^\tau,\rho^\tau)\le \sum_k \left\{C\left({\mathcal E}(\rho_k^\tau)-{\mathcal E}(\rho_{k+1}^\tau)\right)+C\tau\right\}=C\left({\mathcal E}(\rho_0^\tau)-{\mathcal E}(\rho_{N+1}^\tau)\right)+CT\le C,$$ where we also used that ${\mathcal E}$ is bounded from below on ${\mathcal P}(\Omega)$, as $\Omega$ is bounded. The estimates on ${\mathcal B}_2(\tilde E^\tau,\tilde\rho^\tau)$ and ${\mathcal B}_2(\hat E^\tau,\hat \rho^\tau)$ are completely analogous and descend from the previous computations.
${\rm{(iii)}}$ The estimate on ${\mathcal B}_2(E^\tau,\rho^\tau)$ implies a bound on $\displaystyle\int_0^T|(\rho_t^\tau)'|^2_{W_2}\,{\rm d} t$ because $v^\tau$ is a velocity field for $\rho^\tau$ (i.e., the pair $(E^\tau,\rho^\tau)$ solves the continuity equation). ${\rm{(iv)}}$ In order to estimate the total mass of $E^\tau$ we write \begin{align*} |E^\tau|([0,T]\times\Omega)&=\int_0^T\int_\Omega|v_t^\tau|\rho_t^\tau\,{\rm d} x\,{\rm d} t\le\int_0^T\left(\int_\Omega |v_t^\tau|^2\rho_t^\tau\,{\rm d} x\right)^\frac12\left(\int_\Omega\rho_t^\tau\,{\rm d} x\right)^\frac12\,{\rm d} t\\ &\le \sqrt{T}\left(\int_0^T\int_\Omega|v_t^\tau|^2\rho_t^\tau\,{\rm d} x\,{\rm d} t\right)^\frac12\le C. \end{align*} The bounds on $\tilde E^\tau$ and $\hat E^\tau$ rely on the same argument. \end{proof} \begin{proof}[Proof of Theorem \ref{convergence}] We use the tools from Lemma \ref{tools}. {\it Step 1.} By the bounds on the metric derivative of the curves $\rho_t^\tau$ we get compactness, i.e. there exists a curve $[0,T]\ni t\mapsto\rho_t\in{\mathcal P}(\Omega)$ such that $\rho^\tau$ (up to subsequences) converges uniformly in $[0,T]$ w.r.t. $W_2,$ in particular weakly-$*$ in ${\mathcal P}(\Omega)$ for all $t\in[0,T].$ It is easy to see that $\tilde\rho^\tau$ and $\hat\rho^\tau$ converge to the same curve. Indeed we have $\tilde\rho_t^\tau=\rho_{\tilde s(t)}^\tau$ and $\hat\rho_t^\tau=\rho_{\hat s(t)}^\tau$ for $|\tilde s(t)-t|\leq \tau$ and $|\hat s(t)-t|\leq \tau$, which implies $W_2(\rho_t^\tau,\tilde\rho_t^\tau),W_2(\rho_t^\tau,\hat\rho_t^\tau) \le C\tau^\frac12$. This provides the convergence to the same limit.
{\it Step 2.} By the boundedness of $E^\tau,\tilde E^\tau$ and $\hat E^\tau$ in ${\mathfrak M}([0,T]\times\Omega)^d$ we have the existence of $E,\tilde E,\hat E\in{\mathfrak M}([0,T]\times\Omega)^d$ such that (up to a subsequence) $E^\tau\stackrel{*}{\rightharpoonup} E,\tilde E^\tau\stackrel{*}{\rightharpoonup} \tilde E, \hat E^\tau\stackrel{*}{\rightharpoonup} \hat E$ as $\tau\to 0.$ Now we show that $E=\tilde E-\hat E.$ Indeed, let us show that for any test function $f\in \rm{Lip}([0,T]\times\Omega)^d$ we have $$\left|\int_0^T\int_\Omega f_t\cdot \left(E_t^\tau-(\tilde E_t^\tau-\hat E_t^\tau)\right)(\,{\rm d} x,\,{\rm d} t)\right|\to 0,$$ as $\tau\to 0.$ First for each $k\in\{0,\dots,N\}$ we have that \begin{eqnarray*} \int_{k\tau}^{(k+1/2)\tau}\int_\Omega f_t \cdot E_t^\tau(\,{\rm d} x,\,{\rm d} t)&=&\int_{k\tau}^{(k+1)\tau}\int_\Omega f_{(t+k\tau)/2}\cdot(-\nabla \varrho_{t-k\tau}+u_{t}\varrho_{t-k\tau})(\,{\rm d} x,\,{\rm d} t)\\ &=&\int_{k\tau}^{(k+1)\tau}\int_\Omega f_t \cdot \tilde E_t^\tau(\,{\rm d} x,\,{\rm d} t)+\int_{k\tau}^{(k+1)\tau}\int_\Omega \left( f_{(t+k\tau)/2}-f_t\right)\cdot\tilde E_t^\tau(\,{\rm d} x,\,{\rm d} t) \end{eqnarray*} and \small \begin{eqnarray*} \int_{(k+1/2)\tau}^{(k+1)\tau}\int_\Omega f_t \cdot E_t^\tau(\,{\rm d} x,\,{\rm d} t)&=&\int_{k\tau}^{(k+1)\tau}\int_\Omega -f_{(t+(k+1)\tau)/2}\circ(\rm{id}+((k+1)\tau-t)\nabla p_{k+1}^\tau)\cdot\nabla p_{k+1}^{\tau}\rho_{k+1}^\tau(\,{\rm d} x,\,{\rm d} t)\\ &=&-\int_{k\tau}^{(k+1)\tau}\int_\Omega f_t \cdot \hat E_t^\tau(\,{\rm d} x,\,{\rm d} t)\\ &&+\int_{k\tau}^{(k+1)\tau}\int_\Omega \left(f_t-f_{(t+(k+1)\tau)/2}\circ(\rm{id}+((k+1)\tau-t)\nabla p_{k+1}^\tau)\right)\cdot\hat v^\tau_t\hat \rho^\tau_t(\,{\rm d} x,\,{\rm d} t)\end{eqnarray*} \normalsize This implies that \begin{align*} \Bigg{|}\int_0^T\int_\Omega f_t\cdot (E_t^\tau&-\tilde E_t^\tau+\hat E_t^\tau)(\,{\rm d} x,\,{\rm d} t)\Bigg{|} \le \sum_k\int_{k\tau}^{(k+1)\tau}\rm{Lip}(f)\tau\int_\Omega |\tilde E^\tau_t|(\,{\rm d} x,\,{\rm d} t)\\
&+\sum_k\int_{k\tau}^{(k+1)\tau}\rm{Lip}(f)\tau\int_\Omega (1+|\hat v^\tau_t|)|\hat E_t^\tau|(\,{\rm d} x,\,{\rm d} t)\\ &\le \tau C\rm{Lip}(f) \left(|\tilde E^\tau|([0,T]\times\Omega)+ |\hat E^\tau|([0,T]\times\Omega)+{\mathcal B}_2(\hat E^\tau,\hat\rho^\tau)\right)\\ &\le \tau C\rm{Lip}(f), \end{align*} for a uniform constant $C>0$. Letting $\tau\to 0$ we prove the claim. {\it Step 3.} The bounds on ${\mathcal B}_2(E^\tau,\rho^\tau), {\mathcal B}_2(\tilde E^\tau,\tilde\rho^\tau)$ and ${\mathcal B}_2(\hat E^\tau,\hat\rho^\tau)$ pass to the limit by semicontinuity and allow us to conclude that $E, \tilde E$ and $\hat E$ are vector valued Radon measures absolutely continuous w.r.t. $\rho.$ Hence there exist $v_t,\tilde v_t,\hat v_t$ such that $E=\rho v$, $\tilde E = \rho \tilde v$ and $\hat E=\rho\hat v.$ {\it Step 4.} We now look at the equations satisfied by $E, \tilde E$ and $\hat E$. First we use $\partial_t \rho^\tau+\nabla\cdot E^\tau=0$, we pass to the limit as $\tau\to 0$, and we get $$\partial_t \rho+\nabla\cdot E=0.$$ Then, we use $\tilde E^\tau=-\nabla\tilde\rho^\tau+u_t\tilde\rho^\tau$, we pass to the limit again as $\tau\to 0$, and we get $$\tilde E=-\nabla\rho+u_t\rho.$$ To justify the above limit, the only delicate point is passing to the limit in the term $u_t\tilde\rho^\tau$, since $u$ is only $L^\infty$, and $\tilde\rho^\tau$ converges weakly as measures, and we are a priori only allowed to multiply it by continuous functions. Yet, we remark that by Corollary \ref{entropy_small} we have that ${\mathcal E}(\tilde\rho_t^\tau)\le C\tau$ for all $t\in[0,T]$. In particular, this provides, for each $t$, uniform integrability for $\tilde\rho_t^\tau$ and turns the weak convergence as measures into weak convergence in $L^1$. This allows us to multiply by $u_t$ in the weak limit. Finally, we look at $\hat E^\tau$.
There exists a piecewise constant (in time) function $p^\tau$ (defined as $p_{k+1}^\tau $ on every interval $]k\tau,(k+1)\tau]$) such that $p^\tau\geq 0$, $p^\tau(1-\hat\rho^\tau)=0$, \begin{equation}\label{L2H1bound p} \int_0^T\int_\Omega |\nabla p^\tau|^2 (\,{\rm d} x,\,{\rm d} t)=\int_0^T\int_\Omega |\nabla p^\tau|^2 \hat\rho^\tau (\,{\rm d} x,\,{\rm d} t)=\int_0^T\int_\Omega |\hat v^\tau|^2 \hat\rho^\tau (\,{\rm d} x,\,{\rm d} t)\leq C \end{equation} and $\hat E^\tau=\nabla p^\tau \hat\rho^\tau=\nabla p^\tau $. The bound \eqref{L2H1bound p} implies that $p^\tau$ is uniformly bounded in $L^2(0,T;H^1(\Omega))$. Since for every $t$ we have $|\{p^\tau_t=0\}|\geq |\{\hat\rho^\tau_t<1\}|\geq |\Omega|-1$, we can use a suitable version of Poincar\'e's inequality, and get a uniform bound in $L^2([0,T];L^2(\Omega))=L^2([0,T]\times \Omega)$. Hence there exists $p\in L^2([0,T];H^1(\Omega))$ such that, up to a subsequence, $p^\tau\rightharpoonup p$ weakly in $L^2([0,T];H^1(\Omega))$ as $\tau\to 0.$ In particular we have $\hat E=\nabla p$. Moreover it is clear that $p\ge 0$ and by Lemma \ref{pressure} we obtain $p(1-\rho)=0$ a.e. as well. Indeed, the assumptions of the Lemma are easily checked: we only need to estimate $W_2(\hat\rho^\tau(a),\hat\rho^\tau(b))$ for $b>a$, but we have $$W_2(\hat\rho^\tau(a),\hat\rho^\tau(b))=W_2(\rho^\tau(k_a\tau),\rho^\tau(k_b \tau))\leq C\sqrt{(k_b-k_a)\tau}\le C\sqrt{b-a+\tau},\quad\mbox{ for suitable $k_a,k_b$ with $a\le k_a\tau$ and $k_b\tau\leq b+\tau$}.$$ Once we have $\hat E=\nabla p$ with $p(1-\rho)=0$, $p\in L^2([0,T];H^1(\Omega))$ and $\rho\in L^\infty$, we can also write $$\hat E=\nabla p=\rho\nabla p.$$ If we sum up our results, using $E=\tilde E-\hat E$, we have $$\partial_t \rho -\Delta\rho+\nabla\cdot (\rho(u-\nabla p))=0\;\;\;\mbox{together with }p\geq 0,\,\rho\leq 1,\,p(1-\rho)=0\;\;{\rm{a.e.\ in\ }}[0,T]\times\Omega.$$ As usual, this equation is satisfied in a weak sense, with no-flux boundary conditions.
\end{proof} \section{Uniform $\rm{Lip}([0,T];\mathcal W_1)$ and $BV$ estimates}\label{sec:bv} In this section we provide uniform estimates for the curves $\rho^\tau,\tilde\rho^\tau$ and $\hat\rho^\tau$ of the following form: we prove uniform $BV$ (in space) bounds on $\tilde\rho^\tau$ (which implies the same bound for $\hat\rho^\tau$) and uniform Lipschitz bounds in time for the $W_1$ distance on $\rho^\tau$. This means a small improvement compared to the previous section in what concerns time regularity, as we have Lipschitz instead of $AC^2$, even if we need to replace $W_2$ with $W_1$. It is also important in what concerns space regularity. Indeed, from Lemma \ref{entropy-FP} one could deduce that the solution $\rho$ of the FP equation \eqref{fokker2} satisfies $\sqrt{\rho}\in L^2([0,T];H^1(\O))$ and, using $\rho\leq 1$, also $\rho\in L^2([0,T];H^1(\O))$. Yet, this is just an integrable estimate in $t$, while the $BV$ estimate of this section is uniform in the time variable. Nevertheless there is a price to pay for this improvement: we have to assume higher regularity for the velocity field. These uniform-in-time $W_1$-Lipschitz bounds are based both on $BV$ estimates for the Fokker-Planck equation (see Lemma \ref{bv_estimate1} from Appendix A) and for the projection operator $P_{\mathcal K}$ (see \cite{gafb}). The assumption on $u$ is essentially the following: we need to control the growth of the total variation of the solutions of the Fokker-Planck equation \eqref{FP-basic}, and we need to iterate this bound along time steps. We will discuss in the Appendix the different $BV$ estimates on the Fokker-Planck equation that we were able to find. The desired estimate is true whenever $\|u_t\|_{C^{1,1}(\Omega)}$ is uniformly bounded and $u_t\cdot n=0$ on $\partial\Omega$. It seems to be an open problem to obtain a similar estimate under the only assumption that $u$ is Lipschitz continuous. Of course, we will also assume $\rho_0\in BV(\Omega)$.
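Since the estimates of this section are phrased in $W_1$, let us recall that in dimension one $W_1(\mu,\nu)=\int|F_\mu-F_\nu|\,{\rm d} x$, where $F_\mu,F_\nu$ are the cumulative distribution functions; this makes the $W_1$ bounds below easy to test numerically. A small sketch (ours; the uniform densities are an illustrative choice):

```python
import numpy as np

# In 1D, W_1(mu, nu) = int |F_mu - F_nu| dx.  We verify it on a translation:
# moving the uniform density on [0, 1] by 0.2 has W_1 cost exactly 0.2.
n = 10**6
x = (np.arange(n) + 0.5) * 1.2 / n            # midpoint grid on [0, 1.2]
F = np.clip(x, 0.0, 1.0)                      # CDF of the uniform on [0, 1]
G = np.clip(x - 0.2, 0.0, 1.0)                # CDF of the uniform on [0.2, 1.2]
w1 = np.sum(np.abs(F - G)) * (1.2 / n)        # midpoint-rule integral
assert abs(w1 - 0.2) < 1e-6
```

The same CDF formula can be used to check the bound $W_1(\tilde\rho_{k+1}^\tau,\rho_{k+1}^\tau)\le C\tau$ proved below on explicit examples.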
Despite these extra regularity assumptions, we think these estimates have their own interest, exploiting some finer properties of the solutions of the Fokker-Planck equation and of the Wasserstein projection operator. Before entering into the details of the estimates, we want to discuss why we concentrate on $BV$ estimates (instead of Sobolev ones) and on $W_1$ (instead of $W_p$, $p>1$). The main reason is the role of the projection operator: indeed, even if $\rho\in W^{1,p}(\Omega)$, we do not have in general $P_{\mathcal K}[\rho]\in W^{1,p}$ because the projection creates some jumps at the boundary of $\{P_{\mathcal K}[\rho]=1\}$. This prevents us from obtaining any $W^{1,p}$ estimate for $p>1$. On the other hand, \cite{gafb} exactly proves a $BV$ estimate on $P_{\mathcal K}[\rho]$ and paves the way to $BV$ bounds for our equation. Concerning the regularity in time, we observe that the velocity field in the Fokker-Planck equation contains a term in $\nabla\rho/\rho$. Since the metric derivative in $\mathcal W_p$ is given by the $L^p$ norm (w.r.t. $\rho_t$) of the velocity field, it is clear that estimates in $\mathcal W_p$ for $p>1$ would require spatial $W^{1,p}$ estimates on the solution itself, which are impossible for $p>1$ in this splitting scheme. We underline that this does not mean that uniform $W^{1,p}$ bounds are impossible for the solution of \eqref{fokker2}; it only means that they are not uniform along the approximation that we used in our \emph{Main Scheme} to build such a solution. The precise result that we prove is the following. \begin{theorem}\label{lip-estimate} Let us suppose that $\|u_t\|_{C^{1,1}}\leq C$ and $\rho_0\in BV(\Omega)$. Then using the notations from the \emph{Main scheme} and Theorem \ref{convergence} one has $\|\tilde\rho^\tau_t\|_{BV}\leq C$ and $W_1(\rho^\tau_k,\rho^\tau_{k+1})\leq C\tau$. As a consequence we also have $\rho\in\rm{Lip}([0,T];\mathcal W_1)\cap L^\infty([0,T];BV(\Omega))$.
\end{theorem} To prove this theorem we need the following lemmas. \begin{lemma}\label{tool3} Suppose $\|u_t\|_{\rm{Lip}}\leq C$ and $u_t\cdot n=0$ on $\partial\Omega$. Then for the solution $\varrho$ of \eqref{FP-app} with velocity field $v=u$ we have the estimate $$\|\varrho_t\|_{L^\infty}\le\|\varrho_0\|_{L^\infty} e^{Ct},$$ where $C=\sup_t\|\nabla\cdot u_t\|_{L^\infty}$. \end{lemma} \begin{proof} Standard comparison theorems for parabolic equations allow us to prove the result once we notice that $f(t,x):=\|\varrho_0\|_{L^\infty} e^{Ct}$ is a supersolution of the Fokker-Planck equation, i.e. $$\partial_tf_t\geq \Delta f_t -\nabla\cdot(f_t u_t).$$ Indeed, in the above equation the Laplacian term vanishes as $f$ is constant in $x$, $\partial_t f_t=Cf_t$ and $\nabla\cdot(f_t u_t)=f_t\nabla\cdot u_t+\nabla f_t\cdot u_t=f_t\nabla\cdot u_t\leq Cf_t$, again because $f$ is constant in $x$. From this inequality, and from $\varrho_0\leq f_0$, we deduce $\varrho_t\leq f_t$ for all $t$. \end{proof} We remark that the above lemma implies in particular that after every step in the {\it Main scheme} we have $\tilde\rho_{k+1}^\tau\le e^{\tau c}\leq 1+C\tau,$ where $c:=\|\nabla\cdot u\|_{L^\infty}.$ Let us now state the following corollary. \begin{corollary}\label{tool4} Along the iterations of our ${\rm{Main\ scheme}}$, for every $k$ we have $W_1(\tilde\rho_{k+1}^\tau,\rho_{k+1}^\tau)\le \tau C$ for a constant $C>0$ independent of $\tau$.
\end{corollary} \begin{proof} By the saturation property of the projection (see Section \ref{subsec:proj} or \cite{gafb}), we know that there exists a measurable set $B\subseteq\Omega$ such that $\rho_{k+1}^\tau=\tilde\rho_{k+1}^\tau\mathbbm{1}_B+\mathbbm{1}_{\Omega\setminus B}.$ On the other hand we know that \begin{eqnarray*} W_1(\tilde\rho_{k+1}^\tau,\rho_{k+1}^\tau)&=&\sup_{f\in\rm{Lip}_{1}(\Omega),\,0\leq f\leq \rm{diam}(\Omega)}\int_\Omega f(\tilde\rho_{k+1}^\tau-\rho_{k+1}^\tau)\,{\rm d} x\\ &=&\sup_{f\in\rm{Lip}_{1}(\Omega),\,0\leq f\leq \rm{diam}(\Omega)}\int_{\Omega\setminus B}\! f(\tilde\rho_{k+1}^\tau-1)\,{\rm d} x\le\tau C\,|\Omega|\rm{diam}(\Omega). \end{eqnarray*} We used the fact that the competitors $f$ in the dual formula can be taken positive and bounded by the diameter of $\Omega$, just by adding a suitable constant. This also shows that the constant $C$ depends only on $c,|\Omega|$ and ${\rm{diam}(\Omega)}.$ \end{proof} \begin{proof}[Proof of Theorem \ref{lip-estimate}] First we take care of the $BV$ estimate. Lemma \ref{bv_estimate1} in the Appendix guarantees, for $t\in ]k\tau,(k+1)\tau[,$ that we have $TV(\tilde\rho^\tau_t)\leq C\tau+e^{C\tau} TV(\rho^\tau_k)$. Together with the $BV$ bound on the projection that we presented in Section \ref{subsec:proj} (taken from \cite{gafb}), this can be iterated, providing a uniform bound (depending on $TV(\rho_0)$, $T$ and $\sup_t \|u_t\|_{C^{1,1}}$) on $\|\tilde\rho^\tau_t\|_{BV}$. Passing this estimate to the limit as $\tau\to 0$ we get $\rho\in L^\infty([0,T];BV(\Omega))$. Then we estimate the behavior of the interpolation curve $\hat\rho^\tau$ in terms of $W_1$.
We estimate \begin{align*} W_1(\rho_k^\tau,\tilde\rho_{k+1}^\tau)\le \int_{k\tau}^{(k+1)\tau}|(\tilde\rho_t^\tau)'|_{W_1}\,{\rm d} t &\le\int_{k\tau}^{(k+1)\tau}\int_\Omega\left(\frac{|\nabla\tilde\rho^\tau_{t}|}{\tilde\rho^\tau_{t}}+|u_t|\right)\tilde\rho^\tau_{t}\,{\rm d} x\,{\rm d} t\\ &\le \int_{k\tau}^{(k+1)\tau}\|\tilde\rho^\tau_{t}\|_{BV}\,{\rm d} t+C\tau\le C\tau.\end{align*} Hence, we obtain $$ W_1(\rho_k^\tau,\rho_{k+1}^\tau)\le W_1(\rho_k^\tau,\tilde\rho_{k+1}^\tau)+W_1(\tilde\rho_{k+1}^\tau,\rho_{k+1}^\tau)\le \tau C. $$ This in particular means, for $b>a$, $$W_1(\hat\rho^\tau(a),\hat\rho^\tau(b))\leq C(b-a+\tau).$$ Passing this relation to the limit and using that, for every $t$, we have $\hat\rho^\tau_t\to \rho_t$ in $\mathcal W_2(\Omega)$ (and hence also in $\mathcal W_1(\Omega)$, since $W_1\leq W_2$), we get $$W_1(\rho(a),\rho(b))\leq C(b-a),$$ which means that $\rho$ is Lipschitz continuous in $\mathcal W_1(\Omega)$. \end{proof} \section{Variations on a theme: some reformulations of the {\it\textbf{Main scheme}}}\label{sec:5} In this section we propose some alternative approaches to study the problem \eqref{fokker2}. The general idea is to discretize in time, and give a way to produce a measure $\rho^\tau_{k+1}$ starting from $\rho^\tau_k$. Observe that the interpolations that we proposed in the previous sections, $\rho^\tau, \tilde\rho^\tau$ and $\hat\rho^\tau$, are only technical tools to state and prove a convergence result, and the most important point is exactly the definition of $\rho^\tau_{k+1}$. The alternative approaches proposed here explore different ideas, more difficult to implement than the one that we presented in Section \ref{sec:main}, and/or restricted to some particular cases (for instance when $u$ is a gradient). They have their own modeling interest and this is the main reason justifying their somewhat sketchy presentation.
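For comparison with the variants below, let us recall schematically the two steps defining one iteration of the {\it Main scheme} (with the same no-flux boundary conditions as in the previous sections): $\tilde\rho^\tau_{k+1}$ is the value at time $(k+1)\tau$ of the solution of the Fokker-Planck equation $$\partial_t\varrho_t=\Delta \varrho_t-\nabla\cdot(\varrho_t u_t)\quad\mbox{ on }]k\tau,(k+1)\tau[,\qquad \varrho_{k\tau}=\rho^\tau_k,$$ and then one projects, setting $\rho^\tau_{k+1}:=P_{\mathcal K}[\tilde\rho^\tau_{k+1}]$. The variants below modify either the first step (replacing the Fokker-Planck flow by an explicit transport and/or a convolution) or both steps at once (replacing them by a single minimization problem).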
\subsection{Variant 1: transport, diffusion then projection.} We recall that the original splitting approach for the equation without diffusion (\cite{MauRouSan1,aude_phd}) exhibited an important difference compared to what we did in Section \ref{sec:main}. Indeed, in the first phase of each time step (i.e. before the projection) the particles follow the vector field $u$, and $\tilde\rho^\tau_{k+1}$ was not defined as the solution of a continuity equation with advection velocity given by $u_t$, but as the image of $\rho^\tau_k$ via a straight-line transport: $\tilde\rho^\tau_{k+1}:=(\rm{id}+\tau u_{k\tau})_\#\rho^\tau_k$. One can wonder whether it is possible to follow a similar approach here. A possible way to proceed is the following: take a random variable $X$ distributed according to $\rho^\tau_k$, and define $\tilde\rho^\tau_{k+1}$ as the law of $X+\tau u_{k\tau}(X)+B_\tau$, where $B$ is a Brownian motion, independent of $X$. This exactly means that every particle moves starting from its initial position $X$, following a displacement ruled by $u$, but adding a stochastic effect in the form of the value at time $\tau$ of a Brownian motion. We can check that this means $$\tilde{\rho}_{k+1}^\tau:=\eta_\tau*\left((\rm{id}+\tau u_{k\tau})_\#\rho^\tau_k\right),$$ where $\eta_\tau$ is a Gaussian kernel with zero mean and variance $2\tau$, i.e. $\displaystyle\eta_\tau(x):=\frac{1}{(4\tau\pi)^{d/2}}e^{-\frac{|x|^2}{4\tau}}.$ Then we define $$\rho_{k+1}^\tau:=P_{\mathcal K}\left[\tilde{\rho}^\tau_{k+1}\right].$$ Despite the fact that this scheme is very natural and essentially not that different from the {\it Main scheme}, we have to be careful with the analysis. First we have to quantify somehow the distance $W_p(\rho_k^\tau,\tilde\rho_{k+1}^\tau)$ for some $p\ge1$ and show that this is of order $\tau$ in some sense.
Second, we need to be careful when performing the convolution with the heat kernel (or adding the Brownian motion, which is the same): this requires working either in the whole space (which was not our framework) or in a periodic setting ($\Omega=\mathbb{T}^d$, the flat torus, which is quite restrictive). Otherwise, the ``explicit'' convolution step should be replaced with some other construction, such as following the Heat equation (with Neumann boundary conditions) for a time $\tau$. But this brings us back to a situation very similar to the {\it Main scheme}, with the additional difficulty that we do not really have estimates on $(\rm{id}+\tau u_{k\tau})_\#\rho^\tau_k$. \subsection{Variant 2: gradient flow techniques for gradient velocity fields} In this section we assume that the velocity field of the population is given by the opposite of the gradient of a function, $u_t=-\nabla V_t$; a typical example is given by taking for $V$ the distance function to the exit (see the discussions in \cite{MauRouSan2} about this type of question). We start from the case where $V$ does not depend on time, and we suppose $V\in W^{1,1}(\Omega)$. In this particular case -- beside the splitting approach -- the problem has a variational structure, hence it is possible to show the existence by means of gradient flows in Wasserstein spaces. Since the celebrated paper of Jordan, Kinderlehrer and Otto (\cite{jko}) we know that the solutions of the Fokker-Planck equation (with a gradient vector field) can be obtained with the help of the gradient flow of a perturbed entropy functional with respect to the Wasserstein distance $W_2.$ This formulation of the JKO scheme was also used in \cite{MauRouSan2} for the first order model with density constraints. It is easy to combine the JKO scheme with density constraints to study the second order/diffusive model.
As a slight modification of the model from \cite{MauRouSan2}, we can consider the following discrete implicit Euler (or JKO) scheme. As usual, we fix a time step $\tau>0,$ set $\rho_0^\tau=\rho_0$ and for all $k\in\{0,1,\dots,\lfloor T/\tau\rfloor\}$ we just need to define $\rho_{k+1}^\tau$. We take \begin{equation}\displaystyle \rho_{k+1}^\tau=\rm{argmin}_{\rho\in\P(\O)}\left\{\int_\O V(x)\rho(x)\,{\rm d} x+{\mathcal E}(\rho)+I_{\mathcal K}(\rho)+\frac{1}{2\tau}W_2^2(\rho,\rho_k^\tau)\right\}, \end{equation} where $I_{\mathcal K}$ is the indicator function of ${\mathcal K},$ defined as $$I_{\mathcal K}(\rho):=\left\{\begin{array}{ll} 0, & \rm{if}\ \rho\in {\mathcal K},\\ +\infty, & \rm{otherwise}. \end{array} \right.$$ The usual techniques from \cite{jko,MauRouSan2} can be used to show that System \eqref{fokker2} is the gradient flow of the functional $\displaystyle\rho\mapsto J(\rho):=\int_\O V(x)\rho(x)\,{\rm d} x+{\mathcal E}(\rho)+I_{\mathcal K}(\rho)$ and that the above discrete scheme converges (up to a subsequence) to a solution of \eqref{fokker2}, thus proving existence. The key estimate for compactness is $$\frac{1}{2\tau}W_2^2(\rho^\tau_{k+1},\rho_k^\tau)\leq J(\rho^\tau_k)-J(\rho^\tau_{k+1}),$$ which can be summed up (as on the r.h.s. we have a telescopic series), thus obtaining the same bounds on ${\mathcal B}_2$ that we used in Section \ref{sec:main}. Note that whenever $D^2V\geq \lambda I$, the functional $\rho\mapsto \int_\O V(x)\rho(x)\,{\rm d} x+{\mathcal E}(\rho)+I_{\mathcal K}(\rho)$ is $\lambda$-geodesically convex. This allows us to use the theory in \cite{ags} to prove not only existence, but also uniqueness for this equation, and even stability (contractivity or controlled exponential growth of the distance between two solutions) in $\mathcal W_2$. Yet, we underline that the techniques of \cite{DiMMes} also give the same result. Indeed, \cite{DiMMes} contains two parts.
In the first part, the equation with density constraints for a given velocity field $u$ is studied, under the assumption that $-u$ has some monotonicity properties: $(-u_t(x)+u_t(y))\cdot(x-y)\geq \lambda|x-y|^2$ (which is the case for the gradients of $\lambda$-convex functions). In this case standard Gr\"onwall estimates on the $W_2$ distance between two solutions are proved, and it is not difficult to add diffusion to that result (as the heat semigroup is already a contraction in $\mathcal W_2$). In the second part, via different techniques (mainly using the adjoint equation, and proving somehow $L^1$ contractivity), the uniqueness result is provided for arbitrary $L^\infty$ vector fields $u$, but with the crucial help of the diffusion term in the equation. It is also possible to study a variant where $V$ depends on time. We assume for simplicity that $V\in\rm{Lip}([0,T]\times \Omega)$ (this is a simplification; less regularity in space, such as $W^{1,1}$, could be sufficient). In this case we define $$J_t(\rho):=\int_\O V_t(x)\rho(x)\,{\rm d} x+{\mathcal E}(\rho)+I_{\mathcal K}(\rho)$$ and \begin{equation}\displaystyle \rho_{k+1}^\tau=\rm{argmin}_{\rho\in\P(\O)}\left\{J_{k\tau}(\rho)+\frac{1}{2\tau}W_2^2(\rho,\rho_k^\tau)\right\}. \end{equation} The analysis proceeds similarly, with the only exception that now we get $$\frac{1}{2\tau}W_2^2(\rho^\tau_{k+1},\rho_k^\tau)\leq J_{k\tau}(\rho^\tau_k)-J_{k\tau}(\rho^\tau_{k+1}),$$ which is no longer a telescopic series. Yet, we have $J_{k\tau}(\rho^\tau_{k+1})\geq J_{(k+1)\tau}(\rho^\tau_{k+1})-\rm{Lip}(V)\tau$, and we can go on with a telescopic sum plus a remainder of order $\tau$ at each step.
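More precisely (and at least formally), since $|J_{k\tau}(\rho)-J_{(k+1)\tau}(\rho)|\le \rm{Lip}(V)\,\tau$ for every probability density $\rho\in{\mathcal K}$, summing the previous estimates over $k=0,\dots,n-1$ with $n\tau\le T$ yields $$\sum_{k=0}^{n-1}\frac{1}{2\tau}W_2^2(\rho^\tau_{k+1},\rho_k^\tau)\leq J_{0}(\rho_0)-J_{n\tau}(\rho^\tau_{n})+\rm{Lip}(V)\,T\leq C,$$ where we use that $J_t$ is bounded from below on ${\mathcal K}$ (as $\Omega$ is bounded). This is the same kind of bound on ${\mathcal B}_2$ as in the time-independent case.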
In the case where $u_t$ is the opposite of the gradient of a $\lambda$-convex function $V_t$, one could consider approximation by functions which are piecewise constant in time and use the standard theory of gradient flows. Let us remark here that the recent paper \cite{AleKimYao} gives another approach to deal with first order crowd motion models as limit of nonlinear-diffusion equations with gradient drift. This approach could plausibly be extended to the case where we add a simple diffusion term in the models studied in \cite{AleKimYao}. \subsection{ Variant 3: transport then gradient flow-like step with the penalized entropy functional.} We present now a different scheme, which combines some of the previous approaches. It could formally provide a solution of the same equation, but presents some extra difficulties. We define now $\tilde{\rho}_{k+1}^\tau:=(\rm{id}+\tau u_{k\tau})_\#\rho_k^\tau$ and with the help of this we define $$\rho_{k+1}^\tau:=\rm{argmin}_{\rho\in{\mathcal K}}{\mathcal E}(\rho)+\frac{1}{2\tau}W_2^2(\rho,\tilde{\rho}_{k+1}^\tau).$$ In the last optimization problem we minimize a strictly convex and l.s.c. functional, and hence we have existence and uniqueness of the solution. The formal reason for this scheme being adapted to the equation is that we perform a step of a JKO scheme in the spirit of \cite{jko} (without the density constraint) or of \cite{MauRouSan2} (without the entropy term). This should let a term $-\Delta\rho-\nabla\cdot(\rho\nabla p)$ appear in the evolution equation. The term $\nabla\cdot(\rho u)$ is due to the first step (the definition of $\tilde\rho^\tau_{k+1}$). To explain this in some more detail for the inexperienced reader, we consider the optimality conditions for the above minimization problem.
Following \cite{MauRouSan2}, we can say that $\rho\in {\mathcal K}$ is optimal if and only if there exists a constant $\ell\in\mathbb{R}$ and a Kantorovich potential $\varphi$ for the transport from $\rho$ to $\tilde\rho_{k+1}^\tau$ such that \noindent\begin{minipage}{8cm} $$\rho=\begin{cases}1 & \mbox{ on }\left(\ln\rho+\frac\varphi\tau\right) <\ell,\\ 0 & \mbox{ on }\left(\ln\rho+\frac\varphi\tau\right) >\ell,\\ \in[0,1] & \mbox{ on }\left(\ln\rho+ \frac\varphi\tau\right) = \ell. \end{cases}$$ We then define $p=(\ell-\ln\rho-\frac\varphi\tau)_+$ and we get $p\in\rm{press}(\rho)$. Moreover, $\rho$-a.e. we have $\nabla p = -\frac{\nabla\rho}{\rho}-\frac{\nabla\varphi}{\tau}.$ We then use the fact that the optimal transport is of the form $T=\rm{id}-\nabla\varphi$ and obtain a situation as is sketched in Figure \ref{cxc2}. \end{minipage} \begin{minipage}{8cm} \begin{center} \begin{tikzpicture}[scale=0.6] \draw[blue] (0,0) node {$\bullet$} node [left]{$\rho^\tau_k$}; \draw[blue,->,>=stealth] (0.5,0.3) -- (4.5,2.7) node[midway,above,sloped] {$\rm{id}+\tau u_{k\tau}$}; \draw[blue] (5,3) node {$\bullet$} node [above]{$\tilde\rho^\tau_{k+1}$}; \draw[blue,->,>=stealth] (9.5,0.3) -- (5.5,2.7) node[midway,above,sloped] {$\rm{id}+\tau (\nabla p+\frac{\nabla\rho}{\rho})$}; \draw[blue] (10,0) node {$\bullet$} node [right]{$\rho^\tau_{k+1}$}; \draw[blue,dashed,->,>=stealth] (9.375,0) -- (0.625,0) node[midway,below] {$\rm{id}-\!\tau (u_{(k+1)\tau}\!-\!\nabla p\!-\!\frac{\nabla\rho}{\rho})\!+\!o(\tau)$}; \end{tikzpicture} \captionof{figure}{One time step}\label{cxc2} \end{center} \end{minipage} Notice that $(\rm{id}+\tau u_{k\tau})^{-1}\circ(\rm{id}+\tau (\nabla p+\nabla\rho/\rho))=\rm{id}-\tau (u_{(k+1)\tau}-\nabla p-\nabla\rho/\rho)+o(\tau)$ provided $u$ is regular enough.
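To justify this last expansion, note that, at least formally, $(\rm{id}+\tau u_{k\tau})^{-1}=\rm{id}-\tau u_{k\tau}+o(\tau)$, so that $$(\rm{id}+\tau u_{k\tau})^{-1}\circ\left(\rm{id}+\tau \Big(\nabla p+\frac{\nabla\rho}{\rho}\Big)\right)=\rm{id}+\tau\Big(\nabla p+\frac{\nabla\rho}{\rho}\Big)-\tau u_{k\tau}+o(\tau),$$ and replacing $u_{k\tau}$ with $u_{(k+1)\tau}$ only produces an extra error of order $o(\tau)$ as soon as $u$ is, say, continuous in time.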
Formally we can pass to the limit $\tau\to 0$ and have $$\partial_t\rho-\Delta\rho+\nabla\cdot(\rho(u-\nabla p))=0.$$ Yet, this turns out to be quite na\"ive, because we cannot get proper estimates on $W_2(\rho_k^\tau,\rho_{k+1}^\tau)$. Indeed, this is mainly due to the hybrid nature of the scheme, i.e. a gradient flow for the diffusion and the projection part on one hand and a free transport on the other hand. The typical estimate in the JKO scheme comes from the fact that one can bound $W_2(\rho_k^\tau,\rho_{k+1}^\tau)^2/\tau$ with the opposite of the increment of the energy, and that this gives rise to a telescopic sum. Yet, this is not the case whenever the base point for a new time step is not equal to the previous minimizer. Moreover, the main difficulty here is the fact that the energy we consider implicitly takes the value $+\infty$, due to the constraint $\rho\in{\mathcal K}$, and hence no estimate is possible whenever $ \tilde\rho^\tau_{k+1}\notin {\mathcal K}$. As a possible way to overcome this difficulty, one could approximate the discontinuous functional $I_{\mathcal K}$ with some finite energies of the same nature (for instance power-like entropies, even if the best choice would be an energy which is Lipschitz for the distance $W_2$). These kinds of difficulties are a matter of current study, in particular for mixed systems and/or multiple populations.
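A plausible candidate for such an approximation, in the spirit of the nonlinear-diffusion approximations of \cite{AleKimYao}, would be the power-like entropy $${\mathcal E}_m(\rho):=\frac{1}{m-1}\int_\Omega \rho^m\,{\rm d} x,$$ which, as $m\to\infty$, approximates $I_{\mathcal K}$ on probability densities over the bounded domain $\Omega$: it tends to $0$ on densities with $\rho\leq 1$ and to $+\infty$ as soon as $|\{\rho>1\}|>0$. Note however that these functionals are still not Lipschitz for the distance $W_2$, so they would not remove all the difficulties mentioned above.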
\section{Introduction\label{-one}} It has been known since the work of Taubes \cite{Taubes} that an $SU(2)$-monopole of charge $m$ with well-separated zeros of the Higgs field approximates a collection of $m$ monopoles of charge $1$. This fact is reflected in the asymptotic behaviour of the natural hyperk\"ahler metric on the moduli space $M_m$ of charge $m$ monopoles. Namely, in the asymptotic region of $M_m$, the monopole metric is exponentially close to another hyperk\"ahler metric, whose geodesics determine scattering of $m$ particles with electric, magnetic and scalar charges. This metric was found by Gibbons and Manton in \cite{GM} and a proof that it differs from the exact monopole metric by an exponentially small amount was given in \cite{BielCMP}. For $m=2$, the Gibbons-Manton metric is just the product of a flat metric and the Taub-NUT metric with a negative mass parameter. \par Monopoles exist for any compact Lie group and Taubes' estimates work equally well for $SU(N)$-monopoles with maximal symmetry breaking, that is monopoles whose Higgs field has distinct eigenvalues at infinity. This time a moduli space $M_{m_1,\dots,m_{N-1}}(\mu_1,\dots,\mu_N)$, where $m_i$ are positive integers and $\mu_1<\dots<\mu_N$, is obtained by identifying gauge-equivalent framed monopoles whose Higgs field at infinity defines a map from the $2$-sphere to the adjoint orbit $O$ of $\operatorname{diag}(i\mu_1,\dots,i\mu_N)$ and whose degree is $(m_1,\dots,m_{N-1})\in H_2(O,{\Bbb Z})$. We should think of particles making up the monopole as coming in $N-1$ distinguishable types, with $m_i$ being the number of particles of type $i$. \par In this paper we shall compute the asymptotic metric on $M_{m_1,\dots,m_{N-1}}(\mu_1,\dots,\mu_N)$ \nolinebreak \footnote{Strictly speaking, we only show that our asymptotic metric is close to the metric on the moduli space of solutions to Nahm's equations. 
Thus we can conclude that the geodesics of the monopole metric and of the asymptotic metric are close to each other (see Remark \ref{isometry}). The metrics themselves are close if the Nahm transform for $SU(N)$-monopoles, $N>2$, is an isometry.}. \linebreak This metric turns out to be a hybrid between the Gibbons-Manton metric and the metric on the moduli space of monopoles of charge $(1,1,\dots,1)$ which was computed in \cite{LWY,MM,Cha}. Particles of the same type interact as in the Gibbons-Manton metric, while particles of different types interact as in the $(1,1,\dots,1)$-metric (i.e. neighboring types interact as in the Taub-NUT metric with a positive mass parameter and non-neighboring types do not interact). The precise formula for the asymptotic metric is given in the next section. Somewhat surprisingly, the monopole metric on $M_{m_1,\dots,m_{N-1}}(\mu_1,\dots,\mu_N)$ is exponentially close to the asymptotic metric as soon as, for each $i$, particles of type $i$ are far apart. Particles of different types can be as close to each other as we wish (in fact sometimes they can have the same position). Moreover, as we observe in section \ref{three}, if particles of a single type, say $i$, are far apart, then the monopole metric is likely to be exponentially close to yet another hyperk\"ahler metric. This metric is simpler than the monopole metric, but still given by transcendental functions. It is only when the particles of each type are far apart that the asymptotic metric becomes algebraic and we are able to compute it explicitly. \par The proof uses the idea of our previous work \cite{BielCMP}, i.e. replacing solutions to Nahm's equations corresponding to monopoles with solutions defined on a half-line. This gives a new moduli space whose metric is then computed by twistor methods. The main novelty is the way the metrics are compared.
We prove (in Appendix B) a general theorem which allows us to deduce the estimates on all derivatives of components (in natural coordinates) of the difference of metric tensors from one-sided estimates on the metric tensors. Such a deduction is possible for two hyperk\"ahler metrics which are related via a complex-symplectic isomorphism, provided that one of the metrics admits holomorphic coordinates in which the complex-symplectic form is standard and in which the components of the metric tensor are uniformly bounded. \par The paper is organized as follows. In the next section we collect some facts about $T^m$-invariant hyperk\"ahler metrics in dimension $4m$ of which our asymptotic metric is an example. In section \ref{one} we recall the description, due to Nahm \cite{Nahm} and to Hurtubise and Murray \cite{HurtMur}, of $SU(N)$-monopoles in terms of Nahm's equations. We also define there the moduli space of solutions to Nahm's equations whose metric will be the asymptotic metric. This moduli space is a hyperk\"ahler quotient of the product of several simpler moduli spaces, and in the following four sections we compute the metrics on these. In section \ref{topology} we discuss the topology of these moduli spaces. Finally, in section \ref{three}, we put the results together to obtain an explicit formula for the asymptotic metric (Theorem \ref{finalasymptoticmetric}). We also prove there that the rate of approximation is exponential (Theorem \ref{estimates}) and discuss the topology of the asymptotic moduli space. Appendix A deals with the question of identifying certain hyperk\"ahler quotients with corresponding complex-symplectic quotients, which is needed in section \ref{topology}. In Appendix B we prove the above-mentioned comparison theorem for Ricci-flat K\"ahler manifolds.
\section{Hyperk\"ahler quotients and $T^m$-invariant hyperk\"ahler metrics \label{zero}} The Gibbons-Manton metric \cite{GM,GR} is an example of a $4m$-dimensional (pseudo)-hyperk\"ahler metric admitting a tri-Hamiltonian (hence isometric) action of the $m$-dimensional torus $T^m$. Such metrics have particularly nice properties and were studied by several authors \cite{LR,HKLR,PP}. On the set where the action of $T^m$ is free such a metric can be locally written in the form: \begin{equation} g=\Phi d\overline{\bf x}\cdot d\overline{\bf x}+\Phi^{-1}(d\overline{t}+A)^2,\label{torus-invariant}\end{equation} where $\overline{\bf x}$ is the hyperk\"ahler moment map, $d\overline{t}$ is dual to the $(m\times 1)$-matrix of Killing vector fields and the matrix $\Phi$ and the $1$-form $A$ depend only on the ${\bf x}_i$ and satisfy certain linear PDE's. In particular, $\Phi$ determines the metric up to a gauge equivalence. The set where the $T^m$-action is free can be viewed as a $T^m$-bundle over an open subset of ${\Bbb R}^{3m}$. For the Gibbons-Manton metric this open subset is the configuration space $\tilde{C}_m({\Bbb R}^3)$ of $m$ distinct points in ${\Bbb R}^3$ (i.e. ${\Bbb R}^{3m}$ without the generalized diagonal) and \begin{equation}\Phi_{ij}=\begin{cases}-\frac{1}{\nu}-\sum_{k\neq i}\frac{1}{\|{\bf x}_i-{\bf x}_k\|} & \text{if $i=j$}\\ \frac{1}{\|{\bf x}_i-{\bf x}_j\|} & \text{if $i\neq j$}.\end{cases}\label{GM}\end{equation} Here $\nu$ is the mass parameter. We can, in particular, take $\nu=\infty$ and $m=2$.
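For $m=2$ one can see directly the structure recalled in section \ref{-one}: the matrix \eqref{GM} has the constant eigenvector $(1,1)$ with eigenvalue $-\frac{1}{\nu}$, corresponding to a flat factor in the centre-of-mass coordinate, and the eigenvector $(1,-1)$ with eigenvalue $$-\frac{1}{\nu}-\frac{2}{\|{\bf x}_1-{\bf x}_2\|},$$ which, in the relative coordinate, is (up to normalization) the harmonic function defining the Taub-NUT metric with a negative mass parameter.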
Then the linearity of the equations for $\Phi$ and $A$ implies that, for any mapping $(i,j)\mapsto s_{ij}$ of $\{1,\ldots,m\}\times \{1,\ldots,m\}$ such that $s_{ij}=s_{ji}$ and $s_{ii}=0$ for $i,j=1,\ldots, m$, and for any constants $c_i$, $i=1,\dots,m$, the following matrix $\Phi$ defines a $T^m$-invariant (pseudo)-hyperk\"ahler metric: \begin{equation}\Phi_{ij}=\begin{cases}c_i+\sum_{k\neq i}\frac{s_{ik}}{\|{\bf x}_i-{\bf x}_k\|} & \text{if $i=j$}\\ -\frac{s_{ij}}{\|{\bf x}_i-{\bf x}_j\|} & \text{if $i\neq j$}.\end{cases}\label{GMtype}\end{equation} \par The asymptotic metric on $M_{m_1,\dots,m_{N-1}}(\mu_1,\dots,\mu_N)$ will turn out to be of this form. Namely, $m=m_1+\dots+m_{N-1}$ and, if we define the type $t(i)$ of an index $i\leq m$ by $t(i)=\min\{k;i\leq \sum_{s\leq k}m_s\}$, then \begin{equation} c_i=\mu_{k+1}-\mu_k\enskip \text{if $t(i)=k$}\label{c-asym}\end{equation} and \begin{equation}s_{ij}=\begin{cases}-2 & \text{ if $t(i)=t(j)$}\\ 1 & \text{ if $|t(i)-t(j)|=1$}\\ 0 & \text{otherwise}.\end{cases}\label{s-asym} \end{equation} \section{Moduli spaces of solutions to Nahm's equations \label{one}} We shall define in this section several moduli spaces of solutions to Nahm's equations. All of these spaces carry hyperk\"ahler metrics. In particular, we shall recall, after Nahm \cite{Nahm} and Hurtubise and Murray \cite{HurtMur}, the description of the moduli spaces of $SU(N)$ monopoles with maximal symmetry breaking in terms of solutions to Nahm's equations. We shall also describe the asymptotic moduli spaces. We remark that from the point of view of hyperk\"ahler geometry many interesting metrics are obtained by replacing below the unitary group with an arbitrary compact Lie group.
Such a generalization is straightforward, but, as our focus is on monopoles, we shall restrict ourselves to the unitary case.\bigskip Nahm's equations are the following ODE's: \begin{equation}\dot{T}_i+[T_0,T_i]+\frac{1}{2}\sum_{j,k=1,2,3}\epsilon_{ijk}[T_j,T_k]=0\;,\;\;\;\;i=1,2,3.\label{Nahm}\end{equation} The functions $T_0,T_1,T_2,T_3$ are defined on some interval and are skew-hermitian and analytic. If the $T_i$ take values in ${\frak u}(n)$, then the space of solutions is acted upon by the gauge group ${\cal G}$ of $U(n)$-valued functions $g(t)$: \begin{eqnarray} T_0&\mapsto & \operatorname{Ad}(g)T_0-\dot{g}g^{-1}\nonumber\\ T_i&\mapsto & \operatorname{Ad}(g)T_i\;,\;\;\qquad i=1,2,3.\label{action}\end{eqnarray} We define fundamental moduli spaces of ${\frak u}(n)$-valued solutions $F_n(m;c)$ and $\tilde{F}_n(m;c)$. Here $m$ is a nonnegative integer less than or equal to $n$ and $c$ is a positive real number ($c$ can be negative or zero for $\tilde{F}_n(m;c)$). The moduli spaces $F_n(m;c)$ correspond to monopoles with minimal symmetry breaking and are the basic building blocks from which all moduli spaces of framed $SU(N)$-monopoles with maximal symmetry breaking can be obtained by means of the hyperk\"ahler quotient construction. The spaces $\tilde{F}_n(m;c)$ play a similar role for the asymptotic metrics. They are defined as follows: \begin{itemize} \item Solutions in $F_n(m;c)$ are defined on $(0,c]$, while solutions in $\tilde{F}_n(m;c)$ are defined on $(0,\infty]$. \item For a solution $(T_0,T_1,T_2,T_3)$ in either $F_n(m;c)$ or $\tilde{F}_n(m;c)$, $T_0$ and the $m\times m$ upper-diagonal blocks of $T_1,T_2,T_3$ are analytic at $t=0$, while the $(n-m)\times(n-m)$ lower-diagonal blocks have simple poles with residues defining the standard $(n-m)$-dimensional irreducible representation of ${\frak su}(2)$. The off-diagonal blocks are of the form $t^{(n-m-1)/2}\times(\text{\it analytic in $t$})$.
\item A solution in $F_n(m;c)$ is analytic at $t=c$, while a solution in $\tilde{F}_n(m;c)$ approaches a diagonal limit at $+\infty$ exponentially fast. Furthermore $(T_1(+\infty),T_2(+\infty),T_3(+\infty))$ is a regular triple, i.e. its centralizer consists of diagonal matrices. \item The gauge group for $F_n(m;c)$ consists of gauge transformations $g$ with $g(0)=g(c)=1$, while the gauge group for $\tilde{F}_n(m;c)$ has the Lie algebra consisting of functions $\rho:[0,+\infty)\rightarrow {\frak u}(n)$ such that \begin{itemize} \item[(i)] $\rho(0)=0$ and $\dot{\rho}$ has a diagonal limit at $+\infty$; \item[(ii)] $(\dot{\rho}-\dot{\rho}(+\infty))$ and $[\tau,\rho]$ decay exponentially fast for any regular diagonal matrix $\tau\in{\frak u}(n)$; \item[(iii)] $c\dot{\rho}(+\infty)+\lim_{t\rightarrow +\infty} (\rho(t)-t\dot{\rho}(+\infty))=0$. \end{itemize} \end{itemize} \begin{remark} Alternatively, the space $\tilde{F}_n(m;c)$ can be viewed as the moduli space of solutions defined on $[-c,+\infty]$ with the gauge group given by the transformations which are exponentially close to $\exp(ht)$ for some diagonal $h$.\label{shift}\end{remark} The tangent space at a solution $(T_0,T_1,T_2,T_3)$ can be identified, for both $F_n(m;c)$ and $\tilde{F}_n(m;c)$, with the space of solutions to the following system of linear equations: \begin{equation}\begin{array}{c} \dot{t}_0+[T_0,t_0]+[T_1,t_1]+[T_2,t_2]+[T_3,t_3]=0,\\ \dot{t}_1+[T_0,t_1]-[T_1,t_0]+[T_2,t_3]-[T_3,t_2]=0,\\ \dot{t}_2+[T_0,t_2]-[T_1,t_3]-[T_2,t_0]+[T_3,t_1]=0,\\ \dot{t}_3+[T_0,t_3]+[T_1,t_2]-[T_2,t_1]-[T_3,t_0]=0.\end{array}\label{tangent}\end{equation} $F_n(m;c)$ carries a hyperk\"ahler metric defined by \begin{equation}\|(t_0,t_1,t_2,t_3)\|^2=\int_{0}^c\sum_0^3\|t_i\|^2 \label{metric},\end{equation} while $\tilde{F}_n(m;c)$ possesses an indefinite (and possibly degenerate) hyperk\"ahler metric given by:
\begin{equation}\|(t_0,t_1,t_2,t_3)\|^2=c\sum_0^3\|t_i(+\infty)\|^2+\int_0^{+\infty}\sum_0^3\left(\|t_i(s)\|^2-\|t_i(+\infty)\|^2\right)ds. \label{smetric}\end{equation} The moduli space $F_n(m;c)$ has a tri-Hamiltonian action of $U(n)\times U(m)$ given by gauge transformations $g$ with arbitrary values at $t=c$ and with $g(0)$ being block-diagonal with the off-diagonal blocks equal to $0$ and the $(n-m)\times(n-m)$ lower-diagonal block being identity. Both $U(n)$ and $U(m)$ act freely. The hyperk\"ahler moment map for the action of $U(n)$ is $(-T_1(c),-T_2(c),-T_3(c))$, while the one for the action of $U(m)$ is $\pi(T_1(0),T_2(0),T_3(0))$, where $\pi$ denotes the projection onto the $m\times m$ upper-diagonal block. \par The moduli space $\tilde{F}_n(m;c)$ has a similarly defined free tri-Hamiltonian action of $U(m)$. In addition, it has a free tri-Hamiltonian action of the diagonal torus $T^n\leq U(n)$ given by gauge transformations which are asymptotic to $\exp(-th+\lambda h)$ for a diagonal $h$ and real $\lambda$. The moment map for this action is $(T_1(+\infty),T_2(+\infty),T_3(+\infty))$.\bigskip We shall now consider hyperk\"ahler quotients of various products of ${F}_n(m;c)$ and $\tilde{F}_n(m;c)$. We observe that the hyperk\"ahler quotient construction of, say, ${F}_n(m;c)\times {F}_n(l;c^\prime)$ matches solutions $(T_0(t),T_1(t),T_2(t),T_3(t))$ in ${F}_n(m;c)$ with $(-T_0(c+c^\prime-t),-T_1(c+c^\prime-t),-T_2(c+c^\prime-t),-T_3(c+c^\prime-t))$ for a $(T_0(t),T_1(t),T_2(t),T_3(t))$ in ${F}_n(l;c^\prime)$. The resulting space can be identified with the moduli space of solutions to Nahm's equations on $[0,c+c^\prime]$ having appropriate poles at $t=0$ and at $t=c+c^\prime$. We recall that the triple slash denotes hyperk\"ahler quotient (in all our constructions the moment map is canonical and we quotient its $0$-set).
We have basic hyperk\"ahler isomorphisms: \begin{eqnarray} \bigl({F}_n(m;c)\times {F}_n(n;c^\prime)\bigr)/\!/\!/U(n) & \simeq & {F}_n(m;c+c^\prime)\nonumber\\ \bigl({F}_n(m;c)\times \tilde{F}_n(n;c^\prime)\bigr)/\!/\!/U(n) & \simeq & \tilde{F}_n(m;c+c^\prime).\label{quotient}\end{eqnarray} The group acts diagonally on the product. In particular all $\tilde{F}_n(m;c)$ can be obtained from the $\tilde{F}_n(n;c)$ and the ${F}_n(m;c)$. \par We now define auxiliary moduli spaces $F_{n,m}(c,c^\prime)$, $F_{\tilde{n},m}(c,c^\prime)$, $F_{n,\tilde{m}}(c,c^\prime)$, $F_{\tilde{n},\tilde{m}}(c,c^\prime)$. Here $n,m$ are arbitrary positive integers and $c,c^\prime$ are arbitrary positive real numbers. The spaces are defined as follows: \begin{itemize} \item if $n<m$, then $F_{n,m}(c,c^\prime)=\bigl(F_n(n;c)\times F_m(n;c^\prime)\bigr)/\!/\!/U(n)$. The spaces with a tilde over $n$ or $m$ are obtained by replacing the corresponding $F$ with $\tilde{F}$; \item if $n>m$, then $F_{n,m}(c,c^\prime)=\bigl(F_n(m;c)\times F_m(m;c^\prime)\bigr)/\!/\!/U(m)$ and similarly for the other spaces; \item if $n=m$, then $F_{n,n}(c,c^\prime)$ is the hyperk\"ahler quotient of $F_n(n;c)\times F_n(n;c^\prime)\times {\Bbb H}^n$ by the diagonal action of $U(n)$. \end{itemize} \begin{remark}Thus these moduli spaces consist of $\frak{u}(m)$-valued solutions $T_i^-$ on $[-x,0)$ and of $\frak{u}(n)$-valued solutions $T_i^+$ on $(0,y]$, where $x=c^\prime$ or $x=+\infty$ and $y=c$ or $y=+\infty$, with matching conditions at $t=0$: if $n>m$ (resp. $n<m$), then the limit of the $m\times m$ upper-diagonal block of $T_i^+$ (resp. $T_i^-$) at $t=0$ is equal to the limit of $T_i^-$ (resp. $T_i^+$); if $n=m$, then there exists a vector $(V,W)\in {\Bbb C}^{2n}$ such that $(T_2^++iT_3^+)(0_+)-(T_2^-+iT_3^-)(0_-)= VW^T$ and $T_1^+(0_+)-T_1^-(0_-)=(|V|^2-|W|^2)/2$.
The gauge transformations $g(t)$ satisfy similar matching conditions: if $n\neq m$, then the upper-diagonal $m\times m$ block is continuous, the lower-diagonal block is identity at $t=0$ and the off-diagonal blocks vanish to order $(n-m-1)/2$; if $n=m$, then $g(t)$ is continuous at $t=0$.\label{match}\end{remark} Notice that $F_{n,m}(c,c^\prime)$ is isomorphic, as a hyperk\"ahler manifold, to $F_{m,n}(c^\prime,c)$ and similarly for $F_{\tilde{n},\tilde{m}}(c,c^\prime)$. We can now define the moduli spaces we are really interested in. Let us fix an integer $N$ and consider an arbitrary function $\sigma:\{1,\ldots,N-1\}\rightarrow {\Bbb N}\sqcup {\Bbb N}$ and an increasing function $\mu:\{1,\ldots,N\}\rightarrow {\Bbb R}$. We shall denote the second copy of ${\Bbb N}$ by $\tilde{\Bbb N}$ and write its elements as $\tilde{1},\tilde{2},\tilde{3},\ldots$. We define the moduli space $F_\sigma(\mu)$ as a hyperk\"ahler quotient of $F_{\sigma_1}(c_1)\times F_{\sigma_2}(c_2,c_2^\prime)\times\ldots\times F_{\sigma_{N-1}}(c_{N-1},c_{N-1}^\prime)\times F_{\sigma_{N}}(c_{N}^\prime)$. Here $c_i+c_{i+1}^\prime=\mu(i+1)-\mu(i)$, $\sigma_1,\sigma_N\in{\Bbb N}\sqcup {\Bbb N}$ and $\sigma_i:\{1,2\}\rightarrow {\Bbb N}\sqcup {\Bbb N}$ for $2\leq i\leq N-1$. Furthermore $\sigma_1=\sigma(1)$, $\sigma_i(1)=\sigma(i)$, $\sigma_i(2)=\sigma(i-1)$ for $2\leq i\leq N-1$, and $\sigma_N=\sigma(N-1)$. Finally $F_n(c),F_{\tilde{n}}(c)$ denote $F_n(0;c)$ and $\tilde{F}_n(0;c)$. The group by which we quotient is a product of unitary groups and of tori acting on this product space: we take the diagonal action of $U(n)$ on $F_{\sigma_i}(c_i,c^\prime_i)\times F_{\sigma_{i+1}}(c_{i+1},c^\prime_{i+1})$ (or $F_{\sigma_1}(c_1)\times F_{\sigma_2}(c_2,c_2^\prime)$ for $i=1$ and similarly for $i=N-1$) if $\sigma(i)=n$ and the diagonal action of $T^n$ if $\sigma(i)=\tilde{n}$.
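For example, in the simplest case $N=3$ with $\sigma(1)=m_1$ and $\sigma(2)=m_2$ (both in the first copy of ${\Bbb N}$), the above definition reads \begin{equation*} F_\sigma(\mu)=\bigl(F_{m_1}(c_1)\times F_{\sigma_2}(c_2,c_2^\prime)\times F_{m_2}(c_3^\prime)\bigr)/\!/\!/\bigl(U(m_1)\times U(m_2)\bigr),\qquad \sigma_2(1)=m_2,\quad \sigma_2(2)=m_1,\end{equation*} with $c_1+c_2^\prime=\mu(2)-\mu(1)$ and $c_2+c_3^\prime=\mu(3)-\mu(2)$; here $U(m_1)$ acts diagonally on the first two factors and $U(m_2)$ on the last two.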
\begin{remark}The moduli space $F_\sigma(\mu)$ should be viewed as consisting of solutions to Nahm's equations on $N-1$ ``intervals" $I_i$, $i=1,\dots,N-1$, with matching conditions at the boundary points. If $\sigma(i)\in {\Bbb N}$, then $I_i=[\mu(i),\mu(i+1)]$, while if $\sigma(i)\in \tilde{\Bbb N}$, then $I_i=[\mu(i),+\infty)\cup(-\infty,\mu(i+1)]$ (see Remark \ref{shift}). The solutions satisfy matching conditions of Remark \ref{match} at each $\mu_i$ and are continuous at each infinity. The gauge transformations satisfy matching conditions of Remark \ref{match} at each $\mu_i$ and are exponentially close to $\exp(h_it+p_i)$ near $\pm \infty$ in $I_i$, $\sigma(i)\in \tilde{\Bbb N}$, for some diagonal matrices $h_i,p_i$.\label{tilde(M)}\end{remark} A theorem of Hurtubise and Murray \cite{HurtMur}, giving a full proof of the correspondence found by Nahm \cite{Nahm} and generalizing the $SU(2)$ case due to Hitchin \cite{Hit1}, can be phrased as follows (this formulation uses the connectivity of the moduli space of $SU(N)$-monopoles due to Jarvis \cite{Jarvis}): \begin{theorem} The moduli space $M_{m_1,\ldots,m_{N-1}}(\mu_1,\ldots,\mu_N)$ of framed $SU(N)$ \linebreak monopoles of charge $(m_1,\ldots,m_{N-1})$ and the symmetry breaking at infinity equal to $(\mu_1,\ldots,\mu_N)$, $\mu_i$ distinct, is diffeomorphic to the moduli space $F_\sigma(\mu)$ with $\sigma(i)=m_i$ and $\mu(i)=\mu_{i}$. \qed \label{mon-Nahm} \end{theorem} \begin{remark}It is expected, but at present not known (except for $N=2$ \cite{Nak}), that this diffeomorphism is an isometry. Nevertheless, the twistorial character of Hurtubise and Murray's construction shows that this diffeomorphism preserves the three complex structures and, hence, the Levi-Civita connection. Thus, the geodesics are the same.
\label{isometry}\end{remark} Our aim is to show that the metric on $F_\sigma(\mu)$ with $\sigma(i)=m_i$ and $\mu(i)=\mu_{i}$ is asymptotic to the metric on $F_{\tilde{\sigma}}(\mu)$ with $\tilde{\sigma}(i)=\tilde{m}_i$ and $\mu(i)=\mu_i$. We shall first compute the metric on $F_{\tilde{\sigma}}(\mu)$. \section{Complex structures on $F_n(m;c)$ and $\tilde{F}_n(n;c)$\label{two}} All moduli spaces described in the previous section have an isometric action of $SU(2)$ or $SO(3)$ rotating the complex structures and therefore all complex structures are equivalent. We shall consider the complex structure $I$ and describe the complex coordinates (and the complex symplectic form $\omega_2+i\omega_3$) on $F_n(m;c)$ and $\tilde{F}_n(n;c)$. All other moduli spaces can be described as open subsets of complex-symplectic quotients of products of these.\\ We set $\alpha=T_0+iT_1$ and $\beta=T_2+iT_3$. The Nahm equations can be then written as one complex and one real equation: \begin{eqnarray} & &\frac{d\beta}{dt} = [\beta,\alpha]\label{complex}\\ & &\frac{d\,}{dt}(\alpha+\alpha^\ast) =[\alpha^\ast,\alpha]+[\beta^\ast,\beta].\label{real}\end{eqnarray} First, we consider $F_n(m;c)$ (cf. \cite{Hurt, BielCMP}).\\ Let $E_1,\ldots,E_n$ denote the standard basis of ${\Bbb C}^n$. There is a unique solution $w_1$ of the equation \begin{equation} \frac{dw}{dt}=-\alpha w\label{alphato0}\end{equation} with \begin{equation} \lim_{t\rightarrow 0}\left(t^{-(n-m-1)/2}w_1(t)-E_{m+1}\right)=0\label{w1}.\end{equation} Setting $w_i(t)=\beta^{i-1}(t)w_1(t)$, we obtain a solution to \eqref{alphato0} with $$ \lim_{t\rightarrow 0}\left(t^{i-(n-m+1)/2}w_i(t)-E_{m+i}\right)=0.$$ In addition there are solutions $u_1,\ldots,u_m$ to \eqref{alphato0} whose last $n-m$ components vanish to order $(n-m+1)/2$, and which are linearly independent at $t=0$. 
The complex gauge transformation $g(t)$ with $g^{-1}=(u_1,\ldots,u_m,w_1,\ldots,w_{n-m})$ makes $\alpha$ identically zero and sends $\beta(t)$ to the constant matrix (cf. \cite{Hurt}) \begin{equation}B=\left(\begin{array}{ccc|cccc} & & & 0 &\ldots & 0 & g_1\\ & h & & \vdots & & \vdots & \vdots\\ & & & 0 &\ldots & 0 & g_m \\ \hline f_1 & \ldots & f_m & 0 &\ldots & 0 & e_1\\ 0 &\ldots & 0 & 1 & \ddots & & e_2\\ \vdots & & \vdots & & \ddots & \ddots & \vdots \\0 &\ldots & 0 & 0&\ldots & 1 & e_{n-m} \end{array}\right).\label{betaconst}\end{equation} The mapping $(\alpha,\beta)\rightarrow (g(c),B)$ gives a biholomorphism between $(F_n(m;c),I)$ and $Gl(n,{\Bbb C})\times{\frak gl}(m,{\Bbb C})\times {\Bbb C}^{n+m}$ \cite{BielJLMS}. The action of $Gl(n,{\Bbb C})$ is given by the right translations, and the action of $Gl(m,{\Bbb C})$ is given by $p\cdot\bigl(h,f,g,e,g(c)\bigr)=(php^{-1},fp^{-1},pg,e,pg(c))$, where for the last term we embedded $Gl(m,{\Bbb C})$ in $Gl(n,{\Bbb C})$ as the $m\times m$ upper-diagonal block. We can compute the complex symplectic form $\omega=\omega_2+i\omega_3$. We denote by $b,\hat{b}$ vectors tangent to the space of $B$'s in \eqref{betaconst} and by $\rho,\hat{\rho}$ right-invariant vector fields on $Gl(n,{\Bbb C})$. We have \cite{BielJLMS}: \begin{equation}\omega\left((\rho,b),(\hat{\rho},\hat{b})\right)=\operatorname{tr}\bigl(\rho\hat{b}-\hat{\rho}b-B[\rho,\hat{\rho}]\bigr).\label{form}\end{equation} Now we consider the complex structure of $\tilde{F}_n(n;c)$. Let ${\frak n}$ be a nilpotent algebra corresponding to the Cartan algebra of diagonal matrices. We consider the open dense subset $\tilde{F}({\frak n})$ of $\tilde{F}_n(n;c)$ defined as the set of all solutions $(\alpha,\beta)=(T_0+iT_1,T_2+iT_3)$ such that the intersection of the sum of the positive eigenspaces of $\text{ad}(iT_1(+\infty))$ with the centralizer $C(\beta(+\infty))$ is contained in ${\frak n}$.
We observe that, since $(T_1(+\infty),T_2(+\infty),T_3(+\infty))$ is a regular triple, the projection of $T_1(+\infty)$ onto $C(\beta(+\infty))$ is a regular element. Now, as in \cite{BielCMP}, we use results of Biquard \cite{Biq} to deduce that $\tilde{F}({\frak n})$ is biholomorphic to an open subset of $Gl(n,{\Bbb C})\times_N {\frak b}$ where $N=\exp{\frak n}$ and ${\frak b}={\frak d}+{\frak n}$, ${\frak d}$ denoting the diagonal matrices. Briefly, the element $g$ of $Gl(n,{\Bbb C})$ is given by the value at $t=0$ of the complex gauge transformation $g(t)$ which makes $(0,\beta(+\infty)+n)$ into $(\alpha,\beta)$.\\ The charts $\tilde{F}({\frak n})$ are glued as follows: $[g,d+n] \sim [g^\prime,d^\prime+n^\prime]$ if and only if $n\in {\frak n},n^\prime \in {\frak n}^\prime$, and either ${\frak n}^\prime\subset {\frak n}$ and there exists an $m\in N$ such that $gm^{-1}=g^\prime,\operatorname{Ad}(m)(d+n)=d^\prime+n^\prime$ or vice versa (i.e. ${\frak n}\subset {\frak n}^\prime$ etc.). \par We remark that the chart $\tilde{F}({\frak n})$ with ${\frak n}=0$ is an open dense subset biholomorphic to an open subset of $Gl(n,{\Bbb C})\times {\frak d}$. We shall denote this subset by $\tilde{F}_n^{\rm reg}(n;c)$. If $b_d,\hat{b}_d$ denote vectors tangent to the space of diagonal matrices and $\rho,\hat{\rho}$ denote this time left-invariant vector fields on $Gl(n,{\Bbb C})$, then the form $\omega$ is given by \cite{BielCMP}: \begin{equation}\omega=-\operatorname{tr}\bigl(b_d\hat{\rho}-\rho\hat{b}_d-[\rho,\hat{\rho}]\beta_d\bigr).\label{form2}\end{equation} All other moduli spaces $\tilde{F}_n(m;c)$ and $F_\sigma(\mu)$ can be viewed as hyperk\"ahler quotients of products of ${F}_n(m;c)$ and $\tilde{F}_n(n;c)$. Thus, as complex-symplectic manifolds, they are isomorphic to open subsets of complex-symplectic quotients of the corresponding complex-symplectic manifolds computed above. The description of the latter quotients is straightforward.
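As a simple illustration of this gluing, take $n=2$, ${\frak n}$ the strictly upper-triangular matrices and ${\frak n}^\prime=0$. For $m=\left(\begin{smallmatrix} 1 & u\\ 0 & 1\end{smallmatrix}\right)\in N$ we have $$\operatorname{Ad}(m)\begin{pmatrix} d_1 & x\\ 0 & d_2\end{pmatrix}=\begin{pmatrix} d_1 & x+u(d_2-d_1)\\ 0 & d_2\end{pmatrix},$$ so a point $[g,d+n]$ with $d=\operatorname{diag}(d_1,d_2)$ and $n=\left(\begin{smallmatrix} 0 & x\\ 0 & 0\end{smallmatrix}\right)$ is identified with the point $[gm^{-1},d]$ of the chart corresponding to ${\frak n}^\prime=0$ precisely when $d_1\neq d_2$, by taking $u=x/(d_1-d_2)$.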
Let us remark that Hurtubise showed in \cite{Hurt} that if $\sigma(i)=m_i$, $i=1,\ldots,N-1$, then $F_\sigma(\mu)$ (i.e. the moduli space of $SU(N)$-monopoles) is biholomorphic to the space of based rational maps from ${\Bbb C}P^1$ to $SU(N)/T$ (maximal torus) of degree $(m_1,\ldots,m_{N-1})$. \section{Complex-symplectic structure of $F_{\tilde{n},\tilde{m}}(c,c^\prime)$, $n>m$ \label{four}} Our aim is to calculate the metric on $F_\sigma(\mu)$ where $\sigma(i)=\tilde{m}_i$. This space has dimension $4p=4(m_1+\ldots+m_{N-1})$ and admits a tri-Hamiltonian action of $T^p$. By the definition of $F_\sigma(\mu)$, it is a hyperk\"ahler quotient, by a torus, of a product of the spaces $\tilde{F}_n(0;c)$ and $F_{\tilde{n},\tilde{m}}(c,c^\prime)$. The metric on $\tilde{F}_n(0;c)$ was calculated in \cite{BielCMP}: it is the Gibbons-Manton metric \cite{GM} with the mass parameter $-1/c$. It remains to calculate the metric on $F_{\tilde{n},\tilde{m}}(c,c^\prime)$. For convenience we shall write $$\tilde{F}_{n,m}(c,c^\prime):=F_{\tilde{n},\tilde{m}}(c,c^\prime).$$ The dimension of this space is $4(n+m)$ and it has a tri-Hamiltonian action of an $(n+m)$-dimensional torus. \par The space $\tilde{F}_{n,m}(c,c^\prime)$ should be thought of as consisting of solutions to Nahm's equations on $(-\infty,0)\cup (0,\infty)$, which are ${\frak u}(m)$-valued on $(-\infty,0)$, ${\frak u}(n)$-valued on $(0,\infty)$, and satisfy appropriate matching conditions at zero. In what follows we shall usually say ``$\tilde{F}_{n,m}(c,c^\prime)$ is biholomorphic to \dots" rather than ``$\tilde{F}_{n,m}(c,c^\prime)$ is biholomorphic to an open subset of \dots". This never leads to any problems. We consider the space $\tilde{F}_{n,m}(c,c^\prime)$ for $n>m$.
From its description as a complex-symplectic quotient, $\tilde{F}_{n,m}(c,c^\prime)$ is given by charts consisting of pairs $(b_-,(g,b_+))\in {\frak b}\times \bigl(Gl(n,{\Bbb C})\times_{N^\prime} {\frak b}^\prime\bigr)$ such that $gb_+g^{-1}$ is of the form \eqref{betaconst} with $h=b_-$. Let us consider the chart on which ${\frak b}={\frak d}_m$ and ${\frak b}^\prime={\frak d}_n$ (${\frak d}_m$ and ${\frak d}_n$ denote $m\times m$ and $n\times n$ diagonal matrices). Let us write the elements of ${\frak d}_m$ as $\operatorname{diag}(\kappa_1,\ldots,\kappa_m)$ and the elements of ${\frak d}_n$ as $\operatorname{diag}(\beta_1,\ldots,\beta_n)$. Let $q_+(z)=\prod (z-\beta_i)$ and $q_-(z)=\prod (z-\kappa_i)$. We assume that the roots of both these polynomials are distinct and we consider multiplication by $z$ on ${\Bbb C}[z]/(q_+)$. It is a linear operator which, in the basis \begin{equation}\frac{\prod_{j\neq i}(z-\beta_j)}{\prod_{j\neq i}(\beta_i-\beta_j)},\qquad {i=1,\ldots,n},\label{basis1}\end{equation} is the diagonal matrix $\operatorname{diag}(\beta_1,\ldots,\beta_n)$. On the other hand, in the basis \begin{equation}\frac{\prod_{j\neq i}(z-\kappa_j)}{\prod_{j\neq i}(\kappa_i-\kappa_j)},q_-(z),\ldots,z^{n-m-1}q_-(z),\qquad {i=1,\ldots,m},\label{basis2}\end{equation} the multiplication by $z$ is given by a matrix of the form \eqref{betaconst} with $f_i=1\big/\prod_{j\neq i}(\kappa_i-\kappa_j)$. Let $Z$ be the matrix transforming the basis \eqref{basis1} into \eqref{basis2}. Then any $g$ which sends $\operatorname{diag}(\beta_1,\ldots,\beta_n)$ to a matrix of the form \eqref{betaconst} can be written as \begin{equation}\operatorname{diag}(v_1^{-1},\ldots,v_m^{-1},1,\ldots,1)Z\operatorname{diag}(u_1,\ldots,u_n).\label{g-Z}\end{equation} We shall now compute $Z$.
We introduce one more basis of ${\Bbb C}[z]/(q_+)$: \begin{equation} 1,\ldots,z^{n-1}.\label{basis3}\end{equation} The passage from \eqref{basis1} to \eqref{basis3} is given by $V(\beta_1,\ldots,\beta_n)^{-1}$, where $V(\beta_1,\ldots,\beta_n)$ is the Vandermonde matrix, i.e. its $(i,j)$-th entry is $(\beta_i)^{j-1}$. The passage from \eqref{basis3} to \eqref{basis2} is then given by the matrix \begin{equation} L=\left(\begin{array}{c|c} V_{\kappa} & W\\ \hline 0 & H \end{array}\right),\label{el}\end{equation} where $V_{\kappa}=V(\kappa_1,\ldots,\kappa_m)$, $W_{ij}=(\kappa_i)^{m+j-1}$ and $H$ is upper-triangular with $H_{ij}=H_{j-i}$, where $H_k$ denotes the $k$-th complete symmetric polynomial in $\kappa_1,\ldots,\kappa_m$, i.e. the sum of all monomials of degree $k$. \begin{remark} The factorization $Z=LV^{-1}$ is unique only if $\beta_i\neq\kappa_j$ for all $i,j$. If, for instance, $\beta_1=\kappa_1$, then the above $g$ sends $\operatorname{diag}(\beta_1,\ldots,\beta_n)$ to a matrix of the form \eqref{betaconst} with $g_1=0$. However, there is then another $g$ which makes $f_1=0$.\label{unique}\end{remark} We calculate the complex symplectic form on $\tilde{F}_{n,m}(c,c^\prime)$. The chart where $\beta_i\neq \beta_j$, $\kappa_r\neq\kappa_s$, $\beta_i\neq \kappa_s$ for all $i,j=1,\dots,n$, $i\neq j$, $r,s=1,\dots,m$, $r\neq s$, can be described as consisting of pairs $\bigl((g_{-},\kappa_d),(g_{+},\beta_d)\bigr)$, where $\kappa_d=\operatorname{diag}(\kappa_1,\dots,\kappa_m)$, $\beta_d=\operatorname{diag}(\beta_1,\dots,\beta_n)$, $g_{-}=V_\kappa^{-1}\operatorname{diag}(v_1,\dots,v_m)$, $g_{+}=V_\kappa^{-1}L V_\beta^{-1}\operatorname{diag}(u_1,\dots,u_n)$.
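For instance, for $m=1$ and $n=2$ everything can be written out explicitly: $q_-(z)=z-\kappa_1$, the basis \eqref{basis2} is $1,\,z-\kappa_1$, and $$V_\kappa=(1),\qquad W=(\kappa_1),\qquad L=\begin{pmatrix} 1 & \kappa_1\\ 0 & 1\end{pmatrix}.$$ Multiplication by $z$ on ${\Bbb C}[z]/(q_+)$ sends $1\mapsto \kappa_1\cdot 1+(z-\kappa_1)$ and $z-\kappa_1\mapsto -q_+(\kappa_1)\cdot 1+(\beta_1+\beta_2-\kappa_1)(z-\kappa_1)$, i.e. in the basis \eqref{basis2} it is the matrix $$\begin{pmatrix}\kappa_1 & -q_+(\kappa_1)\\ 1 & \beta_1+\beta_2-\kappa_1\end{pmatrix}$$ of the form \eqref{betaconst} with $h=\kappa_1$, $f_1=1$, $e_1=\beta_1+\beta_2-\kappa_1$ and $g_1=-q_+(\kappa_1)=-(\kappa_1-\beta_1)(\kappa_1-\beta_2)$; its characteristic polynomial is indeed $q_+(\eta)$.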
According to the formula \eqref{form2}, the complex symplectic form in this chart is equal to: $$\omega=-\operatorname{tr}\bigl(k_d\tilde{\rho}_--\rho_-\tilde{k}_d-\kappa_d[\rho_-,\tilde{\rho}_-]+b_d\tilde{\rho}_+-\rho_+\tilde{b}_d-\beta_d[\rho_+,\tilde{\rho}_+]\bigr).$$ Here $k_d,\rho_-,b_d,\rho_+$ are dual to, respectively, $d\kappa_d,(g_-)^{-1}dg_-,d\beta_d, (g_+)^{-1}dg_+$.\newline The first three terms can be computed as in \cite{BielCMP} and give \begin{equation}\sum_{i=1}^m \frac{dv_i}{v_i}\wedge d\kappa_i-\sum_{i<j}\frac{d\kappa_i\wedge d\kappa_j}{\kappa_i-\kappa_j} .\label{omega-}\end{equation} To compute the remaining three terms let us write $X=V_{\kappa}^{-1}L$ and $Y=V_{\beta}^{-1}\operatorname{diag}(u_1,\ldots,u_n)$. Let us also write $\beta_c=Y\beta_d Y^{-1}$ and $b_c,x,y$ for vector fields dual to $d\beta_c,X^{-1}dX, Y^{-1}dY$. Then $\rho_+=Y^{-1}xY+y$ and $b_c=Yb_dY^{-1}+Y[y,\beta_d]Y^{-1}$. Thus the last three terms in the above formula can be rewritten as \begin{multline*}-\operatorname{tr}\bigl(b_d\tilde{\rho}_+-\rho_+\tilde{b}_d-\beta_d[\rho_+,\tilde{\rho}_+]\bigr)= -\operatorname{tr}\Bigl(b_d\tilde{y}-y\tilde{b}_d +b_dY^{-1}\tilde{x}Y- Y^{-1}xY \tilde{b}_d \\ -\beta_d[y,\tilde{y}] -\beta_d[y,Y^{-1}\tilde{x}Y]-\beta_d[Y^{-1}xY,\tilde{y}] -Y\beta_dY^{-1}[x,\tilde{x}]\Bigr)=\omega_+ -\operatorname{tr}\Bigl(Yb_dY^{-1}\tilde{x}- xY \tilde{b}_dY^{-1} \\ -Y\beta_dY^{-1}[YyY^{-1},\tilde{x}]-Y\beta_dY^{-1}[x,Y\tilde{y}Y^{-1}] -Y\beta_dY^{-1}[x,\tilde{x}]\Bigr)=\omega_+-\operatorname{tr}\bigl(b_c\tilde{x}-x\tilde{b}_c- \beta_c[x,\tilde{x}]\bigr).\end{multline*} Here $\omega_+=-\operatorname{tr}\bigl(b_d\tilde{y}-y\tilde{b}_d-\beta_d[y,\tilde{y}]\bigr)$, which, again as in \cite{BielCMP}, is equal to \begin{equation}\sum_{i=1}^n \frac{du_i}{u_i}\wedge d\beta_i-\sum_{i<j}\frac{d\beta_i\wedge d\beta_j}{\beta_i-\beta_j} .\label{omega+}\end{equation} For the remaining terms we observe that $d\beta_c$ is upper triangular and $X^{-1}dX$ is strictly
upper-triangular. Hence the remaining terms vanish and the complex-symplectic form on $\tilde{F}_{n,m}(c,c^\prime)$ is given by: \begin{equation}\omega= \sum_{i=1}^n \frac{du_i}{u_i}\wedge d\beta_i-\sum_{i<j}\frac{d\beta_i\wedge d\beta_j}{\beta_i-\beta_j} +\sum_{i=1}^m \frac{dv_i}{v_i}\wedge d\kappa_i-\sum_{i<j}\frac{d\kappa_i\wedge d\kappa_j}{\kappa_i-\kappa_j}.\label{omega} \end{equation} \section{Twistor space and the metric of $\tilde{F}_{n,m}(c,c^\prime)$, $n>m$\label{five}} First, we compute the twistor space of $F_n(m;c)$. Let $\zeta$ be the affine coordinate on ${\Bbb C}P^1$. The twistor space of any hyperk\"ahler manifold admitting an action of $SU(2)$ rotating the complex structures can be trivialized using just two charts $\zeta\neq \infty$ and $\zeta\neq 0$. For a moduli space of solutions to Nahm's equations this is achieved by putting $\eta=\beta+(\alpha+\alpha^\ast)\zeta-\beta^\ast\zeta^2$, $u=\alpha-\beta^\ast\zeta$ over $\zeta\neq\infty$ and $\tilde{\eta}=\beta/\zeta^{2}+(\alpha+\alpha^\ast)/\zeta-\beta^\ast$, $\tilde{u}=-\alpha^\ast-\beta/\zeta$ over $\zeta\neq 0$. Then, over $\zeta\neq 0,\infty$, we have $\tilde{\eta}=\eta/\zeta^2$, $\tilde{u}=u-\eta/\zeta$. Moreover, the real structure is $\zeta\mapsto -1/\bar{\zeta}$, $\eta\mapsto -\eta^\ast/\bar{\zeta}^2$, $u\mapsto -u^\ast+\eta^\ast/\bar{\zeta}$ (cf. \cite{Dan2,Biq2}). \par We consider the matrix \eqref{betaconst}. We have \begin{lemma} The $(i,j)$-th entry of $\exp{Bt/\zeta}$ is of the form $\frac{1}{(i-j)!}(t/\zeta)^{i-j}+O(t^{i-j+1})$ for $i>j>m$.\end{lemma} \begin{pf} We write $(Bt/\zeta)^k$ in the block form as $$\begin{pmatrix} P(k) & Q(k)\\ R(k) & S(k) \end{pmatrix}. $$ One then checks by induction that: $$ S(k)_{(r,s)}=\begin{cases} 0 & \text{if $k<r-s$}\\ (t/\zeta)^{r-s} & \text{if $k=r-s$}\\ O(t^{r-s+1}) & \text{if $k>r-s$}\end{cases}$$ and similarly for the other blocks. 
\end{pf} Therefore the $(m+1)$-th column, denoted by $\tilde{p}_{m+1}$, of $g^{-1}(t)\exp\{Bt/\zeta\}$ is of the form $t^{n-m-1}p +O(t^{n-m})$, for some vector $p$. This means that $p$ belongs to the $-(n-m-1)/2$-eigenspace of $\operatorname{Res}\tilde{u}$, and so is of the form $aE_n$ ($E_n$ is the $n$-th vector of the standard basis), for some constant $a$. Computing the $t^{n-m-1}$-term of the last entry of $\tilde{p}_{m+1}$ gives $(\operatorname{Res} \eta)^{n-m-1}E_{m+1}\bigl(\zeta^{n-m-1}(n-m-1)!\bigr)^{-1}=\zeta^{-(n-m-1)}$, and so $a=\zeta^{-(n-m-1)}$. Thus, as in \eqref{w1}, $\tilde{w}_1= \zeta^{n-m-1}\tilde{p}_{m+1}$ is a solution to $\frac{d}{dt}\tilde{w}_1=-\tilde{u}\tilde{w}_1$ with $$\lim_{t\rightarrow 0}\bigl(t^{-(n-m-1)/2}\tilde{w}_1(t)-E_n\bigr)=0.$$ In the same vein we see that $\tilde{w}_i(t)=\zeta^{n-m+1-2i}\tilde{p}_{m+i}$, where $\tilde{p}_{m+i}$ is the $(m+i)$-th column of $g^{-1}(t)\exp\{Bt/\zeta\}$. In other words $\tilde{g}(t)=d(\zeta)\exp\{Bt/\zeta\}g(t)$ where $$d(\zeta)=\operatorname{diag}\{1,\ldots,1,\zeta^{-(n-m-1)},\ldots,\zeta^{(n-m-1)}\}.$$ \par Similar computations show that the real structure sends $B$ to $-r(\zeta)\bigl(B^\ast/\bar{\zeta}^2\bigr)r(\zeta)^{-1}$ and $g$ to $r(\zeta)\exp\{B^\ast/\bar{\zeta}\}(g^\ast)^{-1}$ where \begin{equation}r_{ij}(\zeta)=\begin{cases} 0 & \text{if $i+j\neq n+m+1$}\\ (-1)^{j-1}\bar{\zeta}^{n+m+1-2j} & \text{if $i+j= n+m+1$}.\end{cases} \label{rij}\end{equation} Now we consider the subset of $\tilde{F}_n(n;c)$, where the eigenvalues of $\beta(+\infty)$ are distinct. We have assigned to each element of this set the pair $(\beta(+\infty);g)$. We know that $\tilde{\beta}(+\infty)=\beta(+\infty)/\zeta^2$. The argument in section \ref{two} shows then that $\tilde{g}=g\exp\{-c\beta(+\infty)/\zeta\}$. The real structure sends $g$ to $(g^\ast)^{-1}\exp\{c\beta(+\infty)^\ast/\bar{\zeta}\}$. Now we wish to calculate the twistor space of $\tilde{F}_{n,m}(c,c^\prime)$ for $n>m$.
This space is a hyperk\"ahler quotient of $F_n(m;1)\times\tilde{F}_n(n;c-1)\times\tilde{F}_m(m;c^\prime)$. On the subset corresponding to the same chart as in section \ref{four} (i.e. ${\frak b}={\frak d}_m$ and ${\frak b}^\prime={\frak d}_n$), the coordinates are given by $\beta(-\infty),\beta(+\infty)$ and $g$ such that $g\beta(+\infty)g^{-1}$ is of the form \eqref{betaconst} with $h=\beta(-\infty)$. This $g$ can be written as $(g_3)^{-1}g_1g_2$, where $g_1$ (resp. $g_2$, resp. $g_3$) is the $g$ considered above for $F_n(m;1)$ (resp. $\tilde{F}_n(n;c-1)$, resp. $\tilde{F}_m(m;c^\prime)$). It follows that $$\tilde{g}=\exp\{c^\prime\beta(-\infty)/\zeta\}(g_3)^{-1}d(\zeta)\exp\{B/\zeta\} g_1 g_2\exp\{-(c-1)\beta(+\infty)/\zeta\},$$ which can be rewritten as $$ \exp\{c^\prime\beta(-\infty)/\zeta\}d(\zeta) (g_3)^{-1}g_1g_2 \exp\{-c\beta(+\infty)/\zeta\}.$$ Therefore $$ \tilde{g}=\exp\{c^\prime\beta(-\infty)/\zeta\}d(\zeta) g \exp\{-c\beta(+\infty)/\zeta\}.$$ We now compute the twistor space in the coordinates $\kappa_1,\ldots,\kappa_m$, $v_1,\ldots,v_m$, $\beta_1,\ldots,\beta_n$, $u_1,\ldots,u_n$ of section \ref{four}. We know that $\tilde{\kappa}_i=\kappa_i/\zeta^2$ and $\tilde{\beta}_j=\beta_j/\zeta^2$. The matrix $g$ is given by \eqref{g-Z}, where $Z=LV^{-1}$ with $L$ described by \eqref{el} and $V$ the Vandermonde matrix for $\beta_1,\ldots,\beta_n$. We obtain equations (here $v_i=\tilde{v}_i=1$ and $\kappa_i=\tilde{\kappa}_i=0$ for $i>m$): $$\tilde{v}_i^{-1}\tilde{u}_j\tilde{Z}_{ij}=\exp\{(c^\prime \kappa_i-c\beta_j)/\zeta\} d_i(\zeta) Z_{ij}v_i^{-1}u_j.$$ In addition $\tilde{Z}_{ij}=Z_{ij}$ if $i\leq m$ and $\tilde{Z}_{ij}=\zeta^{2i-2}Z_{ij}$ if $i>m$. Hence $$\tilde{v}_i^{-1}\tilde{u}_j=\begin{cases}\exp\{(c^\prime \kappa_i-c\beta_j)/\zeta\}v_i^{-1}u_j & \text{if $i\leq m$}\\ \zeta^{-n-m+1}\exp\{-c\beta_j/\zeta\}v_i^{-1}u_j & \text{if $i>m$}.
\end{cases}$$ As $v_i=\tilde{v}_i=1$ for $i>m$, we finally obtain \begin{xalignat}{1} & \tilde{v}_i=\zeta^{-n-m+1} \exp\{-c^\prime \kappa_i/\zeta\}v_i\\ & \tilde{u}_j=\zeta^{-n-m+1}\exp\{-c\beta_j/\zeta\}u_j.\end{xalignat} Finally, the real structure is computed as in \cite{BielCMP}: \begin{xalignat}{2} \beta_i\mapsto -\bar{\beta}_i/\bar{\zeta}^2, & & u_i\mapsto \bar{u}_i^{-1}(1/\bar{\zeta})^{n+m-1}e^{c\bar{\beta}_i/\bar{\zeta}} \prod_{j\neq i}(\bar{\beta}_i-\bar{\beta}_j)\prod_{j=1}^m(\bar{\beta}_i-\bar{\kappa}_j) &\label{real1} \\ \kappa_i\mapsto -\bar{\kappa}_i/\bar{\zeta}^2, & & v_i\mapsto \bar{v}_i^{-1}(1/\bar{\zeta})^{n+m-1}e^{c^\prime\bar{\kappa}_i/\bar{\zeta}} \prod_{j\neq i}(\bar{\kappa}_i-\bar{\kappa}_j)\prod_{j=1}^n(\bar{\kappa}_i-\bar{\beta}_j) &\label{real2} \end{xalignat} We now have to calculate the real sections. First of all we have \begin{xalignat}{2} & \beta_i(\zeta)=z_i+2x_i\zeta-\bar{z}_i\zeta^2 & & \text{for $i=1,\ldots,n,$}\\ & \kappa_i(\zeta)=z_{n+i}+2x_{n+i}\zeta-\bar{z}_{n+i}\zeta^2 & & \text{for $i=1,\ldots,m,$}\end{xalignat} where $p_i=(z_i,x_i)\in {\Bbb C}\times{\Bbb R}$ are such that $p_i\neq p_j$ if $i\neq j$. These curves of genus $0$ should be thought of as spectral curves of individual monopoles. Let $S_i$ denote either $\beta_i$ or $\kappa_i$. Two curves $S_i$ and $S_j$ intersect in a pair of distinct points $a_{ij}$ and $a_{ji}$, where \begin{equation} a_{ij}=\frac{(x_i-x_j)+r_{ij}}{\bar{z}_i-\bar{z}_j}, \quad r_{ij}=\sqrt{(x_i-x_j)^2 +|z_i-z_j|^2}.\label{aij}\end{equation} As in \cite{BielCMP}, if $i,j\leq n$, then $u_i$ has a zero at $a_{ji}$ and is nonzero at $a_{ij}$. Similarly, if $i,j> n$, then $v_{i-n}$ has a zero at $a_{ji}$ and is nonzero at $a_{ij}$. Let us consider what happens when $i\leq n$ and $j>n$ (and no other curves intersect $S_i$ at $a_{ji}$). 
First of all, computing the characteristic polynomial of \eqref{betaconst} gives \cite{Hurt}: \begin{equation}\det (\eta -B)=\det(\eta-h)(\eta^{n-m}-e_{n-m}\eta^{n-m-1}-\ldots -e_1) -f(\eta-h)_{\rm adj}g,\label{det}\end{equation} from which we conclude that $f_{j-n}g_{j-n}$ is zero at both $a_{ij}$ and $a_{ji}$. This implies, since $f_{i}=v_i/\prod_{s\neq i}(\kappa_i-\kappa_s)$, that $v_{j-n}$ is zero precisely when $f_{j-n}$ is zero. Now, if the passage from $\operatorname{diag}(\beta_1,\ldots,\beta_n)$ to \eqref{betaconst} is given by the matrix $G$ of the form \eqref{g-Z} with $Z=LV^{-1}$, then $G_{j-n,s}=0$ if $s\neq i$ and $G_{j-n,i}=v_{j-n}^{-1}u_i$. This has two implications: 1) $u_i$ is zero if and only if $v_{j-n}$ is, and 2) $g_{j-n}=0$ in \eqref{betaconst}. Hence, in this situation, $v_{j-n}\neq 0$. Thus $u_i$ and $v_{j-n}$ are zero at exactly one of the two points of intersection of $S_i$ and $S_j$. Furthermore, since $\kappa_k(\zeta)$ does not intersect $\beta_l(\zeta)$ at $a_{ij}$ or $a_{ji}$ if $k\neq j-n$ or $l\neq i$, we conclude that $f_kg_k\neq 0$ at $a_{ij}$ or $a_{ji}$ if $k\neq j-n$. Thus $v_k(a_{ij})\neq 0$ and $v_k(a_{ji})\neq 0$ for $k\neq j-n$. Since $G(a_{ij})$ and $G(a_{ji})$ are invertible we also have $u_l(a_{ij})\neq 0$ and $u_l(a_{ji})\neq 0$ for $l\neq i$. \par Summing up, $u_i(\zeta)$, $i\leq n$, and $v_i(\zeta)$, $i\leq m$, are of the form: \begin{equation} u_i(\zeta)=A_i\prod\begin{Sb}j\leq n\\ j\neq i\end{Sb}(\zeta-a_{ji}) \prod_{j>n}(\zeta-c_{ij})e^{c(x_i-\bar{z}_i\zeta)},\label{ui}\end{equation} \begin{equation} v_i(\zeta)=B_i\prod\begin{Sb}j\leq m\\j\neq i\end{Sb}(\zeta-a_{j+n,i+n}) \prod_{j\leq n} (\zeta-c_{i+n,j})e^{c^\prime(x_{n+i}-\bar{z}_{n+i}\zeta)}.\label{vi}\end{equation} Here $c_{ij}$ can be either $a_{ij}$ or $a_{ji}$ and is at present undetermined.
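We also note where \eqref{aij} comes from: $a_{ij}$ and $a_{ji}$ are the two roots of $S_i(\zeta)-S_j(\zeta)=(z_i-z_j)+2(x_i-x_j)\zeta-(\bar{z}_i-\bar{z}_j)\zeta^2$, since the quadratic formula gives $$\zeta=\frac{(x_i-x_j)\pm\sqrt{(x_i-x_j)^2+(\bar{z}_i-\bar{z}_j)(z_i-z_j)}}{\bar{z}_i-\bar{z}_j}=\frac{(x_i-x_j)\pm r_{ij}}{\bar{z}_i-\bar{z}_j},$$ where we used $(\bar{z}_i-\bar{z}_j)(z_i-z_j)=|z_i-z_j|^2$; the root with the plus sign is $a_{ij}$ and the other one is $a_{ji}=\bigl((x_i-x_j)-r_{ij}\bigr)/(\bar{z}_i-\bar{z}_j)$.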
The reality condition implies that $$A_i\bar{A}_i=\prod\begin{Sb}j\leq n\\ j\neq i\end{Sb}(x_i-x_j+r_{ij}) \prod_{j>n}\bigl(\pm(x_i-x_j)+r_{ij}\bigr),$$ $$B_i\bar{B}_i=\prod\begin{Sb}j\leq m\\j\neq i\end{Sb}(x_{i+n}-x_{j+n}+r_{i+n,j+n}) \prod_{j\leq n} \bigl(\pm(x_{i+n}-x_j)+r_{i+n,j}\bigr),$$ where the undetermined signs are positive if $c_{ij}=a_{ji}$ and negative if $c_{ij}=a_{ij}$. By continuity, these formulae extend to the case when more than two $S_i$ intersect at a point. \par We can now compute the metric on $\tilde{F}_{n,m}(c,c^\prime)$ up to the above sign indeterminacy. This metric is of the form \eqref{torus-invariant} and it is enough to compute the matrix $\Phi$. The complex symplectic form on the twistor space is given by the formula \eqref{omega} and it follows from it that each factor in \eqref{ui} (resp. \eqref{vi}) together with the corresponding factor of $|A_i|$ (resp. $|B_i|$) gives a separate contribution to $\Phi$ (the coefficient of $\zeta$ in the expansion of $\omega$ is the K\"ahler form $\omega_1$). The factors of $u_i$, indexed by $j\leq n$, together with the exponential term, describe exactly the twistor space of the Gibbons-Manton metric \cite{GM} with mass parameter $-1/c$, as computed in Proposition 6.2 of \cite{BielCMP}, provided that the complex symplectic form is taken to be the first two terms of \eqref{omega}. Thus these terms contribute $$c-\sum\begin{Sb}j\leq n\\ j\neq i\end{Sb}\frac{1}{r_{ij}}$$ to $\Phi_{ii}$, $i\leq n$, and $1/r_{ij}$ to $\Phi_{ij}$ for $i,j\leq n$, $i\neq j$. An exactly parallel statement holds for the factors of $v_i$ indexed by $j\leq m$ plus the exponential term. The remaining factors contribute terms in $\frac{du_{i}}{u_{i}}\wedge d\beta_i$ and $\frac{dv_{i}}{v_{i}}\wedge d\kappa_i$. The calculation in the proof of Theorem 6.4 in \cite{BielCMP} shows that the contribution of these factors to the matrix $\Phi$ consists of Gibbons-Manton-like terms with positive or negative sign.
Thus we conclude that the matrix $\Phi$ for $\tilde{F}_{n,m}(c,c^\prime)$ is given by \eqref{GMtype} with \begin{equation} c_i=\begin{cases} c & \text{if $i\leq n$}\\ c^\prime & \text{if $i> n$},\end{cases}\label{ci}\end{equation} \begin{equation} s_{ij}=\begin{cases} -1 & \text{if $i,j\leq n$ or $i,j>n$}\\ (-1)^{\epsilon_{ij}} & \text{if $i\leq n, j>n$ or $i>n,j\leq n$}.\end{cases}\label{sij}\end{equation} Here $\epsilon_{ij}=0$ if $c_{ij}=a_{ij}$ and $\epsilon_{ij}=1$ if $c_{ij}=a_{ji}$. We shall eventually see (Lemma \ref{epsilonij}) that all $\epsilon_{ij}$ are equal to zero. \section{ The metric on $\tilde{F}_{n,n}(c,c^\prime)$ \label{six}} This space is the hyperk\"ahler quotient of $\tilde{F}_n(n;c)\times \tilde{F}_n(n;c^\prime)\times {\Bbb H}^n$ by the diagonal action of $U(n)$. According to section \ref{two} its complex charts can be described as $\bigl(b_-,(g,b_+), (V,W)\bigr)\in {\frak b}\times \bigl(Gl(n,{\Bbb C})\times_{N^\prime} {\frak b}^\prime\bigr)\times{\Bbb C}^{2n}$ with $gb_+g^{-1}=b_-+VW^T$. \par Our first step is to calculate the complex-symplectic form on the set where ${\frak b}={\frak b}^\prime={\frak d}$. Let us write $\beta_d^+=\operatorname{diag}(\beta_1^+,\dots,\beta_n^+)$ for $b_+$ and $\beta_d^-=\operatorname{diag}(\beta_1^-,\dots,\beta_n^-)$ for $b_-$ on this set. The choice of our chart implies that $\beta_i^+\neq \beta_j^+$ and $\beta_i^-\neq \beta_j^-$ for $i\neq j$. In addition we suppose that $\beta_i^-\neq \beta_j^+$ for all $i,j\leq n$. Then one of $\beta_d^-,\beta_d^+$, say $\beta_d^-$, is invertible. Since all components of $W$ must be nonzero (otherwise the spectra of $\beta_d^-$ and $\beta_d^+$ are not disjoint), $W$ is cyclic for $\beta_d^-$. Consider the basis given by columns of $\bigl((\beta_d^-)^{n-1}W,(\beta_d^-)^{n-2}W,\ldots,W\bigr)^T$, in which $\beta^-$ is of form \eqref{betaconst} (with $m=0$) and $W^T=(0,0,\ldots,1)$. Thus, $\beta^+$ is also of form \eqref{betaconst}. 
We can therefore describe this chart as consisting of pairs $\bigl((g_-,\beta_d^-),(g_+,\beta_d^+)\bigr)$ with $g_-\beta_d^-(g_-)^{-1}$ and $g_+\beta_d^+(g_+)^{-1}$ both of form \eqref{betaconst} (with $m=0$). We have $g_{\pm}=V(\beta_1^{\pm},\ldots, \beta_n^{\pm})^{-1}\operatorname{diag} (u_1^\pm,\ldots,u_n^\pm)$. The form $\omega$, via the complex-symplectic quotient, can be written as ($b_d^\pm,\rho_{\pm}$ are dual to $d\beta_d^\pm, (g_\pm)^{-1}dg_\pm$): $$\omega=-\operatorname{tr}\bigl(b_d^+\tilde{\rho}_+-\rho_+\tilde{b}_d^+-\beta_d^+[\rho_+, \tilde{\rho}_+]+b_d^-\tilde{\rho}_--\rho_-\tilde{b}_d^--\beta_d^-[\rho_-, \tilde{\rho}_-]\bigr),$$ which can be computed as in \cite{BielCMP} giving: \begin{equation}\omega= \sum_{i=1}^n \frac{du_i^-}{u_i^-}\wedge d\beta_i^- -\sum_{i<j}\frac{d\beta_i^-\wedge d\beta_j^-}{\beta_i^--\beta_j^-} + \sum_{i=1}^n \frac{du_i^+}{u_i^+}\wedge d\beta_i^+ -\sum_{i<j}\frac{d\beta_i^+\wedge d\beta_j^+}{\beta_i^+-\beta_j^+}.\label{omega+-}\end{equation} We now wish to compute the twistor space of $\tilde{F}_{n,n}(c,c^\prime)$. We proceed as in the previous section. A calculation done there shows that the coordinates $g,\beta_d^-,\beta_d^+$ (here $g\beta_d^+ g^{-1}=\beta_d^-+VW^T$) change from $\zeta\neq \infty$ to $\zeta\neq 0$ as: $$\tilde{\beta}_d^\pm=\beta_d^\pm/\zeta^2,\qquad \tilde{g}=\exp\{c^\prime\beta_d^-/\zeta\}g\exp\{-c\beta_d^+/\zeta\}. $$ Moreover, since the twistor space of ${\Bbb H}^n$ is simply $O(1)\otimes {\Bbb C}^{2n}$, we have $$\tilde{V}=V/\zeta,\qquad \tilde{W}=W/\zeta.$$ We wish to pass to coordinates $\beta_i^\pm,u_i^\pm$, $i=1,\dots,n$. The passage from the basis in which $\beta^\pm$ are diagonal to the one in which they are of the form \eqref{betaconst} is achieved by the matrix $$H=\bigl((\beta_d^-)^{n-1}W,(\beta_d^-)^{n-2}W,\dots,W\bigr)^T.$$ Thus $g_-=Hd$ and $g_+=Hdg$ for some diagonal $d$. We have $\tilde{d}=d\exp\{-c^\prime\beta_d^-/\zeta\}$ ($d$ is $g_-$ in the chart in which $b_-$ is diagonal).
On the other hand we have written $g_{\pm}=V(\beta_1^{\pm},\ldots, \beta_n^{\pm})^{-1}\operatorname{diag} (u_1^\pm,\ldots,u_n^\pm)$. Comparing the two formulae in the two charts, we conclude that $u^\pm_i(\zeta)$ changes from $\zeta\neq \infty$ to $\zeta\neq 0$ as: $$ \tilde{u}_i^+=\zeta^{1-2n}\exp\{-c\beta^+_i/\zeta\}u^+_i,\quad i=1,\dots,n,$$ $$ \tilde{u}_i^-=\zeta^{1-2n}\exp\{-c^\prime\beta^-_i/\zeta\}u^-_i,\quad i=1,\dots,n.$$ The real structure is given by the formulae \eqref{real1} and \eqref{real2} with $m=n$, $\beta_i=\beta_i^+,\kappa_i=\beta_i^-,u_i=u_i^+,v_i=u_i^-$. We have to know what happens to the $u^\pm_i(\zeta)$ when two curves $\beta_i^\pm(\zeta)$ intersect. As in the previous section we write $S_i(\zeta)=\beta^+_i(\zeta)=z_i+2x_i\zeta -\bar{z}_i\zeta^2$, $S_{n+i}(\zeta)=\beta^-_i(\zeta)=z_{n+i}+2x_{n+i}\zeta -\bar{z}_{n+i}\zeta^2$, and we denote the intersection points of $S_i$ and $S_j$ by $a_{ij},a_{ji}$. These are given by the formula \eqref{aij}. \par Consider first the intersection of $S_i$ and $S_j$ where both $i$ and $j$ are either less than or equal to $n$ (and no other $S_k$ intersect at $a_{ij},a_{ji}$). We can still assume generically that the spectra of $\beta_d^-$ and $\beta_d^+$ are disjoint and, hence, $W$ is cyclic for $\beta_d^-$. Then $H$ is invertible at $a_{ij},a_{ji}$ and so are $g_{\pm}$. We compute, as in \cite{BielCMP}, that each $u_i^\pm$ has a zero at the intersection point $a_{ji}$ and is nonzero at $a_{ij}$, and all other $u_s^\pm$, $s\neq i,j$ are nonzero at both $a_{ij}$ and $a_{ji}$. The same argument works in the case when both $i$ and $j$ are greater than $n$. \par Now consider the intersection of $S_i$ and $S_j$ where $i\leq n$ and $j>n$. In the chart in which $\beta^-$ is diagonal, we had $g\beta_d^+g^{-1}=\beta_d^-+VW^T$. Thus $\det(\beta_d^- - \beta_i^+1+VW^T)=0$. 
Since $VW^T$ has rank one, all its $k\times k$, $k>1$, minors vanish, and we have for any diagonal matrix $d=\operatorname{diag}(d_1,\dots,d_n)$ the formula \begin{equation}\det(d+VW^T)=\prod_k d_k +\sum_{k}\left(V_kW_k\prod_{l\neq k} d_l\right).\label{det2}\end{equation} In our case $d_j=0$ and $d_k\neq 0$ for $k\neq j$. We conclude that $V_jW_j$ vanishes at both $a_{ij}$ and $a_{ji}$. However, both $V_j$ and $W_j$ are sections of $O(1)$ and so have exactly one zero. Thus $W_j$ vanishes at either $a_{ij}$ or $a_{ji}$ (and only one of them). Furthermore, if we consider the diagonal matrix $d=\beta_d^- - \beta_s^-1$, $s\neq j$, then, by the above argument, the non-vanishing of $\det(d+VW^T)$ implies that $V_sW_s$ does not vanish at either $a_{ij}$ or $a_{ji}$ if $s\neq j$. In summary, it is precisely the $j$-th column of $H$ that vanishes at either $a_{ij}$ or $a_{ji}$. Thus the same statement holds for both $g_-$ and $g_+$, and, as $g_{\pm}=V(\beta_1^{\pm},\ldots, \beta_n^{\pm})^{-1}\operatorname{diag} (u_1^\pm,\ldots,u_n^\pm)$, we conclude that both $u_j^-$ and $u_j^+$ vanish at either $a_{ij}$ or $a_{ji}$ and no other $u_s^\pm$ vanishes at either $a_{ij}$ or $a_{ji}$. This means that $u_i^+$ is given by the formula \eqref{ui} and $u_i^-$ is given by the formula \eqref{vi} (with $n=m$). Once more, the formulae extend to the non-generic case. The remainder of the previous section can now be repeated word for word, and we conclude that the metric on $\tilde{F}_{n,n}(c,c^\prime)$ is of the form \eqref{torus-invariant} with $\Phi$ given by \eqref{GMtype}, where the $c_i$ and $s_{ij}$ are given by \eqref{ci} and \eqref{sij}. \section{Topology of $\tilde{F}_{n,m}(c,c^\prime)$ \label{topology}} We shall discuss the topology of $\tilde{F}_{n,m}(c,c^\prime)$. This space can be viewed as a moduli space of solutions to Nahm's equations defined on $(-\infty,0]\cup(0,+\infty)$ with the appropriate matching at $0$.
The tri-Hamiltonian action of $T^{n+m}=T^n\times T^m$ gives us the moment map to ${\Bbb R}^3\otimes {\Bbb R}^{n+m}$ which is simply $$\Bigl(\bigl(T_1(+\infty),-T_1(-\infty)\bigr), \bigl(T_2(+\infty),-T_2(-\infty)\bigr),\bigl(T_3(+\infty),-T_3(-\infty)\bigr)\Bigr).$$ Before stating the result, let us recall that a basis of the second homology $H_2\bigl(\tilde{C}_{p}({\Bbb R}^3),{\Bbb Z}\bigr)$ of a configuration space $\tilde{C}_{p}({\Bbb R}^3)$ is given by the $p(p-1)/2$ $2$-spheres \begin{equation}S_{ij}^2=\{({\bf x}_1,\dots,{\bf x}_p)\in{\Bbb R}^3\otimes {\Bbb R}^p; |{\bf x}_i-{\bf x}_j|={\it const},\enskip {\bf x}_i+{\bf x}_j={\it const},\enskip {\bf x}_k={\it const}\enskip\text{if}\enskip k\neq i,j\}\label{basis}\end{equation} where $i<j$. We have: \begin{proposition} The above moment map induces a homeomorphism between the orbit space of $\tilde{F}_{n,m}(c,c^\prime)$ and $\tilde{C}_{n}({\Bbb R}^3)\times \tilde{C}_{m}({\Bbb R}^3)$. The set of principal $T^{n+m}$-orbits of $\tilde{F}_{n,m}(c,c^\prime)$ maps to $\tilde{C}_{n+m}({\Bbb R}^3)$ and as a $T^{n+m}$-bundle is determined by the element $(h_1,\ldots,h_{n+m})$ of $H^2\bigl(\tilde{C}_{n+m}({\Bbb R}^3),{\Bbb Z}^{n+m}\bigr)$ given by $$h_k(S_{ij}^2)=\begin{cases} s_{ij} &\text{if $k=i$}\\-s_{ij} &\text{if $k=j$}\\0 &\text{otherwise,}\end{cases}$$ where the $s_{ij}$ are given by \eqref{sij}.\label{bundle}\end{proposition} \begin{pf} Let us fix an element $(\tau^+,\tau^-)$ of $ \tilde{C}_{n}({\Bbb R}^3)\times \tilde{C}_{m}({\Bbb R}^3)$. Identify $\tau^+$ with a regular triple $(\tau^+_1,\tau^+_2,\tau_3^+)$ of diagonal $n\times n$ matrices and similarly $\tau^-$ with a regular triple $(\tau^-_1,\tau^-_2,\tau_3^-)$ of diagonal $m\times m$ matrices. As in Proposition 5.2 of \cite{BielCMP} the space of $T^{n+m}$-orbits mapping to $(\tau^+,\tau^-)$ can be identified with the set of solutions to Nahm's equations with $T_0\equiv 0$ and having values conjugate to $(\tau^+,-\tau^-)$ at $+\infty$ and at $-\infty$.
If $n>m$, this space is diffeomorphic to the hyperk\"ahler quotient $X$ of the product $M(\tau^+_1,\tau^+_2,\tau_3^+)\times F_n(m;1)\times M(\tau^-_1,\tau^-_2,\tau_3^-)$ by $U(n)\times U(m)$, where the $M$'s are Kronheimer's hyperk\"ahler structures on $Gl(n,{\Bbb C})/(T^n)^{\Bbb C}$ and $Gl(m,{\Bbb C})/(T^m)^{\Bbb C}$ \cite{Kron}. If $n=m$, this space is diffeomorphic to the hyperk\"ahler quotient $X$ of the product $M(\tau^+_1,\tau^+_2,\tau_3^+)\times {\Bbb H}^n\times M(\tau^-_1,\tau^-_2,\tau_3^-)$ by $U(n)$. \par The first statement will be proved if we can show that these hyperk\"ahler quotients are single points. First we show that the corresponding complex-symplectic quotients, with respect to a generic complex structure $I$ (i.e. one in which $M(\tau^\pm_1,\tau^\pm_2,\tau_3^\pm)$ are biholomorphic to regular adjoint orbits), are single points.\newline {\bf (1) $n>m$.} Let $M(\tau^+_1,\tau^+_2,\tau_3^+)$ be complex-symplectically isomorphic to the adjoint orbit $O^+$ of $\operatorname{diag}(\beta_1^+,\dots,\beta_n^+)$ ($\beta^+_i$ distinct) and $M(\tau^-_1,\tau^-_2,\tau_3^-)$ to the adjoint orbit $O^-$ of $\operatorname{diag}(\beta_1^-,\dots,\beta_m^-)$ ($\beta^-_i$ distinct). First of all, the complex-symplectic quotient of $M(\tau^+_1,\tau^+_2,\tau_3^+)\times F_n(m;1)$ by $Gl(n,{\Bbb C})$ can be identified with the set $U$ of elements of $O^+$ which are of the form \eqref{betaconst}. Then the zero-set of the complex moment map for the action of $Gl(m,{\Bbb C})$ on $U\times O^-$ can be identified with the set $Y$ of matrices of the form \eqref{betaconst} which belong to $O^+$ and such that $h$ belongs to $O^-$. We have to show that $Y$ is a single orbit of $Gl(m,{\Bbb C})$. Since the $\beta_i^-$ are distinct, we can diagonalize $h$. Then the equation \eqref{det} shows that the $e_i$'s and the products $f_ig_i$ are determined.
Thus we obtain a single $({\Bbb C}^\ast)^m$-orbit.\newline {\bf (2) $n=m$.} We make the same assumption about the complex-symplectic structure of the two $M$'s. The zero set of the complex moment map for the action of $Gl(n,{\Bbb C})$ on $O^+\times O^-\times {\Bbb C}^{2n}$ is the set $\{(a,b,V,W)\in O^+\times O^-\times {\Bbb C}^{n} \times {\Bbb C}^{n};\, a=b+VW^T\}$. Again we have to show that this set is a single orbit of $Gl(n,{\Bbb C})$. Let us diagonalize $b$ and use the formula \eqref{det2} with $d=b-\eta 1$. Substituting $\beta_i^-$ for $\eta$ shows that $V_iW_i$ is determined, $i=1,\dots,n$. We obtain a single $({\Bbb C}^\ast)^n$-orbit. \par We remark that the above proof shows that the action of $G^{\Bbb C}$, where $G^{\Bbb C}$ is $Gl(n,{\Bbb C})\times Gl(m,{\Bbb C})$ in case (1) or $Gl(n,{\Bbb C})$ in case (2), on the zero-set of the complex moment map has closed orbits of the form $G^{\Bbb C}/T^{\Bbb C}$ for some subtorus $T$ of $G$. \par Thus, to prove the first statement, we have to show that the complex-symplectic and the hyperk\"ahler quotients coincide. The proof of this requires a substantial detour from the main line of argument and will be given in Appendix A. Let us remark that Hurtubise's argument \cite{Hurt} for matching solutions to Nahm's equations on two (or more) intervals cannot be adapted to the case of two half-lines (in this case his Lemma 2.19 will not provide any information). \par It is clear from the description of the sections of the twistor space, formulae \eqref{ui} and \eqref{vi}, that the action is free precisely over $\tilde{C}_{n+m}({\Bbb R}^3)$.
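As an illustration (a consistency check only, not part of the argument), the identity \eqref{det2} for $n=2$ reads $$\det(d+VW^T)=\det\begin{pmatrix} d_1+V_1W_1 & V_1W_2\\ V_2W_1 & d_2+V_2W_2\end{pmatrix}=d_1d_2+V_1W_1d_2+V_2W_2d_1,$$ the quartic terms $V_1W_1V_2W_2-V_1W_2V_2W_1$ cancelling. Moreover, in case (2) the determination of the $V_iW_i$ can be made explicit: taking $d=b-\eta 1$ in \eqref{det2} and setting $\eta=\beta_i^-$ kills every term except the $i$-th product, so that $$\det(a-\beta_i^-1)=V_iW_i\prod_{l\neq i}(\beta_l^--\beta_i^-),\qquad\text{i.e.}\qquad V_iW_i=\frac{\prod_{k=1}^n(\beta_k^+-\beta_i^-)}{\prod_{l\neq i}(\beta_l^--\beta_i^-)},$$ since the spectrum of $a$ is $\{\beta_1^+,\dots,\beta_n^+\}$.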
To determine the principal bundle, one merely has to repeat the calculation in the proof of Proposition 6.3 in \cite{BielCMP}.\end{pf} \begin{corollary} The action of $T^{n+m}$ on $\tilde{F}_{n,m}(c,c^\prime)$ extends to the global action of $({\Bbb C}^\ast)^{n+m}$ with respect to any complex structure.\label{C*}\end{corollary} \begin{pf} This is equivalent to showing that, if we fix $\zeta\in {\Bbb C}P^1$, then the $u_i(\zeta)$ and $v_j(\zeta)$ of \eqref{ui} and \eqref{vi} can take arbitrary complex values (with appropriate degenerations at the intersection points of the $\beta_i(\zeta)$). If, for example, $\zeta=0$, then the $z_i$ are fixed and one solves for the $x_i$. One shows that a solution always exists and, by the previous result, the corresponding point lies in $\tilde{F}_{n,m}(c,c^\prime)$. \end{pf} \section{Asymptotic comparison of metrics \label{three}} We consider the moduli space $M_{m_1,\ldots,m_{N-1}}(\mu_1,\ldots,\mu_N)$ of $SU(N)$ monopoles with maximal symmetry breaking. We wish to compare the metric on $F_{m_1,\ldots,m_{N-1}}(\mu_1,\ldots,\mu_N)$ (whose Levi-Civita connection coincides with that on $M_{m_1,\ldots,m_{N-1}}(\mu_1,\ldots,\mu_N)$) with the metric on $F_{\tilde{m}_1,\ldots,\tilde{m}_{N-1}}(\mu_1,\ldots,\mu_N)$. As discussed in Remark \ref{tilde(M)}, this space consists of solutions to Nahm's equations on the union of the $I_k$, where $I_k=[\mu_k,+\infty)\cup (-\infty,\mu_{k+1}]$, with matching conditions at the endpoints of each $I_k$. It will be convenient to write $$I_k=[[\mu_k,\mu_{k+1}]]$$ and denote the ``middle point'' $\pm \infty$ by $\infty_k$. We shall also use double brackets for any connected subset of $[[\mu_k,\mu_{k+1}]]$. \par The space $F_{\tilde{m}_1,\ldots,\tilde{m}_{N-1}}(\mu_1,\ldots,\mu_N)$ should be thought of as consisting of $m=m_1+\dots +m_{N-1}$ particles with phases.
The positions of the particles are ${\bf x}_i^k=(x_i^k,\text{Re}\,z_i^k,\text{Im}\,z_i^k)$, $i\leq m_k$, $k=1,\dots,N-1$, where $\operatorname{diag}( x^k_1,\dots,x^k_{m_k})=\sqrt{-1}T_1(\infty_k)$ and $\operatorname{diag}( z^k_1,\dots,z^k_{m_k})=(T_2+\sqrt{-1}T_3)(\infty_k)$. We put \begin{equation} R_k=\min \{|{\bf x}_i^k-{\bf x}_j^k|; i\neq j\}.\label{R}\end{equation} Let us also write \begin{equation} Z_k=\min \{|z_i^k-z_j^k|; i\neq j\},\label{Z}\end{equation} and denote by $F_{\tilde{m}_1,\ldots,\tilde{m}_{N-1}}^{\rm reg}(\mu_1,\ldots,\mu_N)$ the subset of $F_{\tilde{m}_1,\ldots,\tilde{m}_{N-1}}(\mu_1,\ldots,\mu_N)$ where $Z_k>0$ for $k=1,\dots,N-1$. This subset depends on the chosen complex structure (which is $I$ in the case at hand). If we write for this complex structure, as in section \ref{two}, $\alpha$ for $T_0+iT_1$ and $\beta$ for $T_2+iT_3$, then we can define the subset $F_{m_1,\ldots,m_{N-1}}^{\rm reg}(\mu_1,\ldots,\mu_N)$ of $F_{m_1,\ldots,m_{N-1}}(\mu_1,\ldots,\mu_N)$ as the set of $(\alpha,\beta)$ such that the eigenvalues of $\beta$ restricted to the $k$-th interval, $k=1,\dots,N-1$, are distinct. \par We define the subset $U(\gamma,\delta,C)$ of $F_{\tilde{m}_1,\ldots,\tilde{m}_{N-1}}(\mu_1,\ldots,\mu_N)$ as follows: \begin{equation} U(\gamma,\delta,C)=\{{\bf x}; \min_k Z_k({\bf x})\geq \delta,\enskip \min_k R_k({\bf x})\geq C,\enskip \zeta^T\Phi\zeta\geq \gamma|\zeta|^2\enskip \forall\zeta\in{\Bbb R}^m\},\label{Udelta}\end{equation} where $\Phi$ is given by \eqref{GMtype}-\eqref{s-asym} and $m=\sum m_k$. \par We have canonical local complex coordinates on $F_{\tilde{m}_1,\ldots,\tilde{m}_{N-1}}^{\rm reg}(\mu_1,\ldots,\mu_N)$: \begin{equation} (w_1,\dots,w_m):=\{z_i^k,u_i^k; \;i=1,\dots,m_k,\enskip k=1,\dots, N-1\}\label{cancoor}\end{equation} where the $u_i^k$ are given by the local ${\Bbb C}^m$-action, $m=\sum m_k$.
\par Let $\tilde{g},g$ denote the metrics on $F_{\tilde{m}_1,\ldots,\tilde{m}_{N-1}}^{\rm reg}(\mu_1,\ldots,\mu_N)$ and $F_{m_1,\ldots,m_{N-1}}^{\rm reg}(\mu_1,\ldots,\mu_N)$ respectively, and let $\Sigma$ be the product of symmetric groups $\prod_{k=1}^{N-1}\Sigma_{m_{k}}$. We can now state the two main results of the paper. \begin{theorem} The hyperk\"ahler metric on $F_{\tilde{m}_1,\dots,\tilde{m}_{N-1}}(\mu_1,\dots,\mu_N)$ is determined by the matrix $\Phi$ of the form \eqref{GMtype} with the $c_i$ and $s_{ij}$ given by \eqref{c-asym} and \eqref{s-asym}.\label{finalasymptoticmetric}\end{theorem} \begin{theorem} There exists a complex-symplectic isomorphism $\phi$ from \linebreak $F_{\tilde{m}_1,\ldots,\tilde{m}_{N-1}}^{\rm reg}(\mu_1,\ldots,\mu_N)/\Sigma$ to $F_{m_1,\ldots,m_{N-1}}^{\rm reg}(\mu_1,\ldots,\mu_N)$ with the following property: \par Let us write $$\phi^\ast g-\tilde{g}=\operatorname{Re}\sum S_{ij}dw_i\otimes d\bar{w}_j$$ in the coordinates \eqref{cancoor}. Then, for any positive $\gamma,\delta$, there is a $C=C(\gamma,\delta)$ such that on the set $U(\gamma,\delta,C)$ defined by \eqref{Udelta}, we have \begin{equation} |D^l S_{ij}|\leq A_l e^{-\lambda R},\quad l=0,1,2,\dots,\label{DkSij}\end{equation} where $R=\min\{R_k; k=1,\dots,N-1\}$ and $A_l,\lambda>0$ are constants depending only on $\gamma,\delta$.\label{estimates}\end{theorem} \begin{remarks} 1. For a possible generalization see the discussion at the end of the section.\newline 2. One can alternatively use the coordinates given by positions and phases of particles and obtain a completely analogous statement. This follows at once from the explicit formulae for the metric and the twistor space of $F_{\tilde{m}_1,\ldots,\tilde{m}_{N-1}}(\mu_1,\ldots,\mu_N)$. We see that the coordinate change map and its inverse have all derivatives uniformly bounded on $U(\gamma,\delta,C)$ (cf. \cite{Besse}, section 13.F for the case of Taub-NUT).
\end{remarks} The proof of Theorems \ref{finalasymptoticmetric} and \ref{estimates} will be separated into several parts. We shall write $M$ for $F_{m_1,\ldots,m_{N-1}}^{\rm reg}(\mu_1,\ldots,\mu_N)$ and $\tilde{M}$ for $F_{\tilde{m}_1,\ldots,\tilde{m}_{N-1}}^{\rm reg}(\mu_1,\ldots,\mu_N)$. As usual, we use the same letter to denote constants varying from line to line.\newline {\bf Part 1: Construction of $\phi$.} This is completely analogous to the $SU(2)$ case. One goes via an intermediate moduli space $M_I$ consisting of solutions $(\alpha,\beta)$ to the complex Nahm equation which are constant and diagonal on each $[[\mu_k+c,\mu_{k+1}-c]]$ for some $c<\min\{(\mu_{k+1}-\mu_k)/2;k=1,\dots,N-1\}$ and satisfy appropriate matching conditions at each $\mu_k$, modulo gauge transformations $g(t)$ which satisfy the matching conditions of Remark \ref{tilde(M)}. In particular $g(t)= \exp\{h_kt-p_k\}$ near $\infty_k$ for some complex diagonal matrices $h_k,p_k$. The passage from $M_I$ to $M$ is given by restricting these solutions to the union of $[[\mu_k,(\mu_k+\mu_{k+1})/2]]\cup [[(\mu_k+\mu_{k+1})/2, \mu_{k+1}]]$, viewing them as solutions to the complex Nahm equation on $[\mu_1,\mu_N]$ and solving the real equation as in \cite{Hurt}. \par The map from $M$ to $M_I/\Sigma$ is defined by using a complex gauge transformation to make an element $(\alpha,\beta)$ of $M$ constant and diagonal on each $[\mu_k+c,\mu_{k+1}-c]$, cutting it off at the center of each interval and extending trivially onto $[[\mu_k,\mu_{k+1}]]$. \par The passage from $\tilde{M}$ to $M_I$ is given, as in section \ref{two}, by making a solution constant and diagonal on each $[[\mu_k+c,\mu_{k+1}-c]]$ by a complex gauge transformation with $g(\infty_k)=1$ and $g(t)=1$ on each $[\mu_k,\mu_k+c/2]\cup[\mu_{k+1}-c/2,\mu_{k+1}]$.
The inverse mapping is given by first solving the real Nahm equation by a complex transformation which is exponentially close to $\exp\{h_kt-p_k\}$ near $\infty_k$ for some diagonal $h_k,p_k$. To see that this can be done we argue as follows (cf. \cite{BielCMP}). By the argument in the proof of Proposition \ref{bundle}, we can first solve the real equation on each $(\infty_{k-1},\mu_k)\cup(\mu_k,\infty_k)$ by a bounded gauge transformation $g_k(t)$ satisfying the matching condition at $\mu_k$. Now, from Corollary \ref{C*} and the definition of $\tilde{M}$ as the hyperk\"ahler (and so complex-symplectic) quotient of the $\tilde{F}_{n,m}(c,c^\prime)$, we conclude that there is a global action of $({\Bbb C}^\ast)^m$ on $\tilde{M}$. Using this action allows us to replace the $g_k$ by a gauge transformation $g(t)$ which is exponentially close to $\exp\{h_kt-p_k\}$ near each $\infty_k$ and which also solves the real equation. \par We still have to show that $\phi$ respects complex-symplectic forms. However, $\phi$ was constructed using only (a) complex gauge transformations, and (b) restriction or extension of constant solutions. Both of these operations respect the complex-symplectic forms involved.\newline {\bf Part 2: Estimates on solutions.} We first obtain estimates on solutions to Nahm's equations. Recall that the biholomorphism $\phi$ was defined as the composition $\phi=\phi_2\phi_1$ with $\phi_1:\tilde{M}\rightarrow M_I$ and $\phi_2:M_I\rightarrow M$. Let $(\alpha,\beta)$ be a solution to Nahm's equations on a half-line $[x,+\infty)$, with $\min\{|\beta_{ii}(+\infty)-\beta_{jj}(+\infty)|; i\neq j\}\geq \delta>0$. For any $\epsilon >0$, we can assume that $\alpha$ and $\beta$ are lower-triangular on $[x+\epsilon/2,+\infty)$ (this is done as in \cite{BielAGAG}: one can conjugate $\beta$ to be lower-triangular on $[x+\epsilon/2,+\infty)$ by a unitary gauge transformation; \eqref{complex} then implies that $\alpha$ is also lower-triangular).
Then the a priori estimates from section 1 in \cite{BielAGAG} show that \begin{equation}|\alpha_{ij}(t)|+|\beta_{ij}(t)|\leq Me^{-\lambda R_{ij}t}\label{betaij}\end{equation} for $i>j$ and $t\geq x+\epsilon$. Here $R_{ij}=|\operatorname{Re} \alpha_{ii}(+\infty)- \operatorname{Re} \alpha_{jj}(+\infty)|+|\beta_{ii}(+\infty)-\beta_{jj}(+\infty)|$ and $M,\lambda>0$ are constants depending only on $\delta,\epsilon$ (and the Lie algebra to which $\alpha,\beta$ belong). For the diagonal part of $\alpha$ one has the following estimate (\cite{BielAGAG}, end of section 1): $$|\operatorname{Re}\alpha_{ii}(t)-\operatorname{Re}\alpha_{ii}(+\infty)|\leq K$$ for all $i$ and $t\geq x+\epsilon$, $K=K(\delta,\epsilon)$. Then the real Nahm equation \eqref{real} gives $$\frac{d}{dt}(\operatorname{Re}\alpha_{ii})\leq M\sum_{j\neq i} R_{ij} e^{-\lambda R_{ij}t},$$ from which we conclude that, for all $i$ and $t>x+\epsilon$, \begin{equation}|\operatorname{Re}\alpha_{ii}(t)-\operatorname{Re}\alpha_{ii}(+\infty)|\leq M e^{-\lambda Rt},\quad R=\min R_{ij}.\label{alphaii}\end{equation} Notice also that we can use the gauge freedom to make $\operatorname{Im} \alpha_{ii}$ constant on $[x+\epsilon,+\infty)$. \par Now, $\phi_1$ was defined by a complex gauge transformation $p(t)$, with $p(\infty_k)=1$ and $p(t)=1$ on each $[\mu_k,\mu_k+c/2]\cup[\mu_{k+1}-c/2,\mu_{k+1}]$, making $\alpha$ and $\beta$ constant and diagonal on $[[\mu_k+c,\mu_{k+1}-c]]$. Thus we conclude that $(\tilde{\alpha}, \tilde{\beta})=\phi_1(\alpha,\beta)$ satisfies \begin{equation}|\tilde{\alpha}(t)-\alpha(t)|=\begin{cases} 0 & \text{if $t\in[\mu_k,\mu_k+c/2]\cup[\mu_{k+1}-c/2,\mu_{k+1}]$}\\ O\bigl(\exp\{-\lambda R_k t\}\bigr) & \text{if $t\in[[\mu_k+c,\mu_{k+1}-c]]$},\end{cases}\label{est1}\end{equation} and similarly for $\beta$ and for the derivative of $\alpha$. We now consider $\phi_2$.
After cutting off the solutions, we obtain a solution $(\hat{\alpha},\hat{\beta})$ on $[\mu_1,\mu_N]$ to the complex Nahm equation which satisfies $$F(\hat{\alpha},\hat{\beta}):=\frac{d\,}{dt}\left(\hat{\alpha}+ \hat{\alpha}^\ast\right)+[\hat{\alpha},\hat{\alpha}^\ast]+ [\hat{\beta},\hat{\beta}^\ast]=O(e^{-\lambda R}).$$ We know from the work of Hurtubise that there is a unique element of ${\cal G}^{\Bbb C}/{\cal G}$ such that any element $g(t)$ in this orbit takes $(\hat{\alpha},\hat{\beta})$ to an element of $M$. We have \begin{lemma} The gauge transformation $g$ satisfies $|g^\ast g-1|=O(e^{-\lambda R})$ uniformly on $[\mu_1,\mu_N]$.\label{gissmall}\end{lemma} \begin{pf} Using Lemma 2.10 in \cite{Don} and a simple comparison theorem (\cite{BielCrelle}, Lemma 2.8), one shows that the real equation can be solved on each $[\mu_k,\mu_{k+1}]$ by a complex gauge transformation $g_k(t)$ with $g_k(\mu_k)=g_k(\mu_{k+1})=1$ and $|g_k^\ast g_k-1|$ uniformly bounded by $O(e^{-\lambda R})$. Furthermore, near $\mu_k$, $|g_k^\ast g_k(t)-1|\leq (t-\mu_k)ce^{-\lambda R}$ and similarly near $\mu_{k+1}$. Therefore the derivative of $g_k^\ast g_k$ at $\mu_k,\mu_{k+1}$ is bounded by $ce^{-\lambda R}$. This shows that, while the resulting $\check{\alpha}$ does not satisfy the matching conditions at the $\mu_k$, the jumps are of order $O(e^{-\lambda R})$. Hurtubise shows in \cite{Hurt} that one can now match the solutions by a unique (complex) gauge transformation $g^\prime$. Since both $(\check{\alpha},\check{\beta})$ and $g^\prime (\check{\alpha},\check{\beta})$ satisfy the real equation, Lemma 2.10 in \cite{Don} implies that $g^{\prime\ast} g^\prime$ is bounded by its values at the points $\mu_k$. Let $\phi$ (resp. $-\psi$) be the logarithm of the maximum (resp. minimum) of the eigenvalues of $g^{\prime\ast} g^\prime$.
The proofs of Propositions 2.20 and 2.21 in \cite{Hurt} show that the jumps $\Delta\dot{\phi}, \Delta\dot{\psi}$ of the derivatives $\dot{\phi}$ and $\dot{\psi}$ are of order $e^{-\lambda R}$ at each $\mu_k$. We then conclude, by going through the proof of Lemma 2.19 in \cite{Hurt}, that at each $\mu_k$ we have $\phi(\mu_k)\leq c \Delta\dot{\phi}+O(e^{-\lambda R})$ and $\psi(\mu_k)\leq c \Delta\dot{\psi}+O(e^{-\lambda R})$ for some $c<0$ (depending only on the $\mu_j$). This shows that $\phi(\mu_k)$ and $\psi(\mu_k)$ are both of order $e^{-\lambda R}$, which finishes the proof.\end{pf} {\bf Part 3: Estimates for the tangent vectors.} Recall that a tangent vector to a moduli space of solutions to Nahm's equations is a quadruple $t_0,\dots,t_3$ satisfying the equations \eqref{tangent}. We shall write $a=t_0+it_1$ and $b=t_2+it_3$. Then the equations \eqref{tangent} can be written as \begin{equation} \dot{a}=[\alpha^\ast,a]+[\beta^\ast,b],\label{treal}\end{equation} \begin{equation} \dot{b}=[\beta,a]+[b,\alpha].\label{tcomplex}\end{equation} If the moduli space consists of solutions defined on several adjoining intervals, then $a$ and $b$ also satisfy appropriate matching conditions at the endpoints. \par We shall need a priori estimates for solutions of the above equations in $\tilde{F}_{n,m}(c,c^\prime)$. Let us write $\bigl((\alpha^-,\beta^-),(\alpha^+,\beta^+)\bigr)$ for a representative of $\tilde{F}_{n,m}(0,0)$ (and so of any $\tilde{F}_{n,m}(c,c^\prime)$) and then $x_i^-,z_i^-$ (resp. $x_i^+,z_i^+$) for the values of $\operatorname{Re}\alpha^-,\beta^-$ (resp. $\operatorname{Re}\alpha^+,\beta^+$) at $-\infty$ (resp. at $+\infty$). Let us also write $R^{\pm}=\min \{|x_i^{\pm}-x_j^{\pm}|+|z_i^{\pm}-z_j^{\pm}|;i\neq j\}$, $Z^\pm= \min \{|z_i^{\pm}-z_j^{\pm}|;i\neq j\}$ and $S=\min \{|x_i^{-}-x_j^{+}|+|z_i^{-}-z_j^{+}|\}$.
We have \begin{proposition} For any $\delta,\epsilon,\nu>0$ there exist constants $M,C,\lambda>0$ depending only on $m,n,\epsilon,\delta,\nu$ with the following property: \par Let $\bigl((\alpha^-,\beta^-),(\alpha^+,\beta^+)\bigr)$ be a representative of $\tilde{F}_{n,m}(0,0)$ with $Z^{\pm}\geq \delta>0$, $S\geq \nu >0$ and $R^+,R^->C$. If $((a^-,b^-),(a^+,b^+))$ is a tangent vector to $\tilde{F}_{n,m}(0,0)$ at $\bigl((\alpha^-,\beta^-),(\alpha^+,\beta^+)\bigr)$ and \begin{equation}A^2=|a^-(-\infty)|^2+|b^-(-\infty)|^2+|a^+(+\infty)|^2+ |b^+(+\infty)|^2,\label{A2}\end{equation} then for all $t\geq \epsilon$ \begin{equation} |a^-(-t)-a^-(-\infty)|+|b^-(-t)-b^-(-\infty)| \leq Me^{-\lambda R^-t}A,\label{a-}\end{equation} \begin{equation} |a^+(t)-a^+(+\infty)|+|b^+(t)-b^+(+\infty)| \leq Me^{-\lambda R^+t}A.\label{a+}\end{equation}\label{estab}\end{proposition} \begin{pf} It is enough to prove the estimates for $A=1$. We can assume, as in part 2 of this proof, that $\alpha^\pm(t),\beta^\pm(t)$ are lower-triangular for $|t|\geq \epsilon/2$. For the time being we consider only $\alpha^+,\beta^+$ and we omit the superscript $+$. We choose $C$ so that the right-hand side of \eqref{betaij} is small compared to $R_{ij}^{-1}$ and the right-hand side of \eqref{alphaii} is small compared to $R^{-1}$ at $t=\epsilon$. Then, if we write $y$ for the diagonal components and $x$ for the off-diagonal components of $a$ and $b$, we obtain from equations \eqref{treal} and \eqref{tcomplex} \begin{equation} \dot{y}=A(t)x,\quad |A(t)|\leq Me^{-\lambda Rt}.\label{yyy}\end{equation} On the other hand, if we differentiate the equations \eqref{treal} and \eqref{tcomplex}, we can write \begin{equation} \ddot{x}=D(t)x+B(t)y,\quad |B(t)|\leq Me^{-\lambda Rt},\enskip\exists_{s>0}\forall_z \operatorname{Re}\bigl(D(t)z,z\bigr)\geq s^2R^2 |z|^2.\label{xxx}\end{equation} Let $t_0\in[\epsilon,+\infty]$ be the first point for which $|x(t_0)|^2+|y(t_0)|^2\leq |a(+\infty)|^2+ |b(+\infty)|^2\leq 1$.
Let $X=\sup\{|x(t)|;t\in [t_0,+\infty]\}$, $Y=\sup\{|y(t)|;t\in [t_0,+\infty]\}$. Both $X$ and $Y$ are finite. Equation \eqref{yyy} implies that \begin{equation}|y(t)-y(t_0)|\leq \frac{MX}{R}.\label{yt}\end{equation} Similarly, using \eqref{xxx} and a comparison theorem (the same argument as on p. 133 in \cite{BielAGAG}), one concludes that \begin{equation}|x(t)|\leq \bigl(|x(t_0)|+MY\bigr)e^{-\lambda R(t-t_0)}.\label{xt}\end{equation} From this, changing $C$ if necessary (i.e. taking larger $R$), we conclude that there is a constant $P$ such that $X+Y\leq P$. Using \eqref{yyy} and \eqref{xt} again, we obtain that the estimate \eqref{a+} holds for $t\geq t_0$. This implies that \begin{equation}\int_{t_0}^{+\infty} \bigl(|a^+(t)|^2+|b^+(t)|^2- |a^+(+\infty)|^2-|b^+(+\infty)|^2\bigr)dt\leq \rho,\label{fromt0}\end{equation} where $\rho=\rho(n,\delta,\epsilon)$ and can be made arbitrarily small by changing $C$ (recall that $A=1$). We also have that for $t\in[\epsilon,t_0]$ the expression under the integral sign in \eqref{fromt0} is nonnegative. We can do exactly the same for $\alpha^-,\beta^-,a^-,b^-$. Let $s_0$ denote the negative number with the same properties as $t_0$. We now compute the length $L$ of the vector $\bigl((a^-,b^-),(a^+,b^+)\bigr)$ in the metric of $\tilde{F}_{n,m}(\epsilon,\epsilon)$. We can write (cf. \eqref{smetric}; the inequality, rather than equality, below stems from the fact that, for $n=m$, there are additional (positive) terms): \begin{multline} L^2\geq\int_{-\epsilon}^0\bigl(|a^-(t)|^2+|b^-(t)|^2\bigr)dt +\int_0^\epsilon \bigl(|a^+(t)|^2+|b^+(t)|^2\bigr)dt\\ +\int^{-\epsilon}_{s_0}\bigl(|a^-(t)|^2+|b^-(t)|^2- |a^-(-\infty)|^2-|b^-(-\infty)|^2\bigr)dt \\+\int^{t_0}_\epsilon \bigl(|a^+(t)|^2+|b^+(t)|^2- |a^+(+\infty)|^2-|b^+(+\infty)|^2\bigr)dt\\ +\int_{-\infty}^{s_0}\bigl(|a^-(t)|^2+|b^-(t)|^2- |a^-(-\infty)|^2-|b^-(-\infty)|^2\bigr)dt\\ +\int_{t_0}^{+\infty} \bigl(|a^+(t)|^2+|b^+(t)|^2- |a^+(+\infty)|^2-|b^+(+\infty)|^2\bigr)dt.
\label{longformula}\end{multline} Each of the first four integrals is nonnegative, while the last two have their absolute value bounded by $2\rho$ with $\rho$ as small as we wish. Let us write $T$ for the sum of the third and fourth integrals. It follows that $T\leq L^2+2\rho$. The explicit formula for the metric on $\tilde{F}_{n,m}(c,c^\prime)$ found in sections \ref{five} and \ref{six} implies that $L^2\leq P$, where $P>0$ depends only on $m,n,\epsilon,\nu,\delta$ (notice that this bound is independent of the actual value of $\epsilon_{ij}$ in \eqref{sij}). Thus $T\leq P^\prime$. Now, if both $t_0$ and $-s_0$ are smaller than $2\epsilon$, then we are done (by replacing the original $\epsilon$ with $2\epsilon$). Suppose that $t_0\geq 2\epsilon$. Since the integrands in the third and fourth integrals are nonnegative, we conclude from $T\leq P^\prime$ that there is a point $t_1\in [\epsilon,2\epsilon]$ with $|a^+(t_1)|^2+|b^+(t_1)|^2\leq P^\prime/\epsilon$. We can now repeat the arguments after \eqref{xxx} and conclude that the estimate \eqref{a+} holds for $t\geq 2\epsilon$. We can deal similarly with the case $s_0\leq -2\epsilon$. \end{pf} We shall also need the following strengthening of the last result: \begin{lemma} With the same assumptions and notation as in Proposition \ref{estab}, we can replace the estimates \eqref{a-} and \eqref{a+} with: $$ |a_{ij}^-(-t)|+|b_{ij}^-(-t)| \leq Me^{-\lambda R_{ij}^-t}A,\qquad |a_{ij}^+(t)|+|b_{ij}^+(t)| \leq Me^{-\lambda R_{ij}^+t}A,$$ for all $i\neq j$ and $t\geq\epsilon$.
Here $R_{ij}^{\pm}=|x_i^\pm -x_j^\pm|+ |z_i^\pm -z_j^\pm|$.\label{expotan}\end{lemma} \begin{pf} We differentiate the equations \eqref{treal} and \eqref{tcomplex} and proceed as in Proposition 3.12 of \cite{BielCrelle}, using \eqref{a-} and \eqref{a+}.\end{pf} {\bf Part 4: Proof of Theorem \ref{finalasymptoticmetric}.} From its definition in section \ref{one}, $F_{\tilde{m}_1,\dots,\tilde{m}_{N-1}}(\mu_1,\dots,\mu_N)$ is the hyperk\"ahler quotient, by a product of tori, of the product $$\tilde{F}_{m_1}(c_1)\times \tilde{F}_{m_2,m_1}(c_2,c_2^\prime ) \times\dots\times \tilde{F}_{m_{N-1},m_{N-2}}(c_{N-1},c_{N-1}^\prime)\times \tilde{F}_{m_{N-1}}(c_N^\prime)$$ where $c_i+c_{i+1}^\prime=\mu_{i+1}-\mu_{i}$, $i=1,\dots,N-1$. The matrices $\Phi$ for each factor are of the form \eqref{GMtype} with the $s_{ij}$ given by \eqref{sij} (for the first and last factor, the metric is the Gibbons-Manton metric given by \eqref{GM}). On the hyperk\"ahler quotient these matrices are simply added together (after viewing each of them as a submatrix of an $m\times m$ matrix, $m=m_1+\dots+m_{N-1}$). Thus the result is proved as soon as we show that all $\epsilon_{ij}$ of \eqref{sij} are zero. Let us show this. \begin{lemma} In the formula \eqref{sij}, all $\epsilon_{ij}$ are equal to zero.\label{epsilonij}\end{lemma} \begin{pf} Suppose that this is not true. Let us write the norm of a vector $v$ tangent to $\tilde{F}_{n,m}(1,1)$ as in the formula \eqref{longformula} with $\epsilon=t_0=-s_0=1$. From the estimates \eqref{a-} and \eqref{a+}, it follows that $\|v\|^2\geq -MA^2/R$ for a constant $M$ depending only on $m,n,\delta$ and $\nu$. Here $R=\min\{R^-,R^+\}$. Thus, for a sufficiently large $R$, $\|v\|^2\geq -\rho A^2$, $A$ defined by \eqref{A2}, with $\rho$ as small as we wish.
However, if any $\epsilon_{ij}=1$, then we can find a point ${\bf x}$ in $\tilde{F}_{n,m}(1,1)$ with $S=S({\bf x})\geq \nu$ and $R$ arbitrarily large such that there is a tangent vector $v$ at ${\bf x}$ with $\|v\|^2\leq -c A^2/\nu$ for some $c=c(n,m)$. This contradicts $\|v\|^2\geq -\rho A^2$ and so the lemma is proved.\end{pf} {\bf Part 5: Proof of Theorem \ref{estimates}.} In Appendix B we prove a general theorem which allows us to reduce the estimates to one-sided estimates on the metric tensors. This is so because the asymptotic metric is quasi-isometric to the flat metric in the coordinates \eqref{cancoor}. This last fact follows from the explicit formula for the metric and the twistor space (cf. Remark 2 after Theorem \ref{estimates}). Thus we only have to show that \begin{equation} \phi^\ast g\leq \bigl(1+Me^{-\lambda R}\bigr)\tilde{g}\label{estXX}\end{equation} in the region $U(\gamma,\delta,C)$ for some $C,M,\lambda>0$ depending only on $\gamma,\delta$. Once we have this, we apply Theorem \ref{B1} (and Remark \ref{B2}) to the region where $R\geq R_0$ and obtain that the estimate \eqref{DkSij} with $R=R_0$, $R_0$ arbitrary, holds in the region where $R\geq R_0+1$, in particular for all points with $R=R_0+1$. Since $R_0$ is arbitrary, this will prove the theorem. \par Therefore we are going to show \eqref{estXX}. We start with a vector $(a,b)$ tangent to $U(\gamma,\delta,C)$, where $C=C(\gamma,\delta)$ is determined by the validity of the estimates below. Since $\gamma>0$, the metric is positive-definite and, furthermore, quasi-isometric to the flat metric in the coordinates \eqref{cancoor} or the coordinates given by positions and phases of particles. Let us assume that the norm of $(a,b)$ is $1$ in this metric. Then $\sum_k \bigl(|a(\infty_k)|^2+|b(\infty_k)|^2\bigr)\leq B$, where $B$ depends only on $\gamma$.
We also have estimates of the form \eqref{a-} and \eqref{a+}: \begin{equation} |a(t)-a(\infty_k)|+|b(t)-b(\infty_k)|\leq Me^{-\lambda R_k t}B\quad \text{if $t\in [[\mu_k+\epsilon,\mu_{k+1}-\epsilon]]$}, \label{a-infty}\end{equation} as well as the stronger estimates of Lemma \ref{expotan}. Also, by writing the metric as in \eqref{longformula} with $\epsilon=t_0=c$ and using \eqref{a-infty}, we get \begin{equation} \sum_{k=1}^{N}\int_{\mu_k-c}^{\mu_k+c} \bigl(|a(t)|^2+|b(t)|^2\bigr)dt \leq MB.\label{L2a}\end{equation} The left-hand side includes the sum of the Euclidean norms of the pairs of vectors $u_k,v_k$ which give us the matching conditions for $(a,b)$ at $\mu_k$ in the case when $m_{k-1}=m_k$. \par Recall that the map $\phi$ was the composition of $\phi_1$ and $\phi_2$. The map $\phi_1$ was given by a complex gauge transformation $p(t)$ for which, by part 2 of the proof, $|p(t)-1|$ can be uniformly estimated by $O(e^{-\lambda Rt})$. Therefore, after we conjugate $a,b$ by $p$, they still satisfy \eqref{a-infty}. Moreover we have $\bigl|\|(pap^{-1},pbp^{-1})\|-1\bigr|=O(e^{-\lambda R})$. In order to obtain the vector $d\phi_1\bigl((a,b)\bigr)$, one has to make $pap^{-1}$ and $pbp^{-1}$ constant and diagonal on each $[[\mu_k+c,\mu_{k+1}-c]]$ ($c$ is defined in part 1) by an infinitesimal complex gauge transformation $\rho_1$ (with $\rho_1(\infty_k)=0$ etc.). From the estimates \eqref{a-infty} and those of Lemma \ref{expotan} on $pap^{-1}$ and $pbp^{-1}$, this changes the norm of $p(a,b)p^{-1}$ by something of order $e^{-\lambda R}$. Furthermore, the $L^2$-estimate \eqref{L2a} holds for $d\phi_1(a,b)$. At the next stage, we restrict $d\phi_1(a,b)$ to $[\mu_1,\mu_N]$. Since $d\phi_1(a,b)$ is constant and diagonal on the union of the $[[\mu_k+c,\mu_{k+1}-c]]$, its norm in the metric $\tilde{g}$ is the same as the norm of the restriction $(\hat{a},\hat{b})$ in the metric $g$. Now we conjugate $(\hat{a},\hat{b})$ by the complex gauge transformation $g(t)$ of Lemma \ref{gissmall}.
Using the estimate of that lemma and the estimate \eqref{L2a} for $(\hat{a},\hat{b})$ we conclude that \begin{equation} \bigl|\|(g\hat{a}g^{-1},g\hat{b}g^{-1})\|-1\bigr|\leq Me^{-\lambda R},\label{almostthere}\end{equation} for some $M,\lambda>0$ depending only on $\gamma$ and $\delta$. The vector $(g\hat{a}g^{-1},g\hat{b}g^{-1})$ solves the equation \eqref{tcomplex} but not \eqref{treal}. This is the final step: we obtain the vector $d\phi(a,b)$ by acting on $(g\hat{a}g^{-1},g\hat{b}g^{-1})$ with a complex infinitesimal gauge transformation, so that the resulting vector solves \eqref{treal}. However, the equation \eqref{treal} is the condition of orthogonality to complex infinitesimal gauge transformations and, hence, the norm of the vector $d\phi(a,b)$ is not greater than the norm of $(g\hat{a}g^{-1},g\hat{b}g^{-1})$. This and \eqref{almostthere} prove \eqref{estXX}, and so, by the discussion above, also Theorem \ref{estimates}\hfill $\Box$\bigskip As remarked after the statement of the above theorem, there is a likely generalization of this result. Suppose that it is only particles of a given type, say $k_0$, that separate (recall that the type of particle $i$ is the smallest $k$ for which $i\leq m_1+\dots+m_k$). Then the metric on $F_\sigma(\mu)=F_{m_1,\ldots,m_{N-1}}(\mu_1,\ldots,\mu_N)$ should get close to the metric on $F_{\tilde{\sigma}}(\mu)$, where $\tilde{\sigma}(k)=m_k$ if $k\neq k_0$ and $\tilde{\sigma}(k_0)=\tilde{m}_{k_0}$. Similarly, if the particles of types $k_1,\dots,k_s$ separate, the metric should be close to the metric on $F_{\tilde{\sigma}}(\mu)$, where $\tilde{\sigma}(k)=m_k$ if $k\neq k_1,\ldots,k_s$ and $\tilde{\sigma}(k_j)=\tilde{m}_{k_j}$, for $j=1,\ldots,s$. All of these moduli spaces have dimension $4(m_1+\ldots+m_{N-1})$.
In general the metric on $F_{\tilde{\sigma}}(\mu)$ will be simpler than the one on $F_\sigma(\mu)$ (it has a tri-Hamiltonian action of a $(\sum_{i=1}^s m_{k_i})$-dimensional torus), but it is only in the case when $\{k_1,\ldots,k_s\}=\{1,\ldots,N-1\}$ that the metric is algebraic. Finally we shall discuss the topology of the asymptotic moduli space. First of all, from Proposition \ref{bundle}, the orbit space of $F_{\tilde{m}_1,\dots,\tilde{m}_{N-1}}(\mu_1,\dots,\mu_N)$ is $\prod \tilde{C}_{m_k}\bigl({\Bbb R}^3\bigr)$, and so particles of different types can take the same position. Now recall from section \ref{zero} that the type $t(i)$ of the particle $i$ is defined as $\min\{k;i\leq \sum_{s\leq k}m_s\}$. It follows easily from Proposition \ref{bundle} that the set of principal orbits of $T^m$, $m=m_1+\dots+m_{N-1}$, on $F_{\tilde{m}_1,\dots,\tilde{m}_{N-1}}(\mu_1,\dots,\mu_N)$ is a bundle $P$ over $$C=\bigl\{({\bf x}_1,\dots,{\bf x}_m)\in{\Bbb R}^3\otimes {\Bbb R}^m; |t(i)-t(j)|\leq 1\implies {\bf x}_i\neq {\bf x}_j\bigr\}.$$ A basis of the second integral homology of $C$ is given by the spheres $S_{ij}$ defined by \eqref{basis}, where now $i,j$ run over the set $\{(i,j);i<j\enskip \text{and}\enskip |t(i)-t(j)|\leq 1\}$. As in Lemma 7.1 in \cite{BielCMP}, we obtain that the bundle $P$ is determined by the element $(h_1,\ldots,h_{m})$ of $H^2\bigl(C,{\Bbb Z}^{m}\bigr)$ such that $$h_k(S_{ij})=\begin{cases}s_{ij} & \text{if $k=i$}\\ -s_{ij} & \text{if $k=j$} \\ 0 & \text{otherwise},\end{cases}$$ where the $s_{ij}$ are given by \eqref{s-asym}.
\section*{Acknowledgments} \printbibliography[keyword=primary] \newrefcontext[sorting=none] \printbibliography[env=bibliographyNUM, title={Additional References}, keyword=secondary, resetnumbers] \end{document} \section{Introduction} \label{sec:intro} Medical visualization research to date has focused primarily on supporting medical experts (radiologists, pathologists, surgeons) in diagnosis and treatment and---to a lesser extent---medical students, in particular for anatomy education. Medical information and research, however, are also interesting to non-experts, i.e., a general audience that comprises patients and their relatives along with those with an interest in science. Interactive medical visualization aiming at this type of audience requires different design approaches with easy-to-understand representations~\cite{Bottinger2020} than in systems such as radiology workstations that are aimed at experts. Narrative visualization combines storytelling techniques with interactive graphics to appeal to a general audience~\cite{Segel2010}. It aims to present the data in a traceable progression that is memorable and easier to understand~\cite{Figueiras2014}. There are two types of storytelling: synchronous and asynchronous storytelling~\cite{Lee2015}. In synchronous storytelling, the narrator is in direct contact with the audience, e.g., live presentations, whereas asynchronous stories do not require direct audience contact. These stories take the form of recorded videos, static graphics, or visually guided tours through complex processes with interactive visualizations. Ynnerman et al.~\cite{Ynnerman2018} coined the term \emph{exploranation} for merging exploratory visualizations that are traditionally made for experts with explanatory visualization techniques. While this supports visual knowledge acquisition for non-experts, it requires more guidance and automatically generated content.
For medical imaging data this may comprise labeled visualizations and animations highlighting relevant anatomical structures, e.g., vessel branching around an associated pathological structure. Visualizations of population-based data that indicate the most frequent tumor locations or metastasis pathways may also be interesting to the public. Moreover, visualizations of health survey data demonstrating avoidable lifestyle-related risk factors for diseases can motivate the public to adopt healthier lifestyles. While \emph{“scientific outreach”} is already an essential topic for the visualization of astronomy data~\cite{Bock2019}, climate data~\cite{Bottinger2020reaching}, and cell biology data~\cite{Kouvril2021}, the same has not been true for interactive medical visualization research. Exceptions include epidemiological data, e.g., the COVID-19 Dashboard by Johns Hopkins University, which supports map-based visualization, a selection of interesting countries, and time-based visualization of cases and fatalities. Early, limited authoring tools were developed for generating interactive medical stories based on volume data~\cite{Wohlfart2006,Wohlfart2007}. However, medical data also includes other data types, e.g., clinical images, 3D models, and flow data. Several techniques for visualizing medical imaging data lend themselves well to the storytelling principles of narrative medical visualization, which allow limited freedom for exploration. These include clipping planes which are automatically moved, cutaways or automated ghosted views based on structure selection, and automatically generated animated transitions. However, concept-driven content, e.g., informative infographics, may be highly valuable to engage general audiences in scientific communication~\cite{Rheingans2020}.
\noindent \textbf{Scope of this Paper.} In this work, we discuss the potential of including interactive exploration of medical data in narrative visualization for a general audience, i.e., members of the general public who are interested in understanding diseases and their treatment but lack detailed medical knowledge or familiarity with scientific visualizations. We further identify three general public subgroups: patients with a direct link to a specific disease, patient relatives, and people interested in medicine, see Figure~\ref{fig:overview_audiences}. Following an asynchronous storytelling method, we show how to leverage narrative techniques to present medical data in a way that is both compelling and understandable. Our proof-of-concept focuses on the suitability and arrangement of narrative techniques to tell stories about three common diseases that are related to three important structures of the human body: organs, vessels, and bones. Our inspiration for these disease stories draws in part from health websites such as \textit{WebMD} and \textit{UpToDate}. Similar to other works dealing with narrative scientific visualization, we choose a touch screen as the medium so that the user can interact with the data during the story. In narrative visualization, stories can be mainly data-driven or concept-driven. We follow the suggestions by Segel and Heer~\cite{Segel2010} that data should enrich the story, while memorable visuals and interesting storytelling are its main components. Our key contributions are the following: \begin{itemize} \item We provide an overview of existing work in narrative visualization and, based on an analysis of a corpus of 30 medical stories, propose a template to structure medical visualization stories. \item We present three proof-of-concept medical stories that are enriched with interactive medical data visualization components to explain information around selected example diseases.
\item We identify promising areas for future research in narrative medical visualization. \end{itemize} \noindent \textbf{Organization.} Section~\ref{sec:ingrnarrvis} summarizes general narrative techniques based on seminal works in this field. Then, Section~\ref{sec:narrscivis} gives a brief insight into which other scientific visualization areas have used narrative techniques and how. Here, we also describe the associated transition from scientific visualization designed for experts to scientific visualizations for the general public, as well as challenges which arise in this process. Section~\ref{sec:exampmedstory} then describes the core of our paper. Based on the summary in Section~\ref{sec:ingrnarrvis}, we show how narrative techniques can be applied to medical data to generate stories for the general public. We then discuss various aspects of our conceptualized medical stories in Section~\ref{sec:discussion} and identify a research agenda that highlights promising aspects for future work in medical narrative visualization in Section~\ref{sec:agenda}. The paper is concluded in Section~\ref{sec:conclusion}. \section{Ingredients of Narrative Visualization} \label{sec:ingrnarrvis} A visual data story is composed of a series of specific facts, called \emph{story pieces}, that are supported by data~\cite{Lee2015}. These story pieces are visualized to convey important messages to the audience. Visualizations are enriched with story elements such as labels, arrows, links, and textual explanations to clearly emphasize these messages and avoid ambiguity. The story pieces should be arranged into scenes on the basis of a meaningful genre and design pattern to support the author's communication goal. General goals include informing or entertaining the audience. In the following, we summarize existing techniques to generate and transition between scenes.
In addition, we summarize genres and design patterns with suggestions for their use in medical visualization. \subsection{Generating and Transitioning Narrative Scenes} Segel and Heer~\cite{Segel2010} derive general design elements of narrative visualizations and examine the range of user guidance and interaction. Hullman and Diakopoulos~\cite{Hullman2011} build on this work to analyze 51 narrative visualizations, examining the rhetorical devices used. Stolper et al.~\cite{Stolper2016} extend this summary with novel data-driven storytelling techniques. Based on these ground-breaking works in the field of narrative visualization, we summarize existing story elements, how to connect them to form scenes, how to transition between scenes, and how to construct a scene path. \subsubsection{Story Elements} Visualizations are best complemented by other means of communication, and highlighting techniques are needed to guide the user through a story~\cite{Kosara2013}. \noindent \textbf{Text narration.} Text is the simplest way to explain data. \textit{Long-form texts} can be used to explain key points in detail and to introduce or summarize a topic. \textit{Headlines} or \textit{captions} can serve to draw attention to a story. \textit{Tooltips} can provide details when a user hovers their cursor over an element~\cite{Figueiras2014}. Text can also be used in the form of \textit{annotations} or \textit{labels} to designate important structures. \noindent \textbf{Audio narration} can be used to enhance visualizations~\cite{Segel2010}. This allows the viewer to focus more on the visuals, since the narrative is temporally linked to the visual elements. \noindent Moreover, \textbf{graphical properties} can be used to draw the reader’s attention. Elements can be highlighted using wrapped shapes, specific colors or techniques such as motion or close ups~\cite{Segel2010}.
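The tooltip and label elements described above amount to a lookup from structure identifiers to short explanatory texts. The following Python sketch is purely illustrative; the `TooltipStore` class, its API, and the example texts are our own assumptions, not part of any cited system.

```python
# Illustrative sketch (not from any cited system): a tooltip store that
# maps structure identifiers to short explanatory texts for text narration.

class TooltipStore:
    """Maps structure identifiers to tooltip texts shown on hover."""

    def __init__(self):
        self._tips = {}

    def add(self, structure_id, text):
        self._tips[structure_id] = text

    def on_hover(self, structure_id):
        # Return the stored text, or a neutral fallback so the story
        # never presents an empty tooltip box.
        return self._tips.get(structure_id, "No description available.")


tips = TooltipStore()
tips.add("liver", "The liver filters blood and produces bile.")
tips.add("portal_vein", "Carries blood from the intestines to the liver.")
```

Hovering over an annotated structure would then call `on_hover` with that structure's identifier and display the returned text next to the cursor.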
\subsubsection{Connection of Story Elements} To understand the explanatory nature of the interplay between story elements, connections must be made between them. Stolper et al.~\cite{Stolper2016} identified three basic ways to connect story elements. \begin{figure*}[t!] \centering \includegraphics[width=\linewidth]{Figures//Genre_2} \caption{\label{fig:genre} General genres of narrative visualization according to Segel and Heer~\cite{Segel2010}.} \vspace{-10px} \end{figure*} \noindent \textbf{Interaction} is an efficient way to connect story elements. Interactivity refers to the different ways a user can manipulate the visualization, e.g., filtering, hovering, zooming, rotating, and translating, and also to how the user learns these methods (explicit instruction, tacit tutorial, initial configuration)~\cite{Segel2010,Stolper2016}. The level of interaction ranges from passive narration, where no interaction is provided, to free exploration, where the user has no interaction constraints~\cite{Wohlfart2007}. Passive narration can be interrupted and the user can temporarily take control and change the presentation, e.g., by using \textbf{dynamic queries} to change the visual style of an object. Afterwards, the passive narration continues. \noindent \textbf{Color} is another option to link story elements. Consistent colors should be used to represent objects or attributes that appear in multiple visualizations~\cite{Stolper2016}. Color can also be used to connect text and visualizations by assigning text the same color as the associated visualized objects. However, the choice of color schemes and the design of the color map play a crucial role. Crameri et al.~\cite{Crameri2020} presented guidelines for designing color maps for scientific data, including perception effects to be considered. \noindent \textbf{Animations} can also be used to link objects, helping users relate complex processes in an understandable way.
Care must be taken to ensure that the user does not lose focus while context information is needed for orientation. Therefore, smooth transitions between different camera positions are required, where focus objects should be visually emphasized. \subsubsection{Defining Scene Transitions} Moving within and between visual scenes without disorienting the user is a fundamental aspect of storytelling. Segel and Heer~\cite{Segel2010} identified six types of transitions. One way is to keep the object change between scenes to a minimum, maintaining \textit{object continuity}. The number and style of objects should not be fundamentally changed between two cuts. Related to this is the concept of \textit{familiar objects}, which states that commonly used symbols should be used to represent facts. Another category involves meaningful movement of the virtual camera. The \textit{view angle} of the camera should change between two scenes or when moving within a scene, but not so much that completely different views are created. Also, strong changes in the \textit{camera movement speed} between adjacent scenes should be avoided. \textit{Continuity editing} is an established technique from the film industry which creates the impression that the story was shot in one piece without cuts. Another option is to use \textit{animated transitions}. Based on morphological transformations, objects of one scene can be changed into objects of another scene. \subsubsection{Defining a Scene Path} Data-driven stories are usually characterized by an author-specified order. The story is thus given a structure that is supported by frequent navigation aids. In addition to the specification of a strict path (linear story), there is the possibility of providing the user with several paths to choose from (user-directed story)~\cite{Segel2010}. Commonly used techniques to navigate through a story are \textit{next/previous buttons} and \textit{scrolling}.
\textit{Flowchart arrows} can help to convey the intended narrative structure of the story. To navigate to a specific location, \textit{menu selections} or \textit{interactive maps} can be provided. To show the user where s/he is in the story, \textit{section header buttons}, \textit{breadcrumbs} in the form of points, and \textit{timelines} in the form of progress bars or checklists are often used. \subsection{Selecting Narrative Genres} To communicate the story in an understandable way, consideration must be given to how story elements are arranged and combined. Segel and Heer~\cite{Segel2010} have defined seven genres: magazine style, annotated chart, partitioned poster, flow chart, comic strip, slide show, and film/video/animation, as depicted in Figure~\ref{fig:genre}. These genres differ in the number of scenes shown and the arrangement of story elements within a scene. The choice of genre depends on the data complexity, as well as the intended audience and medium. For narrative medical visualization, \textit{magazine styles}, where a 2D image is embedded in text, could be adapted to integrate 3D models, with the surrounding text explaining visible structures. In contrast, \textit{flow charts} can be used to show medical processes such as disease treatment in an abstract way. \textit{Annotated charts} can be used to present statistical information, e.g., the prognosis as a function of the selected therapy. The combination of images and diagrams in a \textit{partitioned poster} is well suited to provide overviews or summaries of medical explanations. \textit{Slide shows} are commonly used in business presentations. For the application to medical data, the user's attention should be kept by interactive components, where s/he is encouraged to interactively explore the data. \textit{Comic strips} consist of highly abstracted illustrations that contain only brief annotations.
An interesting scenario would be the cartoon-style illustration of medical aspects for children. \textit{Videos and animations} would be well suited to support the exploration of 3D medical data. Optimal views on surfaces, such as vessels and organs, could show structures of interest, e.g., the resection of a tumor. \subsection{Selecting Narrative Design Patterns} \label{subsec:narpatterns} Depending on author intent and the audience, a story can be told in different ways. Bach et al.~\cite{Bach2018} described eighteen narrative design patterns that can be used individually or in combination to tell data stories. Each pattern has a specific purpose, and the patterns fall into five overarching groups: \textit{argumentative}, \textit{structuring}, \textit{framing}, \textit{emotional}, and \textit{engaging} patterns. Argumentative patterns include comparisons, concretizations, and repetitions, to present, support, reinforce, contradict, or discuss a particular statement. They can be used, for example, to compare treatment options, to present information that users should remember (e.g., preventable risk factors), or to present the benefits of protective measures, such as vaccinations. Structuring patterns include concepts such as revealing, slowing down, and speeding up. Framing patterns determine how the story content is perceived through techniques such as creating familiar settings, making guesses, defamiliarization, breaking conventions, hiding data, and using physical metaphors. Structuring and framing patterns are important to present medical data, which usually contains different data types, such as volume data, 3D models, quantitative values, and qualitative flow information. To communicate this data to general audiences, it must be simplified, details must be omitted, and interesting aspects, e.g., in statistical diagrams, should be revealed. Slowing down and speeding up could, e.g., be used to show blood flow animations.
In time ranges where interesting flow occurs, the animation is slowed down, and in less interesting ranges the animation is sped up~\cite{Kolesar2014}. Emotional patterns such as directly addressing the audience and presenting individual stories are designed to help understand and share important feelings in the story. To engage the user in the story, techniques such as rhetorical questioning, calls to action, and interactive exploration can be used. \section{Narrative Visualization of Scientific Data} \label{sec:narrscivis} Numerous works have combined narrative techniques and information visualization~\cite{Tong2018,Gershon2001}. In contrast, there is little research on combining scientific visualization with narratives~\cite{Ma2011}. In this section, we summarize the transition from expert-driven visualizations of scientific data, i.e., spatio-temporal data, to visual representations of these data for non-experts. We also provide insights into the challenges that arise during this transfer, especially for medical data. Finally, we present selected scientific applications outside of medicine, where narrative techniques have already been used. \subsection{From Scientific to General Audience} \label{subsec:sciens_to_pub} Traditionally, visualizations were used by experts to gain detailed insights into complex data. Experts have a deep background knowledge of the respective domain and are able to evaluate and interact with complex visualizations. In contrast, a general audience includes people with varying levels of expertise who differ in terms of age and cultural backgrounds~\cite{Bottinger2020}. Bringing scientific results to a general audience is challenging, as it [...] “is quite a different matter to compel attention and understanding in a diverse, hurried, skeptical population of readers than to communicate with an eager, familiar group of associates”~\cite{Dibiase1990}.
Therefore, the purpose of the visualization should be clearly defined in the context of the target audience in order to fulfill the intended communication goals~\cite{Bottinger2020}. Results from cognitive science show that embedding data in a narrative makes it more exciting and memorable~\cite{Ma2011}. For this purpose, complex scientific results need to be reduced, summarized, and generalized by means of simplified and understandable visualizations. Compromises have to be made in terms of accuracy and completeness, since showing too many details can make it difficult to convey a clear message~\cite{Bottinger2020}. To create a narrative visualization, the target audience must be defined as precisely as possible~\cite{Bottinger2020}. The background knowledge and the goals of the audience are decisive for the design, the level of interaction allowed, and how strongly the audience is guided through the story. With regard to medical data, different audience groups such as scientists, students, patients, health care providers, or policymakers are conceivable, as shown in Figure~\ref{fig:overview_audiences}. \subsection{Challenges in Narrative Visualization} Several challenges need to be considered when designing narrative visualizations for rich scientific data~\cite{Bock2019,Ynnerman2020}. We summarize the main challenges and their relation to medical data. \noindent \textbf{Varying Spatial-Temporal Scales.} In scientific data, the spatial and temporal scales of objects can vary greatly~\cite{Bock2019}. Navigation and interaction aids are needed that identify points of interest both spatially and temporally. In medical data, the sizes of structures can vary greatly. For example, organs, such as the liver, are several centimeters in diameter, while embedded structures, such as vessels or cells, are many times smaller. Similarly, time scales can range from the hours a treatment takes to the years of long-term follow-up of diseases.
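One common way to keep navigation usable across such widely varying spatial scales is to map a linear zoom slider logarithmically onto the spatial scale, so that equal slider steps correspond to equal magnification factors. The following Python sketch is a hedged illustration; the scale bounds and the slider convention are our own assumptions, not values from any cited system.

```python
import math

# Illustrative scale bounds (assumed): from cell level (micrometres)
# up to organ level (decimetres), expressed in metres.
SCALE_MIN = 1e-6
SCALE_MAX = 1e-1

def slider_to_scale(t):
    """Map a linear slider position t in [0, 1] to a spatial scale.

    Logarithmic interpolation gives equal slider steps equal
    magnification factors, which keeps navigation usable across
    several orders of magnitude.
    """
    t = min(max(t, 0.0), 1.0)  # clamp out-of-range input
    log_min = math.log10(SCALE_MIN)
    log_max = math.log10(SCALE_MAX)
    return 10 ** (log_min + t * (log_max - log_min))
```

With these assumed bounds, the slider midpoint lands at roughly a third of a millimetre, i.e., at vessel scale between the organ and cell extremes.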
\noindent \textbf{Varying Data Sources.} Another problem is the variety of data types coming from different sources~\cite{Bock2019}. Medical data can include radiological and histological image data, numerical values, and statistical information, which can be acquired with different devices, e.g., different scanners. Moreover, biomedical simulation data may be relevant. \noindent \textbf{Data Access Issues.} Another challenge that is particularly relevant to medical data is making data available to the general public. From an ethical point of view, mere anonymization of data is not sufficient for their use in public scenarios. One solution to this could be to use data derived from data donors. \noindent \textbf{Interaction and Navigation.} The exploration of medical data by the general public requires reducing complexity in terms of interaction and navigation compared to systems for experts~\cite{Ynnerman2020}. Otherwise, users can lose their desire to use the visualizations. The design of the user interface should be tailored to the communication goal of the story without noticeably restricting the exploration. \noindent \textbf{Occlusion Management.} In 3D scenes, special attention must be paid to resolving object occlusion. Virtual X-ray approaches and volumetric probes that adapt the opacity of occluding objects either automatically or interactively~\cite{Elmqvist2008} are suitable for narrative medical visualization. Whenever interesting objects are occluded in medical data, smart visibility techniques, such as ghosted views and cutaway techniques, are often applied~\cite{lawonn2018}. \noindent \textbf{Storytelling and Exploration.} To not overwhelm people with visual exploration opportunities, they should be guided through the story~\cite{Ynnerman2020}. In 3D medical visualization, this can be realized through automatic views, limited rotation capabilities, or predefined parameter settings. The user should always know where s/he is in 3D space.
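The limited rotation capabilities mentioned above can be realized by clamping the orbit camera to author-defined angular limits, so that the viewer can look around without ever losing sight of the focus structure. The Python sketch below is illustrative only; the angle ranges are our own assumptions, not values from any cited system.

```python
# Illustrative guided-exploration constraint (assumed limits): the orbit
# camera may deviate only so far from the authored focus view.
AZIMUTH_RANGE = (-60.0, 60.0)    # degrees left/right of the focus view
ELEVATION_RANGE = (-20.0, 45.0)  # degrees below/above the focus view

def clamp_orbit(azimuth, elevation):
    """Clamp a requested camera orientation to the authored limits."""
    lo_a, hi_a = AZIMUTH_RANGE
    lo_e, hi_e = ELEVATION_RANGE
    return (min(max(azimuth, lo_a), hi_a),
            min(max(elevation, lo_e), hi_e))
```

Applying such a clamp after every drag gesture keeps free-feeling rotation within the story author's intended viewing corridor.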
\noindent \textbf{Flexibility and Performance.} Due to different data types and many possible scenarios, a system to interactively explore medical data should be flexible regarding the integration of new interactions and rendering styles. In addition, robustness is important to make software available to the general public. \subsection{Selected Examples of Narrative Scientific Visualization} Typical places where the general public comes into contact with scientific data are museums, planetariums, exhibitions, and science centers. Ma et al.~\cite{Ma2011} described projects of NASA's Scientific Visualization Studio in which narrative visualizations communicate investigations recorded with various instruments and sensors. Further details are provided by captions, sound, or live demonstrations. Media comprise UltraHD displays and hyperwalls, dome shows, mobile devices, and 360 projections. Krone et al.~\cite{Krone2017} present design considerations of a scientific exhibition in the Carl-Zeiss-Planetarium Stuttgart to inform a general audience about computer simulations comprising industrial and molecular simulation examples. In different interactive scenarios, users learn what simulations are, how they are computed, and how the results can be visualized. A Microsoft Kinect and a Leap Motion are used as input devices. For validation, the visitors could provide feedback using questionnaires about different aspects of the exhibition. Although only a few visitors left feedback, it was very positive with regard to the comprehensibility and engagement of the presentations shown. Recently, Ynnerman et al.~\cite{Ynnerman2020} summarized how storytelling is used in the Norrk\"oping Visualization Center C. In addition to dome projections and VR setups, users can explore volume data using multi-touch displays. These data comprise full-body CT scans, which are visualized by direct volume rendering (DVR).
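At the core of DVR is a transfer function that maps scalar intensities to color and opacity. A minimal Python sketch of a piecewise-linear 1D transfer function follows; the control points are our own illustrative assumptions, not the transfer functions used in the cited installations.

```python
# Illustrative sketch of a 1D transfer function for direct volume
# rendering: scalar intensity -> (r, g, b, a) via piecewise-linear
# interpolation between control points.

def make_transfer_function(control_points):
    """control_points: list of (intensity, (r, g, b, a)), sorted by intensity."""
    def tf(x):
        if x <= control_points[0][0]:
            return control_points[0][1]
        if x >= control_points[-1][0]:
            return control_points[-1][1]
        for (x0, c0), (x1, c1) in zip(control_points, control_points[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                return tuple(a + t * (b - a) for a, b in zip(c0, c1))
    return tf

# Example (assumed): low intensities become transparent black, high
# (bone-like) intensities become opaque white.
tf = make_transfer_function([(0.0, (0.0, 0.0, 0.0, 0.0)),
                             (1.0, (1.0, 1.0, 1.0, 1.0))])
```

A gallery of pre-defined transfer functions would then simply be a set of such control-point lists, one per preview image.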
From a pre-defined image gallery, the visitors can select a transfer function to be applied to the data. Visitors can interact directly with the visualizations and can perform single- and multi-touch gestures, e.g., rotating the volume or cutting through it, but they cannot select objects very precisely. Similar setups are used to explore a virtual human mummy~\cite{Ynnerman2016} and biological structures that would not be visible to the naked eye~\cite{Host2018}. However, besides pre-defined transfer functions, textual descriptions, and videos, no further guidance through the complex data is provided. Narrative techniques have also been used to communicate potential future climate changes to the general public based on simulated data~\cite{Bottinger2020}. In order for the user to draw conclusions, various visualization aspects must be taken into account. The choice of appropriate color scales is important to draw the user's attention. Furthermore, combinations of visualization techniques, such as color and contour lines to show correlations, must be carefully explained, e.g., by audio guidance. \section{Narrative Medical Visualization Concepts} \label{sec:exampmedstory} In this section, we describe how narrative techniques can enrich medical visualization so that users are able to easily understand, absorb, and interact with the data. Our intended \textbf{communication goal} is to \textbf{inform} \textbf{people interested in medicine} about a disease. To demonstrate the potential of narrative medical visualization, we first derive a template comprising potential stages of a story about disease data, as detailed in Section~\ref{subsec:stagesmedstory}. Next, in Section~\ref{subsec:medusecases}, we define an example scenario where the target audience comes into contact with medical data.
Then, we select three common diseases for story generation, discussed in Section~\ref{subsec:meddata}, followed by explaining the story preparation, including data preprocessing and the selection of an authoring tool, in Section~\ref{subsec:medicalstorydesign}. Based on the defined template and selected diseases, the medical stories are finally designed and presented in Section~\ref{subsec:narmedvis_broad}. \begin{figure*}[t!] \centering \includegraphics[width=\linewidth]{Figures//stages} \caption{\label{fig:stages} Derived template for narrative medical visualization of disease data that comprises seven stages.} \vspace{-10px} \end{figure*} \subsection{A Template for Narrative Disease Visualization} \label{subsec:stagesmedstory} Many university hospitals, scientific institutes or online encyclopedias put freely accessible blogs online to inform a general audience about the development, diagnosis and treatment of various diseases. We analyzed a total of 30 blogs for three selected diseases: liver cancer \cite{surgery_liver_metas_1}\nocite{surgery_liver_metas_2, surgery_liver_metas_3, surgery_liver_metas_4,surgery_liver_metas_5,surgery_liver_metas_6,surgery_liver_metas_7,surgery_liver_metas_8,surgery_liver_metas_9}-\cite{surgery_liver_metas_10}, brain aneurysm \cite{aneurysm_1}\nocite{aneurysm_2,aneurysm_3,aneurysm_4,aneurysm_5,aneurysm_6,aneurysm_7,aneurysm_8,aneurysm_9}-\cite{aneurysm_10}, and pelvic fracture \cite{pelvic_1}\nocite{pelvic_2,pelvic_3,pelvic_4,pelvic_5,pelvic_6,pelvic_7,pelvic_8,pelvic_9}-\cite{pelvic_10}, according to their basic structure, since we have the same communication goal and want to address the same audience. The basic structure of these blogs is very similar. First, a short and understandable \textbf{definition of the disease} is provided, and \textbf{statistical aspects} such as the annual incidence and age-related distribution between men and women are described.
Next, an \textbf{anatomical overview} shows the location and function of the structures affected by the disease. This provides the baseline for understanding what is normal before introducing the disease itself. Using the example of liver cancer, schematic sketches are used to explain where the liver is located, its function, and important nearby structures. Subsequently, typical \textbf{symptoms} are explained, usually as a textual enumeration. Afterwards, the \textbf{diagnosis} is explained. This comprises frequently used examination methods, e.g., MRI, as well as their sequence and reliability in order to make a diagnosis. The procedure of each diagnostic method is briefly summarized and associated inconveniences for the patient are explained. The diagnosis is typically followed by an overview of possible \textbf{treatment options}. Therapeutic procedures are summarized, including treatment risks, and the associated chance of cure is estimated. Typically, 5-year \textbf{prognoses} are provided. Finally, \textbf{disease prevention} is explained, where risk factors for the disease's development are summarized. A distinction is usually made between \emph{preventable} and \emph{congenital/genetic} risk factors. This concluding consideration of risk factors serves as an appeal and clarification that one's own behavioral patterns can have a strong influence on the development of a life-threatening disease. The reader should be sensitized to think about their own habits and to adapt their lifestyle in a positive way. Based on this analysis, we derived a sequence of seven stages forming a template, as shown in Figure~\ref{fig:stages}, that can be used as a basic pattern for applying narrative techniques to disease data.
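To make the template concrete, the seven stages could be captured as a simple data structure that an authoring tool might iterate over. This is a minimal sketch: the stage names follow Figure~\ref{fig:stages}, while the \texttt{Scene} class, its fields, and \texttt{build\_story} are illustrative assumptions, not part of any existing tool.

```python
from dataclasses import dataclass, field

# Seven stages of the derived disease-story template (cf. Figure "stages").
STAGES = [
    "Disease Definition",
    "Anatomy",
    "Symptoms",
    "Diagnosis",
    "Treatment",
    "Prognosis",
    "Prevention",
]

@dataclass
class Scene:
    """One slide-like scene; fields are illustrative, not from the paper."""
    stage: str                                 # must be one of STAGES
    genre: str                                 # e.g., "magazine style"
    media: list = field(default_factory=list)  # e.g., ["3D model", "icons"]

def build_story(scenes):
    """Order scenes along the template; reject scenes with unknown stages."""
    for s in scenes:
        if s.stage not in STAGES:
            raise ValueError(f"unknown stage: {s.stage}")
    return sorted(scenes, key=lambda s: STAGES.index(s.stage))

story = build_story([
    Scene("Symptoms", "partitioned poster", ["icons", "captions"]),
    Scene("Disease Definition", "magazine style", ["3D liver model", "infographics"]),
])
```

Ordering scenes by a shared stage list keeps the three disease stories structurally consistent even when individual scenes differ.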
\subsection{General Public Information} \label{subsec:medusecases} People interested in medicine typically come into contact with medical data either in museums or science centers or through Internet research on their home computers, where no specific doctor-patient relationship exists. Similar to the Visualization Center in Norrköping~\cite{Ynnerman2020}, asynchronous storytelling based on a touch display could be used to interactively inform about diseases. While users at home would probably be more likely to use a tablet or their phone, larger interactive displays could be used in a science center or museum. We show how narrative techniques can help medically interested people to inform themselves about diseases at home or in a science center. This is an asynchronous scenario where the user interactively explores the data on their own. The goal is to give a general overview of a disease, not focusing on anatomical variations or the severity of a disease. \subsection{Selected Disease Examples} \label{subsec:meddata} For selecting example diseases, we oriented ourselves to the basic structures of the human body, which are visible in radiological image data: organs, vessels, and bones. Below, we outline our motivation for selecting three specific disease examples. \noindent \textbf{Liver Cancer.} Regarding organ diseases, we selected liver cancer. The number of new cases of liver cancer has doubled in the last 35 years~\cite{Liu2019}. Accordingly, the interest in learning more about this disease on the part of the general public is likely increasing. In Germany, approximately 8790 people (6160 men, 2630 women) are newly diagnosed with liver cancer each year. The average age of onset is 69.9 years for men and 72.1 years for women. The increase in annual new cases is associated with an increasing number of patients with liver cirrhosis, the high rate of new hepatitis B infections, and increasingly frequent obesity.
\noindent \textbf{Brain Aneurysms.} For vessel diseases, we selected brain aneurysms, which are localized dilations of the brain vessels. In addition to older people, younger people are also frequently affected, making this condition of interest to the general public. About 3-5\,\% of all people likely have a brain aneurysm~\cite{Boulouis2017}. In most cases, these aneurysms are found by chance and remain asymptomatic. Brain aneurysms are more common in people over the age of 40, where women are affected more often than men by a ratio of 5:3. Their greatest danger is that the vessel wall ruptures, which leads to internal bleeding. The incidence of rupture is low: an estimated 80\,\% to 85\,\% of all brain aneurysms will never rupture~\cite{Singh2009}. Due to the low rupture rate and the existing treatment risks for the patient, physicians must limit treatment to high-risk patients. \noindent \textbf{Pelvic Fracture.} For bone-related diseases, we used a data set showing a pelvic fracture, which is a break in any of the bones that form the ring of bones at hip-level~\cite{Gordon2018}. Severe cases show multiple fractures and/or an unstable fracture. This can occur as a result of high-energy trauma, e.g., car accident (20-22\,\%), or in frail or older patients from minor trauma, such as a fall (5-30\,\%). High-energy trauma-related pelvic fractures often come with accompanying injuries that require immediate treatment. Pelvic fractures represent 3\,\% of skeletal injuries, with 5-16\,\% mortality (approximately 8\,\% for unstable pelvic fractures). All age groups can be affected, but trauma has a higher incidence in young males, while in older populations it is more associated with women. \subsection{Medical Story Preparation} \label{subsec:medicalstorydesign} For medical story design, some data preprocessing was necessary, which we detail in Section~\ref{subsubsec:dataprepro}. Essentially, this involves preparation of radiological image data.
We also had to choose an authoring tool to create scenes and define transitions between them, as discussed in Section~\ref{subsubsec:authoringtool}. \subsubsection{Data Preprocessing} \label{subsubsec:dataprepro} For each story, an anonymous patient data set was used, comprising different radiological data types, such as ultrasound, MRI, and CT images. As this data was anonymized, we had no access to patient-related meta information. Therefore, any introductory patient information at the beginning of each story is fictitious. In the following, we briefly describe our data sets and necessary preprocessing steps. \noindent \textbf{Liver Cancer.} We used a data set of a patient with a stage 1 liver carcinoma provided by our clinical partners as sample data. In this data, small and medium-sized tumors without lymph node involvement and metastases were diagnosed in the liver based on ultrasound and CT Angiography (CTA). In addition, the liver showed no cirrhotic changes. Due to multiple tumors, surgical removal was not an option. Instead, the tumors were to be treated with ablation. Based on the CTA data, the liver as well as the tumors were segmented using \textit{HepaVision}~\cite{Bourquain2002}. Moreover, other surrounding structures such as the heart and ribs were segmented and transformed to 3D surfaces. \noindent \textbf{Brain Aneurysms.} For brain aneurysms, we used a data set from Berg et al.~\cite{Berg2015}, where a brain aneurysm was incidentally found during CTA. The vasculature was segmented using the pipeline presented by M\"onch et al.~\cite{Moench2011a} and converted into a volume grid. Computational fluid dynamics (CFD) simulations were used to calculate the blood flow behavior. Finally, particles were traced in the resulting vector field to depict the blood flow. All simulation details can be found in the work by Berg et al.~\cite{Berg2015}.
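The particle tracing mentioned above can be illustrated with a minimal sketch: forward-Euler advection of massless seed points through a vector field. The actual pipeline of Berg et al.~\cite{Berg2015} operates on a simulated CFD field and is far more involved; the uniform synthetic flow below is purely a stand-in for demonstration.

```python
import numpy as np

def trace_particles(seeds, velocity_at, dt=0.01, steps=100):
    """Advect massless seed points through a vector field with forward Euler.

    seeds:       (n, 3) array of starting positions
    velocity_at: callable mapping (n, 3) positions to (n, 3) velocities
    Returns an array of shape (steps + 1, n, 3) with all particle positions.
    """
    path = [np.asarray(seeds, dtype=float)]
    for _ in range(steps):
        p = path[-1]
        path.append(p + dt * velocity_at(p))  # x_{k+1} = x_k + dt * v(x_k)
    return np.stack(path)

# Synthetic stand-in for the CFD result: uniform flow along the x-axis.
uniform_flow = lambda p: np.tile([1.0, 0.0, 0.0], (len(p), 1))
paths = trace_particles(np.zeros((4, 3)), uniform_flow, dt=0.1, steps=10)
```

The resulting position array can be rendered as animated particles or pathlines; higher-order integrators (e.g., Runge-Kutta) would be preferable for real, spatially varying flow fields.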
\noindent \textbf{Pelvic Fracture.} We obtained a CT dataset of a woman with a pelvic fracture including pre-operative and post-operative scans. We performed direct volume rendering (DVR) in 3D Slicer~\cite{Kikinis2014} in order to generate videos and images to support our story. In addition, we provide interactive 3D scenes based on the Virtual Surgical Pelvis (VSP) model~\cite{smit2016pelvis}, which is in use as a virtual educational tool to teach pelvic anatomy~\cite{smit2016online}. The VSP consists of anatomical surface models based on expert segmentation of a cryosection data set. Selected VSP structures were embedded as interactive 3D models. \subsubsection{Authoring Tool Selection} \label{subsubsec:authoringtool} We created the stories using \textit{PowerPoint 365 MSO} version 2105. PowerPoint offers numerous possibilities to combine and visually arrange narrative elements. Animations and transitions can be defined and different file formats, such as images, videos and interactive 3D models, can be integrated. PowerPoint thus offers all the functionalities we need to design basic concepts that show the potential of narrative visualizations for medical data. In addition, the wide availability of PowerPoint makes it a good choice for an interactive narrative visualization intended for a general audience. \subsection{Medical Story Design} \label{subsec:narmedvis_broad} We use the derived stages shown in Figure~\ref{fig:stages} as the basic structure for designing our medical stories. Thus, some scenes for the three diseases are very similar. To avoid redundancy, we use the example of liver cancer to show a complete design of a medical story. For the two remaining diseases, we focus on illustrating disease-specific aspects. Visual placeholders are inserted for parts of the stories whose design is similar to the liver scenario. We attached all created stories as supplemental material. We have chosen the \textit{slideshow} format as basic design genre. 
For each stage, one or multiple slides are prepared as scenes. Within the scenes, other narrative genres such as magazine style or partitioned poster are used. Smooth transitions are defined between scenes, with the timeline of stages visible in each scene. This way, the user always knows where s/he is in the story. All three stories introduce a patient case to capture the user's attention. This consists of a catchy \textit{headline} and a \textit{long-form textual description} of the patient case, see Figure~\ref{fig:liver} (A1). In addition, the patient description is read aloud as a \textit{voiceover}. This patient description should help the audience to relate to the case and motivate them to continue with the story. By pressing the start button, the user begins the actual story. Below, we provide detailed insights into the scenes. \noindent \textbf{Disease Definition.} Within the first scene, we use the \textit{magazine style} to introduce the affected anatomy by an interactive 3D model to provide an initial orientation to the topic. Inspired by the work of Arcia et al.~\cite{Arcia2016}, statistical parameters of the disease are depicted as \textit{information graphics} with \textit{icons} and laddered text to quickly absorb information. We exclude visual representations of the disease at this stage to avoid overwhelming the user. Via voice narration, the user is encouraged to rotate the model using their fingers. Since free rotation of 3D objects is difficult to handle for inexperienced users, we limit rotation to a vertical axis, controlled by a finger movement from left to right. In the liver cancer story, we embed an interactive \textit{3D liver model} alongside \textit{textual components} and \textit{information graphics}, see Figure~\ref{fig:liver}~(B1). We do not yet show 3D models of the tumors or unnecessary surrounding anatomy. One of the most important statistical parameters is the annual incidence, which is emphasized by a larger font size.
Other parameters, such as the distribution between men and women and their average age of onset, are represented by \textit{annotated information graphics} and \textit{icons} to aid recognition and memorability. We apply similar concepts to the other two stories. \noindent \textbf{Anatomy.} Again, we use the \textit{magazine style} to design the anatomy scenes. We describe the anatomical structures that are necessary to understand disease development. These facts include the location, importance, and function of the key anatomical structure(s). Finally, we introduce the disease by super- or juxtaposing the pathology on the normal anatomy with a crossfade transition. Continuing the liver cancer story, the anatomical stage defines four key facts: (1) the liver is the largest internal organ and the most important one for digesting food and removing toxins, (2) it is located in the right upper part of the abdomen, below the heart, (3) it is supplied and drained by a vast network of blood vessels, and (4) in liver cancer, abnormal growth of cells in the liver forms tumors. We use several scenes to represent these key facts. First, an \textit{automatic rotation} of the 3D liver model gives an overview of its anatomical shape, where a \textit{long-form textual description} with \textit{highlighted keywords} provides more details. Following \textit{familiar objects} and \textit{object continuity} concepts, the story transitions to show anatomical context around the liver: \textit{labeled 3D models} of the ribs, heart, and liver vasculature. The last scene transitions to show the disease: surrounding structures \textit{fade away} and the fully-opaque liver becomes \textit{translucent} to reveal tumors within. The accompanying text discusses the development of a liver tumor due to abnormal cell growth. The aneurysm and pelvic fracture stories both follow a similar introduction and flow of elements.
However, a unique characteristic is the complex anatomical structure of the pelvis with multiple bones and closely related vessels, organs, and nerves. In addition, for this patient we have pre- and post-operative data available, which makes it possible to show treatment effects on real data. To communicate these anatomical peculiarities and the treatment process, we combine \textit{hotspots}, \textit{3D models}, and \textit{DVR}, see Figure~\ref{fig:pelvis}. The user can interactively explore anatomical structures by clicking on the hotspots (A), which highlights the corresponding anatomical name in the text description, or by clicking on a structure of interest. \noindent \textbf{Symptoms.} Using the \textit{partitioned poster} genre, a visual overview of frequently-occurring symptoms is provided. For each symptom, we artistically create an \textit{icon} with an accompanying \textit{caption}. The use of icons as opposed to purely text-based listings aims to increase the memorability of symptoms. The symptoms are displayed one after the other to give the user time to process each icon. For liver cancer, we create icons and accompanying text for the following critical symptoms in advanced liver cancer: unexplained weight loss, loss of appetite, pain/pressure in the upper abdomen, increased temperature, weakness/fatigue, abdominal swelling, and yellowing of the skin, see Figure~\ref{fig:liver} (B2). We also identify key symptoms of pelvic fracture, with the same storytelling mechanisms. We had to adjust the aneurysm story, since we focus on incidentally detected aneurysms without symptoms. Treatment bears considerable risks, which can exceed the natural rupture risk. Therefore, the aneurysm story communicates how rupture-prone aneurysms can be detected, as shown in Figure~\ref{fig:aneurysm}. The first scene shows a 3D aneurysm model representing an incidental finding (see Figure~\ref{fig:aneurysm} (A)).
An illustrative superimposed \textit{magnifying glass} helps to quickly see the aneurysm in the complex vascular tree. Next, aneurysm rupture is shown \textit{illustratively} (see Figure~\ref{fig:aneurysm} (B)). Using \textit{information graphics} and \textit{textual descriptions}, arranged in \textit{magazine style}, we convey that a rupture occurs very rarely but is very dangerous. Here, the information graphics are only indicated by a placeholder, as these would be similar to the graphics of the liver definition stage. \begin{figure*}[t!] \centering \includegraphics[width=\linewidth]{Figures//liver_board_3} \caption{\label{fig:liver} All scenes of the liver cancer story covering the seven derived stages. The narrative sequence is A1, B1, ..., D1, A2 and so on. } \end{figure*} \begin{figure*}[t!] \centering \includegraphics[width=\linewidth]{Figures//pelvis_board} \caption{\label{fig:pelvis}Excerpts from the pelvic fracture story. A unique characteristic is the complex anatomical structure of the pelvis with multiple bones and closely related vessels, organs, and nerves. Hotspots, 3D models, and DVR are combined to highlight these aspects and treatment.} \vspace{-10px} \end{figure*} \begin{figure*}[t!] \centering \includegraphics[width=\linewidth]{Figures//aneurysm_board} \caption{\label{fig:aneurysm}Excerpts from the aneurysm storyboard. There is a trade-off between the rupture risk and the treatment risk. 3D models, concept-driven visualizations, and blood flow depictions are combined to communicate this dilemma.} \vspace{-10px} \end{figure*} \noindent \textbf{Diagnosis.} The diagnosis is discussed using a variety of media. Key to this stage is informing the audience on how the diagnosis is achieved, comprising diagnostic questions and imaging modalities. Each diagnostic question is illustrated as an individual scene using the \textit{magazine style}.
Important questions for liver cancer include, e.g., the size and location of tumors as well as the exact tumor type. For this purpose, the 3D translucent liver model including the tumors is shown. The tumors are enriched by \textit{glyphs} and \textit{annotations} to visually communicate the main aspect of each related question (see Figure~\ref{fig:liver} (C2-C3)). Simple and clean \textit{vector illustrations} describe diagnostic procedures, e.g., liver biopsy to determine the exact tumor type (see Figure~\ref{fig:liver} (D3)). Procedures with anatomical views different from those previously presented include a \textit{rotating 3D navigator model} of the organ that aids view orientation. The size and location of aneurysms in the brain vessels is also a critical diagnostic question, which we handle slightly differently (see Figure~\ref{fig:aneurysm} (C)). This entails beginning with a localizing \textit{overall view} of the brain vessels with the translucent surrounding skull before a \textit{zoom and rotation} transformation focuses the camera tightly on the aneurysm to emphasize its location and shape. \textit{Panning and zooming} allows the user to get even closer to the aneurysm; a detail view is shown in the form of a 3D aneurysm model, with blood flow suggested by \textit{animated particles}, see Figure~\ref{fig:aneurysm}~(D). Additional scenes give an impression of the diagnostic imaging modalities used. Here, the \textit{magazine style} is again used to combine image and text information. For each modality, a \textit{real image} is incorporated as an example, e.g., ultrasound, CT, and MRI in liver cancer, where in each the tumors are highlighted by contours, see Figure~\ref{fig:liver} (A4). We use similar concepts to show imaging modalities employed in brain aneurysm and pelvic fracture diagnosis. \noindent \textbf{Treatment.} The treatment stage provides an overview of typical therapy options and key aspects that influence treatment decisions.
We do not consider rarely performed treatments that can only be done in special centers. The first treatment scene uses a \textit{flow chart} combined with \textit{long-form textual descriptions}. The chart, in the form of a directed graph, describes the basic treatment approaches and their key aspects. By clicking on one of its nodes, the user gets more information in a \textit{magazine style} about the selected treatment. Each treatment is shown as a \textit{2D vector illustration} of its key moment to provide clarity. We again use a \textit{navigator icon} with \textit{labels} to indicate key aspects of the procedure. An exception to this are the metal implants used in fracture treatment, such as in the pelvis data (see Figure~\ref{fig:pelvis} (D)). Similar to bones in a CT scan, these can be easily visualized by DVR, where optimal views from different perspectives can be shown. In the case of palliative treatments, where the focus is more on symptom alleviation, we use \textit{icons} to create consistency and repetition between this stage and the earlier symptom stage. For example, in liver cancer there are essentially two treatment strategies, curative and palliative~\cite{Anwanwan2020} (see Figure~\ref{fig:liver} (B4)). To the right, we detail the key aspects that determine treatment in list form. This includes: the number of tumors present, their size, whether they have grown into blood vessels or into surrounding tissue, whether they have already metastasized, how functional the liver still is, and the patient's general health. On selection of the curative therapy chart element, the user is directed through the set of curative therapies. Here, the user learns that the goal of curative therapy is to cure the cancer, methods of which include (1) surgical removal of the tumor(s), (2) ablation, in which the tumor(s) is destroyed by heat or microwaves, and (3) radiation (Figure~\ref{fig:liver} (C4-A5)). The user is then taken through the palliative treatment scenes.
We use icons to indicate alleviation of symptoms that were introduced earlier in the story, as well as new icons, such as those for chemotherapy, cytostatic drugs, and high calorie foods that may prolong the patient's life and relieve symptoms (see Figure~\ref{fig:liver} (B5)). \noindent \textbf{Prognosis.} In addition to general statistics, the prognosis of a disease also includes several parameters depending on the severity and the chosen treatment. Since a detailed presentation of all dependencies and resulting parameters would overwhelm the general user, we limit ourselves to the most important parameters to give insight into the likelihood of a cure. Based on the \textit{partitioned poster} genre, we produce an \textit{infographic-style illustration} that makes use of \textit{color} and \textit{laddered text} to aid in information absorption and memory retention of key prognostic information. We use \textit{glyphs} for men and women to convey that, unlike incidence, there are no significant prognostic differences between men and women. The prognosis for liver cancer depends on the cancer stage and the condition of the liver~\cite{Anwanwan2020}. We emphasize the relative 5-year survival, which is around 18\,\% for men and women, in the \textit{largest typeface} with an \textit{attention-drawing accent color} along with \textit{icons} indicating men and women (see Figure~\ref{fig:liver} (C5)). We use a \textit{smiley symbol} to indicate the group for which tumor removal often has a positive outcome. For stage I tumors, the 5-year relative survival is around 62\,\% in women and around 54\,\% in men. In stage IV, however, it is only 2\,\%. We repeat the use of larger typeface with accent color for the survival with gender symbols at a slightly smaller size. For both the aneurysm and pelvic fracture stories we use a similar presentation of laddered text with accent colors for percentages with symbols to indicate affected genders.
\noindent \textbf{Prevention.} In this stage, we focus on illustrating avoidable risk factors to give the user a sense of agency. Risk factors such as age or genetic factors that a person cannot influence are excluded since they are not actionable for the user. Similar to the symptoms, we use the \textit{partitioned poster} genre and utilize \textit{icons} to better recognize and understand the presented information, e.g., a martini glass for the recommendation to reduce alcohol consumption. The main risk factor for liver cancer is cirrhosis, typically caused by chronic hepatitis B virus infection, depicted with a \textit{syringe} and \textit{caption} to vaccinate against hepatitis B, or high alcohol consumption, which we depict with an \textit{alcoholic beverage icon} and a \textit{caption} to limit alcohol consumption (Figure~\ref{fig:liver} (D5)). Another risk factor is obesity, which is depicted by a \textit{scale symbol} with an \textit{arrow} indicating increasing weight along with the caption ``Keep weight in a healthy range.'' Smoking also increases the risk of the disease, which we depict with a \textit{barred cigarette icon} with the \textit{caption} ``Quit smoking.'' We can similarly use such icons to show risk factors for the brain aneurysm and pelvic fracture stories. \section{Discussion} \label{sec:discussion} In the following, we outline general issues that arise during medical story creation. We cover principal decisions regarding scene design, such as story element style and underlying design patterns. Furthermore, we discuss necessary story adaptations to address other diseases or other communication goals, e.g., education. \noindent\textbf{Content production.} For our purposes of showing an initial concept for narrative medical visualization, the functionality of PowerPoint for story creation was sufficient.
However, its content production feature set remains limited, with significant manual effort required to integrate multiple media types to tell a comprehensive story. We used several external tools to bridge this gap, including Adobe Illustrator to produce icons and treatment illustrations. While PowerPoint is able to embed 3D surface models, medical image feature extraction and advanced visualization, e.g., DVR, are not supported, which led to a need to use additional software. \noindent\textbf{Scene design.} Following the suggestion of B{\"o}ttinger et al.~\cite{bottinger2020challenges} to make scenes ``as simple as possible without risking scientific credibility,'' we extract as much information as possible from real medical image data to generate data-driven stories. However, for creating simple and clearly understandable scenes for our target audience, visual abstraction and easy-to-use interaction techniques are necessary~\cite{Viola2020visual}. We used textual and verbal descriptions and avoided technical terms where possible. We added 2D vector illustrations to show important treatment concepts in a simplified way. While 3D data-driven models with surgical instruments would also be possible, this creates visually complex scenes that we felt may overwhelm or scare the user. For touch-based interaction, we use interaction types that do not require a high degree of accuracy. These include single-touch gestures, e.g., rotation around a predefined axis and clicking on objects, as well as familiar multi-touch gestures, such as panning and zooming. Such interactions take advantage of user familiarity with everyday objects such as smartphones and tablet PCs without requiring extra equipment, such as a mouse or keyboard. Regarding the choice of an appropriate narrative genre, there are currently no guidelines on which genre is most suitable for a given context. Since we want an interactive and multi-media environment for our stories, slideshows seemed most appropriate.
Within the scenes, we often combine 2D/3D representations with textual descriptions arranged in the magazine style to explain disease stages in a short and memorable way. In principle, we could also use another genre for the story or parts of the story, such as comics. Thus, the validation and derivation of guidelines for contextual genre recommendations is an important direction for future research. To inform the general public about a disease, we use all design patterns defined by Bach et al.~\cite{Bach2018} (cf. Section~\ref{subsec:narpatterns}). The two most important patterns in our stories are \textit{framing by hiding data} and \textit{structuring by revealing data}. For example, within the 3D models only the most important structures are depicted while structures irrelevant to the story are excluded, e.g., other abdominal organs in the liver scenario. In addition, combinations of 3D structures, annotations/glyphs, and textual descriptions are revealed sequentially to avoid confronting the user with a visual overload. We use emotional and engaging patterns to capture and maintain the user's attention. We use an initial patient description to help users relate to the story, and incorporate interactive components encouraging users to take action. Argumentative patterns in the form of memorable icons convey avoidable risk factors. Our aim is to raise awareness of a disease and to encourage taking action and adopting a healthier lifestyle. \noindent\textbf{Evaluation of Medical Stories.} Our stories were designed by scientists with many years of experience in the visualization of medical data. One of our co-authors is a medical illustrator and thus brings a lot of experience regarding the design of illustrations for the general public. In this forward-looking paper, we have conceptualized what medical stories could look like. We have not yet done any evaluation with the intended audience, which is fundamental for future work.
\noindent\textbf{Communication goals.} In addition to informing, educating an audience would be another important communication goal. For example, one possible scenario would be to teach students about various medical conditions. The use of engaging patterns in the form of interactive exploration to maintain students' enjoyment during the learning process is particularly important~\cite{Rheingans2020}. An option could be to integrate an interactive quiz, where the user could check how much they have learned from the story. Moreover, argumentative patterns such as spaced repetition should be used to help the user remember the most important facts. Furthermore, details on demand would be useful for learning. In currently available apps, such as ``language learning apps,'' the respective topic is explained using a meaningful example. In addition, the user has the option of viewing further language-specific examples to consolidate the learning material. This details-on-demand principle could be applied to medical data. Upon request, the user could see additional records, e.g., a stage IV liver cancer dataset, with other anatomical features. \noindent\textbf{Application to other diseases.} In this paper, we focus on diseases which can be diagnosed based on radiological image data. However, several diseases require other technologies for diagnosis. Examples are respiratory diseases such as asthma, heart rhythm disorders, or infectious diseases such as HIV, which are diagnosed by pulmonary function tests, electrocardiogram, and blood tests, respectively. While our basic division into the seven stages can also be applied to such diseases, the media used must be adapted accordingly. For example, 3D models based on real data cannot be generated for infectious diseases. Similarly, treatment options would have to be adapted. The division into curative and palliative methods is not always possible, since many diseases cannot be cured.
\section{Research Agenda} \label{sec:agenda} This section focuses on open research challenges concerning the combination of narrative techniques and medical data. \noindent \textbf{Authoring tools for narrative medical visualization.} When presenting medical information such as incidence rates or prognostic factors, any authoring tool designed for data story creation would suffice. However, the presentation of medical imaging data poses additional challenges. Providing efficient authoring tools tailored to medical imaging data is a key aspect of advancing the use of narrative techniques in this area. Such an authoring tool would need to support data preprocessing (e.g., segmentation, smoothing, filtering) and provide techniques to add narrative elements. Here, also advanced labeling~\cite{Oeltze2014} and animation creation~\cite{Preim2020} techniques for medical data are needed. Existing expert tools for analyzing medical data, e.g., MeVisLab~\cite{MeVisLab}, 3D Slicer~\cite{Kikinis2014}, or VTK~\cite{Schroeder1998} could be chosen as a basis. These are powerful medical image processing and visualization frameworks freely available for non-commercial research. Their modular character allows fast integration and testing of new algorithms. They could be extended with narrative modules that allow the generation of scenes based on a selected scene type and the definition of transitions between scenes. Key features should include interaction tracking, e.g., to create animations, and the definition of which user interactions are provided. An alternative would be to provide an authoring tool to create web-based stories. This would allow even more people to access the stories. For the creation of web-based stories, libraries such as VMTK~\cite{Izzo2018} or VTK.js could be combined for the preparation and visualization of imaging data and D3~\cite{d3js} for the incorporation of information visualization and interaction.
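As a rough sketch of what the output of such a web-oriented authoring tool could look like, scenes and transitions might be serialized to a JSON specification that a VTK.js/D3-based player consumes. The schema below (field names, asset types, transition effects) is entirely hypothetical and only illustrates the idea of a declarative story format.

```python
import json

# Hypothetical story specification a web-based player could consume.
# The schema (stage/genre/effect names) is illustrative, not an existing format.
story_spec = {
    "title": "Liver Cancer",
    "scenes": [
        {
            "id": "definition",
            "stage": "Disease Definition",
            "genre": "magazine style",
            "assets": [{"type": "mesh", "url": "models/liver.vtp"}],
        },
        {
            "id": "anatomy",
            "stage": "Anatomy",
            "genre": "magazine style",
            "assets": [{"type": "mesh", "url": "models/liver_context.vtp"}],
        },
    ],
    "transitions": [
        {"from": "definition", "to": "anatomy", "effect": "crossfade", "duration_s": 1.5}
    ],
}

def validate(spec):
    """Check that every transition references declared scene ids."""
    ids = {s["id"] for s in spec["scenes"]}
    return all(t["from"] in ids and t["to"] in ids for t in spec["transitions"])

serialized = json.dumps(story_spec, indent=2)
```

Keeping the story declarative would separate authoring (producing the JSON) from playback (rendering with VTK.js and D3), so the same specification could drive a museum display and a home browser.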
\noindent \textbf{General pattern design for medical data.} The definition of dedicated patterns to capture medical narratives helps authors to structure their stories. In this paper, we propose a pattern for the narrative presentation of disease data comprising seven stages. However, there are many other medical aspects, such as healthy metabolic processes, pregnancy, and medical procedures, for which this pattern would not be suitable. Other patterns should be derived to structure such topics. \noindent \textbf{Narrative medical visualization for patients.} In addition to medically interested people, patients could benefit from narrative medical visualization. Patients come into contact with medical data through their physicians who explain planned procedures and diagnosed diseases, including possible treatments and their prognosis. The patient typically sees excerpts from imaging data or schematic drawings on information sheets. However, since patients often have little medical knowledge, communication difficulties can occur, which could lead to uncertainty and fear. In a doctor-patient conversation, the patient could instead be informed about his or her condition using personalized visual representations. Here, narrative visualization could help the patient to understand diagnostic and therapeutic procedures such that a suitable therapy plan can be made in consultation with the physician. This would strengthen opportunities for participatory medicine. \noindent \textbf{Support experts with narrative medical visualization.} In addition to the general public, medical experts are also a potential target group for narrative visualization. Many expert visualization tools partly consist of very complex visual representations where the experts need extensive explanations and there is a steep learning curve. Especially for prospective experts, such as medical students, narrative techniques could help to understand complex medical data~\cite{Rheingans2020}. 
In contrast to stories developed for a general audience, expert narratives would need more emphasis on data preservation and, consequently, less abstraction. It would be interesting to further explore how expert tools can be enriched with narrative techniques and what impact this has on understanding. \noindent \textbf{Narrative medical visualization evaluation.} To assess the usefulness of storytelling in a medical context, it is important to measure the quality of the visualization w.r.t. achieving communication goals. Evaluation of narrative visualizations is a difficult and complex task that only a few works have addressed to date~\cite{Mahyar2015,Boy2015}. The basic idea is to evaluate \textit{design decisions made}, \textit{techniques employed}, and \textit{media used} in terms of aspects such as comprehensibility~\cite{Figueiras2014tell}, memorability~\cite{Borkin2015}, and effectiveness in engaging a general audience~\cite{Boy2015,Mckenna2017}. In contrast to medical expert visualizations, which are often validated in controlled lab studies, typically within a short time frame, evaluation of narrative medical visualization will require a different approach to account for various scenarios and to reflect real-world uses. To get meaningful results, we have to reach a wider and more diverse audience than the usual small group of experts used in lab studies. Kosara et al.~\cite{Kosara2013} suggested using crowdsourcing platforms for such larger visualization studies. In addition, objective measures of comprehensibility, memorability, and user engagement have to be defined. Eye-tracking studies could help to better understand how different medical-related audiences view and interact with visualizations. \noindent \textbf{Advanced media for narrative medical visualization.} Many of the existing visualization and interaction approaches were originally designed for a standard desktop computer set-up with mouse and keyboard input.
However, in recent years, new advances have been made in human-computer interaction technology~\cite{Besanccon2021}. New display technologies such as multi-touch displays have become very popular as they are intuitive, easy to use, and allow a direct manipulation of the visual data representation by multiple users. Furthermore, VR environments are increasingly being developed for medical scenarios such as surgical planning and medical education. It would be interesting to investigate how these media could be combined with narrative techniques to communicate medical data to general audiences. Furthermore, concepts such as voice and gesture control as well as human avatars to answer questions may bring complex scientific data closer to a general audience~\cite{Ynnerman2020}. A more detailed investigation is needed regarding their suitability for medical data. \section{Conclusion} \label{sec:conclusion} The use of narrative techniques represents an important research focus with the goal of communicating complex scientific data in an understandable way to a general audience without specific expertise. To date, however, there are few approaches leveraging narrative techniques for medical data, although the audience for this is large. Patients, relatives, and people interested in medicine can benefit from custom medical representations and expand their knowledge. In addition, physicians and medical students could benefit from the use of narrative techniques. In this paper, we provide a first proof of concept demonstrating how narrative techniques could help explain disease characteristics to a wider audience. The combination of data-driven and illustrative presentations seems to be a powerful way to present complex diagnosis and treatment methods in an accessible way. Important points for future research are the development of appropriate authoring tools and the validation of narrative medical presentations.
\section{Introduction} Parallel MRI was introduced to speed up the traditionally slow MR acquisitions. Specifically, the redundancy between the k-space samples is capitalized to highly undersample k-space data, thereby reducing the scan time. The recovery of images from highly under-sampled multi-channel Fourier measurements is a classical problem in MRI \cite{lustig2007sparse}. Pre-calibrated approaches such as SENSE \cite{pruessmann1999sense} rely on coil sensitivities that are estimated using additional calibration scans. Several image priors \cite{doneva2020mathematical}, including sparsity, have been used to regularize pre-calibrated image recovery, resulting in improved image recovery at high acceleration factors. Researchers have recently introduced model-based deep-learning (DL) algorithms that use a forward model (capturing the imaging physics) combined with a deep-learned prior \cite{sun2016deep,schlemper2017deep,hammernik2018learning,aggarwal2018modl}. An improvement in image quality as a result of multiple iterations of optimization blocks, sharing weights across the network, and end-to-end training, along with the ability to use multiple learned regularization priors, has been demonstrated in \cite{aggarwal2018modl}. Since these methods use pre-estimated coil sensitivity maps within the forward model, they suffer from errors in the sensitivity maps resulting from motion or high accelerations. Self-calibrated approaches such as GRAPPA \cite{griswold2002generalized}, SPIRiT \cite{lustig2010spirit} and ESPIRiT \cite{uecker2014espirit} estimate coil sensitivities from a fully sampled calibration region in the center of k-space. However, the need for a fully sampled region restricts the achievable acceleration in these settings. 
Structured low-rank (SLR) matrix completion approaches \cite{jacob2020structured,haldar2019linear, haldar2014low} were introduced to overcome the challenges with the above calibration-based schemes and have been very effective in uncalibrated parallel MRI \cite{uecker2014espirit} and multi-shot acquisitions \cite{mani2017multi}. In the context of parallel MRI, these methods exploit the annihilation relations between multi-channel Fourier data rather than relying on explicit coil sensitivity estimates. Similar SLR approaches have been used to exploit a variety of other signal properties, including support constraints~\cite{haldar2019linear}, continuous domain sparsity \cite{ongie2015super, lee2016acceleration}, phase \cite{haldar2019linear}, and the exponential structure of an MRI time series \cite{jacob2020structured}. Iterative re-weighted least-squares (IRLS) SLR algorithms make use of the convolutional structure of the matrices \cite{ongie2017fast} to accelerate the computations. IRLS methods alternate between estimating an annihilation (null space) filterbank and updating the Fourier coefficients of the signal from the available measurements. Specifically, the missing Fourier coefficients are chosen so they match the measurements while being annihilated by the filterbank; equivalently, the energy of the signal's projection onto the signal subspace, measured by a residual convolution-deconvolution filterbank, is maximized. While this algorithm is considerably faster than earlier approaches, the iterative estimation of the annihilation filterbank from the under-sampled data is still computationally expensive. The approaches that use calibration information \cite{uecker2014espirit,ongie2015super,haldar2019linear} estimate the null space filters from a fully sampled calibration region, resulting in reduced complexity. Since the annihilation filterbank need not be derived from the under-sampled data in an iterative fashion, this approach offers faster reconstructions.
However, the challenge with these methods is the need for a calibration region, which restricts the achievable acceleration. In this paper, we introduce a general DL strategy to reduce the runtime of the SLR algorithms, which is valid for all the signal priors discussed above \cite{ongie2015super, lee2016acceleration,haldar2019linear,jacob2020structured}. Unlike the SLR approach that estimates a specific linear annihilation network for each dataset from the undersampled measured data, we propose to learn a single non-linear CNN from several training datasets. Specifically, the residual convolution-deconvolution linear filterbank in the IRLS-SLR algorithm is replaced with a residual multi-channel CNN. We hypothesize that the pre-learned non-linear CNN behaves as a different linear annihilation filterbank for each specific dataset, annihilating the multi-channel data. The residual CNN behaves as a projection for each dataset, facilitating the \emph{denoising} of the dataset from alias artifacts and noise at each iteration. Similar to MoDL \cite{aggarwal2018modl}, the proposed model unrolls the resulting algorithm and learns the parameters of the non-linear filterbank in an end-to-end fashion. The proposed work combines this approach with an image domain prior similar to MoDL, which is complementary to the Fourier domain multi-channel relations. This hybrid approach offers improved performance over SLR, while providing around three orders of magnitude reduction in computational complexity. We focus on two representative applications---sparse single-channel recovery and parallel MRI---which use two distinct lifting structures in the SLR approach \cite{jacob2020structured}. Specifically, we show how different lifting structures can be accommodated in the proposed scheme by modifying the data organization of the input and output of the CNN module.
This enables the extension of the proposed framework to a range of SLR applications \cite{jacob2020structured,haldar2019linear,shin2014calibrationless} that use one or a combination of the above lifting structures. The main focus of this work is to introduce a general DL framework for uncalibrated parallel MRI and multishot MRI. This work is related to \cite{aggarwal2019modl} and \cite{hu2019reconstruction}, which are partially calibrated strategies. Specifically, the MoDL-MUSSELS \cite{aggarwal2019modl} framework explicitly accounted for the pre-estimated coil sensitivities within the data consistency block, while it performed a calibration-free correction of phase errors between shots. A challenge with these partially calibrated approaches is the potential mismatch between coil sensitivities and the diffusion-weighted acquisition due to motion between the coil sensitivity calibration scan and the diffusion scan. By contrast, the annihilation relation between coils is learned by the non-linear k-space CNN from exemplar data in this work; the application of this framework to the diffusion setting yields a completely uncalibrated algorithm, which jointly accounts for coil sensitivities as well as phase errors between shots. As demonstrated in our experiments in the supplementary material, this approach eliminates errors resulting from a motion-induced mismatch between the calibration scan and the diffusion-weighted one. A further contribution of this work is to show that the proposed approach works well for a range of SLR priors, of which only one was considered in the earlier work \cite{aggarwal2019modl}. Conference versions of this work were presented in \cite{pramanik2019off, pramanik2020calibrationless}. DL methods in k-space were the focus of recent work \cite{han2019k,eo2018kiki,akccakaya2019scan}. The RAKI framework \cite{akccakaya2019scan} is a calibrated scheme, unlike our calibration-free approach.
A direct inversion (model-free) approach was pursued in one study \cite{han2019k}; it differs from the proposed model-based framework, which also combines image domain priors. The KIKI net approach \cite{eo2018kiki} was introduced for a single-channel setting, unlike our uncalibrated multi-channel scheme. The proposed reconstructions are compared against model-free image domain DL \cite{ye2018deep}, k-space DL \cite{han2019k}, image domain MoDL \cite{aggarwal2018modl}, and traditional SLR methods. These comparisons reveal the improved performance offered by the Deep-SLR framework. \section{Background} We now briefly describe the background to make the paper self-contained and easily accessible. \subsection{Forward Model} We model the acquisition of image ${\boldsymbol \gamma}(\mathbf r)$ as: \begin{equation} \label{forward_model} b_i = \mathcal S (\mathcal{F}(\underbrace{ s_i~{\boldsymbol \gamma}}_{\boldsymbol \gamma_i})) + {\eta_i},\hspace{2pt} i=1\ldots M, \end{equation} where $ s_i; i=1,..,M$ is the coil sensitivity of the $i^{\rm th}$ coil, while $b_i$ is the noisy under-sampled Fourier measurement, and $\boldsymbol \gamma_i$ is the image, corresponding to the $i^{\rm th}$ coil. $\eta_i$ is the noise term. Here $\mathcal F$ is the Fourier transform that maps $\boldsymbol \gamma_i$ onto its k-space samples and $\mathcal S$ is the under-sampling operator. We compactly denote the above operation as \begin{equation} \mathbf B = \mathcal{A}({\boldsymbol \Gamma}) + \mathbf P \end{equation} where $\widehat{\boldsymbol{\Gamma}} = \left [\begin{array}{ccc} {\widehat{{\boldsymbol\gamma}}_1} & .. & \widehat{{\boldsymbol\gamma}}_M \end{array}\right]$ is the matrix representing multi-channel data in Fourier space, $ \mathbf B = \left [\begin{array}{ccc} \mathbf{b}_1 & .. & \mathbf{b}_M \end{array}\right]$ is the corresponding noisy under-sampled multi-channel Fourier measurement, and $\mathbf P=\left [\begin{array}{ccc} \boldsymbol{\eta}_1 & .. 
& \boldsymbol{\eta}_M \end{array}\right] $ is the multi-channel noise. Note that we denote the image of the $i^{\rm th}$ channel by $\boldsymbol \gamma_i$, while $\boldsymbol{\Gamma}$ denotes the concatenation of the channel data in spatial domain. The Fourier domain representation of $i^{\rm th}$ channel image is $\widehat{\boldsymbol \gamma_i}$, while $\widehat{\boldsymbol \Gamma}$ is the concatenation of the channel images in Fourier domain. \subsection{Structured Low-Rank Algorithms} \label{slrbackground} SLR methods rely on different liftings of the Fourier coefficients, designed to exploit specific properties of the signal. We now discuss two representative SLR applications, which illustrate the different types of lifting used in the SLR setting. \subsubsection{Continuous domain sparsity} \label{girafsec} A continuous domain piecewise constant image $\boldsymbol \gamma$ with edges specified by the zero sets of a bandlimited function $\mu$ satisfies an image domain annihilation relation, $\nabla \boldsymbol \gamma(\mathbf r) \cdot \mu(\mathbf r)=0, \forall \mathbf r $, where $\mathbf r$ represents spatial coordinates. Here, $\nabla \boldsymbol \gamma$ denotes the gradient of $\boldsymbol \gamma$. This relation translates to the following Fourier domain annihilation relations $\widehat{\nabla \boldsymbol \gamma}[\mathbf k] \ast n[\mathbf k]=0, \forall \mathbf k $, where $\mathbf k $ denotes k-space coordinates (Fourier space). Here $\widehat{\nabla \boldsymbol \gamma}[\mathbf k]$ represents the Fourier coefficients of the gradient of $\boldsymbol \gamma$ and $n[\mathbf k]$ is the Fourier transform of $\mu(\mathbf r)$. 
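The low-rank structure implied by this annihilation relation can be checked numerically in a hypothetical 1D analogue (the jump locations, amplitudes, and filter size below are illustrative, not from the paper): the derivative of a piecewise-constant signal is a stream of $K$ Diracs, so its Fourier coefficients are a sum of $K$ exponentials, and any Hankel matrix built from them has rank at most $K$.

```python
import numpy as np

# Hypothetical 1D analogue of the continuous-domain sparsity prior: the
# derivative of a piecewise-constant signal is a sum of K Diracs, so its
# Fourier coefficients are a sum of K exponentials, and any Hankel matrix
# formed from them has rank at most K (the number of jumps).
jumps = np.array([0.1, 0.45, 0.8])   # jump locations in [0, 1) (illustrative)
amps = np.array([1.0, -0.7, 0.4])    # jump amplitudes (illustrative)
k = np.arange(-32, 33)               # Fourier indices

# Fourier coefficients of gamma'(r) = sum_m amps[m] * delta(r - jumps[m])
ghat = (amps * np.exp(-2j * np.pi * np.outer(k, jumps))).sum(axis=1)

def hankel(x, f):
    """Stack all length-f sliding windows of x (a valid-convolution matrix)."""
    return np.array([x[i:i + f] for i in range(len(x) - f + 1)])

H = hankel(ghat, f=8)
print(np.linalg.matrix_rank(H, tol=1e-8))  # 3 = number of jumps
```

Any filter in the (here 5-dimensional) null space of `H` is an annihilating filter $n[\mathbf k]$ in the sense of the relation above.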
We denote the mapping from the Fourier coefficients ${\widehat{\boldsymbol \gamma}}$ to $\widehat{\nabla \boldsymbol \gamma}$ by $\mathcal G$: \begin{equation} \label{nabla_def} \mathcal G({\widehat{\boldsymbol \gamma}}) = \widehat{\nabla \boldsymbol \gamma}[\mathbf k]=\left[\begin{array}{c} j2\pi {k_x}~ \widehat{ \boldsymbol \gamma}[\mathbf k] \\ j2 \pi {k_y}~ \widehat{\boldsymbol \gamma}[\mathbf k] \end{array} \right ]= \left[\begin{array}{c} \widehat{\boldsymbol{\gamma}_x} \\ \widehat{\boldsymbol{ \gamma}_y} \end{array} \right ]. \end{equation} Note that $\mathcal G$ essentially creates two copies of $\widehat{\boldsymbol \gamma}$, each with a different Fourier weighting. The convolution relation $\widehat{\nabla \boldsymbol \gamma}[\mathbf k] \ast n[\mathbf k]=0$ can be represented as Hankel matrix multiplication $\mathcal{H}(\widehat{\nabla \boldsymbol \gamma})~\mathbf n=0$. The number of such null space filters, denoted by $V$, is often large (see \cite{ongie2017convex}) \begin{equation}\label{gradient} \underbrace{\begin{bmatrix} \mathcal H(\widehat{\boldsymbol \gamma_x})\\\mathcal H(\widehat{\boldsymbol \gamma_y}) \end{bmatrix}}_{\mathcal T\left(\mathcal G(\widehat{\boldsymbol \gamma})\right) }\underbrace{\left[\begin{array}{c|c|c|c} \mathbf n_1& \mathbf n_2 &\ldots & \mathbf n_V \end{array}\right]}_{\mathbf N} = 0. \end{equation} resulting in a low-rank matrix $\mathcal T(\mathcal G(\widehat{\boldsymbol \gamma}))$. Note that the Hankel matrices are vertically stacked to obtain $\mathcal T\left(\mathcal G(\widehat{\boldsymbol \gamma})\right)$, which is a common approach in SLR \cite{uecker2014espirit}. \subsubsection{Parallel MRI acquisition scheme} \label{pslrsec} Image and Fourier domain multi-channel annihilation relations were shown in two studies \cite{uecker2014espirit,jacob2020structured}. 
Specifically, each pair of multi-channel images in \eqref{forward_model} satisfies a Fourier domain annihilation relation $\widehat{\boldsymbol \gamma_{i}}[\mathbf{k}]\ast\widehat{\boldsymbol s_{j}}[\mathbf{k}]-\widehat{\boldsymbol \gamma_{j}}[\mathbf{k}]\ast\widehat{\boldsymbol s_{i}}[\mathbf{k}]=0, \forall \mathbf{k}$, where $\widehat{\boldsymbol \gamma_{i}}[\mathbf{k}]$ and $\widehat{\boldsymbol s_{i}}[\mathbf{k}]$ are the Fourier coefficients of $\boldsymbol \gamma_{i}(\mathbf{r})$ and $\boldsymbol s_{i}(\mathbf{r})$, respectively. Such annihilation relations exist for every pair of coil images and can be compactly written as \begin{eqnarray} \label{compact_rel} \underbrace{\left[\begin{array}{cccc}\mathcal{H} (\widehat{\boldsymbol \gamma_{1}}) & \mathcal{H} (\widehat{\boldsymbol \gamma_{2}}) & \ldots & \mathcal{H} (\widehat{\boldsymbol \gamma_{M}})\end{array}\right]}_{\mathcal{T}({\widehat{\boldsymbol \Gamma}})}\cdot \mathbf{N} = 0. \end{eqnarray} The columns of $\mathbf N$ correspond to the vertical stacking of the filters $\widehat{\boldsymbol s_{i}}$. The existence of this large null space $\mathbf{N}$ implies that $\mathcal{T}({\widehat{\boldsymbol \Gamma}})$ is low rank. Note that the Hankel matrices are horizontally stacked to obtain $\mathcal T\left(\widehat{\boldsymbol{\Gamma}}\right)$. Here $\mathcal G=\mathcal I$, which is the identity mapping. This is another popular class of lifting used in SLR \cite{jacob2020structured,haldar2019linear,lee2016acceleration}. \subsection{Calibration-free SLR Methods} In general, SLR schemes aim to recover an image or a series of images $\boldsymbol \Gamma$ from its measurements $\mathcal A(\boldsymbol \Gamma)$ by solving the optimization problem: \begin{equation} \label{slr} \min_{{\boldsymbol \Gamma}}\hspace{2pt} \mbox{rank}\hspace{1pt} \big[\mathcal{T}(\mathcal{G}(\widehat{\boldsymbol \Gamma}))\big] \hspace{2pt} \mbox{such that} \hspace{2pt} \mathbf B=\mathcal A\left( {\boldsymbol \Gamma}\right)+\mathbf P.
\end{equation} Here, $\mathcal T(.)$ is a lifting operator that lifts the weighted signal $\mathcal{G}(\widehat{\boldsymbol \Gamma})$ into a higher dimensional structured matrix. As discussed earlier, the generic weighting matrix $\mathcal G$ depends on the specific annihilation relation. The recovery of ${\boldsymbol \Gamma}$ is often posed as an unconstrained nuclear norm minimization problem \begin{equation} \label{slr1} \arg \min_{{\boldsymbol \Gamma}} \|\mathcal{A}({\boldsymbol \Gamma})-\mathbf B\|_2^2 + \lambda \|\mathcal{T}(\mathcal G (\widehat{\boldsymbol \Gamma}))\|_\ast \end{equation} where $\lambda$ is a regularizer to tune the nuclear norm loss term. \subsection{Iterative Re-weighted Least-Squares (IRLS) Algorithm} \label{irls} The IRLS scheme majorizes the nuclear norm with a weighted Frobenius norm as $\|\mathcal T(\mathcal G (\widehat{\boldsymbol \Gamma}))\|_\ast \leq \|\mathcal T(\mathcal G (\widehat{\boldsymbol \Gamma}))\mathbf Q\|_F^2$ to yield a two-variable optimization problem \begin{equation} \label{constrained} \arg \min_{{\boldsymbol \Gamma}, \mathbf Q} \|\mathcal{A}({\boldsymbol \Gamma})-\mathbf B\|_2^2 + \lambda \|\mathcal{T}(\mathcal G (\widehat{\boldsymbol \Gamma}))\mathbf Q\|_F^2, \end{equation} which alternates between the null space $\mathbf Q$ and image ${\boldsymbol \Gamma}$, \begin{eqnarray} \label{fup} {\boldsymbol \Gamma}^{(n)}=\arg \min_{{\boldsymbol \Gamma}} \|\mathcal A(\mathbf{{\boldsymbol \Gamma}})-\mathbf B\|_2^2+ \lambda \|\mathcal T(\mathcal{G} (\widehat{\boldsymbol \Gamma}))\mathbf Q^{(n-1)}\|_F^2\\ \label{qup} \mathbf Q^{(n)}= [\mathcal T(\mathcal G (\widehat{\boldsymbol \Gamma}^{(n)}))^H\mathcal T(\mathcal G(\widehat{\boldsymbol \Gamma}^{(n)}))+\epsilon^{(n)}\mathbf I]^{-1/4} \end{eqnarray} respectively. The matrix $\mathbf Q$ can be viewed as a collection of column vectors spanning the null space of $\mathcal T(\mathcal G (\widehat{\boldsymbol \Gamma}))$. 
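The null space update \eqref{qup} can be sketched numerically through an eigendecomposition of the Gram matrix; the matrix sizes and the toy lifted matrix below are illustrative, not the paper's implementation.

```python
import numpy as np

def q_update(T, eps):
    """IRLS weight update Q = (T^H T + eps I)^(-1/4) via eigendecomposition."""
    G = T.conj().T @ T + eps * np.eye(T.shape[1])
    w, V = np.linalg.eigh(G)                 # G is Hermitian positive definite
    return (V * w ** -0.25) @ V.conj().T     # V diag(w^{-1/4}) V^H

# Toy lifted matrix with a 3-dimensional (approximate) null space: directions
# with small singular values receive large weights, so the penalty
# ||T(G(Gamma)) Q||_F^2 acts most strongly on the null space directions.
rng = np.random.default_rng(0)
T = rng.standard_normal((20, 6)) * np.array([5.0, 4.0, 3.0, 1e-6, 1e-6, 1e-6])
Q = q_update(T, eps=1e-4)

# Sanity check: Q^4 inverts the regularized Gram matrix.
G = T.T @ T + 1e-4 * np.eye(6)
assert np.allclose(np.linalg.matrix_power(Q, 4) @ G, np.eye(6), atol=1e-6)
```

The eigendecomposition (or an SVD of $\mathcal T$) at every iteration is exactly the cost that the pre-learned CNN in the proposed scheme avoids.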
\subsection{Calibration-based SLR Methods} \label{calibrated} Several calibration-based MRI schemes (e.g., GRAPPA, SPIRiT \cite{griswold2002generalized,lustig2010spirit}) are related to the SLR schemes \cite{uecker2014espirit,jacob2020structured}. These approaches acquire a fully sampled calibration region in the Fourier domain, which corresponds to fully sampled rows of $\mathcal T(\mathcal G (\widehat{\boldsymbol{\Gamma}}))$ or, equivalently, the sub-matrix $\mathcal T_{R}(\mathcal G (\widehat{\boldsymbol{\Gamma}}))$. These schemes estimate the null space matrix $\mathbf Q$ (or, equivalently, the GRAPPA weights) by solving $\mathcal T_{R}(\mathcal G (\widehat{\boldsymbol{\Gamma}})) \mathbf Q =0$ subject to norm constraints on $\mathbf Q$; see \cite{jacob2020structured} for details. Once $\mathbf Q$ is pre-estimated from calibration data, the image is recovered from under-sampled Fourier coefficients by minimizing \begin{equation} \label{lp} \arg \min_{{\boldsymbol \Gamma}} \|\mathcal A(\mathbf{{\boldsymbol \Gamma}})-\mathbf B\|_2^2+ \lambda \|\mathcal{T}(\mathcal G (\widehat{\boldsymbol \Gamma}))\mathbf Q\|_F^2. \end{equation} For specific sampling patterns, the above optimization problem simplifies to solving the system of equations $\mathcal A({\boldsymbol \Gamma})=\mathbf B;~ \mathcal T(\mathcal G (\widehat{\boldsymbol \Gamma}))\mathbf Q=0$ analytically \cite{griswold2002generalized,uecker2014espirit}. In other cases \cite{ongie2015super,haldar2019linear}, \eqref{lp} is solved iteratively. Both strategies are computationally efficient since $\mathbf Q$ is fully known. However, the need for a calibration region restricts the achievable acceleration. \section{Deep Generalization of SLR Methods} The main focus of this work is to introduce a DL solution to improve the computational efficiency of SLR algorithms.
We note that calibrated SLR methods, which learn the linear null space projection operator from calibration data, require few iterations for convergence, thus offering fast image recovery. Calibration-free SLR methods by contrast are computationally expensive. Specifically, because the null space matrix $\mathbf Q$ is estimated from the data itself, the algorithm requires several iterations to converge. We propose to pre-learn a CNN-based null space projector from multiple exemplar datasets. The proposed non-linear CNN module learns to estimate the annihilation relations from the under-sampled data based on its training on exemplar data. We view this approach as learning a non-linear filterbank, which behaves like different linear filterbanks for different images. Specifically, the non-linear filterbank can be approximated as a linear filterbank, which projects the data to the null space, thus annihilating the signal but preserving the noise and alias artifacts; the residual block preserves the signal, while suppressing noise. \subsection{IRLS Algorithm with Variable Splitting} To facilitate the reinterpretation of the reconstruction scheme as an iterative denoising strategy, we introduce an auxiliary variable $\widehat{\mathbf z}$ in \eqref{constrained} to obtain a three-variable constrained optimization problem, \begin{equation}\label{constrainedeq} \arg \min_{{\boldsymbol \Gamma}, \mathbf Q, \widehat{\mathbf{Z}}} \|\mathcal{A}({\boldsymbol \Gamma})-\mathbf B\|_2^2 + \lambda \|\mathcal{T}(\widehat{\mathbf{Z}})\mathbf Q\|_F^2 \hspace{3pt} \mbox{such that} \hspace{3pt} \widehat{\mathbf Z} = \mathcal G (\widehat{\boldsymbol \Gamma}). \end{equation} We impose the constraint by a penalty term as \begin{equation*} \arg \min_{{\boldsymbol \Gamma}, \mathbf Q, \widehat{\mathbf{Z}}} \|\mathcal{A}({\boldsymbol \Gamma})-\mathbf B\|_2^2 + \lambda \|\mathcal{T}(\widehat{\mathbf{Z}})\mathbf Q\|_F^2+ \beta \|\mathcal G (\widehat{\boldsymbol \Gamma}) - \widehat{\mathbf{Z}}\|_2^2. 
\end{equation*} This formulation is equivalent to \eqref{constrainedeq} when $\beta \rightarrow \infty$. We propose to solve the above problem using the alternating minimization scheme: \begin{eqnarray} \label{img_update} \mathbf {{\boldsymbol \Gamma}}_{n+1} &=& \arg \min_{{\boldsymbol \Gamma}} \|\mathcal A({\boldsymbol \Gamma}) - \mathbf B\|^2 + \beta \|\mathcal G (\widehat{\boldsymbol \Gamma}) - \widehat{\mathbf Z}_{n}\|^2 \\\nonumber \widehat{\mathbf Z}_{n+1} &=& \arg \min_{\widehat{\mathbf Z}} \beta \| \mathcal G (\widehat{\boldsymbol\Gamma}_{n+1}) - \widehat{\mathbf Z}\|^2+ \lambda \|\mathcal{T}(\widehat{\mathbf{Z}})\mathbf Q\|_F^2. \\\label{z_update} \end{eqnarray} At each step, the $\mathbf Q$ matrix is updated as in \eqref{qup}. \subsubsection{Image Update}The first step specified by \eqref{img_update} is a simple Tikhonov regularized optimization problem to recover the multi-channel images $\boldsymbol \gamma$ at the $(n+1)$-th iteration. When $\mathcal G=\mathcal I$, the prior reduces to $\|\widehat{\boldsymbol \Gamma}- \widehat{\mathbf Z}_{n}\|^2$. In the general case, the solution to this optimization problem can be determined analytically as \begin{equation} \widehat{\boldsymbol \Gamma}_{n}=(\mathcal{A}^H \mathcal A + \beta \mathcal G^H\mathcal G)^{-1}(\mathcal{A}^H \mathbf B + \beta \mathcal G^H(\widehat{\mathbf Z}_{n-1})), \end{equation} when $\mathcal A$ involves a sampling in the Fourier domain. Similar analytical solutions can also be used when $\mathcal G$ involves a Fourier domain weighting as in the literature \cite{gregSIAM2016}. \subsubsection{Projection }The sub-problem \eqref{z_update} is essentially a proximal operation. Specifically, the second term of \eqref{z_update} is the energy in projecting $\mathcal T(\widehat{\mathbf Z})$ to the subspace $\mathbf Q$. If $\lambda \rightarrow \infty$, we obtain $\widehat{\mathbf Z}$ as the projection of $\widehat{\boldsymbol \Gamma}_{n+1}$ onto the signal subspace, orthogonal to $\mathbf Q$. 
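For the special case $\mathcal G=\mathcal I$ with Fourier-domain sampling, the image update \eqref{img_update} decouples per k-space sample, since $\mathcal A^H\mathcal A$ is the diagonal sampling mask. A minimal 1D sketch (the array sizes, mask, and variable names are illustrative):

```python
import numpy as np

def data_consistency(z_hat, b_zf, mask, beta):
    """Analytical solution of min ||A(Gamma) - B||^2 + beta ||Gamma_hat - Z_hat||^2
    for G = I and Fourier-domain sampling: A^H A is the diagonal sampling mask,
    so the Tikhonov update is elementwise in k-space."""
    return (b_zf + beta * z_hat) / (mask + beta)

rng = np.random.default_rng(0)
N = 32
gamma_hat = np.fft.fft(rng.standard_normal(N))   # toy fully sampled k-space
mask = (rng.random(N) < 0.5).astype(float)       # sampling pattern (illustrative)
b_zf = mask * gamma_hat                          # zero-filled measurements A^H B
z_hat = np.fft.fft(rng.standard_normal(N))       # current denoiser output Z_hat

out = data_consistency(z_hat, b_zf, mask, beta=1e-2)
# Unsampled locations follow the prior Z_hat exactly; sampled locations stay
# close to the measurements for small beta.
assert np.allclose(out[mask == 0], z_hat[mask == 0])
```

The same elementwise inversion carries over to the general case with a Fourier-domain weighting $\mathcal G$, as noted above.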
\subsection{Filterbank Interpretation of the Denoising Subproblem} We will now focus on the denoising sub-problem by showing its linear filterbank structure. We will capitalize on this structure to generalize the algorithm. We will focus on the vertical and horizontal stacking cases separately. \subsubsection{Vertical stacking considered in Section \ref{girafsec}} Consider the term $\mathcal T(\widehat{\mathbf Z})\mathbf q_i$, where $\mathbf q_i$ is one of the columns of the matrix $\mathbf Q$. When the lifting operation is described by \eqref{gradient}, we have \begin{equation} \mathcal T(\widehat{\mathbf Z})~\mathbf q_i = \begin{bmatrix} \mathcal H(\widehat{\mathbf z}_1) \\\mathcal H(\widehat{\mathbf z}_2)\end{bmatrix}\mathbf q_i= \begin{bmatrix} \mathbf p_1\\\mathbf p_2\end{bmatrix}. \end{equation} Because $\mathcal H(\widehat{\mathbf z})$ is a Hankel matrix, $\mathcal H(\widehat{\mathbf z})\mathbf q_i$ corresponds to the linear convolution between $\widehat{\mathbf z}$ and $\mathbf q_i$. Since convolution is commutative, we can rewrite the above expression as \begin{equation} \mathcal T(\widehat{\mathbf Z})\mathbf q_i = \underbrace{\begin{bmatrix} \widehat{\mathbf z}_1 \\\widehat{\mathbf z}_2\end{bmatrix}}_{\widehat{\mathbf Z}}\mathcal P(\mathbf q_i), \end{equation} where $\mathcal P(\mathbf q_i)$ is a block Hankel matrix constructed from the samples of $\mathbf q_i$. We thus have $\|\mathcal T(\widehat{\mathbf Z})\mathbf Q\|^2 = \| \widehat{\mathbf Z}\, \mathcal J(\mathbf Q)\|^2$, where $\mathcal J(\mathbf Q)$ is obtained by horizontally stacking the matrices $\mathcal P(\mathbf q_i)$. We note that $\widehat{\mathbf z}_1\, \mathcal J(\mathbf Q)$ corresponds to passing $\widehat{\mathbf z}_1$ through a single input multiple output (SIMO) filterbank, whose filters are specified by $\mathbf q_i$.
\subsubsection{Horizontal stacking considered in Section \ref{pslrsec}} Similar to the vertical stacking case, we consider \begin{eqnarray} \mathcal T(\widehat{\mathbf Z})~\mathbf q_i &=& \overbrace{\begin{bmatrix} \mathcal H(\widehat{\mathbf z}_1) &..&\mathcal H(\widehat{\mathbf z}_N)\end{bmatrix}}^{\mathcal T(\widehat{\mathbf Z})} \overbrace{\begin{bmatrix} \mathbf q_{i,1}\\\vdots\\\mathbf q_{i,N}\end{bmatrix}}^{\mathbf q_i}\\ &=&\underbrace{\begin{bmatrix} \mathcal P(\mathbf q_{i,1}) &..&\mathcal P(\mathbf q_{i,N})\end{bmatrix}}_{\mathcal J(\mathbf Q)} \underbrace{\begin{bmatrix} \widehat{\mathbf z}_{1}\\\vdots\\\widehat{\mathbf z}_{N}\end{bmatrix}}_{\widehat{\mathbf Z}} \end{eqnarray} We thus have $\|\mathcal T(\widehat{\mathbf Z})\mathbf Q\|^2 = \|\mathcal J(\mathbf Q) \widehat{\mathbf Z}\|^2$, where \begin{equation}\label{key} \mathcal J(\mathbf Q) = \begin{bmatrix} \mathcal P(\mathbf q_{1,1}) &..&\mathcal P(\mathbf q_{1N})\\ \vdots&..&\vdots\\ \mathcal P(\mathbf q_{N,1}) &..&\mathcal P(\mathbf q_{N,N}) \end{bmatrix} \end{equation} We note that $\mathcal J(\mathbf Q)\widehat{\mathbf Z}$ corresponds to passing $\widehat{\mathbf Z}$ through a multiple input multiple output (MIMO) filterbank, whose filters are specified by $\mathbf q_i$. \subsection{Approximation of Denoising Sub-problem} We thus rewrite \eqref{z_update} for both lifting approaches as, \begin{equation} \label{two_var_unconstrained} \widehat{\mathbf Z}_{n+1} = \arg \min_{\widehat{\mathbf{Z}}} \beta \| \mathcal G (\widehat{\boldsymbol\Gamma}_{n+1}) - \widehat{\mathbf Z}\|^2 + \lambda \|\mathcal{J}(\mathbf Q_n)\widehat{\mathbf Z}\|_F^2. \end{equation} which reduces to \begin{equation} \widehat{\mathbf Z}_{n+1} = \left[\mathbf I ~+~\frac{\lambda}{\beta}~\mathcal J(\mathbf Q_{n})^H \mathcal J(\mathbf Q_{n}) \right]^{-1}\mathcal G (\widehat{\boldsymbol \Gamma}_{n+1}). \end{equation} We propose to solve the denoising problem approximately. 
Assuming $\lambda \ll \beta$ and applying a first-order Taylor approximation, we obtain an approximate solution for $\widehat{\mathbf Z}$ as \begin{equation} \label{denoiser_gen} \widehat{\mathbf Z}_{n+1} \approx \underbrace{\left[\mathbf I ~-~\overbrace{\frac{\lambda}{\beta}~\mathcal J(\mathbf Q_n)^H \mathcal J(\mathbf Q_n)}^{\mathcal R_n} \right]}_{\mathcal L_n}\mathcal G (\widehat{\boldsymbol \Gamma}_{n+1}). \end{equation} \begin{figure} \subfigure[Linear Residual Convolutional Block]{\includegraphics[width=0.5\textwidth,keepaspectratio=true,trim={5cm 7cm 5cm 6.8cm},clip]{IRLS_denoiser_v2.pdf}}\vspace{-1.1em} \subfigure[Iterative Algorithm]{\includegraphics[width=0.48\textwidth,keepaspectratio=true,trim={4.5cm 6cm 4.2cm 5.3cm},clip]{IRLS_network_v1.pdf}} \caption{Illustration of the network structure of the IRLS algorithm used in structured low-rank algorithms: (a) shows the linear residual convolutional-deconvolutional block, which projects the signal at the $n^{\rm th}$ iteration to the signal subspace; (b) illustrates the network structure of the SLR algorithm, which alternates between the projection and the data consistency block.}\vspace{-1em} \label{irlsfig} \end{figure} As discussed before, $\mathcal J(\mathbf Q)$ denotes a MIMO or SIMO filterbank, depending on the nature of the lifting. The term $\mathcal J(\mathbf Q)^H$ denotes convolution with a flipped version of $\mathbf Q$, often referred to as a deconvolution layer in the DL literature. $\mathcal R_n$ is a filterbank that projects the signal to the null space, thus annihilating the signal while preserving the noise terms. The linear operator $\mathcal L_n$ is therefore a residual block, which removes the alias and noise terms from the input, essentially denoising the signal (see Fig. \ref{irlsfig}). Note that the filterbank $\mathbf Q_n$ has a subscript $n$ since it is updated at each iteration.
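The quality of this approximation can be verified on a small random example: when $\lambda/\beta$ is small, the inverse $[\mathbf I + (\lambda/\beta)\mathcal J^H\mathcal J]^{-1}$ is close to $\mathbf I - (\lambda/\beta)\mathcal J^H\mathcal J$, with an error of second order in $\lambda/\beta$. A minimal sketch (the matrix sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((20, 12))
R = J.T @ J                      # plays the role of J(Q)^H J(Q) (PSD)
eps = 1e-3                       # stands for lambda/beta, assumed small

exact = np.linalg.inv(np.eye(12) + eps * R)
approx = np.eye(12) - eps * R    # first-order Taylor (Neumann series) approximation

# since R is PSD, the error satisfies ||exact - approx||_F <= eps^2 ||R^2||_F
err = np.linalg.norm(exact - approx)
assert err <= eps**2 * np.linalg.norm(R @ R)
```

The error bound follows from the identity $(\mathbf I+\epsilon\mathbf R)^{-1} - (\mathbf I-\epsilon\mathbf R) = \epsilon^2(\mathbf I+\epsilon\mathbf R)^{-1}\mathbf R^2$, whose leading factor has spectral norm at most one for a positive semi-definite $\mathbf R$.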
The joint estimation of $\mathbf Q_n$ and the reconstruction $\widehat{\boldsymbol{\Gamma}}_n$ results in high computational complexity. Calibration-based methods, on the other hand, pre-estimate $\mathbf Q$ and hence the residual filterbank $\mathcal L$, which significantly reduces the computational burden. \subsection{SLR-inspired Model-based k-space DL} \label{slrdeep} The main disadvantage of the IRLS strategy discussed above is its high computational complexity; it requires a singular value decomposition (SVD) at each iteration, resulting in a computationally expensive algorithm. To improve the computational efficiency, we propose to pre-learn a non-linear CNN annihilation filterbank $\mathcal N_{\rm k}$ from exemplar data. The subscript $\rm k$ indicates that the network performs convolutions in k-space. We pose a reconstruction similar to \eqref{lp}: \begin{equation} \label{deep} \arg \min_{{\boldsymbol \Gamma}} \|\mathcal{A}({\boldsymbol \Gamma})-\mathbf B\|_2^2 + \lambda_1 \| \underbrace{\left(\mathcal I -\mathcal{D}_{\rm k}\right)}_{\mathcal{N}_{\rm k}}(\mathcal G(\widehat{\boldsymbol \Gamma}) )\|_2^2. \end{equation} Here, $\mathcal N_{\rm k}$ is a CNN that annihilates the signal while preserving the noise and alias terms, conceptually similar to $\mathcal R_n$ in \eqref{denoiser_gen}. Thus, the operator $\mathcal D_{\rm k} = \mathcal I - \mathcal N_{\rm k}$ can be viewed as a denoiser similar to $\mathcal L_n$ in \eqref{denoiser_gen}. We propose to pre-learn the parameters of the network from exemplar data. Unlike calibrated schemes that learn a small linear network from a small subset of Fourier data (the calibration region), the CNN parameters are learned from several fully sampled exemplar datasets. This approach enables us to learn a larger CNN, which can generalize to other datasets.
We hypothesize that this pre-learned non-linear network can behave like a linear projection for each dataset, thereby facilitating their recovery from under-sampled data. Since the parameters of the network do not need to be self-learned, this approach is significantly faster than uncalibrated SLR approaches. We use an alternating minimization strategy similar to \eqref{img_update} and \eqref{z_update} to minimize \eqref{deep}. The resulting algorithm translates to a recursive network, which alternates between the denoising network $\mathcal D_{\rm k}$, which removes the noise and alias terms, and data consistency (DC) blocks: \begin{eqnarray} \widehat{\mathbf Z}_n &=& \mathcal D_{\rm k}(\mathcal G(\widehat{\boldsymbol \Gamma}_n))\\ \widehat{\boldsymbol \Gamma}_{n+1} &=& (\mathcal A^H \mathcal A + \lambda_1 \mathcal G^H \mathcal G)^{-1}(\mathcal A^H \mathbf B + \lambda_1 \mathcal G^H \widehat{\mathbf Z}_n) \end{eqnarray} Similar to \cite{aggarwal2018modl}, we consider $K$ iterations of the above algorithm and unroll the iterative scheme to obtain a deep network. The unrolled network consists of $K$ repetitions of the $\mathcal D_{\rm k}$ and DC blocks, with the parameters of $\mathcal D_{\rm k}$ shared across iterations. At each iteration, the \emph{noisy} input $\mathcal G \widehat{\boldsymbol \Gamma}_n$ is projected to the signal subspace and hence denoised. The output of $\mathcal D_{\rm k}$ is given by $\mathcal D_{\rm k}(\mathcal G(\widehat{\boldsymbol \Gamma}_n))=\mathcal G(\widehat{\boldsymbol \Gamma}_n) - \mathcal N_{\rm k}(\mathcal G(\widehat{\boldsymbol \Gamma}_n))$. The output is then fed into the DC block as shown in Fig. \ref{fig:kspn}. \begin{figure}[t!]
\centering \subfigure[Residual CNN]{\includegraphics[width=0.4\textwidth,keepaspectratio=true,trim={6cm 5cm 6cm 5cm},clip]{5_layer_CNN_v3.pdf}}\vspace{-1em} \subfigure[Proposed iterative algorithm]{\includegraphics[width=0.48\textwidth,keepaspectratio=true,trim={4.5cm 6cm 4.2cm 5cm},clip]{KSP_network_v1.pdf}} \caption{Network structure of the proposed recursive CNN in k-space, described in Section \ref{slrdeep}. The main difference of the proposed scheme from the approach in Fig. \ref{irlsfig} is the use of the deep residual CNN in (b), instead of the linear convolution-deconvolution block in Fig. \ref{irlsfig}.(a).} \label{fig:kspn}\vspace{-1em} \end{figure} As discussed previously, this iterative algorithm is similar to an alternating scheme to solve \eqref{lp}, with the distinction that the linear convolution-deconvolution block is replaced by a non-linear CNN. Unlike the setting in \eqref{lp}, where the filter parameters are learned from the calibration data of each dataset, we propose to pre-learn a CNN from exemplar data. \subsection{Hybrid Regularized DL} The SLR methods exploit the redundancies in k-space resulting from specific structures in the signal. However, the image patches in MR images often exhibit extensive redundancy, which is exploited in our MoDL scheme \cite{aggarwal2018modl} as well as other image domain methods \cite{lee2018deep,schlemper2017deep,hammernik2018learning}. These priors are complementary to the SLR priors discussed in the previous section. We propose to modify the cost function in \eqref{deep} as \begin{equation} \label{deep_hybrid} \arg \min_{\boldsymbol \Gamma} \|\mathcal{A}({\boldsymbol \Gamma})-\mathbf B\|_2^2 + \lambda_1 \|\mathcal{N}_{\rm k}(\mathcal G(\widehat{\boldsymbol \Gamma})) \|_2^2 + \lambda_2 \|\mathcal{N}_{\rm I} ({\boldsymbol \Gamma}) \|_2^2. \end{equation} Here, $\mathcal N_{\rm I}$ and $\mathcal N_{\rm k}$ are two residual CNNs.
The alternating minimization of this scheme results in the following steps: \begin{eqnarray} \label{ksp_proj_hybrid} \boldsymbol \Theta_n &=& \mathcal D_{\rm k}(\mathcal G (\widehat{\boldsymbol \Gamma}_n))\\ \label{img_proj_hybrid} \boldsymbol \Phi_n &=& \mathcal D_{\rm I}(\boldsymbol \Gamma_n)\\\nonumber \label{dchybrid} \widehat{\boldsymbol \Gamma}_{n+1} &=& (\mathcal A^H\mathcal A + \lambda_1 \mathcal G^H \mathcal G + \lambda_2 \mathcal I)^{-1}(\mathcal A^H \mathbf B \\& &+ \lambda_1 \mathcal G^H \boldsymbol \Theta_n + \lambda_2\boldsymbol \Phi_n) \end{eqnarray} as shown in Fig. \ref{fig:hybn}. The $\mathcal D_{\rm k}$ block relies on annihilation relations in k-space, while the $\mathcal D_{\rm I}$ block exploits image domain priors. We propose to learn the parameters of the CNNs $\mathcal D_{\rm k}$ and $\mathcal D_{\rm I}$ using exemplar data. \subsection{Special Cases} We demonstrate the proposed methods in both single-channel and multi-channel settings and show that the image sub-problem can be solved analytically in both cases, which accelerates the training and testing procedures. \subsubsection{Piecewise Constant Image Structure} The GIRAF \cite{ongie2017fast} algorithm is an SLR scheme that exploits the piecewise constant nature of images, as described in Section \ref{girafsec}. Here, the operator is $\mathcal G(\widehat{\boldsymbol{\gamma}}) = \widehat{\nabla \boldsymbol \gamma}$, as defined in \eqref{nabla_def}.
In this case, we have \begin{eqnarray} \mathcal G^H\left(\begin{bmatrix} \widehat{\mathbf z}_1\\\widehat{\mathbf z}_2 \end{bmatrix}\right)[\mathbf k] &=& -(j2\pi k_x \widehat{\mathbf z}_1[\mathbf k] +j2\pi k_y \widehat{\mathbf z}_2[\mathbf k] )\\ \mathcal G^H\mathcal G\left(\widehat{\boldsymbol{\gamma}}\right)[\mathbf k] &=& 4\pi^2 (k_x^2+k_y^2) ~\widehat{\boldsymbol \gamma}[\mathbf k] \end{eqnarray} Note that the matrix $(\mathcal A^H \mathcal A + \lambda_1 \mathcal G^H \mathcal G + \lambda_2 \mathcal I)$ in \eqref{dchybrid} can be viewed as a weighting operator in the Fourier domain in the single-channel setting. We can thus solve \eqref{dchybrid} analytically. \subsubsection{Parallel MRI Acquisition} In the parallel MRI setting, $\mathcal G=\mathcal I$ and hence the data consistency term simplifies to $ (\mathcal A^H \mathcal A + \left(\lambda_1 + \lambda_2\right) \mathcal I)^{-1}(\mathcal A^H \mathbf B + \lambda_1 \boldsymbol \Theta_n + \lambda_2 \boldsymbol \Phi_n)$. The term $(\mathcal A^H\mathcal A + \left(\lambda_1 + \lambda_2\right) \mathcal I)$ is separable across the channels. Hence, one can independently solve for each channel of $\widehat{\boldsymbol \Gamma}_{n+1}$ in the Fourier domain in an analytical fashion. \begin{figure} \includegraphics[width=0.48\textwidth,keepaspectratio=true,trim={4cm 7cm 3.5cm 5.7cm},clip]{HYBRID_network_v1.pdf} \caption{Hybrid network: It consists of two identically structured residual CNNs $\mathcal D_{\rm k}$ and $\mathcal D_{\rm I}$ for k-space and image domain learning, respectively. The $\mathcal D_{\rm I}$ block learns redundancies in patches, while the $\mathcal D_{\rm k}$ block exploits k-space annihilation relations. The $\mathcal D_{\rm I}$ block applies an inverse fast Fourier transform (IFFT) to the input k-space data ${\boldsymbol \Gamma}$ and passes the result to the residual CNN. The residual image output is transformed back to k-space by a fast Fourier transform (FFT).
The outputs of $\mathcal D_{\rm k}$ and $\mathcal D_{\rm I}$ at the $\rm n^{th}$ iteration are denoted by $\boldsymbol \Theta_n$ and $\boldsymbol \Phi_n$, respectively, according to \eqref{ksp_proj_hybrid}, \eqref{img_proj_hybrid} and \eqref{dchybrid}. The parameters are not shared between $\mathcal D_{\rm k}$ and $\mathcal D_{\rm I}$. Both $\mathcal D_{\rm k}$ and $\mathcal D_{\rm I}$ in hybrid Deep-SLR have half the number of feature maps per layer compared to $\mathcal D_{\rm k}$ in k-space Deep-SLR, to keep the number of trainable parameters the same in both networks for a fair comparison. The network parameters are shared across iterations, similar to the MoDL \cite{aggarwal2018modl} framework.} \label{fig:hybn} \end{figure} \section{Implementation Details} \subsection{Datasets} \label{data_acq} The datasets used for the single-channel experiments were multi-coil knee k-space data from \url{www.mridata.org} and multi-coil brain data from the Calgary-Campinas Public (CCP) dataset \cite{souza2018open} (\url{https://sites.google.com/view/calgary-campinas-dataset}). The CCP dataset consists of 12-channel T1-weighted brain MR datasets acquired on a 3T scanner. It is a 3D acquisition that allows undersampling along two directions (phase and slice encoding). We used the single-channel complex valued images provided by the organizers, which were generated by multi-channel coil combination. The Fourier transform (FT) was applied to obtain k-space samples from the coil-combined images. Since the frequency encoding dimension is fully sampled, we performed an IFFT along this dimension and considered the recovery of each 2D slice independently. We chose twenty subjects for training, five for validation, and ten for testing. The other dataset, consisting of multi-channel knee data from twenty subjects, was acquired with a 3D fast spin echo (FSE) sequence on a 3T scanner. The scan parameters were: repetition time TR = 1550 ms, echo time TE = 25 ms, and a flip angle of $90^{\circ}$.
There are 256 sagittal slices and 320 coronal slices per subject with matrix sizes of 320 x 320 and 320 x 256, respectively, at a slice thickness of 0.5 mm. A coil combination of the 8-channel knee k-space data was performed using principal component analysis (PCA). Specifically, we performed a PCA along the coil dimension and picked the first principal component as the single-channel complex image. This coil compression preserved on average about 90\% of the energy of the multi-channel data. Fifteen subjects were used for training, two for validation, and the remaining three for testing. A set of experiments was performed to study the workings of the k-space Deep-SLR scheme for single-channel MRI recovery. The NIfTI formatted T2-weighted brain datasets from the Human Connectome Project (HCP) \cite{van2012human} were used. The T2-weighted brain images were acquired on a Siemens 3T MR scanner using a 3D Cartesian spin-echo sequence. The TR and TE parameters were 3200 ms and 565 ms, respectively, while the matrix size was 320 x 256 with a field of view (FOV) of 224 x 224 mm$^{2}$. Parallel MRI experiments were performed on multi-channel brain and knee datasets. The knee dataset \cite{hammernik2018learning} is a multi-slice 2D dataset consisting of 15-channel slices from 20 subjects with roughly 40 slices per subject. The slices are of dimension 640 x 368 x 15. Twelve subjects were used for training, one for validation, and the remaining seven for testing. The data was under-sampled using a variable-density pattern along the phase encodes. Brain MRI data was collected from nine subjects at the University of Iowa Hospitals and Clinics using a 3D T2 CUBE sequence with Cartesian readouts and a 12-channel head coil. There are 140 3D slices per subject with dimensions 12 x 256 x 232. We used five subjects for training, one for validation, and the remaining three for testing. In both cases, the fully sampled complex k-space data was under-sampled and used for training.
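The PCA-based coil combination described above amounts to an SVD along the coil dimension, keeping the dominant component as a single virtual coil. A minimal sketch on synthetic data (the sizes and the rank-1 coil model below are illustrative assumptions, not the actual acquisition):

```python
import numpy as np

rng = np.random.default_rng(4)
n_coils, n_pix = 8, 500   # illustrative sizes, not the paper's matrix sizes

# toy multi-coil data: a shared complex image seen through per-coil weights + noise
img = rng.standard_normal(n_pix) + 1j * rng.standard_normal(n_pix)
weights = rng.standard_normal(n_coils) + 1j * rng.standard_normal(n_coils)
noise = 0.1 * (rng.standard_normal((n_coils, n_pix))
               + 1j * rng.standard_normal((n_coils, n_pix)))
coils = np.outer(weights, img) + noise

# PCA along the coil dimension via the SVD of the (coils x pixels) matrix;
# the first right singular vector (scaled) serves as the single virtual coil
U, s, Vh = np.linalg.svd(coils, full_matrices=False)
virtual_coil = s[0] * Vh[0]

# the dominant component carries most of the energy of the multi-channel data
energy_kept = s[0] ** 2 / np.sum(s ** 2)
assert energy_kept > 0.9
```

For this toy rank-1 model the retained energy is well above the roughly 90\% observed on the real knee data.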
The complex image obtained by evaluating the IFFT of the individual coil data was used as ground truth in training and testing. \begin{figure*} \centering \includegraphics[width=\textwidth,keepaspectratio=true,trim={1.9cm 9.1cm 1.2cm 9.5cm},clip]{Calgary_single_channel_axial.pdf} \caption{Reconstruction results of 4x accelerated single-channel brain data. SNR (dB)/PSNR (dB)/SSIM values are reported for each case. The data was under-sampled using a Cartesian 2D non-uniform variable-density mask. The top row shows reconstructions (magnitude images), while the bottom row shows corresponding error images. The additional image domain prior in \textbf{H-DSLR} ensures significant improvement in performance over other schemes.} \label{fig:sc_brain_rec} \end{figure*} \subsection{Quality Evaluation Metric} We quantitatively evaluate the recovered images in terms of the signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) index. The SNR of an image is computed as $ \mathbf{SNR} = 20 \cdot \log_{10} \left(\frac{\|\mathbf x_{\rm org}\|_2}{\|\mathbf x_{\rm org}-\mathbf x_{\rm rec}\|_2}\right)$, where $\mathbf x_{\rm org}$ and $ \mathbf x_{\rm rec}$ are the original ground truth and reconstructed images, respectively. \subsection{Architecture of the CNNs} The modular nature of the proposed scheme allows us to use any residual CNN architecture to define the prior. A key difference from the approach in Fig. \ref{irlsfig} is that the CNN parameters are fixed and do not change with iterations, unlike in Fig. \ref{irlsfig}.(b). The pre-learning of the CNN parameters using exemplar data allows us to significantly reduce the number of alternating steps compared to the self-learning strategy in Fig. \ref{irlsfig}. The image domain CNN $\mathcal N_{\rm I}$ is structurally identical to the Fourier domain CNN $\mathcal N_{\rm k}$, with an equal number of parameters.
The residual block $\mathcal D_{\rm I}$ performs an IFFT on the input k-space data ${\boldsymbol \Gamma}$, feeds the resulting spatial domain signal to the CNN $\mathcal N_{\rm I}$, and transforms the residual output back to k-space by a fast Fourier transform (FFT) operation. For implementation purposes, we split the input k-space data into its real and imaginary components, which are fed as two channels. The two output channels are combined to recreate the complex output k-space data. \subsubsection{Single-channel Case} We use a residual UNET as $\mathcal N_{\rm k}$ in the single-channel setting for the proposed k-space Deep-SLR (K-DSLR) scheme. We use a modified version of the UNET with only 12 layers (two pooling and unpooling operations). The number of filters per layer grows from 64 to a maximum of 256. The UNET operates on single-channel Fourier data ($M=1$). For the proposed hybrid scheme H-DSLR, the number of parameters in both UNETs was halved layer by layer to keep the total similar to the K-DSLR network for fair comparisons. \subsubsection{Parallel MRI (Multi-channel Case)} A residual five-layer MIMO CNN $\mathcal N_{\rm k}$, as shown in Fig. \ref{fig:kspn}.(b), is used as the k-space network in K-DSLR. The input and output channels of the network are adjusted according to the dataset. For example, $M = 12$ and $M = 15$ channels are set for the multi-channel brain and knee data, respectively. Each convolution layer consists of 64 3 x 3 filters, followed by a ReLU non-linearity. The number of filters per layer was halved to 32 for both CNNs in H-DSLR, compared to 64 in K-DSLR, for a fair comparison. We trained the unrolled recursive network for different numbers of iterations $K$. The $K=10$ model was found to perform best on test data in both cases, and the performance saturated afterwards. We were constrained by 16 GB of GPU memory, which restricted us from going beyond 15 iterations.
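The unrolled alternation between the denoising block and the analytical DC step can be sketched on a toy single-channel problem with $\mathcal G = \mathcal I$ and a sampling mask as the forward operator; the fixed 3-tap smoother below is only a stand-in for the trained residual CNN $\mathcal D_{\rm k}$, not the actual network:

```python
import numpy as np

rng = np.random.default_rng(2)
n, lam, K = 64, 1.0, 10

gamma_true = np.fft.fft(rng.standard_normal(n))    # toy fully sampled k-space
mask = (rng.random(n) < 0.5).astype(float)         # sampling operator A (a mask)
b = mask * gamma_true                              # measured k-space samples

def denoiser(z):
    # stand-in for the trained residual CNN D_k: a fixed 3-tap k-space smoother
    return 0.25 * np.roll(z, -1) + 0.5 * z + 0.25 * np.roll(z, 1)

# unrolled alternation: with G = I and A a mask, A^H A is diagonal, so the DC
# update gamma = (A^H A + lam I)^{-1}(A^H b + lam z) is an elementwise division
gamma = b.copy()
for _ in range(K):
    z = denoiser(gamma)                   # denoising block
    gamma = (b + lam * z) / (mask + lam)  # data-consistency block

# sampled locations are pulled back toward the measurements by the DC step
assert np.linalg.norm(mask * gamma - b) <= 0.5 * np.linalg.norm(mask * z - b) + 1e-9
```

In the actual implementation the loop body is unrolled into a $K$-stage network with shared denoiser weights, and the elementwise division generalizes to the analytical Fourier-domain solves described in the Special Cases section.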
The regularization parameters were fixed at $\lambda_1 = \lambda_2 = 1$ for all the experiments. The weights were Xavier initialized and trained for 500 epochs with the Adam optimizer to reduce the mean square error (MSE), at a learning rate of $10^{-4}$. All the DL models were implemented using Tensorflow version 1.15. The proposed $K=10$ iteration models for the single and multi-channel cases took 5 and 10 hours to train, respectively. The source code for the proposed H-DSLR scheme on multi-channel MRI datasets is available at \url{https://github.com/anikpram/Deep-SLR}. \subsection{State-of-the-art Methods for Comparison} We compare our scheme for single-channel recovery against the SLR algorithm (GIRAF) \cite{ongie2017fast}, a k-space UNET (K-UNET) \cite{han2019k}, and an image domain UNET (I-UNET). The K-UNET is a direct DL approach with a 20-layer 2D UNET in k-space without a DC step. It accepts a real image formed by the concatenation of the real and imaginary parts of the 2D complex k-space. The I-UNET is the spatial version of K-UNET, where learning is performed in the spatial domain. The I-UNET structure and its number of parameters are exactly the same as those of K-UNET. These networks were trained and tested on the single-channel knee and brain datasets described in Section \ref{data_acq}. In the parallel MRI setting, we compare the proposed scheme with MoDL \cite{aggarwal2018modl}, K-UNET \cite{han2019k}, and the calibration-less parallel SLR algorithm that motivated our proposed scheme. K-UNET is also a multi-channel calibration-less direct DL approach in k-space without a DC step \cite{han2019k}. Its structure is similar to the single-channel K-UNET, with the only difference being the multi-channel input and output. MoDL \cite{aggarwal2018modl} is a pre-calibrated approach that uses coil sensitivity information and spatial domain regularization. The coil sensitivities for MoDL were estimated using ESPIRiT \cite{uecker2014espirit}.
All the parallel MRI methods were evaluated on the brain and knee datasets mentioned in Section \ref{data_acq}. \section{Experiments and Results} Experiments were performed on multiple datasets for both single-channel sparse MRI and parallel MRI recovery. Additional experiments on diffusion MRI recovery are discussed in the supplementary material. \subsection{Single-channel Signal Recovery} Comparisons of the proposed single-channel schemes against state-of-the-art methods are shown in Fig. \ref{fig:sc_brain_rec} for the CCP dataset described in Section \ref{data_acq}. We observe that the k-space Deep-SLR (K-DSLR) approach in Fig. \ref{fig:sc_brain_rec}.(d) provides results that are comparable to the model-based GIRAF \cite{ongie2017fast} method in Fig. \ref{fig:sc_brain_rec}.(b). By contrast, the direct inversion based I-UNET and K-UNET provide lower performance, even though their number of trainable parameters is larger. Of these, the K-UNET provides slightly lower errors. The improved performance of K-DSLR over K-UNET may be attributed to the model-based approach, which repeatedly enforces DC. Fig. \ref{fig:sc_brain_rec}.(f) corresponds to H-DSLR, which uses both k-space and image domain priors. The number of parameters in this model is similar to the one in Fig. \ref{fig:sc_brain_rec}.(d), since the number of output channels of each intermediate layer is halved. However, the addition of the complementary prior significantly reduces the errors. A similar set of experiments was also performed on the single-channel coronal and sagittal views of the knee images described in Section \ref{data_acq}. The comparisons are shown and discussed in the supplementary paper. The quantitative results of all the experiments are recorded in Table S1 in the supplementary section. \begin{figure}[b!]
\centering \includegraphics[width=0.5\textwidth,keepaspectratio=true,trim={1.8cm 10.9cm 11.5cm 11.4cm},clip]{psf_analysis.pdf} \caption{Illustration of the non-linear and linear annihilation operators. Piecewise constant images and their gradients are shown in (a) and (b), respectively. The non-linear block $\mathcal N_{\rm k}$ behaves like a linear projector to the null space for each image. Pseudo-random perturbations of small magnitude are added to the gradients of the image and fed to $\mathcal N_{\rm k}$. The SOS of the output perturbations is shown in (d). The SOS response of $\mathcal N_{\rm k}$ closely mimics that of the linear operator $\mathcal R$ in (c). Specifically, it annihilates the gradient components close to the edge locations while preserving the noise components far from the edges. We show more results on different slices in the supplementary material (see Fig. S1).} \label{fig:psf} \end{figure} \begin{figure*}[t!] \centering \includegraphics[width=\textwidth,keepaspectratio=true,trim={1.8cm 9.05cm 0.6cm 9.8cm},clip]{Knee_Florian_PMRI_zoomed_images_2_v1.pdf} \caption{Reconstruction results of 4x accelerated 15-channel knee data. A 2D Cartesian structured under-sampling along the phase encodes was used. The top row displays reconstructions (SOS), and the bottom row shows the corresponding error images. The yellow arrows in the zoomed cartilage region show minute details better preserved by the proposed scheme over other state-of-the-art methods. The numbers are SNR reported in dB. The k-space Deep-SLR scheme \textbf{K-DSLR} yields comparable results to the parallel SLR scheme. The addition of the image domain prior further improves performance. We show H-DSLR reconstructions of different slices with different acceleration factors in the supplementary material (see Fig.
S5).} \vspace{-1em} \label{fig:mc_knee_florian} \end{figure*} \subsection{Annihilation Operators on Piecewise Constant Images} \label{hypo_test} The single-channel Deep-SLR scheme is used in Fig. \ref{fig:psf} to study its inner workings and its similarity to classical SLR methods. Note that k-space SLR methods for single-channel MRI \cite{ongie2015super,ongie2017convex} learn linear annihilation relations in k-space. As shown in the literature \cite{ongie2015super,ongie2017convex}, the SLR penalty in \eqref{lp} is a weighted $\ell_2$ norm of the gradients of the image, where the weights correspond to the sum of squares (SOS) of the estimated null space filters. Specifically, the linear annihilation operator has several linearly independent null space vectors; in the single-channel setting, the sum of squares of the IFFT of the null space vectors yields zeros at the locations of the gradients, as shown in the literature \cite{ongie2017fast}. The SLR scheme estimates the annihilation relations from under-sampled data using an optimization strategy. By contrast, the proposed scheme learns to estimate the annihilation relations from under-sampled measurements based on its training on exemplar data. The solutions provided by the unconstrained setting considered in this paper and \cite{aggarwal2018modl} are similar to the constrained setting in \cite{tamir2019unsupervised}, where the formulation in \eqref{deep} is replaced by \begin{equation}\label{unconstrained} \boldsymbol{\gamma}^* = \arg \min \|\mathcal N_{\rm k}(\mathcal G(\widehat{\boldsymbol \gamma}))\|^2 ~~\mbox{such that}~~\|\mathcal A(\boldsymbol \gamma)-\mathbf b\|^2 \leq \sigma^2, \end{equation} where $\sigma^2$ is the noise variance; see \cite{tamir2019unsupervised} for detailed performance comparisons of the constrained and unconstrained formulations.
In this case, the data consistency layer specified by \eqref{dchybrid} is modified as \begin{equation}\label{dclayer} \boldsymbol{\gamma}_{n+1} = \arg \min \|\mathcal G(\widehat{\boldsymbol \gamma}) - \mathbf{\widehat z}_n\|^2 ~~\mbox{such that} ~~\|\mathcal A(\boldsymbol \gamma)-\mathbf b\|^2 \leq \sigma^2. \end{equation} When $\mathcal A$ satisfies the restricted isometry conditions \cite{candes2008restricted}, we have $ \epsilon \|\boldsymbol\gamma-\boldsymbol\gamma^*\|^2 \leq \|\mathcal A(\boldsymbol\gamma-\boldsymbol\gamma^*)\|^2\leq\delta \|\boldsymbol\gamma-\boldsymbol\gamma^*\|^2$, where $\epsilon$ and $\delta$ are the restricted isometry property (RIP) constants. Since $\boldsymbol \gamma_{n+1}$ satisfies the data-consistency constraint in \eqref{dclayer}, we thus have \begin{equation} \|\boldsymbol \gamma_{n+1} -\boldsymbol{\gamma}^*\|^2\leq \frac{\sigma^2}{\epsilon}, \end{equation} where $\boldsymbol \gamma^*$ is the true solution. This relation implies that at each iteration $n$, the input to the network $\boldsymbol \gamma_{n}$ is within a $\sigma^2/\epsilon$ ball of the true solution $\boldsymbol \gamma^*$. We note that an arbitrary non-linear function can be approximated by its first-order Taylor series representation in a small neighborhood. Our hypothesis is that the first-order Taylor series approximation of the non-linear annihilation block $\mathcal N_{\rm k}(\mathcal G(\widehat{\boldsymbol \gamma}))$ within the $\sigma^2/\epsilon$ ball around $\boldsymbol \gamma^*$ closely matches the linear annihilation relations in SLR schemes. Specifically, the annihilation filters would kill the high gradients while preserving the noise. The use of this annihilation filterbank within the residual block results in preserving the true signal while suppressing the noise-like perturbations.
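For a purely linear annihilation operator, this perturbation-probing idea can be simulated directly: the SOS of the output perturbations over many realizations recovers the operator's known response. A toy sketch, with an image-domain weight map (zero at assumed edge locations) standing in for the trained network:

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_trials, sigma = 128, 2000, 0.01

# linear stand-in for the annihilation operator: a pointwise weighting that is
# zero at assumed "edge" locations and one elsewhere (purely illustrative)
w = np.ones(n)
w[30:35] = 0.0
w[80:90] = 0.0

# probe the operator with small random perturbations and accumulate the SOS
sos = np.zeros(n)
for _ in range(n_trials):
    e = sigma * rng.standard_normal(n)
    sos += np.abs(w * e) ** 2
sos = np.sqrt(sos / n_trials)

# the empirical SOS map recovers the operator's response: zeros at the edges,
# approximately sigma elsewhere
assert np.all(sos[30:35] == 0) and np.all(sos[80:90] == 0)
assert np.allclose(sos[w == 1], sigma, rtol=0.15)
```

The experiment described next applies the same probing strategy to the trained non-linear network $\mathcal N_{\rm k}$.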
In order to test this hypothesis, small random perturbations with $\sigma = 0.01$ are added to a given image $\boldsymbol \gamma^*$ and the corresponding output perturbations are analyzed; the sum of squares of the output perturbations is an indicator of the response of the annihilation operator. We consider a piecewise constant image in Fig. \ref{fig:psf}, which was derived by thresholding an image from the HCP dataset (described in Section \ref{data_acq}). A CNN with the same architecture as above (a 12-layer UNET as $\mathcal N_{\rm k}$) is trained using piecewise constant brain images from 10 training subjects, also obtained by thresholding the HCP data. Following training, random perturbations are added to a new dataset and its k-space data is passed through the network. The sum of squares of the IFFT of the outputs for 1000 realizations is evaluated and shown in Fig. \ref{fig:psf}.(d). Note that the zeros of the SOS output closely mimic those of the SOS function in Fig. \ref{fig:psf}.(c). This behaviour is observed across a wide variety of testing slices unseen by the trained network, as shown in Fig. \ref{fig:psf} and in Fig. S1 in the supplemental document, which supports the generalizability of the learned relations. The experiment strengthens our hypothesis that the proposed network behaves as a linear projector, similar to classical SLR schemes, for each image $\boldsymbol \gamma^*$. While similar results are observed for natural images, they are difficult to visualize due to the large dynamic range. We note that, without additional constraints on the weights, one cannot guarantee that the Lipschitz constant of the network is bounded for all inputs, including adversarial perturbations. \subsection{Parallel MRI Recovery} The proposed multi-channel schemes are compared against state-of-the-art calibration-less and calibrated schemes in Figures \ref{fig:mc_knee_florian} and \ref{fig:mc_brain}, and Table \ref{tab:comp_mc}.
The methods were tested on 420 3D brain slices collected from three subjects. The same set of methods was also tested on approximately 300 3D knee slices from seven subjects. Similar to the single-channel case, the performance of the multi-channel K-DSLR is comparable to the parallel SLR (PSLR) scheme. The k-space network exhibits some residual aliasing in the knee example in Fig. \ref{fig:mc_knee_florian}.(d), which can be attributed to the highly structured/uniform nature of the sampling. Note that the data was acquired with a calibration region, which the iterative PSLR scheme seems to have benefited from, even though we did not explicitly rely on a calibrated approach. The table reveals that the proposed H-DSLR outperformed the multi-channel PSLR and K-UNET \cite{han2019k}, and that it is slightly better than the pre-calibrated approach MoDL \cite{aggarwal2018modl}. Note that MoDL is a calibrated scheme, which requires explicit knowledge of the coil sensitivities. The coil sensitivities were estimated from the fully sampled region in the center of k-space for both the knee (Fig. \ref{fig:mc_knee_florian}) and brain (Fig. \ref{fig:mc_brain}) experiments using ESPIRiT \cite{uecker2014espirit}. The calibration-less methods compared here (PSLR, the proposed method, and K-UNET) perform an interpolation in k-space without explicit knowledge of the coil sensitivities. The addition of the image domain prior (H-DSLR in Fig. \ref{fig:mc_knee_florian}.(f)) is found to suppress the artifacts and provide reconstructions that are comparable to the MoDL scheme. The proposed Deep-SLR scheme facilitates the recovery of the images without knowledge of the coil sensitivities. This approach thus eliminates the potential mismatch between the calibration scans used for the estimation of the coil sensitivities and the main scan in approaches that rely on an extra calibration scan.
By removing the need for an explicit calibration region, this approach enables higher acceleration factors. An additional study of the robustness of our proposed approach to acceleration factors for both the knee and brain datasets is presented in the supplementary section of this paper. \begin{figure*}[t!] \centering \includegraphics[width=\textwidth,keepaspectratio=true,trim={1.8cm 9.8cm 1.1cm 10.25cm},clip]{Brain_PMRI_zoomed_images_1_v1.pdf} \caption{Reconstruction results of 10x accelerated 12-channel brain data. SNR (dB)/PSNR (dB)/SSIM values are reported for each case. The under-sampling pattern was chosen to be a 2D Cartesian non-uniform variable-density mask. The top row images are reconstructions (SOS), while the bottom row shows the corresponding error images. The yellow arrows in the zoomed cerebellum region show minute details better preserved by the proposed scheme than by other state-of-the-art methods. The \textbf{K-DSLR} scheme has errors of lower magnitude than the calibration-less k-space methods PSLR and K-UNET. The proposed hybrid scheme \textbf{H-DSLR} performs comparably to the pre-calibrated approach MoDL. We show H-DSLR reconstructions of different slices with different acceleration factors in the supplementary material (see Fig. S4).} \vspace{-1.4em} \label{fig:mc_brain} \end{figure*} \begin{table}[b!]
\fontsize{6}{8} \selectfont \centering \renewcommand{\arraystretch}{0.8} \begin{tabular}{|c|cc|c|} \hline \multicolumn{4}{|c|}{Signal-to-Noise Ratio (SNR)}\\ \hline \multicolumn{3}{|c|}{Brain} & \multicolumn{1}{|c|}{Knee} \\ \hline Acceleration & \multicolumn{1}{c}{6x} & \multicolumn{1}{c}{10x} & \multicolumn{1}{|c|}{4x} \\ Methods & SNR & SNR & SNR \\ \hline PSLR &21.02 $\pm$ 2.33 &18.12 $\pm$ 2.58 & 24.26 $\pm$ 2.12 \\ K-UNET &19.58 $\pm$ 2.01 &17.28 $\pm$ 1.98 & 26.81 $\pm$ 2.05 \\ \textbf{K-DSLR} & 21.58 $\pm$ 1.74 & 18.71 $\pm$ 1.83 & 27.87 $\pm$ 1.36 \\ MoDL &23.30 $\pm$ 1.53 &21.63 $\pm$ 1.62 & 29.77 $\pm$ 1.19 \\ \textbf{H-DSLR} & 24.34 $\pm$ 1.15 & 22.20 $\pm$ 1.23 & 30.57 $\pm$ 0.96 \\ \hline \multicolumn{4}{|c|}{Peak Signal-to-Noise Ratio (PSNR)}\\ \hline \multicolumn{3}{|c|}{Brain} & \multicolumn{1}{|c|}{Knee} \\ \hline Acceleration & \multicolumn{1}{c}{6x} & \multicolumn{1}{c}{10x} & \multicolumn{1}{|c|}{4x} \\ Methods & PSNR & PSNR & PSNR \\ \hline PSLR &31.17 $\pm$ 2.30 &28.21 $\pm$ 2.61 & 28.19 $\pm$ 2.03 \\ K-UNET &29.49 $\pm$ 1.96 &27.14 $\pm$ 1.91 & 30.94 $\pm$ 2.14 \\ \textbf{K-DSLR} & 31.66 $\pm$ 1.77 & 28.55 $\pm$ 1.84 & 31.67 $\pm$ 1.33 \\ MoDL &33.43 $\pm$ 1.51 &31.53 $\pm$ 1.57 & 33.77 $\pm$ 1.19 \\ \textbf{H-DSLR} & 34.46 $\pm$ 1.22 & 32.31 $\pm$ 1.23 & 34.69 $\pm$ 0.98 \\ \hline \multicolumn{4}{|c|}{Structural Similarity (SSIM)}\\\hline \multicolumn{3}{|c|}{Brain} & \multicolumn{1}{|c|}{Knee} \\ \hline Acceleration & \multicolumn{1}{c}{6x} & \multicolumn{1}{c}{10x} & \multicolumn{1}{|c|}{4x} \\ Methods & SSIM & SSIM & SSIM \\ \hline PSLR &0.942 $\pm$ 0.035 &0.918 $\pm$ 0.041 &0.873 $\pm$ 0.027 \\ K-UNET &0.920 $\pm$ 0.029 &0.883 $\pm$ 0.030 &0.887 $\pm$ 0.021 \\ \textbf{K-DSLR} & 0.938 $\pm$ 0.018 & 0.913 $\pm$ 0.023 & 0.904 $\pm$ 0.011 \\ MoDL &0.951 $\pm$ 0.020 &0.921 $\pm$ 0.026 &0.928 $\pm$ 0.015 \\ \textbf{H-DSLR} & 0.958 $\pm$ 0.011 & 0.935 $\pm$ 0.013 & 0.944 $\pm$ 0.008 \\ \hline \end{tabular} \vspace{1em} \caption{Quantitative 
comparison of PSLR, MoDL, proposed, and UNET reconstructions in terms of SNR (dB), PSNR (dB) and SSIM. The bold-faced methods are the proposed ones.} \label{tab:comp_mc} \vspace{-2em} \end{table} \subsection{Benefits over Calibrated Approaches} Pre-calibrated approaches, which estimate coil sensitivities from calibration scans, suffer from motion-induced mismatch between the calibration and main scans, resulting in artifacts. We study the benefit of the uncalibrated deep SLR methods using a simulation. Specifically, we simulate a mismatch by modulating the k-space data of the accelerated scan with a linearly varying phase term, which corresponds to a shift in the image domain. A shift of 5 pixels along both the horizontal and vertical directions was applied to the 2D slices, assuming that minor physical motion during the scan would lead to a similar shift in either direction. We compare the pre-trained MoDL and H-DSLR frameworks on this data, whose results are shown in Fig. \ref{fig:ben_trans_low_window}(a)-(d). Due to the mismatch between the coil images and the corresponding sensitivities, there are visible striped artifacts in the MoDL reconstruction. By contrast, we observe that the proposed hybrid DSLR framework remains unaffected. This simulation study shows the benefit of our proposed method over the calibrated setting in the presence of motion. \begin{figure*}[t!]\vspace{1.5em} \centering \includegraphics[width=0.8\textwidth,keepaspectratio=true,trim={1.7cm 9.6cm 5.7cm 10cm},clip]{proposed_vs_precalibrated_vs_autocalibrated.pdf} \caption{The top row of images (a)-(d) show comparisons of pre-calibrated MoDL with the proposed calibration-less approach during mismatches in scans. A Cartesian 2D 6-fold under-sampling mask in (b) was used for under-sampling the k-space. The acquired k-space measurements were translated in the spatial domain to emulate motion. The MoDL reconstruction shows diagonally striped motion artifacts due to the mismatch. Our proposed scheme remains unaffected. 
The bottom row of images (e)-(h) display comparisons of the proposed approach with self-calibrated MoDL. The mask in (f) is used for under-sampling the k-space data and subsequent reconstruction. It contains 16 fully sampled lines in the center for calibration purposes. The coil sensitivities for MoDL are estimated using ESPIRiT \cite{uecker2014espirit} from the 16 x 16 calibration window at the center of k-space. The performance of self-calibrated MoDL breaks down due to the inaccurate sensitivities estimated from the smaller calibration region. Thus, the requirement of a larger calibration region limits the acceleration. Our proposed scheme does not rely on a calibration region and can therefore push the acceleration further.} \label{fig:ben_trans_low_window} \end{figure*} Self-calibrated approaches do not require an additional calibration scan and hence are not sensitive to the above motion errors. They instead leverage a fully sampled calibration region (center of k-space) to estimate the coil sensitivities. However, this approach restricts the achievable acceleration rates. While the acceleration rate can be increased by reducing the size of the calibration region, the smaller calibration region results in inaccurate sensitivity estimates. The sensitivities were estimated from a 24 x 24 region in our MoDL scheme. We now estimate the sensitivities using ESPIRiT \cite{uecker2014espirit} from a calibration window of 16 x 16. The pre-trained MoDL was tested on the dataset using those estimated sensitivities. As seen in the experimental results in Fig. \ref{fig:ben_trans_low_window}(e)-(h), the inaccurate sensitivities result in several visible artifacts in the MoDL reconstructions. The proposed method does not suffer from these artifacts since it is an uncalibrated scheme and does not rely on the central k-space region to estimate the sensitivities. 
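The motion emulation above rests on the Fourier shift theorem: multiplying k-space by a linearly varying phase translates the image. A minimal NumPy sketch of this step (the image, matrix size, and 5-pixel shift are illustrative, not the actual experimental data):

```python
import numpy as np

# Toy 2D "image" and its k-space representation.
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
ksp = np.fft.fft2(img)

# Linear phase ramp corresponding to a shift of (dy, dx) = (5, 5) pixels.
dy, dx = 5, 5
ky = np.fft.fftfreq(64)[:, None]   # cycles/pixel along rows
kx = np.fft.fftfreq(64)[None, :]   # cycles/pixel along columns
ramp = np.exp(-2j * np.pi * (ky * dy + kx * dx))

# Modulating k-space by the ramp yields a (circularly) shifted image.
shifted = np.fft.ifft2(ksp * ramp).real
assert np.allclose(shifted, np.roll(img, (dy, dx), axis=(0, 1)))
```

Applying such a ramp to the measured k-space of the accelerated scan, while keeping the sensitivities estimated from the unshifted calibration data, reproduces the calibration/main-scan mismatch studied here.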
\subsection{Comparison of the Computational Complexity} A key benefit of the proposed Deep-SLR scheme over SLR methods is the significant reduction in runtime, along with the improved performance offered by the combination with the image domain prior. The recorded runtimes are shown in Table \ref{tab:run_time}. We report runtimes for 10 iterations ($K=10$) of our proposed k-space and hybrid Deep-SLR algorithms, and MoDL. We note that the DL approaches are roughly a few thousand-fold faster than the IRLS-SLR schemes in both cases. As discussed previously, SLR methods estimate the linear projection operator on the fly and require at least 50 iterations to converge. The high complexity of the SVD and the evaluation of the Gram matrix, along with the large number of iterations, is the main reason for the long runtime of the SLR methods. By contrast, the Deep-SLR approaches pre-learn the CNNs from exemplar data, which eliminates the need for \eqref{qup}. The hybrid Deep-SLR approach is slightly slower than k-space Deep-SLR in both cases, since the former uses two CNNs compared to one for the latter, even though the effective number of parameters is the same. In the single-channel setting, although K-UNET and I-UNET have more learnable parameters, these approaches are faster by virtue of using a single iteration rather than the multiple iterations of the proposed schemes. Note that the iterative approach brings improved performance, as discussed in the previous sections. In the parallel MRI setting, the Deep-SLR schemes use five-layer CNNs, which makes them faster than K-UNET even after multiple iterations. We note that the MoDL scheme uses a multi-channel forward model that requires a conjugate gradient (CG) algorithm to enforce DC, which makes it slower than the Deep-SLR schemes. By contrast, the proposed scheme recovers the coil images; the forward model only includes Fourier sampling, which makes these schemes faster in training and testing. \begin{table}[t!] 
\fontsize{6}{8} \selectfont \centering \renewcommand{\arraystretch}{0.8} \begin{tabular}{ccccccc} \hline \multicolumn{6}{c}{Single-channel recovery (minutes per subject)}\\ \hline Organ & GIRAF & K-UNET & \textbf{K-DSLR} & I-UNET & \textbf{H-DSLR}\\ Knee/Brain & 197.33 & 0.07 & 0.32 & 0.07 & 0.37\\ \hline \multicolumn{6}{c}{Parallel MRI recovery (minutes per subject)}\\ \hline Organ &PSLR & K-UNET & \textbf{K-DSLR} & MoDL & \textbf{H-DSLR} \\ Brain &1223 & 0.7 & 0.17 & 0.83 & 0.19 \\ Knee &3106.67 &2.83 & 0.63 &4.40 & 0.75 \\\hline \end{tabular} \vspace{1em} \caption{Comparison of single-channel and parallel MRI reconstruction times. The reported values are average reconstruction times per subject in minutes. The bold-faced methods are the proposed ones.} \label{tab:run_time} \vspace{-2.5em} \end{table} \section{Discussion and Conclusion} We introduced a general model-based DL framework to significantly accelerate SLR matrix-completion algorithms. The key distinction from SLR methods is the pre-learning of the CNN parameters from exemplar data. Since the parameters need not be estimated from the measured data itself, the proposed algorithm is faster by several orders of magnitude. In addition, an image domain prior helps to further improve performance. We showed the utility of the proposed scheme in two representative applications with drastically different lifting structures. In most cases considered in this work, the performance of the k-space network is comparable to or better than that of the corresponding PSLR scheme. The addition of the image domain network further improved performance. The hybrid DSLR outperforms the existing pre-calibrated MoDL scheme in the parallel MRI setting. However, the performance of the k-space DSLR scheme is marginally lower than that of the corresponding SLR scheme in the single-channel brain case. Additional experiments on larger datasets are needed to understand whether this is a consistent observation. 
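The runtime gap reported in Table \ref{tab:run_time} comes from replacing the per-dataset IRLS updates with a fixed number of unrolled steps that alternate a pre-learned network with data consistency. A schematic NumPy sketch of such an unrolled loop under the single-coil Fourier forward model (the denoiser below is an illustrative placeholder, not the trained CNN):

```python
import numpy as np

def data_consistency(ksp_est, ksp_meas, mask):
    """Replace estimated k-space samples by the measured ones."""
    return np.where(mask, ksp_meas, ksp_est)

def denoiser(ksp):
    """Placeholder for the pre-learned network N_k; here a simple
    fixed smoothing filter in k-space, purely for illustration."""
    return 0.5 * ksp + 0.25 * (np.roll(ksp, 1, axis=0) + np.roll(ksp, -1, axis=0))

def unrolled_recon(ksp_meas, mask, K=10):
    """K unrolled iterations alternating the network with data consistency."""
    x = ksp_meas.copy()
    for _ in range(K):
        x = denoiser(x)                          # learned prior (fixed at test time)
        x = data_consistency(x, ksp_meas, mask)  # enforce DC on sampled locations
    return x

# Toy usage: retain roughly half of the k-space samples.
rng = np.random.default_rng(1)
ksp = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
mask = rng.random((32, 32)) < 0.5
rec = unrolled_recon(ksp * mask, mask, K=10)
assert np.allclose(rec[mask], ksp[mask])  # measured samples are preserved
```

Because the forward model here is plain Fourier sampling, the DC step is a pointwise replacement; the multi-coil MoDL forward model instead requires a CG solve, which explains its longer runtime.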
The proposed framework is applicable in theory to a wide range of SLR priors described in earlier work \cite{jacob2020structured}. In this study, we restricted our attention to three representative applications. The applicability of the proposed framework to other problem settings is beyond the scope of this work and will be considered elsewhere. The MSE was used as the loss to train the networks. Since perceptual metrics such as SSIM are related to the MSE in a non-linear fashion, the relative performance of the proposed networks under SSIM may differ from their relative performance under MSE. The training can be changed to use arbitrary loss metrics, including SSIM, which may yield more visually pleasing images than the ones trained using the MSE loss. Most of the experiments in this paper were restricted to scans from the same scanners. More work is needed to determine the method's utility in a multi-scanner and multi-center setting. We have not addressed the design of a sampling scheme that is optimal for this problem. We refer the readers to our recent work that focuses on this aspect \cite{aggarwal2020j}. \bibliographystyle{IEEEtran} \section{Single-channel Signal Recovery} The proposed approaches for single-channel recovery are compared against the state of the art in Table S1. The methods were tested on the single-channel knee (coronal and sagittal views) and brain data mentioned in Section IV-A in the main paper. The mean SNR, PSNR and SSIM values with corresponding standard deviations were calculated for 2560 (10 subjects) brain and 420 (3 subjects) knee (sagittal and coronal) slices, respectively. The coronal knee was 6x under-sampled, while the sagittal knee and the brain were 4x under-sampled. The proposed k-space network K-DSLR outperforms K-UNET (k-space UNET) and I-UNET (image-space UNET). The K-DSLR performance is comparable to that of the calibration-less approach GIRAF, which motivated the proposed scheme. 
The addition of the spatial domain prior in H-DSLR significantly improves performance over GIRAF. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth,keepaspectratio=true,trim={1.7cm 8.2cm 11.4cm 8.8cm},clip]{psf_analysis_supplementary.pdf} \caption{Illustration of linear and non-linear annihilation operators on piecewise constant images. The images and their gradients are shown in (a) and (b) respectively. The SOS-$\mathcal R$ is the sum-of-squares function of the linear annihilation operator $\mathcal R$ from SLR schemes, which kills edges or high-gradient regions as shown in (c). The non-linear extension of SOS-$\mathcal R$ is SOS-$\mathcal N_{\rm k}$; its outputs are shown in (d) and closely match (c). The non-linear operation generalizes across the variety of brain slices shown here.} \label{fig:single_channel_annihilation} \end{figure} \begin{figure*}[h!] \centering \includegraphics[scale=0.9,keepaspectratio=true,trim={0.4cm 8.45cm 2.6cm 9cm},clip]{Knee_single_channel_coronal_zoomed_images.pdf}\hspace*{\fill} \caption{Reconstruction results of 6-fold accelerated single-channel knee data with coronal view. SNR (dB)/ PSNR (dB)/ SSIM values are reported for each case. The top row displays reconstructions (magnitude images) while the bottom row displays corresponding error maps. The yellow arrows point out the differences in the zoomed coronal view of the cartilage region. The proposed schemes outperform state-of-the-art schemes and preserve complex structures better, as pointed out by the arrows.} \label{fig:knee_single_channel_zoomed_coronal} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[scale=0.9,keepaspectratio=true,trim={1.8cm 9.1cm 2.6cm 9.7cm},clip]{Knee_single_channel_sagittal_zoomed_images.pdf} \caption{Reconstruction results of 4-fold accelerated single-channel knee data with sagittal view. SNR (dB)/ PSNR (dB)/ SSIM values are reported for each case. 
The top row displays reconstructions (magnitude images) while the bottom row displays corresponding error maps. The yellow arrows point out the differences in the zoomed sagittal view of the cartilage region. The proposed schemes recover fine details better than the others.} \label{fig:knee_single_channel_zoomed_sagittal} \end{figure*} We compare the reconstruction quality of single-channel coronal and sagittal knee slices in Fig. \ref{fig:knee_single_channel_zoomed_coronal} and Fig. \ref{fig:knee_single_channel_zoomed_sagittal}, respectively. The K-DSLR and H-DSLR reconstructions are from a $K = 10$ iteration model. We observe that the K-DSLR results are comparable to those of the model-based GIRAF. By contrast, our proposed model-based schemes outperform the direct inversion approaches K-UNET and I-UNET. Note that our proposed schemes have a much smaller number of trainable parameters compared to the UNETs. The multiple iterations of the proposed alternating strategy improve the overall performance. The addition of the spatial domain prior in H-DSLR visibly improves reconstruction quality and SNR. The yellow arrows in the zoomed cartilage region point out differences in the preservation of minute structures. K-UNET and I-UNET seem to miss a lot of details, as pointed out by the arrows. Although the GIRAF and K-DSLR reconstructions miss a few details, the H-DSLR scheme preserves them. The H-DSLR reconstructions are better in terms of SNR than the other methods considered here. \begin{table}[h!] 
\fontsize{6}{8} \selectfont \centering \renewcommand{\arraystretch}{0.8} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{Signal-to-Noise Ratio (SNR)}\\ \hline Organ&\multicolumn{1}{|c|}{Knee Coronal} &\multicolumn{1}{|c|}{Knee Sagittal} & \multicolumn{1}{|c|}{Brain (CCP)} \\ \cline{2-4} Acceleration & \multicolumn{1}{c}{6x} & \multicolumn{1}{|c|}{4x} & \multicolumn{1}{|c|}{4x} \\ Methods & SNR & SNR & SNR \\ \hline GIRAF &18.14 $\pm$ 1.58 &19.01 $\pm$ 1.64 & 22.02 $\pm$ 1.36 \\ K-UNET &17.76 $\pm$ 1.29 &18.78 $\pm$ 1.54 & 19.78 $\pm$ 1.47 \\ \textbf{K-DSLR} & 18.69 $\pm$ 1.08 & 19.16 $\pm$ 1.11 & 21.18 $\pm$ 1.01 \\ I-UNET &17.29 $\pm$ 1.77 &18.53 $\pm$ 1.83 & 19.43 $\pm$ 1.59 \\ \textbf{H-DSLR} & 19.31 $\pm$ 1.02 & 22.48 $\pm$ 1.26 & 25.39 $\pm$ 0.84 \\ \hline \multicolumn{4}{|c|}{Peak Signal-to-Noise Ratio (PSNR)}\\ \hline Organ&\multicolumn{1}{|c|}{Knee Coronal} &\multicolumn{1}{|c|}{Knee Sagittal} & \multicolumn{1}{|c|}{Brain (CCP)} \\ \cline{2-4} Acceleration & \multicolumn{1}{c}{6x} & \multicolumn{1}{|c|}{4x} & \multicolumn{1}{|c|}{4x} \\ Methods & PSNR & PSNR & PSNR \\ \hline GIRAF &25.01 $\pm$ 1.55 &27.96 $\pm$ 1.59 & 29.11 $\pm$ 1.41 \\ K-UNET &24.55 $\pm$ 1.34 &27.75 $\pm$ 1.52 & 26.68 $\pm$ 1.43 \\ \textbf{K-DSLR} & 25.64 $\pm$ 1.02 & 28.22 $\pm$ 1.03 & 28.27 $\pm$ 0.97 \\ I-UNET &24.34 $\pm$ 1.79 &27.49 $\pm$ 1.87 & 26.47 $\pm$ 1.63 \\ \textbf{H-DSLR} & 26.46 $\pm$ 0.98 & 31.54 $\pm$ 1.18 & 32.53 $\pm$ 0.87 \\ \hline \multicolumn{4}{|c|}{Structural Similarity (SSIM)}\\ \hline Organ&\multicolumn{1}{|c|}{Knee Coronal} &\multicolumn{1}{|c|}{Knee Sagittal} & \multicolumn{1}{|c|}{Brain (CCP)} \\ \cline{2-4} Acceleration & \multicolumn{1}{c}{6x} & \multicolumn{1}{|c|}{4x} & \multicolumn{1}{|c|}{4x} \\ Methods & SSIM & SSIM & SSIM \\ \hline GIRAF &0.841 $\pm$ 0.031 &0.877 $\pm$ 0.040 &0.915 $\pm$ 0.023 \\ K-UNET &0.830 $\pm$ 0.029 &0.872 $\pm$ 0.032 &0.842 $\pm$ 0.019 \\ \textbf{K-DSLR} & 0.849 $\pm$ 0.019 & 0.878 $\pm$ 0.025 & 0.902 $\pm$ 0.017 \\ I-UNET 
&0.834 $\pm$ 0.028 &0.873 $\pm$ 0.026 &0.838 $\pm$ 0.026 \\ \textbf{H-DSLR} & 0.852 $\pm$ 0.011 & 0.921 $\pm$ 0.013 & 0.929 $\pm$ 0.009 \\ \hline \end{tabular} \vspace{1em} \caption{Quantitative comparison of SLR, Deep-SLR and UNET reconstructions in the context of single-channel recovery. The bold-faced methods are the proposed ones.} \label{tab:comp_sc} \vspace{-2em} \end{table} We show the results of the hypothesis test (described in Section V-B in the main paper) on single-channel brain slices with different anatomies in Fig. \ref{fig:single_channel_annihilation}. The proposed k-space network learns non-linear annihilation relations that can kill edges or high-gradient regions in piecewise constant images. The results in Fig. \ref{fig:single_channel_annihilation} show that the non-linear block $\mathcal N_{\rm k}$ can linearly approximate the annihilation relations, which closely match those learnt by the SLR schemes. Specifically, the SOS-$\mathcal N_{\rm k}$ outputs of the perturbations for each case in Fig. \ref{fig:single_channel_annihilation}.(d) are similar to the SOS-$\mathcal R$ outputs in Fig. \ref{fig:single_channel_annihilation}.(c) from the SLR schemes. Thus, the non-linear annihilation block $\mathcal N_{\rm k}$ generalizes over a variety of brain anatomy slices. \section{Parallel MRI Recovery} We study the robustness of the proposed H-DSLR scheme to acceleration factors for a variety of slices from the test subjects. In Fig. \ref{fig:brain_pmri_var_slices}, we show reconstructions of the multi-channel brain dataset for 4x, 6x and 10x accelerations over several anatomies of a test subject. We train a $K = 10$ iteration H-DSLR network end-to-end with 10x under-sampled brain slices and test it on 4x, 6x and 10x slices from subjects unseen by the network. The dataset is the one mentioned in Section IV-A of the main paper. The reconstructions are of good quality over a range of slices, including the corner ones. 
The brain structure is preserved in all the cases; the 4x and 6x reconstructions have sharper edges compared to 10x. The minute structures in the cerebellum region are better preserved at 4x compared to the other two, which is due to the lower acceleration. The 10x reconstruction loses a few details in the cerebellum region but preserves most of the gray and white matter; 6x preserves all of them but appears slightly blurred compared to 4x. The proposed scheme generalizes well over a variety of unseen brain slices at different acceleration factors. \begin{figure}[h!] \centering \includegraphics[scale=0.95,keepaspectratio=true,trim={1.8cm 5.4cm 11.8cm 5.8cm},clip]{Brain_PMRI_variation_in_slices_hybrid.pdf} \caption{Proposed H-DSLR reconstructions of 12-channel brain data for 4x, 6x and 10x accelerations. The displayed images are sum-of-squares reconstructions. H-DSLR is robust to acceleration factors for different anatomies of the brain.} \label{fig:brain_pmri_var_slices} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.9,keepaspectratio=true,trim={1.9cm 5.2cm 11.3cm 5.6cm},clip]{Florian_Knee_PMRI_variation_in_slices_hybrid.pdf} \caption{Proposed H-DSLR reconstructions of 15-channel knee data for 3x and 4x accelerations. The displayed images are sum-of-squares reconstructions. H-DSLR is robust to acceleration factors for different anatomies of the knee.} \label{fig:knee_pmri_var_slices} \end{figure} We perform a similar study for the multi-channel knee dataset described in Section IV-A in the main paper. In Fig. \ref{fig:knee_pmri_var_slices}, we display 3x and 4x reconstructions from different anatomies of a test subject. As for the brain, we train a $K = 10$ iteration H-DSLR network with 4x under-sampled knee slices and test it on 3x and 4x slices from unseen subjects. The reconstructions appear significantly de-aliased in all cases. The cartilage region has several minute details, which are slightly better preserved in the 3x case. 
Overall, the proposed scheme de-aliases a variety of slices, including the corner ones, at different acceleration factors, which shows its generalizability. The plot in Fig. \ref{fig:ben_itr_plot} shows the effect of increasing the number of iterations $K$ of our proposed scheme for the parallel MRI cases. We observe a similar trend for the 6x under-sampled brain and the 4x under-sampled knee data. The average SNR on the test data improves as we increase the number of iterations. Thus, unrolling the optimization blocks for several iterations is beneficial. Since the performance saturated after the $10^{\rm th}$ iteration, we chose $K = 10$ for the parallel MRI experiments. We also observed the $K = 10$ iteration model to be optimal for the single-channel experiments. We show the intermediate results of the H-DSLR algorithm as a function of iterations in Fig. \ref{fig:ben_itr}. We note that for both the knee and brain test data, aliasing reduces as a function of iterations up to $K = 10$. The reduction in aliasing with more iterations justifies the benefit of unrolling the proposed scheme for more iterations. The reduction in aliasing is largest in the initial iterations and saturates around 9-10 iterations. Thus, visible aliasing is reduced as the number of iterations increases, which improves the SNR. \begin{figure}[h!] \centering \rotatebox{90}{\hspace{20pt}Average SNR (dB) on test data} \includegraphics[scale=0.55,keepaspectratio=true,trim={0.5cm 0.4cm 0 0},clip]{plot_knee_brain_pmri.pdf}\\ \rotatebox{0}{\hspace{20pt} Number of Iterations for training} \caption{Performance improvement in terms of average SNR (dB) on 6x accelerated 12-channel brain and 4x accelerated 15-channel knee test data, respectively. The average testing SNR improves up to $K = 10$ iterations and saturates afterwards.} \label{fig:ben_itr_plot} \end{figure} \begin{figure*}[h!] 
\centering \includegraphics[width=\textwidth,keepaspectratio=true,trim={1.7cm 11.2cm 1.6cm 11.8cm},clip]{benefit_of_iteration_brain_knee_supplementary.pdf} \caption{Proposed H-DSLR reconstructions of 4x under-sampled 15-channel knee and 6x under-sampled 12-channel brain data as a function of the number of iterations $K$, from left to right. The images become more de-aliased as the number of iterations increases. The reported numbers are SNR in dB, which improves with iterations.} \label{fig:ben_itr} \end{figure*} \section{Diffusion MRI recovery} \subsection{Structured Low Rank Algorithms for Multi-shot Echo Planar Imaging (EPI) Acquisition} Multi-shot annihilation relations exist for phase-corrupted images, as shown in \cite{mani2017multi}, in addition to the multi-channel annihilation relations discussed before. The phase-corrupted images $\boldsymbol\gamma_{i}[\mathbf r],\hspace{2pt} i=1\ldots N$ satisfy a pairwise Fourier domain annihilation relation $ \widehat{\boldsymbol \gamma_{i}}[\mathbf{k}]\ast\widehat{\boldsymbol \phi_{j}}[\mathbf{k}]-\widehat{\boldsymbol \gamma_{j}}[\mathbf{k}]\ast\widehat{\boldsymbol \phi_{i}}[\mathbf{k}]=0, \forall \mathbf{k}$, where $\widehat{\boldsymbol \gamma_{i}}[\mathbf{k}]$ and $\widehat{\boldsymbol \phi_{i}}[\mathbf{k}]$ are the Fourier coefficients of $\boldsymbol \gamma_{i}[\mathbf{r}]$ and $\boldsymbol \phi_{i}[\mathbf{r}]$, respectively. The $\boldsymbol \phi_{i}[\mathbf{r}]$ are smooth phase images. The relations for each pair of phase-corrupted images can be compactly written as in (5). Similar to the parallel imaging case, the Hankel matrices corresponding to each shot and channel are stacked horizontally to obtain $\mathcal T(\widehat{\boldsymbol \Gamma})$, which is low rank due to its large null space $\mathbf{N}$. The columns of $\mathbf{N}$ are vertically stacked filters $\widehat{\boldsymbol \phi_{i}}$. In this case, $\mathcal G=\mathcal I$ (identity mapping). 
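The pairwise annihilation relation above can be checked numerically: when every shot is the same underlying image modulated by a shot-dependent phase, $\boldsymbol\gamma_{i}\boldsymbol\phi_{j} = \boldsymbol\gamma_{j}\boldsymbol\phi_{i}$ pointwise, so the convolutions of their Fourier coefficients cancel. A toy NumPy verification (random rather than smooth phases and a small matrix size, purely for illustration):

```python
import numpy as np

def kconv(A, B):
    """Circular convolution of two k-space arrays (up to a constant scale),
    computed as the Fourier transform of the pointwise image-domain product."""
    return np.fft.fft2(np.fft.ifft2(A) * np.fft.ifft2(B))

rng = np.random.default_rng(2)
n = 16
rho = rng.standard_normal((n, n))                    # shared underlying image
phis = [np.exp(1j * rng.standard_normal((n, n)))     # shot-dependent phases (toy)
        for _ in range(2)]
gammas = [rho * p for p in phis]                     # phase-corrupted shot images

G = [np.fft.fft2(g) for g in gammas]                 # Fourier coefficients of shots
P = [np.fft.fft2(p) for p in phis]                   # Fourier coefficients of phases

# Pairwise annihilation: conv(G_i, P_j) - conv(G_j, P_i) = 0 for all k.
residual = kconv(G[0], P[1]) - kconv(G[1], P[0])
assert np.allclose(residual, 0)
```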
This lifting is similar to the parallel imaging case but with different annihilation relations. \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth,keepaspectratio=true,trim={1.7cm 11.3cm 11.25cm 11.6cm},clip]{Brain_diffusion_MRI_2shots.pdf} \caption{2-shot reconstruction results of 4-channel partial Fourier brain diffusion MRI. The IRLS-M (IRLS-MUSSELS) reconstructions were used as ground truth for training. The proposed schemes are compared against MoDL-M (MoDL-MUSSELS). The top row shows sum-of-squares images from the different schemes, while the bottom row shows error maps generated with IRLS-M as ground truth. Both K-DSLR and H-DSLR provide performance comparable to MoDL-M.} \label{fig:diffusion_recon_2shots} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth,keepaspectratio=true,trim={1.7cm 12.6cm 11.25cm 12.7cm},clip]{proposed_vs_calibrated_diffusion_brain.pdf} \caption{Comparison of the calibrated approaches (MoDL-M (MoDL-MUSSELS) and IRLS-M (IRLS-MUSSELS)) with the proposed calibration-less approaches during mismatches in scans. A 2-shot recovery of the 4-channel partial Fourier brain data is shown for comparison. The acquired k-space measurements were translated in the spatial domain to emulate motion. The reconstructions of both calibrated approaches, MoDL-M and IRLS-M, show diagonally striped motion artifacts due to the mismatch. Our proposed schemes (K-DSLR and H-DSLR) remain unaffected.} \label{fig:ben_trans_diffusion} \end{figure} \subsection{CNN Architecture for Diffusion MRI Recovery} For this application we use the MIMO version of the modified 12-layer UNET as $\mathcal N_{\rm k}$. The number of input and output channels is set according to the number of complex channels in the dataset, calculated as $N = N_{sh} \times N_{coil}$, where $N_{sh}$ denotes the number of shots per acquisition and $N_{coil}$ the number of coils used. As in the other applications, we ensure that the number of trainable parameters is the same for both K-DSLR and H-DSLR. 
The regularization parameters were fixed at $\lambda = 1$, $\beta = 1$. We chose a $K = 3$ iteration model based on performance, which saturated with further iterations. All other training parameters were kept similar to the other experiments. \subsection{Data Acquisition} \label{diff_data_acq} For the diffusion MRI experiments, four-shot EPI data of seven healthy subjects were obtained from \cite{aggarwal2019modl}. The subjects were scanned on a 3T scanner using a 32-channel head coil. The number of gradient directions was 60 per slice, with the parameters: FOV = 210 x 210 mm, TE = 84 ms, slice thickness = 4 mm and a matrix size of 256 x 152 with partial Fourier oversampling of 24 lines. The dataset was split into 68 training slices from five subjects, five validation slices from the sixth subject and six testing slices from the seventh subject. Each slice had 60 directions. Similar to \cite{aggarwal2019modl}, the IRLS-MUSSELS (IRLS-M) \cite{mani2019improved} reconstructions were used as ground truth for training and quantitative comparisons. \subsection{State-of-the-art Methods for Comparison} For the diffusion MRI experiments, we compare our proposed scheme with MoDL-MUSSELS \cite{aggarwal2019modl}, which is a non-linear extension of IRLS-MUSSELS. MoDL-MUSSELS (MoDL-M) learns non-linear Fourier domain annihilation relations along with spatial regularization. It is a phase-blind recovery scheme with a pre-calibrated approach that uses coil sensitivity information estimated from calibration scans in addition to the main scan. On the other hand, our proposed method performs a doubly (phase and coil sensitivity) blind recovery that avoids the potential motion artifacts introduced by the sensitivity estimation step. Both methods were tested on the diffusion data described in Section \ref{diff_data_acq}. \subsection{Brain Diffusion MRI Recovery} We performed a 2-shot recovery of brain diffusion MRI with the proposed scheme and compared it against the pre-calibrated MoDL-MUSSELS. 
All the networks were trained with the IRLS-MUSSELS reconstructions as ground truth. The comparisons on one of the slices at a specific direction can be seen in Fig. \ref{fig:diffusion_recon_2shots}, where the error maps are generated by computing absolute differences with IRLS-MUSSELS (ground truth). The H-DSLR and MoDL-MUSSELS reconstructions look sharper at the edges compared to K-DSLR, which can be attributed to the spatial prior leveraged by these schemes. The K-DSLR error map shows some residual errors along the skull region, which are further suppressed by the spatial domain prior in H-DSLR. The proposed reconstructions are comparable to MoDL-MUSSELS, both visually and in terms of the error maps. Note that MoDL-MUSSELS performs a phase-blind recovery by leveraging coil sensitivity information. On the other hand, the proposed schemes are calibration-less and hence perform a doubly blind recovery, which is more challenging. \subsection{Benefit over Calibrated Approaches} Pre-calibrated approaches suffer from motion-induced artifacts due to a mismatch between the calibration and main scans. We demonstrate the benefit of our proposed calibration-less scheme over the pre-calibrated MoDL-MUSSELS and IRLS-MUSSELS through a simulation experiment. Similar to the parallel MRI case, we introduce a mismatch by modulating the Fourier data with a linearly varying phase term, which leads to a shift in the spatial domain. The reconstruction results on a test slice are shown in Fig. \ref{fig:ben_trans_diffusion}. We observe striped artifacts in both the IRLS-MUSSELS and MoDL-MUSSELS reconstructions due to the mismatch between the sensitivities and the coil images, while our proposed calibration-less approaches remain unaffected. This study shows the benefit of our proposed scheme over calibrated approaches in avoiding motion artifacts. \bibliographystyle{IEEEtran}
\chapter*{Appendix: Mathematical concepts for physicists} \label{appendix} In this appendix we list some mathematical concepts which will be used in the main text, for the benefit of readers whose background is in Physics. \section*{Topology} \label{secA.1} \begin{Def} A {\bf topological space} is a set $M$ with a {\bf topology}, that is, a list of the {\bf open subsets} of $M$, satisfying: \begin{enumerate} \item Both $\varnothing$ and $M$ are open; \item Any union of open sets is open; \item Any finite intersection of open sets is open. \end{enumerate} \end{Def} All the usual topological notions can now be defined. For instance, a {\bf closed set} is a set whose complement is open. The {\bf interior} $\operatorname{int} A$ of a subset $A \subset M$ is the largest open set contained in $A$, its {\bf closure} $\overline{A}$ is the smallest closed set containing $A$, and its {\bf boundary} is $\partial A = \overline{A} \setminus \operatorname{int} A$. The main object of topology is the study of limits and continuity. \begin{Def} A sequence $\{ p_n \}$ is said to {\bf converge} to $p \in M$ if for any open set $U \ni p$ there exists $k \in \mathbb{N}$ such that $p_n \in U$ for all $n \geq k$. \end{Def} \begin{Def} A map $f:M \to N$ between two topological spaces is said to be {\bf continuous} if for each open set $U\subset N$ the preimage $f^{-1}(U)$ is an open subset of $M$. A bijection $f$ is called a {\bf homeomorphism} if both $f$ and its inverse $f^{-1}$ are continuous. \end{Def} A system of local coordinates on a manifold is an example of a homeomorphism between the coordinate neighborhood and an open set of $\mathbb{R}^n$. Two fundamental concepts in topology are compactness and connectedness. \begin{Def} A subset $A \subset M$ is said to be {\bf compact} if every cover of $A$ by open sets admits a finite subcover. 
It is said to be {\bf connected} if it is impossible to write $A = (A \cap U) \cup (A \cap V)$ with $U,V$ disjoint open sets and $A \cap U, A \cap V \neq \varnothing$. \end{Def} The following result generalizes the theorems of Weierstrass and Bolzano. \begin{Thm} Continuous maps carry compact sets to compact sets, and connected sets to connected sets. \end{Thm} \section*{Metric spaces} \label{secA.2} \begin{Def} A {\bf metric space} is a set $M$ and a {\bf distance function} $d:M\times M \to [0,+\infty)$ satisfying: \begin{enumerate} \item {\bf Positivity:} $d(p,q) \geq 0$ and $d(p,q)=0$ if and only if $p=q$; \item {\bf Symmetry:} $d(p,q)=d(q,p)$; \item {\bf Triangle inequality:} $d(p,r) \leq d(p,q) + d(q,r)$, \end{enumerate} for all $p,q,r \in M$. \end{Def} The {\bf open ball} with center $p$ and radius $\varepsilon$ is the set \[ B_{\varepsilon}(p)=\{ q \in M \mid d(p,q) < \varepsilon \}. \] Any metric space has a natural topology, whose open sets are unions of open balls. In this topology $p_n \to p$ if and only if $d(p_n,p) \to 0$, $F \subset M$ is closed if and only if every convergent sequence in $F$ has its limit in $F$, and $K \subset M$ is compact if and only if every sequence in $K$ has a sublimit in $K$. A fundamental notion for metric spaces is completeness. \begin{Def} A sequence $\{ p_n \}$ in $M$ is said to be a {\bf Cauchy sequence} if for all $\varepsilon > 0$ there exists $N \in \mathbb{N}$ such that $d(p_n,p_m)<\varepsilon$ for all $m,n>N$. A metric space is said to be {\bf complete} if all its Cauchy sequences converge. \end{Def} In particular, any compact metric space is complete. \section*{Hopf-Rinow theorem} \label{secA.3} \begin{Def} A Riemannian manifold is said to be {\bf geodesically complete} if any geodesic is defined for every value of its parameter. \end{Def} \begin{Def} Let $(M,g)$ be a connected Riemannian manifold and $p,q \in M$.
The {\bf distance} between $p$ and $q$ is defined as \[ d(p,q)=\inf \{ l(\gamma) \mid \gamma \text{ is a smooth curve connecting } p \text{ to } q \}. \] \end{Def} It is easily seen that $(M,d)$ is a metric space. Remarkably, the completeness of this metric space is equivalent to geodesic completeness. \begin{Thm} {\em (Hopf-Rinow)} A connected Riemannian manifold $(M,g)$ is geodesically complete if and only if $(M,d)$ is a complete metric space. \end{Thm} \section*{Differential forms} \label{secA.4} \begin{Def} A {\bf differential form} $\omega$ {\bf of degree} $k$ is simply a completely anti-symmetric $k$-tensor: $\omega_{\alpha_1 \cdots \alpha_k} = \omega_{[\alpha_1 \cdots \alpha_k]}$. \end{Def} For instance, covector fields are differential forms of degree $1$. Differential forms are useful because of their rich algebraic and differential structure. \begin{Def} If $\omega$ is a $k$-form and $\eta$ is an $l$-form then their {\bf exterior product} is the $(k+l)$-form \[ (\omega \wedge \eta)_{\alpha_1 \cdots \alpha_k \beta_1 \cdots \beta_l} = \frac{(k+l)!}{k!\,l!} \omega_{[\alpha_1 \cdots \alpha_k} \eta_{\beta_1 \cdots \beta_l]}, \] and the {\bf exterior derivative} of $\omega$ is the $(k+1)$-form \[ (d\omega)_{\alpha \alpha_1 \cdots \alpha_k} = (k+1) \nabla_{[\alpha} \omega_{\alpha_1 \cdots \alpha_k]}, \] where $\nabla$ is any symmetric connection. \end{Def} It is easy to see that any $k$-form $\omega$ is given in local coordinates by \begin{equation} \label{kform} \omega = \sum_{\alpha_1 < \cdots < \alpha_k} \omega_{\alpha_1 \cdots \alpha_k}(x) dx^{\alpha_1} \wedge \cdots \wedge dx^{\alpha_k}, \end{equation} and therefore has $n \choose k$ independent components on an $n$-dimensional manifold.
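For degree-$1$ forms the antisymmetrization formula is easy to verify directly on components. The following sketch (plain Python; the helper names are ours, chosen for illustration) implements the wedge of two $1$-forms on $\mathbb{R}^3$ and checks graded anticommutativity, $\omega \wedge \eta = -\,\eta \wedge \omega$, and hence $\omega \wedge \omega = 0$:

```python
def wedge(omega, eta):
    """Wedge of two 1-forms given by component lists:
    (omega ^ eta)_{ab} = (2!/1!1!) omega_[a eta_b] = omega_a eta_b - omega_b eta_a."""
    n = len(omega)
    return [[omega[a] * eta[b] - omega[b] * eta[a] for b in range(n)]
            for a in range(n)]

omega = [1.0, 2.0, -1.0]   # components omega_a of a 1-form on R^3
eta = [0.5, 0.0, 3.0]

we, ew = wedge(omega, eta), wedge(eta, omega)
# graded commutativity: for k = l = 1 the forms anticommute
assert all(we[a][b] == -ew[a][b] for a in range(3) for b in range(3))
# consequently omega ^ omega = 0 for any 1-form
assert all(x == 0 for row in wedge(omega, omega) for x in row)
```

Note how the $(k+l)!/k!\,l!$ prefactor in the definition is exactly what reduces, for $k=l=1$, to the familiar $\omega_a\eta_b-\omega_b\eta_a$ with no leftover fraction.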
\begin{Prop} \label{propertiesforms} If $\omega$, $\eta$ and $\theta$ are differential forms then: \begin{enumerate} \item $\omega \wedge (\eta \wedge \theta) = (\omega \wedge \eta) \wedge \theta$; \item $\omega \wedge \eta = (-1)^{(\deg \omega)(\deg \eta)} \eta \wedge \omega$; \item $\omega \wedge (\eta + \theta) = \omega \wedge \eta + \omega \wedge \theta$; \item $d(\omega + \eta) = d \omega + d \eta$; \item $d(\omega \wedge \eta) = d \omega \wedge \eta + (-1)^{\deg \omega} \omega \wedge d \eta$; \item $d^2 \omega = 0$. \end{enumerate} \end{Prop} It is clear from these properties that if the $k$-form $\omega$ is given in local coordinates by \eqref{kform} above then \[ d\omega = \sum_{\alpha_1 < \cdots < \alpha_k} \sum_{\alpha} \partial_\alpha \omega_{\alpha_1 \cdots \alpha_k} dx^{\alpha} \wedge dx^{\alpha_1} \wedge \cdots \wedge dx^{\alpha_k}. \] The last property in Proposition~\ref{propertiesforms} has a converse, known as the {\bf Poincar\'e Lemma}. \begin{Lemma}{\em (Poincar\'e)} If $d \omega = 0$ then locally $\omega = d \eta$. \end{Lemma} A related result is the {\bf Frobenius Theorem}. Here we present a particular case of this result. \begin{Thm}{\em (Frobenius)} The nonvanishing $1$-form $\omega$ is locally orthogonal to a family of hypersurfaces if and only if $\omega \wedge d \omega = 0$. \end{Thm} To prove the easy direction in this equivalence it suffices to note that $\omega$ is locally orthogonal to a family of hypersurfaces $\{ f = \text{constant} \}$ if and only if $\omega = g df$ for some nonvanishing function $g$. Note that in particular this is always true for $1$-forms in $2$-dimensional manifolds. We will now assume that our $n$-dimensional manifold is {\bf oriented}, that is, that an orientation can be, and has been, consistently chosen on every tangent space. Any $n$-form $\omega$ is written in local coordinates as \[ \omega = a(x) dx^1 \wedge \cdots \wedge dx^n. 
\] If the coordinate system is positive, that is, if the coordinate basis $\{ \partial_1, \ldots, \partial_n \}$ has positive orientation at all points, we define \[ \int_U \omega = \int_{x(U)} a(x) dx^1 \ldots dx^n, \] where $U$ is the coordinate neighborhood. This formula does not depend on the choice of local coordinates because $dx^1 \wedge \cdots \wedge dx^n$ transforms by the determinant of the change of variables. \begin{Thm}{\em (Stokes)} If $M$ is an oriented $n$-dimensional manifold with boundary $\partial M$ and $\omega$ is an $(n-1)$-form then \[ \int_M d\omega = \int_{\partial M} \omega. \] \end{Thm} In this theorem the orientations of $M$ and $\partial M$ are related as follows: if $\partial M$ is a level set of $x^1$ in a positive coordinate system and $\partial_1$ points outwards then the coordinate system $(x^2, \ldots, x^n)$ on $\partial M$ is positive. If $M$ has a metric $g$ then its {\bf volume element} is the $n$-form $\epsilon$ which is $1$ when contracted with a positive orthonormal frame. In positive local coordinates we have \[ \epsilon = \sqrt{|\det(g_{\mu\nu})|} \, dx^1 \wedge \cdots \wedge dx^n. \] It is easily seen that $\nabla \epsilon = 0$, where $\nabla$ is the Levi-Civita connection. If $\omega$ is a $k$-form then its {\bf Hodge dual} is the $(n-k)$-form $\star \omega$ given by \[ (\star \omega)_{\beta_1 \cdots \beta_{n-k}} = \frac1{k!} \omega^{\alpha_1 \cdots \alpha_k} \epsilon_{\alpha_1 \cdots \alpha_k \beta_1 \cdots \beta_{n-k}}. \] The operator $\star$, called the Hodge star, can alternatively be defined as follows: if $\{\omega^1, \ldots, \omega^n \}$ is any positively oriented orthonormal coframe (so that the volume element is $\epsilon = \omega^1 \wedge \ldots \wedge \omega^n$) then \begin{align*} \star (\omega^1 \wedge \ldots \wedge \omega^k) & = g(\omega^1, \omega^1) \cdots g(\omega^k, \omega^k) \, \omega^{k+1} \wedge \ldots \wedge \omega^n \\ & = \epsilon((\omega^1)^\sharp, \ldots, (\omega^k)^\sharp, \ldots). 
\end{align*} \section*{Lie derivative} \label{secA.5} A vector field $X$ can be identified with the differential operator that corresponds to taking derivatives along $X$. In local coordinates, this operator is given by \[ X \cdot f = X^\mu \partial_\mu f. \] It turns out that the commutator of two vector fields $X$ and $Y$, regarded as differential operators, is also a vector field: \begin{align*} [X,Y] \cdot f & = X \cdot (Y^\mu \partial_\mu f) - Y \cdot (X^\mu \partial_\mu f) \\ & = (X \cdot Y^\mu) \partial_\mu f + Y^\mu X^\nu \partial_\nu \partial_\mu f - (Y \cdot X^\mu) \partial_\mu f - X^\mu Y^\nu \partial_\nu \partial_\mu f \\ & = (X \cdot Y^\mu - Y \cdot X^\mu) \partial_\mu f. \end{align*} \begin{Def} The {\bf Lie bracket} of two vector fields $X$ and $Y$ is the vector field \[ [X,Y] = (X \cdot Y^\mu - Y \cdot X^\mu) \partial_\mu. \] \end{Def} This operation is intimately related with the exterior derivative. \begin{Prop} If $\omega$ is a $1$-form then \[ d \omega(X,Y) = X \cdot \omega(Y) - Y \cdot \omega(X) - \omega([X,Y]) \] for all vector fields $X$ and $Y$. \end{Prop} \begin{proof} In local coordinates we have \begin{align*} d \omega(X,Y) & = ( \partial_\mu \omega_\nu - \partial_\nu \omega_\mu ) X^\mu Y^\nu = Y^\nu X \cdot \omega_\nu - X^\nu Y \cdot \omega_\nu \\ & = X \cdot (\omega_\nu Y^\nu) - \omega_\nu X \cdot Y^\nu - Y \cdot (\omega_\nu X^\nu) + \omega_\nu Y \cdot X^\nu \\ & = X \cdot \omega(Y) - Y \cdot \omega(X) - \omega_\nu (X \cdot Y^\nu - Y \cdot X^\nu). \end{align*} \end{proof} If the vector field $X$ is nonzero at some point $p$ then there exists a coordinate system defined in a neighborhood of $p$ such that $X = \partial_1$. In fact, we just have to fix local coordinates $(x^2, \ldots, x^n)$ on a hypersurface $\Sigma$ transverse to $X$ at $p$ and let $x^1$ be the parameter for the flow of $X$ starting at $\Sigma$. 
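The coordinate formula for the Lie bracket can be checked numerically by replacing the partial derivatives with central differences. In the sketch below (plain Python; the concrete fields are our choice, not from the text) we take the rotation field $X=-y\,\partial_x+x\,\partial_y$ and the constant field $Y=\partial_x$ on $\mathbb{R}^2$, for which one computes by hand $[X,Y]=-\partial_y$:

```python
def X(p):
    """Rotation field X = -y d/dx + x d/dy on R^2."""
    x, y = p
    return (-y, x)

def Y(p):
    """Constant field Y = d/dx."""
    return (1.0, 0.0)

def directional(V, f, p, h=1e-6):
    """V . f at p, with d_mu f approximated by central differences."""
    x, y = p
    vx, vy = V(p)
    dfx = (f((x + h, y)) - f((x - h, y))) / (2 * h)
    dfy = (f((x, y + h)) - f((x, y - h))) / (2 * h)
    return vx * dfx + vy * dfy

def bracket(X, Y, p):
    """[X,Y]^mu = X . Y^mu - Y . X^mu, componentwise."""
    return tuple(
        directional(X, lambda q, m=mu: Y(q)[m], p)
        - directional(Y, lambda q, m=mu: X(q)[m], p)
        for mu in (0, 1)
    )

b = bracket(X, Y, (0.3, -1.2))
# analytically [X, Y] = -d/dy, i.e. components (0, -1)
assert abs(b[0]) < 1e-6 and abs(b[1] + 1.0) < 1e-6
```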
If $T$ is any tensor, we define its {\bf Lie derivative} along $X$ as the tensor with components \[ (\mathcal{L}_X T)^{\alpha_1 \cdots \alpha_k}_{\beta_1 \ldots \beta_l} = \partial_1 T^{\alpha_1 \cdots \alpha_k}_{\beta_1 \ldots \beta_l}. \] This can be extended to points where $X$ vanishes by continuity. Although this definition seems to depend on the coordinate system, it is actually invariant. To check this, we just have to find an invariant expression for the Lie derivative of functions, vector fields and $1$-forms and then notice that the Leibniz rule applies. \begin{Prop} If $X$ is a vector field then: \begin{enumerate} \item $\mathcal{L}_X f = X \cdot f$ for functions $f$; \item $\mathcal{L}_X Y = [X,Y]$ for vector fields $Y$; \item $\mathcal{L}_X \omega = X \lrcorner \, d \omega + d (X \lrcorner \, \omega)$ for $1$-forms $\omega$ \end{enumerate} (where $\lrcorner \,$ means contraction in the first index). \end{Prop} \begin{proof} The formula for functions is immediate. In the coordinate system where $X = \partial_1$, \[ \mathcal{L}_X Y = \partial_1 Y^\mu \partial_\mu = (X \cdot Y^\mu - Y \cdot X^\mu) \partial_\mu = [X,Y]. \] Finally, we have \begin{align*} (X \lrcorner \, d \omega + d (X \lrcorner \, \omega))(Y) & = d \omega(X,Y) + Y \cdot \omega(X) = X \cdot \omega(Y) - \omega([X,Y]) \\ & = \mathcal{L}_X (\omega(Y)) - \omega(\mathcal{L}_XY) = (\mathcal{L}_X\omega) (Y), \end{align*} where we used the Leibniz rule. This formula is sometimes called {\bf Cartan's magic formula}. \end{proof} \section*{Cartan structure equations} \label{secA.6} Let $\{ E_\mu \}$ be an orthonormal frame, and $\{ \omega^\mu \}$ the corresponding orthonormal coframe, so that \[ \omega^\mu(E_\nu)=\delta^\mu_{\,\,\,\,\nu}. \] Note that the metric can be written as \[ ds^2 = \eta_{\mu\nu} \, \omega^\mu \otimes \omega^\nu, \] where $(\eta_{\mu\nu})=\operatorname{diag}(-1,1,1,1)$ is the flat space metric (which we will use to raise and lower indices).
\begin{Def} The {\bf connection forms} associated to the orthonormal frame $\{ E_\mu \}$ are the $1$-forms $\omega^\mu_{\,\,\,\,\nu}$ such that \[ \nabla_X E_\nu = \omega^\mu_{\,\,\,\,\nu}(X) E_\mu \] for all vector fields $X$. The {\bf curvature forms} associated to this frame are the $2$-forms $\Omega^\mu_{\,\,\,\,\nu}$ such that \[ R(X,Y) E_\nu = \Omega^\mu_{\,\,\,\,\nu}(X,Y) E_\mu \] for all vector fields $X, Y$. \end{Def} Note that the components of the Riemann tensor in the orthonormal frame can be retrieved from the curvature forms by noticing that \[ \Omega^\mu_{\,\,\,\,\nu} = R_{\alpha\beta\,\,\,\,\nu}^{\,\,\,\,\,\,\,\,\mu} \, \omega^\alpha \otimes \omega^\beta = \sum_{\alpha < \beta} R_{\alpha\beta\,\,\,\,\nu}^{\,\,\,\,\,\,\,\,\mu} \, \omega^\alpha \wedge \omega^\beta. \] These forms can be computed by using the so-called {\bf Cartan structure equations}. This is by far the most efficient way to compute the curvature. \begin{Thm} The connection forms are the unique solution of {\bf Cartan's first structure equations} \[ \begin{cases} \omega_{\mu\nu}=-\omega_{\nu\mu}\\ d\omega^\mu + \omega^\mu_{\,\,\,\,\nu} \wedge \omega^\nu = 0 \end{cases}, \] and the curvature forms are given by {\bf Cartan's second structure equations} \[ \Omega^\mu_{\,\,\,\,\nu} = d\omega^\mu_{\,\,\,\,\nu} + \omega^\mu_{\,\,\,\,\alpha} \wedge \omega^\alpha_{\,\,\,\,\nu} \,. \] \end{Thm} \begin{proof} The first condition is equivalent to \[ X \cdot \langle E_\mu, E_\nu \rangle = 0 \] for all vector fields $X$, which in turn is equivalent to the compatibility of the connection with the metric. Using \[ d \omega^\mu (X,Y) = X \cdot \omega^\mu(Y) - Y \cdot \omega^\mu(X) - \omega^\mu([X,Y]) \] for all vector fields $X$ and $Y$, it is easy to see that the second condition is equivalent to \[ [E_\alpha, E_\beta] = \nabla_{E_\alpha} E_\beta - \nabla_{E_\beta} E_\alpha, \] which in turn is equivalent to the symmetry of the connection.
Since the Levi-Civita connection is the only connection which is symmetric and compatible with the metric, we conclude that Cartan's first structure equations have a unique solution. Finally, the third condition can be derived by writing \begin{align*} \Omega^\mu_{\,\,\,\,\nu}(X,Y) E_\mu = R(X,Y) E_\nu = \nabla_X \nabla_Y E_\nu - \nabla_Y \nabla_X E_\nu - \nabla_{[X,Y]} E_\nu \end{align*} in terms of the connection forms. \end{proof} \chapter{Preliminaries} \label{chapter1} In this initial chapter we give a very short introduction to special and general relativity for mathematicians. In particular, we relate the index-free differential geometry notation used in Mathematics (e.g.~\cite{ONeill83, Carmo93, Boothby03, GN14}) to the index notation used in Physics (e.g.~\cite{MTW73, W84, HE95}). As an exercise in index gymnastics, we derive the contracted Bianchi identities. \section{Special relativity} \label{sec1.1} Consider an inertial frame $S'$ moving with velocity $v$ with respect to another inertial frame $S$ along their common $x$-axis (Figure~\ref{inercial}). According to classical mechanics, coordinate $x'$ of a point $P$ on the frame $S'$ is related to its $x$ coordinate on the frame $S$ by \[ x' = x - vt. \] Moreover, a clock in $S'$ initially synchronized with a clock in $S$ is assumed to keep the same time: \[ t' = t. \] Thus the spacetime coordinates of events are related by a so-called {\bf Galileo transformation} \[ \begin{cases} x' = x - vt \\ t' = t \end{cases}. \] \begin{figure}[h!] \begin{center} \psfrag{S}{$S$} \psfrag{S'}{$S'$} \psfrag{y}{$y$} \psfrag{y'}{$y'$} \psfrag{x'}{$x'$} \psfrag{vt}{$vt$} \psfrag{P}{$P$} \epsfxsize=.6\textwidth \leavevmode \epsfbox{inercial.eps} \end{center} \caption{Galileo transformation.} \label{inercial} \end{figure} If the point $P$ is moving, its velocity in $S'$ is related to its velocity in $S$ by \[ \frac{dx'}{dt'} = \frac{dx - vdt}{dt} = \frac{dx}{dt} - v. 
\] This is in conflict with the experimental fact that the speed of light is the same in every inertial frame, indicating that classical mechanics is not correct. Einstein solved this problem in 1905 by replacing the Galileo transformation by the so-called {\bf Lorentz transformation}: \[ \begin{cases} x' = \gamma(x - vt) \\ t' = \gamma(t - vx) \end{cases}. \] Here \[ \gamma = \frac{1}{\sqrt{1-v^2}}, \] and we are using units such that the speed of light is $c=1$ (for example measuring time in years and distance in light-years). Note that if $|v|$ is much smaller than the speed of light, $|v| \ll 1$, then $\gamma \simeq 1$, and we retrieve the Galileo transformation (assuming $\left|v\frac{x}{t}\right| \ll 1$). Under the Lorentz transformation, velocities transform as \[ \frac{dx'}{dt'} = \frac{\gamma(dx - vdt)}{\gamma(dt - vdx)} = \frac{\frac{dx}{dt} - v}{1 - v \frac{dx}{dt}}. \] In particular, \[ \frac{dx}{dt} = 1 \Rightarrow \frac{dx'}{dt'} = \frac{1 - v}{1 - v} = 1, \] that is, the speed of light is the same in the two inertial frames. In 1908, Minkowski noticed that \[ - (dt')^2 + (dx')^2 = - \gamma^2 (dt - vdx)^2 + \gamma^2 (dx - vdt)^2 = - dt^2 + dx^2, \] that is, the Lorentz transformations could be seen as isometries of $\mathbb{R}^4$ with the indefinite metric \[ ds^2 = - dt^2 + dx^2 + dy^2 + dz^2 = - dt \otimes dt + dx \otimes dx + dy \otimes dy + dz \otimes dz. \] \begin{Def} The pseudo-Riemannian manifold $(\mathbb{R}^4, ds^2) \equiv (\mathbb{R}^4, \langle \cdot, \cdot \rangle)$ is called the {\bf Minkowski spacetime}. \end{Def} Note that the set of vectors with zero square forms a cone (the so-called {\bf light cone}): \[ \langle v,v \rangle = 0 \Leftrightarrow -(v^0)^2 + (v^1)^2 + (v^2)^2 + (v^3)^2 = 0. \] \begin{Def} A vector $v\in\mathbb{R}^4$ is said to be: \begin{enumerate} \item {\bf timelike} if $\langle v, v \rangle < 0$; \item {\bf spacelike} if $\langle v, v \rangle > 0$; \item {\bf lightlike}, or {\bf null}, if $\langle v, v \rangle = 0$.
\item {\bf causal} if it is timelike or null; \item {\bf future-pointing} if it is causal and $\left\langle v, \frac{\partial}{\partial t} \right\rangle < 0$. \end{enumerate} The same classification applies to (smooth) curves $c:[a,b] \to \mathbb{R}^4$ according to their tangent vectors. \end{Def} \begin{figure}[h!] \begin{center} \psfrag{p}{$p$} \psfrag{null}{null vector} \psfrag{timelike future-pointing}{timelike future-pointing vector} \psfrag{spacelike}{spacelike vector} \psfrag{dt}{$\frac{\partial}{\partial t}$} \psfrag{dx}{$\frac{\partial}{\partial x}$} \psfrag{dy}{$\frac{\partial}{\partial y}$} \epsfxsize=.8\textwidth \leavevmode \epsfbox{cone.eps} \end{center} \caption{Minkowski geometry (traditionally represented with the $t$-axis pointing upwards).} \end{figure} The length $|\langle v, v \rangle|^\frac12$ of a timelike (resp.~spacelike) vector $v \in \mathbb{R}^4$ represents the time (resp.~distance) measured between two events $p$ and $p + v$ in the inertial frame where these events happen in the same location (resp.~are simultaneous). If $c:[a,b] \to \mathbb{R}^4$ is a timelike curve then its length \[ \tau(c) = \int_a^b |\langle \dot{c}(s), \dot{c}(s) \rangle|^\frac12 ds \] represents the {\bf proper time} measured by the particle between events $c(a)$ and $c(b)$. We have: \begin{Prop} {\em (Twin paradox)} Of all timelike curves connecting two events $p, q \in \mathbb{R}^4$, the curve with {\bf maximal} length is the line segment (representing inertial motion). \end{Prop} \begin{proof} We may assume $p=(0,0,0,0)$ and $q=(T,0,0,0)$ in some inertial frame, and parameterize any timelike curve connecting $p$ to $q$ by the time coordinate: \[ c(t)=(t,x(t),y(t),z(t)). \] Therefore \[ \tau(c) = \int_0^T \left|-1+\dot{x}^2+\dot{y}^2+\dot{z}^2\right|^\frac12 dt = \int_0^T \left(1-\dot{x}^2-\dot{y}^2-\dot{z}^2\right)^\frac12 dt \leq \int_0^T 1 dt = T.
\] \end{proof} Most problems in special relativity can be recast as questions about the geometry of the Minkowski spacetime. \begin{Prop} {\em (Doppler effect)} An observer moving with velocity $v$ away from a source of light of period $T$ measures the period to be \[ T' = T \sqrt{\frac{1+v}{1-v}} . \] \end{Prop} \begin{proof} Figure~\ref{Doppler} represents two light signals emitted by an observer at rest at $x=0$ with a time difference $T$. These signals are detected by an observer moving with velocity $v$, who measures a time difference $T'$ between them. Now, if the first signal is emitted at $t=t_0$, its history is the line $t = t_0 + x$. Consequently, the moving observer detects the signal at the event with coordinates \[ \begin{cases} t = t_0 + x \\ \\ \displaystyle x = vt \end{cases} \Leftrightarrow \begin{cases} \displaystyle t = \frac{t_0}{1-v} \\ \\ \displaystyle x = \frac{vt_0}{1-v} \end{cases}. \] \begin{figure}[h!] \begin{center} \psfrag{t}{$t$} \psfrag{x}{$x$} \psfrag{x=vt}{$x=vt$} \psfrag{T}{$T$} \psfrag{T'}{$T'$} \epsfxsize=.5\textwidth \leavevmode \epsfbox{Doppler.eps} \end{center} \caption{Doppler effect.} \label{Doppler} \end{figure} Similarly, the second light signal is emitted at $t=t_0 + T$, its history is the line $t = t_0 + T + x$, and it is detected by the moving observer at the event with coordinates \[ \begin{cases} \displaystyle t = \frac{t_0 + T}{1-v} \\ \\ \displaystyle x = \frac{v (t_0 + T)}{1-v} \end{cases}. \] Therefore the time difference between the signals as measured by the moving observer is \begin{align*} \hspace{2cm} T' & = \sqrt{\left(\frac{t_0 + T}{1-v} - \frac{t_0}{1-v} \right)^2 - \left(\frac{v (t_0 + T)}{1-v} - \frac{v t_0}{1-v} \right)^2} \\ & = \sqrt{\frac{T^2}{(1-v)^2} - \frac{v^2 T^2}{(1-v)^2}} = T \sqrt{\frac{1-v^2}{(1-v)^2}} = T \sqrt{\frac{1+v}{1-v}}. \end{align*} \end{proof} In particular, two observers at rest in an inertial frame measure the same frequency for a light signal (Figure~\ref{desvio}). 
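The Doppler computation above can be double-checked numerically: construct the two detection events on the moving observer's worldline and measure their Minkowski interval. A minimal sketch in plain Python (the function names are ours, not from the text):

```python
from math import sqrt

def detection_event(t0, v):
    """Event where the observer moving with velocity v detects a
    light signal emitted at x = 0 at time t0 (signal history: t = t0 + x)."""
    t = t0 / (1 - v)
    return (t, v * t)

def interval(p, q):
    """Minkowski length |<q - p, q - p>|^(1/2) for a timelike separation."""
    dt, dx = q[0] - p[0], q[1] - p[1]
    return sqrt(-(-dt * dt + dx * dx))

v, T, t0 = 0.6, 1.0, 2.0
p = detection_event(t0, v)
q = detection_event(t0 + T, v)
T_measured = interval(p, q)
T_formula = T * sqrt((1 + v) / (1 - v))
assert abs(T_measured - T_formula) < 1e-9
# for v = 0.6 the redshift factor is sqrt(1.6/0.4) = 2
assert abs(T_formula - 2.0) < 1e-9
```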
However, because the gravitational field couples to all forms of energy (as $E=mc^2$), one expects a photon climbing in a gravitational field to lose energy, and hence frequency. In 1912, Einstein realized that this could be modelled by considering curved spacetime geometries, so that equal line segments in a (flat) spacetime diagram do not necessarily correspond to the same length. \begin{figure}[h!] \begin{center} \psfrag{t}{$t$} \psfrag{x}{$x$} \psfrag{T}{$T$} \psfrag{T'}{$T'=T$} \epsfxsize=.4\textwidth \leavevmode \epsfbox{desvio.eps} \end{center} \caption{Minkowski geometry is incompatible with the gravitational redshift.} \label{desvio} \end{figure} \section{Differential geometry: Mathematicians vs physicists} \label{sec1.2} Einstein's idea to incorporate gravitation into relativity was to replace the Minkowski spacetime $(\mathbb{R}^4, \langle \cdot, \cdot \rangle)$ by a curved four-dimensional {\bf Lorentzian manifold} $(M,g)\equiv(M, \langle \cdot, \cdot \rangle)$. Here $g$ is a {\bf Lorentzian metric}, that is, a symmetric $2$-tensor field such that at each tangent space $g = \operatorname{diag}(-1,1,1,1)$ in an appropriate basis. Just like in Riemannian geometry, $g$ determines a {\bf Levi-Civita connection}, the unique connection $\nabla$ which is symmetric and compatible with $g$: \begin{align*} & \nabla_X Y - \nabla_Y X = [X,Y]; \\ & X\cdot\langle Y, Z \rangle = \langle \nabla_X Y, Z \rangle + \langle Y, \nabla_X Z \rangle, \end{align*} for all vector fields $X,Y,Z$. The {\bf curvature} of this connection is then given by the operator \[ R(X,Y)Z = \nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z - \nabla_{[X,Y]} Z. \] The formulas above were written using the abstract notation usually employed by mathematicians.
It is often very useful (especially when dealing with contractions) to use the more explicit notation usually adopted by physicists, which emphasizes the indices of the various objects when written in local coordinates: \begin{table}[h] \begin{tabular}{ccccc} {\bf Object} & & {\bf Mathematicians} & & {\bf Physicists} \\ Vector field & & $X$ & & $X^\mu$ \\ Tensor product & & $X \otimes Y$ & & $X^\mu Y^\nu$ \\ Metric & & $g \equiv \langle \cdot, \cdot \rangle$ & & $g_{\mu\nu}$ \\ Inner product & & $g(X,Y) \equiv \langle X, Y \rangle$ & & $g_{\mu\nu} X^\mu Y^\nu$ \\ Associated covector & & $X^\sharp \equiv g(X, \cdot)$ & & $X_\nu \equiv g_{\mu\nu} X^\mu$ \\ Covariant derivative & & $\nabla_X Y$ & & $X^\mu \nabla_\mu Y^\nu$ \\ Covariant derivative tensor & & $\nabla X$ & & $\nabla_\mu X^\nu \equiv \partial_\mu X^\nu + \Gamma^\nu_{\mu\alpha} X^\alpha$ \end{tabular} \end{table} Here $\Gamma^\alpha_{\mu\nu}$ are the {\bf Christoffel symbols} of the Levi-Civita connection; they can be computed from the components $g_{\mu\nu}$ of the metric tensor by the formula \[ \Gamma^\alpha_{\mu\nu} = \frac12 g^{\alpha\beta} \left( \partial_\mu g_{\nu\beta} + \partial_\nu g_{\mu\beta} - \partial_\beta g_{\mu\nu} \right), \] and in turn be used to compute the components of the Riemann curvature tensor: \[ R_{\alpha\beta\,\,\,\,\nu}^{\,\,\,\,\,\,\,\,\mu} = dx^\mu \left( R(\partial_\alpha,\partial_\beta) \partial_\nu \right) = \partial_\alpha \Gamma^\mu_{\beta\nu} - \partial_\beta \Gamma^\mu_{\alpha\nu} + \Gamma^\mu_{\alpha\gamma} \Gamma^\gamma_{\beta\nu} - \Gamma^\mu_{\beta\gamma} \Gamma^\gamma_{\alpha\nu} . \] The covariant derivative tensor of a vector field $X$, not always emphasized in differential geometry courses for mathematicians, is simply the $(1,1)$-tensor field defined by \[ \nabla X (Y) = \nabla_Y X. 
\] Also not always emphasized in differential geometry courses for mathematicians is the fact that any connection can be naturally extended to act on tensor fields (via the Leibniz rule). For instance, if $\omega$ is a covector field ($1$-form) then one defines \[ (\nabla_X \omega) (Y) = X \cdot [\omega(Y)] - \omega(\nabla_X Y). \] In local coordinates, this is \begin{align*} (X^\mu \nabla_\mu \omega_\nu) Y^\nu & = X^\mu \partial_\mu (\omega_\nu Y^\nu) - \omega_\nu (X^\mu \nabla_\mu Y^\nu) \\ & = X^\mu (\partial_\mu \omega_\nu) Y^\nu + X^\mu \omega_\nu \partial_\mu Y^\nu - \omega_\nu (X^\mu \partial_\mu Y^\nu + X^\mu \Gamma_{\mu\alpha}^\nu Y^\alpha) \\ & = (\partial_\mu \omega_\nu - \Gamma_{\mu\nu}^\alpha \omega_\alpha) X^\mu Y^\nu, \end{align*} that is, \[ \nabla_\mu \omega_\nu = \partial_\mu \omega_\nu - \Gamma_{\mu\nu}^\alpha \omega_\alpha. \] The generalization for higher-rank tensors is obvious: for instance, if $T$ is a $(2,1)$-tensor then \[ \nabla_\alpha T^\beta_{\mu\nu} = \partial_\alpha T^\beta_{\mu\nu} + \Gamma_{\alpha\gamma}^\beta T^\gamma_{\mu\nu} - \Gamma_{\alpha\mu}^\gamma T^\beta_{\gamma\nu} - \Gamma_{\alpha\nu}^\gamma T^\beta_{\mu\gamma}. \] Note that the condition of compatibility of the Levi-Civita connection with the metric is simply \[ \nabla g = 0. \] In particular, the operations of raising and lowering indices commute with covariant differentiation. As an exercise in index gymnastics, we will now derive a series of identities involving the Riemann curvature tensor.
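Before turning to the curvature identities, note that the Christoffel formula of the previous section lends itself to a direct numerical check. The sketch below (plain Python; the names and the test metric are our choice) evaluates $\Gamma^\alpha_{\mu\nu} = \frac12 g^{\alpha\beta}(\partial_\mu g_{\nu\beta} + \partial_\nu g_{\mu\beta} - \partial_\beta g_{\mu\nu})$ by central differences for the round metric on the $2$-sphere, $g = d\theta^2 + \sin^2\theta\, d\varphi^2$, and compares with the well-known closed forms $\Gamma^\theta_{\varphi\varphi} = -\sin\theta\cos\theta$ and $\Gamma^\varphi_{\theta\varphi} = \cot\theta$:

```python
from math import sin, cos

# coordinate indices: 0 = theta, 1 = phi

def metric(p):
    """Round metric on the 2-sphere in (theta, phi) coordinates."""
    theta = p[0]
    return [[1.0, 0.0], [0.0, sin(theta) ** 2]]

def inverse_metric(p):
    theta = p[0]
    return [[1.0, 0.0], [0.0, 1.0 / sin(theta) ** 2]]

def d_metric(mu, p, h=1e-5):
    """Central difference of the metric components along x^mu."""
    fwd = list(p); fwd[mu] += h
    bwd = list(p); bwd[mu] -= h
    gf, gb = metric(fwd), metric(bwd)
    return [[(gf[i][j] - gb[i][j]) / (2 * h) for j in range(2)]
            for i in range(2)]

def christoffel(a, mu, nu, p):
    """Gamma^a_{mu nu} = (1/2) g^{ab} (d_mu g_{nu b} + d_nu g_{mu b} - d_b g_{mu nu})."""
    ginv = inverse_metric(p)
    dg = [d_metric(m, p) for m in range(2)]   # dg[m][i][j] = d_m g_{ij}
    return 0.5 * sum(
        ginv[a][b] * (dg[mu][nu][b] + dg[nu][mu][b] - dg[b][mu][nu])
        for b in range(2)
    )

p = (0.7, 0.3)
theta = p[0]
assert abs(christoffel(0, 1, 1, p) + sin(theta) * cos(theta)) < 1e-6
assert abs(christoffel(1, 0, 1, p) - cos(theta) / sin(theta)) < 1e-6
```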
We start by rewriting its definition in the notation of the physicists: \begin{align*} & R_{\alpha\beta\,\,\,\,\nu}^{\,\,\,\,\,\,\,\,\mu} X^\alpha Y^\beta Z^\nu = \\ & = X^\alpha \nabla_\alpha (Y^\beta \nabla_\beta Z^\mu) - Y^\alpha \nabla_\alpha (X^\beta \nabla_\beta Z^\mu) - (X^\alpha \nabla_\alpha Y^\beta - Y^\alpha \nabla_\alpha X^\beta) \nabla_\beta Z^\mu \\ & = (X^\alpha \nabla_\alpha Y^\beta) (\nabla_\beta Z^\mu) + X^\alpha Y^\beta \nabla_\alpha \nabla_\beta Z^\mu - (Y^\alpha \nabla_\alpha X^\beta) (\nabla_\beta Z^\mu) \\ & \,\,\,\, - Y^\alpha X^\beta \nabla_\alpha \nabla_\beta Z^\mu - (X^\alpha \nabla_\alpha Y^\beta) \nabla_\beta Z^\mu + (Y^\alpha \nabla_\alpha X^\beta) \nabla_\beta Z^\mu \\ & = X^\alpha Y^\beta (\nabla_\alpha \nabla_\beta - \nabla_\beta \nabla_\alpha) Z^\mu. \end{align*} In other words, \[ R_{\alpha\beta\,\,\,\,\nu}^{\,\,\,\,\,\,\,\,\mu} Z^\nu = (\nabla_\alpha \nabla_\beta - \nabla_\beta \nabla_\alpha) Z^\mu, \] or, equivalently, \begin{equation} \label{commutator} 2\nabla_{[\alpha} \nabla_{\beta]} Z_\mu = R_{\alpha\beta\mu\nu} Z^\nu, \end{equation} where the square brackets indicate anti-symmetrization\footnote{Thus $T_{[\alpha\beta]}=\frac12\left(T_{\alpha\beta}-T_{\beta\alpha}\right)$, $T_{[\alpha\beta\gamma]}=\frac16\left(T_{\alpha\beta\gamma} + T_{\beta\gamma\alpha} + T_{\gamma\alpha\beta} - T_{\beta\alpha\gamma} - T_{\alpha\gamma\beta} - T_{\gamma\beta\alpha}\right)$, etc.}. This is readily generalized for arbitrary tensors: from \begin{align*} 2\nabla_{[\alpha} \nabla_{\beta]} (Z_\mu W_\nu) & = (2\nabla_{[\alpha} \nabla_{\beta]} Z_\mu) W_\nu + (2\nabla_{[\alpha} \nabla_{\beta]} W_\nu) Z_\mu \\ & = R_{\alpha\beta\mu\sigma} Z^\sigma W_\nu + R_{\alpha\beta\nu\sigma} W^\sigma Z_\mu \end{align*} one readily concludes that \[ 2\nabla_{[\alpha} \nabla_{\beta]} T_{\mu\nu} = R_{\alpha\beta\mu\sigma} T^\sigma_{\,\,\,\,\nu} + R_{\alpha\beta\nu\sigma} T_{\mu}^{\,\,\,\,\sigma}. 
\] Let us choose \[ Z_\mu = \nabla_\mu f \equiv \partial_\mu f \] in equation~\eqref{commutator}. We obtain \[ R_{[\alpha\beta\mu]\nu} Z^\nu = 2 \nabla_{[[\alpha} \nabla_{\beta]} Z_{\mu]} = 2 \nabla_{[\alpha} \nabla_{[\beta} Z_{\mu]]} = 0, \] because \[ \nabla_{[\mu} Z_{\nu]} = \partial_{[\mu} Z_{\nu]} - \Gamma^\alpha_{[\mu\nu]} Z_\alpha = \partial_{[\mu} \partial_{\nu]} f = 0. \] Since we can choose $Z$ arbitrarily at a given point, it follows that \[ R_{[\alpha\beta\mu]\nu} = 0 \Leftrightarrow R_{\alpha\beta\mu\nu} + R_{\beta\mu\alpha\nu} + R_{\mu\alpha\beta\nu} = 0. \] This is the so-called {\bf first Bianchi identity}, and is key for obtaining the full set of symmetries of the Riemann curvature tensor: \[ R_{\alpha\beta\mu\nu} = - R_{\beta\alpha\mu\nu} = - R_{\alpha\beta\nu\mu} = R_{\mu\nu\alpha\beta}. \] In the notation of the mathematicians, it is written as \[ R(X,Y)Z + R(Y,Z) X + R(Z,X) Y = 0 \] for all vector fields $X,Y,Z$. Let us now take the covariant derivative of equation~\eqref{commutator}: \[ \nabla_\gamma R_{\alpha\beta\mu\nu} Z^\nu + R_{\alpha\beta\mu\nu} \nabla_\gamma Z^\nu = 2 \nabla_\gamma \nabla_{[\alpha} \nabla_{\beta]} Z_\mu. \] At any given point we can choose $Z$ such that \[ \nabla_\gamma Z^\nu \equiv \partial_\gamma Z^\nu + \Gamma^\nu_{\gamma\delta} Z^\delta = 0. \] Assuming this, we then obtain\footnote{In the formula below the indices between vertical bars are not anti-symmetrized.} \begin{align*} \nabla_{[\gamma} R_{\alpha\beta]\mu\nu} Z^\nu & = 2 \nabla_{[\gamma} \nabla_{[\alpha} \nabla_{\beta]]} Z_\mu = 2 \nabla_{[[\gamma} \nabla_{\alpha]} \nabla_{\beta]} Z_\mu \\ & = R_{[\gamma\alpha\beta]\delta} \nabla^\delta Z_\mu + R_{[\gamma\alpha|\mu\delta|} \nabla_{\beta]} Z^\delta = 0. 
\end{align*} Since we can choose $Z$ arbitrarily at a given point, it follows that \begin{equation} \label{second} \nabla_{[\alpha}R_{\beta\gamma]\mu\nu} = 0 \Leftrightarrow \nabla_{\alpha}R_{\beta\gamma\mu\nu} + \nabla_{\beta}R_{\gamma\alpha\mu\nu} + \nabla_{\gamma}R_{\alpha\beta\mu\nu} = 0 \end{equation} This is the so-called {\bf second Bianchi identity}. In the notation of the mathematicians, it is written as \[ \nabla R (X,Y,Z,\cdot,\cdot) + \nabla R (Y,Z,X,\cdot,\cdot) + \nabla R (Z,X,Y,\cdot,\cdot) = 0 \] for all vector fields $X,Y,Z$. Recall that the Riemann curvature tensor has only one independent contraction, called the {\bf Ricci tensor}: \[ R_{\mu\nu} = R_{\alpha\mu\,\,\,\,\nu}^{\,\,\,\,\,\,\,\,\alpha}. \] The trace of the Ricci tensor, in turn, is known as the {\bf scalar curvature}: \[ R = g^{\mu\nu} R_{\mu\nu}. \] These quantities satisfy the so-called {\bf contracted Bianchi identity}, which is obtained from \eqref{second} by contracting the pairs of indices $(\beta,\mu)$ and $(\gamma,\nu)$: \[ \nabla_\alpha R - \nabla^\beta R_{\alpha\beta} - \nabla^\gamma R_{\alpha\gamma} = 0 \Leftrightarrow \nabla^\beta R_{\alpha\beta} - \frac12 \nabla_\alpha R = 0 \Leftrightarrow \nabla^\beta \left(R_{\alpha\beta} - \frac12 R g_{\alpha\beta}\right) = 0. \] The contracted Bianchi identity is equivalent to the statement that the {\bf Einstein tensor} \[ G_{\mu\nu} = R_{\mu\nu} - \frac12 R g_{\mu\nu} \] is divergenceless: \[ \nabla^\mu G_{\mu\nu} = 0. \] \section{General relativity} \label{sec1.3} Newtonian gravity is described by a scalar function $\phi$, called the {\bf gravitational potential}. The equation of motion for a free-falling particle of mass $m$ in Cartesian coordinates is \[ m \frac{d^2x^i}{dt^2} = - m \partial_i \phi \Leftrightarrow \frac{d^2x^i}{dt^2} = - \partial_i \phi. \] Note that all free-falling particles describe the same trajectories (an observation dating back to Galileo). 
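The Newtonian equation of motion is simple enough to integrate numerically. As an illustration (a minimal sketch in plain Python; the point-mass potential $\phi=-M/r$ and the leapfrog integrator are our choices, not prescribed by the text), a circular orbit of radius $1$ around a unit mass is propagated through one Keplerian period $2\pi$ and returns to its starting point:

```python
from math import hypot, pi

def acceleration(x, y, M=1.0):
    """a^i = -d_i phi for the point-mass potential phi = -M/r."""
    r = hypot(x, y)
    return (-M * x / r ** 3, -M * y / r ** 3)

def leapfrog_orbit(x, y, vx, vy, t_end, dt=1e-4):
    """Kick-drift-kick leapfrog for d^2 x^i / dt^2 = -d_i phi."""
    steps = int(round(t_end / dt))
    ax, ay = acceleration(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
        x += dt * vx; y += dt * vy
        ax, ay = acceleration(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    return x, y

# circular orbit: r = 1, speed sqrt(M/r) = 1, period 2*pi
xf, yf = leapfrog_orbit(1.0, 0.0, 0.0, 1.0, 2 * pi)
assert hypot(xf - 1.0, yf) < 1e-4
```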
The gravitational potential is determined from the matter mass density $\rho$ by the {\bf Poisson equation} \[ \Delta \phi = 4 \pi \rho \] (using units such that Newton's gravitational constant is $G=1$; this choice, together with $c=1$, defines the so-called {\bf geometrized units}, where lengths, time intervals and masses all have the same dimensions). To implement his idea of describing gravity via a curved four-dimensional Lorentzian manifold $(M,g)$, Einstein had to specify $(i)$ how free-falling particles would move on this manifold, and $(ii)$ how to determine the curved metric $g$. Since free particles move along straight lines in the Minkowski spacetime, Einstein proposed that free falling particles should move along timelike geodesics. In other words, he suggested replacing the Newtonian equation of motion by the geodesic equation \[ \ddot{x}^\mu + \Gamma^\mu_{\alpha\beta}\dot{x}^\alpha\dot{x}^\beta = 0. \] Moreover, Einstein knew that it is possible to define the {\bf energy-momentum tensor} $T_{\mu\nu}$ of the matter content of the Minkowski spacetime, so that the conservation of energy and momentum is equivalent to the vanishing of its divergence: \[ \nabla^\mu T_{\mu\nu} = 0. \] This inspired Einstein to propose that $g$ should satisfy the so-called {\bf Einstein field equations}: \[ G_{\mu\nu} + \Lambda g_{\mu\nu} = 8 \pi T_{\mu\nu}. \] Here $\Lambda$ is a constant, known as the {\bf cosmological constant}. Note that the Einstein field equations imply, via the contracted Bianchi identity, that the energy-momentum tensor is divergenceless. As a simple example, we consider a pressureless perfect fluid, known as {\bf dust}. Its energy-momentum tensor is \[ T_{\mu\nu} = \rho U_\mu U_\nu, \] where $\rho$ is the dust rest density and $U$ is a unit timelike vector field tangent to the histories of the dust particles. 
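The algebraic content of the dust tensor is easy to check numerically: for any unit timelike $U$, the energy density measured by a comoving observer is $T_{\mu\nu}U^\mu U^\nu = \rho$. A sketch in plain Python (helper names are ours), using a boosted $U$ in Minkowski space:

```python
from math import sqrt

ETA = [[-1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]

def lower(v):
    """v_mu = eta_{mu nu} v^nu."""
    return [sum(ETA[m][n] * v[n] for n in range(4)) for m in range(4)]

def dust_T(rho, U):
    """T_{mu nu} = rho U_mu U_nu with indices lowered by eta."""
    Ul = lower(U)
    return [[rho * Ul[m] * Ul[n] for n in range(4)] for m in range(4)]

v = 0.8
gamma = 1.0 / sqrt(1.0 - v * v)
U = [gamma, gamma * v, 0.0, 0.0]        # boosted unit timelike 4-velocity
rho = 2.0
T = dust_T(rho, U)

# U is unit timelike: <U, U> = -1
assert abs(sum(lower(U)[m] * U[m] for m in range(4)) + 1.0) < 1e-9
# energy density measured by the comoving observer: T(U, U) = rho
TUU = sum(T[m][n] * U[m] * U[n] for m in range(4) for n in range(4))
assert abs(TUU - rho) < 1e-9
```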
The equations of motion for the dust can be found from \begin{align*} \nabla^\mu T_{\mu\nu} = 0 & \Leftrightarrow \left[ \nabla^\mu (\rho U_\mu) \right] U_\nu + \rho U_\mu \nabla^\mu U_\nu = 0 \\ & \Leftrightarrow \operatorname{div} (\rho U) U + \rho \nabla_U U = 0. \end{align*} Since $U$ and $\nabla_UU$ are orthogonal (because $\langle U, U \rangle = -1$), we find \[ \begin{cases} \operatorname{div} (\rho U) = 0 \\ \nabla_U U = 0 \end{cases} \] in the support of $\rho$. These are, respectively, the equation of conservation of mass and the geodesic equation. Thus the fact that free-falling particles move along geodesics can be seen as a consequence of the Einstein field equations (at least in this model). \section{Exercises} \label{sec1.4} \begin{enumerate} \item {\bf Twin paradox:} Two twins, Alice and Bob, are separated on their $20^\text{th}$ birthday. While Alice remains on Earth (which is an inertial frame to a very good approximation), Bob departs at $80\%$ of the speed of light towards Planet X, $8$ light-years away from Earth. Therefore Bob reaches his destination $10$ years later (as measured in the Earth's frame). After a short stay, he returns to Earth, again at $80\%$ of the speed of light. Consequently Alice is $40$ years old when she sees Bob again. \begin{enumerate} \item How old is Bob when they meet again? \item How can the asymmetry in the twins' ages be explained? Notice that from Bob's point of view he is at rest in his spaceship and it is the Earth which moves away and then back again. \item Imagine that each twin watches the other through a very powerful telescope. What do they see? In particular, how much time do they experience as they see one year elapse for their twin? 
\end{enumerate} \item A particularly simple matter model is that of a smooth {\bf massless scalar field} $\phi:M\to \mathbb{R}$, whose energy-momentum tensor is \[ T_{\mu\nu} = \partial_\mu \phi \, \partial_\nu \phi - \frac12 (\partial_\alpha \phi \, \partial^\alpha \phi) g_{\mu\nu}. \] Show that if the Lorentzian manifold $(M,g)$ satisfies the Einstein equations with this matter model then $\phi$ satisfies the {\bf wave equation} \[ \Box \, \phi = 0 \Leftrightarrow \nabla^\mu \partial_\mu \phi = 0. \] \item The energy-momentum tensor for a {\bf perfect fluid} is \[ T_{\mu\nu} = (\rho+p) U_\mu U_\nu + p g_{\mu\nu}, \] where $\rho$ is the fluid's rest density, $p$ is the fluid's rest pressure, and $U$ is a unit timelike vector field tangent to the histories of the fluid particles. Show that: \begin{enumerate} \item $\left( T_{\mu\nu} \right) = \operatorname{diag}(\rho,p,p,p)$ in any orthonormal frame including $U$; \item the equations of motion for the perfect fluid are \[ \begin{cases} \operatorname{div}(\rho U) + p \operatorname{div} U = 0 \\ (\rho+p) \nabla_U U = - (\operatorname{grad} p)^\perp \end{cases}, \] where $^\perp$ represents the orthogonal projection on the spacelike hyperplane orthogonal to $U$. \end{enumerate} \end{enumerate} \chapter{Exact solutions} \label{chapter2} In this chapter we present a number of exact solutions of the Einstein field equations, as well as their Penrose diagrams. These solutions will be used as examples or counter-examples to the theorems in the subsequent chapters. We also discuss the matching of two different solutions across a timelike hypersurface. A different perspective on Penrose diagrams can be found in \cite{HE95}. \section{Minkowski spacetime} \label{sec2.1} The simplest solution of the Einstein field equations with zero cosmological constant in vacuum (i.e.~with vanishing energy-momentum tensor) is the Minkowski spacetime, that is, $\mathbb{R}^4$ with the metric \[ ds^2 = - dt^2 + dx^2 + dy^2 + dz^2. 
\] Since this metric is flat, its curvature vanishes, and so do its Ricci and Einstein tensors. It represents a universe where there is no gravity whatsoever. Transforming the Cartesian coordinates $(x,y,z)$ to spherical coordinates $(r,\theta,\varphi)$ yields \[ ds^2 = - dt^2 + dr^2 + r^2 \left(d\theta^2 + \sin^2\theta d\varphi^2\right). \] Performing the additional change of coordinates \[ \begin{cases} u = t - r \qquad \text{(retarded time)} \\ v = t + r \qquad \text{(advanced time)} \end{cases} \] we obtain \[ ds^2 = - du \, dv + r^2 \left(d\theta^2 + \sin^2\theta d\varphi^2\right), \] where \[ r(u,v) = \frac12 (v-u). \] The coordinates $(u,v)$ are called {\bf null coordinates}: their level sets are null cones formed by outgoing/ingoing null geodesics emanating from the center. Note that they are subject to the constraint \[ r \geq 0 \Leftrightarrow v \geq u. \] Finally, the coordinate change \begin{equation} \label{tanh} \begin{cases} \tilde{u} = \tanh u \\ \tilde{v} = \tanh v \end{cases} \Leftrightarrow \begin{cases} u = \operatorname{arctanh} \tilde{u} \\ v = \operatorname{arctanh} \tilde{v} \end{cases} \end{equation} brings the metric into the form \[ ds^2 = - \frac{1}{\left(1-\tilde{u}^2\right)\left(1-\tilde{v}^2\right)} d\tilde{u} \, d\tilde{v} + r^2 \left(d\theta^2 + \sin^2\theta d\varphi^2\right), \] where now \[ r\left(\tilde{u},\tilde{v}\right) = \frac12 (\operatorname{arctanh} \tilde{v}-\operatorname{arctanh} \tilde{u}) \] and \begin{equation} \label{range} -1 < \tilde{u} \leq \tilde{v} < 1. \end{equation} Because $(\tilde{u},\tilde{v})$ are also null coordinates, it is common to represent their axes tilted by $45^\circ$. The plane region defined by \eqref{range} is then represented in Figure~\ref{uv}. \begin{figure}[h!] 
\begin{center} \psfrag{u}{$\tilde{u}$} \psfrag{v}{$\tilde{v}$} \epsfxsize=.4\textwidth \leavevmode \epsfbox{uv.eps} \end{center} \caption{Range of the coordinates $(\tilde{u},\tilde{v})$.} \label{uv} \end{figure} This region is usually called the {\bf Penrose diagram} for the Minkowski spacetime. If we take each point in the diagram to represent a sphere $S^2$ of radius $r\left(\tilde{u},\tilde{v}\right)$, the diagram itself represents the full spacetime manifold, in a way that makes causality relations apparent: any causal curve is represented in the diagram by a curve whose tangent makes an angle of at most $45^\circ$ with the vertical. In Figure~\ref{Pen_Mink} we represent some level hypersurfaces of $t$ and $r$ in the Penrose diagram. The former approach the point $i^0$ in the boundary of the diagram, called the {\bf spacelike infinity}, whereas the latter go from the boundary point $i^-$ ({\bf past timelike infinity}) to the boundary point $i^+$ ({\bf future timelike infinity}). Finally, null geodesics start at the null boundary line $\mathscr{I^-}$ ({\bf past null infinity}) and end at the null boundary line $\mathscr{I^+}$ ({\bf future null infinity}). These boundary points and lines represent ideal points at infinity, and do not correspond to actual points in the Minkowski spacetime. \begin{figure}[h!] \begin{center} \psfrag{i+}{$i^+$} \psfrag{i0}{$i^0$} \psfrag{i-}{$i^-$} \psfrag{r=0}{$r=0$} \psfrag{I+}{$\mathscr{I^+}$} \psfrag{I-}{$\mathscr{I^-}$} \epsfxsize=.4\textwidth \leavevmode \epsfbox{Pen_Mink.eps} \end{center} \caption{Penrose diagram for the Minkowski spacetime, with some level hypersurfaces of $t$ and $r$ represented.} \label{Pen_Mink} \end{figure} \section{Penrose diagrams} \label{sec2.2} The concept of Penrose diagram can be easily generalized to any spherically symmetric spacetime. 
Such spacetimes have metric \[ ds^2 = g_{AB} dx^A dx^B + r^2 \left(d\theta^2 + \sin^2\theta d\varphi^2\right), \qquad r=r(x^0,x^1), \] where $g_{AB} dx^A dx^B$ is a Lorentzian metric on a $2$-dimensional quotient manifold with boundary (which we assume to be diffeomorphic to a region of the plane). It turns out that any such metric is conformal to the Minkowski metric: \begin{equation} \label{confmet} g_{AB} dx^A dx^B = - \Omega^2 du \, dv, \qquad \Omega=\Omega(u,v). \end{equation} This can be seen locally as follows: choose a spacelike line $S$, a coordinate $u$ along it, and a coordinate $w$ along a family of null geodesics emanating from $S$, so that $S$ corresponds to $w=0$ (Figure~\ref{conformal}). Then near $S$ we have \[ g_{uu} = \left\langle \frac{\partial}{\partial u}, \frac{\partial}{\partial u} \right\rangle > 0 \] and \[ g_{ww} = \left\langle \frac{\partial}{\partial w}, \frac{\partial}{\partial w} \right\rangle = 0. \] \begin{figure}[h!] \begin{center} \psfrag{u}{$u$} \psfrag{w}{$w$} \epsfxsize=.6\textwidth \leavevmode \epsfbox{conformal.eps} \end{center} \caption{Choice of the coordinates $(u,w)$.} \label{conformal} \end{figure} Therefore the $2$-dimensional metric is written in these coordinates \[ g_{AB} dx^A dx^B = g_{uu} du^2 + 2g_{uw} du dw = g_{uu} du \left( du + \frac{2g_{uw}}{g_{uu}} dw \right). \] As for any $1$-form in a $2$-dimensional manifold, we have \[ du + \frac{2g_{uw}}{g_{uu}} dw = f dv \] for suitable functions $f$ and $v$. Note that $f$ cannot vanish, because $(u,w)$ are local coordinates. Moreover, we can assume $f<0$ by replacing $v$ with $-v$ if necessary. Choosing $\Omega^2 = -fg_{uu}$ then yields \eqref{confmet}. We then see that any spherically symmetric metric can be written as \begin{equation} \label{Penrose_met} ds^2 = - \Omega^2 du \, dv + r^2 \left(d\theta^2 + \sin^2\theta d\varphi^2\right) \end{equation} with $\Omega=\Omega(u,v)$ and $r=r(u,v)$. 
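For the Minkowski spacetime of the previous section, where $u=t-r$ and $v=t+r$, the conformal factor is simply $\Omega \equiv 1$. This is a one-line computation, which can also be checked symbolically; the following is a small sketch using Python's sympy library (an arbitrary choice of computer algebra system), treating the differentials as formal symbols:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
du, dv = sp.symbols('du dv')

# Inverting u = t - r, v = t + r:
t = (u + v)/2
r = (v - u)/2

# Pull back ds^2 = -dt^2 + dr^2 through the substitution.
dt = sp.diff(t, u)*du + sp.diff(t, v)*dv
dr = sp.diff(r, u)*du + sp.diff(r, v)*dv
ds2 = sp.expand(-dt**2 + dr**2)

assert ds2 == -du*dv   # the quotient metric is -du dv, i.e. Omega = 1
```

The same substitution mechanism works for any explicit change to null coordinates.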
By rescaling $u$ and $v$ if necessary, we can assume that the range of $(u,v)$ is bounded, and hence obtain a Penrose diagram depicting the causal geometry. As we will see, this is extremely helpful in more complicated spherically symmetric solutions of the Einstein field equations. \begin{Remark} From \[ \left( g_{AB} \right) = \left( \begin{matrix} 0 & -\frac{\Omega^2}2 \\ -\frac{\Omega^2}2 & 0 \end{matrix} \right) \Rightarrow \left( g^{AB} \right) = \left( \begin{matrix} 0 & -\frac2{\Omega^2} \\ -\frac2{\Omega^2} & 0 \end{matrix} \right) \] it is easily seen that \[ \partial_A \left( \sqrt{ - \det\left(g_{CD}\right)} \, g^{AB} \partial_B u \right) = 0 \Leftrightarrow \nabla_A \nabla^A u = 0, \] and similarly for $v$. In other words, the null coordinates $u$ and $v$ are solutions of the wave equation in the $2$-dimensional Lorentzian manifold: \[ \Box u = \Box v = 0. \] This is the Lorentzian analogue of the so-called {\bf isothermal coordinates} for Riemannian surfaces. The proof that the latter exist locally is, however, slightly more complicated: given a point $p$ on the surface, one chooses a local harmonic function with nonvanishing derivative, \[ \Delta u = 0, \qquad (du)_p \neq 0, \] and considers the equation \begin{equation} \label{dv} dv = \star du. \end{equation} Here $\star$ is the Hodge star, which for generic orientable $n$-dimensional pseudo-Riemannian manifolds is defined as follows: if $\{\omega^1, \ldots, \omega^n \}$ is any positively oriented orthonormal coframe then \[ \star (\omega^1 \wedge \cdots \wedge \omega^k) = \langle \omega^1, \omega^1 \rangle \cdots \langle \omega^k, \omega^k \rangle \, \omega^{k+1} \wedge \cdots \wedge \omega^n. \] By the Poincar\'e Lemma, equation~\eqref{dv} can be locally solved, since \[ d \star du = \star \star d \star du = \star (\Delta u) = 0. \] Moreover, $v$ is itself harmonic, because \[ \Delta v = \star d \star dv = \star d \star \star du = \star d (-du) = 0. 
\] Finally, \[ \| du \| = \| dv \| = \frac1{\Omega} \] for some local function $\Omega > 0$, and so the metric is written in these coordinates as \[ ds^2 = \Omega^2 \left( du^2 + dv^2 \right). \] \end{Remark} \section{The Schwarzschild solution} \label{sec2.3} If we try to solve the vacuum Einstein field equations with zero cosmological constant for a spherically symmetric Lorentzian metric, we obtain, after suitably rescaling the time coordinate, the {\bf Schwarzschild metric} \[ ds^2 = -\left(1 - \frac{2M}{r}\right) dt^2 + \left(1 - \frac{2M}{r}\right)^{-1} dr^2 + r^2 \left( d\theta^2 + \sin^2 \theta d\varphi^2 \right) \] (where $M \in \mathbb{R}$ is a constant). Note that for $M=0$ we retrieve the Minkowski metric in spherical coordinates. Note also that if $M>0$ then the metric is defined in two disconnected domains of coordinates, corresponding to $r \in (0, 2M)$ and $r \in (2M, + \infty)$. The physical interpretation of the Schwarzschild solution can be found by considering the proper time of a timelike curve parameterized by the time coordinate: \[ \tau = \int_{t_0}^{t_1} \left[ \left(1 - \frac{2M}{r}\right) - \left(1 - \frac{2M}{r}\right)^{-1} \dot{r}^2 - r^2 \dot{\theta}^2 - r^2 \sin^2 \theta \dot{\varphi}^2 \right]^{\frac12} dt, \] where $\dot{r}=\frac{dr}{dt}$, etc. The integrand $L_S$ is the Lagrangian for geodesic motion in the Schwarzschild spacetime when parameterized by the time coordinate. Now for motions with speeds much smaller than the speed of light we have $\dot{r}^2 \ll 1$, etc. 
Assuming $\frac{M}{r} \ll 1$ as well we have \begin{align*} L_S & = \left[ 1 - \frac{2M}{r} - \left(1 - \frac{2M}{r}\right)^{-1} \dot{r}^2 - r^2 \dot{\theta}^2 - r^2 \sin^2 \theta \dot{\varphi}^2 \right]^{\frac12} \\ & \simeq 1 - \frac{M}{r} - \frac12 \left( \dot{r}^2 + r^2 \dot{\theta}^2 + r^2 \sin^2 \theta \dot{\varphi}^2 \right) = 1 - L_N, \end{align*} where \[ L_N = \frac12 \left( \dot{r}^2 + r^2 \dot{\theta}^2 + r^2 \sin^2 \theta \dot{\varphi}^2 \right) + \frac{M}{r} \] is precisely the Newtonian Lagrangian for the motion of a particle in the gravitational field of a point mass $M$. The Schwarzschild solution should therefore be considered the relativistic analogue of this field. To write the Schwarzschild metric in the form \eqref{Penrose_met} we note that the quotient metric is \begin{align*} ds^2 & = -\left(1 - \frac{2M}{r}\right) dt^2 + \left(1 - \frac{2M}{r}\right)^{-1} dr^2 \\ & = -\left(1 - \frac{2M}{r}\right) \left[ dt^2 - \left(1 - \frac{2M}{r}\right)^{-2} dr^2 \right] \\ & = -\left(1 - \frac{2M}{r}\right) \left[ dt - \left(1 - \frac{2M}{r}\right)^{-1} dr \right] \left[ dt + \left(1 - \frac{2M}{r}\right)^{-1} dr \right] \\ & = -\left(1 - \frac{2M}{r}\right) du \, dv, \end{align*} where we define \[ u = t - \int \left(1 - \frac{2M}{r}\right)^{-1} dr = t - r - 2M \log |r - 2M| \] and \[ v = t + \int \left(1 - \frac{2M}{r}\right)^{-1} dr = t + r + 2M \log |r - 2M|. \] In the domain of coordinates $r>2M$ we have $1 - \frac{2M}{r} > 0$, and so the quotient metric is already in the required form. Note however that, unlike what happened in the Minkowski spacetime, we now have \[ v - u = 2r + 4M \log |r - 2M| \in (-\infty, +\infty). \] Consequently, by applying the coordinate rescaling \eqref{tanh} we obtain the full square, instead of a triangle (Figure~\ref{Pen_Schw_out}). 
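Both computations above, the slow-motion, weak-field expansion of $L_S$ and the antiderivative used to define $u$ and $v$, are easy to verify with a computer algebra system. A sketch in Python/sympy, where the bookkeeping parameter $\epsilon$ (our own device, not part of the text) scales the velocities as $O(\epsilon)$ and $M/r$ as $O(\epsilon^2)$:

```python
import sympy as sp

eps, m = sp.symbols('epsilon m', positive=True)          # m plays the role of M/r
vr, vt, vp = sp.symbols('v_r v_theta v_phi', real=True)  # r', r theta', r sin(theta) phi'

# Newtonian limit: expand L_S to second order in epsilon.
f = 1 - 2*eps**2*m
L_S = sp.sqrt(f - (eps*vr)**2/f - (eps*vt)**2 - (eps*vp)**2)
L_N = sp.Rational(1, 2)*(vr**2 + vt**2 + vp**2) + m      # Newtonian Lagrangian
expansion = sp.series(L_S, eps, 0, 3).removeO()
assert sp.simplify(expansion - (1 - eps**2*L_N)) == 0

# Tortoise-type antiderivative: d/dr [r + 2M log(r - 2M)] = (1 - 2M/r)^(-1) for r > 2M.
r, M = sp.symbols('r M', positive=True)
rstar = r + 2*M*sp.log(r - 2*M)
assert sp.simplify(sp.diff(rstar, r) - 1/(1 - 2*M/r)) == 0
```

The same check with $\log(2M-r)$ covers the domain $r<2M$.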
Besides the infinity points and null boundaries also present in the Penrose diagram for the Minkowski spacetime, there are two new null boundaries, $\mathscr{H^-}$ ({\bf past event horizon}) and $\mathscr{H^+}$ ({\bf future event horizon}), where $r=2M$. \begin{figure}[h!] \begin{center} \psfrag{i+}{$i^+$} \psfrag{i0}{$i^0$} \psfrag{i-}{$i^-$} \psfrag{H+}{$\mathscr{H^+}$} \psfrag{H-}{$\mathscr{H^-}$} \psfrag{I+}{$\mathscr{I^+}$} \psfrag{I-}{$\mathscr{I^-}$} \epsfxsize=.5\textwidth \leavevmode \epsfbox{Pen_Schw_out.eps} \end{center} \caption{Penrose diagram for the region $r>2M$ of the Schwarzschild spacetime, with some level hypersurfaces of $t$ and $r$ represented.} \label{Pen_Schw_out} \end{figure} It seems reasonable to expect that the metric can be extended across the horizons, since $r$ tends neither to zero nor to infinity there; this expectation is confirmed by calculating the so-called {\bf Kretschmann scalar}: \[ R_{\alpha\beta\mu\nu} R^{\alpha\beta\mu\nu} = \frac{48M^2}{r^6}. \] This is perfectly well behaved as $r \to 2M$, and seems to indicate that the horizons are mere singularities of the coordinate system $(t,r)$. To show that this is indeed the case, note that in the $(u,r)$ coordinate system the quotient metric is written \[ ds^2 = - \left(1 - \frac{2M}{r}\right) du^2 - 2 du \, dr. \] Since \[ \det \left( \begin{matrix} - 1 + \frac{2M}{r} & -1 \\ -1 & 0 \end{matrix} \right) = -1, \] we see that the metric is well defined across $r=2M$ in this coordinate system. Moreover, we know that it solves the Einstein equations in the coordinate domains $r<2M$ and $r>2M$; by continuity, it must solve them in the whole domain $r \in (0, +\infty)$. Note that the coordinate domains $r<2M$ and $r>2M$ are glued along $r=2M$ so that the outgoing null geodesics $u=\text{constant}$ go from $r=0$ to $r=+\infty$; in other words, the gluing is along the past event horizon $\mathscr{H^-}$. 
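Both facts invoked above, that the Schwarzschild metric solves the vacuum Einstein equations and that its Kretschmann scalar equals $48M^2/r^6$, can be verified by brute force with a computer algebra system. The following sketch uses Python's sympy with plain index loops (no tensor package assumed); it takes a little while to run:

```python
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
n = 4
f = 1 - 2*M/r
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)   # Schwarzschild metric in (t, r, theta, phi)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (g_{dc,b} + g_{db,c} - g_{bc,d})
Gamma = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, c], x[b]) + sp.diff(g[d, b], x[c])
                                       - sp.diff(g[b, c], x[d])) for d in range(n))/2)
           for c in range(n)] for b in range(n)] for a in range(n)]

# Riemann tensor R^a_{bcd}
def riemann(a, b, c, d):
    expr = sp.diff(Gamma[a][d][b], x[c]) - sp.diff(Gamma[a][c][b], x[d])
    expr += sum(Gamma[a][c][e]*Gamma[e][d][b] - Gamma[a][d][e]*Gamma[e][c][b]
                for e in range(n))
    return sp.simplify(expr)

R = [[[[riemann(a, b, c, d) for d in range(n)] for c in range(n)]
      for b in range(n)] for a in range(n)]

# Vacuum Einstein equations: the Ricci tensor R_{bd} = R^a_{bad} vanishes.
Ricci = [[sp.simplify(sum(R[a][b][a][d] for a in range(n))) for d in range(n)]
         for b in range(n)]
assert all(Ricci[b][d] == 0 for b in range(n) for d in range(n))

# Kretschmann scalar: lower the first index, then contract with all indices raised.
# Since g is diagonal, raising an index only multiplies by the inverse diagonal entry.
Rdown = [[[[sp.simplify(g[a, a]*R[a][b][c][d]) for d in range(n)] for c in range(n)]
          for b in range(n)] for a in range(n)]
K = sum(Rdown[a][b][c][d]**2 * ginv[a, a]*ginv[b, b]*ginv[c, c]*ginv[d, d]
        for a in range(n) for b in range(n) for c in range(n) for d in range(n))
assert sp.simplify(K - 48*M**2/r**6) == 0
```

The computation is insensitive to the sign of $r-2M$, so it covers both coordinate domains at once.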
To obtain the Penrose diagram for the coordinate domain $r<2M$ we note that the quotient metric can be written as \begin{align*} ds^2 & = -\left(1 - \frac{2M}{r}\right) du \, dv = - \left(\frac{2M}{r} - 1\right) du \, (-dv) = - \left(\frac{2M}{r} - 1\right) du \, dv', \end{align*} where $v'=-v$. Since in this coordinate domain $\frac{2M}{r} - 1 > 0$, the quotient metric is in the required form. Note however that we now have \[ u + v' = - 2r - 4M \log |r - 2M| \in (- 4M \log (2M), +\infty), \] and by setting \[ v'' = v' + 4M \log (2M) \] we obtain \[ u + v'' > 0. \] Consequently, by applying the coordinate rescaling \eqref{tanh} we obtain a triangle (Figure~\ref{Pen_Schw_down}). There is now a spacelike boundary, where $r=0$, and two null boundaries $\mathscr{H^-}$, where $r=2M$. The Penrose diagram for the domain of the coordinates $(u,r)$ can be obtained by gluing the Penrose diagrams in Figures~\ref{Pen_Schw_out} and \ref{Pen_Schw_down} along $\mathscr{H^-}$, so that the null geodesics $u=\text{constant}$ match (Figure~\ref{Pen_Schw_right}). \begin{figure}[h!] \begin{center} \psfrag{r=0}{$r=0$} \psfrag{i-}{$i^-$} \psfrag{H-}{$\mathscr{H^-}$} \epsfxsize=.6\textwidth \leavevmode \epsfbox{Pen_Schw_down.eps} \end{center} \caption{Penrose diagram for a region $r<2M$ of the Schwarzschild spacetime, with some level hypersurfaces of $t$ and $r$ represented.} \label{Pen_Schw_down} \end{figure} \begin{figure}[h!] 
\begin{center} \psfrag{i+}{$i^+$} \psfrag{i0}{$i^0$} \psfrag{i-}{$i^-$} \psfrag{r=0}{$r=0$} \psfrag{H+}{$\mathscr{H^+}$} \psfrag{H-}{$\mathscr{H^-}$} \psfrag{I+}{$\mathscr{I^+}$} \psfrag{I-}{$\mathscr{I^-}$} \epsfxsize=.7\textwidth \leavevmode \epsfbox{Pen_Schw_right.eps} \end{center} \caption{Penrose diagram for the domain of the coordinates $(u,r)$ in the Schwarzschild spacetime, with some level hypersurfaces of $t$ and $r$ represented.} \label{Pen_Schw_right} \end{figure} If instead we use the $(v,r)$ coordinate system, the quotient metric is written \[ ds^2 = - \left(1 - \frac{2M}{r}\right) dv^2 + 2 dv \, dr. \] Again the metric is well defined across $r=2M$ in this coordinate system, since \[ \det \left( \begin{matrix} - 1 + \frac{2M}{r} & 1 \\ 1 & 0 \end{matrix} \right) = - 1, \] and solves the Einstein equations in the whole coordinate domain $r \in (0, +\infty)$. The coordinate domains $r<2M$ and $r>2M$ are now glued along $r=2M$ so that the ingoing null geodesics $v=\text{constant}$ go from $r=+\infty$ to $r=0$; in other words, the gluing is along the future event horizon $\mathscr{H^+}$. To obtain the Penrose diagram for the coordinate domain $r<2M$ we note that the quotient metric can be written as \begin{align*} ds^2 & = -\left(1 - \frac{2M}{r}\right) du \, dv = - \left(\frac{2M}{r} - 1\right) (- du) \, dv = - \left(\frac{2M}{r} - 1\right) du' \, dv, \end{align*} where $u'=-u$. Since in this coordinate domain $\frac{2M}{r} - 1 > 0$, the quotient metric is in the required form. We have \[ u' + v = 2r + 4M \log |r - 2M| \in (- \infty, 4M \log (2M) ), \] and by setting \[ u'' = u' - 4M \log (2M) \] we obtain \[ u'' + v < 0. \] Consequently, by applying the coordinate rescaling \eqref{tanh} we obtain a triangle (Figure~\ref{Pen_Schw_up}). Again there is a spacelike boundary, where $r=0$, and two null boundaries $\mathscr{H^+}$, where $r=2M$. 
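The coordinate changes leading to the two Eddington--Finkelstein-type forms of the quotient metric are short computations that can also be checked symbolically (a sympy sketch, with the differentials treated as formal symbols):

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)
dt, dr, du, dv = sp.symbols('dt dr du dv')
f = 1 - 2*M/r

quotient = -f*dt**2 + dr**2/f           # Schwarzschild quotient metric in (t, r)

# v = t + r + 2M log|r - 2M| gives dt = dv - dr/f:
ingoing = sp.expand(quotient.subs(dt, dv - dr/f))
assert sp.simplify(ingoing - (-f*dv**2 + 2*dv*dr)) == 0

# u = t - r - 2M log|r - 2M| gives dt = du + dr/f:
outgoing = sp.expand(quotient.subs(dt, du + dr/f))
assert sp.simplify(outgoing - (-f*du**2 - 2*du*dr)) == 0
```

In both forms the $dr^2$ terms cancel exactly, which is why the resulting metrics remain regular at $r=2M$.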
The Penrose diagram for the domain of the coordinates $(v,r)$ can be obtained by gluing the Penrose diagrams in Figures~\ref{Pen_Schw_out} and \ref{Pen_Schw_up} along $\mathscr{H^+}$, so that the null geodesics $v=\text{constant}$ match (Figure~\ref{Pen_Schw_right_up}). \begin{figure}[h!] \begin{center} \psfrag{r=0}{$r=0$} \psfrag{i+}{$i^+$} \psfrag{H+}{$\mathscr{H^+}$} \epsfxsize=.6\textwidth \leavevmode \epsfbox{Pen_Schw_up.eps} \end{center} \caption{Penrose diagram for a region $r<2M$ of the Schwarzschild spacetime, with some level hypersurfaces of $t$ and $r$ represented.} \label{Pen_Schw_up} \end{figure} \begin{figure}[h!] \begin{center} \psfrag{i+}{$i^+$} \psfrag{i0}{$i^0$} \psfrag{i-}{$i^-$} \psfrag{r=0}{$r=0$} \psfrag{H+}{$\mathscr{H^+}$} \psfrag{H-}{$\mathscr{H^-}$} \psfrag{I+}{$\mathscr{I^+}$} \psfrag{I-}{$\mathscr{I^-}$} \epsfxsize=.7\textwidth \leavevmode \epsfbox{Pen_Schw_right_up.eps} \end{center} \caption{Penrose diagram for the domain of the coordinates $(v,r)$ in the Schwarzschild spacetime, with some level hypersurfaces of $t$ and $r$ represented.} \label{Pen_Schw_right_up} \end{figure} Both regions $r<2M$ can of course be glued to the region $r>2M$ simultaneously. Since they are invariant under reflections with respect to $t=0$ (the vertical line through their common vertex), it is then clear that a mirror-reversed copy of the region $r>2M$ can be glued to the surviving null boundaries $\mathscr{H^-}$ and $\mathscr{H^+}$ (Figure~\ref{Pen_Schw}). The resulting spacetime, known as the {\bf maximal analytic extension} of the Schwarzschild solution, is a solution of the Einstein equations which cannot be extended any further, since $r \to 0$ or $r \to + \infty$ on the boundary of its Penrose diagram. Note that by continuity the Einstein equations hold at the point where the four Penrose diagrams intersect (known as the {\bf bifurcate sphere}). \begin{figure}[h!] 
\begin{center} \psfrag{i+}{$i^+$} \psfrag{i0}{$i^0$} \psfrag{i-}{$i^-$} \psfrag{r=0}{$r=0$} \psfrag{I+}{$\mathscr{I^+}$} \psfrag{I-}{$\mathscr{I^-}$} \epsfxsize=.7\textwidth \leavevmode \epsfbox{Pen_Schw.eps} \end{center} \caption{Penrose diagram for the maximal analytic extension of the Schwarzschild spacetime, with some level hypersurfaces of $t$ and $r$ represented.} \label{Pen_Schw} \end{figure} Let us now analyze in detail the Penrose diagram for the maximal analytic extension of the Schwarzschild spacetime. There are two {\bf asymptotically flat} regions $r > 2M$, corresponding to two causally disconnected universes, joined by a {\bf wormhole}. There are also two regions where $r < 2M$: a {\bf black hole region}, bounded by the future event horizons $\mathscr{H^+}$, from which no causal curve can escape; and a {\bf white hole region}, bounded by the past event horizons $\mathscr{H^-}$, from which every causal curve must escape. Note that the horizons themselves correspond to spheres which are propagating at the speed of light, but whose radius remains constant, $r = 2M$. The black hole in the maximal analytic extension of the Schwarzschild spacetime is an {\bf eternal black hole}, that is, a black hole which has always existed (as opposed to having formed by some physical process). We will see shortly how to use the Schwarzschild solution to model physically realistic black holes. \section{Friedmann-Lema\^\i tre-Robertson-Walker models} \label{sec2.4} The simplest models of {\bf cosmology}, the study of the Universe as a whole, are obtained from the assumption that space is homogeneous and isotropic (which is true on average at very large scales). 
It is well known that the only isotropic $3$-dimensional Riemannian metrics are, up to scale, given by \[ dl^2 = \frac{dr^2}{1-kr^2} + r^2\left( d\theta^2 + \sin^2 \theta d\varphi^2 \right), \] where \[ k = \begin{cases} \,\,\,\, 1 \qquad \text{for the standard metric on } S^3 \\ \,\,\,\, 0 \qquad \text{for the standard metric on } \mathbb{R}^3 \\ -1 \qquad \text{for the standard metric on } H^3 \\ \end{cases}. \] Allowing for a time-dependent scale factor $a(t)$ (also known as the ``radius of the Universe''), we arrive at the {\bf Friedmann-Lema\^\i tre-Robertson-Walker} (FLRW) family of Lorentzian metrics: \begin{equation} \label{FLRW} ds^2 = -dt^2 + a^2(t) \left[\frac{dr^2}{1-kr^2} + r^2\left( d\theta^2 + \sin^2 \theta d\varphi^2 \right)\right]. \end{equation} To interpret these metrics, we consider a general Lorentzian metric of the form \[ ds^2 = - e^{2 \phi} dt^2 + h_{ij} dx^i dx^j = - e^{2 \phi} dt^2 + dl^2. \] The Riemannian metric $dl^2$ is readily interpreted as giving the distances measured between nearby observers with fixed space coordinates $x^i$ in radar experiments: indeed, such observers measure proper time $\tau$ given by \[ d\tau^2 = e^{2\phi} dt^2. \] The null geodesics representing a radar signal bounced by a given observer from a nearby observer (Figure~\ref{radar}) satisfy \[ ds^2 = 0 \Leftrightarrow e^{2 \phi} dt^2 = dl^2 \Leftrightarrow d \tau^2 = dl^2 \Leftrightarrow d\tau = \pm dl. \] \begin{figure}[h!] \begin{center} \psfrag{2dl}{$2dl$} \epsfxsize=.15\textwidth \leavevmode \epsfbox{radar.eps} \end{center} \caption{Distance between nearby observers.} \label{radar} \end{figure} Since the speed of light is $c=1$, the distance traveled between the observers will be half the time between the emission and the reception of the signal: \[ \frac{(\tau + dl) - (\tau - dl)}{2} = dl. 
\] Moreover, the unit timelike covector field associated with the trajectories of the observers with fixed space coordinates $x^i$ is \[ U = - e^\phi dt \Leftrightarrow U_\mu = - e^\phi \nabla_ \mu t. \] Therefore \begin{align*} \nabla_U U^\mu & = - U^\nu \nabla_\nu ( e^\phi \nabla^\mu t) = (U \cdot \phi) U^\mu - e^\phi U^\nu \nabla_\nu \nabla^\mu t \\ & = (U \cdot \phi) U^\mu - e^\phi U^\nu \nabla^\mu \nabla_\nu t = (U \cdot \phi) U^\mu + e^\phi U^\nu \nabla^\mu (e^{-\phi} U_\nu) \\ & = (U \cdot \phi) U^\mu - U^\nu U_\nu \nabla^\mu \phi + U^\nu \nabla^\mu U_\nu \\ & = \nabla^\mu \phi + (U^\nu \nabla_\nu \phi) U^\mu + \frac12 \nabla^\mu (U^\nu U_\nu) = \nabla^\mu \phi + (U^\nu \nabla_\nu \phi) U^\mu, \end{align*} since $U^\nu U_\nu = -1$. In other words, \[ \nabla_U U = (\operatorname{grad} \phi)^\perp, \] where $^\perp$ represents the orthogonal projection on the spacelike hyperplane orthogonal to $U$. Therefore the observers with fixed space coordinates in the FLRW models have zero acceleration, that is, they are free-falling (as opposed to the corresponding observers in the Schwarzschild spacetime, who must accelerate to remain at fixed $r>2M$). Moreover, the distance between two such observers varies as \[ d(t) = a(t) \frac{d_0}{a_0} \Rightarrow \dot{d} = \dot{a} \, \frac{d_0}{a_0} = \frac{\dot{a}}{a} \, d. \] This relation, known as the {\bf Hubble law}, is often written as \[ v = H d, \] where $v$ is the relative velocity and \[ H = \frac{\dot{a}}{a} \] is the so-called {\bf Hubble constant} (for historical reasons, since it actually varies in time). We will model the matter content of the universe as a uniform dust of galaxies placed at fixed space coordinates (hence free-falling): \begin{equation} \label{TFLRW} T = \rho(t) dt \otimes dt. 
\end{equation} Plugging the metric~\eqref{FLRW} and the energy-momentum tensor~\eqref{TFLRW} into the Einstein equations, and integrating once, results in the so-called {\bf Friedmann equations} \[ \begin{cases} \displaystyle \frac12 \dot{a}^2 - \frac{\alpha}{a} - \frac{\Lambda}6 a^2 = - \frac{k}2 \\ \\ \displaystyle \frac{4\pi}3 \rho a^3 = \alpha \end{cases} \] (where $\alpha$ is an integration constant). The first Friedmann equation is a first-order ODE for $a(t)$; it can be seen as the equation of conservation of energy for a particle moving in the $1$-dimensional effective potential \[ V(a) = - \frac{\alpha}{a} - \frac{\Lambda}6 a^2 \] with energy $- \frac{k}2$. Once this equation has been solved, the second Friedmann equation yields $\rho(t)$ from $a(t)$. We now examine in detail the FLRW models arising from the solutions of these equations. \subsection{Milne universe} If we set $\alpha=\Lambda=0$ then the first Friedmann equation becomes \[ \dot{a}^2 = - k. \] Therefore either $k=0$ and $\dot{a} = 0$, which corresponds to the Minkowski spacetime, or $k = -1$ and $\dot{a}^2 = 1$, that is \[ ds^2 = - dt^2 + t^2 dl^2_{H^3}, \] where $dl^2_{H^3}$ represents the metric of the unit hyperbolic $3$-space; this is the so-called {\bf Milne universe}. It turns out that the Milne universe is isometric to an open region of the Minkowski spacetime, namely the region bounded by the future (or past) light cone of the origin. This region is foliated by hyperboloids $S_t$ of the form \[ T^2 - X^2 - Y^2 - Z^2 = t^2, \] whose induced metric is that of a hyperbolic space of radius $t$ (Figure~\ref{Milne}). Note that the light cone corresponds to $a(t)=t=0$, that is, the {\bf Big Bang} of the Milne universe. \begin{figure}[h!] 
\begin{center} \psfrag{u}{$T$} \psfrag{x}{$X$} \psfrag{y}{$Y$} \psfrag{St}{$S_t$} \epsfxsize=.6\textwidth \leavevmode \epsfbox{Milne.eps} \end{center} \caption{Milne universe.} \label{Milne} \end{figure} \subsection{de Sitter universe} If $\alpha=0$ and $\Lambda>0$ we can choose units such that $\Lambda=3$. The first Friedmann equation then becomes \[ \dot{a}^2 - a^2 = - k. \] In this case all three values $k=1$, $k=0$ and $k=-1$ are possible; the corresponding metrics are, respectively, \begin{align*} & ds^2 = - dt^2 + \cosh^2t \, dl^2_{S^3}; \\ & ds^2 = - dt^2 + e^{2t} dl^2_{\mathbb{R}^3}; \\ & ds^2 = - dt^2 + \sinh^2t \, dl^2_{H^3}, \end{align*} where $dl^2_{S^3}$, $dl^2_{\mathbb{R}^3}$ and $dl^2_{H^3}$ represent the metrics of the unit $3$-sphere, the Euclidean $3$-space and the unit hyperbolic $3$-space. It turns out that the last two models correspond to open regions of the first, which is then called the {\bf de Sitter universe}. It represents a spherical universe which contracts to a minimum radius ($1$ in our units) and then re-expands. It is easily seen to be isometric to the unit hyperboloid \[ -T^2 + X^2 + Y^2 + Z^2 + W^2 = 1 \] in the Minkowski $5$-dimensional spacetime (Figure~\ref{dS}). \begin{figure}[h!] \begin{center} \psfrag{u}{$T$} \psfrag{x}{$X$} \psfrag{y}{$Y$} \epsfxsize=.6\textwidth \leavevmode \epsfbox{dS.eps} \end{center} \caption{de Sitter universe.} \label{dS} \end{figure} To obtain the Penrose diagram for the de Sitter universe we write its metric as \begin{align*} ds^2 & = - dt^2 + \cosh^2t \left[ d \psi^2 + \sin^2 \psi \left(d\theta^2 + \sin^2\theta d\varphi^2\right) \right] \\ & = \cosh^2t \left[ - d\tau^2 + d \psi^2 + \sin^2 \psi \left(d\theta^2 + \sin^2\theta d\varphi^2\right) \right] \\ & = \cosh^2t \left[ - d\tau^2 + d \psi^2 \right] + r^2 \left(d\theta^2 + \sin^2\theta d\varphi^2\right), \end{align*} where $\psi \in [0,\pi]$, \[ \tau = \int_{- \infty}^t \frac{dt}{\cosh t} \] and \[ r = \cosh t \sin \psi. 
\] Since \[ \int_{- \infty}^{+ \infty} \frac{dt}{\cosh t} = \pi, \] we see that the quotient metric is conformal to the square $(0,\pi) \times [0, \pi]$ of the Minkowski $2$-dimensional spacetime, and so the Penrose diagram is as depicted in Figure~\ref{Pen_dS}. Note that there are two lines where $r=0$, corresponding to two antipodal points of the $3$-sphere. A light ray emitted from one of these points at $t=-\infty$ has just enough time to reach the other point at $t=+\infty$ (dashed line in the diagram). Note also that in this case $\mathscr{I^-}$ and $\mathscr{I^+}$ (defined as the past and future boundary points approached by null geodesics along which $r \to + \infty$) are spacelike boundaries. \begin{figure}[h!] \begin{center} \psfrag{r=0}{$r=0$} \psfrag{I+}{$\mathscr{I^+}$} \psfrag{I-}{$\mathscr{I^-}$} \epsfxsize=.5\textwidth \leavevmode \epsfbox{Pen_dS.eps} \end{center} \caption{Penrose diagram for the de Sitter universe.} \label{Pen_dS} \end{figure} \subsection{Anti-de Sitter universe} If $\alpha=0$ and $\Lambda<0$ we can choose units such that $\Lambda=-3$. The first Friedmann equation then becomes \[ \dot{a}^2 + a^2 = - k. \] In this case only $k=-1$ is possible; the corresponding metric is \[ ds^2 = - dt^2 + \cos^2t \, dl^2_{H^3}. \] It turns out (see Exercise~\ref{development} in Chapter~\ref{chapter5}) that this model is an open region of the spacetime with metric \[ ds^2 = - \cosh^2\psi dt^2 + d \psi^2 + \sinh^2 \psi \left(d\theta^2 + \sin^2\theta d\varphi^2\right) \] (where $\psi \in [0,+\infty)$), called the {\bf anti-de Sitter universe}. It represents a static hyperbolic universe (with radius $1$ in our units). 
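The embedding descriptions above, of the Milne universe as the interior of a light cone in the Minkowski spacetime and of the de Sitter universe as the unit hyperboloid, can be checked symbolically in a $1+1$-dimensional reduction. A sympy sketch (the coordinate $\chi$ and the helper function \texttt{pullback} are our own notation):

```python
import sympy as sp

t, chi = sp.symbols('t chi', real=True)
dt, dchi = sp.symbols('dt dchi')

def pullback(signs, embedding):
    """Induced metric of a surface in flat space with diagonal metric diag(signs)."""
    ds2 = 0
    for sign, F in zip(signs, embedding):
        dF = sp.diff(F, t)*dt + sp.diff(F, chi)*dchi
        ds2 += sign*dF**2
    return sp.expand(ds2)

# Milne slice: T = t cosh(chi), X = t sinh(chi) (interior of the light cone)
milne = pullback([-1, 1], [t*sp.cosh(chi), t*sp.sinh(chi)])
assert sp.simplify(milne - (-dt**2 + t**2*dchi**2)) == 0

# de Sitter slice: unit hyperboloid -T^2 + X^2 + Y^2 = 1 in 3-dimensional Minkowski space
ds = pullback([-1, 1, 1],
              [sp.sinh(t), sp.cosh(t)*sp.cos(chi), sp.cosh(t)*sp.sin(chi)])
assert sp.simplify(ds - (-dt**2 + sp.cosh(t)**2*dchi**2)) == 0
```

The full $3+1$-dimensional statements follow by replacing $\chi$ with coordinates on $H^3$ and $S^3$ respectively.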
To obtain the Penrose diagram for the anti-de Sitter universe we write its metric as \begin{align*} ds^2 & = \cosh^2\psi \left[ - dt^2 + \frac{d \psi^2}{\cosh^2 \psi} \right] + \sinh^2 \psi \left(d\theta^2 + \sin^2\theta d\varphi^2\right) \\ & = \cosh^2\psi \left[ - dt^2 + dx^2 \right] + r^2 \left(d\theta^2 + \sin^2\theta d\varphi^2\right) \end{align*} where \[ x = \int_0^\psi \frac{d\psi}{\cosh \psi} \] and \[ r = \sinh \psi. \] Since \[ \int_0^{+ \infty} \frac{d\psi}{\cosh \psi} = \frac{\pi}2, \] we see that the quotient metric is conformal to the strip $\mathbb{R} \times [0, \frac{\pi}2)$ of the Minkowski $2$-dimensional spacetime, and so the Penrose diagram is as depicted in Figure~\ref{Pen_AdS}. The FLRW model above corresponds to the triangular region in the diagram. Note also that in this case $\mathscr{I^-} \equiv \mathscr{I^+} \equiv \mathscr{I}$ is a timelike boundary. \begin{figure}[h!] \begin{center} \psfrag{r=0}{$r=0$} \psfrag{I}{$\mathscr{I}$} \epsfxsize=.4\textwidth \leavevmode \epsfbox{Pen_AdS.eps} \end{center} \caption{Penrose diagram for the anti-de Sitter universe.} \label{Pen_AdS} \end{figure} \subsection{Universes with matter and $\Lambda=0$} If $\alpha>0$ and $\Lambda=0$, the first Friedmann equation becomes \[ \dot{a}^2 - \frac{2\alpha}a = - k. \] In this case all three values $k=1$, $k=0$ and $k=-1$ are possible. Although it is possible to obtain explicit formulas for the solutions of these equations, it is simpler to analyze the graph of the effective potential $V(a)$ (Figure~\ref{Lzero_graph}). Possibly by reversing and translating $t$, we can assume that all solutions are defined for $t>0$, with $\lim_{t\to 0} a(t) = 0$, implying $\lim_{t\to 0} \rho(t) = + \infty$. 
Therefore all three models have a true singularity at $t=0$, known as the {\bf Big Bang}, where the scalar curvature $R=8\pi \rho$ also blows up; this is not true for the Milne universe or the open region in the anti-de Sitter universe, which can be extended across the Big Bang. The spherical universe ($k=1$) reaches a maximum radius $2 \alpha$ and re-collapses, forming a second singularity (the {\bf Big Crunch}); the radius of the flat ($k=0$) and hyperbolic ($k=-1$) universes increases monotonically. \begin{figure}[h!] \begin{center} \psfrag{a}{$a$} \psfrag{V(a)}{$V(a)$} \psfrag{k=1}{$k=1$} \psfrag{k=0}{$k=0$} \psfrag{k=-1}{$k=-1$} \epsfxsize=.6\textwidth \leavevmode \epsfbox{Lzero_graph.eps} \end{center} \caption{Effective potential for FLRW models with $\Lambda=0$.} \label{Lzero_graph} \end{figure} To obtain the Penrose diagram for the spherical universe we write its metric as \begin{align*} ds^2 & = - dt^2 + a^2(t) \left[ d \psi^2 + \sin^2 \psi \left(d\theta^2 + \sin^2\theta d\varphi^2\right) \right] \\ & = a^2(t) \left[ - d\tau^2 + d \psi^2 + \sin^2 \psi \left(d\theta^2 + \sin^2\theta d\varphi^2\right) \right] \\ & = a^2(t) \left[ - d\tau^2 + d \psi^2 \right] + r^2 \left(d\theta^2 + \sin^2\theta d\varphi^2\right), \end{align*} where $\psi \in [0,\pi]$, \[ \tau = \int_{0}^t \frac{dt}{a(t)} \] and \[ r = a(t) \sin \psi. \] Since \[ \int_{0}^{t_\text{max}} \frac{dt}{a(t)} = 2 \int_0^{a_\text{max}} \frac{da}{a\dot{a}} = 2 \int_0^{2 \alpha} \frac{da}{a\sqrt{\frac{2 \alpha}{a} - 1}} = 2\pi, \] we see that the quotient metric is conformal to the rectangle $(0,2\pi) \times [0, \pi]$ of the Minkowski $2$-dimensional spacetime, and so the Penrose diagram is as depicted in Figure~\ref{Pen_spherical}. Note that there are two lines where $r=0$, corresponding to two antipodal points of the $3$-sphere. A light ray emitted from one of these points at $t=0$ has just enough time to circle once around the universe and return at $t=t_\text{max}$ (dashed line in the diagram). 
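The value $2\pi$ obtained above for the total conformal time of the spherical universe can be confirmed numerically for any $\alpha$; the following Python sketch (ours) uses a plain midpoint rule, which still converges in the presence of the integrable inverse-square-root singularities at the endpoints:

```python
import math

def conformal_lifetime(alpha, n=300_000):
    # midpoint rule for 2 * integral_0^{2 alpha} da / (a * sqrt(2 alpha / a - 1));
    # the integrand blows up like an inverse square root at a = 0 and a = 2 alpha,
    # but both singularities are integrable, so the midpoint rule converges (slowly)
    a_max = 2.0 * alpha
    h = a_max / n
    total = 0.0
    for i in range(n):
        a = (i + 0.5) * h
        total += h / (a * math.sqrt(2.0 * alpha / a - 1.0))
    return 2.0 * total

# the answer is 2 * pi independently of alpha (alpha scales out of the integral)
for alpha in (0.5, 1.0, 3.0):
    assert abs(conformal_lifetime(alpha) - 2.0 * math.pi) < 0.02
```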
Note also that the Big Bang and the Big Crunch are spacelike boundaries. \begin{figure}[h!] \begin{center} \psfrag{r=0}{$r=0$} \psfrag{Big Bang}{Big Bang} \psfrag{Big Crunch}{Big Crunch} \epsfxsize=.5\textwidth \leavevmode \epsfbox{Pen_spherical.eps} \end{center} \caption{Penrose diagram for the spherical universe.} \label{Pen_spherical} \end{figure} To obtain the Penrose diagram for the flat universe we write its metric as \begin{align*} ds^2 & = - dt^2 + a^2(t) \left[ d \rho^2 + \rho^2 \left(d\theta^2 + \sin^2\theta d\varphi^2\right) \right] \\ & = a^2(t) \left[ - d\tau^2 + d \rho^2 + \rho^2 \left(d\theta^2 + \sin^2\theta d\varphi^2\right) \right] \\ & = a^2(t) \left[ - d\tau^2 + d \rho^2 \right] + r^2 \left(d\theta^2 + \sin^2\theta d\varphi^2\right), \end{align*} where $\rho \in [0,+\infty)$ and $r = a(t) \rho$. Since \[ \int_{0}^{+\infty} \frac{dt}{a(t)} = \int_0^{+\infty} \frac{da}{a\dot{a}} = \int_0^{+ \infty} \frac{da}{a\sqrt{\frac{2 \alpha}{a}}} = + \infty, \] we see that the quotient metric is conformal to the region $(0,+\infty) \times [0,+\infty)$ of the Minkowski $2$-dimensional spacetime, and so the Penrose diagram is as depicted in Figure~\ref{Pen_flat}. Note that the Big Bang is a spacelike boundary. \begin{figure}[h!] \begin{center} \psfrag{r=0}{$r=0$} \psfrag{Big Bang}{Big Bang} \psfrag{i+}{$i^+$} \psfrag{i0}{$i^0$} \psfrag{I+}{$\mathscr{I^+}$} \epsfxsize=.6\textwidth \leavevmode \epsfbox{Pen_flat.eps} \end{center} \caption{Penrose diagram for the flat and hyperbolic universes.} \label{Pen_flat} \end{figure} The Penrose diagram for the hyperbolic universe turns out to be the same as for the flat universe. 
To see this we write its metric as \begin{align*} ds^2 & = - dt^2 + a^2(t) \left[ d \psi^2 + \sinh^2\psi \left(d\theta^2 + \sin^2\theta d\varphi^2\right) \right] \\ & = a^2(t) \left[ - d\tau^2 + d \psi^2 + \sinh^2\psi \left(d\theta^2 + \sin^2\theta d\varphi^2\right) \right] \\ & = a^2(t) \left[ - d\tau^2 + d \psi^2 \right] + r^2 \left(d\theta^2 + \sin^2\theta d\varphi^2\right), \end{align*} where $\psi \in [0,+\infty)$ and $r = a(t) \sinh \psi$, and note that \[ \int_{0}^{+\infty} \frac{dt}{a(t)} = \int_0^{+\infty} \frac{da}{a\dot{a}} = \int_0^{+ \infty} \frac{da}{a\sqrt{\frac{2 \alpha}{a} + 1}} = + \infty. \] \subsection{Universes with matter and $\Lambda>0$} If $\alpha>0$ and $\Lambda>0$ we can choose units such that $\Lambda=3$. The first Friedmann equation then becomes \[ \dot{a}^2 - \frac{2\alpha}a - a^2 = - k. \] In this case all three values $k=1$, $k=0$ and $k=-1$ are possible. As before, we analyze the graph of the effective potential $V(a)$ (Figure~\ref{Lpositive_graph}). The hyperbolic and flat universes behave qualitatively as in the case $\Lambda=0$, although $\dot{a}(t)$ is now unbounded as $t \to + \infty$, instead of approaching some constant. The spherical universe has a richer spectrum of possible behaviours, depending on $\alpha$, represented in Figure~\ref{Lpositive_graph} by drawing the line of constant energy $-k=-1$ at three different heights. The higher line (corresponding to $\alpha > \frac{\sqrt{3}}{9}$) yields a behaviour similar to that of the hyperbolic and flat universes. The intermediate line (corresponding to $\alpha=\frac{\sqrt{3}}{9}$) gives rise to an unstable equilibrium point $a = \frac{\sqrt{3}}{3}$, where the attraction force of the matter is balanced by the repulsion force of the cosmological constant; it corresponds to the so-called {\bf Einstein universe}, the first cosmological model ever proposed. 
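The critical values quoted above can be checked directly. In our units ($\Lambda=3$, $k=1$) a static solution of the first Friedmann equation requires $f(a) \equiv 2\alpha/a + a^2 = 1$ together with $f'(a)=0$; the short Python check below (ours) confirms that this happens precisely for $\alpha = \frac{\sqrt{3}}{9}$, $a = \frac{\sqrt{3}}{3}$:

```python
import math

# With Lambda = 3 and k = 1 the first Friedmann equation reads
#   (da/dt)^2 = f(a) - 1,   where   f(a) = 2 * alpha / a + a^2.
alpha = math.sqrt(3.0) / 9.0
a_eq = alpha ** (1.0 / 3.0)   # critical point of f: f'(a) = -2 alpha / a^2 + 2 a = 0

# the equilibrium radius is sqrt(3) / 3 ...
assert abs(a_eq - math.sqrt(3.0) / 3.0) < 1e-12
# ... and there the right-hand side vanishes, so a(t) = a_eq is a static solution
assert abs(2.0 * alpha / a_eq + a_eq ** 2 - 1.0) < 1e-12
# f has a strict minimum at a_eq (f'' > 0), so the effective potential V = -f/2
# has a maximum there: the equilibrium is unstable, as stated in the text
assert 4.0 * alpha / a_eq ** 3 + 2.0 > 0.0
```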
The intermediate line also yields two solutions asymptotic to the Einstein universe, one beginning at a Big Bang and the other expanding forever. Finally, the lower line (corresponding to $\alpha < \frac{\sqrt{3}}{9}$) yields two different types of behaviour (depending on the initial conditions): either similar to the spherical model with $\Lambda=0$, or to the de Sitter universe. \begin{figure}[h!] \begin{center} \psfrag{a}{$a$} \psfrag{V(a)}{$V(a)$} \psfrag{k=1}{$k=1$} \psfrag{k=0}{$k=0$} \psfrag{k=-1}{$k=-1$} \epsfxsize=.6\textwidth \leavevmode \epsfbox{Lpositive_graph.eps} \end{center} \caption{Effective potential for FLRW models with $\Lambda>0$.} \label{Lpositive_graph} \end{figure} It is currently believed that the best model for our physical Universe is the flat universe with $\Lambda > 0$. If we write the first Friedmann equation as \[ H^2 \equiv \frac{\dot{a}^2}{a^2} = \frac{8 \pi}{3} \rho + \frac{\Lambda}{3} \] then the terms on the right-hand side are in the proportion $2 : 5$ at the present time. \subsection{Universes with matter and $\Lambda<0$} If $\alpha>0$ and $\Lambda<0$ we can choose units such that $\Lambda=-3$. The first Friedmann equation then becomes \[ \dot{a}^2 - \frac{2\alpha}a + a^2 = - k. \] In this case all three values $k=1$, $k=0$ and $k=-1$ are possible. As before, we analyze the graph of the effective potential $V(a)$ (Figure~\ref{Lnegative_graph}). The qualitative behaviour of the hyperbolic, flat and spherical universes is the same as that of the spherical universe with $\Lambda=0$, namely starting at a Big Bang and ending at a Big Crunch. \begin{figure}[h!] 
\begin{center} \psfrag{a}{$a$} \psfrag{V(a)}{$V(a)$} \psfrag{k=1}{$k=1$} \psfrag{k=0}{$k=0$} \psfrag{k=-1}{$k=-1$} \epsfxsize=.6\textwidth \leavevmode \epsfbox{Lnegative_graph.eps} \end{center} \caption{Effective potential for FLRW models with $\Lambda<0$.} \label{Lnegative_graph} \end{figure} \section{Matching} \label{sec2.5} Let $(M_1,g_1)$ and $(M_2,g_2)$ be solutions of the Einstein field equations containing open sets $U_1$ and $U_2$ whose boundaries $S_1$ and $S_2$ are {\bf timelike hypersurfaces}, that is, hypersurfaces whose induced metric is Lorentzian (or, equivalently, whose normal vector is spacelike). If $S_1$ is diffeomorphic to $S_2$ then we can identify them to obtain a new manifold $M$ by gluing $U_1$ to $U_2$ along $S_1 \cong S_2$ (Figure~\ref{matching}). \begin{figure}[h!] \begin{center} \psfrag{g1}{$g_1$} \psfrag{g2}{$g_2$} \psfrag{n}{$n$} \psfrag{S1=S2}{$S_1 \cong S_2$} \epsfxsize=.6\textwidth \leavevmode \epsfbox{matching.eps} \end{center} \caption{Matching two spacetimes.} \label{matching} \end{figure} Let $n$ be the unit normal vector to $S_1$ pointing out of $U_1$, which we identify with the unit normal vector to $S_2$ pointing into $U_2$. If $(x^1, x^2, x^3)$ are local coordinates on $S \equiv S_1 \cong S_2$, we can construct a system of local coordinates $(t,x^1, x^2, x^3)$ in a neighbourhood of $S$ by moving a distance $t$ along the geodesics with initial condition $n$. Note that $U_1$, $S$ and $U_2$ correspond to $t<0$, $t=0$ and $t>0$ in these coordinates. 
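This geodesic-normal construction can be illustrated in the simplest Riemannian setting: for a curve $S$ in the Euclidean plane the geodesics are straight lines, and (as the Gauss Lemma below guarantees) the curves $S_t$ obtained by moving a distance $t$ along the unit normals remain orthogonal to them. A short Python sketch of ours, taking $S$ to be a parabola:

```python
import math

def point(s, t):
    # move a distance t from (s, s^2) along the unit normal to the parabola y = x^2
    norm = math.hypot(2.0 * s, 1.0)
    n = (-2.0 * s / norm, 1.0 / norm)
    return (s + t * n[0], s * s + t * n[1])

def dot_of_tangents(s, t, h=1e-6):
    # finite-difference tangent vectors along S_t (s-direction)
    # and along the normal geodesics (t-direction)
    xs1, xs2 = point(s - h, t), point(s + h, t)
    xt1, xt2 = point(s, t - h), point(s, t + h)
    ds = ((xs2[0] - xs1[0]) / (2 * h), (xs2[1] - xs1[1]) / (2 * h))
    dt = ((xt2[0] - xt1[0]) / (2 * h), (xt2[1] - xt1[1]) / (2 * h))
    return ds[0] * dt[0] + ds[1] * dt[1]

# the two directions stay orthogonal for every s and t
for s in (-1.0, 0.3, 2.0):
    for t in (0.0, 0.1, 0.5):
        assert abs(dot_of_tangents(s, t)) < 1e-6
```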
Since $\frac{\partial}{\partial t}$ is the unit tangent vector to the geodesics, we have \begin{align*} & \frac{\partial}{\partial t} \left\langle \frac{\partial}{\partial t}, \frac{\partial}{\partial x^i} \right\rangle = \left\langle \nabla_{\frac{\partial}{\partial t}} \frac{\partial}{\partial t}, \frac{\partial}{\partial x^i} \right\rangle + \left\langle \frac{\partial}{\partial t}, \nabla_{\frac{\partial}{\partial t}} \frac{\partial}{\partial x^i} \right\rangle \\ & = \left\langle \frac{\partial}{\partial t}, \nabla_{\frac{\partial}{\partial x^i}} \frac{\partial}{\partial t} \right\rangle = \frac{\partial}{\partial x^i} \left( \frac12 \left\langle \frac{\partial}{\partial t}, \frac{\partial}{\partial t} \right\rangle \right) = 0 \end{align*} ($i=1,2,3$), where we used \[ \nabla_{\frac{\partial}{\partial t}} \frac{\partial}{\partial x^i} - \nabla_{\frac{\partial}{\partial x^i}} \frac{\partial}{\partial t} = \left[ \frac{\partial}{\partial t}, \frac{\partial}{\partial x^i} \right] = 0. \] Since for $t=0$ we have \[ \left\langle \frac{\partial}{\partial t}, \frac{\partial}{\partial x^i} \right\rangle = \left\langle n, \frac{\partial}{\partial x^i} \right\rangle = 0, \] we see that $\frac{\partial}{\partial t}$ remains orthogonal to the surfaces of constant $t$. This result will be used repeatedly. \begin{Lemma} ({\bf Gauss Lemma I}) Let $(M,g)$ be a Riemannian or a Lorentzian manifold, and $S \subset M$ a hypersurface whose normal vector field $n$ satisfies $g(n,n) \neq 0$. The hypersurfaces $S_t$ obtained from $S$ by moving a distance $t$ along the geodesics orthogonal to $S$ remain orthogonal to the geodesics. \end{Lemma} The same ideas can be used to prove a closely related result. \begin{Lemma} ({\bf Gauss Lemma II}) Let $(M,g)$ be a Riemannian or a Lorentzian manifold and $p \in M$. The hypersurfaces $S_t$ obtained from $p$ by moving a distance $t$ along the geodesics through $p$ remain orthogonal to the geodesics. 
\end{Lemma} In this coordinate system the metrics $g_1$ and $g_2$ are given on $t \leq 0$ and $t \geq 0$, respectively, by \[ g_A = dt^2 + h^A_{ij}(t,x) dx^i dx^j \] ($A=1,2$). Therefore we can define a continuous metric $g$ on $M$ if \[ h^1_{ij}(0,x) dx^i dx^j = h^2_{ij}(0,x) dx^i dx^j, \] that is, if \[ {g_1}_{|_{TS}} = {g_2}_{|_{TS}}. \] This also guarantees continuity of all tangential derivatives of the metric, but not of the normal derivatives. In order to have a $C^1$ metric we must have \[ \frac{\partial h^1_{ij}}{\partial t}(0,x) dx^i dx^j = \frac{\partial h^2_{ij}}{\partial t}(0,x) dx^i dx^j, \] that is, \[ {\mathcal{L}_n g_1}_{|_{TS}} = {\mathcal{L}_n g_2}_{|_{TS}}. \] Note that in this case the curvature tensor (hence the energy-momentum tensor) is at most discontinuous across $S$. More importantly, as shown in Exercise~\ref{continuity}, the components $T_{tt}$ and $T_{ti}$ of the energy-momentum tensor are continuous across $S$ (that is, the flow $T_{\mu\nu} n^\mu$ of energy and momentum across $S$ is equal on both sides), implying that the energy-momentum tensor satisfies the integral version of the conservation equation $\nabla^\mu T_{\mu\nu} = 0$. Therefore we can consider $(M,g)$ a solution of the Einstein equations. Recall that \[ K_A = \frac12 {\mathcal{L}_n g_A}_{|_{TS_A}} \] is known as the {\bf extrinsic curvature}, or {\bf second fundamental form}, of $S_A$. We can summarize the discussion above in the following statement. \begin{Prop} Two solutions $(M_1,g_1)$ and $(M_2,g_2)$ of the Einstein field equations can be matched along diffeomorphic timelike boundaries $S_1$ and $S_2$ if and only if the induced metrics and second fundamental forms coincide: \[ g_1 = g_2 \qquad \text{ and } \qquad K_1 = K_2. \] \end{Prop} \section{Oppenheimer-Snyder collapse} \label{sec2.6} We can use the matching technique to construct a solution of the Einstein field equations which describes a spherical cloud of dust collapsing to a black hole. 
This is a physically plausible model for a black hole, as opposed to the eternal black hole. Let us take $(M_1,g_1)$ to be a flat collapsing FLRW universe: \[ g_1 = - d\tau^2 + a^2(\tau) \left[ d \sigma^2 + \sigma^2 \left(d\theta^2 + \sin^2\theta d\varphi^2\right) \right]. \] We choose $S_1$ to be the hypersurface $\sigma=\sigma_0$, with normal vector \[ n = \frac1a \frac{\partial}{\partial \sigma}. \] The induced metric then is \[ {g_1}_{|_{TS_1}} = - d\tau^2 + a^2(\tau) {\sigma_0}^2 \left(d\theta^2 + \sin^2\theta d\varphi^2\right), \] and the second fundamental form \[ K_1 = a(\tau) \sigma_0 \left(d\theta^2 + \sin^2\theta d\varphi^2\right). \] Here we used {\bf Cartan's magic formula}: if $\omega$ is a differential form and $X$ is a vector field then \[ \mathcal{L}_X \omega = X \lrcorner \, d\omega + d (X \lrcorner \, \omega). \] Thus, for example, \[ \mathcal{L}_n d\tau = n \lrcorner \, d^2 \tau + d (n \lrcorner \, d\tau) = 0. \] Note that the function $a(\tau)$ is constrained by the first Friedmann equation: \begin{equation} \label{Friedmann} \dot{a}^2 = \frac{2\alpha}{a} \end{equation} (we assume $\Lambda=0$). We now take $(M_2,g_2)$ to be the Schwarzschild solution, \[ g_2 = - V dt^2 + V^{-1} dr^2 + r^2 \left(d\theta^2 + \sin^2\theta d\varphi^2\right), \] where \[ V = 1 - \frac{2M}r, \] and choose $S_2$ to be a spherically symmetric timelike hypersurface given by the parameterization \[ \begin{cases} t = t(\tau) \\ r = r(\tau) \end{cases}. \] The exact form of the functions $t(\tau)$ and $r(\tau)$ will be fixed by the matching conditions; for the time being they are constrained only by the condition that $\tau$ is the proper time along $S_2$: \[ - V \dot{t}^2 + V^{-1} \dot{r}^2 = -1. \] The induced metric is then \[ {g_2}_{|_{TS_2}} = - d\tau^2 + r^2(\tau) \left(d\theta^2 + \sin^2\theta d\varphi^2\right), \] and so the two induced metrics coincide if and only if \begin{equation} \label{matchingcond1} r(\tau) = a(\tau) \sigma_0. 
\end{equation} To simplify the calculation of the second fundamental form, we note that since $S_1$ is ruled by timelike geodesics, so is $S_2$, because the induced metrics and extrinsic curvatures are the same (therefore so are the Christoffel symbols). Therefore $t(\tau)$ and $r(\tau)$ must be a solution of the radial geodesic equations, which are equivalent to \begin{equation} \label{matchingcond2} \begin{cases} V \dot{t} = E \\ - V \dot{t}^2 + V^{-1} \dot{r}^2 = -1 \end{cases} \Leftrightarrow \begin{cases} V \dot{t} = E \\ \displaystyle \dot{r}^2 = E^2 - 1 + \frac{2M}{r} \end{cases} \end{equation} (where $E>0$ is a constant). Equations~\eqref{Friedmann}, \eqref{matchingcond1} and \eqref{matchingcond2} are compatible if and only if \[ \begin{cases} E = 1 \\ M = \alpha {\sigma_0}^3 \end{cases}, \] that is, if and only if $S_2$ represents a spherical shell dropped from infinity with zero velocity and the mass parameter of the Schwarzschild spacetime is related to the density $\rho(\tau)$ of the collapsing dust by \[ M = \frac{4 \pi}{3} r^3(\tau) \rho(\tau). \] To compute the second fundamental form of $S_2$ we can then consider a family of free-falling spherical shells which includes $S_2$. If $s$ is the parameter indexing the shells (with $S_2$ corresponding to, say, $s=s_0$) then by the Gauss Lemma we can write the Schwarzschild metric in the form \[ g_2 = - d \tau^2 + A^2(\tau,s) ds^2 + r^2(\tau,s) \left(d\theta^2 + \sin^2\theta d\varphi^2\right) \] (consider, for instance, the change of coordinates determined by the solution $t=t(\tau,s)$, $r=r(\tau,s)$ of the radial geodesic equations with initial conditions determined by $t(0,s)=0$, $r(0,s)=s$, $\dot{r}(0,s)=0$). The unit normal vector field $n$ to the hypersurfaces of constant $s$ is then \[ n = \frac1A \frac{\partial}{\partial s}, \] and so we have \[ K_2 = \frac12 {\mathcal{L}_n g_2}_{|_{TS_2}} = r (n \cdot r) \left(d\theta^2 + \sin^2\theta d\varphi^2\right). 
\] On the other hand, in Schwarzschild coordinates \[ n = V^{-1} \dot{r} \frac{\partial}{\partial t} + V \dot{t} \frac{\partial}{\partial r}, \] since $n$ must be unit and orthogonal to \[ \dot{t} \frac{\partial}{\partial t} + \dot{r} \frac{\partial}{\partial r}. \] Therefore we have \[ K_2 = r V \dot{t} \left(d\theta^2 + \sin^2\theta d\varphi^2\right) = r E \left(d\theta^2 + \sin^2\theta d\varphi^2\right), \] or, using $E=1$, \[ K_2 = r(\tau) \left(d\theta^2 + \sin^2\theta d\varphi^2\right). \] In other words, $K_1 = K_2$ follows from the previous conditions, and so we indeed have a solution of the Einstein equations. \begin{figure}[h!] \begin{center} \psfrag{r=0}{$r=0$} \psfrag{Big Crunch}{Big Crunch} \psfrag{i+}{$i^+$} \psfrag{i0}{$i^0$} \psfrag{i-}{$i^-$} \psfrag{S1}{$S_1$} \psfrag{S2}{$S_2$} \psfrag{I+}{$\mathscr{I^+}$} \psfrag{I-}{$\mathscr{I^-}$} \epsfxsize=.9\textwidth \leavevmode \epsfbox{Pen_OS1.eps} \end{center} \caption{Matching hypersurfaces in the collapsing flat FLRW universe and the Schwarzschild solution.} \label{Pen_OS1} \end{figure} To construct the Penrose diagram for this solution we represent $S_1$ and $S_2$ in the Penrose diagrams of the collapsing flat FLRW universe (obtained by reversing the time direction in the expanding flat FLRW universe) and the Schwarzschild solution (Figure~\ref{Pen_OS1}). Identifying these hypersurfaces results in the Penrose diagram depicted in Figure~\ref{Pen_OS2}. \begin{figure}[h!] 
\begin{center} \psfrag{r=0}{$r=0$} \psfrag{i+}{$i^+$} \psfrag{i0}{$i^0$} \psfrag{i-}{$i^-$} \psfrag{S}{$S$} \psfrag{I+}{$\mathscr{I^+}$} \psfrag{I-}{$\mathscr{I^-}$} \epsfxsize=.6\textwidth \leavevmode \epsfbox{Pen_OS2.eps} \end{center} \caption{Penrose diagram for the Oppenheimer-Snyder collapse.} \label{Pen_OS2} \end{figure} \section{Exercises} \label{sec2.7} \begin{enumerate} \item In this exercise we will solve the vacuum Einstein equations (without cosmological constant) for the spherically symmetric Lorentzian metric given by \[ \hspace{2cm} ds^2 = -(A(t,r))^2 dt^2 + (B(t,r))^2 dr^2 + r^2 \left( d\theta^2 + \sin^2 \theta d\varphi^2 \right), \] where $A$ and $B$ are positive smooth functions. \begin{enumerate} \item Use Cartan's first structure equations, \[ \begin{cases} \omega_{\mu\nu}=-\omega_{\nu\mu}\\ d\omega^\mu + \omega^\mu_{\,\,\,\,\nu} \wedge \omega^\nu = 0 \end{cases}, \] to show that the nonvanishing connection forms for the orthonormal frame dual to \[ \hspace{2cm} \omega^0 = A dt, \quad \quad \omega^r = B dr, \quad \quad \omega^\theta = r d\theta, \quad \quad \omega^\varphi = r \sin \theta d\varphi \] are (using the notation $\dot{}=\frac{\partial}{\partial t}$ and ${}'=\frac{\partial}{\partial r}$) \begin{align*} & \omega^0_{\,\,\,\,r} = \omega^r_{\,\,\,\,0} = \frac{A'}{B} dt + \frac{\dot{B}}{A}dr \, ;\\ & \omega^\theta_{\,\,\,\,r} = - \omega^r_{\,\,\,\,\theta} = \frac1B d\theta \, ;\\ & \omega^\varphi_{\,\,\,\,r} = - \omega^r_{\,\,\,\,\varphi} = \frac{\sin\theta}{B} d\varphi \, ;\\ & \omega^\varphi_{\,\,\,\,\theta} = - \omega^\theta_{\,\,\,\,\varphi} = \cos\theta d\varphi \, . 
\end{align*} \item Use Cartan's second structure equations, \[ \Omega^\mu_{\,\,\,\,\nu} = d\omega^\mu_{\,\,\,\,\nu} + \omega^\mu_{\,\,\,\,\alpha} \wedge \omega^\alpha_{\,\,\,\,\nu} \, , \] to show that the curvature forms on this frame are \begin{align*} \hspace{2cm} & \Omega^0_{\,\,\,\,r} = \Omega^r_{\,\,\,\,0} = \left(\frac{A''B-A'B'}{AB^3}+\frac{\dot{A}\dot{B}-A\ddot{B}}{A^3B} \right)\, \omega^r \wedge \omega^0 \, ; \\ & \Omega^0_{\,\,\,\,\theta} = \Omega^\theta_{\,\,\,\,0} = \frac{A'}{rAB^2} \, \omega^\theta \wedge \omega^0 + \frac{\dot{B}}{rAB^2} \, \omega^\theta \wedge \omega^r \, ; \\ & \Omega^0_{\,\,\,\,\varphi} = \Omega^\varphi_{\,\,\,\,0} = \frac{A'}{rAB^2} \, \omega^\varphi \wedge \omega^0 + \frac{\dot{B}}{rAB^2} \, \omega^\varphi \wedge \omega^r \, ; \\ & \Omega^\theta_{\,\,\,\,r} = -\Omega^r_{\,\,\,\,\theta} = \frac{B'}{rB^3} \, \omega^\theta \wedge \omega^r + \frac{\dot{B}}{rAB^2} \, \omega^\theta \wedge \omega^0 \, ; \\ & \Omega^\varphi_{\,\,\,\,r} = -\Omega^r_{\,\,\,\,\varphi} = \frac{B'}{rB^3} \, \omega^\varphi \wedge \omega^r + \frac{\dot{B}}{rAB^2} \, \omega^\varphi \wedge \omega^0 \, ; \\ & \Omega^\varphi_{\,\,\,\,\theta} = -\Omega^\theta_{\,\,\,\,\varphi} = \frac{B^2-1}{r^2B^2} \, \omega^\varphi \wedge \omega^\theta \, . 
\end{align*} \item Using \[ \Omega^\mu_{\,\,\,\,\nu} = \sum_{\alpha<\beta} R_{\alpha\beta\,\,\,\,\nu}^{\,\,\,\,\,\,\,\,\,\mu} \omega^\alpha \wedge \omega^\beta \, , \] determine the components $R_{\alpha\beta\,\,\,\,\nu}^{\,\,\,\,\,\,\,\,\,\mu}$ of the curvature tensor in this orthonormal frame, and show that the nonvanishing components of the Ricci tensor in this frame are \begin{align*} & R_{00} = \frac{A''B-A'B'}{AB^3}+\frac{\dot{A}\dot{B}-A\ddot{B}}{A^3B} + \frac{2A'}{rAB^2} \, ; \\ & R_{0r} = R_{r0}= \frac{2\dot{B}}{rAB^2} \, ; \\ & R_{rr} = \frac{A'B'-A''B}{AB^3}+\frac{A\ddot{B}-\dot{A}\dot{B}}{A^3B} + \frac{2B'}{rB^3} \, ; \\ & R_{\theta\theta} = R_{\varphi\varphi} = - \frac{A'}{rAB^2} + \frac{B'}{rB^3} + \frac{B^2-1}{r^2B^2} \, . \end{align*} Conclude that the nonvanishing components of the Einstein tensor in this frame are \begin{align*} \hspace{2cm} & G_{00} = \frac{2B'}{rB^3} + \frac{B^2-1}{r^2B^2} \, ; \\ & G_{0r} = G_{r0}= \frac{2\dot{B}}{rAB^2} \, ; \\ & G_{rr} = \frac{2A'}{rAB^2} - \frac{B^2-1}{r^2B^2} \, ; \\ & G_{\theta\theta} = G_{\varphi\varphi} = \frac{A''B-A'B'}{AB^3}+\frac{\dot{A}\dot{B}-A\ddot{B}}{A^3B} + \frac{A'}{rAB^2} - \frac{B'}{rB^3} \, . \end{align*} \item Show that if we write \[ B(t,r) = \left(1 - \frac{2m(t,r)}r\right)^{-\frac12} \] for some smooth function $m$ then \[ G_{00} = \frac{2m'}{r^2}. \] Conclude that the Einstein equations $G_{00}=G_{0r}=0$ are equivalent to \[ B = \left(1 - \frac{2M}r\right)^{-\frac12}, \] where $M \in \mathbb{R}$ is an integration constant. \item Show that the Einstein equation $G_{00}+G_{rr}=0$ is equivalent to $A = \frac{\alpha(t)}B$ for some positive smooth function $\alpha$. \item Check that if $A$ and $B$ are as above then the remaining Einstein equations $G_{\theta\theta} = G_{\varphi\varphi} = 0$ are automatically satisfied. 
\item Argue that it is always possible to rescale the time coordinate $t$ so that the metric is written \[ \hspace{2cm} ds^2 = -\left(1 - \frac{2M}r\right) dt^2 + \left(1 - \frac{2M}r\right)^{-1} dr^2 + r^2 \left( d\theta^2 + \sin^2 \theta d\varphi^2 \right) \] (the statement that any spherically symmetric solution of the vacuum Einstein equations without cosmological constant is of this form is known as {\bf Birkhoff's theorem}). \end{enumerate} \item Show that the Riemannian manifold obtained by gluing the hypersurfaces $t=0$ of the two exterior regions in the maximally extended Schwarzschild solution along the horizon $r=2M$ is isometric to the {\bf Flamm paraboloid}, that is, the hypersurface in $\mathbb{R}^4$ with equation \[ \sqrt{x^2+y^2+z^2} = 2M + \frac{w^2}{8M} \] (Figure~\ref{Flamm}). \begin{figure}[h!] \begin{center} \epsfxsize=.5\textwidth \leavevmode \epsfbox{Flamm.eps} \end{center} \caption{Two-dimensional analogue of the Flamm paraboloid.} \label{Flamm} \end{figure} \item Recall that the nonvanishing components of the Einstein tensor of the spherically symmetric Lorentzian metric \[ \hspace{2cm} ds^2 = -(A(t,r))^2 dt^2 + (B(t,r))^2 dr^2 + r^2 \left( d\theta^2 + \sin^2 \theta d\varphi^2 \right) \] in the orthonormal frame dual to \[ \hspace{2cm} \omega^0 = A dt, \quad \quad \omega^r = B dr, \quad \quad \omega^\theta = r d\theta, \quad \quad \omega^\varphi = r \sin \theta d\varphi, \] are given by (using the notation $\dot{}=\frac{\partial}{\partial t}$ and ${}'=\frac{\partial}{\partial r}$) \begin{align*} \hspace{2cm} & G_{00} = \frac{2B'}{rB^3} + \frac{B^2-1}{r^2B^2} = \frac{2m'}{r^2}; \\ & G_{0r} = G_{r0}= \frac{2\dot{B}}{rAB^2}; \\ & G_{rr} = \frac{2A'}{rAB^2} - \frac{B^2-1}{r^2B^2}; \\ & G_{\theta\theta} = G_{\varphi\varphi} = \frac{A''B-A'B'}{AB^3}+\frac{\dot{A}\dot{B}-A\ddot{B}}{A^3B} + \frac{A'}{rAB^2} - \frac{B'}{rB^3}, \end{align*} where \[ B(t,r) = \left(1 - \frac{2m(t,r)}r\right)^{-\frac12}. 
\] \begin{enumerate} \item Assuming \begin{itemize} \item $G_{0r}=0$ (so that $B$, and hence $m$, do not depend on $t$); \item $G_{00}+G_{rr}=0$ (so that $A = \frac{\alpha(t)}B$ for some positive smooth function $\alpha(t)$); \item $\alpha(t)=1$ (which can always be achieved by rescaling $t$), \end{itemize} show that \[ G_{\theta\theta} = G_{\varphi\varphi} = \frac12\left(A^2\right)''+\frac1r\left(A^2\right)'. \] \item Prove that the general spherically symmetric solution of the vacuum Einstein field equations with a cosmological constant $\Lambda$ is the {\bf Kottler metric} \begin{align*} \hspace{2cm} g = & -\left(1 - \frac{2M}r - \frac{\Lambda}3 r^2 \right) dt^2 + \left(1 - \frac{2M}r - \frac{\Lambda}3 r^2 \right)^{-1} dr^2 \\ & + r^2 \left( d\theta^2 + \sin^2 \theta d\varphi^2 \right). \end{align*} \item Obtain the Penrose diagram for the maximal extension of the Kottler solution with $\Lambda>0$ and $0<M<\frac{1}{3\sqrt{\Lambda}}$. \item Consider now the spherically symmetric electromagnetic field \[ F = E(t,r) \, \omega^r \wedge \omega^0. \] Show that this field satisfies the vacuum Maxwell equations \[ dF = d\star F=0 \] (where $\star$ is the Hodge star) if and only if \[ E(t,r) = \frac{e}{r^2} \] for some constant $e \in \mathbb{R}$ (the electric charge in units for which $4 \pi \varepsilon_0=1$). \item As we shall see in Chapter~\ref{chapter6}, this electromagnetic field corresponds to the energy-momentum tensor \[ \hspace{2cm} T = \frac{E^2}{8\pi} \left( \omega^0 \otimes \omega^0 - \omega^r \otimes \omega^r + \omega^\theta \otimes \omega^\theta + \omega^\varphi \otimes \omega^\varphi \right). 
\] Prove that the general spherically symmetric solution of the Einstein field equations with an electromagnetic field of this kind is the {\bf Reissner-Nordstr\"om metric} \begin{align*} \hspace{2cm} g = & -\left(1 - \frac{2M}r + \frac{e^2}{r^2} \right) dt^2 + \left(1 - \frac{2M}r + \frac{e^2}{r^2} \right)^{-1} dr^2 \\ & + r^2 \left( d\theta^2 + \sin^2 \theta d\varphi^2 \right). \end{align*} \item Obtain the Penrose diagram for the maximal extension of the Reissner-Nordstr\"om solution with $M>0$ and $0 < e^2 < M^2$. \end{enumerate} \item Consider the spherically symmetric Lorentzian metric \[ \hspace{2cm} ds^2 = - dt^2 + a^2(t) \left(\frac1{1-kr^2}dr^2 + r^2 \left( d\theta^2 + \sin^2 \theta d\varphi^2 \right) \right), \] where $a$ is a positive smooth function. \begin{enumerate} \item Use Cartan's first structure equations, \[ \begin{cases} \omega_{\mu\nu}=-\omega_{\nu\mu}\\ d\omega^\mu + \omega^\mu_{\,\,\,\,\nu} \wedge \omega^\nu = 0 \end{cases}, \] to show that the nonvanishing connection forms for the orthonormal frame dual to \begin{align*} & \omega^0 = dt, \quad \quad \quad \quad \, \omega^r = a(t)\left(1-kr^2\right)^{-\frac12} dr, \\ & \omega^\theta = a(t) r d\theta, \quad \quad \omega^\varphi = a(t) r \sin \theta d\varphi \end{align*} are \begin{align*} & \omega^0_{\,\,\,\,r} = \omega^r_{\,\,\,\,0} = \dot{a} \left(1-kr^2\right)^{-\frac12} dr\, ;\\ & \omega^0_{\,\,\,\,\theta} = \omega^\theta_{\,\,\,\,0} = \dot{a} r d\theta\, ;\\ & \omega^0_{\,\,\,\,\varphi} = \omega^\varphi_{\,\,\,\,0} = \dot{a} r \sin \theta d\varphi\, ;\\ & \omega^\theta_{\,\,\,\,r} = - \omega^r_{\,\,\,\,\theta} = \left(1-kr^2\right)^\frac12 d\theta\, ;\\ & \omega^\varphi_{\,\,\,\,r} = - \omega^r_{\,\,\,\,\varphi} = \left(1-kr^2\right)^\frac12 \sin\theta d\varphi\, ;\\ & \omega^\varphi_{\,\,\,\,\theta} = - \omega^\theta_{\,\,\,\,\varphi} = \cos\theta d\varphi\, . 
\end{align*} \item Use Cartan's second structure equations, \[ \Omega^\mu_{\,\,\,\,\nu} = d\omega^\mu_{\,\,\,\,\nu} + \omega^\mu_{\,\,\,\,\alpha} \wedge \omega^\alpha_{\,\,\,\,\nu}\, , \] to show that the curvature forms on this frame are \begin{align*} & \Omega^0_{\,\,\,\,r} = \Omega^r_{\,\,\,\,0} = \frac{\ddot{a}}{a} \omega^0 \wedge \omega^r\, ; \\ & \Omega^0_{\,\,\,\,\theta} = \Omega^\theta_{\,\,\,\,0} = \frac{\ddot{a}}{a} \omega^0 \wedge \omega^\theta\, ; \\ & \Omega^0_{\,\,\,\,\varphi} = \Omega^\varphi_{\,\,\,\,0} = \frac{\ddot{a}}{a} \omega^0 \wedge \omega^\varphi\, ; \\ & \Omega^\theta_{\,\,\,\,r} = -\Omega^r_{\,\,\,\,\theta} = \left(\frac{k}{a^2}+\frac{\dot{a}^2}{a^2}\right) \omega^\theta \wedge \omega^r\, ; \\ & \Omega^\varphi_{\,\,\,\,r} = -\Omega^r_{\,\,\,\,\varphi} = \left(\frac{k}{a^2}+\frac{\dot{a}^2}{a^2}\right) \omega^\varphi \wedge \omega^r\, ; \\ & \Omega^\varphi_{\,\,\,\,\theta} = -\Omega^\theta_{\,\,\,\,\varphi} = \left(\frac{k}{a^2}+\frac{\dot{a}^2}{a^2}\right) \omega^\varphi \wedge \omega^\theta\, . \end{align*} \item Using \[ \Omega^\mu_{\,\,\,\,\nu} = \sum_{\alpha<\beta} R_{\alpha\beta\,\,\,\,\nu}^{\,\,\,\,\,\,\,\,\,\mu} \omega^\alpha \wedge \omega^\beta\, , \] determine the components $R_{\alpha\beta\,\,\,\,\nu}^{\,\,\,\,\,\,\,\,\,\mu}$ of the curvature tensor on this orthonormal frame, and show that the nonvanishing components of the Ricci tensor on this frame are \begin{align*} & R_{00} = -\frac{3\ddot{a}}{a}\, ; \\ & R_{rr} = R_{\theta\theta} = R_{\varphi\varphi} = \frac{\ddot{a}}{a} + \frac{2\dot{a}^2}{a^2} + \frac{2k}{a^2}\, . \end{align*} Conclude that the nonvanishing components of the Einstein tensor on this frame are \begin{align*} & G_{00} = \frac{3\dot{a}^2}{a^2} + \frac{3k}{a^2}\, ; \\ & G_{rr} = G_{\theta\theta} = G_{\varphi\varphi} = -\frac{2\ddot{a}}{a} - \frac{\dot{a}^2}{a^2} - \frac{k}{a^2}\, . 
\end{align*} \item Show that the Einstein equations with a cosmological constant $\Lambda$ for a comoving pressureless perfect fluid of nonnegative density $\rho$, $G+\Lambda g=8\pi \rho \, dt^2$, are equivalent to the system \[ \begin{cases} \displaystyle \frac{\dot{a}^2}{a^2} + \frac{k}{a^2} = \frac{8\pi\rho}{3} + \frac{\Lambda}3 \\ \\ \displaystyle \frac{2\ddot{a}}{a} + \frac{\dot{a}^2}{a^2} + \frac{k}{a^2} = \Lambda \end{cases}. \] Show that this system can be integrated to \[ \begin{cases} \displaystyle \frac{4\pi\rho}{3}a^3 = \alpha \\ \\ \displaystyle \frac12\dot{a}^2 - \frac{\alpha}{a} - \frac{\Lambda}6 a^2 = - \frac{k}{2} \end{cases}, \] where $\alpha$ is a nonnegative integration constant. \item Draw the Penrose diagram of the solutions with $\alpha > 0$, $\Lambda > 0$ and $k=0$ (currently believed to model our physical Universe). \end{enumerate} \item Compute the metrics of the following manifolds in the local coordinates indicated, and sketch the corresponding Penrose diagrams: \begin{enumerate} \item The region $T > \sqrt{X^2+Y^2+Z^2}$ of the $4$-dimensional Minkowski spacetime using the parameterization \[ \begin{cases} T = t \cosh \psi \\ X = t \sinh \psi \sin\theta \cos\varphi \\ Y = t \sinh \psi \sin\theta \sin\varphi \\ Z = t \sinh \psi \cos\theta \end{cases}. \] \item The hyperboloid $X^2+Y^2+Z^2+W^2=1+T^2$ in the $5$-dimensional Minkowski spacetime using the parameterization \[ \begin{cases} T = \sinh t \\ X = \cosh t \sin \psi \sin\theta \cos\varphi \\ Y = \cosh t \sin \psi \sin\theta \sin\varphi \\ Z = \cosh t \sin \psi \cos\theta \\ W = \cosh t \cos \psi \end{cases}. \] \item The region $W>1$ of the same hyperboloid using the parameterization \[ \begin{cases} T = \sinh t \cosh \psi \\ X = \sinh t \sinh \psi \sin\theta \cos\varphi \\ Y = \sinh t \sinh \psi \sin\theta \sin\varphi \\ Z = \sinh t \sinh \psi \cos\theta \\ W = \cosh t \end{cases}. 
\] \item The region $T>W$ of the same hyperboloid using the parameterization defined implicitly by \[ \begin{cases} T = W + e^t \\ X = e^t x \\ Y = e^t y \\ Z = e^t z \end{cases}. \] \item The hyperboloid $T^2+U^2=1+X^2+Y^2+Z^2$ in $\mathbb{R}^5$ with the pseudo-Riemannian metric \[ ds^2=-dT^2-dU^2+dX^2+dY^2+dZ^2 \] using the parameterization \[ \begin{cases} T = \cosh \psi \cos t \\ U = \cosh \psi \sin t \\ X = \sinh \psi \sin\theta \cos\varphi \\ Y = \sinh \psi \sin\theta \sin\varphi \\ Z = \sinh \psi \cos\theta \end{cases}. \] \end{enumerate} \item Show that the anti-de Sitter metric \[ ds^2 = - \cosh^2\psi dt^2 + d \psi^2 + \sinh^2 \psi \left(d\theta^2 + \sin^2\theta d\varphi^2\right) \] is a solution of the vacuum Einstein field equations with cosmological constant $\Lambda=-3$. \item The $3$-dimensional anti-de Sitter space can be obtained by ``unwrapping'' the hyperboloid $T^2+U^2=1+X^2+Y^2$ in $\mathbb{R}^4$ with the pseudo-Riemannian metric \[ ds^2=-dT^2-dU^2+dX^2+dY^2. \] In this exercise we identify $\mathbb{R}^4$ with the space of $2 \times 2$ matrices by the map \[ (T,U,X,Y) \mapsto \left( \begin{matrix} T + X & U + Y \\ -U + Y & T - X \end{matrix} \right). \] \begin{enumerate} \item Show that the hyperboloid corresponds to the Lie group $SL(2,\mathbb{R})$ of $2 \times 2$ matrices with unit determinant. \item Check that the squared norm of a vector $v\in\mathbb{R}^4$ in the metric above is $\langle v, v \rangle = - \det V$, where $V$ is the $2 \times 2$ matrix associated to $v$. Conclude that the metric induced on the hyperboloid is bi-invariant (that is, invariant under left and right multiplication). \item Use the Penrose diagram in Figure~\ref{SL2R} and the fact that the one-parameter subgroups of a Lie group with a bi-invariant metric are geodesics of that metric to conclude that the exponential map $\exp: \mathfrak{sl}(2,\mathbb{R}) \to SL(2,\mathbb{R})$ is not surjective. 
\item Write explicitly the matrices in $SL(2,\mathbb{R})$ which are not in the image of $\exp: \mathfrak{sl}(2,\mathbb{R}) \to SL(2,\mathbb{R})$ by using the parameterization \[ \begin{cases} T = \cosh \psi \cos t \\ U = \cosh \psi \sin t \\ X = \sinh \psi \cos\varphi \\ Y = \sinh \psi \sin\varphi \end{cases}. \] \end{enumerate} \begin{figure}[h!] \begin{center} \psfrag{p=0}{$\psi=0$} \psfrag{p=8}{$\psi=+\infty$} \psfrag{t=p}{$t=\pi$} \psfrag{t=0}{$t=0$} \psfrag{t=-p}{$t=-\pi$} \epsfxsize=.4\textwidth \leavevmode \epsfbox{SL2R.eps} \end{center} \caption{Exponential map on $SL(2,\mathbb{R})$.} \label{SL2R} \end{figure} \item\label{continuity} Consider a Riemannian or Lorentzian metric given in the Gauss Lemma form \[ g = dt^2 + h_{ij}(t,x) dx^i dx^j, \] so that the level sets of $t$ are Riemannian or Lorentzian manifolds with induced metric $h(t)=h_{ij}dx^i dx^j$ and second fundamental form \[ K(t)=\frac12\frac{\partial h_{ij}}{\partial t}dx^idx^j. \] Show that in these coordinates: \begin{enumerate} \item The Christoffel symbols are \[ \Gamma^0_{ij} = - K_{ij}; \quad \Gamma^i_{jk} = \bar{\Gamma}^i_{jk}; \quad \Gamma^i_{0j} = K^i_{\,\,j}, \] where $\bar{\Gamma}^i_{jk}$ are the Christoffel symbols of $h$. \item The components of the Riemann tensor are \begin{align*} \label{Riemann1} & R_{0i0}^{\,\,\,\,\,\,\,\, j} = - \frac{\partial}{\partial t} K^{j}_{\,\, i} - K_{il} K^{lj}; \\ \nonumber & R_{ij0}^{\,\,\,\,\,\,\,\, l} = - \bar{\nabla}_i K^l_{\,\, j} + \bar{\nabla}_j K^{l}_{\,\, i}; \\ \nonumber & R_{ijl}^{\,\,\,\,\,\,\,\, m} = \bar{R}_{ijl}^{\,\,\,\,\,\,\,\, m} - K_{il} K^{m}_{\,\,\,\, j} + K_{jl} K^{m}_{\,\,\,\, i}, \end{align*} where $\bar{\nabla}$ is the Levi-Civita connection of $h$ and $\bar{R}_{ijl}^{\,\,\,\,\,\,\,\, m}$ are the components of the Riemann tensor of $h$. \item The time derivative of the inverse metric is given by the formula \[ \frac{\partial h^{ij}}{\partial t} = -2K^{ij}. 
\] \item The components of the Ricci tensor are \begin{align*} & R_{00} = - \frac{\partial}{\partial t} K^{i}_{\,\, i} - K_{ij} K^{ij}; \\ & R_{0i} = - \bar{\nabla}_i K^j_{\,\, j} + \bar{\nabla}_j K^{j}_{\,\, i}; \\ & R_{ij} = \bar{R}_{ij} - \frac{\partial}{\partial t} K_{ij} + 2 K_{il} K^{l}_{\,\, j} - K^{l}_{\,\, l} K_{ij}, \end{align*} where $\bar{R}_{ij}$ are the components of the Ricci tensor of $h$. \item The scalar curvature is \[ R = \bar{R} - 2 \frac{\partial}{\partial t} K^{i}_{\,\, i} - \left(K^{i}_{\,\, i}\right)^2 - K_{ij} K^{ij}, \] where $\bar{R}$ is the scalar curvature of $h$. \item The component $G_{00}$ of the Einstein tensor is \[ G_{00} = \frac12 \left( - \bar{R} + \left(K^{i}_{\,\, i}\right)^2 - K_{ij} K^{ij} \right). \] This shows that the matching conditions guarantee the continuity of $G_{00}$ and $G_{0i}=R_{0i}$. \end{enumerate} \item Recall that the nonvanishing components of the Einstein tensor of the static, spherically symmetric Lorentzian metric \[ \hspace{2cm} ds^2 = -(A(r))^2 dt^2 + (B(r))^2 dr^2 + r^2 \left( d\theta^2 + \sin^2 \theta d\varphi^2 \right) \] in the orthonormal frame dual to \[ \hspace{2cm} \omega^0 = A dt, \quad \quad \omega^r = B dr, \quad \quad \omega^\theta = r d\theta, \quad \quad \omega^\varphi = r \sin \theta d\varphi, \] are given by \begin{align*} \hspace{2cm} & G_{00} = \frac{2B'}{rB^3} + \frac{B^2-1}{r^2B^2} = \frac{2m'}{r^2}; \\ & G_{rr} = \frac{2A'}{rAB^2} - \frac{B^2-1}{r^2B^2}; \\ & G_{\theta\theta} = G_{\varphi\varphi} = \frac{A''B-A'B'}{AB^3} + \frac{A'}{rAB^2} - \frac{B'}{rB^3}, \end{align*} where \[ B(r) = \left(1 - \frac{2m(r)}r\right)^{-\frac12}. \] In this exercise we will solve the Einstein equations (without cosmological constant) for a static perfect fluid of constant rest density $\rho$ and rest pressure $p$, and match it to a Schwarzschild exterior. \begin{enumerate} \item Show that \[ B(r) = \left(1 - kr^2 \right)^{-\frac12}, \] where $k=\frac{8 \pi}{3} \rho$. 
Conclude that the spatial metric is that of a sphere $S^3$ with radius $\frac1{\sqrt{k}}$. \item Solve the ordinary differential equation $G_{rr}=G_{\theta\theta}$ to obtain \[ A(r) = C \left(1 - kr^2 \right)^{\frac12} + D, \] where $C,D \in \mathbb{R}$ are integration constants. \item Show that the matching conditions to a Schwarzschild exterior of mass $M > 0$ across a surface $r=R$ are \[ \begin{cases} \displaystyle A(R)=\left(1 - \frac{2M}R \right)^{\frac12} \\ \displaystyle A'(R)=\frac{M}{R^2}\left(1 - \frac{2M}R \right)^{-\frac12} \\ \displaystyle B(R)=\left(1 - \frac{2M}R \right)^{-\frac12} \end{cases}. \] \item Conclude that \[ A(r) = \frac32 \left(1 - \frac{2M}R \right)^{\frac12} - \frac12 \left(1 - \frac{2Mr^2}{R^3} \right)^{\frac12}. \] \item Show that \[ 8\pi p(r) = \frac{k\left(1 - kr^2 \right)^{\frac12}}{\frac32 \left(1 - kR^2 \right)^{\frac12} - \frac12 \left(1 - kr^2 \right)^{\frac12}} - k. \] What is the value of $p(R)$? \item Show that $M$ and $R$ must satisfy {\bf Buchdahl's limit}: \[ \frac{2M}{R} < \frac89. \] What happens to $p(0)$ as $\frac{2M}{R} \to \frac89$? \end{enumerate} \end{enumerate} \chapter{Causality} \label{chapter3} In this chapter we briefly discuss the causality theory of a Lorentzian manifold, following \cite{GN14}. We take a minimal approach; more details can be found in \cite{ONeill83, W84, Penrose87, Naber88, HE95, Ringstrom09}. \section{Past and future} \label{sec3.1} A spacetime $(M,g)$ is said to be {\bf time-orientable} if there exists a timelike vector field, that is, a vector field $X$ satisfying $g(X,X)<0$. In this case, we can define a time orientation on each tangent space $T_pM$ by declaring causal vectors $v \in T_pM$ to be {\bf future-pointing} if $g(v, X_p) \leq 0$. It can be shown that any non-time-orientable spacetime admits a {\bf time-orientable double covering} (just like any non-orientable manifold admits an orientable double covering).
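As a quick illustration of this convention (a minimal numerical sketch, not part of the text; the helper names are ours), in Minkowski spacetime time-oriented by $X=\partial_t$, a causal vector $v$ is future-pointing exactly when $\langle v, X\rangle \leq 0$:

```python
# Minimal sketch: classifying vectors in 4-dimensional Minkowski spacetime,
# time-oriented by the timelike vector field X = (1, 0, 0, 0).
# Signature convention (-,+,+,+), matching the text; names are illustrative.

ETA = (-1.0, 1.0, 1.0, 1.0)  # diagonal of the Minkowski metric

def inner(u, v):
    """Minkowski inner product <u, v> = eta_{mu nu} u^mu v^nu."""
    return sum(e * a * b for e, a, b in zip(ETA, u, v))

def classify(v, X=(1.0, 0.0, 0.0, 0.0)):
    """Causal character of v, and its time orientation relative to X."""
    g = inner(v, v)
    if g > 0:
        return "spacelike"  # no time orientation assigned to spacelike vectors
    kind = "timelike" if g < 0 else "null"
    # Section 3.1 convention: v is future-pointing when <v, X> <= 0
    # (exact for the integer examples below; real data would need a tolerance).
    return kind + (" future-pointing" if inner(v, X) <= 0 else " past-pointing")

print(classify((2.0, 1.0, 0.0, 0.0)))   # timelike future-pointing
print(classify((1.0, 1.0, 0.0, 0.0)))   # null future-pointing
print(classify((-1.0, 0.0, 0.0, 0.0)))  # timelike past-pointing
print(classify((0.0, 1.0, 0.0, 0.0)))   # spacelike
```

Note that for a nonzero causal $v$ and timelike $X$ the product $\langle v, X\rangle$ never vanishes, so the dichotomy future/past-pointing is well defined.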
Assume that $(M,g)$ is {\bf time-oriented} (i.e.~time-orientable with a definite choice of time orientation). A timelike or causal curve $c:I \subset \mathbb{R} \to M$ is said to be {\bf future-directed} if $\dot{c}$ is future-pointing. The {\bf chronological future} of $p\in M$ is the set $I^+(p)$ of all points to which $p$ can be connected by a future-directed timelike curve. The {\bf causal future} of $p\in M$ is the set $J^+(p)$ of all points to which $p$ can be connected by a future-directed causal curve. Notice that $I^+(p)$ is simply the set of all events which are accessible to a particle with nonzero mass at $p$, whereas $J^+(p)$ is the set of events which can be causally influenced by $p$ (as this causal influence cannot propagate faster than the speed of light). Analogously, the {\bf chronological past} of $p\in M$ is the set $I^-(p)$ of all points which can be connected to $p$ by a future-directed timelike curve, and the {\bf causal past} of $p\in M$ is the set $J^-(p)$ of all points which can be connected to $p$ by a future-directed causal curve. In general, the chronological and causal pasts and futures can be quite complicated sets, because of global features of the spacetime. Locally, however, causal properties are similar to those of Minkowski spacetime. More precisely, we have the following statement: \begin{Prop} \label{local} Let $(M,g)$ be a time-oriented spacetime. 
Then each point $p_0\in M$ has an open neighborhood $V \subset M$ such that the spacetime $(V,g)$ obtained by restricting $g$ to $V$ satisfies: \begin{enumerate} \item $V$ is {\bf geodesically convex}, that is, $V$ is a {\bf normal neighborhood} of each of its points such that given $p,q \in V$ there exists a unique geodesic (up to reparameterization) connecting $p$ to $q$; \item $q\in I^+(p)$ if and only if there exists a future-directed timelike geodesic connecting $p$ to $q$; \item $J^+(p) = \overline{I^+(p)}$; \item $q \in J^+(p) \setminus I^+(p)$ if and only if there exists a future-directed null geodesic connecting $p$ to $q$. \end{enumerate} \end{Prop} \begin{proof} Recall that the {\bf exponential map} $\exp_p: U \subset T_pM \to M$ is the map given by \[ \exp_p(v)=c_v(1), \] where $c_v$ is the geodesic with initial conditions $c_v(0)=p$, $\dot{c}_v(0)=v$; equivalently, $\exp_p(tv)=c_v(t)$ (since $c_{tv}(1)=c_v(t)$). Recall also that $V$ is a {\bf normal neighborhood} of $p$ if $\exp_p: U \to V$ is a diffeomorphism. The existence of geodesically convex neighborhoods is true for any affine connection and is proved for instance in \cite{KN96}. To prove assertion (2), we start by noticing that if there exists a future-directed timelike geodesic connecting $p$ to $q$ then it is obvious that $q \in I^+(p)$. Suppose now that $q \in I^+(p)$; then there exists a future-directed timelike curve $c:[0,1] \to V$ such that $c(0)=p$ and $c(1)=q$. Choose {\bf normal coordinates} $(x^0,x^1,x^2,x^3)$, given by the parameterization \[ \varphi(x^0,x^1,x^2,x^3)=\exp_p(x^0 E_0 + x^1 E_1 + x^2 E_2 + x^3 E_3), \] where $\{E_0,E_1,E_2,E_3\}$ is an orthonormal basis of $T_pM$ with $E_0$ timelike and future-pointing. These are global coordinates in $V$, since $\exp_p:U \to V$ is a diffeomorphism. 
Defining \begin{align*} W_p(q) & = - \left(x^0(q)\right)^2 + \left(x^1(q)\right)^2 + \left(x^2(q)\right)^2 + \left(x^3(q)\right)^2 \\ & = \sum_{\mu,\nu=0}^3 \eta_{\mu\nu}x^\mu(q)x^\nu(q), \end{align*} with $(\eta_{\mu\nu})=\operatorname{diag}(-1,1,1,1)$, we have to show that $W_p(q)<0$. Let $W_p(t)= W_p(c(t))$. Since $x^\mu(p)=0$, we have $W_p(0)=0$. Setting $x^\mu(t)=x^\mu(c(t))$, we obtain \begin{align*} & \dot{W}_p(t) = 2 \sum_{\mu,\nu=0}^3 \eta_{\mu\nu}x^\mu(t)\dot{x}^\nu(t);\\ & \ddot{W}_p(t) = 2 \sum_{\mu,\nu=0}^3 \eta_{\mu\nu}x^\mu(t)\ddot{x}^\nu(t) + 2\sum_{\mu,\nu=0}^3 \eta_{\mu\nu}\dot{x}^\mu(t)\dot{x}^\nu(t), \end{align*} and consequently \begin{align*} & \dot{W}_p(0) = 0;\\ & \ddot{W}_p(0) = 2\langle \dot{c}(0), \dot{c}(0) \rangle < 0. \end{align*} Therefore there exists $\varepsilon > 0$ such that $W_p(t) < 0$ for $t \in (0, \varepsilon)$. By the Gauss Lemma, the level surfaces of $W_p$ are orthogonal to the geodesics through $p$. Therefore, if $c_v(t)=\exp_p(tv)$ is the geodesic with initial condition $v \in T_pM$, we have \[ (\operatorname{grad} W_p)_{c_v(1)} = a(v) \dot{c}_v(1). \] Now \begin{align*} \left\langle (\operatorname{grad} W_p)_{c_v(t)}, \dot{c}_v(t) \right\rangle & = \frac{d}{dt} W_p(c_v(t)) = \frac{d}{dt} \langle tv, tv \rangle \\ & = \frac{d}{dt} \left( t^2 \langle v, v \rangle \right) = 2t \langle v, v \rangle, \end{align*} and hence \[ \left\langle (\operatorname{grad} W_p)_{c_v(1)}, \dot{c}_v(1) \right\rangle = 2 \langle v, v \rangle. \] On the other hand, \[ \left\langle (\operatorname{grad} W_p)_{c_v(1)}, \dot{c}_v(1) \right\rangle = \langle a(v) \dot{c}_v(1) , \dot{c}_v(1) \rangle = a(v) \langle v, v \rangle. \] We conclude that $a(v)=2$, and therefore \[ (\operatorname{grad} W_p)_{c_v(1)} = 2 \dot{c}_v(1). \] Consequently, $\operatorname{grad} W_p$ is tangent to geodesics through $p$, being future-pointing on future-directed geodesics. Suppose that $W_p(t) < 0$. 
Then $\left(\operatorname{grad} W_p\right)_{c(t)}$ is timelike future-pointing, and so \[ \dot{W}(t) = \left\langle \left(\operatorname{grad} W_p\right)_{c(t)}, \dot{c}(t) \right\rangle < 0, \] as $\dot{c}(t)$ is also timelike future-pointing. We conclude that we must have $W_p(t)<0$ for all $t\in[0,1]$. In particular, $W_p(q)=W_p(1)<0$, and hence there exists a future-directed timelike geodesic connecting $p$ to $q$. To prove assertion (3), let us see first that $\overline{I^+(p)}\subset J^+(p)$. If $q \in \overline{I^+(p)}$, then $q$ is the limit of a sequence of points $q_n\in I^+(p)$. By (2), $q_n = \exp_p(v_n)$ with $v_n \in T_pM$ timelike future-pointing. Since $\exp_p$ is a diffeomorphism, $v_n$ converges to a causal future-pointing vector $v \in T_pM$, and so $q=\exp_p(v)$ can be reached from $p$ by a future-directed causal geodesic. The converse inclusion $J^+(p) \subset \overline{I^+(p)}$ holds in general (cf.~Proposition~\ref{global}). Finally, (4) is obvious from (3) and the fact that $\exp_p$ is a diffeomorphism onto $V$. \end{proof} This local behavior can be used to prove the following global result. \begin{Prop} \label{global} Let $(M,g)$ be a time oriented spacetime and $p \in M$. Then: \begin{enumerate} \item $I^+(p)$ is open; \item $J^+(p) \subset \overline{I^+(p)}$; \item $I^+(p)=\operatorname{int} J^+(p)$ \item if $r \in J^+(p)$ and $q \in I^+(r)$ then $q \in I^+(p)$; \item if $r \in I^+(p)$ and $q \in J^+(r)$ then $q \in I^+(p)$. \end{enumerate} \end{Prop} \begin{proof} Exercise. \end{proof} The twin paradox also holds locally for general spacetimes. More precisely, we have the following statement: \begin{Prop} Let $(M,g)$ be a time-oriented spacetime, $p_0 \in M$ and $V \subset M$ a geodesically convex open neighborhood of $p_0$. 
The spacetime $(V,g)$ obtained by restricting $g$ to $V$ satisfies the following property: if $p,q \in V$ with $q \in I^+(p)$, $c$ is the timelike geodesic connecting $p$ to $q$ and $\gamma$ is any timelike curve connecting $p$ to $q$, then $\tau(\gamma) \leq \tau(c)$, with equality if and only if $\gamma$ is a reparameterization of $c$. \end{Prop} \begin{proof} Any timelike curve $\gamma:[0,1] \to V$ satisfying $\gamma(0)=p$, $\gamma(1)=q$ can be written as \[ \gamma(t)=\exp_p(r(t)n(t)), \] for $t \in [0,1]$, where $r(t) \geq 0$ and $\langle n(t), n(t) \rangle = -1$. We have \[ \dot{\gamma}(t)=(\exp_p)_*\left(\dot{r}(t)n(t)+r(t)\dot{n}(t)\right). \] Since $\langle n(t), n(t) \rangle = -1$, we have $\langle \dot{n}(t), n(t) \rangle = 0$, and consequently $\dot{n}(t)$ is tangent to the level surfaces of the function $v \mapsto \langle v, v \rangle$. We conclude that \[ \dot{\gamma}(t) = \dot{r}(t) X_{\gamma(t)} + Y(t), \] where $X$ is the unit tangent vector field to timelike geodesics through $p$ and $Y(t)=r(t)(\exp_p)_*\dot{n}(t)$ is tangent to the level surfaces of $W_p$ (hence orthogonal to $X_{\gamma(t)}$). Consequently, \begin{align*} \tau(\gamma) & = \int_0^1 \left|\left\langle \dot{r}(t) X_{\gamma(t)} + Y(t),\dot{r}(t) X_{\gamma(t)} + Y(t) \right\rangle\right|^\frac12 dt \\ & = \int_0^1 \left( \dot{r}(t)^2 - |Y(t)|^2 \right)^\frac12 dt \\ & \leq \int_0^1 \dot{r}(t) dt = r(1) = \tau(c), \end{align*} where we have used the facts that $\gamma$ is timelike, $\dot{r}(t)> 0$ for all $t \in [0,1]$ (as $\dot{\gamma}$ is future-pointing) and $\tau(c)=r(1)$ (as $q=\exp_p(r(1)n(1))$). Moreover, $\tau(\gamma)=\tau(c)$ if and only if $|Y(t)|\equiv 0$, that is, $Y(t)\equiv 0$ for all $t \in [0,1]$ (since $Y(t)$ is spacelike or zero), implying that $n$ is constant. In this case, $\gamma(t)=\exp_p({r(t)n})$ is, up to reparameterization, the geodesic through $p$ with initial condition $n \in T_p M$.
\end{proof} There is also a local property characterizing null geodesics. \begin{Prop} Let $(M,g)$ be a time-oriented spacetime, $p_0 \in M$ and $V \subset M$ a geodesically convex open neighborhood of $p_0$. The spacetime $(V,g)$ obtained by restricting $g$ to $V$ satisfies the following property: if for $p,q \in V$ there exists a future-directed null geodesic $c$ connecting $p$ to $q$ and $\gamma$ is a causal curve connecting $p$ to $q$ then $\gamma$ is a reparameterization of $c$. \end{Prop} \begin{proof} Since $p$ and $q$ are connected by a null geodesic, we conclude from Proposition~\ref{local} that $q \in J^+(p) \setminus I^+(p)$. Let $\gamma:[0,1] \to V$ be a causal curve connecting $p$ to $q$. Then we must have $\gamma(t) \in J^+(p) \setminus I^+(p)$ for all $t \in [0,1]$, since $\gamma(t_0) \in I^+(p)$ implies $\gamma(t) \in I^+(p)$ for all $t>t_0$ (see Proposition~\ref{global}). Consequently, we have \[ W_p(\gamma(t))=0 \Rightarrow \left\langle \left(\operatorname{grad} W_p\right)_{\gamma(t)}, \dot{\gamma}(t) \right\rangle = 0, \] where $W_p$ was defined in the proof of Proposition~\ref{local}. The formula \[ (\operatorname{grad} W_p)_{c_v(1)} = 2 \dot{c}_v(1), \] which was proved for timelike geodesics $c_v$ with initial condition $v \in T_pM$, must also hold for null geodesics (by continuity). Hence $\operatorname{grad} W_p$ is tangent to the null geodesics ruling $J^+(p) \setminus I^+(p)$ and future-pointing. Since $\dot{\gamma}(t)$ is also future-pointing, we conclude that $\dot{\gamma}$ is proportional to $\operatorname{grad} W_p$, and therefore $\gamma$ is a reparameterization of a null geodesic, which must be $c$. \end{proof} \begin{Cor} \label{null_geodesic} Let $(M,g)$ be a time-oriented spacetime and $p \in M$. If $q \in J^+(p)\setminus I^+(p)$ then any future-directed causal curve connecting $p$ to $q$ must be a reparameterized null geodesic. 
\end{Cor} \section{Causality conditions} \label{sec3.2} For physical applications, it is important to require that the spacetime satisfies reasonable causality conditions. The simplest of these conditions excludes time travel, i.e.~the possibility of a particle returning to an event in its past history. \begin{Def} A spacetime $(M,g)$ is said to satisfy the {\bf chronology condition} if it does not contain closed timelike curves. \end{Def} This condition is violated by compact spacetimes: \begin{Prop} Any compact spacetime $(M,g)$ contains closed timelike curves. \end{Prop} \begin{proof} Taking if necessary the time-orientable double covering, we can assume that $(M,g)$ is time-oriented. Since $I^+(p)$ is an open set for any $p \in M$, it is clear that $\{ I^+(p) \}_{p \in M}$ is an open cover of $M$. If $M$ is compact, we can obtain a finite subcover $\{ I^+(p_1), \ldots, I^+(p_N) \}$. Now if $p_1 \in I^+(p_i)$ for $i \neq 1$ then $I^+(p_1) \subset I^+(p_i)$, and we can exclude $I^+(p_1)$ from the subcover. Therefore, we can assume without loss of generality that $p_1 \in I^+(p_1)$, and hence there exists a closed timelike curve starting and ending at $p_1$. \end{proof} A stronger restriction on the causal behavior of the spacetime is the following: \begin{Def} A spacetime $(M,g)$ is said to be {\bf stably causal} if there exists a {\bf global time function}, i.e.~a smooth function $t:M \to \mathbb{R}$ such that $\operatorname{grad}(t)$ is timelike. \end{Def} In particular, a stably causal spacetime is time-orientable. We choose the time orientation defined by $-\operatorname{grad}(t)$, so that $t$ increases along future-directed timelike curves. Notice that this implies that no closed timelike curves can exist, i.e.~any stably causal spacetime satisfies the chronology condition. In fact, any small perturbation of a stably causal spacetime still satisfies the chronology condition (Exercise~\ref{stable}). Let $(M,g)$ be a time-oriented spacetime. 
A smooth future-directed causal curve $c:(a,b) \to M$ (with possibly $a=-\infty$ or $b=+\infty$) is said to be {\bf future-inextendible} if $\lim_{t \to b} c(t)$ does not exist. The definition of a {\bf past-inextendible} causal curve is analogous. The {\bf future domain of dependence} of $S\subset M$ is the set $D^+(S)$ of all events $p \in M$ such that any past-inextendible causal curve starting at $p$ intersects $S$. Therefore any causal influence on an event $p \in D^+(S)$ had to register somewhere in $S$, and one can expect that what happens at $p$ can be predicted from data on $S$. Similarly, the {\bf past domain of dependence} of $S$ is the set $D^-(S)$ of all events $p \in M$ such that any future-inextendible causal curve starting at $p$ intersects $S$. Therefore any causal influence of an event $p \in D^-(S)$ will register somewhere in $S$, and one can expect that what happened at $p$ can be retrodicted from data on $S$. The {\bf domain of dependence} of $S$ is simply the set $D(S)=D^+(S)\cup D^-(S)$. Let $(M,g)$ be a stably causal spacetime with time function $t:M \to \mathbb{R}$. The level sets $S_a = t^{-1}(a)$ are said to be {\bf Cauchy hypersurfaces} if $D(S_a)=M$. Spacetimes for which this happens have particularly good causal properties. \begin{Def} A stably causal spacetime possessing a time function whose level sets are Cauchy hypersurfaces is said to be {\bf globally hyperbolic}. \end{Def} Notice that the future and past domains of dependence of the Cauchy hypersurfaces $S_a$ are $D^+(S_a) = t^{-1}([a, +\infty))$ and $D^-(S_a) = t^{-1}((-\infty,a])$. \section{Exercises} \label{sec3.?} \begin{enumerate} \item Let $(M,g)$ be the quotient of the $2$-dimensional Minkowski spacetime by the discrete group of isometries generated by the map $f(t,x)=(-t,x+1)$. Show that $(M,g)$ is not time orientable. \item Let $(M,g)$ be a time oriented spacetime and $p \in M$. 
Show that: \begin{enumerate} \item $I^+(p)$ is open; \item $J^+(p)$ is not necessarily closed; \item $J^+(p) \subset \overline{I^+(p)}$; \item $I^+(p)=\operatorname{int} J^+(p)$ \item if $r \in J^+(p)$ and $q \in I^+(r)$ then $q \in I^+(p)$; \item if $r \in I^+(p)$ and $q \in J^+(r)$ then $q \in I^+(p)$. \end{enumerate} \item Consider the $3$-dimensional Minkowski spacetime $(\mathbb{R}^3, g)$, where \[ g = - dt^2 + dx^2 + dy^2. \] Let $c:\mathbb{R} \to \mathbb{R}^3$ be the curve $c(t)=(t,\cos t, \sin t)$. Show that although $\dot{c}(t)$ is null for all $t \in \mathbb{R}$ we have $c(t)\in I^+(c(0))$ for all $t>0$. What kind of motion does this curve represent? \item \label{stable} Let $(M,g)$ be a stably causal spacetime and $h$ an arbitrary symmetric $(2,0)$-tensor field with compact support. Show that for sufficiently small $|\varepsilon|$ the tensor field $g_\varepsilon = g + \varepsilon h$ is still a Lorentzian metric on $M$, and $(M,g_\varepsilon)$ satisfies the chronology condition. \item Let $(M,g)$ be the quotient of the $2$-dimensional Minkowski spacetime by the discrete group of isometries generated by the map $f(t,x)=(t+1,x+1)$. Show that $(M,g)$ satisfies the chronology condition, but there exist arbitrarily small perturbations of $(M,g)$ (in the sense of Exercise~\ref{stable}) which do not. \item Let $(M,g)$ be a time oriented spacetime and $S \subset M$. Show that: \begin{enumerate} \item $S \subset D^+(S)$; \item $D^+(S)$ is not necessarily open; \item $D^+(S)$ is not necessarily closed. \end{enumerate} \item Let $(M,g)$ be the $2$-dimensional spacetime obtained by removing the positive $x$-semi-axis of Minkowski $2$-dimensional spacetime (cf.~Figure~\ref{stably_causal}). 
Show that: \begin{enumerate} \item $(M,g)$ is stably causal but not globally hyperbolic; \item there exist points $p,q \in M$ such that $J^+(p) \cap J^-(q)$ is not compact; \item there exist points $p,q \in M$ with $q \in I^+(p)$ such that the supremum of the lengths of timelike curves connecting $p$ to $q$ is not attained by any timelike curve. \end{enumerate} \begin{figure}[h!] \begin{center} \psfrag{t}{$t$} \psfrag{x}{$x$} \psfrag{S}{$S$} \psfrag{D}{$D(S)$} \psfrag{p}{$p$} \psfrag{J+}{$J^+(p)$} \epsfxsize=1.0\textwidth \leavevmode \epsfbox{stably_causal.eps} \end{center} \caption{Stably causal but not globally hyperbolic spacetime.} \label{stably_causal} \end{figure} \item Let $(\Sigma, h)$ be a $3$-dimensional Riemannian manifold. Show that the spacetime $(M,g)=(\mathbb{R} \times \Sigma, -dt \otimes dt + h)$ is globally hyperbolic if and only if $(\Sigma,h)$ is complete. \item Show that the following spacetimes are globally hyperbolic: \begin{enumerate} \item the Minkowski spacetime; \item the FLRW spacetimes; \item the region $\{r>2m\}$ of the Schwarzschild spacetime; \item the region $\{r<2m\}$ of the Schwarzschild spacetime; \item the maximal analytic extension of the Schwarzschild spacetime. \end{enumerate} \item Let $(M,g)$ be a globally hyperbolic spacetime with Cauchy hypersurface $S$. Show that $M$ is diffeomorphic to $\mathbb{R} \times S$. \end{enumerate} \chapter{Singularity theorems} \label{chapter4} As we have seen in Chapter~\ref{chapter2}, both the Schwarzschild solution and the FLRW cosmological models display singularities, beyond which timelike and null geodesics cannot be continued. It was once thought that these solutions were singular due to their high degree of symmetry, and that more realistic spacetimes would be non-singular. In this chapter we show that this is not the case: any sufficiently small perturbation of these solutions will still be singular.
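The FLRW singularity just mentioned can be made concrete: for the flat ($k=0$, $\Lambda=0$) dust model, the first integral $\frac12\dot{a}^2 - \frac{\alpha}{a} = 0$ obtained in the exercises of Chapter~\ref{chapter2} is solved by $a(t)=\left(\frac92\alpha\right)^{\frac13}t^{\frac23}$, so that the scale factor vanishes at $t=0$, a finite proper time in the past of any comoving observer. A short symbolic sanity check (sympy; illustrative only, not part of the text):

```python
# Sanity check (illustrative): for the flat (k = 0, Lambda = 0) dust universe
# the first integral reads (1/2) a'(t)^2 - alpha/a(t) = 0, and is solved by
# a(t) = (9*alpha/2)**(1/3) * t**(2/3), which vanishes as t -> 0+ -- the
# Big Bang singularity, beyond which geodesics cannot be continued.
import sympy as sp

t, alpha = sp.symbols("t alpha", positive=True)

a = (sp.Rational(9, 2) * alpha) ** sp.Rational(1, 3) * t ** sp.Rational(2, 3)

# The first integral is satisfied identically...
residual = sp.simplify(sp.diff(a, t) ** 2 / 2 - alpha / a)
print(residual)  # 0

# ...and the scale factor reaches zero in finite proper time.
print(sp.limit(a, t, 0, dir="+"))  # 0
```

The singularity theorems below show that this behavior survives small perturbations, rather than being an artifact of the exact symmetry of the model.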
We follow \cite{W84} when discussing conjugate points and \cite{GN14} for the details of the proofs. See also \cite{ONeill83, Penrose87, Naber88, HE95}. \section{Geodesic congruences} \label{sec4.1} Let $(M,g)$ be a Lorentzian manifold. A {\bf congruence} of curves in an open set $U \subset M$ is the family of integral curves of a nonvanishing vector field $X$ in $U$. We will assume that $X$ is unit timelike and geodesic, that is, \[ \left\langle X, X \right\rangle = -1 \qquad \text{ and } \qquad \nabla_XX = 0. \] The properties of the congruence determined by $X$ are best analyzed by considering its {\bf second fundamental form} \[ B_{\mu\nu} = \nabla_\nu X_\mu. \] This tensor is {\bf purely spatial}, that is, \[ B_{\mu\nu} X^\mu = B_{\mu\nu} X^\nu = 0. \] Indeed, since $X$ is unit, \[ B_{\mu\nu} X^\mu = X^\mu \nabla_\nu X_\mu = \frac12 \nabla_\nu (X_\mu X^\mu) = 0. \] On the other hand, because $X$ is geodesic, \[ B_{\mu\nu} X^\nu = X^\nu \nabla_\nu X_\mu = \nabla_X X_\mu = 0. \] \begin{Prop} The second fundamental form $B$ satisfies \[ \nabla_X B_{\mu\nu} = - B_{\mu\alpha} B^\alpha_{\,\,\,\,\nu} + R_{\alpha \nu \mu \beta} X^\alpha X^\beta. \] \end{Prop} \begin{proof} We have \begin{align*} X^\alpha \nabla_\alpha B_{\mu\nu} & = X^\alpha \nabla_\alpha \nabla_\nu X_{\mu} = X^\alpha \nabla_\nu \nabla_\alpha X_{\mu} + X^\alpha R_{\alpha\nu\mu\beta} X^\beta \\ & = \nabla_\nu (X^\alpha \nabla_\alpha X_{\mu}) - (\nabla_\nu X^\alpha) (\nabla_\alpha X_{\mu}) + R_{\alpha\nu\mu\beta} X^\alpha X^\beta \\ & = - B_{\mu\alpha} B^\alpha_{\,\,\,\,\nu} + R_{\alpha\nu\mu\beta} X^\alpha X^\beta. \end{align*} \end{proof} Let $c(t,s)$ be a one-parameter family of geodesics of the congruence, parameterized such that \[ \frac{\partial c}{\partial t} = X \] (Figure~\ref{deviation}). The {\bf geodesic deviation vector} associated to $c$ is \[ Y = \frac{\partial c}{\partial s}. \] \begin{figure}[h!] 
\begin{center} \psfrag{X}{$X$} \psfrag{Y}{$Y$} \epsfxsize=.3\textwidth \leavevmode \epsfbox{deviation.eps} \end{center} \caption{Geodesic deviation.} \label{deviation} \end{figure} \begin{Prop} The geodesic deviation vector satisfies \[ \nabla_X Y^\mu = B^\mu_{\,\,\,\,\nu} Y^\nu. \] \end{Prop} \begin{proof} The definition of $Y$ implies that \[ [X,Y]=0 \Leftrightarrow \nabla_XY - \nabla_Y X = 0. \] Consequently, we have \[ \nabla_X Y^\mu = \nabla_Y X^\mu = Y^\nu \nabla_\nu X^\mu = B^\mu_{\,\,\,\,\nu} Y^\nu. \] \end{proof} The equation for $\nabla_X B_{\mu\nu}$ then yields the following famous result. \begin{Prop} The geodesic deviation vector satisfies the {\bf Jacobi equation} \[ \nabla_X \nabla_X Y = R(X,Y)X. \] \end{Prop} \begin{proof} We have \begin{align*} \nabla_X \nabla_X Y^\alpha & = \nabla_X (B^\alpha_{\,\,\,\,\beta} Y^\beta) = (\nabla_X B^\alpha_{\,\,\,\,\beta}) Y^\beta + B^\alpha_{\,\,\,\,\beta} \nabla_X Y^\beta \\ & = - B^{\alpha\mu} B_{\mu\beta} Y^\beta + R_{\mu\beta\,\,\,\,\nu}^{\,\,\,\,\,\,\,\,\alpha} X^\mu X^\nu Y^\beta + B^\alpha_{\,\,\,\,\beta} B^\beta_{\,\,\,\,\mu} Y^\mu \\ & = R^\alpha_{\,\,\,\,\beta\mu\nu} X^\beta X^\mu Y^\nu. \end{align*} Alternatively, we can simply notice that \[ \nabla_X \nabla_X Y = \nabla_X \nabla_Y X = \nabla_Y \nabla_X X + R(X,Y) X = R(X,Y) X. \] \end{proof} We now define the kinematic quantities associated to the congruence. \begin{Def} The {\bf spatial metric} associated to the congruence is \[ h_{\mu\nu} = g_{\mu\nu} + X_\mu X_\nu. \] The {\bf expansion}, {\bf shear} and {\bf vorticity} are defined as\footnote{Curved brackets indicate symmetrization, $B_{(\mu\nu)}=\frac12\left(B_{\mu\nu}+B_{\nu\mu}\right)$, and square brackets antisymmetrization, $B_{[\mu\nu]}=\frac12\left(B_{\mu\nu}-B_{\nu\mu}\right)$.} \begin{align*} & \theta = h^{\mu\nu} B_{\mu\nu} = g^{\mu\nu} B_{\mu\nu}, \\ & \sigma_{\mu\nu} = B_{(\mu\nu)} - \frac13 \theta h_{\mu\nu}, \\ & \omega_{\mu\nu} = B_{[\mu\nu]}, \end{align*} so that we have the decomposition \[ B_{\mu\nu} = \frac13 \theta h_{\mu\nu} + \sigma_{\mu\nu} + \omega_{\mu\nu}.
\] \end{Def} Note that all the tensors above are purely spatial: \[ h_{\mu\nu} X^\nu = \sigma_{\mu\nu} X^\nu = \omega_{\mu\nu} X^\nu = 0. \] Moreover, the trace of $h$ is \[ h^{\mu\nu} h_{\mu\nu} = g^{\mu\nu} h_{\mu\nu} = g^{\mu\nu} (g_{\mu\nu} + X_\mu X_\nu) = 4 - 1 = 3, \] and so the shear is traceless: \[ h^{\mu\nu} \sigma_{\mu\nu} = g^{\mu\nu} \sigma_{\mu\nu} = 0. \] Fix a geodesic $c$, and let $Y$ be a geodesic deviation vector along $c$. If $Y$ is initially orthogonal to $c$ then it will remain orthogonal: \[ X \cdot (X_\mu Y^\mu) = (\nabla_X X_\mu) Y^\mu + X_\mu \nabla_X Y^\mu = X_\mu B^{\mu\nu} Y_\nu = 0. \] In an orthonormal frame $\{X,E_1,E_2,E_3\}$ parallel along $c$ we then have \[ \dot{Y}^i = B_{ij} Y^j = \left(\frac13 \theta \delta_{ij} + \sigma_{ij} + \omega_{ij}\right) Y^j = \frac13 \theta Y^i + \sigma_{ij} Y^j + \omega_{ij}Y^j \] ($i=1,2,3$). If we consider a small spacelike sphere in the hypersurface orthogonal to $c$ and let it be carried by the geodesics of the congruence, we see that $\theta$ measures the rate at which the sphere's volume grows, $\sigma$ describes the sphere's volume-preserving shape deformations, and $\omega$ gives the sphere's angular velocity. \begin{Prop} The expansion of the congruence satisfies the {\bf Raychaudhuri equation} \[ X \cdot \theta = - \frac13 \theta^2 - \sigma_{\mu\nu} \sigma^{\mu\nu} + \omega_{\mu\nu} \omega^{\mu\nu} - R_{\mu\nu} X^\mu X^\nu. \] \end{Prop} \begin{proof} Taking the trace of the equation for $\nabla_X B_{\mu\nu}$ (that is, contracting with $g^{\mu\nu}$) yields \begin{align*} X \cdot \theta & = - B_{\mu\nu} B^{\nu\mu} - R_{\alpha \beta} X^\alpha X^\beta \\ & = - \left(\frac13 \theta h_{\mu\nu} + \sigma_{\mu\nu} + \omega_{\mu\nu} \right) \left(\frac13 \theta h^{\mu\nu} + \sigma^{\mu\nu} - \omega^{\mu\nu} \right) - R_{\mu \nu} X^\mu X^\nu \\ & = - \frac13 \theta^2 - \sigma_{\mu\nu} \sigma^{\mu\nu} + \omega_{\mu\nu} \omega^{\mu\nu} - R_{\mu\nu} X^\mu X^\nu.
\end{align*} \end{proof} \section{Energy conditions} \label{sec4.2} \begin{Def} A given energy-momentum tensor $T_{\mu\nu}$, with trace $T = g^{\mu\nu}T_{\mu\nu}$, is said to satisfy: \begin{enumerate} \item the {\bf strong energy condition (SEC)} if $T_{\mu\nu}X^\mu X^\nu + \frac12 T \geq 0$ for all unit timelike vectors $X$; \item the {\bf weak energy condition (WEC)} if $T_{\mu\nu}X^\mu X^\nu \geq 0$ for all timelike vectors $X$; \item the {\bf null energy condition (NEC)} if $T_{\mu\nu}X^\mu X^\nu \geq 0$ for all null vectors $X$; \item the {\bf dominant energy condition (DEC)} if $-T^{\mu\nu} X_\nu$ is causal and future-pointing for all causal future-pointing vectors $X$. \end{enumerate} \end{Def} The weak energy condition is the reasonable requirement that any observer should measure a non-negative energy density, and the null energy condition can be thought of as the same requirement for observers moving at the speed of light. The dominant energy condition, on the other hand, demands that any observer should measure the flow of energy and momentum to be causal. To understand the strong energy condition, we write the Einstein equations as \[ R_{\mu\nu} - \frac12 R g_{\mu\nu} = 8 \pi T_{\mu\nu} \] (possibly including the cosmological constant in the energy-momentum tensor). Note that the trace of this equation yields \[ - R = 8 \pi T, \] and so the Einstein equations can also be written as \[ R_{\mu\nu} = 8 \pi \left( T_{\mu\nu} - \frac12 T g_{\mu\nu} \right). \] Therefore the strong energy condition simply requires that the Ricci tensor satisfies $R_{\mu\nu}X^\mu X^\nu \geq 0$ for all timelike vectors $X$ (given that the Einstein equations are written as above). Generically, the energy-momentum tensor is diagonalizable, that is, there exists an orthonormal frame $\{E_0,E_1,E_2,E_3\}$ in which the energy-momentum tensor is diagonal, \[ (T_{\mu\nu}) = \operatorname{diag}(\rho, p_1, p_2, p_3). 
\] The timelike eigenvalue $\rho$ is called the {\bf rest energy density}, and the spacelike eigenvalues $p_1, p_2,p_3$ are known as the {\bf principal pressures}. In terms of these eigenvalues, we have: \begin{enumerate} \item SEC $\Leftrightarrow \rho+\sum_{i=1}^3p_i\geq 0$ and $\rho+p_i\geq 0$ ($i=1,2,3$). \item WEC $\Leftrightarrow \rho\geq 0$ and $\rho+p_i\geq 0$ ($i=1,2,3$). \item NEC $\Leftrightarrow \rho+p_i\geq 0$ ($i=1,2,3$). \item DEC $\Leftrightarrow \rho\geq |p_i|$ ($i=1,2,3$). \end{enumerate} Using this characterization, it is easy to see that the NEC is the weakest energy condition, that is, it is implied by any of the other conditions. The remaining three energy conditions are largely independent, except that the DEC implies the WEC. Notice that in particular the SEC does not imply the WEC. \section{Conjugate points} \label{sec4.3} \begin{Def} Let $(M,g)$ be a Lorentzian manifold. A point $q \in M$ is said to be {\bf conjugate} to $p \in M$ along a timelike geodesic $c$ if there exists a nonvanishing solution $Y$ of the Jacobi equation $\nabla_X \nabla_X Y = R(X,Y)X$ such that $Y_p=Y_q=0$. \end{Def} Informally, two points $p$ and $q$ are conjugate along $c$ if there exists a nearby timelike geodesic intersecting $c$ at both $p$ and $q$ (Figure~\ref{conjugate}). \begin{figure}[h!] \begin{center} \psfrag{p}{$p$} \psfrag{q}{$q$} \psfrag{g}{$c$} \psfrag{X}{$X$} \psfrag{Y}{$Y$} \epsfxsize=.3\textwidth \leavevmode \epsfbox{conjugate.eps} \end{center} \caption{Geodesic deviation.} \label{conjugate} \end{figure} Choose an orthonormal frame $\{X,E_1, E_2, E_3\}$ parallel along $c$, where $X$ is the unit tangent vector. Let $Y$ be a geodesic deviation vector which vanishes at $p = c(0)$. If we write \[ Y = Y^0 X + Y^i E_i \] then the Jacobi equation becomes \[ \begin{cases} \ddot{Y}^0 = 0 \\ \ddot{Y}^i = R^i_{\,\,\,\,00j} Y^j \end{cases}.
\] Since $Y^0$ is an affine function of the proper time $\tau$, it must vanish identically if $Y$ vanishes again along $c$. We will therefore assume that $Y^0=0$, that is, that $Y$ is orthogonal to $c$. Since the remaining components of $Y$ satisfy a linear ODE, we know that \[ Y^i(\tau) = A_{ij}(\tau) \dot{Y}^j(0), \] where $A(\tau)$ is the fundamental matrix solution vanishing at $\tau=0$: \[ \begin{cases} A(0)=0 \\ \dot{A}(0)=I \\ \ddot{A}_{ij} = R^i_{\,\,\,\,00k} A_{kj} \end{cases}. \] Although $A(0)=0$ is singular, $A(\tau)$ is not singular for $\tau > 0$ sufficiently small, because $\dot{A}(0)=I$. If $A(\tau)$ becomes singular for some $\tau_*>0$ then $q = c(\tau_*)$ is conjugate to $p$ (we just have to choose $\dot{Y}(0)$ to be a nonvanishing column vector in the kernel of $A(\tau_*)$). Consider the congruence of timelike geodesics through $p$. Since on the one hand \[ \dot{Y}^i = B_{ij} Y^j \] and on the other \[ \dot{Y} = \dot{A} \dot{Y}(0) = \dot{A} A^{-1} A \dot{Y}(0) = \dot{A} A^{-1} Y, \] we conclude that \[ B = \dot{A} A^{-1}, \] and so \[ \theta = \operatorname{tr} B = \operatorname{tr} (\dot{A} A^{-1}) = \frac{d}{d\tau} \log(\det A). \] Therefore the expansion of the congruence blows up if and only if the geodesic approaches the first conjugate point $q$. \begin{Thm} Let $(M,g)$ be a $4$-dimensional Lorentzian manifold satisfying the SEC, $c$ a timelike geodesic and $p = c(0)$. Suppose that the expansion $\theta$ of the congruence of timelike geodesics through $p$ takes a negative value $\theta_0 < 0$ at some point $r = c(\tau_0)$, with $\tau_0>0$. Then there exists a point $q$ conjugate to $p$ along $c$ at a distance at most $\frac{3}{|\theta_0|}$ from $r$. \end{Thm} \begin{proof} By the Gauss Lemma, we can find local coordinates $(t,x^1,x^2,x^3)$ such that $X^\sharp = dt$, and so \[ dX^\sharp = 0 \Leftrightarrow \nabla_{[\mu}X_{\nu]} = 0. 
\] In other words, the congruence has no vorticity, and the Raychaudhuri equation becomes \[ \frac{d\theta}{d\tau} = - \frac13 \theta^2 - \sigma_{\mu\nu} \sigma^{\mu\nu} - R_{\mu\nu} X^\mu X^\nu. \] Because $\sigma$ is purely spatial and $(M,g)$ satisfies the SEC, we have \[ \frac{d\theta}{d\tau} \leq - \frac13 \theta^2 \Leftrightarrow - \frac1{\theta^2}\frac{d\theta}{d\tau} \geq \frac13 \Rightarrow \frac1{\theta} \geq \frac1{\theta_0} + \frac13(\tau-\tau_0). \] We conclude that $\frac1{\theta}$ vanishes, and so $\theta$ blows up, at proper time at most $\tau_0 + \frac{3}{|\theta_0|}$. \end{proof} Let $c(t,s)$ be a one-parameter family of timelike curves connecting two points $p$ and $q$: \[ c(t_0,s) = p \qquad \text{ and } \qquad c(t_1,s) = q \] for all $s$. Then the connecting vector \[ Y = \frac{\partial c}{\partial s}, \] which in general is not a Jacobi field, satisfies \[ Y_p = Y_q = 0. \] We assume that $c(t,s)$ has been parameterized in such a way that $Y$ does not vanish identically. The tangent vector \[ X = \frac{\partial c}{\partial t} \] is timelike, and if we define \[ f(t,s) = \left( - \left\langle X, X \right\rangle \right)^\frac12 \] then the length of each curve is \[ \tau(s) = \int_{t_0}^{t_1} f(t,s) dt. \] We have \begin{align*} \frac{d\tau}{ds} & = \int_{t_0}^{t_1} \frac{\partial f}{\partial s} dt = - \int_{t_0}^{t_1} \frac1{f} \left\langle X, \nabla_Y X \right\rangle dt = - \int_{t_0}^{t_1} \frac1{f} \left\langle X, \nabla_X Y \right\rangle dt \\ & = - \int_{t_0}^{t_1} X \cdot \left( \frac1{f} \left\langle X, Y \right\rangle \right) dt + \int_{t_0}^{t_1} \left\langle \nabla_X \left(\frac{X}{f}\right), Y \right\rangle dt \\ & = \int_{t_0}^{t_1} \left\langle \nabla_X \left(\frac{X}{f}\right), Y \right\rangle dt, \end{align*} where we used the Fundamental Theorem of Calculus and the fact that $Y_p=Y_q=0$. 
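As a simple consistency check of this formula, we can consider (as a side computation, not needed for the general argument) two-dimensional Minkowski space, $g = -dt^2 + dx^2$, and the one-parameter family of timelike curves \[ c(t,s) = (t, s\,h(t)), \qquad t \in [t_0,t_1], \] where $h$ is a smooth function with $h(t_0)=h(t_1)=0$ and $|s|$ is small, so that all curves connect $p=(t_0,0)$ to $q=(t_1,0)$. In this family $X = (1, s\dot{h})$, $Y = (0,h)$ and $f = \left(1-s^2\dot{h}^2\right)^\frac12$, and so \[ \tau(s) = \int_{t_0}^{t_1} \left(1-s^2\dot{h}^2\right)^\frac12 dt \Rightarrow \frac{d\tau}{ds}(0) = 0 \quad \text{ and } \quad \frac{d^2\tau}{ds^2}(0) = -\int_{t_0}^{t_1} \dot{h}^2 \, dt \leq 0, \] in agreement with the formula above, since $c(t,0)$ is a timelike geodesic parameterized by its proper time. We now return to the general one-parameter family.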
This shows that the timelike curve $c$ defined as $c(t)=c(t,0)$ has extremal length among all timelike curves in such one-parameter families if and only if \[ \nabla_X \left(\frac{X}{f}\right) = 0, \] that is, if and only if it is a timelike geodesic. Assume this to be the case. Then \[ \frac{d^2\tau}{ds^2}(0) = \int_{t_0}^{t_1} Y \cdot \left\langle \nabla_X \left(\frac{X}{f}\right), Y \right\rangle dt = \int_{t_0}^{t_1} \left\langle \nabla_Y\nabla_X \left(\frac{X}{f}\right), Y \right\rangle dt. \] Assuming that $f(t,0)=1$ (that is, $c$ is parameterized by its proper time) and $\langle X, Y \rangle = 0$ for $s=0$ (which is always possible by reparameterizing $c(t,s)$) leads to \[ \frac{d^2\tau}{ds^2}(0) = \int_{t_0}^{t_1} \left\langle \nabla_Y\nabla_X X, Y \right\rangle dt. \] Finally, using \[ R(X,Y)Z = \nabla_X\nabla_Y Z - \nabla_Y\nabla_X Z - \nabla_{[X,Y]} Z = \nabla_X\nabla_Y Z - \nabla_Y\nabla_X Z \] we obtain \begin{align*} \frac{d^2\tau}{ds^2}(0) & = \int_{t_0}^{t_1} \left\langle \nabla_X\nabla_Y X - R(X,Y)X, Y \right\rangle dt \\ & = \int_{t_0}^{t_1} \left\langle \nabla_X\nabla_X Y - R(X,Y)X, Y \right\rangle dt. \end{align*} \begin{Thm} \label{Thmconjugate} A timelike curve $c$ connecting the points $p, q \in M$ locally maximizes the proper time (along any one-parameter family of timelike curves connecting the same points) if and only if it is a timelike geodesic without conjugate points to $p$ between $p$ and $q$. \end{Thm} \begin{proof} It is clear from what was done above that $c$ being a geodesic is a necessary condition. Let us assume that $c$ has no conjugate points (to $p$, say) between $p$ and $q$. In an orthonormal frame parallel along $c$ we have \[ \frac{d^2\tau}{ds^2}(0) = \int_{t_0}^{t_1} Y^i \left(\ddot{Y}^i - R^i_{\,\,\,\,00j} Y^j \right) dt. \] Because there are no conjugate points, the fundamental matrix solution $A(t)$ is nondegenerate for $t \in (t_0,t_1)$, and we can set \[ Y^i = A_{ij} Z^j. 
\] We have \[ \ddot{Y}^i = A_{ij} \ddot{Z}^j + 2\dot{A}_{ij} \dot{Z}^j + \ddot{A}_{ij} Z^j = A_{ij} \ddot{Z}^j + 2\dot{A}_{ij} \dot{Z}^j + R^i_{\,\,\,\,00k} {A}_{kj} Z^j, \] and so \begin{align*} \frac{d^2\tau}{ds^2}(0) & = \int_{t_0}^{t_1} A_{ij} Z^j \left( A_{ik} \ddot{Z}^k + 2\dot{A}_{ik} \dot{Z}^k \right) dt \\ & = \int_{t_0}^{t_1} Z^t A^t \left( A \ddot{Z} + 2\dot{A} \dot{Z} \right) dt \\ & = \int_{t_0}^{t_1} \left[ \frac{d}{dt} \left(Z^tA^tA \dot{Z}\right) - \dot{Z}^tA^tA \dot{Z} - Z^t\dot{A}^tA \dot{Z} + Z^tA^t\dot{A} \dot{Z} \right] dt \\ & = - \int_{t_0}^{t_1} (A\dot{Z})^t A \dot{Z} dt + \int_{t_0}^{t_1} Z^t \left(A^t\dot{A} - \dot{A}^tA\right) \dot{Z} dt. \end{align*} Above we used the Fundamental Theorem of Calculus and the fact that $(AZ)^t A\dot{Z} = Y^t (\dot{Y} - \dot{A}Z) = Y^t (\dot{Y} - BY)$ vanishes at $t_0$ and $t_1$ (although $B$ blows up as $(t-t_0)^{-1}$, $Y$ vanishes as $t-t_0$ or faster by Taylor's formula). From $\dot{A}=BA$ we have \[ A^t\dot{A} - \dot{A}^tA = A^tBA - A^tB^tA = A^t \left( B - B^t \right) A = 2 A^t \omega A = 0, \] because the vorticity matrix $\omega$ vanishes for the congruence of timelike geodesics through $p$. Therefore \[ \frac{d^2\tau}{ds^2}(0) = - \int_{t_0}^{t_1} (A\dot{Z})^t A \dot{Z} dt \leq 0, \] with equality if and only if \[ \dot{Z} \equiv 0 \Rightarrow Z \equiv 0 \Rightarrow Y \equiv 0 \] (note that if $Z$ is constant then it must be zero because $0 = Y_q = A(t_1) Z$ and $A(t_1)$ is nonsingular). We conclude that $c$ is indeed a maximum of the proper time along any one-parameter family of timelike curves connecting $p$ and $q$. On the other hand, if there exists a conjugate point along $c$ between $p$ and $q$, say $r=c(t^*)$, then let $\hat{Y}$ be a nonvanishing Jacobi field such that $\hat{Y}(t_0) = \hat{Y}(t^*) = 0$ (in particular $\hat{Y}$ is orthogonal to $c$), and let $Y$ be the vector field along $c$ that coincides with $\hat{Y}$ between $p$ and $r$ and is zero between $r$ and $q$. 
Similarly, let $\hat{Z}$ be the (necessarily spacelike) vector field parallel along $c$ such that $\hat{Z}(t^*) = -\nabla_X\hat{Y}(t^*)$, and let $Z(t)=\chi(t)\hat{Z}(t)$, where $\chi$ is a smooth function satisfying $\chi(t_0)=\chi(t_1)=0$ and $\chi(t^*)=1$. Finally, let $Y_\varepsilon$ be the vector field along $c$ defined by $Y_\varepsilon = Y + \varepsilon Z$, and consider a one-parameter family of curves $c_\varepsilon(t,s)$ such that $c_\varepsilon(t,0)=c(t)$ and $Y_\varepsilon = \frac{\partial c_\varepsilon}{\partial s}$. Since $Y_\varepsilon$ is not $C^1$, we must write the formula for the second derivative of the length as \[ \frac{d^2\tau}{ds^2}(0) = - \int_{t_0}^{t_1} \biggl( \left\langle \nabla_X Y_\varepsilon, \nabla_X Y_\varepsilon\right\rangle + \left\langle R(X,Y_\varepsilon)X, Y_\varepsilon \right\rangle \biggr) dt = I(Y_\varepsilon, Y_\varepsilon), \] where the bilinear form $I$ is clearly symmetric. Therefore \[ \frac{d^2\tau}{ds^2}(0) = I(Y,Y) + 2\varepsilon I(Y,Z) + \varepsilon^2 I(Z,Z). \] Since $Y$ is a Jacobi field between $p$ and $r$, and zero between $r$ and $q$, we have $I(Y,Y)=0$. On the other hand, \begin{align*} I(Y,Z) & = - \int_{t_0}^{t^*} \biggl( \left\langle \nabla_X Y, \nabla_X Z \right\rangle + \left\langle R(X,Y)X, Z \right\rangle \biggr) dt \\ & = - \biggl[ \left\langle \nabla_X Y, Z\right\rangle\biggr]_{t_0}^{t^*} + \int_{t_0}^{t^*} \biggl( \left\langle \nabla_X\nabla_X Y, Z \right\rangle - \left\langle R(X,Y)X, Z \right\rangle \biggr) dt \\ & = \left\langle \nabla_X \hat{Y}(t^*), \nabla_X \hat{Y}(t^*)\right\rangle > 0. \end{align*} Therefore for $\varepsilon > 0$ sufficiently small the one-parameter family $c_\varepsilon(t,s)$ contains curves whose length is greater than the length of $c$.
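To spell out the last step: since $I(Y,Y)=0$ and $I(Y,Z)>0$, we have \[ \frac{d^2\tau}{ds^2}(0) = \varepsilon \left( 2 I(Y,Z) + \varepsilon I(Z,Z) \right) > 0 \] for every $0 < \varepsilon < \frac{2I(Y,Z)}{|I(Z,Z)|}$ (and in fact for every $\varepsilon > 0$ if $I(Z,Z) \geq 0$), so that $s=0$ is a strict local minimum of the length along the family $c_\varepsilon(t,s)$.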
Figure~\ref{conjugate2} illustrates the geometric idea behind the proof above: $c_s$ represents a generic curve of a one-parameter family corresponding to $Y$, and has the same length as $c$; adding $\varepsilon Z$ changes $c_s$ between points $u$ and $v$, say, making it longer by the twin paradox. \end{proof} \begin{figure}[h!] \begin{center} \psfrag{p}{$p$} \psfrag{q}{$q$} \psfrag{r}{$r$} \psfrag{s}{$u$} \psfrag{t}{$v$} \psfrag{c}{$c$} \psfrag{ct}{$c_s$} \psfrag{Y}{$Y$} \psfrag{DY}{$\nabla_X\hat{Y}(t^*)$} \epsfxsize=.5\textwidth \leavevmode \epsfbox{conjugate2.eps} \end{center} \caption{Proof of Theorem~\ref{Thmconjugate}.} \label{conjugate2} \end{figure} The results above can be generalized for timelike geodesics orthogonal to a spacelike hypersurface $S$. If one considers the congruence of such geodesics then at $S$ \begin{align*} \nabla_X Y_\mu & = \nabla_Y X_\mu = Y^\nu (\nabla_\nu X_\mu) = Y^\nu (\nabla_{(\nu} X_{\mu)} + \nabla_{[\nu} X_{\mu]}) \\ & = \nabla_{(\mu} X_{\nu)} Y^\nu = \frac12 \mathcal{L}_X g_{\mu\nu} Y^\nu = K_{\mu\nu} Y^\nu. \end{align*} \begin{Def} Let $(M,g)$ be a Lorentzian manifold and let $S \subset M$ be a spacelike hypersurface with second fundamental form $K$. A point $q \in M$ is said to be {\bf conjugate} to $S$ along a timelike geodesic $c$ orthogonal to $S$ at some point $p \in S$ if there exists a nonvanishing solution $Y$ of the Jacobi equation $\nabla_X \nabla_X Y = R(X,Y)X$ such that $Y_p \in T_pS$, $(\nabla_X Y_\mu )_p = (K_{\mu\nu} Y^\nu)_p$ and $Y_q = 0$. \end{Def} In an orthonormal frame $\{X,E_1, E_2, E_3\}$ parallel along $c$, again we can assume that $Y^0=0$, and have for the remaining components \[ Y^i(t) = A_{ij}(t) Y^j(0), \] where $A(t)$ is the fundamental matrix solution: \[ \begin{cases} A(0)=I \\ \dot{A}(0) = K \\ \ddot{A}_{ij} = R^i_{\,\,\,\,00k} A_{kj} \end{cases}. \] Arguing as above, we have the following result. 
\begin{Thm} \label{theta_0} Let $(M,g)$ be a $4$-dimensional Lorentzian manifold satisfying the SEC, $S \subset M$ a spacelike hypersurface and $c$ a timelike geodesic orthogonal to $S$ at some point $p \in S$. Suppose that the expansion $\theta$ of the congruence of timelike geodesics orthogonal to $S$ takes a negative value $\theta_0 < 0$ at $p$. Then there exists a point $q$ conjugate to $S$ along $c$ at a distance at most $\frac{3}{|\theta_0|}$ from $S$. \end{Thm} \begin{Thm} \label{maximizing} A timelike curve $c$ connecting the spacelike hypersurface $S \subset M$ to the point $q \in M$ locally maximizes the proper time (along any one-parameter family of timelike curves connecting $S$ to $q$) if and only if it is a timelike geodesic orthogonal to $S$ without conjugate points to $S$ between $S$ and $q$. \end{Thm} \begin{proof} The proof is basically the same as for curves connecting two points. The main differences are the formula \begin{align*} \frac{d\tau}{ds} & = - \int_{t_0}^{t_1} X \cdot \left( \frac1{f} \left\langle X, Y \right\rangle \right) dt + \int_{t_0}^{t_1} \left\langle \nabla_X \left(\frac{X}{f}\right), Y \right\rangle dt \\ & = \frac1{f(t_0,s)} \left\langle X_p, Y_p \right\rangle + \int_{t_0}^{t_1} \left\langle \nabla_X \left(\frac{X}{f}\right), Y \right\rangle dt, \end{align*} which requires $c$ to be orthogonal to $S$ at $p = c(t_0)$; the fact that $A(t_0)$ does not vanish, but instead \begin{align*} (AZ)^t A\dot{Z} & = Y^t (\dot{Y} - \dot{A}Z) = Y^t (\dot{Y} - BAZ) \\ & = Y^t (\dot{Y} - B Y) = Y^t (\dot{Y} - K Y) = 0 \end{align*} at $t_0$; and the integrated formula \begin{align*} \frac{d^2\tau}{ds^2}(0) & = (K_{\mu\nu} Y^\mu Y^\nu)(p) - \int_{t_0}^{t_1} \biggl( \left\langle \nabla_X Y , \nabla_X Y \right\rangle + \left\langle R(X,Y)X, Y \right\rangle \biggr) dt \\ & = (K_{\mu\nu} Y^\mu Y^\nu)(p) + I(Y,Y), \end{align*} which vanishes when $Y$ is a Jacobi field.
\end{proof} \section{Existence of maximizing geodesics} \label{sec4.4} \begin{Prop} \label{compact} Let $(M,g)$ be a globally hyperbolic spacetime, $S$ a Cauchy hypersurface and $p \in D^+(S)$. Then $D^+(S)\cap J^-(p)$ is compact. \end{Prop} \begin{proof} Let us define a {\bf simple neighborhood} $U \subset M$ to be a geodesically convex open set diffeomorphic to an open ball whose boundary is a compact submanifold of a larger geodesically convex open set (therefore $\partial U$ is diffeomorphic to $S^3$ and $\overline{U}$ is compact). It is clear that simple neighborhoods form a basis for the topology of $M$. Also, it is easy to show that any open cover $\{ V_\alpha \}_{\alpha \in A}$ has a countable, locally finite refinement $\{ U_n \}_{n \in \mathbb{N}}$ by simple neighborhoods. If $A=D^+(S)\cap J^-(p)$ were not compact, there would exist a countable, locally finite open cover $\{ U_n \}_{n \in \mathbb{N}}$ of $A$ by simple neighborhoods not admitting any finite subcover. Take $q_n \in A \cap U_n$ such that $q_m \neq q_n$ for $m \neq n$. The sequence $\{ q_n \}_{n \in \mathbb{N}}$ cannot have accumulation points, since any point in $M$ has a neighborhood intersecting only finitely many simple neighborhoods $U_n$. In particular, each simple neighborhood $U_n$ contains only a finite number of points in the sequence (as $\overline{U}_n$ is compact). Set $p_1=p$. Since $p_1 \in A$, we have $p_1 \in U_{n_1}$ for some $n_1 \in \mathbb{N}$. Let $q_n \not\in U_{n_1}$. Since $q_n \in J^-(p_1)$, there exists a future-directed causal curve $c_n$ connecting $q_n$ to $p_1$. This curve will necessarily intersect $\partial U_{n_1}$. Let $r_{1,n}$ be an intersection point. Since $U_{n_1}$ contains only a finite number of points in the sequence $\{ q_n \}_{n \in \mathbb{N}}$, there will exist infinitely many intersection points $r_{1,n}$. As $\partial U_{n_1}$ is compact, these will accumulate to some point $p_2 \in \partial U_{n_1}$ (cf.~Figure~\ref{proof}).
Because $\overline{U}_{n_1}$ is contained in a geodesically convex open set $V$, which can be chosen so that $v \mapsto (\pi(v),\exp(v))$ is a diffeomorphism onto $V \times V$, we have $p_2 \in J^-(p_1)$: if $\gamma_{1,n}$ is the unique causal geodesic connecting $p_1$ to $r_{1,n}$, parameterized by the global time function $t:M \to \mathbb{R}$, then the subsequence of $\{\gamma_{1,n}\}$ corresponding to a convergent subsequence of $\{r_{1,n}\}$ will converge to a causal geodesic $\gamma_1$ connecting $p_1$ to $p_2$. If $S=t^{-1}(0)$ then we have $t(r_{1,n}) \geq 0$, implying that $t(p_2) \geq 0$ and hence $p_2 \in A$. Since $p_2 \not\in U_{n_1}$, there must exist $n_2 \in \mathbb{N}$ such that $p_2 \in U_{n_2}$. Since $U_{n_2}$ contains only a finite number of points in the sequence $\{ q_n \}_{n \in \mathbb{N}}$, an infinite number of curves $c_n$ must intersect $\partial U_{n_2}$ to the past of $r_{1,n}$. Let $r_{2,n}$ be the intersection points. As $\partial U_{n_2}$ is compact, $\{r_{2,n}\}$ must accumulate to some point $p_3 \in \partial U_{n_2}$. Because $\overline{U}_{n_2}$ is contained in a geodesically convex open set, $p_3 \in J^-(p_2)$: if $\gamma_{2,n}$ is the unique causal geodesic connecting $r_{1,n}$ to $r_{2,n}$, parameterized by the global time function, then the subsequence of $\{\gamma_{2,n}\}$ corresponding to convergent subsequences of both $\{r_{1,n}\}$ and $\{r_{2,n}\}$ will converge to a causal geodesic connecting $p_2$ to $p_3$. Since $J^-(p_2) \subset J^-(p_1)$ and $t(r_{2,n}) \geq 0 \Rightarrow t(p_3) \geq 0$, we have $p_3 \in A$. Iterating the procedure above, we can construct a sequence $\{p_i\}_{i\in\mathbb{N}}$ of points in $A$ satisfying $p_i \in U_{n_i}$ with $n_i \neq n_j$ if $i \neq j$, such that $p_{i}$ is connected to $p_{i+1}$ by a causal geodesic $\gamma_i$. It is clear that $\gamma_i$ cannot intersect $S$, for $t(p_{i+1}) > t(p_{i+2}) \geq 0$. 
On the other hand, the piecewise smooth causal curve obtained by joining the curves $\gamma_i$ can easily be smoothed into a past-directed causal curve starting at $p_1$ which does not intersect $S$. Finally, such a curve is inextendible: it cannot converge to any point, as $\{p_i\}_{i\in\mathbb{N}}$ cannot accumulate. But since $p_1 \in D^+(S)$, this curve would have to intersect $S$, and we reach a contradiction. Therefore $A$ must be compact. \end{proof} \begin{figure}[h!] \begin{center} \psfrag{p=p1}{$p=p_1$} \psfrag{p2}{$p_2$} \psfrag{p3}{$p_3$} \psfrag{Un1}{$U_{n_1}$} \psfrag{Un2}{$U_{n_2}$} \epsfxsize=.7\textwidth \leavevmode \epsfbox{proof.eps} \end{center} \caption{Proof of Proposition~\ref{compact}.} \label{proof} \end{figure} \begin{Cor} \label{closed_compact} Let $(M,g)$ be a globally hyperbolic spacetime and $p,q \in M$. Then \begin{enumerate}[(i)] \item $J^+(p)$ is closed; \item $J^+(p) \cap J^-(q)$ is compact. \end{enumerate} \end{Cor} \begin{proof} Exercise. \end{proof} Proposition~\ref{compact} is a key ingredient in establishing the following fundamental result. \begin{Thm} \label{maximal} Let $(M,g)$ be a globally hyperbolic spacetime with Cauchy hypersurface $S$, and $p \in D^+(S)$. Then, among all timelike curves connecting $p$ to $S$, there exists a timelike curve with maximal length. This curve is a timelike geodesic, orthogonal to $S$. \end{Thm} \begin{proof} Consider the set $T(S,p)$ of all timelike curves connecting $S$ to $p$. Since we can always use the global time function $t:M\to \mathbb{R}$ as a parameter, these curves are determined by their images, which are compact subsets of the compact set $A=D^+(S) \cap J^-(p)$.
As is well known (see for instance \cite{Munkres00}), the set $C(A)$ of all compact subsets of $A$ is a compact metric space for the {\bf Hausdorff metric} $d_H$, defined as follows: if $d:M\times M \to \mathbb{R}$ is a metric yielding the topology of $M$, \[ d_H(K,L) = \inf \{ \varepsilon > 0 \mid K \subset U_\varepsilon(L) \text{ and } L \subset U_\varepsilon(K) \}, \] where $U_{\varepsilon}(K)$ is an $\varepsilon$-neighborhood of $K$ for the metric $d$. Therefore, the closure $C(S,p) = \overline{T(S,p)}$ is a compact subset of $C(A)$. It is not difficult to show that $C(S,p)$ can be identified with the set of {\bf continuous causal curves} connecting $S$ to $p$ (a continuous curve $c:[0,t(p)]\to M$ is said to be {\bf causal} if $c(t_2) \in J^+(c(t_1))$ whenever $t_2 > t_1$). The length function $\tau:T(S,p) \to \mathbb{R}$ is defined by \[ \tau(c)=\int_0^{t(p)} |\dot{c}(t)| dt. \] This function is {\bf upper semicontinuous}, i.e.~continuous for the topology \[ \mathcal{O}=\{(-\infty,a) \mid -\infty \leq a \leq +\infty \} \] in $\mathbb{R}$. Indeed, let $c \in T(S,p)$ be parameterized by its arclength $u$. For a sufficiently small $\varepsilon > 0$, the function $u$ can be extended to the $\varepsilon$-neighborhood $U_{\varepsilon}(c)$ in such a way that its level hypersurfaces are spacelike and orthogonal to $c$, that is, $-\operatorname{grad} u$ is timelike and coincides with $\dot{c}$ on $c$ (cf.~Figure~\ref{proof2}). If $\gamma \in T(S,p)$ is in the open ball $B_\varepsilon(c) \subset C(A)$ for the Hausdorff metric $d_H$ then we can use $u$ as a parameter, thus obtaining \[ du(\dot{\gamma}) = 1 \Leftrightarrow \langle \dot{\gamma}, \operatorname{grad} u \rangle = 1.
\] Therefore $\dot{\gamma}$ can be decomposed as \[ \dot{\gamma} = \frac1{\langle \operatorname{grad} u, \operatorname{grad} u \rangle} \operatorname{grad} u + X, \] where $X$ is spacelike and orthogonal to $\operatorname{grad} u$, and so \[ |\dot{\gamma}| = \left| \frac1{\langle \operatorname{grad} u, \operatorname{grad} u \rangle} + \langle X,X \rangle \right|^\frac12. \] Given $\delta > 0$, we can choose $\varepsilon>0$ sufficiently small so that \[ -\frac1{\langle \operatorname{grad} u, \operatorname{grad} u \rangle} < \left(1 + \frac{\delta}{2\tau(c)}\right)^2 \] on the $\varepsilon$-neighborhood $U_{\varepsilon}(c)$ (as $\langle \operatorname{grad} u, \operatorname{grad} u \rangle=-1$ along $c$). We have \[ \tau(\gamma) = \int_0^{t(p)} \left|\frac{d \gamma}{dt} \right| \, dt = \int_0^{t(p)} |\dot{\gamma}| \frac{du}{dt} \, dt = \int_{u(\gamma\cap S)}^{\tau(c)} |\dot{\gamma}| \, d u, \] where we have to allow for the fact that $c$ is not necessarily orthogonal to $S$, and so the initial point of $\gamma$ is not necessarily at $u=0$ (cf.~Figure~\ref{proof2}). Consequently, \begin{align*} \tau(\gamma) & = \int_{u(\gamma\cap S)}^{\tau(c)} \left| -\frac1{\langle \operatorname{grad} u, \operatorname{grad} u \rangle} - \langle X,X \rangle \right|^\frac12 \, d u \\ & < \int_{u(\gamma\cap S)}^{\tau(c)} \left(1 + \frac{\delta}{2\tau(c)}\right) \, d u = \left(1 + \frac{\delta}{2\tau(c)}\right) \left(\tau(c) - u(\gamma\cap S)\right). \end{align*} Choosing $\varepsilon$ sufficiently small so that \[ |u|< \left( \frac1{\tau(c)} + \frac{2}{\delta} \right)^{-1} \] on $S \cap U_{\varepsilon}(c)$, we obtain $\tau(\gamma) < \tau(c) + \delta$, proving upper semicontinuity in $T(S,p)$. As a consequence, the length function can be extended to $C(S,p)$ through \[ \tau(c)=\lim_{\varepsilon \to 0} \sup\{ \tau(\gamma) \mid \gamma \in B_\varepsilon(c) \cap T(S,p) \} \] (as for $\varepsilon>0$ sufficiently small the supremum will be finite). 
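It is worth noting that the length function is not {\em lower} semicontinuous, so upper semicontinuity is the best one can hope for. For instance (a standard example in flat spacetime), in two-dimensional Minkowski space the timelike segment connecting $(0,0)$ to $(T,0)$ can be approximated in the Hausdorff metric by piecewise smooth timelike zigzag curves $c_n(t)=(t,x_n(t))$ with $|\dot{x}_n| = 1 - \frac1n$ and $|x_n| \leq \frac{T}{n}$, whose lengths \[ \tau(c_n) = \int_0^T \left( 1 - \dot{x}_n^2 \right)^\frac12 dt = T \left( \frac{2}{n} - \frac1{n^2} \right)^\frac12 \] tend to zero, whereas the segment has length $T$.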
Also, it is clear that if $c \in T(S,p)$ then the upper semicontinuity of the length forces the two definitions of $\tau(c)$ to coincide. The extension of the length function to $C(S,p)$ is trivially upper semicontinuous: given $c \in C(S,p)$ and $\delta > 0$, let $\varepsilon>0$ be such that $\tau(\gamma) < \tau(c) + \frac{\delta}2$ for any $\gamma \in B_{2\varepsilon}(c) \cap T(S,p)$. Then it is clear that $\tau(c') \leq \tau(c) + \frac{\delta}2 < \tau(c) + \delta$ for any $c' \in B_{\varepsilon}(c) \cap C(S,p)$. Finally, we notice that the compact sets of $\mathbb{R}$ for the topology $\mathcal{O}$ are the sets with a maximum. Therefore, the length function attains a maximum at some point $c \in C(S,p)$. All that remains to be seen is that the maximum is also attained at a smooth timelike curve $\gamma$. To do so, cover $c$ with finitely many geodesically convex neighborhoods and choose points $p_1, \ldots, p_k$ in $c$ such that $p_1 \in S$, $p_k=p$ and the portion of $c$ between $p_{i-1}$ and $p_{i}$ is contained in a geodesically convex neighborhood for all $i=2, \ldots, k$. It is clear that there exists a sequence $c_n \in T(S,p)$ such that $c_n \to c$ and $\tau(c_n) \to \tau(c)$. Let $t_i=t(p_i)$ and $p_{i,n}$ be the intersection of $c_n$ with $t^{-1}(t_i)$. Replace $c_n$ by the sectionally geodesic curve $\gamma_n$ obtained by joining $p_{i-1,n}$ to $p_{i,n}$ in the corresponding geodesically convex neighborhood. Then $\tau(\gamma_n) \geq \tau(c_n)$, and therefore $\tau(\gamma_n) \to \tau(c)$. Since each sequence $p_{i,n}$ converges to $p_i$, $\gamma_n$ converges to the sectionally geodesic curve $\gamma$ obtained by joining $p_{i-1}$ to $p_{i}$ ($i=2, \ldots, k$), and it is clear that $\tau(\gamma_n) \to \tau(\gamma)=\tau(c)$. Therefore $\gamma$ is a point of maximum for the length. Finally, we notice that $\gamma$ must be smooth at the points $p_i$, for otherwise we could increase its length by using the twin paradox. 
Therefore $\gamma$ must be a timelike geodesic. It is also clear that $\gamma$ must be orthogonal to $S$, for otherwise it would be possible to increase its length by small deformations. \end{proof} \begin{figure}[h!] \begin{center} \psfrag{p}{$p$} \psfrag{c}{$c$} \psfrag{g}{$\gamma$} \psfrag{S}{$S$} \psfrag{u=0}{$u=0$} \psfrag{u=t(c)}{$u=\tau(c)$} \psfrag{Ue(c)}{$U_{\varepsilon}(c)$} \epsfxsize=.6\textwidth \leavevmode \epsfbox{proof2.eps} \end{center} \caption{Proof of Theorem~\ref{maximal}.} \label{proof2} \end{figure} \section{Hawking's singularity theorem} \label{sec4.5} We now have all the necessary ingredients to prove the Hawking singularity theorem. \begin{Def} A spacetime $(M,g)$ is said to be {\bf singular} if it is not geodesically complete. \end{Def} \begin{Thm} ({\bf Hawking \cite{H66}}) \label{Hawking} Let $(M,g)$ be a globally hyperbolic spacetime satisfying the strong energy condition, and suppose that the expansion $\theta$ of the congruence of future-pointing timelike geodesics orthogonal to a Cauchy hypersurface $S$ satisfies $\theta\leq\theta_0 < 0$ on $S$. Then $(M,g)$ is singular. \end{Thm} \begin{proof} We will show that no future-directed timelike geodesic orthogonal to $S$ can be extended to proper time greater than $\tau_0=-\frac{3}{\theta_0}$ to the future of $S$. Suppose that this were not so. Then there would exist a future-directed timelike geodesic $c$ orthogonal to $S$, parameterized by proper time, defined in an interval $[0,\tau_0+\varepsilon]$ for some $\varepsilon>0$. Let $p=c(\tau_0+\varepsilon)$. According to Theorem~\ref{maximal}, there would exist a timelike geodesic $\gamma$ with maximal length connecting $S$ to $p$, orthogonal to $S$. Because $\tau(c)=\tau_0+\varepsilon$, we would necessarily have $\tau(\gamma)\geq\tau_0+\varepsilon$.
Theorem~\ref{theta_0} guarantees that $\gamma$ would develop a conjugate point at a distance of at most $\tau_0$ to the future of $S$, and Theorem~\ref{maximizing} states that $\gamma$ would cease to be maximizing beyond this point. Therefore we arrive at a contradiction. \end{proof} \begin{Remark} It should be clear that $(M,g)$ is singular if the condition $\theta\leq\theta_0 < 0$ on a Cauchy hypersurface $S$ is replaced by the condition $\theta\geq\theta_0 > 0$ on $S$. In this case, no {\bf past-directed} timelike geodesic orthogonal to $S$ can be extended to proper time greater than $\tau_0=\frac{3}{\theta_0}$ to the {\bf past} of $S$. \end{Remark} \begin{Example} \hspace{1cm} \begin{enumerate} \item The FLRW models with $\alpha>0$ and $\Lambda = 0$ are globally hyperbolic, and satisfy the strong energy condition (as $\rho > 0$). Moreover, the expansion of the congruence tangent to $\frac{\partial}{\partial t}$ is $\theta = \frac{3\dot{a}}{a}$. Assume that the model is expanding at time $t_0$. Then $\theta = \theta_0 = \frac{3\dot{a}(t_0)}{a(t_0)}>0$ on the Cauchy hypersurface $S = \{ t=t_0 \}$, and hence Theorem~\ref{Hawking} guarantees that this model is singular to the past of $S$ (i.e.~there exists a big bang). Moreover, Theorem~\ref{Hawking} implies that this singularity is generic: any sufficiently small perturbation of an expanding FLRW model satisfying the strong energy condition will also be singular. Loosely speaking, any expanding universe must have begun at a big bang. \item The region $\{ r < 2m \}$ of the Schwarzschild solution is globally hyperbolic, and satisfies the strong energy condition (as $R_{\mu\nu}=0$). The metric can be written in this region as \[ \hspace{2cm} g = - d\tau^2 + \left( \frac{2m}r - 1 \right) dt^2 + r^2 \left(d\theta^2 + \sin^2 \theta d\varphi^2 \right), \] where \[ \tau = \int_r^{2m} \left( \frac{2m}u - 1 \right)^{-\frac12} du. 
\] Therefore the inside of the black hole can be pictured as a cylinder $\mathbb{R} \times S^2$ whose shape is evolving in time. As $r \to 0$, the $S^2$ contracts to a singularity, with the $t$-direction expanding. Since \[ K = \frac{dr}{d\tau} \left( -\frac{m}{r^2} dt^2 + r d\theta^2 + r \sin^2 \theta d\varphi^2\right), \] we have \[ \theta = \left( \frac{2m}r - 1 \right)^{-\frac12}\left( \frac{2}r - \frac{3m}{r^2} \right). \] Therefore we have $\theta = \theta_0 < 0$ on any Cauchy hypersurface $S = \{ r=r_0 \}$ with $r_0 < \frac{3m}2$, and hence Theorem~\ref{Hawking} guarantees that the Schwarzschild solution is singular to the future of $S$. Moreover, Theorem~\ref{Hawking} implies that this singularity is generic: any sufficiently small perturbation of the Schwarzschild solution satisfying the strong energy condition will also be singular. Loosely speaking, once the collapse has advanced long enough, nothing can prevent the formation of a singularity. \item It should be noted that Theorem~\ref{Hawking} proves geodesic incompleteness, not the existence of curvature singularities. For instance, it applies to the Milne universe, or a globally hyperbolic region of the anti-de Sitter universe, whose curvature is bounded (they are simply globally hyperbolic regions in larger, inextendible Lorentzian manifolds). \end{enumerate} \end{Example} \section{Penrose's singularity theorem} \label{sec4.6} Let $(M,g)$ be a globally hyperbolic spacetime, $S$ a Cauchy hypersurface with future-pointing unit normal vector field $N$, and $\Sigma \subset S$ a compact $2$-dimensional submanifold with unit normal vector field $n$ in $S$. Let $c_p$ be the null geodesic with initial condition $N_p+n_p$ for each point $p \in \Sigma$. We define a smooth map $\exp:(-\varepsilon,\varepsilon) \times \Sigma \to M$ for some $\varepsilon > 0$ as $\exp(r,p)=c_p(r)$. \begin{Def} The critical values of $\exp$ are said to be {\bf conjugate points} to $\Sigma$. 
\end{Def} Loosely speaking, conjugate points are points where geodesics starting orthogonally at nearby points of $\Sigma$ intersect. Let $q=\exp(r_0,p)$ be a point not conjugate to $\Sigma$. If $\varphi$ is a local parameterization of $\Sigma$ around $p$, then we can construct a system of local coordinates $(u,r,x^2,x^3)$ on some open set $V \ni q$ by using the map \[ (u,r,x^2,x^3) \mapsto \exp(r, \psi_u(\varphi(x^2,x^3))), \] where $\psi_u$ is the flow along the timelike geodesics orthogonal to $S$ and the map $\exp:(-\varepsilon,\varepsilon) \times \psi_u(\Sigma) \to M $ is defined as above. Since $\frac{\partial}{\partial r}$ is tangent to null geodesics, we have $g_{rr}=\left\langle\frac{\partial}{\partial r},\frac{\partial}{\partial r}\right\rangle = 0$. On the other hand, we have \begin{align*} \frac{\partial g_{r\mu}}{\partial r} & = \frac{\partial}{\partial r} \left\langle\frac{\partial}{\partial r},\frac{\partial}{\partial x^\mu}\right\rangle = \left\langle\frac{\partial}{\partial r},\nabla_{\frac{\partial}{\partial r}}\frac{\partial}{\partial x^\mu}\right\rangle \\ & = \left\langle\frac{\partial}{\partial r},\nabla_{\frac{\partial}{\partial x^\mu}}\frac{\partial}{\partial r}\right\rangle = \frac12 \frac{\partial}{\partial x^\mu} \left\langle\frac{\partial}{\partial r},\frac{\partial}{\partial r}\right\rangle = 0, \end{align*} for $\mu = 0,1,2,3$. Since $g_{ru}=-1$ and $g_{r2}=g_{r3}=0$ on $\psi_u(\Sigma)$, we have $g_{ru}=-1$ and $g_{r2}=g_{r3}=0$ on $V$. Therefore the metric is written in this coordinate system as \[ g = \alpha du^2 - 2 du dr + 2 \beta_A du dx^A + \gamma_{AB} dx^A dx^B. 
\] Since \[ \det \left( \begin{matrix} \alpha & -1 & \beta_2 & \beta_3 \\ -1 & 0 & 0 & 0 \\ \beta_2 & 0 & \gamma_{22} & \gamma_{23} \\ \beta_3 & 0 & \gamma_{32} & \gamma_{33} \end{matrix} \right) = -\det \left( \begin{matrix} \gamma_{22} & \gamma_{23} \\ \gamma_{32} & \gamma_{33} \end{matrix} \right), \] we see that the functions \[ \gamma_{AB} = \left\langle \frac{\partial}{\partial x^A}, \frac{\partial}{\partial x^B} \right\rangle \] form a positive definite matrix, and so $g$ induces a Riemannian metric on the $2$-dimensional surfaces $\exp(r,\psi_u(\Sigma))$, which are then spacelike. Since the vector fields $\frac{\partial}{\partial x^A}$ can always be defined along $c_{p}$, the matrix $(\gamma_{AB})$ is also well defined along $c_p$, even at points where the coordinate system breaks down, i.e.~at points which are conjugate to $\Sigma$. These are the points for which $\gamma=\det\left(\gamma_{AB}\right)$ vanishes, since only then will $\left\{\frac{\partial}{\partial u},\frac{\partial}{\partial r},\frac{\partial}{\partial x^2},\frac{\partial}{\partial x^3}\right\}$ fail to be linearly independent. In fact the vector fields $\frac{\partial}{\partial x^A}$ are Jacobi fields along $c_p$. It is easy to see that \[ \Gamma^u_{ur} = \Gamma^u_{rr} = \Gamma^u_{rA} = \Gamma^r_{rr} = \Gamma^A_{rr} = 0 \quad \text{ and } \quad \Gamma^A_{rB} = \gamma^{AC} \beta_{CB}, \] where $(\gamma^{AB})=(\gamma_{AB})^{-1}$ and $\beta_{AB} = \frac12 \frac{\partial \gamma_{AB}}{\partial r}$. Consequently, \begin{align*} R_{rr} & = R_{urr}^{\;\;\;\;\;u} + R_{Arr}^{\;\;\;\;\;A} = \left( - \frac{\partial \Gamma^A_{Ar}}{\partial r} - \Gamma^B_{Ar} \Gamma^A_{rB}\right)\\ & = - \frac{\partial}{\partial r} \left(\gamma^{AB} \beta_{AB}\right) - \gamma^{BC} \gamma^{AD} \beta_{CA} \beta_{DB}. 
\end{align*} The quantity \[ \theta = \gamma^{AB} \beta_{AB} \] appearing in this expression is called the {\bf expansion} of the null geodesics, and has an important geometric meaning: \[ \theta = \frac12 \operatorname{tr} \left((\gamma_{AB})^{-1} \frac{\partial}{\partial r} (\gamma_{AB})\right) = \frac12 \frac{\partial}{\partial r} \log \left(\det\left(\gamma_{AB}\right)\right) = \frac{\partial}{\partial r} \log \left(\det\left(\gamma_{AB}\right)\right)^\frac12. \] Therefore the expansion yields the variation of the area element of the spacelike $2$-dimensional surfaces $\exp(r,\psi_u(\Sigma))$. More importantly for our purposes, we see that a singularity of the expansion indicates a zero of $\det\left(\gamma_{AB}\right)$, i.e.~a conjugate point to $\psi_u(\Sigma)$. \begin{Prop}\label{conjugate_null} Let $(M,g)$ be a globally hyperbolic spacetime satisfying the null energy condition, $S \subset M$ a Cauchy hypersurface with future-pointing unit normal vector field $N$, $\Sigma \subset S$ a compact $2$-dimensional submanifold with unit normal vector field $n$ in $S$ and $p \in \Sigma$ a point where $\theta=\theta_0 < 0$. Then the null geodesic $c_p$ with initial condition $N_p+n_p$ contains at least one point conjugate to $\Sigma$, at an affine parameter distance of at most $-\frac{2}{\theta_0}$ to the future of $\Sigma$ (assuming that it can be extended that far). \end{Prop} \begin{proof} Since $(M,g)$ satisfies the null energy condition, we have $R_{rr}=R_{\mu\nu}\left(\frac{\partial}{\partial r}\right)^\mu\left(\frac{\partial}{\partial r}\right)^\nu \geq 0$. Consequently, \[ \frac{\partial \theta}{\partial r} + \gamma^{BC} \gamma^{AD} \beta_{CA} \beta_{DB} \leq 0.
\] Choosing an orthonormal basis (where $\gamma^{AB}=\delta_{AB}$), and using the inequality \[ (\operatorname{tr} A)^2 \leq n \operatorname{tr}(A^tA) \] for square $n \times n$ matrices, it is easy to show that \[ \gamma^{BC} \gamma^{AD} \beta_{CA} \beta_{DB} = \beta_{BA} \beta_{AB} = \operatorname{tr}\left((\beta_{AB}) (\beta_{AB})^t\right) \geq \frac12 \theta^2. \] Consequently $\theta$ must satisfy \[ \frac{\partial \theta}{\partial r} + \frac12 \theta^2 \leq 0. \] Integrating this inequality yields \[ \frac1{\theta} \geq \frac1{\theta_0} + \frac{r}2, \] and hence $\theta$ must blow up at a value of $r$ no greater than $-\frac{2}{\theta_0}$. \end{proof} We define the {\bf chronological future} and the {\bf causal future} of the compact surface $\Sigma$ as \[ I^+(\Sigma)=\bigcup_{p \in \Sigma} I^+(p) \quad \text{ and } \quad J^+(\Sigma)=\bigcup_{p \in \Sigma} J^+(p) \] (with similar definitions for the {\bf chronological past} and the {\bf causal past} of $\Sigma$). It is clear that $I^+(\Sigma)$, being the union of open sets, is itself open, and also that $J^+(\Sigma) \subset \overline{I^+(\Sigma)}$ and $I^+(\Sigma) = \operatorname{int} J^+(\Sigma)$. On the other hand, it is easy to generalize Proposition~\ref{compact} (and consequently Corollary~\ref{closed_compact}) to the corresponding statements with compact surfaces replacing points. In particular, $J^+(\Sigma)$ is closed. Therefore \[ \partial J^+(\Sigma) = \partial I^+(\Sigma) = J^+(\Sigma) \setminus I^+(\Sigma), \] and so, by a straightforward generalization of Corollary~\ref{null_geodesic} in Chapter~\ref{chapter3}, every point in this boundary can be reached from a point in $\Sigma$ by a future-directed null geodesic. Moreover, this geodesic must be orthogonal to $\Sigma$. Indeed, at $\Sigma$ we have \[ \frac{\partial}{\partial u} = N \quad \text{ and } \quad \frac{\partial}{\partial r} = N + n, \] and so the metric takes the form \[ g = - du^2 - 2 du dr + \gamma_{AB} dx^A dx^B. 
\] If $c:I \subset \mathbb{R} \to M$ is a future-directed null geodesic with $c(0)\in\Sigma$, its initial tangent vector \[ \dot{c}(0) = \dot{u} \frac{\partial}{\partial u} + \dot{r} \frac{\partial}{\partial r} + \dot{x}^A \frac{\partial}{\partial x^A} = (\dot{u} + \dot{r}) N + \dot{r} n + \dot{x}^A \frac{\partial}{\partial x^A} \] must satisfy \[ \dot{u} (\dot{u} + 2 \dot{r}) = \gamma_{AB} \dot{x}^A \dot{x}^B. \] Since $c$ is future-directed we must have $\dot{u} + \dot{r}>0$. On the other hand, by choosing the unit normal to $\Sigma$ on $S$ to be either $n$ or $-n$, we can assume $\dot{r}\geq 0$. If $c$ is not orthogonal to $\Sigma$ we then have \[ \gamma_{AB} \dot{x}^A \dot{x}^B > 0 \Rightarrow \dot{u} (\dot{u} + 2 \dot{r}) > 0 \Rightarrow \dot{u} > 0. \] Now the region where $u>0$ and $r \geq 0$ is clearly a subset of $I^+(\Sigma)$, since its points can be reached from $\Sigma$ by a sectionally smooth curve composed of an arc of timelike geodesic and an arc of null geodesic. Therefore, we see that if $c$ is not orthogonal to $\Sigma$ then $c(t)\in I^+(\Sigma)$ for all $t>0$. Even future-directed null geodesics orthogonal to $\Sigma$ may eventually enter $I^+(\Sigma)$. A sufficient condition for this to happen is given in the following result. \begin{Prop} \label{conjugate_null_I+} Let $(M,g)$ be a globally hyperbolic spacetime, $S$ a Cauchy hypersurface with future-pointing unit normal vector field $N$, $\Sigma \subset S$ a compact $2$-dimensional submanifold with unit normal vector field $n$ in $S$, $p \in \Sigma$, $c_p$ the null geodesic through $p$ with initial condition $N_p+n_p$ and $q = c_p(r)$ for some $r>0$. If $c_p$ has a conjugate point between $p$ and $q$ then $q \in I^+(\Sigma)$. \end{Prop} \begin{proof} We will offer only a sketch of the proof. Let $s$ be the first conjugate point along $c_p$ between $p$ and $q$. 
Since $s$ is conjugate to $\Sigma$, there exists another null geodesic $\gamma$ starting at $\Sigma$ which (approximately) intersects $c_p$ at $s$. The piecewise smooth null curve obtained by following $\gamma$ between $\Sigma$ and $s$, and $c_p$ between $s$ and $q$ is a causal curve but not a null geodesic. This curve can be easily smoothed while remaining causal and nongeodesic, and so by the generalization of Corollary~\ref{null_geodesic} in Chapter~\ref{chapter3} we have $q \in I^+(\Sigma)$. \end{proof} \begin{figure}[h!] \begin{center} \psfrag{p}{$p$} \psfrag{q}{$q$} \psfrag{s}{$s$} \psfrag{S}{$S$} \psfrag{Sg}{$\Sigma$} \psfrag{g}{$\gamma$} \psfrag{cp}{$c_p$} \epsfxsize=.8\textwidth \leavevmode \epsfbox{proof3.eps} \end{center} \caption{Proof of Proposition~\ref{conjugate_null_I+}.} \label{proof3} \end{figure} \begin{Def} Let $(M,g)$ be a globally hyperbolic spacetime and $S$ a Cauchy hypersurface with future-pointing unit normal vector field $N$. A compact $2$-dimensional submanifold $\Sigma \subset S$ with unit normal vector field $n$ in $S$ is said to be {\bf trapped} if the expansions $\theta^+$ and $\theta^-$ of the null geodesics with initial conditions $N + n$ and $N - n$ are both negative everywhere on $\Sigma$. \end{Def} We now have all the necessary ingredients to prove the Penrose singularity theorem. \begin{Thm} ({\bf Penrose \cite{P65}}) \label{Penrose} Let $(M,g)$ be a connected globally hyperbolic spacetime with a noncompact Cauchy hypersurface $S$, satisfying the null energy condition. If $S$ contains a trapped surface $\Sigma$ then $(M,g)$ is singular. \end{Thm} \begin{proof} Let $t:M \to \mathbb{R}$ be a global time function such that $S=t^{-1}(0)$. The integral curves of $\operatorname{grad} t$, being timelike, intersect $S$ exactly once, and $\partial I^+(\Sigma)$ at most once. This defines a continuous injective map $\pi: \partial I^+(\Sigma) \to S$, whose image is open.
Indeed, if $q = \pi(p)$, then all points in some neighborhood of $q$ are images of points in $\partial I^+(\Sigma)$, as otherwise there would be a sequence $q_k \in S$ with $q_k \to q$ such that the integral curves of $\operatorname{grad} t$ through $q_k$ would not intersect $\partial I^+(\Sigma)$. Letting $r_k$ be the intersections of these curves with the Cauchy hypersurface $t^{-1}(t(r))$, for some point $r$ to the future of $p$ along the integral curve of $\operatorname{grad} t$, we would have $r_k \to r$, and so $r_k \in I^+(\Sigma)$ for sufficiently large $k$ (as $I^+(\Sigma)$ is open), leading to a contradiction. Since $\Sigma$ is trapped (and compact), there exists $\theta_0 < 0$ such that the expansions $\theta^+$ and $\theta^-$ of the null geodesics orthogonal to $\Sigma$ both satisfy $\theta^+, \theta^- \leq \theta_0$. We will show that there exists a future-directed null geodesic orthogonal to $\Sigma$ which cannot be extended to an affine parameter greater than $r_0=-\frac{2}{\theta_0}$ to the future of $\Sigma$. Suppose that this was not so. Then, according to Proposition~\ref{conjugate_null}, any null geodesic orthogonal to $\Sigma$ would have a conjugate point at an affine parameter distance of at most $r_0$ to the future of $\Sigma$, after which it would be in $I^+(\Sigma)$, by Proposition~\ref{conjugate_null_I+}. Consequently, $\partial I^+(\Sigma)$ would be a (closed) subset of the compact set \[ \exp^+([0,r_0] \times \Sigma) \cup \exp^-([0,r_0] \times \Sigma) \] (where $\exp^+$ and $\exp^-$ refer to the exponential map constructed using the unit normals $n$ and $-n$), hence compact. Therefore the image of $\pi$ would also be compact, hence closed as well as open. Since $M$, and therefore $S$, are connected, the image of $\pi$ would be $S$, which would then be homeomorphic to $\partial I^+(\Sigma)$. But $S$ is noncompact by hypothesis, and we reach a contradiction.
\end{proof} \begin{Remark} It should be clear that $(M,g)$ is singular if the condition of existence of a trapped surface is replaced by the condition of existence of an {\bf anti-trapped surface}, that is, a compact surface $\Sigma \subset S$ such that the expansions of null geodesics orthogonal to $\Sigma$ are both positive. In this case, there exists a {\bf past-directed} null geodesic orthogonal to $\Sigma$ which cannot be extended to an affine parameter greater than $r_0=\frac{2}{\theta_0}$ to the {\bf past} of $\Sigma$. \end{Remark} \begin{Example} \hspace{1cm} \begin{enumerate} \item The region $\{ r < 2m \}$ of the Schwarzschild solution is globally hyperbolic, and satisfies the null energy condition (as $R_{\mu\nu}=0$). Since $r$ (or $-r$) is clearly a time function (depending on the choice of time orientation), it must increase (or decrease) along any future-pointing null geodesic, and therefore any sphere $\Sigma$ of constant $(t,r)$ is anti-trapped (or trapped). Since any Cauchy hypersurface is diffeomorphic to $\mathbb{R} \times S^2$, hence noncompact, we conclude from Theorem~\ref{Penrose} that the Schwarzschild solution is singular to the past (or future) of $\Sigma$. Moreover, Theorem~\ref{Penrose} implies that this singularity is generic: any sufficiently small perturbation of the Schwarzschild solution satisfying the null energy condition will also be singular. Loosely speaking, once the collapse has advanced long enough, nothing can prevent the formation of a singularity. \item The FLRW models with $\alpha>0$ and $\Lambda = 0$ are globally hyperbolic, and satisfy the null energy condition. Moreover, radial null geodesics satisfy \[ \frac{dr}{dt} = \pm \frac1a \sqrt{1-kr^2}. \] Therefore, if we start with a sphere $\Sigma$ of constant $(t,r)$ and follow the orthogonal null geodesics along the direction of increasing or decreasing $r$, we obtain spheres whose radii $ar$ satisfy \[ \frac{d}{dt} (ar) = \dot{a}r+a\dot{r} = \dot{a}r \pm \sqrt{1-kr^2}.
\] Assume that the model is expanding, with the big bang at $t=0$, and spatially noncompact (in particular $k \neq 1$). Then, for sufficiently small $t>0$, the sphere $\Sigma$ is anti-trapped, and hence Theorem~\ref{Penrose} guarantees that this model is singular to the past of $\Sigma$ (i.e.~there exists a big bang). Moreover, Theorem~\ref{Penrose} implies that this singularity is generic: any sufficiently small perturbation of an expanding, spatially noncompact FLRW model satisfying the null energy condition will also be singular. Loosely speaking, any expanding universe must have begun at a big bang. \end{enumerate} \end{Example} \section{Exercises} \label{sec4.7} \begin{enumerate} \item Let $g$ be a Lorentzian metric given in the Gauss Lemma form, \[ g = - dt^2 + h_{ij} dx^i dx^j, \] and consider the geodesic congruence tangent to $\frac{\partial}{\partial t}$. \begin{enumerate} \item Show that \[ B_{ij} = \Gamma^0_{ij} = \frac12 \frac{\partial h_{ij}}{\partial t}, \] that is, \[ B = \frac12 \mathcal{L}_{\frac{\partial}{\partial t}} g = K, \] where $K$ is the second fundamental form of the hypersurfaces of constant $t$. \item Conclude that the expansion of the congruence of the galaxies in a FLRW model is \[ \theta = \frac{3\dot{a}}{a} = 3H. \] \end{enumerate} \item Let $(M,g)$ be a Lorentzian manifold. \begin{enumerate} \item Use the formula for the Lie derivative of a tensor, \[ \hspace{2cm} (\mathcal{L}_X g) (Y,Z) = X \cdot g(Y,Z) - g(\mathcal{L}_X Y, Z) - g(Y, \mathcal{L}_X Z) \] to show that \[ (\mathcal{L}_X g) (Y,Z) = g(\nabla_Y X, Z) + g(Y, \nabla_Z X). \] \item Show that this formula can be written as \[ (\mathcal{L}_X g)_{\mu\nu} = \nabla_\mu X_\nu + \nabla_\nu X_\mu. \] \item Suppose that $X$ is a Killing vector field, i.e.~$\mathcal{L}_X g=0$. Use the Killing equation \[ \nabla_\mu X_\nu + \nabla_\nu X_\mu = 0 \] to show that $X$ is a solution of the Jacobi equation. Give a geometric interpretation of this fact. 
\end{enumerate} \item Let $T$ be a diagonalizable energy-momentum tensor, that is, $\left(T_{\mu\nu}\right) = \operatorname{diag}(\rho, p_1,p_2, p_3)$ on some orthonormal frame $\{E_0,E_1,E_2,E_3\}$. Show that: \begin{enumerate} \item $T$ satisfies the SEC if and only if $\rho+\sum_{i=1}^3p_i\geq 0$ and $\rho+p_i\geq 0$ ($i=1,2,3$). \item $T$ satisfies the WEC if and only if $\rho\geq 0$ and $\rho+p_i\geq 0$ ($i=1,2,3$). \item $T$ satisfies the DEC if and only if $\rho\geq |p_i|$ ($i=1,2,3$). \item $T$ satisfies the NEC if and only if $\rho+p_i\geq 0$ ($i=1,2,3$). \item The first three conditions are independent except that the DEC implies the WEC. \item The first three conditions imply the NEC. \end{enumerate} \item Let $(M,g)$ be the globally hyperbolic Lorentzian manifold corresponding to the exterior region of the Schwarzschild solution, that is, $M=\mathbb{R} \times \left(\mathbb{R}^3 \setminus \overline{B_{2m}(0)}\right)$ and \[ \hspace{2cm} g = - \left( 1 - \frac{2m}r \right) dt^2 + \left( 1 - \frac{2m}r \right)^{-1} dr^2 + r^2 \left(d\theta^2 + \sin^2 \theta d \varphi^2 \right) \] (with $m>0$). \begin{enumerate} \item Show that for any $r_0>2m$ the curve \[ c(t)=\left(t,r_0,\frac\pi2,\sqrt{\frac{m}{{r_0}^3}} t\right) \] is a timelike, null or spacelike geodesic, according to whether $r_0>3m$, $r_0=3m$ or $r_0<3m$. \item Argue that the point $q=\left(\pi\sqrt{\frac{{r_0}^3}{m}},r_0,\frac\pi2,\pi\right)$ is conjugate to the point $p=\left(0,r_0,\frac\pi2,0\right)$ along $c$ (note that this can be done without solving the Jacobi equation). \item Show explicitly that if $r_0>3m$ then $c$ stops being maximizing for $t>\pi\sqrt{\frac{{r_0}^3}{m}}$. \end{enumerate} \item Let $(M,g)$ be a globally hyperbolic spacetime and $p,q \in M$ with $q \in I^+(p)$. Show that among all timelike curves connecting $p$ to $q$ there exists a timelike curve with maximal length, which is a timelike geodesic. 
\item Use ideas similar to those leading to the proof of Hawking's singularity theorem to prove {\bf Myers's theorem}: if $(M,\langle\cdot,\cdot\rangle)$ is a complete Riemannian manifold whose Ricci curvature satisfies $R_{\mu\nu} X^\mu X^\nu \geq \varepsilon g_{\mu\nu} X^\mu X^\nu$ for some $\varepsilon > 0$ then $M$ is compact. Can these ideas be used to prove a singularity theorem in Riemannian geometry? \item Explain why Hawking's singularity theorem does not apply to each of the following geodesically complete Lorentzian manifolds: \begin{enumerate} \item Minkowski's spacetime; \item Einstein's universe; \item de Sitter's universe; \item Anti-de Sitter spacetime. \end{enumerate} \item Consider the metric \[ \hspace{2cm} ds^2 = \alpha \, du^2 - 2 \, du \, dr + 2 \beta_A \, du \, dx^A + \gamma_{AB} \, dx^A dx^B. \] \begin{enumerate} \item Show that the Christoffel symbols satisfy \[ \hspace{2cm} \Gamma^u_{ur} = \Gamma^u_{rr} = \Gamma^u_{rA} = \Gamma^r_{rr} = \Gamma^A_{rr} = 0 \quad \text{ and } \quad \Gamma^A_{rB} = \gamma^{AC} \beta_{CB}, \] where $(\gamma^{AB})=(\gamma_{AB})^{-1}$ and $\beta_{AB} = \frac12 \frac{\partial \gamma_{AB}}{\partial r}$. \item Conclude that \[ \hspace{2cm} R_{rr} = - \frac{\partial}{\partial r} \left( \gamma^{AB} \beta_{AB}\right) - \gamma^{AB} \gamma^{CD} \beta_{AC} \beta_{BD}. \] \end{enumerate} \item Let $(M,g)$ be a globally hyperbolic spacetime with Cauchy hypersurfaces $S_0$ and $S_1$ satisfying $S_1 \subset D^+(S_0)$, and $\Sigma \subset S_1$ a compact surface. Show that: \begin{enumerate} \item $D^+(S_0)\cap J^-(\Sigma)$ is compact; \item $J^-(\Sigma)$ is closed. \end{enumerate} \item Explain why Penrose's singularity theorem does not apply to each of the following geodesically complete Lorentzian manifolds: \begin{enumerate} \item Minkowski's spacetime; \item Einstein's universe; \item de Sitter's universe; \item Anti-de Sitter spacetime.
\end{enumerate} \end{enumerate} \chapter{Cauchy problem} \label{chapter5} In this chapter we discuss the Cauchy problem for the Einstein field equations, following \cite{W84}. We start by studying the Klein-Gordon equation, as a prototypical wave equation, and the Maxwell equations, where the issues of constraints on the initial data and gauge freedom also arise. We then sketch the proof of the Choquet-Bruhat theorem and discuss the Lichnerowicz method for solving the constraint equations. See \cite{Ringstrom09} for a more complete discussion. \section{Divergence theorem} \label{sec5.1} It is possible to define the divergence of a vector field on any orientable manifold where a volume form has been chosen. \begin{Def} Let $M$ be an orientable $n$-dimensional manifold with volume form $\epsilon$, and let $X$ be a vector field on $M$. The {\bf divergence} of $X$ is the function $\operatorname{div} X$ such that \[ d(X \lrcorner \, \epsilon) = (\operatorname{div} X) \epsilon. \] \end{Def} The following result is a straightforward application of the Stokes theorem. \begin{Thm} ({\bf Divergence theorem}) If $M$ is a compact orientable $n$-dimensional manifold with boundary $\partial M$ then \[ \int_M (\operatorname{div} X) \epsilon = \int_{\partial M} X \lrcorner \, \epsilon. \] \end{Thm} \begin{Prop} If $(M,g)$ is a pseudo-Riemannian manifold with Levi-Civita connection $\nabla$ then \[ \operatorname{div} X = \nabla_\mu X^\mu. \] \end{Prop} \begin{proof} Take local coordinates such that $X=\partial_1$ around each point where $X$ does not vanish. Since \[ \epsilon = \sqrt{|\det(g_{\mu\nu})|} \, dx^1 \wedge \ldots \wedge dx^n, \] we have \[ d(X \lrcorner \, \epsilon) = \mathcal{L}_X \epsilon - X \lrcorner \, d\epsilon = \partial_1 \log\left|\det(g_{\mu\nu})\right|^\frac12 \, \epsilon, \] where we used $d \epsilon = 0$.
Since \[ \partial_1 \log \left|\det A\right| = \operatorname{tr} (A^{-1} \partial_1 A) \] for any matrix-valued function $A$ in $M$, we have \begin{align*} d(X \lrcorner \, \epsilon) & = \frac12 \left(g^{\mu\nu} \partial_1 g_{\mu\nu}\right) \epsilon = \frac12 \left(g^{\mu\nu} (\mathcal{L}_X g)_{\mu\nu}\right) \epsilon \\ & = \frac12 g^{\mu\nu} (\nabla_\mu X_\nu + \nabla_\nu X_\mu ) \epsilon = (\nabla_\mu X^\mu) \epsilon. \end{align*} This formula can be easily extended to the set of zeros of $X$: by continuity on the boundary, and trivially in the interior. \end{proof} Assume now that $(M,g)$ is a compact orientable $n$-dimensional Lorentzian manifold with boundary $\partial M$, and let $\{E_1, \ldots, E_n \}$ be a positive orthonormal frame with $E_1 = N$ the outward unit normal vector on $\partial M$. The volume element is \[ \epsilon = - E_1^\sharp \wedge \ldots \wedge E_n^\sharp, \] and the volume element of $\partial M$ with the induced orientation is \[ \sigma = \pm E_2^\sharp \wedge \ldots \wedge E_n^\sharp \] (according to whether $N$ is timelike or spacelike). Therefore we have on $\partial M$ \[ X \lrcorner \, \epsilon = \pm \langle X, N \rangle \sigma, \] implying that the divergence theorem can be written as \[ \int_M (\operatorname{div} X) \epsilon = \int_{\partial M} \langle X, n \rangle \sigma, \] where $n$ is the outward unit normal vector in the points where it is spacelike, and is the inward unit normal vector in the points where it is timelike (Figure~\ref{normal}). \begin{figure}[h!] \begin{center} \psfrag{ntimelike}{$n$ timelike} \psfrag{nnull}{$n$ null} \psfrag{nspacelike}{$n$ spacelike} \epsfxsize=.6\textwidth \leavevmode \epsfbox{normal.eps} \end{center} \caption{Normal vector for the divergence theorem on a Lorentzian manifold.} \label{normal} \end{figure} It may happen that $\partial M$ has points where the normal is null, hence tangent to $\partial M$. 
In that case we choose the orthonormal frame such that \[ N = E_1 + E_2 \] is a null normal, with $E_1$ timelike and pointing outwards, and $E_2$ spacelike and (necessarily) pointing inwards. Then \[ \epsilon = - N^\sharp \wedge E_2^\sharp \wedge \ldots \wedge E_n^\sharp \] and so \[ X \lrcorner \, \epsilon = - \langle X, N \rangle \sigma, \] where \[ \sigma = E_2^\sharp \wedge \ldots \wedge E_n^\sharp \] is a volume element on $\partial M$ (compatible with the induced orientation, as $E_1$ points outwards). So we must choose in this case $n = -N$, that is, we must use the normal whose timelike component points inwards and whose spacelike component points outwards (Figure~\ref{normal}). Note that the magnitude (but not the sign) of the volume element $\sigma$ will depend on the choice of $n$. \section{Klein-Gordon equation} \label{sec5.2} Let $(M,g)$ be a Lorentzian manifold. A smooth function $\phi:M\to\mathbb{R}$ is a solution of the {\bf Klein-Gordon equation} if it satisfies \[ \Box \phi - m^2 \phi = 0 \Leftrightarrow \nabla^\mu \partial_\mu \phi - m^2 \phi = 0. \] For reasons that will become clear in Chapter~\ref{chapter6}, we define the {\bf energy-momentum tensor} associated to this equation as \[ T_{\mu\nu} = \partial_\mu \phi \, \partial_\nu \phi - \frac12 g_{\mu\nu} \left( \partial_\alpha \phi \, \partial^\alpha \phi + m^2 \phi^2 \right). \] If $\phi$ is a solution of the Klein-Gordon equation then \begin{align*} \nabla^\mu T_{\mu\nu} & = \Box \phi \, \partial_\nu \phi + \partial_\mu \phi \, \nabla^\mu \partial_\nu \phi - \partial_\alpha \phi \, \nabla_\nu \partial^\alpha \phi - m^2 \phi \, \partial_\nu \phi \\ & = (\Box \phi - m^2 \phi) \partial_\nu \phi = 0. \end{align*} Moreover, if $(M,g)$ is time-oriented then it is possible to prove that $T$ satisfies the dominant energy condition, that is, $T_{\mu\nu}X^\nu$ corresponds to a past-pointing causal vector whenever $X$ is a future-pointing causal vector.
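To make this claim concrete, here is a sketch of the verification (added for clarity; it is not part of the original argument), carried out in an orthonormal frame $\{E_0,E_1,E_2,E_3\}$ with $X=E_0$; the general causal case then follows by continuity and linearity. Writing $a = E_0 \cdot \phi$ and $b_i = E_i \cdot \phi$, the definition of $T$ yields

```latex
\begin{align*}
& T_{00} = \frac12 \left( a^2 + |b|^2 + m^2 \phi^2 \right),
\qquad T_{0i} = a \, b_i, \\
& \langle Y, Y \rangle = -T_{00}^2 + \sum_{i=1}^3 T_{0i}^2
= -\frac14 \left( a^2 + |b|^2 + m^2 \phi^2 \right)^2 + a^2 |b|^2
\leq -\frac14 \left( a^2 - |b|^2 \right)^2 \leq 0,
\end{align*}
```

where $Y^\mu = T^{\mu\nu} X_\nu$ has components $Y^0 = -T_{00} \leq 0$ and $Y^i = a \, b_i$ in this frame. Therefore $Y$ is causal and past-pointing (or zero), as claimed.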
Assume that $X$ is a future-pointing timelike Killing vector field, and define $Y^\mu = T^{\mu\nu}X_\nu$. Then $Y$ is a past-pointing causal vector field satisfying \[ \nabla_\mu Y^\mu = T^{\mu\nu}\nabla_\mu X_\nu = 0 \] (where we used the Killing equation $\nabla_{(\mu}X_{\nu)}=0$ and the symmetry of $T$). Let us now focus on the case when $(M,g)$ is flat Minkowski spacetime and $X = \frac{\partial}{\partial t}$. Consider the Cauchy hypersurface $S_0 = \{ t=t_0 \}$, and let $B_0$ be the ball of radius $R$ in that hypersurface: \[ B_0 = \{ t=t_0, x^2 + y^2 + z^2 \leq R^2 \}. \] Let $S_1 = \{ t = t_1 \}$ be another Cauchy hypersurface, and consider the ball $B_1 = D(B_0) \cap S_1$ in that hypersurface (Figure~\ref{domain}). By the divergence theorem we have \[ \int_{B_0} \langle Y, X \rangle + \int_C \langle Y, n \rangle + \int_{B_1} \langle Y, -X \rangle = 0, \] where $C$ is the null portion of $\partial D(B_0)$ between $S_0$ and $S_1$ and $n$ is a past-pointing normal. Since $Y$ is a past-pointing causal vector we have $\langle Y, n \rangle \leq 0$, and so \begin{equation} \label{ineq} \int_{B_1} \langle Y, X \rangle \leq \int_{B_0} \langle Y, X \rangle. \end{equation} \begin{figure}[h!] \begin{center} \psfrag{n}{$n$} \psfrag{S0}{$S_0$} \psfrag{B0}{$B_0$} \psfrag{B1}{$B_1$} \psfrag{C}{$C$} \psfrag{nspacelike}{$n$ spacelike} \epsfxsize=1.0\textwidth \leavevmode \epsfbox{domain.eps} \end{center} \caption{Proof of Proposition~\ref{unicity}.} \label{domain} \end{figure} Note that \begin{align*} \langle Y, X \rangle & = T_{\mu\nu}X^\mu X^\nu = (X \cdot \phi)^2 + \frac12 \left( \partial_\alpha \phi \, \partial^\alpha \phi + m^2 \phi^2 \right) \\ & = \frac12 \left[ (\partial_0 \phi)^2 + (\partial_x \phi)^2 + (\partial_y \phi)^2 + (\partial_z \phi)^2 + m^2 \phi^2 \right]. 
\end{align*} We conclude immediately that if $\phi$ is a solution of the Klein-Gordon equation and $\phi=\partial_0\phi=0$ in $B_0$ then $\phi=0$ in $B_1$, and indeed in $D(B_0)$ (since $t_1$ is arbitrary). Because the Klein-Gordon equation is linear we can then deduce the following result. \begin{Prop}\label{unicity} Given two smooth functions $\phi_0,\psi_0:S_0 \to \mathbb{R}$, there exists at most one solution $\phi$ of the Klein-Gordon equation satisfying $\phi=\phi_0$ and $\partial_0\phi = \psi_0$ on $S_0$. \end{Prop} Given a smooth function $\phi:M \to \mathbb{R}$ and some set $A \subset \mathbb{R}^4$ we define the {\bf Sobolev norm} \[ \| \phi \|^2_{H^1(A)} = \int_A \left[ \phi^2 + (\partial_0 \phi)^2 + (\partial_x \phi)^2 + (\partial_y \phi)^2 + (\partial_z \phi)^2 \right]. \] In this definition $A$ can either be an open set or a submanifold (in which case we use the induced volume form in the integral). More generally, for each $k \in \mathbb{N}_0$ we define the Sobolev norms \[ \| \phi \|^2_{H^k(A)} = \int_A \sum_{|\alpha| \leq k} (\partial^\alpha \phi)^2, \] where $\alpha = (\alpha_0, \alpha_1, \alpha_2, \alpha_3) \in {\mathbb{N}_0}^4$, $|\alpha| = \alpha_0 + \alpha_1 + \alpha_2 + \alpha_3$, $\partial = (\partial_0,\partial_x,\partial_y,\partial_z)$ and $\partial^\alpha = \partial_0^{\alpha_0} \partial_x^{\alpha_1} \partial_y^{\alpha_2} \partial_z^{\alpha_3}$. Similarly, given a smooth function $\phi_0:S_0 \to \mathbb{R}$ we define \[ \| \phi_0 \|^2_{H^k(B_0)} = \int_{B_0} \sum_{|\alpha| \leq k} (D^\alpha \phi_0)^2, \] where now $D = (\partial_x,\partial_y,\partial_z)$. Inequality \eqref{ineq} can then be written as \[ \| \phi \|^2_{H^1(B_1)} \leq C \| \phi \|^2_{H^1(B_0)} \leq C \| \phi_0 \|^2_{H^1(B_0)} + C \| \psi_0 \|^2_{H^0(B_0)}, \] where $C>0$ is a generic positive constant (not always the same).
Integrating this inequality in $t$ from $t_0 - R$ to $t_0 + R$ we obtain \[ \| \phi \|^2_{H^1(D(B_0))} \leq C \| \phi_0 \|^2_{H^1(B_0)} + C \| \psi_0 \|^2_{H^0(B_0)}. \] On $S_0$, all spatial partial derivatives of $\phi$ and $\partial_0\phi$ are given by partial derivatives of $\phi_0$ and $\psi_0$. On the other hand, from the Klein-Gordon equation we have \[ \partial_0^2 \phi_{|_{S_0}} = \partial_x^2 \phi_0 + \partial_y^2 \phi_0 + \partial_z^2 \phi_0 - m^2 \phi_0, \] and so, by partial differentiation, we can obtain all spatial partial derivatives of $\partial^2_0\phi$ on $S_0$ from partial derivatives of $\phi_0$. Differentiating the Klein-Gordon equation with respect to $t$ yields \[ \partial_0^3 \phi_{|_{S_0}} = \partial_x^2 \psi_0 + \partial_y^2 \psi_0 + \partial_z^2 \psi_0 - m^2 \psi_0, \] and it should be clear that all partial derivatives of $\phi$ on $S_0$ are given by partial derivatives of $\phi_0$ and $\psi_0$. Note from the general partial derivative of the Klein-Gordon equation, \[ \partial_0^2 \partial^\alpha\phi = \partial_x^2 \partial^\alpha\phi + \partial_y^2 \partial^\alpha\phi + \partial_z^2\partial^\alpha \phi - m^2 \partial^\alpha\phi, \] that $\partial^\alpha\phi$ also satisfies the Klein-Gordon equation, and so \begin{align*} & \| \partial^\alpha\phi \|^2_{H^1(B_1)} \leq C \| \partial^\alpha\phi \|^2_{H^1(B_0)} \Rightarrow \\ & \| \phi \|^2_{H^k(B_1)} \leq C \| \phi_0 \|^2_{H^k(B_0)} + C \| \psi_0 \|^2_{H^{k-1}(B_0)} \Rightarrow \\ & \| \phi \|^2_{H^k(D(B_0))} \leq C \| \phi_0 \|^2_{H^k(B_0)} + C \| \psi_0 \|^2_{H^{k-1}(B_0)} , \end{align*} whence \[ \| \phi \|_{H^k(D(B_0))} \leq C \| \phi_0 \|_{H^k(B_0)} + C \| \psi_0 \|_{H^{k-1}(B_0)} . \] Another important norm is the {\bf supremum norm}, \[ \| \phi \|_{C^0(A)} = \sup_{p \in A} |\phi(p)|. \] More generally, we define the norms \[ \| \phi \|_{C^k(A)} = \sum_{|\alpha| \leq k} \sup_{p \in A} |\partial^\alpha\phi(p)|. 
\] \begin{Def} A set $A \subset \mathbb{R}^n$ is said to satisfy the {\bf interior cone condition} if there exists a (closed) cone of height $h>0$ and solid angle $\Omega > 0$ at the vertex such that for each point $p \in A$ it is possible to map the cone isometrically into $A$ in such a way that the vertex is mapped to $p$. \end{Def} \begin{Thm}({\bf Sobolev inequality}) If $A \subset \mathbb{R}^n$ satisfies the interior cone condition and $k > \frac{n}2$ then for any smooth function $f: A \to \mathbb{R}$ \[ \| f \|_{C^0(A)} \leq C \| f \|_{H^k(A)}. \] \end{Thm} \begin{proof} Exercise. \end{proof} For a solution of the Klein-Gordon equation we then have \[ \| \phi \|_{C^0(D(B_0))} \leq C \| \phi \|_{H^3(D(B_0))} \leq C \| \phi_0 \|_{H^3(B_0)} + C \| \psi_0 \|_{H^{2}(B_0)}. \] More generally, \[ \| \partial^\alpha \phi \|_{C^0(D(B_0))} \leq C \| \partial^\alpha \phi \|_{H^3(D(B_0))} \leq C \| \phi_0 \|_{H^{m+3}(B_0)} + C \| \psi_0 \|_{H^{m+2}(B_0)}, \] where $m=|\alpha|$. We conclude that \[ \| \phi \|_{C^m(D(B_0))} \leq C \| \phi_0 \|_{H^{m+3}(B_0)} + C \| \psi_0 \|_{H^{m+2}(B_0)}, \] whence \[ \| \phi \|_{C^m(D(B_0))} \leq C \| \phi_0 \|_{C^{m+3}(B_0)} + C \| \psi_0 \|_{C^{m+2}(B_0)}. \] \begin{Thm} Given initial data $\phi_0, \psi_0: S_0 \to \mathbb{R}$ for the Klein-Gordon equation, there exists a unique smooth solution $\phi$ satisfying $\phi=\phi_0$ and $\partial_0\phi = \psi_0$ on $S_0$. Moreover, if $B_0 \subset S_0$ is a ball then the solution in $D(B_0)$ depends only on the initial data in $B_0$, and the map \[ C^{m+3}(B_0) \times C^{m+2}(B_0) \ni (\phi_0, \psi_0) \mapsto \phi \in C^m(D(B_0)) \] is continuous. \end{Thm} \begin{proof} We just have to prove the existence of a solution. To do that, we note that if $\phi$ is a solution then we know all its partial derivatives on $S_0$. Therefore, we can construct a power series for $\phi$ around each point of $S_0$.
If $\phi_0$ and $\psi_0$ are analytic then the {\bf Cauchy-Kowalewski theorem} guarantees that these series converge, and so there exists an analytic solution $\phi$. If $\phi_0$ and $\psi_0$ are smooth then there exist sequences $\phi_{0,n}$ and $\psi_{0,n}$ of analytic functions which converge to $\phi_0$ and $\psi_0$ in all spaces $C^{m}(B_0)$. The corresponding analytic solutions $\phi_n$ of the Klein-Gordon equation thus form a Cauchy sequence in all spaces $C^m(D(B_0))$, and hence must converge in all these spaces to some function $\phi$. This function is therefore smooth, and, passing to the limit, must satisfy the Klein-Gordon equation in all sets $D(B_0)$. \end{proof} Using similar techniques, it is possible to prove a much stronger result. \begin{Thm} Let $(M,g)$ be a globally hyperbolic spacetime with a Cauchy hypersurface $S$ and future unit normal $N$. Then the {\bf linear, diagonal second order hyperbolic system} \[ g^{\mu\nu} \nabla_\mu \partial_\nu \phi_i + A^\mu_{ij} \partial_\mu \phi_j + B_{ij} \phi_j + C_i = 0 \qquad (i=1, \ldots, n), \] where $A^\mu_{ij}$, $B_{ij}$ and $C_i$ are smooth and $\nabla$ is any connection, yields a well-posed Cauchy problem with initial data in $S$. More precisely, given smooth initial data $(\phi_1, \ldots, \phi_n, N \cdot \phi_1, \ldots, N \cdot \phi_n)$ on $S$ there exists a unique smooth solution of the system, defined in $M$. Moreover, the solutions depend continuously on the initial data, and if two initial data sets coincide on some closed subset $B \subset S$ then the corresponding solutions coincide in $D(B)$. \end{Thm} To solve the Cauchy problem for the Einstein equations, we will need to solve more complicated systems of hyperbolic equations.
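Before moving on, it may help to note that the Klein-Gordon equation treated above is recovered as the simplest instance of this linear system (a sketch in the theorem's notation; the specific coefficient choices are ours):

```latex
% Klein-Gordon as the case n = 1 of the linear, diagonal system:
% choosing A^\mu_{11} = 0, B_{11} = -m^2 and C_1 = 0, the system
%   g^{\mu\nu} \nabla_\mu \partial_\nu \phi_1 + A^\mu_{11} \partial_\mu \phi_1
%     + B_{11} \phi_1 + C_1 = 0
% reduces to
\[
g^{\mu\nu} \nabla_\mu \partial_\nu \phi - m^2 \phi = 0 ,
\]
% and the theorem then yields a unique smooth solution on all of M for
% any smooth initial data (\phi, N \cdot \phi) prescribed on S.
```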
\begin{Thm} \label{hyperbolic2} Consider the {\bf quasi-linear, diagonal second order hyperbolic system} \[ g^{\mu\nu}(x,\phi,\partial\phi) \nabla_\mu \partial_\nu \phi_i = F_i(x,\phi,\partial\phi) \qquad (i=1, \ldots, n), \] where $g^{\mu\nu}$ and $F_i$ are smooth and $\nabla$ is any connection on some manifold $M$. Let $(\phi_0)_1, \ldots, (\phi_0)_n$ be a solution of this system, and define $(g_0)^{\mu\nu}=g^{\mu\nu}(x,\phi_0,\partial\phi_0)$. Assume that $(M,g)$ is globally hyperbolic, and let $S$ be a Cauchy hypersurface. Then the system above yields a well-posed Cauchy problem with initial data in $S$, in the following sense: given initial data in $S$ sufficiently close to the initial data for $(\phi_0)_1, \ldots, (\phi_0)_n$ there exists an open neighborhood $V$ of $S$ such that the system has a unique solution in $V$, and $(V,g(x,\phi,\partial\phi))$ is globally hyperbolic. Moreover, the solutions depend continuously on the initial data, and if two initial data sets coincide on some closed subset $B \subset S$ then the corresponding solutions coincide in $D(B)$. \end{Thm} \begin{proof} The idea of the proof is to start with the linear hyperbolic system \[ g^{\mu\nu}(x,\phi_0,\partial\phi_0) \nabla_\mu \partial_\nu \phi_i = F_i(x,\phi_0,\partial\phi_0) \qquad (i=1, \ldots, n), \] which by the previous theorem has a unique solution $\phi_1$ close to $\phi_0$. Because of this, there exists a neighborhood $V_1$ of $S$ such that $(V_1,g(x,\phi_1,\partial\phi_1))$ is globally hyperbolic with Cauchy hypersurface $S$, and so the system \[ g^{\mu\nu}(x,\phi_1,\partial\phi_1) \nabla_\mu \partial_\nu \phi_i = F_i(x,\phi_1,\partial\phi_1) \qquad (i=1, \ldots, n), \] again has a unique solution $\phi_2$ close to $\phi_1$. Iterating this procedure we obtain a sequence $\phi_n$ which can then be shown to converge to the unique solution of the quasi-linear hyperbolic system. 
\end{proof} \section{Maxwell's equations: constraints and gauge} \label{sec5.3} As a warm-up problem for solving the Einstein field equations we consider the considerably easier task of solving the Maxwell equations without sources in flat Minkowski spacetime. These equations can be split into what we shall call {\bf constraint equations}, \[ \begin{cases} \operatorname{div} {\bf E} = 0 \\ \operatorname{div} {\bf B} = 0 \end{cases}, \] and {\bf evolution equations}: \[ \begin{cases} \displaystyle \frac{\partial {\bf E}}{\partial t} = \operatorname{curl} {\bf B} \\ \\ \displaystyle \frac{\partial {\bf B}}{\partial t} = - \operatorname{curl} {\bf E} \end{cases}. \] One expects that the evolution equations completely determine ${\bf E}(t, {\bf x})$ and ${\bf B}(t, {\bf x})$ from initial data \[ \begin{cases} {\bf E}(0, {\bf x}) = {\bf E}_0({\bf x}) \\ {\bf B}(0, {\bf x}) = {\bf B}_0({\bf x}) \end{cases}. \] This initial data, however, is not completely free: it must satisfy the constraint equations \[ \operatorname{div} {\bf E}_0 = \operatorname{div} {\bf B}_0 = 0. \] This suffices to guarantee that the constraint equations are satisfied by ${\bf E}(t, {\bf x})$ and ${\bf B}(t, {\bf x})$ for all time, since the constraints are preserved by the evolution: for instance, \[ \frac{\partial}{\partial t} (\operatorname{div} {\bf E}) = \operatorname{div} \left( \frac{\partial {\bf E}}{\partial t} \right) = \operatorname{div} (\operatorname{curl} {\bf B}) = 0. \] As we will see, solving the Einstein field equations will also require splitting them into constraint equations and evolution equations. Another issue that will have to be dealt with, gauge freedom, also occurs when solving the Maxwell equations by using the electromagnetic {\bf gauge potentials}.
To do so, we note that two of the Maxwell equations (those which do not admit sources) are equivalent to the existence of a vector potential ${\bf A}$ and a scalar potential $\phi$ in terms of which ${\bf B}$ and ${\bf E}$ can be written: \[ \begin{cases} \operatorname{div} {\bf B} = 0 \\ \displaystyle \operatorname{curl} {\bf E} = - \frac{\partial {\bf B}}{\partial t} \end{cases} \Leftrightarrow \begin{cases} {\bf B} = \operatorname{curl} {\bf A} \\ \displaystyle {\bf E} = - \operatorname{grad} \phi - \frac{\partial {\bf A}}{\partial t} \end{cases}. \] These potentials, however, are nonunique: given any smooth function $\chi$, the potentials \[ \begin{cases} {\bf A}' = {\bf A} + \operatorname{grad} \chi \\ \displaystyle \phi' = \phi - \frac{\partial \chi}{\partial t} \end{cases} \] yield the same fields ${\bf B}$ and ${\bf E}$ (${\bf A}'$ and $\phi'$ are said to be related to ${\bf A}$ and $\phi$ by a {\bf gauge transformation}). The remaining Maxwell equations can now be written as \[ \begin{cases} \operatorname{div} {\bf E} = 0 \\ \displaystyle \operatorname{curl} {\bf B} = \frac{\partial {\bf E}}{\partial t} \end{cases} \Leftrightarrow \begin{cases} \displaystyle \Delta \phi + \frac{\partial}{\partial t} (\operatorname{div} {\bf A}) = 0 \\ \displaystyle \operatorname{grad} (\operatorname{div} {\bf A}) - \Delta {\bf A} = - \operatorname{grad} \frac{\partial \phi}{\partial t} - \frac{\partial^2 {\bf A}}{\partial t^2} \end{cases}. 
\] Therefore, if there exist gauge potentials satisfying \[ \operatorname{div} {\bf A} = - \frac{\partial \phi}{\partial t} \] (the so-called {\bf Lorentz gauge}) then these equations reduce to uncoupled wave equations: \[ \begin{cases} \Box{\phi} = 0 \\ \Box {\bf A} = 0 \end{cases} \] To solve the Maxwell equations using the gauge potentials we then solve these wave equations with initial data $\phi_0$, $\left(\frac{\partial \phi}{\partial t}\right)_0$ and ${\bf A}_0$, $\left(\frac{\partial {\bf A}}{\partial t}\right)_0$ satisfying: \begin{enumerate} \item $\operatorname{curl} {\bf A}_0 = {\bf B}_0$ (possible because $\operatorname{div} {\bf B}_0 = 0$); \item $\phi_0 = 0$ (by choice); \item $\left(\frac{\partial {\bf A}}{\partial t}\right)_0 = - {\bf E}_0$ (giving the correct initial electric field); \item $\left(\frac{\partial \phi}{\partial t}\right)_0 = - \operatorname{div} {\bf A}_0$ (so that the Lorentz gauge condition holds). \end{enumerate} These potentials will determine a solution of the Maxwell equations with the correct initial data {\bf if the Lorentz gauge condition holds for all time}. Now \[ \Box \left( \operatorname{div} {\bf A} + \frac{\partial \phi}{\partial t} \right) = \operatorname{div} (\Box {\bf A}) + \frac{\partial}{\partial t} (\Box \phi) = 0 \] and \[ \left( \operatorname{div} {\bf A} + \frac{\partial \phi}{\partial t} \right)_0 = 0. \] Moreover, \[ \frac{\partial}{\partial t} \left( \operatorname{div} {\bf A} + \frac{\partial \phi}{\partial t} \right) = \operatorname{div}\left(\frac{\partial {\bf A}}{\partial t}\right) + \frac{\partial^2 \phi}{\partial t^2} = \operatorname{div}\left(\frac{\partial {\bf A}}{\partial t}\right) + \Delta \phi, \] and so \[ \left(\frac{\partial}{\partial t} \left( \operatorname{div} {\bf A} + \frac{\partial \phi}{\partial t} \right) \right)_0 = - \operatorname{div} {\bf E}_0 = 0. 
\] By uniqueness of solution of the wave equation, we conclude that the Lorentz gauge condition does hold for all time, and so the potentials obtained by solving the wave equation with the initial conditions above do determine the solution of the Maxwell equations with initial data ${\bf B}_0$, ${\bf E}_0$. \begin{Remark} The electromagnetic potentials can be seen as the components of the {\bf electromagnetic potential one-form} \[ A = - \phi \, dt + A^1 dx + A^2 dy + A^3 dz. \] Note that a gauge transformation can be written as \[ A' = A + d\chi. \] The electric and magnetic fields can in turn be seen as the components of the {\bf Faraday tensor} \begin{align*} F = dA & = E^1 dx \wedge dt + E^2 dy \wedge dt + E^3 dz \wedge dt \\ & + B^1 dy \wedge dz + B^2 dz \wedge dx + B^3 dx \wedge dy. \end{align*} It should be obvious that $F$ remains invariant under a gauge transformation. The Maxwell equations can be written as \[ dF = 0 \Leftrightarrow F = dA \] and \[ d \star F = 0, \] since \begin{align*} \star F & = E^1 dy \wedge dz + E^2 dz \wedge dx + E^3 dx \wedge dy \\ & - B^1 dx \wedge dt - B^2 dy \wedge dt - B^3 dz \wedge dt \end{align*} (that is, the Hodge star replaces ${\bf B}$ with ${\bf E}$ and ${\bf E}$ with $-{\bf B}$). \end{Remark} \section{Einstein's equations} \label{sec5.4} Let $(M,g)$ be a globally hyperbolic spacetime and $S \subset M$ a Cauchy hypersurface. Let us write $g$ in the Gauss Lemma form near $S$, \[ g = -dt^2 + h_{ij}(t,x) dx^i dx^j, \] so that the level sets of $t$ are Riemannian manifolds with induced metric $h(t)=h_{ij}dx^i dx^j$ and second fundamental form \[ \displaystyle K(t)=\frac12\frac{\partial h_{ij}}{\partial t}dx^idx^j. \] For this choice of coordinates (gauge), finding the metric is equivalent to finding a time-dependent Riemannian metric $h(t)$ on $S$. 
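As a quick illustration of the Gauss Lemma form (a sketch; the scale factor notation $a(t)$ is our own), consider the spatially flat metric $g = -dt^2 + a^2(t)\left(dx^2+dy^2+dz^2\right)$:

```latex
% Here h_{ij}(t,x) = a^2(t) \delta_{ij}, and so
\[
K(t) = \frac12 \frac{\partial h_{ij}}{\partial t} \, dx^i dx^j
     = a \dot{a} \, \delta_{ij} \, dx^i dx^j ,
\qquad
K^{i}_{\,\, j} = h^{ik} K_{kj} = \frac{\dot{a}}{a} \, \delta^{i}_{\,\, j} .
\]
% The second fundamental form is pure trace, with K^i_i = 3 \dot{a}/a,
% and finding the metric reduces to finding the single function a(t).
```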
The vacuum Einstein field equations $G_{\mu\nu}=0$ can be split into {\bf constraint equations} \[ \begin{cases} G_{00} = 0 \\ G_{0i} = 0 \end{cases} \Leftrightarrow \begin{cases} \bar{R} + \left(K^{i}_{\,\, i}\right)^2 - K_{ij} K^{ij} = 0 \\ \bar{\nabla}_i K^j_{\,\, j} - \bar{\nabla}_j K^{j}_{\,\, i} = 0 \end{cases} \] and {\bf evolution equations} \[ G_{ij} = 0 \Leftrightarrow \frac{\partial}{\partial t} K_{ij} = - \bar{R}_{ij} + 2 K_{il} K^{l}_{\,\, j} - K^{l}_{\,\, l} K_{ij} \; , \] where $\bar{\nabla}$, $\bar{R}$ and $\bar{R}_{ij}$ are the Levi-Civita connection, the scalar curvature and the Ricci tensor of $h$. Note that the evolution equations allow us to evolve $h(t)$ and $K(t)$, whereas the constraint equations restrict their initial values $h(0)$ and $K(0)$. If the initial data satisfy the constraint equations, so does the solution of the evolution equations. Indeed, the contracted Bianchi identities give us for free the equations \begin{align*} \nabla^\alpha G_{\alpha \beta} = 0 & \Leftrightarrow \nabla^0 G_{0 \beta} + \nabla^i G_{i \beta} = 0 \Leftrightarrow - \nabla_0 G_{0 \beta} + h^{ij} \nabla_j G_{i \beta} = 0 \\ & \Leftrightarrow \partial_0 G_{0 \beta} - \Gamma_{00}^\alpha G_{\alpha\beta} - \Gamma_{0\beta}^\alpha G_{0\alpha} = h^{ij} (\partial_j G_{i \beta} - \Gamma_{ji}^\alpha G_{\alpha\beta} - \Gamma_{j\beta}^\alpha G_{i \alpha}). 
\end{align*} If the evolution equations hold, we have $G_{ij} = \partial_\alpha G_{ij} = 0$, and so the contracted Bianchi identities become \begin{align*} &\begin{cases} \partial_0 G_{00} = \Gamma_{00}^\alpha G_{\alpha 0} + \Gamma_{00}^\alpha G_{0\alpha} + h^{ij} (\partial_j G_{i 0} - \Gamma_{ji}^\alpha G_{\alpha 0} - \Gamma_{j 0}^0 G_{i 0}) \\ \partial_0 G_{0k} = \Gamma_{00}^0 G_{0k} + \Gamma_{0k}^\alpha G_{0\alpha} - h^{ij} (\Gamma_{ji}^0 G_{0k} + \Gamma_{jk}^0 G_{i 0}) \end{cases} \\ &\Leftrightarrow\begin{cases} \partial_0 G_{00} = h^{ij} (\partial_j G_{i 0} - K_{ji} G_{00} - \bar\Gamma_{ji}^k G_{k0} ) \\ \partial_0 G_{0k} = K^{i}_{\,\,k} G_{0i} - h^{ij} (K_{ji} G_{0k} + K_{jk} G_{i0}) \end{cases} . \end{align*} This is a system of linear first order partial differential equations for $G_{0 \beta}$; integrating the last three equations, and then the first, it is easy to see that, since the initial data vanishes at $t=0$, the solution vanishes for all $t$. \begin{Remark} In general, any time function $t:M \to \mathbb{R}$ whose level sets $S_t$ are Cauchy hypersurfaces can be completed into a system of local coordinates $(t,x^1,x^2,x^3)$. If \[ N = - \frac{\operatorname{grad} t}{|\operatorname{grad} t|} \] is the future-pointing unit normal to $S_t$, we have the orthogonal decomposition \[ \frac{\partial}{\partial t} = \alpha N + \beta, \] where the positive function $\alpha$ is known as the {\bf lapse function} and the vector field $\beta$, tangent to $S_t$, is known as the {\bf shift vector} (Figure~\ref{lapse}). In these coordinates, the metric is written \begin{align*} g & = (-\alpha^2 + \beta_i \beta^i) dt^2 + 2 \beta_i dt dx^i + h_{ij} dx^i dx^j \\ & = -\alpha^2 dt^2 + h_{ij} (dx^i + \beta^i dt)(dx^j + \beta^j dt). \end{align*} \begin{figure}[h!]
\begin{center} \psfrag{S}{$S_t$} \psfrag{n}{$N$} \psfrag{an}{$\alpha N$} \psfrag{b}{$\beta$} \psfrag{t}{$\frac{\partial}{\partial t}$} \epsfxsize=.8\textwidth \leavevmode \epsfbox{lapse.eps} \end{center} \caption{Lapse function and shift vector.} \label{lapse} \end{figure} Note that the Riemannian metric of the Cauchy hypersurfaces $S_t$ is still $h(t)=h_{ij}dx^i dx^j$; the lapse function and the shift vector merely specify how points with the same coordinates $x^i$ in different Cauchy hypersurfaces $S_t$ are related, which is a matter of choice, that is, a gauge freedom. The Gauss Lemma form of the metric, for instance, corresponds to $\alpha=1$ and $\beta=0$, but this is not necessarily the best choice. \end{Remark} To prove the existence and uniqueness result for the vacuum Einstein field equations it is best to choose so-called {\bf harmonic coordinates}, that is, coordinates $x^\mu$ satisfying the wave equation, $\Box x^\mu=0$. This equation can be written as \[ H^\mu \equiv \partial_\alpha g^{\alpha\mu} + \frac12 g^{\alpha\mu} g^{\rho\sigma}\partial_{\alpha}g_{\rho\sigma} = 0. \] We define the {\bf reduced Ricci tensor} to be \[ R^H_{\mu\nu} \equiv R_{\mu\nu} + g_{\alpha(\mu}\partial_{\nu)}H^\alpha = - \frac12 g^{\alpha\beta} \partial_\alpha \partial_\beta g_{\mu\nu} + F_{\mu\nu}(g,\partial g), \] and the {\bf reduced Einstein equations} to be \[ R^H_{\mu\nu} = 0. \] Note that $R^H_{\mu\nu}=R_{\mu\nu}$ if the coordinates $x^\mu$ are harmonic; in this case, the reduced Einstein equations coincide with the Einstein equations. Moreover, the reduced Einstein equations are a quasi-linear, diagonal second order hyperbolic system for the components of the metric.
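As a consistency check (a sketch using the definitions above), note that in Minkowski spacetime the standard coordinates are harmonic, since all components of $\eta_{\mu\nu}$ are constant:

```latex
\[
H^\mu = \partial_\alpha \eta^{\alpha\mu}
      + \frac12 \, \eta^{\alpha\mu} \eta^{\rho\sigma} \partial_\alpha \eta_{\rho\sigma}
      = 0 .
\]
% Near the Minkowski metric, R^H_{\mu\nu} = 0 reads
%   g^{\alpha\beta} \partial_\alpha \partial_\beta g_{\mu\nu}
%     = 2 F_{\mu\nu}(g, \partial g),
% a quasi-linear wave equation for each component, which makes the
% hyperbolic character of the reduced equations manifest.
```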
Given initial data $(h_{ij},K_{ij})$ satisfying the constraint equations, consider the following initial data for the reduced Einstein equations: \begin{enumerate} \item $g_{ij} = h_{ij}$ (forced); \item $g_{i0} = 0$ (by choice); \item $g_{00} = -1$ (by choice); \item $\frac{\partial g_{ij}}{\partial t} = 2K_{ij}$ (forced); \item $\frac{\partial g_{0\mu}}{\partial t}$ such that $H^\mu = 0$ in $S$. \end{enumerate} If $(h_{ij},K_{ij})$ is close to the trivial data $(\delta_{ij},0)$ for the Minkowski spacetime, Theorem~\ref{hyperbolic2} guarantees that we can solve the reduced Einstein equations in some open neighborhood $V$ of $S$. From \[ G_{\mu\nu} = R^H_{\mu\nu} - \frac12 R^H g_{\mu\nu} - g_{\alpha(\mu}\partial_{\nu)}H^\alpha + \frac12 g_{\mu\nu} \partial_\alpha H^\alpha \] it is easily seen that if $g_{\mu\nu}$ satisfies the reduced Einstein equations $R^H_{\mu\nu} = 0$ then the contracted Bianchi identities yield \[ \nabla^\mu G_{\mu\nu} = 0 \Rightarrow g^{\mu\nu} \partial_\mu \partial_\nu H^\alpha + A^{\alpha\mu}_\beta \partial_\mu H^\beta = 0. \] Moreover, we have from the constraint equations \[ G_{\mu 0} = 0 \Rightarrow \partial_0 H^\mu = 0 \] on $S$. Therefore $H^\mu$ is the solution of a linear, diagonal second order hyperbolic system with vanishing initial conditions. We conclude that $H^\mu=0$ in $V$, and therefore $g_{\mu\nu}$ solves the Einstein equations in $V$. We can always assume that our initial data is close to the trivial data by rescaling: if $x^\mu$ are local coordinates such that $g_{\mu\nu}=\eta_{\mu\nu}$ at some point $p \in S$, where $\eta_{\mu\nu}$ is the Minkowski metric, we define new coordinates $\bar{x}^\mu$ by the formula \[ \bar{x}^\mu = \frac1{\lambda} x^\mu, \] where $\lambda>0$ is a constant. In these new coordinates the metric is \[ \bar{g}_{\mu\nu} = \frac{\partial x^\alpha}{\partial \bar{x}^\mu} \frac{\partial x^\beta}{\partial \bar{x}^\nu} g_{\alpha\beta} = \lambda^2 g_{\mu\nu}. 
\] If this metric is a solution of the vacuum Einstein field equations, then so is \[ \tilde{g}_{\mu\nu} = \frac1{\lambda^2}\bar{g}_{\mu\nu} = g_{\mu\nu}. \] Note that this metric satisfies \[ \frac{\partial \tilde{g}_{\mu\nu}}{\partial \bar{t}} = \frac{\partial g_{\mu\nu}}{\partial t} \frac{\partial t}{\partial \bar{t}} = \lambda \frac{\partial g_{\mu\nu}}{\partial t}. \] Therefore for $\lambda$ sufficiently small the initial data $(\tilde{g}_{\mu\nu},\frac{\partial \tilde{g}_{\mu\nu}}{\partial \bar{t}})$ will be close to $(\eta_{\mu\nu},0)$. In this way we can obtain a local solution of the Einstein field equations in a neighborhood of each point $p \in S$. By uniqueness of solution of a quasi-linear, diagonal second order hyperbolic system we can glue these local solutions to obtain a global solution defined on an open neighborhood of $S$. In other words, given initial data $(h,K)$ satisfying the constraint equations, there exists a globally hyperbolic spacetime $(M,g)$ satisfying the Einstein field equations such that $S$ is a Cauchy surface with induced metric $h$ and second fundamental form $K$. Finally, now that we have proved the existence of such solutions, it is possible to prove the existence of a {\bf maximal} solution. The proof is as follows: if $(M_1,g_1)$ and $(M_2,g_2)$ are two solutions of the Einstein field equations containing $S$ with the same initial data $(h,K)$, we say that $(M_1,g_1) \leq (M_2,g_2)$ if there is an isometric embedding $\psi:M_1 \to M_2$ preserving $(S,h,K)$. Note that it is possible that neither $(M_1,g_1) \leq (M_2,g_2)$ nor $(M_2,g_2) \leq (M_1,g_1)$. A set of solutions with the property that any two are related by $\leq$ is called a {\bf chain}; it is clear that every chain has an upper bound (the union up to isometric embeddings). 
Under these conditions, {\bf Zorn's Lemma} guarantees that there is a maximal element $(M,g)$ in the set of all solutions, that is, a solution which cannot be isometrically embedded into any other solution. It is possible to prove that this element is unique (if there were two such maximal solutions it would be possible to patch them together to construct a larger solution). We then have the following fundamental result. \begin{Thm} ({\bf Choquet-Bruhat \cite{CB55,CBG69}}) Let $(S,h)$ be a $3$-dimensional Riemannian manifold and $K$ a symmetric tensor field in $S$ satisfying the constraint equations \[ \begin{cases} \bar{R} + \left(K^{i}_{\,\, i}\right)^2 - K_{ij} K^{ij} = 0 \\ \bar{\nabla}_i K^j_{\,\, j} - \bar{\nabla}_j K^{j}_{\,\, i} = 0 \end{cases}, \] where $\bar{\nabla}$ and $\bar{R}$ are the Levi-Civita connection and the scalar curvature of $h$. Then there exists a unique (up to isometry) $4$-dimensional Lorentzian manifold $(M,g)$, called the {\bf maximal Cauchy development} of $(S,h,K)$, satisfying: \begin{enumerate}[(i)] \item $(M,g)$ is a solution of the vacuum Einstein equations; \item $(M,g)$ is globally hyperbolic with Cauchy surface $S$; \item The induced metric and second fundamental form of $S$ are $h$ and $K$; \item Any $4$-dimensional Lorentzian manifold satisfying $(i)-(iii)$ can be isometrically embedded into $(M,g)$. \end{enumerate} Moreover, if $(S,h,K)$ and $(\bar{S},\bar{h},\bar{K})$ coincide on some closed subset $B \cong \bar{B}$ then $D(B)$ and $D(\bar{B})$ are isometric. Finally, $g$ depends continuously on the initial data $(h,K)$ (for appropriate topologies). \end{Thm} \section{Constraint equations} \label{sec5.5} To obtain initial data for the Einstein equations it is necessary to solve the nonlinear constraint equations \[ \begin{cases} \bar{R} + \left(K^{i}_{\,\, i}\right)^2 - K_{ij} K^{ij} = 0 \\ \bar{\nabla}_i K^j_{\,\, j} - \bar{\nabla}_j K^{j}_{\,\, i} = 0 \end{cases}.
\] The {\bf Lichnerowicz method} for solving these equations is as follows: one starts by choosing an arbitrary Riemannian metric $h$ and an arbitrary symmetric tensor $K$ satisfying \[ \begin{cases} K^{i}_{\,\, i} = 0 \\ \bar{\nabla}_j K^{j}_{\,\, i} = 0 \end{cases} \] (that is, $K$ is traceless and divergenceless). These choices satisfy the second, but not the first, constraint equation. One then defines the conformally rescaled metric \[ \tilde{h}=u^4 h \] and the rescaled symmetric tensor \[ \tilde{K}=u^{-2} K. \] Clearly $\tilde{K}$ is still traceless, and it is easily seen that it is also divergenceless: \[ \tilde{\nabla}_j \tilde{K}^{j}_{\,\, i} = 0, \] where $\tilde{\nabla}$ is the Levi-Civita connection of $\tilde{h}$. The first constraint equation for the metric $\tilde{h}$ and the symmetric tensor $\tilde{K}$, on the other hand, becomes \[ \tilde{R} - \tilde{K}_{ij} \tilde{K}^{ij} = 0 \Leftrightarrow \bar{\Delta} u - \frac18 \bar{R} u + \frac18 u^{-7} K_{ij} K^{ij} = 0. \] This is a nonlinear elliptic equation in one variable, much simpler than the original system. If one chooses the so-called time symmetric case $K=0$ (the reason for this designation being that in the Gauss Lemma coordinates $t \to -t$ is clearly an isometry of the solution), this equation becomes linear: \[ \bar{\Delta} u - \frac18 \bar{R} u = 0. \] As an example, choose $h$ to be the Euclidean metric, $h_{ij} = \delta_{ij}$; then $\bar{R}=0$ and the equation above is simply the Laplace equation \[ \Delta u = 0. \] A simple solution, related to the gravitational field of a point mass, is \[ u = 1 + \frac{M}{2r}. \] This solution leads exactly to the Schwarzschild solution, which in isotropic coordinates is written \[ ds^2 = - \left( \frac{1 - \frac{M}{2r}}{1 + \frac{M}{2r}}\right)^2 dt^2 + \left(1 + \frac{M}{2r}\right)^4 (dr^2 + r^2(d\theta^2 + \sin^2 \theta d\varphi^2)). \] Since the Laplace equation is linear, one can superimpose solutions.
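As a numerical sanity check (a minimal sketch; the function names, masses and sample points are our own choices), one can verify by central finite differences that the point-mass conformal factor is harmonic away from its centre, and, by linearity of the Laplacian, so is a superposition of such factors:

```python
import math

def laplacian(u, x, y, z, h=1e-3):
    """Central finite-difference approximation of the flat Laplacian of u."""
    return (u(x + h, y, z) + u(x - h, y, z)
            + u(x, y + h, z) + u(x, y - h, z)
            + u(x, y, z + h) + u(x, y, z - h)
            - 6.0 * u(x, y, z)) / h**2

def point_mass(M, cx=0.0, cy=0.0, cz=0.0):
    """Conformal factor u = 1 + M/(2r), with r the distance to (cx, cy, cz)."""
    def u(x, y, z):
        r = math.sqrt((x - cx)**2 + (y - cy)**2 + (z - cz)**2)
        return 1.0 + M / (2.0 * r)
    return u

u1 = point_mass(1.0)            # single point mass at the origin
u2 = point_mass(0.5, cx=2.0)    # second point mass centred at (2, 0, 0)
u = lambda x, y, z: u1(x, y, z) + u2(x, y, z)  # superposition

# Both Laplacians vanish, up to finite-difference error, away from the centres.
print(laplacian(u1, 1.0, 0.7, -0.3))
print(laplacian(u, 1.0, 0.7, -0.3))
```

The same check applies to any finite sum of such factors, which is exactly the multi-black-hole initial data discussed next.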
Thus an initial data set for a set of $N$ black holes initially at rest can be obtained by choosing \[ u = \sum_{i=1}^N \left(1 + \frac{M_i}{2r_i}\right), \] where $M_i > 0$ and $r_i$ is the Euclidean distance to a fixed point $p_i \in \mathbb{R}^3$ ($i=1, \ldots, N$). \section{Einstein equations with matter} \label{sec5.6} So far we have analyzed only the vacuum Einstein field equations. To include matter (and also a cosmological constant $\Lambda$) we must introduce matter fields, generically represented by $\psi$, and consider a system of the form \[ \begin{cases} G_{\mu\nu} + \Lambda g_{\mu\nu} = 8 \pi T_{\mu\nu}(g,\psi) \\ \text{Field equations for } \psi \end{cases}. \] If the equations for $\psi$ form a hyperbolic system then Choquet-Bruhat's theorem still applies, with the constraint equations \[ \begin{cases} \bar{R} + \left(K^{i}_{\,\, i}\right)^2 - K_{ij} K^{ij} - 2\Lambda = 16 \pi \rho \\ \bar{\nabla}_i K^j_{\,\, j} - \bar{\nabla}_j K^{j}_{\,\, i} = 8 \pi J_i \end{cases}. \] Here $\rho = T_{00}$ and $J_i = - T_{0i}$ are computed from $h$ and the initial data for $\psi$. As an example, the Einstein-Klein-Gordon system is given by \[ \begin{cases} G_{\mu\nu} + \Lambda g_{\mu\nu} = 8 \pi \left( \partial_\mu \phi \, \partial_\nu \phi - \frac12 g_{\mu\nu} \left( \partial_\alpha \phi \, \partial^\alpha \phi + m^2 \phi^2 \right) \right) \\ \nabla^\mu \partial_\mu \phi - m^2 \phi = 0 \end{cases}, \] and we have \[ \begin{cases} \rho = \frac12 \left( {\psi_0}^2 + h^{ij} \partial_i \phi_0 \partial_j \phi_0 + m^2 {\phi_0}^2 \right) \\ J_i = - \psi_0 \partial_i \phi_0 \end{cases}, \] where $\phi_0$ and $\psi_0$ are the initial data for $\phi$. \section{Exercises} \label{sec5.7} \begin{enumerate} \item Let $(M,g)$ be a time oriented Lorentzian manifold and $X$ a future-pointing timelike vector field. 
Given a smooth function $\phi:M \to \mathbb{R}$ let \[ T_{\mu\nu} = \partial_\mu \phi \partial_\nu \phi - \frac12 g_{\mu\nu} \left(\partial_\alpha\phi \partial^\alpha \phi + m^2 \phi^2 \right) \] be the energy-momentum tensor associated to the Klein-Gordon equation for $\phi$, and let $Y$ be the vector field defined by \[ Y_\mu = T_{\mu\nu}X^\nu. \] Show that: \begin{enumerate} \item $Y = (X \cdot \phi) \operatorname{grad} \phi - \frac12\left(\left\langle\operatorname{grad}\phi,\operatorname{grad}\phi\right\rangle + m^2\phi^2\right) X$; \item $Y$ is causal. \item $Y$ is past-pointing. \end{enumerate} \item ({\bf Sobolev inequality}) Let $Q$ be a closed solid cone in $\mathbb{R}^n$ with height $H$, solid angle $\Omega$ and vertex at the origin. Let $\psi:\mathbb{R} \to \mathbb{R}$ be a smooth nonincreasing function with $\psi(r)=1$ for $r<\frac{H}3$ and $\psi(r)=0$ for $r>\frac{2H}3$. Show that: \begin{enumerate} \item For any smooth function $f:Q \to \mathbb{R}$ and any $k \in \mathbb{N}$ we have \[ f(0)=\frac{(-1)^k}{(k-1)!} \int_0^{R(\theta)} r^{k-1}\frac{\partial^k}{\partial r^k} \left(\psi(r)f(r,\theta)\right) dr, \] where $(r,\theta)$ are the usual spherical coordinates in $\mathbb{R}^n$ and $r=R(\theta)$ is the equation for the base of the cone (here $f(r,\theta)$ represents the function $f$ written in spherical coordinates). \item There exists a constant $C$, depending on $k$ and $\Omega$, such that \[ f(0)=C\int_Q r^{k-n} \frac{\partial^k}{\partial r^k}\left(\psi f\right). \] \item For $k>\frac{n}2$ we have $|f(0)| \leq C' \|f\|_{H^k(Q)}$, where the constant $C'$ depends on $k$, $H$ and $\Omega$ only (you will need to use the Cauchy-Schwarz inequality for multiple integrals). 
\end{enumerate} \item Consider a Lorentzian metric given in the Gauss Lemma form \[ g = -dt^2 + h_{ij}(t,x) dx^i dx^j, \] so that the level sets of $t$ are Riemannian manifolds with induced metric $h(t)=h_{ij}dx^i dx^j$ and second fundamental form \[ K(t)=\frac12\frac{\partial h_{ij}}{\partial t}dx^idx^j. \] Show that in these coordinates: \begin{enumerate} \item The Christoffel symbols are \[ \Gamma^0_{ij} = K_{ij}; \quad \Gamma^i_{jk} = \bar{\Gamma}^i_{jk}; \quad \Gamma^i_{0j} = K^i_{\,\,j}, \] where $\bar{\Gamma}^i_{jk}$ are the Christoffel symbols of $h$. \item The components of the Riemann tensor are \begin{align*} \label{Riemann1} & R_{0i0}^{\,\,\,\,\,\,\,\, j} = - \frac{\partial}{\partial t} K^{j}_{\,\, i} - K_{il} K^{lj}; \\ \nonumber & R_{ij0}^{\,\,\,\,\,\,\,\, l} = - \bar{\nabla}_i K^l_{\,\, j} + \bar{\nabla}_j K^{l}_{\,\, i}; \\ \nonumber & R_{ijl}^{\,\,\,\,\,\,\,\, m} = \bar{R}_{ijl}^{\,\,\,\,\,\,\,\, m} + K_{il} K^{m}_{\,\,\,\, j} - K_{jl} K^{m}_{\,\,\,\, i}, \end{align*} where $\bar{\nabla}$ is the Levi-Civita connection of $h$ and $\bar{R}_{ijl}^{\,\,\,\,\,\,\,\, m}$ are the components of the Riemann tensor of $h$. \item The time derivative of the inverse metric is given by the formula \[ \frac{\partial h^{ij}}{\partial t} = -2K^{ij}. \] \item The components of the Ricci tensor are \begin{align*} & R_{00} = - \frac{\partial}{\partial t} K^{i}_{\,\, i} - K_{ij} K^{ij}; \\ & R_{0i} = - \bar{\nabla}_i K^j_{\,\, j} + \bar{\nabla}_j K^{j}_{\,\, i}; \\ & R_{ij} = \bar{R}_{ij} + \frac{\partial}{\partial t} K_{ij} - 2 K_{il} K^{l}_{\,\, j} + K^{l}_{\,\, l} K_{ij}, \end{align*} where $\bar{R}_{ij}$ are the components of the Ricci tensor of $h$. \item The scalar curvature is \[ R = \bar{R} + 2 \frac{\partial}{\partial t} K^{i}_{\,\, i} + \left(K^{i}_{\,\, i}\right)^2 + K_{ij} K^{ij}, \] where $\bar{R}$ is the scalar curvature of $h$. 
\item The component $G_{00}$ of the Einstein tensor is \[ G_{00} = \frac12 \left( \bar{R} + \left(K^{i}_{\,\, i}\right)^2 - K_{ij} K^{ij} \right). \] \end{enumerate} \item Let $(M,g)$ be an $(n+1)$-dimensional Lorentzian manifold and $(x^0, \ldots, x^n)$ local coordinates on $M$. Show that: \begin{enumerate} \item The condition for these coordinates to be harmonic is written \[ \hspace{2cm} \nabla_\alpha \nabla^\alpha x^\mu = 0 \Leftrightarrow H^\mu \equiv \partial_\alpha g^{\alpha\mu} + \frac12 g^{\alpha\mu}g^{\rho\sigma}\partial_\alpha g_{\rho\sigma} = 0 \] ($\mu=0, \ldots, n$). \item The reduced Ricci tensor is \[ \hspace{2cm} R^H_{\mu\nu} \equiv R_{\mu\nu} + g_{\alpha(\mu}\partial_{\nu)}H^\alpha = - \frac12 g^{\alpha\beta} \partial_\alpha \partial_\beta g_{\mu\nu} + F_{\mu\nu}(g,\partial g). \] \end{enumerate} \item \label{development} Denoting by $h_0$ the standard constant curvature metric on $\mathbb{R}^3$, $S^3$ or $H^3$, compute the cosmological constant and determine the maximal globally hyperbolic developments of the following sets of initial data for the vacuum Einstein equations (with cosmological constant): \begin{enumerate} \item $(\mathbb{R}^3,h_0,0)$; \item $(\mathbb{R}^3,h_0,h_0)$; \item $(S^3,h_0,0)$; \item $(H^3,h_0,0)$; \item $(H^3,h_0,h_0)$; \item $(\mathbb{R}^3,h_0,\operatorname{diag}(p_1,p_2,p_3))$ with $p_1+p_2+p_3={p_1}^2+{p_2}^2+{p_3}^2=1$. \end{enumerate} \item Let $(S,h,K)$ be an initial data set for the vacuum Einstein equations, with $K$ traceless and divergenceless. Given a smooth positive function $u:S \to \mathbb{R}$, consider the conformally rescaled metric $\tilde{h}=u^4 h$ and the symmetric tensor $\tilde{K}=u^{-2} K$. 
By using normal coordinates when convenient, show that: \begin{enumerate} \item The Christoffel symbols $\tilde{\Gamma}^i_{jk}$ of $\tilde{h}$ are related to the Christoffel symbols $\bar{\Gamma}^i_{jk}$ of $h$ by \[ \hspace{2cm} \tilde{\Gamma}^i_{jk} = \bar{\Gamma}^i_{jk} + 2 \partial_j (\log u) h^{i}_{\,\,\,\,k} + 2 \partial_k (\log u) h^{i}_{\,\,\,\,j} - 2 \partial^i (\log u) h_{jk}. \] \item $\tilde{K}$ is divergenceless for the Levi-Civita connection of $\tilde{h}$. \item The Ricci tensor $\tilde{R}_{ij}$ of $\tilde{h}$ is related to the Ricci tensor $\bar{R}_{ij}$ of $h$ by \begin{align*} \hspace{2cm} \tilde{R}_{ij} = & \, \bar{R}_{ij} - 2 \bar{\nabla}_i \partial_j (\log u) - 2 \bar{\Delta} (\log u) h_{ij} \\ & + 4 \partial_i (\log u) \partial_j(\log u) - 4 \left|\operatorname{grad}(\log u)\right|^2 h_{ij}. \end{align*} \item The scalar curvature $\tilde{R}$ of $\tilde{h}$ is related to the scalar curvature $\bar{R}$ of $h$ by \[ \tilde{R} = u^{-4} \bar{R} - 8 u^{-5} \bar{\Delta} u. \] \end{enumerate} \item Check that the metric \[ \hspace{2cm} ds^2 = - \left( \frac{1 - \frac{M}{2r}}{1 + \frac{M}{2r}}\right)^2 dt^2 + \left(1 + \frac{M}{2r}\right)^4 (dr^2 + r^2(d\theta^2 + \sin^2 \theta d\varphi^2)) \] is indeed the Schwarzschild metric by making the coordinate change \[ R = r \left(1 + \frac{M}{2r}\right)^2. \] \end{enumerate} \chapter{Positive mass theorem} \label{chapter6} In this chapter we present the positive mass theorem. Following \cite{W84}, we start by defining the Komar mass for stationary spacetimes. We then discuss field theory and introduce the Einstein-Hilbert action as a means of motivating the definition of the ADM mass. Finally, we prove the (Riemannian) positive mass theorem and the (Riemannian) Penrose inequality for graphs, following \cite{L10}. For more details see \cite{Mars09, B11}. 
\section{Komar mass} \label{sec6.1} Recall that the Newtonian gravitational field satisfies \[ \operatorname{div} {\bf G} = - 4 \pi \rho, \] where $\rho$ is the mass density of the matter generating the field. Therefore the total mass of a given system is given by \[ M = \int_{\mathbb{R}^3} \rho = - \frac1{4\pi} \int_{\mathbb{R}^3} \operatorname{div} {\bf G} = - \frac1{4\pi} \int_{\Sigma} \left\langle {\bf G}, {\bf n} \right\rangle, \] where $\Sigma$ is any surface enclosing all the matter and ${\bf n}$ is the outward unit normal. The fact that $M$ does not depend on $\Sigma$ is equivalent to the statement that $\operatorname{div} {\bf G}=0$ in the region between any two such surfaces. For a {\bf static} Lorentzian metric, that is, a metric of the form \[ ds^2 = - e^{2 \phi} dt^2 + h_{ij} dx^i dx^j \] with $\phi$ and $h$ not depending on $t$, the analogue of the gravitational field is minus the acceleration of the observers with constant space coordinates $x^i$, that is, $- \operatorname{grad} \phi$ (see Chapter~\ref{chapter2}). On the other hand, since the metric is static, we expect that the energy computed at a given surface should be multiplied by the redshift factor $e^\phi$ to obtain its reference value at infinity. The relativistic analogue of the formula above is then \[ M = \frac1{4\pi} \int_{\Sigma} e^\phi (\partial_\mu \phi) n^\mu = \frac1{4\pi} \int_{\Sigma} (\partial_\mu e^\phi) n^\mu. \] We have \[ \partial_\mu e^\phi = \partial_\mu (- K^\nu K_\nu)^\frac12 = - e^{-\phi} K^\nu \nabla_\mu K_\nu = - N^\nu \nabla_\mu K_\nu, \] where $K=\frac{\partial}{\partial t}$ is the timelike Killing vector field and $N=e^{-\phi}\frac{\partial}{\partial t}$ is the unit timelike vector with the same direction, that is, the future-pointing unit normal to the hypersurfaces $S_t$ of constant $t$ (see Figure~\ref{Komar}). Therefore, we can write \[ M = - \frac1{4\pi} \int_{\Sigma} (\nabla_\mu K_\nu) n^\mu N^\nu = \frac1{4\pi} \int_{\Sigma} (\nabla_\mu K_\nu) N^\mu n^\nu. 
\] \begin{figure}[h!] \begin{center} \psfrag{n}{$n$} \psfrag{N}{$N$} \psfrag{K}{$K$} \psfrag{T}{$E_1$} \psfrag{-grad}{$-\operatorname{grad} \phi$} \psfrag{S}{$\Sigma$} \psfrag{St}{$S_t$} \epsfxsize=1.0\textwidth \leavevmode \epsfbox{Komar.eps} \end{center} \caption{Computing the Komar mass on a static spacetime.} \label{Komar} \end{figure} Because $K$ is a Killing vector field, $\nabla_\mu K_\nu$ is a $2$-form; more precisely, \[ (d K^\sharp)_{\mu\nu} = \nabla_\mu K_\nu - \nabla_\nu K_\mu = 2 \nabla_\mu K_\nu. \] If $E_1$ and $E_2$ are two unit vector fields tangent to $\Sigma$ such that $\{N,n,E_1,E_2\}$ is a positive orthonormal frame (see Figure~\ref{Komar}), and so $\{-N^\sharp,n^\sharp,E_1^\sharp,E_2^\sharp\}$ is a positive orthonormal coframe, then we can expand \[ \nabla K^\sharp = - \nabla K^\sharp(N,n) N^\sharp \wedge n^\sharp + \ldots, \] and so \begin{align*} M & = \frac1{4\pi} \int_{\Sigma} \nabla K^\sharp(N,n) = \frac1{4\pi} \int_{\Sigma} \nabla K^\sharp(N,n) E_1^\sharp \wedge E_2^\sharp \\ & = - \frac1{4\pi} \int_{\Sigma} - \nabla K^\sharp(N,n) \star(N^\sharp\wedge n^\sharp) = - \frac1{4\pi} \int_{\Sigma} \star \nabla K^\sharp, \end{align*} that is, \[ M = - \frac1{8\pi} \int_{\Sigma} \star d K^\sharp. \] This expression is the so-called {\bf Komar mass}. Although we arrived at this expression by considering a {\bf static} spacetime, it actually works for any {\bf stationary} spacetime. In other words, the timelike Killing vector field $K$ does not have to be hypersurface-orthogonal. To show that the Komar mass is well defined, that is, that $\star d K^\sharp$ is a closed $2$-form in vacuum, we start by noticing that if $X$ is any vector field then \[ (\operatorname{div} X) \epsilon = d (X \lrcorner \, \epsilon) = d \star X^\sharp, \] whence \[ \operatorname{div} X = - \star d \star X^\sharp.
\] In particular, for any smooth function $\phi$ \[ \Box \phi = \operatorname{div} \operatorname{grad} \phi = - \star d \star (\operatorname{grad} \phi)^\sharp = - \star d \star d \phi. \] In local coordinates, we have \begin{align*} & \star d \phi = \epsilon(\operatorname{grad} \phi, \cdot, \cdot, \cdot) = \sqrt{|\det(g_{\mu\nu})|} \, dx^0 \wedge dx^1 \wedge dx^2 \wedge dx^3(\operatorname{grad} \phi, \cdot, \cdot, \cdot) \\ & = \sqrt{|\det(g_{\mu\nu})|} \, \partial^0 \phi \, dx^1 \wedge dx^2 \wedge dx^3 - \sqrt{|\det(g_{\mu\nu})|} \, \partial^1 \phi \, dx^0 \wedge dx^2 \wedge dx^3 + \ldots \end{align*} Therefore \begin{align*} d \star d \phi & = \partial_0 \left(\sqrt{|\det(g_{\mu\nu})|} \, \partial^0 \phi \right) dx^0 \wedge dx^1 \wedge dx^2 \wedge dx^3 \\ & + \partial_1 \left(\sqrt{|\det(g_{\mu\nu})|} \, \partial^1 \phi\right) dx^0 \wedge dx^1 \wedge dx^2 \wedge dx^3 + \ldots \end{align*} and we obtain the useful formula \[ \Box \phi = \frac1{\sqrt{|\det(g_{\mu\nu})|}} \partial_\alpha \left(\sqrt{|\det(g_{\mu\nu})|} \, \partial^\alpha \phi\right). \] It is natural to try and generalize this formula for arbitrary $k$-forms. \begin{Def} If $\omega$ is a $k$-form then its {\bf Hodge d'Alembertian} is the $k$-form \[ \Box_H \omega = - \star d \star d \omega - d \star d \star \omega. \] \end{Def} It turns out that in general the Hodge d'Alembertian does not coincide with the usual d'Alembertian \[ \Box \omega = \nabla_\mu \nabla^\mu \omega \] (sometimes called the {\bf rough d'Alembertian}). The relation between these two operators for $1$-forms is given by the following result. \begin{Thm} ({\bf Weitzenbock formula}) If $\omega$ is a $1$-form then \[ \Box_H \omega - \Box \omega = - Ric(\omega^\sharp, \cdot). 
\] \end{Thm} \begin{proof} We have \begin{align*} (\star d\star d\omega)_\delta & = \frac{3 \cdot 2}{3!\,2!} \epsilon_{\gamma\alpha\beta\delta} \nabla^\gamma (\epsilon^{\mu\nu\alpha\beta} \nabla_\mu\,\omega_\nu) = \frac1{2!} \epsilon_{\alpha\beta\gamma\delta} \epsilon^{\mu\nu\alpha\beta} \nabla^\gamma \nabla_\mu\,\omega_\nu \\ & = (- g^{\mu}_{\,\,\,\,\gamma} g^{\nu}_{\,\,\,\,\delta} + g^{\mu}_{\,\,\,\,\delta} g^{\nu}_{\,\,\,\,\gamma}) \nabla^\gamma \nabla_\mu\,\omega_\nu = - \nabla^\mu \nabla_\mu\,\omega_\delta + \nabla^\nu \nabla_\delta \,\omega_\nu \end{align*} and \begin{align*} (d\star d\star \omega)_\delta & = \frac{4}{4!} \nabla_\delta (\epsilon_{\gamma\nu\alpha\beta} \nabla^\gamma (\epsilon^{\mu\nu\alpha\beta}\omega_\mu)) = \frac1{3!} \epsilon_{\gamma\nu\alpha\beta}\epsilon^{\mu\nu\alpha\beta}\nabla_\delta\nabla^\gamma\omega_\mu \\ & = - g^{\mu}_{\,\,\,\,\gamma} \nabla_\delta\nabla^\gamma\omega_\mu = - \nabla_\delta\nabla^\mu\omega_\mu. \end{align*} Therefore \[ (\star d\star d\omega + d\star d\star \omega)_\delta = - (\Box\omega)_\delta + \nabla^\mu \nabla_\delta \,\omega_\mu - \nabla_\delta\nabla^\mu\omega_\mu. \] The Weitzenbock formula now follows from \[ \nabla_\mu \nabla_\delta \,\omega^\mu - \nabla_\delta\nabla_\mu\,\omega^\mu = R_{\mu\delta\,\,\,\,\nu}^{\,\,\,\,\,\,\,\,\mu} \, \omega^\nu = R_{\delta\nu} \, \omega^\nu. \] \end{proof} Now let $K=\frac{\partial}{\partial t}$ be the timelike Killing vector field in a stationary spacetime. Then \begin{align*} d \star K^\sharp & = d \left( \frac{\partial}{\partial t} \lrcorner \, \left( \sqrt{|\det(g_{\mu\nu})|} \, dt \wedge dx^1 \wedge dx^2 \wedge dx^3 \right)\right) \\ & = d \left( \sqrt{|\det(g_{\mu\nu})|} \, dx^1 \wedge dx^2 \wedge dx^3 \right) \\ & = \frac{\partial}{\partial t} \sqrt{|\det(g_{\mu\nu})|} \, dx^1 \wedge dx^2 \wedge dx^3 = 0, \end{align*} since the metric coefficients do not depend on $t$. 
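As a quick sanity check of the coordinate formula for $\Box \phi$ derived above (an illustrative sketch, not part of the text): for the flat Euclidean metric in spherical coordinates one has $\sqrt{|\det(g_{ij})|} = r^2 \sin\theta$, so for radial functions the formula reduces to $\Delta \phi = \frac1{r^2}\partial_r\left(r^2 \partial_r \phi\right)$. Finite differences recover $\Delta(r^2) = 6$ and $\Delta(1/r) = 0$:

```python
def laplacian_radial(phi, r, h=1e-4):
    # Delta phi = (1/r^2) d/dr ( r^2 d phi/dr ), central differences twice
    def flux(s):
        return s * s * (phi(s + h) - phi(s - h)) / (2 * h)
    return (flux(r + h) - flux(r - h)) / (2 * h) / r ** 2

lap_r2 = laplacian_radial(lambda r: r * r, 2.0)     # exact value: 6
lap_inv = laplacian_radial(lambda r: 1.0 / r, 2.0)  # 1/r is harmonic: 0
```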
From the Weitzenbock formula we then have \[ \star d \star d K^\sharp = - \Box K^\sharp + Ric(K, \cdot). \] Now, adding cyclic permutations of the fundamental identity for the Riemann tensor, \begin{align*} & \nabla_\mu \nabla_\nu K_\alpha - \nabla_\nu \nabla_\mu K_\alpha = R_{\mu\nu\alpha}^{\,\,\,\,\,\,\,\,\,\,\,\,\beta} K_\beta, \\ & \nabla_\nu \nabla_\alpha K_\mu - \nabla_\alpha \nabla_\nu K_\mu = R_{\nu\alpha\mu}^{\,\,\,\,\,\,\,\,\,\,\,\,\beta} K_\beta,\\ & \nabla_\alpha \nabla_\mu K_\nu - \nabla_\mu \nabla_\alpha K_\nu = R_{\alpha\mu\nu}^{\,\,\,\,\,\,\,\,\,\,\,\,\beta} K_\beta, \end{align*} and using the Killing equation and the first Bianchi identity, we have \[ \nabla_\mu \nabla_\nu K_\alpha = - R_{\nu\alpha\mu}^{\,\,\,\,\,\,\,\,\,\,\,\,\beta} K_\beta, \] whence \[ \Box K_\alpha = - R_{\alpha}^{\,\,\,\,\beta} K_\beta. \] We conclude that \[ \star d \star d K^\sharp = 2 Ric(K, \cdot) \Leftrightarrow d \star d K^\sharp = 2 \star Ric(K, \cdot), \] implying that $\star d K^\sharp$ is indeed closed in vacuum, that is, the Komar mass is well defined: any two homologous compact orientable surfaces $\Sigma_1$ and $\Sigma_2$ which enclose the matter content of the stationary spacetime can be used to compute it (Figure~\ref{Komar2}). \begin{figure}[h!] 
\begin{center} \psfrag{S1}{$\Sigma_1$} \psfrag{S2}{$\Sigma_2$} \psfrag{matter}{matter} \psfrag{vacuum}{vacuum} \epsfxsize=0.5\textwidth \leavevmode \epsfbox{Komar2.eps} \end{center} \caption{Computing the Komar mass with two homologous surfaces.} \label{Komar2} \end{figure} If $\Sigma$ is the boundary of a spacelike $3$-dimensional manifold $B$ whose future-pointing unit normal is $N$ then the Komar mass can be written as \begin{align*} M & = - \frac1{8\pi} \int_{\Sigma} \star d K^\sharp = - \frac1{8\pi} \int_{B} d \star d K^\sharp = - \frac1{4\pi} \int_{B} \star Ric(K, \cdot) \\ & = - \frac1{4\pi} \int_{B} \epsilon(Ric(K, \cdot)^\sharp, \cdot, \cdot, \cdot) = - \frac1{4\pi} \int_{B} - \left\langle Ric(K, \cdot)^\sharp, N \right\rangle \\ & = \frac1{4\pi} \int_{B} R_{\mu\nu} K^\mu N^\nu = 2 \int_{B} \left(T_{\mu\nu} - \frac12 T g_{\mu\nu}\right) K^\mu N^\nu \end{align*} (we used $\epsilon = - N^\sharp \wedge \sigma$ in the second line, where $\sigma$ is the volume element for $B$). Note that the Komar mass is not, as one might have guessed, the integral \[ M' = \int_{B} T_{\mu\nu} K^\mu N^\nu. \] This integral is also well defined in a stationary spacetime, since \[ \nabla^\mu ( T_{\mu\nu} K^\nu) = (\nabla^\mu T_{\mu\nu}) K^\nu + T_{\mu\nu} \nabla^\mu K^\nu = 0, \] due to the contracted Bianchi identity and the Killing equation. However, we also have \[ \nabla^\mu (T K_\mu) = K^\mu \nabla_\mu T + T \nabla^\mu K_\mu = 0, \] because $T$ is constant along $K$, which has zero divergence (by contracting the Killing equation). Therefore we also have \[ \nabla^\mu \left( \left(T_{\mu\nu} - \frac12 T g_{\mu\nu} \right) K^\nu\right) = 0. \] To have an idea of what exactly is measured by the Komar mass, consider a static spacetime, where it is possible to choose $B$ such that $N = e^{-\phi} K$.
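The surface independence can be made concrete on the Schwarzschild exterior (a numerical sketch, not part of the text, with the assumed value $M=1$ and $e^{2\phi} = 1 - \frac{2M}{r}$): evaluating the static Komar integral on coordinate spheres of several radii always returns $M$, because the integrand reduces to $M/r^2$ while the area grows as $4\pi r^2$.

```python
import math

M = 1.0  # Schwarzschild mass parameter (assumed value)

def komar_on_sphere(r):
    # Static formula: (1 / 4 pi) * integral of e^phi (d phi/dr) n^r,
    # with e^{2 phi} = 1 - 2M/r, unit normal n^r = sqrt(1 - 2M/r),
    # and sphere area 4 pi r^2
    f = 1 - 2 * M / r
    dphi_dr = (M / r ** 2) / f
    integrand = math.sqrt(f) * dphi_dr * math.sqrt(f)  # reduces to M / r^2
    return integrand * 4 * math.pi * r ** 2 / (4 * math.pi)

komar_values = [komar_on_sphere(r) for r in (3.0, 5.0, 10.0, 100.0)]
```

This is a vacuum computation; in the presence of matter one chooses, as above, a hypersurface $B$ with $N = e^{-\phi} K$.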
In this case, for a perfect fluid (whose flow lines are necessarily the integral curves of $K$), we have \[ M = 2 \int_{B} \left(T(N,N) + \frac12 T\right) e^\phi = 2 \int_{B} \left(\rho + \frac12 (- \rho + 3 p) \right) e^\phi = \int_{B} \left(\rho + 3 p \right) e^\phi. \] Thus we see that the Komar mass also includes the pressure; this is reminiscent of the Newtonian formula for the internal energy of a monoatomic gas, \[ U = \frac32 pV. \] \section{Field theory} \label{sec6.2} Let us consider the problem of how to define the energy of a field $\psi$ in flat Minkowski space. The field equations for the field $\psi$ are usually the equations for the critical points of an {\bf action} \[ S = \int_{\mathbb{R}^4} \mathcal{L}(\psi, \partial\psi) \, dt \, dx^1 dx^2 dx^3, \] obtained by integrating a {\bf Lagrangian density} $\mathcal{L}$ (where we assume that the field decays fast enough so that $S$ is well defined). The field equations are then \[ \partial_\mu \left( \frac{\partial\mathcal{L}}{\partial(\partial_\mu \psi)}\right) - \frac{\partial\mathcal{L}}{\partial\psi} = 0. \] Indeed, if $\psi(\lambda)$ is a one-parameter family of fields such that $\psi(0)$ is a critical point of the action, and $\delta \equiv \frac{d}{d \lambda}_{|_{\lambda=0}}$, then \begin{align*} \delta S & = \int_{\mathbb{R}^4} \left( \frac{\partial\mathcal{L}}{\partial\psi} \delta \psi + \frac{\partial\mathcal{L}}{\partial(\partial_\mu \psi)} \delta (\partial_\mu \psi) \right) = \int_{\mathbb{R}^4} \left( \frac{\partial\mathcal{L}}{\partial\psi} \delta \psi + \frac{\partial\mathcal{L}}{\partial(\partial_\mu \psi)} \partial_\mu (\delta\psi) \right) \\ & = \int_{\mathbb{R}^4} \partial_\mu \left( \frac{\partial\mathcal{L}}{\partial(\partial_\mu \psi)} \delta\psi \right) + \int_{\mathbb{R}^4} \left( \frac{\partial\mathcal{L}}{\partial\psi} - \partial_\mu \left(\frac{\partial\mathcal{L}}{\partial(\partial_\mu \psi)} \right) \right) \delta\psi. 
\end{align*} For field variations $\delta \psi$ with compact support we then have \[ \delta S = \int_{\mathbb{R}^4} \left( \frac{\partial\mathcal{L}}{\partial\psi} - \partial_\mu \left(\frac{\partial\mathcal{L}}{\partial(\partial_\mu \psi)} \right) \right) \delta\psi. \] The {\bf canonical energy-momentum tensor} is defined as \[ T^{\mu}_{\,\,\,\,\nu} = \frac{\partial\mathcal{L}}{\partial(\partial_\mu \psi)} \partial_\nu \psi - \mathcal{L} \delta^{\mu}_{\,\,\,\,\nu} \] and, when the field equations hold, satisfies \begin{align*} \partial_\mu T^{\mu}_{\,\,\,\,\nu} & = \partial_\mu \left( \frac{\partial\mathcal{L}}{\partial(\partial_\mu \psi)}\right) \partial_\nu \psi + \frac{\partial\mathcal{L}}{\partial(\partial_\mu \psi)} \partial_\mu \partial_\nu \psi \\ & - \frac{\partial\mathcal{L}}{\partial\psi} (\partial_\mu\psi) \delta^{\mu}_{\,\,\,\,\nu} - \frac{\partial\mathcal{L}}{\partial(\partial_\alpha \psi)} (\partial_\mu \partial_\alpha \psi) \delta^{\mu}_{\,\,\,\,\nu} = 0. \end{align*} Defining the {\bf Hamiltonian density} to be \[ \mathcal{H} = -T^0_{\,\,\,\,0} = -\frac{\partial\mathcal{L}}{\partial(\partial_0 \psi)} \partial_0 \psi + \mathcal{L}, \] it is then clear from the divergence theorem that the {\bf Hamiltonian} \[ H = \int_{S_t} \mathcal{H} \, dx^1 dx^2 dx^3 \] is independent of the hypersurface $S_t$ of constant $t$ chosen to compute the integral (assuming that the field decays fast enough so that $H$ is well defined and the boundary integral corresponding to the divergence term vanishes). The Hamiltonian can be identified with the total energy of the field $\psi$, which is therefore constant in time.
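Conservation of the Hamiltonian can be illustrated numerically (a sketch, not part of the text, with assumed lattice parameters): a $1+1$ dimensional Klein-Gordon field on a periodic lattice, evolved with a velocity-Verlet scheme, keeps the discrete Hamiltonian $H = \sum_i \frac12\left(\pi_i^2 + (\partial_x\phi)_i^2 + m^2\phi_i^2\right)\Delta x$ constant up to integration error.

```python
import math

# Lattice parameters (assumed values for illustration)
N, L, m = 64, 2 * math.pi, 1.0
dx = L / N
dt = 0.01
steps = 1000

phi = [math.sin(2 * math.pi * i / N) for i in range(N)]  # initial field
pi_ = [0.0] * N                                          # conjugate momentum

def accel(f):
    # Field equation: d^2 phi/dt^2 = d^2 phi/dx^2 - m^2 phi (periodic lattice)
    return [(f[(i + 1) % N] - 2 * f[i] + f[i - 1]) / dx ** 2 - m * m * f[i]
            for i in range(N)]

def energy(f, p):
    # Discrete Hamiltonian: sum of (1/2)(pi^2 + (d phi/dx)^2 + m^2 phi^2) dx
    e = 0.0
    for i in range(N):
        dfdx = (f[(i + 1) % N] - f[i]) / dx
        e += 0.5 * (p[i] ** 2 + dfdx ** 2 + m * m * f[i] ** 2) * dx
    return e

E0 = energy(phi, pi_)
a = accel(phi)
for _ in range(steps):  # velocity-Verlet integration
    phi = [phi[i] + dt * pi_[i] + 0.5 * dt * dt * a[i] for i in range(N)]
    a_new = accel(phi)
    pi_ = [pi_[i] + 0.5 * dt * (a[i] + a_new[i]) for i in range(N)]
    a = a_new
drift = abs(energy(phi, pi_) - E0) / E0
```

The relative energy drift after a thousand steps stays at the level of the $O(\Delta t^2)$ error of the integrator.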
If we have $N$ fields $\psi_1, \ldots, \psi_N$ instead of a single field $\psi$, then it is easily seen that the field equations are \[ \partial_\mu \left( \frac{\partial\mathcal{L}}{\partial(\partial_\mu \psi_i)}\right) - \frac{\partial\mathcal{L}}{\partial\psi_i} = 0 \qquad (i=1, \ldots, N), \] the canonical energy-momentum tensor is \[ T^{\mu}_{\,\,\,\,\nu} = \frac{\partial\mathcal{L}}{\partial(\partial_\mu \psi_i)} \partial_\nu \psi_i - \mathcal{L} \delta^{\mu}_{\,\,\,\,\nu} \] (summed over $i$), and the Hamiltonian density is \[ \mathcal{H} = - T^0_{\,\,\,\,0} = - \frac{\partial\mathcal{L}}{\partial(\partial_0 \psi_i)} \partial_0 \psi_i + \mathcal{L}. \] Note carefully that up until this point the metric was not used to raise or lower indices, or even to apply the divergence theorem. This will be important in Section~\ref{sec6.4}. If we use Cartesian coordinates then the tensor \[ T^{\mu\nu} = \frac{\partial\mathcal{L}}{\partial(\partial_\mu \psi_i)} \partial^\nu \psi_i - \mathcal{L} g^{\mu\nu} \] satisfies the conservation equation \[ \nabla_\mu T^{\mu\nu} = 0. \] This provides a method for obtaining the energy-momentum tensor that is used in the Einstein field equations for the various matter models. We now list some simple examples. \subsection{Klein-Gordon field} The Lagrangian density for the Klein-Gordon field is \[ \mathcal{L} = \frac12 (g^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi + m^2\phi^2). \] Consequently the canonical energy-momentum tensor for the Klein-Gordon field is \[ T^{\mu\nu} = \partial^\mu \phi \, \partial^\nu \phi - \frac12 g^{\mu\nu} \left( \partial_\alpha \phi \, \partial^\alpha \phi + m^2 \phi^2 \right), \] as claimed in Chapter~\ref{chapter5}. \subsection{Electromagnetic field} In this case we can take as our fields the electromagnetic potentials $A_0, A_1, A_2, A_3$. 
The Lagrangian density (in units where $4\pi\varepsilon_0=1$) is \[ \mathcal{L} = \frac1{16\pi} g^{\alpha\mu}g^{\beta\nu} F_{\alpha\beta} F_{\mu\nu}, \] where \[ F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu. \] Using \[ \frac{\partial F_{\alpha\beta}}{\partial(\partial_\mu A_\nu)} = \delta^{\mu}_{\,\,\,\,\alpha} \delta^{\nu}_{\,\,\,\,\beta} - \delta^{\nu}_{\,\,\,\,\alpha} \delta^{\mu}_{\,\,\,\,\beta}, \] it is easily seen that \[ \partial_\mu \left( \frac{\partial\mathcal{L}}{\partial(\partial_\mu A_\nu)}\right) - \frac{\partial\mathcal{L}}{\partial A_\nu} = 0 \Leftrightarrow \partial_\mu F^{\mu\nu} = 0, \] which are indeed the Maxwell equations $d\star F = 0$, as \begin{align*} & \epsilon_{\mu\gamma\delta\nu} \nabla^\mu \epsilon^{\alpha\beta\gamma\delta} F_{\alpha\beta} = \epsilon_{\mu\nu\gamma\delta} \epsilon^{\alpha\beta\gamma\delta} \nabla^\mu F_{\alpha\beta} \\ & = -2(\delta^{\mu}_{\,\,\,\,\alpha} \delta^{\nu}_{\,\,\,\,\beta} - \delta^{\nu}_{\,\,\,\,\alpha} \delta^{\mu}_{\,\,\,\,\beta}) \nabla^\mu F_{\alpha\beta} = -2 \nabla^\mu F_{\mu\nu}. \end{align*} Note that the equations $dF=0$ follow automatically from the definition $F=dA$. The canonical energy-momentum tensor for the electromagnetic field is \[ T^{\mu\nu}_{\text{can}} = \frac{\partial\mathcal{L}}{\partial(\partial_\mu A_\alpha)} \partial^\nu A_\alpha - \mathcal{L} g^{\mu\nu} = \frac1{4\pi} \left( F^{\mu\alpha} \partial^\nu A_\alpha - \frac14 F_{\alpha\beta}F^{\alpha\beta} g^{\mu\nu}\right). \] This tensor is neither symmetric nor gauge-invariant. However, \[ F^{\mu\alpha} \partial^\nu A_\alpha = F^{\mu\alpha} F^\nu_{\,\,\,\,\alpha } + F^{\mu\alpha} \partial_\alpha A^\nu, \] where the first term is symmetric and gauge-invariant, and the second term is divergenceless: \[ \partial_\mu \left( F^{\mu\alpha} \partial_\alpha A^\nu \right) = F^{\mu\alpha}\partial_\mu\partial_\alpha A^\nu = 0 \] (because $F$ is antisymmetric and the partial derivatives commute). 
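The last step uses the fact that the full contraction of an antisymmetric tensor with a symmetric one vanishes; this can be checked numerically (an illustrative sketch, not part of the text):

```python
import random

random.seed(0)
n = 4

# Random antisymmetric A_{ab} and symmetric S_{ab}
A = [[0.0] * n for _ in range(n)]
S = [[0.0] * n for _ in range(n)]
for a in range(n):
    S[a][a] = random.uniform(-1, 1)
    for b in range(a + 1, n):
        A[a][b] = random.uniform(-1, 1)
        A[b][a] = -A[a][b]
        S[a][b] = S[b][a] = random.uniform(-1, 1)

# Full contraction A^{ab} S_{ab}: antisymmetric against symmetric gives zero
contraction = sum(A[a][b] * S[a][b] for a in range(n) for b in range(n))
```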
Therefore the tensor \[ T^{\mu\nu} = \frac1{4\pi} \left(F^{\mu\alpha} F^\nu_{\,\,\,\,\alpha } - \frac14 F_{\alpha\beta}F^{\alpha\beta} g^{\mu\nu}\right) \] is symmetric, gauge-invariant and divergenceless. This is the true energy-momentum tensor for the electromagnetic field. \subsection{Relativistic elasticity} A continuous medium can be described by a Riemannian 3-manifold $(S, k)$ (the {\bf relaxed configuration}) and projection map $\pi: \mathbb{R}^4 \to S$ whose level sets are timelike curves (the worldlines of the medium particles), as shown in Figure~\ref{congruence}. \begin{figure}[h!] \begin{center} \psfrag{M}{$\mathbb{R}^4$} \psfrag{p}{$\downarrow\pi$} \psfrag{S}{$S$} \psfrag{h}{$h$} \psfrag{k}{$k$} \epsfxsize=0.6\textwidth \leavevmode \epsfbox{congruence.eps} \end{center} \caption{A continuous medium in Minkowski's spacetime.} \label{congruence} \end{figure} If we choose local coordinates $(\bar{x}^1, \bar{x}^2, \bar{x}^3)$ on $S$ then we can think of $\pi$ as a set of three scalar fields $\bar{x}^1, \bar{x}^2, \bar{x}^3:\mathbb{R}^4 \to \mathbb{R}$. For a given worldline, we can complete this set of scalar fields into local coordinates $(\bar{t},\bar{x}^1, \bar{x}^2, \bar{x}^3)$ for $\mathbb{R}^4$ such that $\bar{t}$ is the proper time along that worldline and its level sets are orthogonal to it: \[ g = - d\bar{t}^2 + h_{ij} d\bar{x}^i d\bar{x}^j \qquad \text{(on the worldline)}. \] Notice that the orthogonal metric \[ h = h_{ij} d\bar{x}^i d\bar{x}^j \] can be thought of as a time-dependent Riemannian metric on $S$, describing the local deformations of the medium along each worldline, that is, the deviations from the natural metric \[ k = k_{ij} d\bar{x}^i d\bar{x}^j. \] We can compute the (inverse) metric $h$ from \[ h^{ij} = g^{\mu\nu} \frac{\partial \bar{x}^i}{\partial x^\mu} \frac{\partial \bar{x}^j}{\partial x^\nu}, \] which does not depend on the choice of $\bar{t}$. 
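As a simple illustration (not in the text), consider a medium moving rigidly with velocity $v$ in Minkowski space, labeled by the fields $\bar{x}^i = x^i - v^i t$. The formula above gives $h^{ij} = \delta^{ij} - v^i v^j$, whose eigenvalue along $v$ is $1 - |v|^2$; since the eigenvalues of $h^{ij}$ are the inverse squared stretch factors, the stretch along the motion is the Lorentz factor $\gamma = (1-|v|^2)^{-1/2}$.

```python
import math

v = [0.3, 0.2, 0.1]  # medium velocity (assumed values, |v| < 1, units with c = 1)
eta_inv = [[-1.0, 0.0, 0.0, 0.0],
           [0.0, 1.0, 0.0, 0.0],
           [0.0, 0.0, 1.0, 0.0],
           [0.0, 0.0, 0.0, 1.0]]

def grad_xbar(i):
    # Gradient of xbar^i = x^i - v^i t: d/dx^0 -> -v^i, d/dx^j -> delta^i_j
    g = [0.0, 0.0, 0.0, 0.0]
    g[0] = -v[i]
    g[i + 1] = 1.0
    return g

# h^{ij} = g^{mu nu} (d xbar^i / dx^mu)(d xbar^j / dx^nu)
h_up = [[sum(eta_inv[mu][nu] * grad_xbar(i)[mu] * grad_xbar(j)[nu]
             for mu in range(4) for nu in range(4))
         for j in range(3)] for i in range(3)]

v2 = sum(c * c for c in v)
# Expect h^{ij} = delta^{ij} - v^i v^j ...
form_err = max(abs(h_up[i][j] - ((1.0 if i == j else 0.0) - v[i] * v[j]))
               for i in range(3) for j in range(3))
# ... so v is an eigenvector with eigenvalue 1 - |v|^2 = 1/s^2
hv = [sum(h_up[i][j] * v[j] for j in range(3)) for i in range(3)]
eig_err = max(abs(hv[i] - (1 - v2) * v[i]) for i in range(3))
gamma = 1 / math.sqrt(1 - v2)  # stretch factor along the motion
```

Note that the computation only uses the gradients of the fields $\bar{x}^i$, never a choice of $\bar{t}$.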
In other words, the metric $h$ is a quadratic function of the partial derivatives of the fields $\bar{x}^1, \bar{x}^2, \bar{x}^3$. An elastic Lagrangian density $\mathcal{L}$ for these fields is obtained by assuming that $\mathcal{L} = \mathcal{L}(\bar{x}^i, h^{ij})$. The canonical energy-momentum tensor is \[ T^{\mu\nu} = \frac{\partial\mathcal{L}}{\partial(\partial_\mu \bar{x}^i)} \partial^\nu \bar{x}^i - \mathcal{L} g^{\mu\nu}, \] and so in the coordinate system $(\bar{t},\bar{x}^1, \bar{x}^2, \bar{x}^3)$ we have \[ T^{\bar{0}\bar{0}} = \mathcal{L}, \] that is, the elastic Lagrangian density is just the rest energy density $\rho = T^{\bar{0}\bar{0}}$ measured by each particle in the medium. The choice of $\rho = \rho(\bar{x}^i, h^{ij})$ is called the {\bf elastic law} of the continuous medium. We define {\bf homogeneous and isotropic materials} to be those for which $\rho$ depends only on the eigenvalues $({s_1}^2,{s_2}^2,{s_3}^2)$ of $h_{ij}$ with respect to $k_{ij}$ (that is, the eigenvalues of the matrix $(h_{ij})$ in a frame where $k_{ij}=\delta_{ij}$). Note that $(s_1,s_2,s_3)$ are the stretch factors along the principal directions given by the eigenvectors of $h_{ij}$, that is, the distance between two nearby points along a principal direction in the current configuration, as measured by the metric $h$, divided by the distance between the same points in the relaxed configuration, as measured by the metric $k$. Assume that $k_{ij}=\delta_{ij}$, that is, that $(S, k)$ is the Euclidean space. We define the more convenient variables \begin{align*} & \lambda_0 = \det (h^{ij}) = \frac1{(s_1 s_2 s_3)^2} ; \\ & \\ & \lambda_1 = \operatorname{tr} (h^{ij}) = \frac1{{s_1}^2} + \frac1{{s_2}^2} + \frac1{{s_3}^2}; \\ & \\ & \lambda_2 = \operatorname{tr} \operatorname{cof} (h^{ij}) = \frac1{(s_1s_2)^2} + \frac1{(s_2 s_3)^2} + \frac1{(s_3 s_1)^2}. 
\end{align*} Note that \[ s_1 s_2 s_3 = \left(\frac1{\lambda_0}\right)^\frac12 \] is the volume occupied in the deformed state by a unit volume of material in the relaxed configuration. Equivalently, \[ n = (\lambda_0)^\frac12 \] is the number density of particles of the medium in the deformed state, if we normalize the number density in the relaxed configuration to be $1$ particle per unit volume. Elastic media whose elastic law depends only on $n$, \[ \rho=\rho(\lambda_0), \] are simply perfect fluids. To check this, we note that from the formula for the inverse of a matrix we have \[ h_{ij} = \frac1{\lambda_0} A^{ji} = \frac1{\lambda_0} A^{ij}, \] where $A^{ij}$ is the $(i,j)$-cofactor of $(h^{ij})$. On the other hand, from the Laplace expansion for determinants we have \[ \lambda_0 = \sum_{j=1}^3 h^{ij} A^{ij} \qquad (\text{no sum over } i), \] and so \[ \frac{\partial \lambda_0}{\partial h^{ij}} = A^{ij} = \lambda_0 h_{ij}. \] Therefore \begin{align} T^{\mu\nu} & = \frac{d \rho}{d \lambda_0} \frac{\partial \lambda_0}{\partial h^{ij}} \frac{\partial h^{ij}}{\partial(\partial_\mu \bar{x}^k)} \partial^\nu \bar{x}^k - \rho g^{\mu\nu} \nonumber \\ & = \frac{d \rho}{d \lambda_0} \lambda_0 h_{ij} (g^{\mu\alpha} \delta^{i}_{\,\,\,\,k} \partial_\alpha \bar{x}^j + g^{\mu\alpha} \partial_\alpha \bar{x}^i \delta^{j}_{\,\,\,\,k})\partial^\nu \bar{x}^k - \rho g^{\mu\nu} \nonumber \\ & = 2\lambda_0\frac{d \rho}{d \lambda_0} h_{ij} \partial^\mu \bar{x}^i \partial^\nu \bar{x}^j - \rho g^{\mu\nu}. 
\nonumber \end{align} Since \[ h_{\mu\nu} = h_{ij} \partial_\mu \bar{x}^i \partial_\nu \bar{x}^j \] is simply the metric on the hyperplanes orthogonal to the worldlines, that is, \[ h_{\mu\nu} = g_{\mu\nu} + U_\mu U_\nu, \] where $U$ is the unit tangent vector to the worldlines, we obtain \begin{align} T^{\mu\nu} & = 2\lambda_0\frac{d \rho}{d \lambda_0}U^\mu U^\nu + \left(2\lambda_0\frac{d \rho}{d \lambda_0} - \rho\right) g^{\mu\nu} \nonumber \\ & = (\rho + p) U^\mu U^\nu + p g^{\mu\nu} \nonumber \end{align} with \[ p = 2\lambda_0\frac{d \rho}{d \lambda_0} - \rho, \] which is indeed the energy-momentum tensor of a perfect fluid. For example, dust corresponds to the elastic law $\rho=\rho_0 \sqrt{\lambda_0}$ (for some positive constant $\rho_0$), yielding $p=0$, and a stiff fluid, with equation of state $p=\rho$, is given by the choice $\rho=\rho_0 \lambda_0$. The ``hard phase'' rigid fluid introduced by Christodoulou, with equation of state $p=\rho-\rho_0$, corresponds to $\rho=\frac{\rho_0}2(\lambda_0+1)$. To obtain elastic materials that are not fluids we must choose elastic laws that also depend on $\lambda_1$ and $\lambda_2$. For instance, an elastic law is said to be {\bf quasi-Hookean} if it is of the form \[ \rho = \hat{\rho}(n) + \hat{\mu}(n) \sigma, \] where $\sigma$ is a {\bf shear scalar}, that is, a non-negative function of the stretch factors such that $\sigma=0$ if and only if $s_1 = s_2 = s_3$. The functions $\hat{\rho}$ and $\hat{\mu}$ are called the {\bf unsheared energy density} and the {\bf rigidity modulus} of the elastic material.
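The invariants and the perfect-fluid pressure formula can be checked numerically (a sketch with assumed sample values, not part of the text): with $h^{ij} = \operatorname{diag}(s_1^{-2}, s_2^{-2}, s_3^{-2})$ the variables $\lambda_0, \lambda_1, \lambda_2$ reduce to the stated symmetric functions of the stretch factors, and the elastic laws for dust, the stiff fluid and the hard phase fluid yield $p = 0$, $p = \rho$ and $p = \rho - \rho_0$.

```python
s = (1.3, 0.8, 1.6)             # stretch factors (assumed sample values)
inv2 = [1 / x ** 2 for x in s]  # eigenvalues of h^{ij} when k_{ij} = delta_{ij}

lam0 = inv2[0] * inv2[1] * inv2[2]                          # determinant
lam1 = inv2[0] + inv2[1] + inv2[2]                          # trace
lam2 = inv2[0] * inv2[1] + inv2[1] * inv2[2] + inv2[2] * inv2[0]  # trace of cofactor

err0 = abs(lam0 - 1 / (s[0] * s[1] * s[2]) ** 2)
err2 = abs(lam2 - (1 / (s[0] * s[1]) ** 2 + 1 / (s[1] * s[2]) ** 2
                   + 1 / (s[2] * s[0]) ** 2))

rho0 = 2.0  # constant in the elastic laws (assumed value)

def pressure(rho, lam, h=1e-6):
    # p = 2 lambda_0 d rho / d lambda_0 - rho, derivative by central differences
    return 2 * lam * (rho(lam + h) - rho(lam - h)) / (2 * h) - rho(lam)

dust = lambda l: rho0 * l ** 0.5      # expect p = 0
stiff = lambda l: rho0 * l            # expect p = rho
hard = lambda l: rho0 / 2 * (l + 1)   # expect p = rho - rho0

p_dust = pressure(dust, lam0)
p_stiff = pressure(stiff, lam0)
p_hard = pressure(hard, lam0)
```

Quasi-Hookean laws, by contrast, additionally involve $\lambda_1$ and $\lambda_2$ through the shear scalar.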
Examples of these are the {\bf John quasi-Hookean material}, corresponding to the shear scalar \[ \sigma = \frac{{s_1}^2 + {s_2}^2 + {s_3}^2}{\left({s_1}^2 {s_2}^2 {s_3}^2 \right)^\frac13} - 3, \] and the {\bf Karlovini-Samuelsson quasi-Hookean material}, corresponding to the shear scalar \[ \sigma = \frac1{12} \left[ \left( \frac{s_1}{s_2} - \frac{s_2}{s_1} \right)^2 + \left( \frac{s_1}{s_3} - \frac{s_3}{s_1} \right)^2 + \left( \frac{s_2}{s_3} - \frac{s_3}{s_2} \right)^2 \right]. \] It is easily seen that the first elastic law is of the form \[ \rho=f(\lambda_0)+g(\lambda_0) \lambda_2, \] whereas the second is of the form \[ \rho=f(\lambda_0)+g(\lambda_0) \lambda_1 \lambda_2. \] Other examples are the {\bf stiff ultra-rigid material} of Karlovini and Samuelsson, given by \[ \rho=\frac{\rho_0}4 (\lambda_2 + 1), \] and the {\bf Brotas rigid solid}, given by \[ \rho=\frac{\rho_0}{8}(\lambda_0 + \lambda_1 + \lambda_2 + 1) \] (where $\rho_0$ is a positive constant). \section{Einstein-Hilbert action} \label{sec6.3} The variational formulation of field theories is coordinate-free, and so it gives a simple method to write the field equations in an arbitrary coordinate system. We must be careful, however, to note that the action written in the new coordinate system must include the Jacobian of the coordinate transformation: \[ S = \int_{\mathbb{R}^4} \mathcal{L}(\psi, \partial\psi) \sqrt{|\det(g_{\mu\nu})|} \, dx^0 dx^1 dx^2 dx^3. \] This suggests that to generalize the field equations to an arbitrary curved spacetime $(M,g)$ one should consider actions of this form, where $\mathcal{L}$ should be invariant under coordinate changes. In particular, one may wonder if there is a Lagrangian action for the metric itself which yields the Einstein field equations.
The answer to this question is affirmative, and the corresponding action is known as the {\bf Einstein-Hilbert action}: \[ S = \int_{M} R \sqrt{|\det(g_{\mu\nu})|} \, dx^0 dx^1 dx^2 dx^3 = \int_{M} R \epsilon, \] where $R$ and $\epsilon$ are the scalar curvature and the volume element of the metric $g$, and the integral is over an arbitrary (oriented) manifold $M$. Note that $R$ depends on $g$ and its first and second partial derivatives, unlike the Lagrangian densities that we encountered before. Instead of deriving the Euler-Lagrange equations for this case, we will proceed in a more geometric way. We note here, however, that this action is exceptional in that it leads to second-order equations for the metric, as opposed to the fourth-order equations that are typical of Lagrangian densities depending on second partial derivatives. To obtain the Euler-Lagrange equations for the Einstein-Hilbert action we start by considering two affine connections $\nabla$ and $\tilde{\nabla}$ on $M$. Because \[ (\tilde{\nabla}_X - \nabla_X) (fY) = f (\tilde{\nabla}_X - \nabla_X) Y, \] there exists a tensor $C$ such that \[ (\tilde{\nabla}_X - \nabla_X) Y = C(X,Y) \Leftrightarrow \tilde{\nabla}_\mu Y^\nu = \nabla_\mu Y^\nu + C^\nu_{\mu\alpha} Y^\alpha. \] If both $\nabla$ and $\tilde{\nabla}$ are symmetric then \[ 0 = (\tilde{\nabla}_X Y - \tilde{\nabla}_Y X) - (\nabla_X Y - \nabla_Y X) = C(X,Y) - C(Y,X), \] that is, $C$ is symmetric: \[ C^\alpha_{\mu\nu} = C^\alpha_{\nu\mu}. \] Using the Leibniz rule, it is easy to determine the relation between the covariant derivatives of any tensor using the two connections: for example, \[ \tilde{\nabla}_\alpha T^\beta_{\mu\nu} = \nabla_\alpha T^\beta_{\mu\nu} + C^\beta_{\alpha\gamma} T^\gamma_{\mu\nu} - C^\gamma_{\alpha\mu} T^\beta_{\gamma\nu} - C^\gamma_{\alpha\nu} T^\beta_{\mu\gamma}. \] Assume now that $\tilde{\nabla}$ is the Levi-Civita connection for the metric $g$.
Then we have \[ 0 = \tilde{\nabla}_\alpha g_{\mu\nu} = \nabla_\alpha g_{\mu\nu} - C^\beta_{\alpha\mu} g_{\beta\nu} - C^\beta_{\alpha\nu} g_{\mu\beta} = \nabla_\alpha g_{\mu\nu} - C_{\nu\alpha\mu} - C_{\mu\alpha\nu}. \] By subtracting this identity from its cyclic permutations, \begin{align*} & \nabla_\mu g_{\nu\alpha} = C_{\alpha\mu\nu} + C_{\nu\mu\alpha}, \\ & \nabla_\nu g_{\alpha\mu} = C_{\mu\nu\alpha} + C_{\alpha\nu\mu}, \end{align*} we readily obtain \begin{align} & 2 C_{\alpha\mu\nu} = \nabla_\mu g_{\nu\alpha} + \nabla_\nu g_{\alpha\mu} - \nabla_\alpha g_{\mu\nu} \Leftrightarrow \nonumber \\ & C^\alpha_{\mu\nu} = \frac12 g^{\alpha\beta} \left(\nabla_\mu g_{\nu\beta} + \nabla_\nu g_{\mu\beta} - \nabla_\beta g_{\mu\nu}\right). \label{Christoffel} \end{align} Moreover, we have \begin{align*} \tilde{\nabla}_\mu \tilde{\nabla}_\nu X^\alpha = & \, \tilde{\nabla}_\mu (\nabla_\nu X^\alpha + C^\alpha_{\nu\beta} X^\beta ) = \nabla_\mu \nabla_\nu X^\alpha + C^\alpha_{\mu\beta} \nabla_\nu X^\beta - C^\beta_{\mu\nu} \nabla_\beta X^\alpha \\ & + \nabla_\mu C^\alpha_{\nu\beta} X^\beta + C^\alpha_{\nu\beta} \nabla_\mu X^\beta + C^\alpha_{\mu\gamma} C^\gamma_{\nu\beta} X^\beta - C^\gamma_{\mu\nu} C^\alpha_{\gamma\beta} X^\beta, \end{align*} whence \begin{align*} \tilde{R}_{\mu\nu\,\,\,\,\beta}^{\,\,\,\,\,\,\,\,\alpha} X^\beta = & \, (\tilde{\nabla}_\mu \tilde{\nabla}_\nu - \tilde{\nabla}_\nu \tilde{\nabla}_\mu) X^\alpha = R_{\mu\nu\,\,\,\,\beta}^{\,\,\,\,\,\,\,\,\alpha} X^\beta \\ & + \left(\nabla_\mu C^\alpha_{\nu\beta} - \nabla_\nu C^\alpha_{\mu\beta} + C^\alpha_{\mu\gamma} C^\gamma_{\nu\beta} - C^\alpha_{\nu\gamma} C^\gamma_{\mu\beta} \right) X^\beta, \end{align*} that is, \begin{equation} \label{curvature} \tilde{R}_{\mu\nu\,\,\,\,\beta}^{\,\,\,\,\,\,\,\,\alpha} = R_{\mu\nu\,\,\,\,\beta}^{\,\,\,\,\,\,\,\,\alpha} + \nabla_\mu C^\alpha_{\nu\beta} - \nabla_\nu C^\alpha_{\mu\beta} + C^\alpha_{\mu\gamma} C^\gamma_{\nu\beta} - C^\alpha_{\nu\gamma} C^\gamma_{\mu\beta}. 
\end{equation} Note that we retrieve the usual formulae for the Christoffel symbols and the Riemann tensor from equations~\eqref{Christoffel} and \eqref{curvature} in the case when $\nabla_\mu = \partial_\mu$. Let $g(\lambda)$ be a one-parameter family of Lorentzian metrics on $M$, and choose $\nabla$ and $\tilde{\nabla}$ to be the Levi-Civita connections of $g(0)$ and $g(\lambda)$. The difference between these connections is a tensor $C(\lambda)$ with $C(0)=0$. Again setting $\delta \equiv \frac{d}{d \lambda}_{|_{\lambda=0}}$, we have from~\eqref{Christoffel} \begin{align*} \delta C^\alpha_{\mu\nu} & = \frac12 g^{\alpha\beta} \left(\nabla_\mu \delta g_{\nu\beta} + \nabla_\nu \delta g_{\mu\beta} - \nabla_\beta \delta g_{\mu\nu}\right) \\ & = \frac12 \left(\nabla_\mu \delta g_{\nu}^{\,\,\,\,\alpha} + \nabla_\nu \delta g_{\mu}^{\,\,\,\,\alpha} - \nabla^\alpha \delta g_{\mu\nu} \right), \end{align*} where $g_{\mu\nu}$ means $g_{\mu\nu}(0)$ and all indices are raised or lowered with this metric. Note carefully that $\delta g$ means the tensor $\delta g_{\mu\nu}$, possibly with some indices raised. Thus for instance \[ \delta(g^{\mu\nu}) = - g^{\mu\alpha}g^{\nu\beta} \delta g_{\alpha\beta} = - \delta g^{\mu\nu}. \] From \eqref{curvature} we have \[ \delta R_{\mu\nu\,\,\,\,\beta}^{\,\,\,\,\,\,\,\,\alpha} = \nabla_\mu \delta C^\alpha_{\nu\beta} - \nabla_\nu \delta C^\alpha_{\mu\beta}, \] whence \begin{align*} \delta R_{\mu\nu} & = \nabla_\alpha \delta C^\alpha_{\mu\nu} - \nabla_\mu \delta C^\alpha_{\alpha\nu} \\ & = \frac12 \nabla_\alpha \left( \nabla_\mu \delta g_{\nu}^{\,\,\,\,\alpha} + \nabla_\nu \delta g_{\mu}^{\,\,\,\,\alpha} - \nabla^\alpha \delta g_{\mu\nu} \right) - \frac12 \nabla_\mu \nabla_\nu \delta g_{\alpha}^{\,\,\,\,\alpha}.
\end{align*} Note that \[ g^{\mu\nu} \delta R_{\mu\nu} = - \nabla_\mu \nabla^\mu \delta g_{\nu}^{\,\,\,\,\nu} + \nabla^\mu \nabla^\nu \delta g_{\mu\nu} = \nabla^\mu \left( -\nabla_\mu \delta g_{\nu}^{\,\,\,\,\nu} + \nabla^\nu \delta g_{\mu\nu}\right) \] is a divergence with respect to the metric $g(0)$, and will vanish when integrated for variations of the metric with compact support. The variation of the Einstein-Hilbert action is \[ \delta S = \delta \int_M R \epsilon = \int_M \delta(R \epsilon) = \int_M (\delta R \, \epsilon + R \, \delta \epsilon). \] We have \[ \delta R = \delta(g^{\mu\nu} R_{\mu\nu}) = - \delta g^{\mu\nu} R_{\mu\nu} + \nabla^\mu \left( -\nabla_\mu \delta g_{\nu}^{\,\,\,\,\nu} + \nabla^\nu \delta g_{\mu\nu}\right) \] and, using the identity \[ \delta \det A = (\det A) \operatorname{tr}(A^{-1} \delta A), \] for any matrix-valued function $A$, \begin{align*} \delta \epsilon & = \delta \sqrt{-\det(g_{\mu\nu})} \, dx^0 \wedge dx^1 \wedge dx^2 \wedge dx^3 \\ & = - \frac12 \left(\det(g_{\mu\nu})\right) g^{\mu\nu} \delta g_{\mu\nu} \left(-\det(g_{\mu\nu})\right)^{-\frac12} \, dx^0 \wedge dx^1 \wedge dx^2 \wedge dx^3 \\ & = \frac12 g_{\mu\nu} \delta g^{\mu\nu} \epsilon. \end{align*} We conclude that \begin{align*} \delta S & = - \int_M \left(R_{\mu\nu} - \frac12 R g_{\mu\nu} \right) \delta g^{\mu\nu} \epsilon + \int_M \nabla^\mu \left( -\nabla_\mu \delta g_{\nu}^{\,\,\,\,\nu} + \nabla^\nu \delta g_{\mu\nu}\right) \epsilon \\ & = - \int_M G_{\mu\nu} \delta g^{\mu\nu} \epsilon \end{align*} for variations of the metric with compact support. Therefore the Euler-Lagrange equations for the Einstein-Hilbert action are the vacuum Einstein equations $G_{\mu\nu} = 0$. \begin{Remark} It is easy to see that the Einstein tensor of a compact surface $(M,g)$ vanishes identically. Therefore we have \[ \delta \int_M R \epsilon = 0 \] automatically.
If $g_1$ and $g_2$ are any two Riemannian metrics on $M$ then $g(\lambda)=(1-\lambda)g_1 + \lambda g_2$ is a Riemannian metric which interpolates between $g_1$ and $g_2$. We then have \[ \frac{d}{d\lambda} \int_M R(\lambda) \epsilon(\lambda) = 0 \Rightarrow \int_M R_1 \epsilon_1 = \int_M R_2 \epsilon_2, \] that is, the integral of the scalar curvature does not depend on the metric. This statement is known as the {\bf Gauss-Bonnet theorem}. \end{Remark} To include matter fields $\psi$ and a cosmological constant $\Lambda$ in the Einstein equations we consider the action \[ S = \int_M \left(\mathcal{L}(g,\psi) - \frac1{16 \pi}(R - 2\Lambda)\right) \epsilon. \] It is clear that \[ \delta S = \int_M E(\psi) \delta \psi \, \epsilon + \frac1{16 \pi} \int_M \left(G_{\mu\nu} + \Lambda g_{\mu\nu} - 8 \pi T_{\mu\nu} \right) \delta g^{\mu\nu} \epsilon, \] where we have defined \[ \delta \int_M \mathcal{L}(g,\psi) \epsilon = \int_M E(\psi) \delta \psi \, \epsilon - \frac12 \int_M T_{\mu\nu} \delta g^{\mu\nu} \epsilon. \] The Euler-Lagrange equations are then the Einstein equations with sources plus the field equations for $\psi$: \[ \begin{cases} G_{\mu\nu} + \Lambda g_{\mu\nu} = 8 \pi T_{\mu\nu} \\ E(\psi) = 0 \end{cases} . \] The energy-momentum tensor is then \[ T_{\mu\nu} = - 2 \frac{\delta \mathcal{L}}{\delta g^{\mu\nu}} - \mathcal{L} g_{\mu\nu}, \] where the second term comes from the variation of the volume element and we have set \[ \delta \mathcal{L} = \frac{\delta \mathcal{L}}{\delta g^{\mu\nu}} \delta g^{\mu\nu} + E(\psi) \delta \psi. \] It is interesting to note that the energy-momentum tensor as defined here often agrees with what one would expect from the canonical energy-momentum tensor associated to the Lagrangian density $\mathcal{L}$ in Minkowski's spacetime. 
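The Gauss-Bonnet statement in the remark above can be checked numerically: for any metric on a surface of sphere topology the integral of the scalar curvature equals $8\pi$ (i.e. $4\pi\chi$). The following Python sketch (an illustration, not part of the argument) verifies this for round spheres and a triaxial ellipsoid, using $R = 2K$ together with the classical closed-form expression for the Gaussian curvature of an ellipsoid, which is assumed here rather than derived:

```python
# Midpoint-rule check that \int_M R = 8*pi for any metric on the sphere,
# here tested on ellipsoids x^2/a^2 + y^2/b^2 + z^2/c^2 = 1 (using R = 2K).
import math

def scalar_curvature_integral(a, b, c, n_theta=200, n_phi=200):
    total = 0.0
    dth = math.pi / n_theta
    dph = 2 * math.pi / n_phi
    for i in range(n_theta):
        th = (i + 0.5) * dth
        for j in range(n_phi):
            ph = (j + 0.5) * dph
            x = a * math.sin(th) * math.cos(ph)
            y = b * math.sin(th) * math.sin(ph)
            z = c * math.cos(th)
            # Gaussian curvature of the ellipsoid at the point (x, y, z)
            w = (x / a**2)**2 + (y / b**2)**2 + (z / c**2)**2
            K = 1.0 / ((a * b * c)**2 * w**2)
            # area element |r_theta x r_phi| dtheta dphi
            rt = (a * math.cos(th) * math.cos(ph),
                  b * math.cos(th) * math.sin(ph),
                  -c * math.sin(th))
            rp = (-a * math.sin(th) * math.sin(ph),
                  b * math.sin(th) * math.cos(ph),
                  0.0)
            cross = (rt[1] * rp[2] - rt[2] * rp[1],
                     rt[2] * rp[0] - rt[0] * rp[2],
                     rt[0] * rp[1] - rt[1] * rp[0])
            dA = math.sqrt(sum(u * u for u in cross)) * dth * dph
            total += 2 * K * dA          # scalar curvature R = 2K for a surface
    return total
```

For round spheres of any radius and for the ellipsoid with semi-axes $(1, 1.5, 2)$ the result agrees with $8\pi \approx 25.13$ to quadrature accuracy, as the remark predicts.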
\section{Gravitational waves} \label{sec6.35} Since we have computed the variation of the Ricci tensor, we make a short digression to discuss the linearized Einstein vacuum equations, which describe the propagation of gravitational waves on a fixed solution $(M,g)$ of the full nonlinear vacuum equations $R_{\mu\nu} = \Lambda g_{\mu\nu}$. The linearized equations are simply \[ \delta R_{\mu\nu} = \Lambda \delta g_{\mu\nu}, \] that is, \begin{equation} \label{linearEinstein} \nabla_\alpha \left( \nabla_\mu \delta g_{\nu}^{\,\,\,\,\alpha} + \nabla_\nu \delta g_{\mu}^{\,\,\,\,\alpha} - \nabla^\alpha \delta g_{\mu\nu} \right) - \nabla_\mu \nabla_\nu \delta g_{\alpha}^{\,\,\,\,\alpha} = 2 \Lambda \delta g_{\mu\nu}. \end{equation} Note that some variations of the metric are trivial, in that they arise from the diffeomorphism invariance of the Einstein equations: if $\psi_\lambda$ is a one-parameter family of diffeomorphisms, then the variation \[ g(\lambda) = {\psi_\lambda}^* g \] yields metrics which are isometric to $g=g(0)$, but expressed in different coordinates. In this case we have \[ \delta g = \mathcal{L}_V g, \] that is \[ \delta g_{\mu\nu} = \nabla_\mu V_\nu + \nabla_\nu V_\mu, \] where $V$ is the vector field defined at each point $p \in M$ by \[ V_p = \frac{d}{d \lambda}_{|_{\lambda=0}} \psi_\lambda(p). \] Therefore, the linearized Einstein equations have gauge freedom: we can always add a Lie derivative of $g$ to any given variation of the metric without altering its physical meaning. This corresponds to gauge transformations of the form \[ \delta g_{\mu\nu} \to \delta g_{\mu\nu} + \nabla_\mu V_\nu + \nabla_\nu V_\mu. \] We will now construct a gauge where equations~\eqref{linearEinstein} look particularly simple. 
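Since pure-gauge variations $\delta g_{\mu\nu} = \nabla_\mu V_\nu + \nabla_\nu V_\mu$ are physically trivial, they must solve the linearized equations identically. A small symbolic sketch (sympy), taking the flat background $g$ = Minkowski so that $\Lambda = 0$ and $\nabla = \partial$, confirms that $\delta R_{\mu\nu} = 0$ for an arbitrary covector field $V$:

```python
# Symbolic check: on the flat background (Lambda = 0, nabla = partial),
# a pure-gauge variation delta g_{mu nu} = partial_mu V_nu + partial_nu V_mu
# gives delta R_{mu nu} = 0 for an arbitrary covector field V.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)
eta = sp.diag(-1, 1, 1, 1)                  # Minkowski metric (equal to its inverse)

V = [sp.Function('V%d' % m)(*X) for m in range(4)]   # covariant components V_mu
dg = [[sp.diff(V[n], X[m]) + sp.diff(V[m], X[n])     # delta g_{mu nu}
       for n in range(4)] for m in range(4)]

def mixed(n, a):
    """delta g_nu^alpha, one index raised with eta."""
    return sum(eta[a, b] * dg[n][b] for b in range(4))

trace = sum(eta[a, b] * dg[a][b] for a in range(4) for b in range(4))

def delta_ricci(m, n):
    """The variation formula derived in the previous section, with nabla = partial."""
    total = 0
    for a in range(4):
        inner = (sp.diff(mixed(n, a), X[m]) + sp.diff(mixed(m, a), X[n])
                 - sum(eta[a, b] * sp.diff(dg[m][n], X[b]) for b in range(4)))
        total += sp.diff(inner, X[a])
    return sp.expand(total / 2 - sp.diff(trace, X[m], X[n]) / 2)

gauge_trivial = all(delta_ricci(m, n) == 0 for m in range(4) for n in range(4))
```

On a curved background the same computation requires the background Christoffel symbols, and the gauge variation solves the linearized equation only when $g$ itself solves $R_{\mu\nu} = \Lambda g_{\mu\nu}$.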
To do that, we consider the trace-reversed metric perturbation \[ \overline{\delta g}_{\mu\nu} = \delta g_{\mu\nu} - \frac12 (\delta g_{\alpha}^{\,\,\,\,\alpha}) g_{\mu\nu}, \] which transforms under a gauge transformation as \[ \overline{\delta g}_{\mu\nu} \to \overline{\delta g}_{\mu\nu} + \nabla_\mu V_\nu + \nabla_\nu V_\mu - (\nabla_\alpha V^\alpha) g_{\mu\nu}, \] so that its divergence transforms as \begin{align*} \nabla^\mu \overline{\delta g}_{\mu\nu} \to & \nabla^\mu \overline{\delta g}_{\mu\nu} + \Box V_\nu + \nabla_\mu \nabla_\nu V^\mu - \nabla_\nu \nabla_\alpha V^\alpha \\ & = \nabla^\mu \overline{\delta g}_{\mu\nu} + \Box V_\nu + R_{\mu\nu\,\,\,\,\alpha}^{\,\,\,\,\,\,\,\,\mu} V^\alpha \\ & = \nabla^\mu \overline{\delta g}_{\mu\nu} + \Box V_\nu + R_{\nu\alpha} V^\alpha. \end{align*} Assume that $(M,g)$ is globally hyperbolic. By solving the wave equation \[ \Box V_\nu + R_{\nu\alpha} V^\alpha + \nabla^\mu \overline{\delta g}_{\mu\nu} = 0, \] we can then change to a gauge where \[ \nabla^\mu \overline{\delta g}_{\mu\nu} = 0, \] that is, \[ \nabla^\mu \delta g_{\mu\nu} = \frac12 \nabla_\nu \delta g_{\alpha}^{\,\,\,\,\alpha}. \] Taking the trace of equation~\eqref{linearEinstein} we then obtain \begin{equation} \label{waveperturbation} \Box \, \delta g_{\alpha}^{\,\,\,\,\alpha} + 2 \Lambda \delta g_{\alpha}^{\,\,\,\,\alpha} = 0. \end{equation} Note that we still have a residual gauge freedom, corresponding to vector fields $V$ such that \[ \Box V_\nu + R_{\nu\alpha} V^\alpha = 0. 
\] Choosing $V$ and its derivatives on some Cauchy hypersurface $S$ such that \[ \delta g_{\alpha}^{\,\,\,\,\alpha} + 2 \nabla_\alpha V^\alpha = N \cdot \left( \delta g_{\alpha}^{\,\,\,\,\alpha} + 2 \nabla_\alpha V^\alpha \right) = 0, \] where $N$ is the future-pointing unit normal to $S$, and solving the wave equation for $V$ above, we guarantee the existence of a gauge where \[ \delta g_{\alpha}^{\,\,\,\,\alpha} = N \cdot \left( \delta g_{\alpha}^{\,\,\,\,\alpha}\right) = 0 \] on $S$. The wave equation~\eqref{waveperturbation} then guarantees that \[ \delta g_{\alpha}^{\,\,\,\,\alpha} = 0 \] on $M$, and so \[ \nabla^\mu \delta g_{\mu\nu} = 0. \] This is the so-called {\bf transverse traceless gauge}. In this gauge, the linearized Einstein equation~\eqref{linearEinstein} can be written as \[ \nabla_\alpha \nabla_\mu \delta g_{\nu}^{\,\,\,\,\alpha} + \nabla_\alpha \nabla_\nu \delta g_{\mu}^{\,\,\,\,\alpha} - \Box \delta g_{\mu\nu} = 2 \Lambda \delta g_{\mu\nu}. \] Using \begin{align*} \nabla_\alpha \nabla_\mu \delta g_{\nu}^{\,\,\,\,\alpha} & = \nabla_\mu \nabla_\alpha \delta g_{\nu}^{\,\,\,\,\alpha} + R_{\alpha\mu\nu\beta} \delta g^{\beta\alpha} + R_{\alpha\mu\,\,\,\,\beta}^{\,\,\,\,\,\,\,\,\alpha} \delta g_{\nu}^{\,\,\,\,\beta} \\ & = R_{\alpha\mu\nu\beta} \delta g^{\alpha\beta} + R_{\mu\beta} \delta g_{\nu}^{\,\,\,\,\beta} \\ & = R_{\alpha\mu\nu\beta} \delta g^{\alpha\beta} + \Lambda \delta g_{\mu\nu}, \end{align*} we finally obtain \[ \Box \delta g_{\mu\nu} - 2R_{\alpha\mu\nu\beta} \delta g^{\alpha\beta} = 0. \] \section{ADM mass} \label{sec6.4} If we write the metric in the Gauss Lemma form, \[ g = - dt^2 + h_{ij}(t,x) dx^i dx^j \] then from the exercises in Chapter~\ref{chapter5} we have \[ R = \bar{R} + 2 \frac{\partial}{\partial t} \left(K^i_{\,\,\,\,i}\right) + \left(K^i_{\,\,\,\,i}\right)^2 + K_{ij} K^{ij}. 
\] Setting \[ X = K^i_{\,\,\,\,i} \frac{\partial}{\partial t}\, , \] we have \[ \operatorname{div} X = \nabla_0 X^0 + \nabla_i X^i = \partial_0 X^0 + \Gamma_{i0}^i X^0 = \partial_0 X^0 + K^i_{\,\,\,\,i} X^0, \] that is, \[ \frac{\partial}{\partial t} \left(K^i_{\,\,\,\,i}\right) = \operatorname{div} X - \left(K^i_{\,\,\,\,i}\right)^2. \] We conclude that \[ R = \bar{R} - \left(K^i_{\,\,\,\,i}\right)^2 + K_{ij} K^{ij} + 2 \operatorname{div} X, \] and so the Einstein-Hilbert action corresponds to the Lagrangian density \[ \mathcal{L} = \sqrt{\det(h_{ij})} \left(\bar{R} - \left(K^i_{\,\,\,\,i}\right)^2 + K_{ij} K^{ij}\right). \] Therefore the Hamiltonian density is \begin{align*} \mathcal{H} & = - \frac{\partial \mathcal{L}}{\partial(\partial_0 h_{ij})} \partial_0 h_{ij} + \mathcal{L} = - \frac{\partial \mathcal{L}}{\partial(K_{ij})} K_{ij} + \mathcal{L} \\ & = - \sqrt{\det(h_{ij})} \left( 2 K^{ij} - 2 K^l_{\,\,\,\,l} h^{ij} \right) K_{ij} + \mathcal{L} \\ & =\sqrt{\det(h_{ij})} \left( \bar{R} + \left(K^i_{\,\,\,\,i}\right)^2 - K_{ij} K^{ij}\right) = 2\sqrt{\det(h_{ij})} \, G_{00} = 0 \end{align*} for any solution of the vacuum Einstein field equations. That is, the total energy associated to the fields $h_{ij}$ is simply zero. \begin{figure}[h!] 
\begin{center} \psfrag{n}{$n$} \psfrag{M}{$M$} \psfrag{dM}{$\partial M$} \psfrag{d/dt}{$\frac{\partial}{\partial t}$} \psfrag{S}{$\Sigma$} \epsfxsize=0.4\textwidth \leavevmode \epsfbox{boundary.eps} \end{center} \caption{Computing the boundary terms for the Einstein-Hilbert action.} \label{boundary} \end{figure} To try to obtain a nonzero quantity we reconsider the boundary terms that were discarded when varying the Einstein-Hilbert action: \[ \int_M \nabla^\mu \left( -\nabla_\mu \delta g_{\nu}^{\,\,\,\,\nu} + \nabla^\nu \delta g_{\mu\nu}\right) \epsilon = \int_{\partial M} \left( -\nabla_\mu \delta g_{\nu}^{\,\,\,\,\nu} + \nabla^\nu \delta g_{\mu\nu}\right) n^\mu \omega, \] where we take $M$ to be a manifold with a timelike boundary $\partial M$ at infinity, tangent to $\frac{\partial}{\partial t}$, with unit normal $n$ and volume element $\omega$ (Figure~\ref{boundary}). Note that it is not necessary to consider the flux of the vector field $X$ which we discarded above as it is orthogonal to $n$. The boundary integral can be written as \[ \int_\mathbb{R} \int_{\Sigma} \left( - \partial_i (h^{jk}\delta h_{jk}) + h^{jk}\bar\nabla_k \delta h_{ij}\right) n^i \sigma dt, \] where we take $\partial M$ to be the flow by $\frac{\partial}{\partial t}$ of a spacelike surface $\Sigma$ and $\omega = dt \wedge \sigma$. Suppose that $h$ approaches the Euclidean metric at infinity, so that in an appropriate coordinate system $(x^1,x^2,x^3)$ we have \[ h_{ij} = \delta_{ij} + O\left(r^{-p}\right), \quad \delta h_{ij} = O\left(r^{-p}\right), \quad \partial_k h_{ij}, \partial_k \delta h_{ij}, \Gamma^k_{ij} = O\left(r^{-p-1}\right) \] as $r \to +\infty$, with $r^2 = {(x^1)}^2 + {(x^2)}^2 + {(x^3)}^2$ and $p>\frac12$. 
Then the boundary integral can be written as \begin{align*} & \lim_{r \to \infty} \int_\mathbb{R} \int_{S_r} \left( - \partial_i \delta h_{jj} + \partial_j \delta h_{ij} \right) \frac{x^i}{r} dt \\ & = \delta \lim_{r \to \infty} \int_\mathbb{R} \int_{S_r} \left( - \partial_i h_{jj} + \partial_j h_{ij} \right) \frac{x^i}{r} dt = \delta I, \end{align*} where $S_r$ is a coordinate sphere of radius $r$. Note that $\delta$ can be brought outside the integral because the metric-dependent quantities $\bar\nabla$, $n$ and $\sigma$ have been replaced by their flat counterparts, up to terms which vanish in the limit $r \to \infty$. If $S$ is the Einstein-Hilbert action, we then have \[ \delta S = - \int_M G_{\mu\nu} \delta g^{\mu\nu} \epsilon + \delta I, \] and so the Einstein equations hold if and only if \[ \delta(S-I) = 0. \] One can think of $I$ as the integral of a singular Lagrangian density $\mathcal{I}$. The Einstein-Hilbert Lagrangian density $\mathcal{L}$ should therefore be replaced by $\mathcal{L} - \mathcal{I}$, and so, since $\mathcal{I}$ does not depend on $K_{ij}$, the corresponding Hamiltonian density $\mathcal{H}$ should be replaced by $\mathcal{H} - \mathcal{I} = - \mathcal{I}$. The Hamiltonian should then be \[ H = - \lim_{r \to \infty} \int_{S_r} \left( \partial_j h_{ij} - \partial_i h_{jj} \right) \frac{x^i}{r}. \] This Hamiltonian suggests a definition of the total energy of the gravitational field at a given time slice, provided that $h$ approaches the Euclidean metric at infinity.
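As a sanity check, this surface integral can be evaluated numerically for the time-symmetric Schwarzschild slice, written in Cartesian-type coordinates as $h_{ij} = \delta_{ij} + c(r)\,x^ix^j$ with $c(r) = \frac{2M/r^3}{1-2M/r}$ (this form appears in the exercises at the end of the chapter); the result should approach $M$ as the integration sphere grows. A minimal Python sketch, using central finite differences for $\partial_k h_{ij}$ and setting $M=1$:

```python
# Numerical check of the surface integral (1/16 pi) \oint (d_j h_ij - d_i h_jj) x^i/r
# on the time-symmetric Schwarzschild slice h_ij = delta_ij + c(r) x^i x^j,
# with c(r) = (2M/r^3)/(1 - 2M/r); the limit should be M.
import math

M = 1.0

def h(p):
    r = math.sqrt(p[0]**2 + p[1]**2 + p[2]**2)
    c = (2 * M / r**3) / (1 - 2 * M / r)
    return [[(1.0 if i == j else 0.0) + c * p[i] * p[j]
             for j in range(3)] for i in range(3)]

def dh(p, k, eps=1e-2):
    """Central-difference approximation of d_k h_ij at the point p."""
    q1, q2 = list(p), list(p)
    q1[k] += eps
    q2[k] -= eps
    h1, h2 = h(q1), h(q2)
    return [[(h1[i][j] - h2[i][j]) / (2 * eps) for j in range(3)]
            for i in range(3)]

def adm_mass(r, n_theta=40, n_phi=80):
    total = 0.0
    dth = math.pi / n_theta
    dph = 2 * math.pi / n_phi
    for a in range(n_theta):
        th = (a + 0.5) * dth
        for b in range(n_phi):
            ph = (b + 0.5) * dph
            p = (r * math.sin(th) * math.cos(ph),
                 r * math.sin(th) * math.sin(ph),
                 r * math.cos(th))
            d = [dh(p, k) for k in range(3)]      # d[k][i][j] = d_k h_ij
            integrand = sum((d[j][i][j] - d[i][j][j]) * p[i] / r
                            for i in range(3) for j in range(3))
            total += integrand * r * r * math.sin(th) * dth * dph
    return total / (16 * math.pi)
```

For this slice the integral at finite radius works out to $Mr/(r-2M)$, so with $M=1$ one finds approximately $1.002$ at $r=1000$ and $1.0003$ at $r=8000$, converging to $M$ as claimed.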
\begin{Def} A $3$-dimensional Riemannian manifold $(S,h)$ is said to be {\bf asymptotically flat} if there exist: \begin{enumerate}[(i)] \item A compact set $K \subset S$ such that $S \setminus K$ is diffeomorphic to $\mathbb{R}^3 \setminus \overline{B_1}(0)$; \item A chart $(x^1,x^2,x^3)$ on $S \setminus K$ (called a {\bf chart at infinity}) such that \[ |h_{ij} - \delta_{ij}| + r |\partial_k h_{ij}| + r^2 |\partial_k \partial_l h_{ij}|= O(r^{-p}) \text{ and } \bar{R}=O(r^{-q}) \] for some $p > \frac12$ and $q>3$, where $r^2 = {(x^1)}^2 + {(x^2)}^2 + {(x^3)}^2$ and $\bar{R}$ is the scalar curvature of $h$. \end{enumerate} \end{Def} \begin{figure}[h!] \begin{center} \psfrag{S}{$S$} \psfrag{K}{$K$} \epsfxsize=1.0\textwidth \leavevmode \epsfbox{asymp_flat.eps} \end{center} \caption{Asymptotically flat manifold.} \label{asymp_flat} \end{figure} \begin{Def} ({\bf Arnowitt-Deser-Misner \cite{ADM61}}) The {\bf ADM mass} of an asymptotically flat Riemannian manifold $(S,h)$ is \[ M = \lim_{r \to + \infty} \frac{1}{16 \pi} \int_{S_r} \left( \partial_j h_{ij} - \partial_i h_{jj} \right) \frac{x^i}r, \] where $S_r$ is a sphere of radius $r$ in the chart at infinity $(x^1,x^2,x^3)$. \end{Def} Note that in this definition the Hamiltonian has been multiplied by the factor $-\frac1{16\pi}$ that is used when coupling to matter fields. \begin{Thm} ({\bf Ashtekar \cite{AA79}}) If $(S,h)$ is asymptotically flat and the maximal Cauchy development of $(S,h,K)$ is stationary then the Komar mass of the maximal Cauchy development coincides with the ADM mass of $(S,h)$. \end{Thm} \begin{Thm} ({\bf Bartnik \cite{B86}}) The ADM mass is well defined, that is, it does not depend on the choice of the chart at infinity. \end{Thm} \section{Positive mass theorem} \label{sec6.5} Since the gravitational field is attractive, its energy is presumably negative, at least for bound states. On the other hand, we expect gravitational waves to carry positive energy. 
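The proof given below for graphs rests on the fact that the scalar curvature of a graph can be written as a divergence (this formula is also derived in the exercises at the end of the chapter). The identity can be verified symbolically for a sample function $f$: the sympy sketch below computes $\bar R$ of $h_{ij} = \delta_{ij} + \partial_i f\,\partial_j f$ directly from the Christoffel symbols and compares it with the divergence expression. The particular polynomial $f$ is an arbitrary choice for the test.

```python
# Symbolic verification, for a sample f, that for h_ij = delta_ij + f_i f_j
#   bar R = d_i [ (f_i f_jj - f_ij f_j) / (1 + |grad f|^2) ]   (sum over i, j).
import sympy as sp

xs = sp.symbols('x1 x2 x3')
f = xs[0] * xs[1] + xs[2]**2 + xs[0] * xs[2]    # sample graph function
df = [sp.diff(f, v) for v in xs]

grad = sp.Matrix(df)
h = sp.eye(3) + grad * grad.T                   # induced metric on the graph
hinv = h.inv()

# Christoffel symbols Gamma^k_ij of h
Gamma = [[[sum(hinv[k, l] * (sp.diff(h[l, i], xs[j]) + sp.diff(h[l, j], xs[i])
                             - sp.diff(h[i, j], xs[l])) for l in range(3)) / 2
           for j in range(3)] for i in range(3)] for k in range(3)]

def ricci(i, j):
    # R_ij = d_k Gamma^k_ij - d_i Gamma^k_kj
    #        + Gamma^k_kl Gamma^l_ij - Gamma^k_il Gamma^l_kj
    val = 0
    for k in range(3):
        val += sp.diff(Gamma[k][i][j], xs[k]) - sp.diff(Gamma[k][k][j], xs[i])
        for l in range(3):
            val += Gamma[k][k][l] * Gamma[l][i][j] - Gamma[k][i][l] * Gamma[l][k][j]
    return val

R_bar = sum(hinv[i, j] * ricci(i, j) for i in range(3) for j in range(3))

w = 1 + sum(d**2 for d in df)                   # 1 + |grad f|^2
div = sum(sp.diff((df[i] * sp.diff(f, xs[j], xs[j])
                   - sp.diff(f, xs[i], xs[j]) * df[j]) / w, xs[i])
          for i in range(3) for j in range(3))

identity_holds = sp.cancel(sp.together(R_bar - div)) == 0
```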
It is therefore an important question to decide whether there is a lower bound for the ADM mass (the nonexistence of which would signal an instability). In the simplest case of time-symmetric initial data ($K=0$) the restriction equations reduce to \[ \bar{R} = 16 \pi \rho, \] and we expect $\rho \geq 0 \Leftrightarrow \bar{R} \geq 0$ for reasonable matter fields. In this case, we have the following famous result. \begin{Thm} ({\bf Schoen and Yau \cite{SY81}}) Let $(S,h)$ be a complete asymptotically flat Riemannian $3$-manifold with scalar curvature $\bar{R} \geq 0$. Then: \begin{enumerate}[(i)] \item Its ADM mass is nonnegative, $M \geq 0$. \item If $M=0$ then $S=\mathbb{R}^3$ and $h$ is the Euclidean metric. \end{enumerate} \end{Thm} \begin{proof} Following \cite{L10}, we give a proof of $(i)$ only for asymptotically flat Riemannian $3$-manifolds $S$ that are graphs of smooth functions $f:\mathbb{R}^3 \to \mathbb{R}$ with the metric $h$ induced by the Euclidean metric of $\mathbb{R}^4$. Using the Cartesian coordinates $(x^1,x^2,x^3)$ of $\mathbb{R}^3$ as global coordinates on the graph we have \[ h_{ij} = \delta_{ij} + \partial_i f \partial_j f. \] From that expression one can show that the scalar curvature of the graph is the divergence of a vector field on $\mathbb{R}^3$: \[ \bar{R} = \partial_i \left( \frac{1}{1 + |\operatorname{grad} f|^2} (\partial_i f \partial_j\partial_j f - \partial_i \partial_j f \partial_j f)\right). \] On the other hand, \begin{align*} M & = \lim_{r \to + \infty} \frac{1}{16 \pi} \int_{S_r} \left( \partial_j h_{ij} - \partial_i h_{jj} \right) \frac{x^i}r \\ & = \lim_{r \to + \infty} \frac{1}{16 \pi} \int_{S_r} \left( \partial_i \partial_j f \partial_j f + \partial_i f \partial_j \partial_j f - 2 \partial_i \partial_j f \partial_j f \right) \frac{x^i}r \\ & = \lim_{r \to + \infty} \frac{1}{16 \pi} \int_{S_r} \left( \partial_i f \partial_j \partial_j f - \partial_i \partial_j f \partial_j f \right) \frac{x^i}r .
\end{align*} Since $(S,h)$ is asymptotically flat, the derivatives of $f$ approach zero with certain decays as $r \to + \infty$. One can then easily show that \begin{align*} M & = \lim_{r \to + \infty} \frac{1}{16 \pi} \int_{S_r} \left( \frac{1}{1 + |\operatorname{grad} f|^2} (\partial_i f \partial_j \partial_j f - \partial_i \partial_j f \partial_j f) \right) \frac{x^i}r \\ & = \frac{1}{16 \pi} \int_{\mathbb{R}^3} \partial_i \left( \frac{1}{1 + |\operatorname{grad} f|^2} (\partial_i f \partial_j \partial_j f - \partial_i \partial_j f \partial_j f) \right) \\ & = \frac{1}{16 \pi} \int_{\mathbb{R}^3} \bar{R} \geq 0. \end{align*} We do not prove the rigidity statement $(ii)$. It is interesting to note that this statement, together with the formula above for the ADM mass, implies that any graph with zero scalar curvature is flat. \end{proof} The volume element of the graph is \[ \epsilon = \sqrt{1 + |\operatorname{grad} f|^2} \, dx^1 \wedge dx^2 \wedge dx^3, \] since the eigenvalues of the matrix $(h_{ij})$ are $1 + |\operatorname{grad} f|^2$ for the eigenvector $\operatorname{grad} f$ and $1$ for the eigenvectors orthogonal to $\operatorname{grad} f$. Consequently the ADM mass is \[ M = \frac{1}{16 \pi} \int_{S} \frac{\bar{R}}{\sqrt{1 + |\operatorname{grad} f|^2}} \, \epsilon = \int_{S} \frac{\rho}{\sqrt{1 + |\operatorname{grad} f|^2}} \, \epsilon < \int_{S} \rho \, \epsilon. \] The difference \[ M - \int_{S} \rho \, \epsilon < 0 \] can be thought of as the (negative) gravitational binding energy. \section{Penrose inequality} \label{sec6.6} The positive mass theorem admits a refinement in the case when black holes are present, known as the Penrose inequality. The idea is that black hole horizons correspond to minimal surfaces $\Sigma$ on the Riemannian manifold $(S,h)$, each contributing with a mass $M$ at least as big as the mass of a Schwarzschild black hole with the same event horizon area $A$: \[ M \geq \sqrt{\frac{A}{16\pi}}. 
\] To understand the motivation for this inequality, recall that in the proof of the Penrose singularity theorem we defined the outward null expansion of a $2$-surface $\Sigma$ on a Cauchy hypersurface $S$ as \[ \theta = \frac12 \gamma^{AB} \frac{\partial \gamma_{AB}}{\partial r} = \frac12 \operatorname{tr} \left(\mathcal{L}_{\frac{\partial}{\partial r}} g\right)_{|_{T\Sigma}}. \] If $N$ is the future-pointing unit normal to $S$ and $n$ is the outward unit normal to $\Sigma$ on $S$ we then have \[ \theta = \frac12 \operatorname{tr} \left(\mathcal{L}_{N + n} g\right)_{|_{T\Sigma}} = \operatorname{tr} K_{|_{T\Sigma}} + \frac12 \operatorname{tr} \left(\mathcal{L}_{n} g\right)_{|_{T\Sigma}}, \] where we used the fact that $\frac{\partial}{\partial r} = N + n$ on $\Sigma$, and also that if $X,Y$ are tangent to $\Sigma$ then \[ \left(\mathcal{L}_Z g\right) (X,Y) = \left\langle X, \nabla_Y Z \right\rangle + \left\langle Y, \nabla_X Z \right\rangle \] depends only on $Z$ along $\Sigma$. For time-symmetric initial data ($K=0$) this becomes \[ \theta = \frac12 \operatorname{tr} \left(\mathcal{L}_{n} g\right)_{|_{T\Sigma}} = \frac12 \operatorname{tr} \left(\mathcal{L}_{n} h\right)_{|_{T\Sigma}} = \operatorname{tr} \kappa, \] where $h$ is the metric induced by $g$ on $S$ and $\kappa$ is the second fundamental form of $\Sigma$ on $S$. We conclude that $\Sigma$ is {\bf marginally trapped}, that is, has zero outward null expansion, if and only if $\operatorname{tr} \kappa = 0$, which is precisely the condition for $\Sigma$ to be a minimal surface. Now a marginally trapped surface anticipates the formation of trapped surfaces, which will lead to geodesic incompleteness of the resulting spacetime, presumably due to singularities. If one believes the {\bf weak cosmic censorship conjecture}, these singularities should only occur inside black holes, and so there should be a black hole horizon enveloping any marginally trapped surface.
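The borderline case of the inequality can be probed numerically. The time-symmetric Schwarzschild slice is (up to isometry) the graph of $f(r) = \sqrt{8M(r-2M)}$ in $\mathbb{R}^4$, a Flamm paraboloid (this standard embedding formula is assumed here, not derived). Its minimal surface $r=2M$ has area $A = 16\pi M^2$, so equality $M = \sqrt{A/16\pi}$ should hold. The sketch below evaluates the graph formula for the ADM mass by finite differences, taking $M=1$:

```python
# Numerical probe of the borderline Penrose case on the Flamm paraboloid,
# the graph of f(r) = sqrt(8M(r - 2M)): the graph formula for the ADM mass
# should give M = sqrt(A / (16 pi)), with A = 16 pi M^2 the horizon area.
import math

M = 1.0

def f(p):
    r = math.sqrt(p[0]**2 + p[1]**2 + p[2]**2)
    return math.sqrt(8 * M * (r - 2 * M))

def d1(p, i, eps=1e-2):
    q1, q2 = list(p), list(p)
    q1[i] += eps
    q2[i] -= eps
    return (f(q1) - f(q2)) / (2 * eps)

def d2(p, i, j, eps=1e-2):
    q1, q2 = list(p), list(p)
    q1[i] += eps
    q2[i] -= eps
    return (d1(q1, j) - d1(q2, j)) / (2 * eps)

def adm_mass_graph(r, n_theta=40, n_phi=80):
    # (1/16 pi) \oint_{S_r} (d_i f d_j d_j f - d_i d_j f d_j f) x^i / r
    total = 0.0
    dth = math.pi / n_theta
    dph = 2 * math.pi / n_phi
    for a in range(n_theta):
        th = (a + 0.5) * dth
        for b in range(n_phi):
            ph = (b + 0.5) * dph
            p = (r * math.sin(th) * math.cos(ph),
                 r * math.sin(th) * math.sin(ph),
                 r * math.cos(th))
            g1 = [d1(p, i) for i in range(3)]
            lap = sum(d2(p, i, i) for i in range(3))
            integrand = sum((g1[i] * lap
                             - sum(d2(p, i, j) * g1[j] for j in range(3)))
                            * p[i] / r for i in range(3))
            total += integrand * r * r * math.sin(th) * dth * dph
    return total / (16 * math.pi)

A_horizon = 4 * math.pi * (2 * M)**2        # area of the minimal surface r = 2M
```

At large radii the integral approaches $1 = \sqrt{A/16\pi}$, consistent with the equality case (and with the rigidity statement, which singles out the Flamm paraboloid).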
\begin{Thm} ({\bf Huisken and Ilmanen \cite{HI01}, Bray \cite{B01}}) Let $(S,h)$ be a complete asymptotically flat Riemannian manifold with scalar curvature $\bar{R} \geq 0$. Then: \begin{enumerate}[(i)] \item Its ADM mass satisfies $M \geq \sqrt{\frac{A}{16\pi}}$, where $A$ is the sum of the areas of the outer minimal surfaces. \item If $M=\sqrt{\frac{A}{16\pi}}$ then the restriction of $(S,h,0)$ to the exterior of the outer minimal surfaces coincides with the initial data for the Schwarzschild solution of mass $M$ outside the event horizon. \end{enumerate} \end{Thm} \begin{proof} Following \cite{L10}, we give a proof of $(i)$ only for asymptotically flat Riemannian $3$-manifolds $S$ that are graphs of smooth functions $f:U\subset \mathbb{R}^3 \to \mathbb{R}$, with the metric $h$ induced by the Euclidean metric of $\mathbb{R}^4$. Here \[ U = \mathbb{R}^3 \setminus \bigcup_{a=1}^N K_a, \] where $K_1, \ldots, K_N$ are disjoint convex compact sets with smooth boundaries $\partial K_1, \ldots, \partial K_N$ to which $f$ extends as a constant and where $|\operatorname{grad} f|\to+\infty$ (so that they are minimal surfaces of the graph). Applying the divergence theorem as in the proof of the positive mass theorem for graphs we now obtain \[ M = \frac{1}{16 \pi} \int_{U} \bar{R} + \frac{1}{16 \pi} \sum_{a=1}^N \int_{\partial K_a} \left( \frac{1}{1 + |\operatorname{grad} f|^2} (\partial_i f \partial_j \partial_j f - \partial_i \partial_j f \partial_j f) \right) n^i, \] where $n$ is the outward unit normal to $\partial K_a$. Now, as is well known, the Laplacian $\Delta f$ of $f$ in $U$ is related to the Laplacian $\bar{\Delta} f$ of $f$ in $\partial K_a$ by \[ \Delta f = \bar{\Delta} f + Hf(n,n) + (n \cdot f) \operatorname{tr} \kappa, \] where $Hf$ is the Hessian of $f$ and $\kappa$ is the second fundamental form of $\partial K_a$ in $\mathbb{R}^3$.
Since $f$ is constant on $\partial K_a$ we have $\bar{\Delta} f = 0$ and so \begin{align*} & (\partial_i f \partial_j \partial_j f - \partial_i \partial_j f \partial_j f) n^i = (n \cdot f) \Delta f - Hf(n, \operatorname{grad} f) \\ & = (n \cdot f) Hf(n,n) + (n \cdot f)^2 \operatorname{tr} \kappa - \langle \operatorname{grad} f, n \rangle Hf(n,n) \\ & = \langle \operatorname{grad} f, n \rangle^2 \operatorname{tr} \kappa = |\operatorname{grad} f|^2 \operatorname{tr} \kappa, \end{align*} where we used $\operatorname{grad} f = \langle \operatorname{grad} f, n \rangle n$. Therefore \begin{align*} M & = \frac{1}{16 \pi} \int_{U} \bar{R} + \frac{1}{16 \pi} \sum_{a=1}^N \int_{\partial K_a} \left( \frac{|\operatorname{grad} f|^2}{1 + |\operatorname{grad} f|^2} \right) \operatorname{tr} \kappa \\ & = \frac{1}{16 \pi} \int_{U} \bar{R} + \frac{1}{16 \pi} \sum_{a=1}^N \int_{\partial K_a} \operatorname{tr} \kappa. \end{align*} Now {\bf Minkowski's inequality} for the smooth boundaries of compact convex sets states that \[ \int_{\partial K_a} \operatorname{tr} \kappa \geq \sqrt{16 \pi A_a}, \] where $A_a$ is the area of $\partial K_a$. Since $\bar{R} \geq 0$ we conclude that \begin{align*} M & \geq \sum_{a=1}^N \sqrt{\frac{A_a}{16 \pi}} \geq \sqrt{\frac{1}{16 \pi} \sum_{a=1}^N A_a}, \end{align*} where we used $\sqrt{a+b} \leq \sqrt{a} + \sqrt{b}$ for $a,b>0$. We do not prove the rigidity statement $(ii)$. It is interesting to note that this statement, together with the formula above for the ADM mass, implies that any graph of the type considered above with zero scalar curvature is a Flamm paraboloid. \end{proof} \begin{figure}[h!] 
\begin{center} \psfrag{S}{$S$} \psfrag{A1}{$A_1$} \psfrag{A2}{$A_2$} \epsfxsize=1.0\textwidth \leavevmode \epsfbox{Penrose.eps} \end{center} \caption{Penrose inequality; the sum of the areas of the outer minimal surfaces is $A=A_1 + A_2$.} \label{Penrose_ineq} \end{figure} \section{Exercises} \label{sec6.7} \begin{enumerate} \item Let $g$ be a static spherically symmetric Lorentzian metric on $\mathbb{R}^4$ whose matter fields have spatially compact support and satisfy the dominant energy condition. There exist smooth functions $\phi=\phi(r)$ and $m=m(r)$ such that \[ g = - e^{2\phi(r)} dt^2 + \frac{dr^2}{1-\frac{2m(r)}{r}} + r^2 \left( d\theta^2 + \sin^2\theta d\varphi^2 \right). \] Show that: \begin{enumerate} \item The Einstein equations imply \begin{align*} & \frac{dm}{dr}=4\pi r^2\rho; \\ & \frac{d\phi}{dr}=\frac{m+4\pi r^3p}{r(r-2m)}, \end{align*} where $\rho$ and $p$ are the energy density and the radial pressure as measured by the static observers. \item There exist constants $M\geq 0$ and $\Phi \in \mathbb{R}$ such that \[ \lim_{r \to +\infty} m(r)=M \quad \text{ and } \quad \lim_{r \to +\infty} \phi(r)=\Phi. \] \item If we choose the coordinate $t$ such that $\Phi=0$ then $M$ is the Komar mass of $g$ with respect to the timelike Killing vector field $\frac{\partial}{\partial t}$. \item The constant $M$ satisfies $M \leq E$, where \[ E = \int_{\{t=0\}} \rho, \] with equality exactly when $g$ is the Minkowski metric. \end{enumerate} \item Starting with the appropriate Lagrangian density, show that the Klein-Gordon equation can be written as \[ \hspace{2cm} \frac{1}{\sqrt{|\det(g_{\mu\nu})|}} \partial_\alpha \left( \sqrt{|\det(g_{\mu\nu})|} \, \partial^\alpha \phi \right) - m^2 \phi = 0. 
\] \item Starting with the Einstein-Hilbert-Klein-Gordon action \[ \hspace{2cm} S = \int_M \left[ \frac12\left(g^{\mu\nu}\partial_\mu \phi \, \partial_\nu \phi + m^2 \phi^2\right) - \frac1{16\pi} R \right] \epsilon \] obtain the energy-momentum tensor for $\phi$: \[ \hspace{2cm} T_{\mu\nu} = \partial_\mu\phi\,\partial_\nu\phi - \frac12 g_{\mu\nu} \left(\partial_\alpha\phi\,\partial^\alpha\phi+m^2 \phi^2\right). \] Note that this agrees with what one would expect from the canonical energy-momentum tensor. \item Show that the Einstein-Hilbert action is equivalent to the (first order, non-geometric) {\bf Einstein action} \[ \hspace{2cm} S = \int_{M} g^{\alpha\beta} \left( \Gamma_{\alpha\gamma}^\delta \Gamma_{\beta\delta}^\gamma - \Gamma_{\alpha\beta}^\gamma \Gamma_{\gamma\delta}^\delta \right) \sqrt{|\det(g_{\mu\nu})|} \, dx^0 dx^1 dx^2 dx^3, \] by showing that they differ by a term of the form $\partial_\alpha Q^\alpha$, where \[ \hspace{2cm} Q^\alpha = \left( g^{\beta \gamma} \Gamma^{\alpha}_{\beta\gamma} - g^{\alpha \beta} \Gamma^{\gamma}_{\beta\gamma} \right) \sqrt{|\det(g_{\mu\nu})|}. \] The following formulae will be useful: \[ \hspace{2cm} \partial_\alpha \sqrt{|\det(g_{\mu\nu})|} = - \frac12 g_{\beta\gamma} \partial_\alpha g^{\beta\gamma} \sqrt{|\det(g_{\mu\nu})|} \] and \[ \hspace{2cm} \nabla_\alpha g^{\beta\gamma} = 0 \Leftrightarrow \partial_\alpha g^{\beta\gamma} = - \Gamma^{\beta}_{\alpha\delta} g^{\delta\gamma} - \Gamma^{\gamma}_{\alpha\delta} g^{\beta\delta}. \] \item Let $f:\mathbb{R}^3 \to \mathbb{R}$ be a smooth function and consider the metric $h$ induced on its graph $S$ by the Euclidean metric in $\mathbb{R}^4$, \[ h_{ij} = \delta_{ij} + \partial_i f \partial_j f. \] Show that: \begin{enumerate} \item The inverse metric is \[ h^{ij} = \delta_{ij} - \frac{\partial_i f \partial_j f}{1 + |\operatorname{grad} f|^2}. \] \item The Christoffel symbols are \[ \bar{\Gamma}^k_{ij} = \frac{\partial_k f \partial_i\partial_j f}{1 + |\operatorname{grad} f|^2}. 
\] \item The Ricci tensor is \begin{align*} \hspace{2cm} \bar{R}_{ij} = & \frac{\partial_i\partial_j f \partial_k \partial_k f - \partial_i\partial_k f \partial_j \partial_k f }{1 + |\operatorname{grad} f|^2} \\ & + \frac{\partial_k f \partial_l f \partial_i\partial_k f \partial_j \partial_l f - \partial_k f \partial_l f \partial_i\partial_j f \partial_k \partial_l f}{\left(1 + |\operatorname{grad} f|^2\right)^2}. \end{align*} \item The scalar curvature is \begin{align*} \hspace{2cm} \bar{R} = & \frac{\partial_i\partial_i f \partial_j \partial_j f - \partial_i\partial_j f \partial_i \partial_j f }{1 + |\operatorname{grad} f|^2} \\ & - \frac{2 \partial_j f \partial_k f (\partial_i\partial_i f \partial_j \partial_k f - \partial_i\partial_j f \partial_i \partial_k f )}{\left(1 + |\operatorname{grad} f|^2\right)^2}. \end{align*} \item The scalar curvature can be written as \[ \hspace{2cm} \bar{R} = \partial_i \left( \frac{1}{1 + |\operatorname{grad} f|^2} (\partial_i f \partial_j\partial_j f - \partial_i \partial_j f \partial_j f)\right). \] \end{enumerate} \item Let $h$ be the spherically symmetric Riemannian metric defined in $\mathbb{R}^3$ by \[ h = \frac{dr^2}{1-\frac{2m(r)}{r}} + r^2 \left( d\theta^2 + \sin^2\theta d\varphi^2 \right), \] where $m$ is a smooth function whose derivative has compact support. \begin{enumerate} \item Check that in Cartesian coordinates we have \[ h_{ij} = \delta_{ij} + \frac{\frac{2m(r)}{r^3}}{1-\frac{2m(r)}{r}} x^i x^j. \] \item Show that if the limit \[ M=\lim_{r \to +\infty} m(r) \] exists then $h$ is asymptotically flat with ADM mass $M$ (which in particular coincides with the Komar mass when appropriate). \item Check that $h$ has scalar curvature \[ \bar{R} = \frac{4}{r^2}\frac{dm}{dr}, \] and use this to prove the Riemannian positive mass theorem for $h$. 
\item Show that $r=r_0$ is a minimal surface if and only if $m(r_0)=\frac{r_0}2$ (in which case $r$ is a well-defined coordinate only for $r>r_0$), and use this to prove the Riemannian Penrose inequality for $h$. \end{enumerate} \item Consider a Riemannian metric given in the Gauss Lemma form \[ g = dt^2 + h_{ij}(t,x) dx^i dx^j, \] so that the hypersurface $t=0$ is a Riemannian manifold with induced metric $h=h_{ij}dx^i dx^j$ and second fundamental form \[ K=\frac12\frac{\partial h_{ij}}{\partial t}dx^idx^j. \] Show that: \begin{enumerate} \item The Laplacian operators $\Delta$ and $\bar{\Delta}$ of $g$ and $h$ are related by \[ \Delta f = \bar{\Delta} f + Hf(\partial_0,\partial_0) + (\partial_0f) \operatorname{tr} K, \] where $Hf$ is the Hessian of $f$. \item The metric induced on the hypersurface $t=\lambda f(x)$ is \[ h(\lambda) = \left[h_{ij}(\lambda f(x),x) + \lambda^2 \partial_i f \partial_j f\right] dx^idx^j. \] \item The first variation of this metric is \[ \delta h \equiv \frac{d}{d\lambda}_{|_{\lambda=0}} h(\lambda)= 2fK_{ij}dx^i dx^j. \] \item The first variation of the volume element is \[ \delta \sigma \equiv \frac{d}{d\lambda}_{|_{\lambda=0}} \sigma(\lambda) = f \operatorname{tr} K \sigma. \] \item The second variation of the volume element is \begin{align*} \hspace{2cm} \delta^2 \sigma & \equiv \frac{d^2}{d\lambda^2}_{|_{\lambda=0}} \sigma(\lambda) \\ & = \left[ \frac12 f^2 \left(\bar{R}-R+\left(K^{i}_{\,\, i}\right)^2-K_{ij}K^{ij}\right) + |\operatorname{grad} f|^2 \right] \sigma, \end{align*} where $R$ and $\bar{R}$ are the scalar curvatures of $g$ and $h$. \item There is no metric on the $3$-torus with positive scalar curvature. (You will need to use the fact that any metric in the $3$-torus admits a minimizing $2$-torus; this type of idea is used in the proof of the rigidity statement in the positive mass theorem.) 
\end{enumerate} \item Consider the surfaces $\Sigma_t$ obtained from the boundary $\partial K \equiv \Sigma_0$ of a compact convex set $K \subset \mathbb{R}^3$ by flowing a distance $t$ along the unit normal. Let $A(t)$ and $V(t)$ be the area of $\Sigma_t$ and the volume bounded by $\Sigma_t$. \begin{enumerate} \item Show that \[ \dot{A}(0) = \int_{\partial K} \operatorname{tr} \kappa, \] where $\kappa$ is the second fundamental form of $\Sigma$. \item Prove that $\ddot{A}(t)=8\pi$, implying $A(t)=4\pi t^2 + \dot{A}(0)t + A(0)$. \item Conclude that $V(t)=\frac{4\pi}{3}t^3 + \frac{\dot{A}(0)}{2}t^2 + A(0)t + V(0)$. \item Use the isoperimetric inequality $A(t)^3 \geq 36 \pi V(t)^2$ to prove Minkowski's inequality: \[ \dot{A}(0) \geq \sqrt{16 \pi A(0)}. \] \end{enumerate} \end{enumerate} \chapter{Black holes} \label{chapter7} In this chapter we study black holes and the laws of black hole thermodynamics, following \cite{Townsend97} (see also \cite{BCH73, Poisson07}). An elementary discussion of quantum field theory in curved spacetime and the Hawking radiation can be found in \cite{Carroll03}. \section{The Kerr solution} \label{sec7.1} General rotating black holes are described by the {\bf Kerr metric}, given in the so-called {\bf Boyer-Lindquist coordinates} by \begin{align*} ds^2 = & - \left( 1 - \frac{2Mr}{\rho^2} \right) dt^2 - \frac{4Mar\sin^2\theta}{\rho^2} dt d\varphi + \frac{\rho^2}{\Delta} dr^2 \\ & + \rho^2 d \theta^2 + \left( r^2 + a^2 + \frac{2Ma^2r\sin^2\theta}{\rho^2} \right) \sin^2\theta d\varphi^2, \end{align*} where \begin{align*} & \rho^2 = r^2 + a^2 \cos^2 \theta, \\ & \Delta = r^2 - 2Mr + a^2, \end{align*} and $M,a \in \mathbb{R}$ are constants. Note that the Schwarzschild metric is a particular case, corresponding to $a=0$. It is possible to prove that the Kerr metric solves the vacuum Einstein field equations (see for instance \cite{ONeill95}).
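As a quick consistency check, the $a=0$ limit can be verified with a short sympy sketch, reading off the metric components from the line element above:

```python
# Check (sympy) that the Boyer-Lindquist components of the Kerr metric
# reduce to the Schwarzschild metric when a = 0.
import sympy as sp

M, a, r, theta = sp.symbols('M a r theta', positive=True)
rho2 = r**2 + a**2 * sp.cos(theta)**2
Delta = r**2 - 2 * M * r + a**2

g_tt = -(1 - 2 * M * r / rho2)
g_tphi = -2 * M * a * r * sp.sin(theta)**2 / rho2   # coefficient of dt dphi is 2*g_tphi
g_rr = rho2 / Delta
g_thth = rho2
g_phph = (r**2 + a**2 + 2 * M * a**2 * r * sp.sin(theta)**2 / rho2) * sp.sin(theta)**2

schwarzschild_limit = all(
    sp.simplify(expr.subs(a, 0) - target) == 0
    for expr, target in [
        (g_tt, -(1 - 2 * M / r)),
        (g_tphi, 0),
        (g_rr, 1 / (1 - 2 * M / r)),
        (g_thth, r**2),
        (g_phph, r**2 * sp.sin(theta)**2),
    ])
```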
The Kerr metric is not spherically symmetric, but admits a two-dimensional group of isometries, generated by the Killing vector fields $X = \frac{\partial}{\partial t}$ and $Y=\frac{\partial}{\partial \varphi}$. The Komar mass associated to $X$ is \[ M_{\text{Komar}} = - \frac1{8\pi} \int_{\Sigma} \star d X^\sharp, \] where $\Sigma$ is a $2$-surface of constant $(t,r)$, and can be computed to be \[ M_{\text{Komar}} = M. \] The expression for the Komar mass in terms of the energy-momentum tensor, given in Chapter~\ref{chapter6}, suggests the definition of the {\bf Komar angular momentum} as \[ J_{\text{Komar}} = \frac1{16\pi} \int_{\Sigma} \star d Y^\sharp \] (note the change in sign and absolute value of the constant, due to the fact that $Y$ is now spacelike and essentially orthogonal to the timelike unit normal $N$). The same exact argument as was done for the Komar mass shows that $J_{\text{Komar}}$ does not depend on the choice of $\Sigma$. Performing the calculation for the Kerr metric yields \[ J_{\text{Komar}} = Ma, \] and so the parameter $a$ can be interpreted as the angular momentum per unit mass. The Killing vector $X$ becomes null on the hypersurface given by the equation \[ r = M + \sqrt{M^2-a^2\cos^2\theta}, \] known as the {\bf ergosphere}. However, it is easy to show that the metric induced on this hypersurface is Lorentzian, and so it cannot be the black hole event horizon (since it can be crossed both ways by timelike curves). The event horizon corresponds to the hypersurface $r=r_+$, where \[ r_+ = M + \sqrt{M^2-a^2}. \] Indeed, the function $\Delta$ changes sign on this hypersurface, and so $\operatorname{grad} r$ becomes timelike (meaning that $r$ must decrease along causal curves). Note that the ergosphere encloses the event horizon, touching it only at the poles, as shown in Figure~\ref{ergosphere}. 
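Both hypersurfaces can be checked numerically. The sketch below (our own helper names, not part of the notes) verifies that $g_{tt}$, the squared norm of $X$, vanishes on the ergosphere, that $\Delta$ vanishes at $r=r_+$, and that the ergosphere encloses the horizon, touching it exactly at the poles:

```python
from math import cos, sqrt, pi

def r_horizon(M, a):
    """Radius r_+ of the event horizon."""
    return M + sqrt(M**2 - a**2)

def r_ergosphere(M, a, theta):
    """Radial position of the ergosphere at polar angle theta."""
    return M + sqrt(M**2 - a**2 * cos(theta)**2)

def g_tt(M, a, r, theta):
    """Squared norm of X = d/dt (the dt^2 coefficient of the Kerr metric)."""
    rho2 = r**2 + a**2 * cos(theta)**2
    return -(1 - 2*M*r / rho2)

M, a = 1.0, 0.7
rp = r_horizon(M, a)
assert abs(rp**2 - 2*M*rp + a**2) < 1e-12        # Delta(r_+) = 0
assert r_ergosphere(M, a, 0.0) == rp             # the surfaces touch at the poles
for theta in (0.0, pi/4, pi/2):
    rE = r_ergosphere(M, a, theta)
    assert abs(g_tt(M, a, rE, theta)) < 1e-12    # X becomes null on the ergosphere
    assert rE >= rp                              # the ergosphere encloses the horizon
```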
The region in between, where $X$ is spacelike, is called the {\bf ergoregion}, because matter fields satisfying the dominant energy condition can have negative energy there, and so extract energy from the black hole when absorbed (a mechanism known as the {\bf Penrose process} in the case of particles or {\bf superradiance} in the case of fields). Note also that the existence of an event horizon requires that $|a|\leq M$. If $|a|< M$ the black hole is said to be {\bf subextremal}, and if $|a|=M$ it is called {\bf extremal}. \begin{figure}[h!] \begin{center} \psfrag{event horizon}{event horizon} \psfrag{ergosphere}{ergosphere} \psfrag{ergoregion}{ergoregion} \psfrag{black hole}{black hole} \epsfxsize=0.8\textwidth \leavevmode \epsfbox{ergosphere.eps} \end{center} \caption{Spacelike cross-section of the Kerr solution.} \label{ergosphere} \end{figure} A time-oriented spacetime $(M,g)$ is said to be {\bf asymptotically flat} if it contains an open set $\mathcal{I}$ where the metric is well approximated (in a certain sense which we will not make precise) by the Minkowski metric in the region $\{x^2 + y^2 + z^2 > R^2\}$, for sufficiently large $R>0$. For such spacetimes we can define the {\bf black hole region} as $\mathcal{B}=M\setminus J^-(\mathcal{I})$, so that $\mathcal{B}$ consists of the events which cannot send signals to $\mathcal{I}$. If $\mathcal{B}\neq \varnothing$, the spacetime is called a {\bf black hole spacetime}, and $\mathscr{H^+}=\partial\mathcal{B}$ is called the {\bf event horizon}. Finally, an asymptotically flat spacetime is called {\bf stationary} if it admits a Killing vector field which is timelike on $\mathcal{I}$. The importance of the Kerr solution stems from the following result. 
\begin{Thm} ({\bf Israel \cite{Israel67}, Carter \cite{Carter71}, Hawking \cite{H72}, Robinson \cite{Robinson75}, Chru\'sciel and Costa \cite{CC08}}) The Kerr solution is the only real-analytic, stationary black hole spacetime satisfying the vacuum Einstein equations. \end{Thm} \section{Killing horizons and the zeroth law} \label{sec7.2} \begin{Def} A {\bf Killing horizon} is a null surface which is orthogonal to a nonvanishing Killing vector field. \end{Def} Note that the Killing vector field is therefore null on the corresponding Killing horizon, and tangent to it. Killing horizons are important due to the following result. \begin{Thm} ({\bf Hawking \cite{H72}}) The event horizon of a stationary black hole spacetime whose matter fields satisfy hyperbolic equations and the weak energy condition is a Killing horizon. \end{Thm} \begin{Prop} The integral curves of a nonvanishing normal to a null hypersurface (e.g.\ the Killing vector field orthogonal to a Killing horizon) are reparameterized null geodesics. \end{Prop} \begin{proof} Let $Z$ be nonvanishing and normal to a null hypersurface $S$, and let $p \in S$. For any tangent vector $v \in T_pS$ it is easy to construct a local vector field $V$ which is tangent to $S$, commutes with $Z$ and satisfies $V_p=v$. We have \[ \left\langle V, \nabla_Z Z \right\rangle = - \left\langle \nabla_Z V, Z \right\rangle = - \left\langle \nabla_V Z, Z \right\rangle = -\frac12 \, V \cdot \left\langle Z, Z \right\rangle = 0. \] Since $p$ and $v$ are arbitrary, we conclude that $\nabla_Z Z$ is orthogonal to $S$, i.e. \[ \nabla_Z Z = k Z, \] for some function $k:S \to \mathbb{R}$. \end{proof} \begin{Def} If $\mathscr{H}$ is a Killing horizon associated to the Killing vector field $Z$ then the function $k:\mathscr{H} \to \mathbb{R}$ such that \[ \nabla_Z Z = k Z, \] on $\mathscr{H}$ is called the {\bf surface gravity} of $\mathscr{H}$ relative to $Z$. 
\end{Def} \begin{Thm} ({\bf Zeroth law of black hole thermodynamics}) The surface gravity of a Killing horizon on a spacetime satisfying the dominant energy condition is a constant function. \end{Thm} \begin{proof} Let $Z$ be a Killing vector field associated to a Killing horizon $\mathscr{H}$. Since $Z$ is orthogonal to $\mathscr{H}$, we have from the Frobenius theorem that \[ Z^\sharp \wedge dZ^\sharp = 0 \] on $\mathscr{H}$. Because $Z^\sharp$ is nonvanishing, and can therefore be completed to a coframe, we necessarily have \[ dZ^\sharp = 2 Z^\sharp \wedge U^\sharp \Leftrightarrow \nabla_\alpha Z_\beta = Z_\alpha U_\beta - U_\alpha Z_\beta \] on $\mathscr{H}$, for some vector field $U$. If the vector fields $X$ and $Y$ are tangent to $\mathscr{H}$ then \begin{equation} \label{FrobeniusKilling} X^\alpha Y^\beta \nabla_\alpha Z_\beta = 0, \end{equation} and consequently \begin{equation} \label{FrobeniusKilling2} Y^\alpha Z^\beta \nabla_\alpha X_\beta = - Y^\alpha X^\beta \nabla_\alpha Z_\beta = 0 \end{equation} on $\mathscr{H}$. Taking the derivative of \eqref{FrobeniusKilling} along any vector field $V$ tangent to $\mathscr{H}$ yields \begin{align*} V^\mu X^\alpha Y^\beta \nabla_\mu \nabla_\alpha Z_\beta & = - V^\mu (\nabla_\mu X^\alpha) Y^\beta \nabla_\alpha Z_\beta - V^\mu X^\alpha (\nabla_\mu Y^\beta) \nabla_\alpha Z_\beta\\ & = - V^\mu ( Y^\beta \nabla_\mu X^\alpha + X^\alpha \nabla_\mu Y^\beta) (Z_\alpha U_\beta - U_\alpha Z_\beta) = 0, \end{align*} in view of \eqref{FrobeniusKilling2}. On the other hand, we saw in Chapter~\ref{chapter6} that any Killing vector field $Z$ satisfies \begin{equation} \label{RiemannKilling} \nabla_\mu \nabla_\alpha Z_\beta = - R_{\alpha\beta\mu}^{\,\,\,\,\,\,\,\,\,\,\,\,\nu} Z_\nu, \end{equation} whence \begin{equation} \label{RiemannzeroonH} R_{\alpha\beta\mu\nu} X^\alpha Y^\beta V^\mu Z^\nu = 0 \end{equation} for any three vector fields $X,Y,V$ tangent to $\mathscr{H}$. 
If $X$ and $Y$ are orthonormal then they can always be completed to a local frame $\{X,Y,Z,W\}$ such that $W$ is null, orthogonal to $X$ and $Y$, and normalized against $Z$, \[ Z_\mu W^\mu = -1. \] In this frame the metric is written \[ g_{\mu\nu} = X_\mu X_\nu + Y_\mu Y_\nu - Z_\mu W_\nu - W_\mu Z_\nu, \] and so we obtain, for any vector field $V$ tangent to $\mathscr{H}$, \begin{align} \label{Ralphabeta} R_{\alpha\beta} V^\alpha Z^\beta & = R_{\alpha\mu\beta\nu} V^\alpha Z^\beta g^{\mu\nu} = - R_{\alpha\mu\beta\nu} V^\alpha Z^\beta Z^\mu W^\nu \\ & = V^\alpha Z^\beta W^\nu \nabla_\alpha \nabla_\beta Z_\nu, \nonumber \end{align} where we used \eqref{RiemannzeroonH} and \eqref{RiemannKilling}. We have \begin{equation} \label{expressionfork} k = - \left\langle W, \nabla_Z Z \right\rangle = - W^\nu Z^\mu \nabla_\mu Z_\nu. \end{equation} Differentiating \eqref{expressionfork} along a vector field $V$ tangent to $\mathscr{H}$ yields \[ V \cdot k = - (V^\alpha \nabla_\alpha W^\nu) Z^\mu \nabla_\mu Z_\nu - W^\nu (V^\alpha \nabla_\alpha Z^\mu) \nabla_\mu Z_\nu - W^\nu Z^\mu V^\alpha \nabla_\alpha \nabla_\mu Z_\nu. \] The first term on the right-hand side of this equation is \[ - (V^\alpha \nabla_\alpha W^\nu) k Z_\nu = k W^\nu V^\alpha \nabla_\alpha Z_\nu = k W^\nu V^\alpha (Z_\alpha U_\nu - U_\alpha Z_\nu) = k V^\alpha U_\alpha, \] whereas the second term is \[ - W^\nu V^\alpha (Z_\alpha U^\mu - U_\alpha Z^\mu) \nabla_\mu Z_\nu = V^\alpha U_\alpha W^\nu Z^\mu \nabla_\mu Z_\nu = - k V^\alpha U_\alpha. \] Therefore these terms cancel out, and, using \eqref{Ralphabeta}, we obtain \[ V \cdot k = - R_{\alpha\beta} V^\alpha Z^\beta. \] From \eqref{Ralphabeta} it is clear that \[ R_{\alpha\beta} Z^\alpha Z^\beta = 0, \] implying that the vector field \[ I_\alpha = R_{\alpha\beta} Z^\beta \] is tangent to $\mathscr{H}$. 
By Einstein's equation, we have \[ I_\alpha = \left(\frac12 R g_{\alpha\beta} - \Lambda g_{\alpha\beta} + 8 \pi T_{\alpha\beta} \right) Z^\beta = \frac12 R Z_\alpha - \Lambda Z_\alpha + 8 \pi T_{\alpha\beta} Z^\beta, \] where $R$ is the scalar curvature, $\Lambda$ is the cosmological constant and $T_{\alpha\beta}$ is the energy-momentum tensor. Therefore the vector field \[ J_\alpha = T_{\alpha\beta} Z^\beta \] is also tangent to $\mathscr{H}$. Since $T_{\alpha\beta}$ satisfies the dominant energy condition, the vector field $J$ must be causal, and since it is tangent to $\mathscr{H}$ it can only be null and parallel to $Z$. We conclude that the vector field $I$ is also proportional to $Z$, and so \[ V \cdot k = - I_{\alpha} V^\alpha = 0, \] that is, $k$ is constant along $\mathscr{H}$. \end{proof} Let $\lambda$ be a local coordinate that is an affine parameter for the null geodesics along $\mathscr{H}$, that is, \[ \left \langle \frac{\partial}{\partial \lambda}, \frac{\partial}{\partial \lambda} \right \rangle = 0 \qquad \text{ and } \qquad \nabla_{\frac{\partial}{\partial \lambda}} \frac{\partial}{\partial \lambda} = 0 \] on $\mathscr{H}$. Then it is easy to see that \[ Z = k (\lambda-\lambda_0) \frac{\partial}{\partial \lambda} \] on $\mathscr{H}$, where $\lambda_0$ may depend on the null geodesic. In other words, $Z$ vanishes on some cross-section of $\mathscr{H}$. If $Z$ and $\frac{\partial}{\partial \lambda}$ are future-pointing then $Z$ vanishes to the past when $k$ is positive (that is, $\lambda>\lambda_0$), and to the future if $k$ is negative (that is, $\lambda<\lambda_0$). If $t$ is a local coordinate in a neighborhood of $\mathscr{H}$ such that \[ Z = \frac{\partial}{\partial t} \] then we have on $\mathscr{H}$ \[ Z = \frac{d\lambda}{dt} \frac{\partial}{\partial \lambda} = k (\lambda-\lambda_0) \frac{\partial}{\partial \lambda}, \] implying that \[ \lambda-\lambda_0 = C e^{kt}, \] where $C$ may depend on the null geodesic. 
By rescaling $\lambda$ conveniently we may assume that \[ Z = e^{kt} \frac{\partial}{\partial \lambda} \] on $\mathscr{H}$. Now consider a timelike congruence crossing $\mathscr{H}$, with tangent unit timelike vector field $U$ satisfying \[ [Z,U] = 0. \] This can be accomplished by taking a timelike hypersurface transverse to $Z$, ruled by timelike curves, and moving each curve by the flow of $Z$. The quantity \[ E = - \left \langle \frac{\partial}{\partial \lambda}, U \right \rangle = - \left \langle e^{-kt} Z, U \right \rangle \] represents the energy of a given null geodesic as measured by an observer of the congruence when crossing the Killing horizon, and is related to the frequency of the associated wave. Since $Z$ is a Killing field, we have \begin{align*} Z \cdot E & = - \mathcal{L}_Z \left \langle e^{-kt} Z, U \right \rangle = - \left \langle \mathcal{L}_Z (e^{-kt} Z), U \right \rangle - \left \langle e^{-kt} Z, \mathcal{L}_Z U \right \rangle \\ & = - \left \langle [Z, e^{-kt} Z], U \right \rangle - \left \langle e^{-kt} Z, [Z, U] \right \rangle = \left \langle k e^{-kt} Z, U \right \rangle = - k E, \end{align*} implying that \[ E = E_0 e^{-kt}. \] In other words, the energy of the null geodesic as measured by the observers of the congruence decreases exponentially if $k>0$ ({\bf redshift effect}), and increases exponentially if $k<0$ ({\bf blueshift effect}). \section{Smarr's formula and the first law} \label{sec7.3} Unlike the case of the Schwarzschild black hole, where the event horizon is the Killing horizon corresponding to $X=\frac{\partial}{\partial t}$, the event horizon of the Kerr black hole is a Killing horizon for the Killing vector field \[ Z = X + \Omega Y, \] where $Y=\frac{\partial}{\partial \varphi}$ and $\Omega \in \mathbb{R}$ is an appropriate constant. \begin{Def} $\Omega$ is called the {\bf angular velocity of the event horizon}. 
\end{Def} To find $\Omega$, we write the quadratic equation in $\omega$ for the vector $X + \omega Y$ to be null at some point with $r>r_+$ (so that we can use Boyer-Lindquist coordinates): \[ - \left( 1 - \frac{2Mr}{\rho^2} \right) - \frac{4Mar\sin^2\theta}{\rho^2} \, \omega + \left( r^2 + a^2 + \frac{2Ma^2r\sin^2\theta}{\rho^2} \right) \sin^2\theta \, \omega^2 = 0. \] The discriminant of this equation has the simple form $\Delta \sin^2 \theta$, and consequently vanishes when we take $r=r_+$, in which case the quadratic equation has the single solution \[ \Omega = \frac{a}{r_+^2 + a^2} = \frac{a}{2Mr_+}. \] This solution is the limit as $r \to r_+$ of the values of $\omega=\omega(r,\theta)$ such that $X + \omega Y$ is null, and so it must coincide with the angular velocity of the event horizon. From the expressions of the Komar mass and angular momentum, it is clear that \[ M - 2 \Omega J = -\frac1{8\pi} \int_{\Sigma} \star d Z^\sharp \] for any compact orientable $2$-surface $\Sigma$ enclosing the event horizon $\mathscr{H}$. Let us consider the case when $\Sigma$ is a spacelike cross-section of $\mathscr{H}$ (Figure~\ref{Smarr}). We can uniquely define a future-pointing unit timelike vector field $N$ and a unit spacelike vector field $n$, both orthogonal to $\Sigma$, such that $Z = N+n$. Because $Z$ is a Killing vector field, $\nabla_\mu Z_\nu$ is a $2$-form; more precisely, \[ (d Z^\sharp)_{\mu\nu} = \nabla_\mu Z_\nu - \nabla_\nu Z_\mu = 2 \nabla_\mu Z_\nu. \] If $E_1$ and $E_2$ are two unit vector fields tangent to $\Sigma$ such that $\{N,n,E_1,E_2\}$ is a positive orthonormal frame, and so $\{-N^\sharp,n^\sharp,E_1^\sharp,E_2^\sharp\}$ is a positive orthonormal coframe, then we can expand \[ \nabla Z^\sharp = - \nabla Z^\sharp(N,n) N^\sharp \wedge n^\sharp + \ldots. \] Therefore, \[ M - 2 \Omega J = - \frac1{4\pi} \int_{\Sigma} \star \nabla Z^\sharp = \frac1{4\pi} \int_{\Sigma} \nabla Z^\sharp(N,n) E_1^\sharp \wedge E_2^\sharp. 
\] Since \[ \nabla Z^\sharp(N,n) = \nabla Z^\sharp(Z,n) = \left\langle \nabla_Z Z, n \right\rangle = \left\langle k Z, n \right\rangle = k, \] we finally obtain the {\bf Smarr formula}: \[ M = \frac{kA}{4\pi} + 2 \Omega J, \] where $A$ is the area of the cross-section $\Sigma$ (which in particular is the same for all cross-sections of $\mathscr{H}$). \begin{figure}[h!] \begin{center} \psfrag{n}{$n$} \psfrag{N}{$N$} \psfrag{H}{$\mathscr{H}$} \psfrag{E}{$E_1$} \psfrag{S}{$\Sigma$} \psfrag{Z}{$Z$} \epsfxsize=0.8\textwidth \leavevmode \epsfbox{Smarr.eps} \end{center} \caption{Spacelike cross-section of the event horizon.} \label{Smarr} \end{figure} A cross section of the event horizon can be obtained by taking the limit as $r \to r_+$ of a surface of constant $(t,r)$. The induced metric is then \[ ds^2 = (r_+^2 + a^2 \cos^2 \theta) d \theta^2 + \left( r_+^2 + a^2 + \frac{2Ma^2r_+\sin^2\theta}{r_+^2 + a^2 \cos^2 \theta} \right) \sin^2\theta d\varphi^2, \] and its area is \[ A = 4 \pi (r_+^2 + a^2) = 8 \pi (M^2 + M \sqrt{M^2 - a^2}), \] whence \[ M = \sqrt{\frac{A}{16\pi} + \frac{4\pi J^2}{A}}. \] Noting that $M=M(A,J)$ is homogeneous of degree $1/2$, we know from Euler's homogeneous function theorem that \[ \frac12 M = \frac{\partial M}{\partial A} A + \frac{\partial M}{\partial J} J. \] On the other hand, it is easy to check that \[ \frac{\partial M}{\partial J} = \Omega. \] Smarr's formula then implies the following result: \begin{Thm} ({\bf First law of black hole thermodynamics}) The function $M=M(A,J)$ giving the mass of a Kerr black hole as a function of the area of (spacelike cross-sections of) its horizon and its angular momentum satisfies \[ dM = \frac{k}{8\pi} dA + \Omega dJ. \] \end{Thm} This formula provides an easy way to compute the surface gravity of a Kerr black hole horizon: \[ k = \frac1{4M} - M \Omega^2. \] In particular, for an extremal black hole ($r_+ = a = M$) we have \[ \Omega = \frac{1}{2M} \Rightarrow k = 0. 
\] \section{Second law} \label{sec7.4} We consider arbitrary test fields propagating on a Kerr background. Apart from ignoring their gravitational backreaction, we make no further hypotheses on the fields: they could be any combination of scalar or electromagnetic fields, fluids, elastic media, or other types of matter. By the Einstein equation, their combined energy-momentum tensor $T$ must satisfy \[ \nabla_\mu T^{\mu\nu} = 0. \] Using the symmetry of $T$ and the Killing equation, \[ \nabla_\mu X_\nu + \nabla_\nu X_\mu = 0, \] we have \[ \nabla_\mu (T^{\mu\nu} X_\nu) = 0. \] This conservation law suggests that the total field energy on a given spacelike hypersurface $S$ extending from the black hole event horizon $\mathscr{H^+}$ to infinity (Figure~\ref{Penrose_energy}) should be \[ E = \int_S T^{\mu\nu} X_\nu N_\mu , \] where $N$ is the future-pointing unit normal to $S$. \begin{figure}[h!] \begin{center} \psfrag{i+}{$i^+$} \psfrag{i0}{$i^0$} \psfrag{i-}{$i^-$} \psfrag{H}{$H$} \psfrag{H+}{$\mathscr{H^+}$} \psfrag{H-}{$\mathscr{H^-}$} \psfrag{I+}{$\mathscr{I^+}$} \psfrag{I-}{$\mathscr{I^-}$} \psfrag{I}{$\mathscr{I}$} \psfrag{S0}{$S_0$} \psfrag{S1}{$S_1$} \epsfxsize=0.5\textwidth \leavevmode \epsfbox{Penrose_energy.eps} \end{center} \caption{Penrose diagrams for the region of outer communication of the Kerr spacetime.} \label{Penrose_energy} \end{figure} Analogously, the total field angular momentum on a spacelike hypersurface $S$ extending from the event horizon to infinity is \begin{equation} L = - \int_S T^{\mu\nu} Y_\nu N_\mu , \end{equation} where the minus sign accounts for the timelike unit normal. Consider now two such spacelike hypersurfaces, $S_0$ and $S_1$, with $S_1$ to the future of $S_0$ (Figure~\ref{Penrose_energy}). 
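Since $k>0$ for a subextremal black hole, the first law of the previous section, $dM = \frac{k}{8\pi}\,dA + \Omega\, dJ$, already shows that the horizon area grows precisely when the absorbed energy exceeds $\Omega$ times the absorbed angular momentum. A minimal numerical illustration of this, under the assumption that the black hole settles to a nearby Kerr solution (the function names are our own):

```python
from math import sqrt, pi

def horizon_area(M, J):
    """A = 8*pi*(M^2 + M*sqrt(M^2 - a^2)) with a = J/M (subextremal Kerr)."""
    a = J / M
    return 8 * pi * (M**2 + M * sqrt(M**2 - a**2))

def angular_velocity(M, J):
    """Omega = a / (2*M*r_+) with r_+ = M + sqrt(M^2 - a^2)."""
    a = J / M
    return a / (2 * M * (M + sqrt(M**2 - a**2)))

M, J = 1.0, 0.6
Omega = angular_velocity(M, J)
dJ = 1e-3

# dM slightly above Omega*dJ: the horizon area grows ...
A_grow = horizon_area(M + Omega*dJ + 1e-5, J + dJ)
# ... while dM below Omega*dJ would shrink it, which the second law forbids
A_shrink = horizon_area(M + Omega*dJ - 1e-4, J + dJ)
assert A_grow > horizon_area(M, J) > A_shrink
```

The marginal case $\Delta M = \Omega\,\Delta J$ leaves the area unchanged to first order, corresponding to a reversible process.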
The energy absorbed by the black hole across the subset $H$ of $\mathscr{H^+}$ between $S_0$ and $S_1$ is then \[ \Delta M = \int_{S_0} T^{\mu\nu} X_\nu N_\mu - \int_{S_1} T^{\mu\nu} X_\nu N_\mu , \] whereas the angular momentum absorbed by the black hole across $H$ is \[ \Delta J = - \int_{S_0} T^{\mu\nu} Y_\nu N_\mu + \int_{S_1} T^{\mu\nu} Y_\nu N_\mu . \] Therefore, we have \[ \Delta M - \Omega \Delta J = \int_{S_0} T^{\mu\nu} Z_\nu N_\mu - \int_{S_1} T^{\mu\nu} Z_\nu N_\mu . \] Because $Z$ is also a Killing vector field, \[ \nabla_\mu (T^{\mu\nu} Z_\nu) = 0, \] and so the divergence theorem, applied to the region bounded by $S_0$, $S_1$ and $H$, yields \[ \Delta M - \Omega \Delta J = \int_{H} T^{\mu\nu} Z_\nu Z_\mu \] (we use $-Z$ as the null normal on $H$). Therefore we have the following result. \begin{Thm} ({\bf Second law of black hole thermodynamics, test field version}) If the energy-momentum tensor $T$ corresponding to any collection of test fields propagating on a Kerr background satisfies the null energy condition at the event horizon then the energy $\Delta M$ and the angular momentum $\Delta J$ absorbed by the black hole satisfy \[ \Delta M \geq \Omega \Delta J. \] \end{Thm} If we think of Kerr black holes as stationary states and imagine that the interaction of a Kerr black hole with test fields results in a new Kerr black hole, then, in view of the first law of black hole thermodynamics, we can rewrite the result above as \[ dA = \frac{8\pi}{k} (dM - \Omega dJ) \geq 0, \] that is, the area of the event horizon of a Kerr black hole can only increase as a result of its interaction with test fields. In fact, it is possible to prove a general result about the area of the event horizon of a black hole spacetime. \begin{Prop}\label{Prop_horizon} The event horizon of a black hole spacetime is ruled by null geodesics. 
\end{Prop} \begin{proof} Given the causal structure of Minkowski's spacetime, it is clear that $J^-(\mathcal{I})$ coincides with $I^-(\mathcal{I})$, an open set. Let $p \in \mathscr{H}$ be any point in the event horizon and let $U \ni p$ be a simple neighborhood (see Proposition~\ref{compact} in Chapter~\ref{chapter4}). Given a sequence $\{p_n\} \subset U \cap J^-(\mathcal{I})$ converging to $p$, let $c_n$ be a future-pointing causal curve connecting $p_n$ to $\mathcal{I}$, let $q_n$ be the first intersection of $c_n$ with $\partial U$, and let $\gamma_n$ be the future-pointing causal geodesic connecting $p_n$ to $q_n$ in $U$ (see Figure~\ref{horizon}). Since the exponential map centered on any point in $U$ is a diffeomorphism, these geodesics converge to a future-pointing causal geodesic $\gamma$ in $U$ with initial point $p$. Note that $\gamma$ cannot enter $J^-(\mathcal{I})$, because then we would have $p \in J^-(\mathcal{I})\equiv\operatorname{int} J^-(\mathcal{I})$, in contradiction with $p \in \mathscr{H}\equiv\partial J^-(\mathcal{I})$. Moreover, every point in $\gamma$ is the limit of points in $\gamma_n$, hence points in $J^-(\mathcal{I})$. We conclude that $\gamma$ is a curve on $\overline{J^-(\mathcal{I})} \setminus \operatorname{int} J^-(\mathcal{I}) = \mathscr{H}$. Finally, $\gamma$ cannot be a timelike geodesic, because then the sequence $q_n$ would enter the open set $I^+(p)$, and we would again have $p \in J^-(\mathcal{I})$. We conclude that $\gamma$ is a null geodesic. If we extend $\gamma$ maximally towards the future we obtain a future-inextendible null geodesic; covering this curve with simple neighborhoods and applying similar arguments to the above, one can easily show that it never leaves $\mathscr{H}$. \end{proof} \begin{figure}[h!] 
\begin{center} \psfrag{U}{$U$} \psfrag{p}{$p$} \psfrag{pn}{$p_n$} \psfrag{qn}{$q_n$} \psfrag{H}{$\mathscr{H}$} \epsfxsize=0.4\textwidth \leavevmode \epsfbox{horizon.eps} \end{center} \caption{Proof of Proposition~\ref{Prop_horizon}.} \label{horizon} \end{figure} \pagebreak \begin{Thm} ({\bf Second law of black hole thermodynamics, Hawking's version \cite{H72}}) If the energy-momentum tensor of a black hole spacetime satisfies the null energy condition at the event horizon and the null geodesics ruling the event horizon are complete towards the future then their expansion is never negative. In particular, the area of any spacelike cross-section of the event horizon cannot decrease towards the future. \end{Thm} \begin{proof} Note that the null geodesics ruling the event horizon are orthogonal to any spacelike cross-section $\Sigma$, so that the discussion preceding the proof of Penrose's singularity theorem in Chapter~\ref{chapter4} applies. Suppose that the expansion of some null geodesic were negative at some point $p \in \mathscr{H}$. Then, by the analogue of Proposition~\ref{conjugate_null} in Chapter~\ref{chapter4}, that null geodesic would have a conjugate point to the future of $p$, after which, by the analogue of Proposition~\ref{conjugate_null_I+} in Chapter~\ref{chapter4}, it would leave $\mathscr{H}$. Since this would contradict Proposition~\ref{Prop_horizon}, the expansion can never be negative. \end{proof} \section{Hawking radiation and black hole thermodynamics} \label{sec7.6} Recall the three laws of thermodynamics for, say, a gas: \begin{itemize} \item {\bf Zeroth law:} The temperature is constant throughout the gas when thermal equilibrium has been reached. \item {\bf First law:} The internal energy $U$ as a function of the gas entropy $S$ and volume $V$ satisfies \[ dU = T dS - p dV, \] where $T$ is the temperature and $p$ is the pressure. \item {\bf Second law:} The entropy of the gas cannot decrease towards the future. 
\end{itemize} These are remarkably similar to the three laws of black hole thermodynamics, if we identify the black hole mass $M$ with the internal energy, (some multiple of) the horizon's surface gravity with the black hole's temperature and (some multiple of) the horizon's area with the black hole's entropy. Inspired by this analogy, Bekenstein \cite{Bek72} proposed in 1972 that black holes are indeed thermodynamic systems. Hawking initially resisted this suggestion, since objects in thermal equilibrium at a given temperature must emit black body radiation, which black holes cannot (classically) do. In 1974, however, Hawking \cite{H74} applied methods of quantum field theory in curved spacetime to show that black holes do indeed emit particles with a thermal spectrum corresponding to the temperature \[ T = \frac{k}{2\pi}. \] In particular, this fixed the black hole entropy as \[ S = \frac{A}4. \] \section{Exercises} \label{sec7.7} \begin{enumerate} \item To compute the Komar mass and angular momentum of the Kerr solution we consider the region $r \gg M,a$. \begin{enumerate} \item Show that a positive orthonormal coframe is approximately given in this region by \begin{align*} & \omega^0 \sim dt, \qquad \omega^r \sim dr, \qquad \omega^\theta \sim r d\theta, \\ & \omega^\varphi \sim r \sin\theta d\varphi - \frac{2Ma\sin\theta}{r^2} dt. \end{align*} \item Establish the following asymptotic formulas: \begin{align*} & X^\sharp \sim - \left( 1 - \frac{2M}{r} \right) dt - \frac{2Ma\sin^2\theta}{r} d\varphi; \\ & Y^\sharp \sim - \frac{2Ma\sin^2\theta}{r} dt + r^2 \sin^2\theta d\varphi; \\ & dX^\sharp \sim \frac{2M}{r^2} \omega^0 \wedge \omega^r + \ldots ; \\ & dY^\sharp \sim - \frac{6Ma\sin^2\theta}{r^2} \omega^0 \wedge \omega^r + \ldots . \end{align*} \item Prove that $M_{\text{Komar}} = M$ and $J_{\text{Komar}} = Ma$. 
\end{enumerate} \item Show that the metric induced on the ergosphere is \begin{align*} ds^2 = & - 2a\sin^2\theta dt d\varphi + \frac{2M^3r}{(r-M)^2} d \theta^2 \\ & + \left( r^2 + a^2 + a^2\sin^2\theta \right) \sin^2\theta d\varphi^2, \end{align*} where $r = M + \sqrt{M^2-a^2\cos^2\theta}$. Prove that this metric is Lorentzian. \item The symmetry semiaxis $\theta=0$ is a totally geodesic submanifold of the Kerr solution, with metric \[ ds^2 = - \frac{\Delta}{r^2 + a^2} dt^2 + \frac{r^2 + a^2}{\Delta} dr^2. \] Obtain the maximal analytic extension of this submanifold. Note that $r=0$ is not a singularity, and so the metric can be continued for negative values of $r$. \item Consider the static and spherically symmetric metric given in local coordinates by \[ \hspace{2cm} ds^2 = - V(r) dt^2 + V(r)^{-1} dr^2 + r^2 \left( d\theta^2 + \sin^2 \theta d\varphi^2 \right). \] \begin{enumerate} \item Show that this metric can be written in the form \[ \hspace{2cm} ds^2 = - V(r) dv^2 + 2 dv dr + r^2 \left( d\theta^2 + \sin^2 \theta d\varphi^2 \right), \] where \[ v = t + \int \frac{dr}{V(r)}. \] \item Assume that $V$ has an isolated zero at $r=r_H$, so that this hypersurface is a Killing horizon. Show that the corresponding surface gravity relative to $\frac{\partial}{\partial v}$ is \[ k = \frac12 V'(r_H). \] \end{enumerate} \item Consider a one-parameter family of null geodesics $\gamma(\tau,\lambda)$ connecting two timelike curves $c_0(\tau)=\gamma(\tau,0)$ and $c_1(\tau)=\gamma(\tau,1)$. 
Show that \[ \frac{\partial}{\partial \lambda} \left\langle \frac{\partial \gamma}{\partial \tau}, \frac{\partial \gamma}{\partial \lambda} \right\rangle = 0, \] and use this to prove that if $\tau$ is the proper time for the curve $c_0$ and \[ \tau'(\tau) = \int_{\tau_0}^\tau \left|\dot{c}_1(t)\right|dt \] is the proper time for the curve $c_1$ then \[ \frac{d\tau'}{d\tau} = \frac{E_0}{E_1}, \] where \[ E_0 = - \left\langle \dot{c}_0, \frac{\partial \gamma}{\partial \lambda} \right\rangle \quad \text{ and } \quad E_1 = - \left\langle \frac{\dot{c}_1}{|\dot{c}_1|}, \frac{\partial \gamma}{\partial \lambda} \right\rangle \] are the energies of the null geodesic $\gamma(\tau,\lambda)$ as measured by the observers corresponding to $c_0$ and $c_1$. \item Compute the area, the angular velocity and the surface gravity of a Kerr black hole horizon (you may find the relation $r_+^2 + a^2 = 2Mr_+$ to be useful here). \item Prove that test fields satisfying the null energy condition at the event horizon cannot destroy an extremal Kerr black hole. More precisely, prove that if an extremal black hole is characterized by the physical quantities $(M,J)$, and absorbs energy and angular momentum $(\Delta M,\Delta J)$ by interacting with the test fields, then the metric corresponding to the physical quantities $(M+\Delta M, J + \Delta J)$ represents, to first order in $\Delta M$ and $\Delta J$, either a subextremal or an extremal Kerr black hole. \item Show that if a test field satisfying the null energy condition at the event horizon extracts energy from a Kerr black hole then $a=J/M$ always decreases. What fraction of the black hole's mass can be extracted? \item Check that the second law holds for the black hole resulting from an Oppenheimer-Snyder collapse. 
\item Use the second law of black hole thermodynamics to: \begin{enumerate} \item Prove that a Schwarzschild black hole cannot split into two Kerr black holes; \item Give an upper bound to the energy released in the form of gravitational waves when two Kerr black holes coalesce to form a Schwarzschild black hole. Can the efficiency of this process ever exceed $50\%$? \end{enumerate} \end{enumerate} \chapter*{Preface} These lecture notes were written for a one-semester course in mathematical relativity aimed at mathematics and physics students, which has been taught at Instituto Superior T\'ecnico (Universidade de Lisboa) since 2010. They are not meant as an introduction to general relativity, but rather as a complementary, more advanced text, much like Part II of Wald's textbook \cite{W84}, on which they are loosely based. It is assumed that the reader is familiar at least with special relativity, and has taken a course either in Riemannian geometry (typically the mathematics students) or in general relativity (typically the physics students). In other words, the reader is expected to be proficient in (some version of) differential geometry and to be acquainted with the basic principles of relativity. I thank the many colleagues and students who read this text, or parts of it, for their valuable comments and suggestions. Special thanks are due to my colleague and friend Pedro Gir\~ao.
\section{Introduction} Astrophysical jets, highly collimated beams of high-velocity material, and outflows of lower collimation and speed are a ubiquitous phenomenon. The jets are observed over a wide range of luminosity and spatial scale, originating from young stars, micro-quasars, or active galactic nuclei. The common understanding is that magnetohydrodynamics (MHD) is responsible for launching, accelerating, and collimating these jets. Furthermore, it is clear that accretion and ejection are related to each other: one efficient way to remove angular momentum from a disk is to eject it vertically into a jet or outflow. Observational data have confirmed the co-existence of bipolar jets in most jet-forming regions. Jet and counter jet, however, typically appear asymmetric in shape, with very few exceptions (see e.g. \citealt{1990A&A...232...37M}, \citealt{1996ApJ...468L.103R}, \citealt{2007prpl.conf..231R}). \citet{2013A&A...551A...5E} find similar mean velocities in both lobes of the jets observed, while they report asymmetric variations in mass outflow rates and velocities, and suggest that the jet launching mechanism on either side of the disk is not synchronized. One exception is the jet source HH\,212, which ejects an almost perfectly symmetric bipolar structure \citep{1998Natur.394..862Z}, suggesting that the causal origin of jet knots is located close to the central engine. On the other hand, if jets form naturally asymmetric, so that symmetric jets would need special conditions to be formed, we may ask what this kind of ``natural'' ejection process is, and what the additional conditions for symmetry are. A small number of papers have addressed this topic. \citet{1996A&A...306..329C} discuss the MHD origin of the one-sidedness as probably caused by a superposition of a quadrupolar disk dynamo and a stellar dipole. The first bipolar jet simulation was published by \citet{2003A&A...398..825V}, followed by a series of follow-up papers. 
Recent simulations consider asymmetric ejections of stellar wind components from an offset multipole stellar magnetosphere \citep{2010MNRAS.408.2083L}. Velocity asymmetries in protostellar jets have been investigated by \citet{2012A&A...545A..53M}, in particular comparing intrinsic and extrinsic mechanisms, such as the field alignment between stellar and disk field or the pressure distribution in the ambient cloud. Both effects seem to play a role and work on different time scales. \citet{2012ApJ...759L...1S} discuss semi-analytical models indicating a counter-rotation of the inner jet and compare them to MHD simulations. To our knowledge, only very few numerical simulations investigating the bipolar launching of disk jets have been performed. It is therefore interesting to investigate the evolution of both hemispheres of a jet-disk system in order to see whether and how a global asymmetry in the large-scale outflow can be governed by the disk evolution. In a previous paper (paper I, \citealt{2012ApJ...757...65S}) we have presented detailed model simulations of the jet launching process, investigating a comparative set of various launching parameters such as magnetization (or plasma-beta), disk diffusivity, and diffusivity scale height. In this paper (paper II) we continue our investigation, concentrating on the {\em bipolar} character of jets and outflows, in particular on the fact that both jet components are mostly observed as asymmetric. We investigate the launching of bipolar outflows from initially symmetric conditions, and also from a disk structure with slight asymmetries. With {\em launching}, we denote the process which conveys material from radial accretion into vertical ejection, lifting it from the disk plane into the corona, thus establishing a disk wind.
Referring to our notation in paper I, we would like to clarify again that with {\em formation} we denote the process of accelerating and collimating an already existing slow disk wind or stellar wind into a jet beam. This paper deals with both processes - launching and formation. The present paper is organized as follows. Section 2 describes the numerical setup, the initial and boundary conditions of our simulations, with particular emphasis on the approach of simulating both hemispheres. The general evolution of jet launching is presented in Section 3, where we also present the simulation of a perfectly symmetric jet. Section 4 is then devoted to asymmetric jets and a parameter study comparing jets from different disk setups. We will discuss their symmetry properties and how symmetry can be broken by the intrinsic disk evolution. \begin{table*} \begin{center} \caption{Grid resolution of various parameter runs. The equidistant grid usually covers the highly diffusive regions of disk and disk wind, while the stretched grid covers the weakly diffusive, almost ideal MHD regions farther from the launching region. Simulations discussed in this paper are labeled with a star '*'. } \label{tbl:resolution} \begin{tabular}{lrclcl} \hline \hline \noalign{\smallskip} run & \multicolumn{2}{l}{equidistant subgrid} & \multicolumn{2}{l}{stretched subgrids} & \\ & $n_r$ & grid $r$-extension & $n_r$ & grids $r$-extension & \\ & $n_z$ & grid $z$-extension & $n_z$ & grids $z$-extension & \\ \noalign{\smallskip} \hline \noalign{\smallskip} sb1*-sb4* & 800 & $ 0.0 \leq r \leq 60.0 $ & & & \\ & 1214 & $-40.0 \leq z \leq 40 $ & 800, 800 & $-100.0 \leq z \leq -40.$, $40.0 \leq z \leq 100. $ & \\ sb5-sb7 & 800 & $ 0.0 \leq r \leq 60.0 $ & & & \\ & 1214 & $-40.0 \leq z \leq 40 $ & 800, 800 & $-100.0 \leq z \leq -40.$, $40.0 \leq z \leq 100.
$ & \\ \noalign{\smallskip} \hline \noalign{\smallskip} cf1-cf5 & 800 & $ 0.0 \leq r \leq 60.0 $ & & &\\ & 1214 & $-40.0 \leq z \leq 40 $ & 800, 800 & $-100.0 \leq z \leq -40$, $40.0 \leq z \leq 100 $ & \\ cf6 & 1600 & $ 0.0 \leq r \leq 60.0 $ & & & \\ & 2428 & $-40.0 \leq z \leq 40 $ & 1200, 1200 & $-100.0 \leq z \leq -40$, $40.0 \leq z \leq 100 $ & \\ cf8 & 2428 & $ 0.0 \leq r \leq 40.0 $ & & & \\ & 2428 & $-10.0 \leq z \leq 10 $ & 1000, 1000 & $ -50.0 \leq z \leq -10$, $10.0 \leq z \leq 50 $ & \\ cf14* & 1200 & $ 0.0 \leq r \leq 50.0 $ & & & \\ & 3600 & $-60.0 \leq z \leq 60 $ & & & \\ cf16* & 2428 & $ 0.0 \leq r \leq 40.0 $ & & & \\ & 2428 & $-20.0 \leq z \leq 20 $ & 1000, 1000 & $ -50.0 \leq z \leq -10$, $10.0 \leq z \leq 50 $ & \\ \noalign{\smallskip} \hline \hline \end{tabular} \end{center} \end{table*} \section{Model setup} As discussed in detail in paper I, we model the launching of MHD jets from slightly sub-Keplerian disks, which are initially in pressure equilibrium with a non-rotating corona. The main goals of paper I were i) to determine the mass ejection-to-accretion rate fraction for a variety of disk physical characteristics such as plasma-beta, magnetic diffusivity, or disk scale height, ii) similarly for the angular momentum flux, and iii) to determine the main geometry of these jets, such as the asymptotic jet radius and opening angle, or the size of the main jet launching area of the disk. In the present paper we extend our approach to simulations on a computational domain including {\em both hemispheres}. This allows us to investigate the {\em truly bipolar} launching and how the launching symmetry can affect the symmetry characteristics of jet and counter-jet. We apply the MHD code PLUTO \citep{2007ApJS..170..228M, 2012ApJS..198....7M}, solving the time-dependent resistive MHD equations as described in paper I. Again, the simulations are performed in a 2.5-dimensional setup in cylindrical coordinates (i.e. axisymmetric, but evolving all three vector components).
\section{Numerical setup - initial \& boundary conditions} The major extension from paper I is that we now treat the truly {\em bipolar} launching of outflows. In general we apply the same initial disk structure and boundary conditions as before; however, we extend the disk-outflow system across the equatorial plane into both hemispheres. Above and below the disk, respectively, a hydrostatic corona is prescribed in pressure balance with the disk gas pressure (thus implying a density jump and entropy jump from disk to corona, see paper I). We apply a uniform grid across the midplane and, attached to it, stretched grids. We run simulations applying different grid resolutions (see Tab.~\ref{tbl:resolution}). With the highest-resolution grid we resolve the disk scale height at the innermost radius with up to $\simeq 10$ grid cells. \subsection{Boundary conditions} The main goal of this paper is to investigate the symmetry of bipolar jets launched from a diffusive accretion disk. It is therefore essential to carefully check our numerical setup, in particular the internal boundary conditions describing the sink, in order to prevent numerical artifacts from generating asymmetry. Compared to paper I, the equatorial-plane boundary condition is obviously now omitted. The disk itself may now evolve into a warped structure, breaking the intrinsic hemispheric symmetry of disk and outflow. A disk mid-plane, if it exists, will not necessarily lie along the equatorial plane. A further consequence is, e.g., that electric currents can now flow across the disk midplane. As discussed in paper I, for the outer disk radius we apply an outflow boundary condition. We feed the inner jet launching area by accretion from the inner disk areas. Since the physical extent of the computational domain is somewhat reduced compared to paper I, the mass reservoir for disk accretion is smaller as well, which limits the disk mass evolution to comparatively shorter time scales.
The remaining boundary conditions are equivalent to those applied in paper I, i.e. the outflow boundary conditions (modified from the original code, see \citealt{2010ApJ...709.1100P}) for the radial and vertical outer boundaries, and the axisymmetry boundary condition along the rotation axis. The sink boundary conditions allow us to absorb the mass and angular momentum of the accreting material. \subsection{Initial conditions} We apply the same basic initial conditions as in paper I. In addition, a few extensions to this setup were made in order to break the symmetry and govern an asymmetric evolution. As in paper I, the initial magnetic field is prescribed by the magnetic flux function $\psi$ following \cite{2007A&A...469..811Z}, \begin{equation} \displaystyle \psi(r,z) = \frac{3}{4} B_{z,i} r_{\rm i}^2 \left(\frac{r}{r_{\rm i}}\right)^{3/4} \frac{m^{5/4}}{\left( m^2 + \left(z/r\right)^2\right)^{5/8} }, \label{eq:magini} \end{equation} where $B_{z,\rm i}$ measures the vertical field strength at $(r=r_{\rm i},z=0)$. The (initial) field tension is determined by the parameter $m$. We apply $m = 0.4$ as in paper I. For the disk density and pressure we apply the same distribution as in Eq.~(6) of paper I. However, we also run models with an initially asymmetric disk structure, considering a pressure scale height in the upper disk hemisphere $\epsilon = \epsilon_{\rm up} = 0.15$ different from the pressure scale height in the lower disk hemisphere $\epsilon = \epsilon_{\rm down} = 0.10$. The initial disk corona follows the same distribution as in paper I. In order to compare intrinsic effects of asymmetric bipolar launching with external effects, we have also run comparison simulations with an initially symmetric disk, but with a disk corona of different density/pressure in the upper and lower hemisphere, respectively.
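As a quick numerical illustration (not part of the original setup description), the initial flux function of Eq.~(\ref{eq:magini}) can be evaluated directly; a minimal Python sketch in code units, adopting $B_{z,\rm i} = r_{\rm i} = 1$ and the field tension parameter $m = 0.4$ used in our runs:

```python
def psi(r, z, m=0.4, B_zi=1.0, r_i=1.0):
    """Initial poloidal magnetic flux psi(r, z) of Eq. (eq:magini), code units."""
    return (0.75 * B_zi * r_i**2 * (r / r_i)**0.75
            * m**1.25 / (m**2 + (z / r)**2)**0.625)

# In the midplane (z = 0) the m-dependent factor reduces to unity, so the
# flux simply grows as r^(3/4); at the inner radius psi = (3/4) B_zi r_i^2.
assert abs(psi(1.0, 0.0) - 0.75) < 1e-9
assert abs(psi(16.0, 0.0) - 0.75 * 16**0.75) < 1e-9

# A larger field tension parameter m flattens the decline of psi with |z|:
assert psi(10.0, 5.0, m=0.8) > psi(10.0, 5.0, m=0.4)
```

The flux function is mirror-symmetric in $z$ by construction, so any hemispheric asymmetry in the simulations must develop from the disk evolution, not from the initial field.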
\begin{table*} \begin{center} \caption{Overview of our simulation runs, concerning the character of asymmetry imposed and the diffusivity distribution applied. Simulation runs discussed in this paper are labeled with a star '*'. } \label{tbl:bicases} \begin{tabular}{lllll} \tableline \tableline \noalign{\smallskip} run & character & symmetry breaking & $\eta$ profile\tablenotemark{a} & \\ \noalign{\smallskip} \tableline \noalign{\smallskip} sb1* & reference run & none (symmetric) & $f_1$ & $\alpha_{m,1} = 3.0$ \\ sb2* & global disk asymmetry & initial scale height, $(\epsilon_{\rm up},\epsilon_{\rm down}) = (0.15, 0.1)$ & $f_1$ & \\ sb3* & local disk asymmetry & overpressure injected at $r=12$ & $f_1$ & $\alpha_{m,1} = 3.0$ \\ sb4* & global disk asymmetry & initial density contrast, $(\delta_{\rm up},\delta_{\rm down})=(10^{-3}, 10^{-4})$ & $f_2$ & $\alpha_{m,2} = 3.0$ \\ sb5 & local disk asymmetry & overpressure injected at $r=18$ & $f_2$ & $\alpha_{m,2} = 3.0$ \\ sb6 & local disk asymmetry & overpressure injected at $r=18$ & $f_2$ & $\alpha_{m,2} = 1.0$ \\ sb7 & local disk asymmetry & overpressure injected at $r=18$ & $f_2$ & $\alpha_{m,2} = 2.0$ \\ \noalign{\smallskip} \tableline \noalign{\smallskip} cb1 & global disk asymmetry & initial scale height, $\epsilon_{\rm up},\epsilon_{\rm down} = 0.15, 0.1 $ & $f_1$ & \\ cb2-3 & global disk asymmetry & initial scale height, $ \epsilon_{\rm up},\epsilon_{\rm down} = 0.15, 0.1 $ & $f_2$, $\Gamma = 1/3$ & $\alpha_{m,2} = 0.01$ \\ cb4 & global disk asymmetry & initial scale height, $ \epsilon_{\rm up},\epsilon_{\rm down} = 0.15, 0.1 $ & $f_2$, $\Gamma = 1/3$ & $\alpha_{m,2} = 0.05$ \\ cb5 & global disk asymmetry & initial scale height, $ \epsilon_{\rm up},\epsilon_{\rm down} = 0.15, 0.1 $ & $f_3$ & $\alpha_{m,3} = 0.05$ \\ cb6 & global disk asymmetry, high res & initial scale height, $\epsilon_{\rm up},\epsilon_{\rm down} = 0.15, 0.1 $ & $f_2$, $\Gamma = 1/3$ & $\alpha_{m,2} = 0.05$ \\ cb8 & global disk asymmetry,
highest res & initial scale height, $\epsilon_{\rm up},\epsilon_{\rm down} = 0.15, 0.1 $ & $f_2$, $\Gamma = 2/3$ & $\alpha_{m,2} = 0.01$ \\ cb14* & global disk asymmetry & initial scale height, $\epsilon_{\rm up},\epsilon_{\rm down} = 0.15, 0.1 $ & $f_3$ & $\alpha_{m,3} = 0.1$ \\ cb16* & global disk asymmetry & initial scale height, $\epsilon_{\rm up},\epsilon_{\rm down} = 0.15, 0.1 $ & $f_3$ & $\alpha_{m,3} = 0.1$ \\ \noalign{\smallskip} \tableline \tableline \end{tabular} \end{center} \end{table*} \subsection{Units and normalization} We apply the same code units and normalization as in paper I. Throughout this paper distances are expressed in units of the inner disk radius $r_{\rm i}$, while $p_{\rm d,i}$ and $\rho_{\rm d,i}$ are the disk pressure and density at this radius, respectively\footnote{The index ``i'' refers to the value at the inner disk radius at the equatorial plane at time $t=0$.}. For the sake of comparison to previous papers we may assume $r_{\rm i} = 0.1\,$AU for a protostellar jet and $r_{\rm i} = 10\,$Schwarzschild radii for an AGN jet. Velocities are measured in units of the Keplerian velocity $v_{\rm K,i}$ at the inner disk radius. Thus, by assuming smaller inner disk radii, the outflow speed will become larger. Time is measured in units of $t_{\rm i} = r_{\rm i} / v_{\rm K,i}$, which can be related to the Keplerian orbital period $\tau_{\rm K,i} = 2\pi t_{\rm i}$. The (initial) disk aspect ratio $\epsilon$ is the ratio of the isothermal sound speed to the Keplerian speed, both evaluated at the disk midplane, $\epsilon \equiv c_{\rm s} / v_{\rm K}$. Pressure is given in units of $p_{\rm d,i} = \epsilon^2 \rho_{\rm d,i} v_{\rm K,i}^2$. The magnetic field is measured in units of $B_{\rm i} = B_{z,\rm i}$. We adopt $v_{\rm K,i} = 1$, $\rho_{\rm d,i} = 1$ in code units.
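To make these code units concrete, the conversion to physical units can be sketched as follows; the protostellar scaling $r_{\rm i} = 0.1\,$AU is taken from the text, while the central mass of one solar mass is an illustrative assumption of this sketch only:

```python
import math

# Physical constants (SI units)
G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
M_sun = 1.989e30       # solar mass [kg]
AU = 1.496e11          # astronomical unit [m]

# Protostellar scaling of the text; the 1 M_sun central mass is assumed here
M = 1.0 * M_sun
r_i = 0.1 * AU

v_Ki = math.sqrt(G * M / r_i)      # Keplerian velocity at r_i (unit of velocity)
t_i = r_i / v_Ki                   # unit of time
tau_Ki = 2.0 * math.pi * t_i       # inner Keplerian orbital period

print(f"v_K,i   ~ {v_Ki / 1e3:.0f} km/s")
print(f"t_i     ~ {t_i / 86400:.1f} days")
print(f"tau_K,i ~ {tau_Ki / 86400:.1f} days")
```

Under this assumed scaling, $v_{\rm K,i} \simeq 94\,$km/s and $t_{\rm i} \simeq 1.8\,$days, so a simulation time of $t = 2000$ code units corresponds to roughly ten years.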
\begin{figure*} \centering \includegraphics[width=4.4cm]{fig1a.pdf} \includegraphics[width=4.4cm]{fig1b.pdf} \includegraphics[width=4.4cm]{fig1c.pdf} \includegraphics[width=4.4cm]{fig1d.pdf} \caption{Time evolution of the bipolar jet-disk structure for reference simulation sb1, applying a fixed-in-time and fixed-in-space magnetic diffusivity distribution Eq.~\ref{eq:magdiff_global}. Shown is the evolution of the mass density (color) and the poloidal magnetic field (contours of poloidal magnetic flux $\Psi(r,z)$) for the flux levels $\Psi = 0.01, 0.03, 0.06, 0.1, 0.15, 0.2, 0.26, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95, 1.1, 1.3, 1.5, 1.7$, for the dynamical time steps $t = 0, 100, 1000, 2000, 3000$. } \label{fig:bipo1_fix_case1} \end{figure*} \begin{figure} \centering \includegraphics[width=8cm]{fig2a.pdf} \includegraphics[width=8cm]{fig2b.pdf} \caption{Time evolution of the mass fluxes for reference simulation sb1. Shown is the evolution of the accretion rate and the mass ejection rates from the upper (solid) and lower (dashed) disk surfaces (all in code units). Ejection rates are measured in the control volumes with $r_1=1.5$ and $r_2 = 10.0$, while the accretion rate is vertically integrated at $r=10$.} \label{fig:bipo1_fix_case1_mass} \end{figure} \section{Magnetic diffusivity} In order to extend the setup of paper I into a truly bipolar configuration we need to reconsider our model for the magnetic diffusivity $\eta(r,z)$. An obvious constraint is that the prescription of diffusivity should not {\em a priori} influence the symmetry of the system. Since the disk structure may now evolve asymmetrically in both hemispheres, the magnetic diffusivity must be able to follow such a disk evolution. Naturally, in asymmetric disks the disk midplane does not follow the equatorial plane. In general, we assume the diffusivity to be anomalous and of turbulent origin. Since we do not resolve the disk turbulence from first principles (e.g.
by resolving the MRI), we apply a parameterized diffusivity distribution, effectively following an $\alpha$-prescription. We neglect geometrically more complex distributions such as MRI-inactive dead zones or a more detailed local treatment of MRI turbulence \citep{1996ApJ...457..355G, 2007ApJ...668L..51P, 2010MNRAS.405...41G, 2012arXiv1210.6664F, 2013A&A...550A..61L, 2013ApJ...767...30B}. As detailed in paper I, we assume a diagonal diffusivity tensor with the non-zero components $\eta_{\rm \phi\phi} \equiv \eta_{\rm p}$, and $\eta_{\rm rr}=\eta_{\rm zz} \equiv \eta_{\phi}$, where we denote $\eta_{\rm p}$ as the {\em poloidal magnetic diffusivity}, and $\eta_{\phi}$ as the {\em toroidal magnetic diffusivity}, respectively. The anisotropy parameter $\chi = \eta_{\phi} /\eta_{\rm p}$ quantifies the different strengths of diffusivity in the poloidal and toroidal directions. Here, we apply $\chi = 3.0$ for all simulations. For the diffusivity profile, we have investigated several options, which we will discuss in the following. Table \ref{tbl:bicases} compares the parameter setups for the different magnetic diffusivity distributions applied in our simulations. \subsection{A global magnetic diffusivity prescription} A first option for the diffusivity distribution is to mirror the profile applied in paper I along the equatorial plane, \begin{equation} \eta_{\rm p}(r,z) = \alpha_{\rm m,1} f_1(r,z) \equiv \alpha_{\rm m,1} v_{\rm A,0} H_0 \exp{\left(-\frac{2 z^2}{H_{\eta}^2}\right)}, \label{eq:magdiff_global} \end{equation} with the Alfv\'en speed $v_{\rm A,0} \equiv v_{\rm A}(r, z=0)$ and the disk thermal scale height $H_0 \equiv H(r) = \epsilon r$, with $\epsilon = c_{\rm S}(r,z=0) / v_{\rm K}(r,z=0)$, both measured at the midplane and at time zero. Essentially, the diffusivity profile Eq.~\ref{eq:magdiff_global} is geometrically tied to the equatorial plane, potentially inducing hemispheric symmetry of the disk and the outflow.
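For illustration, the mirror symmetry built into the profile $f_1$ can be sketched in a few lines of Python; the choice $\alpha_{\rm m,1} = 3.0$ corresponds to run sb1, while the midplane Alfv\'en speed and the ratio $H_\eta/H_0$ are free placeholders of this sketch (in the simulations they follow from the initial disk structure):

```python
import math

def eta_p_global(r, z, alpha_m1=3.0, eps=0.1, h_eta_over_h0=3.0, v_A0=1.0):
    """Global diffusivity profile f_1 (Eq. eq:magdiff_global), tied to z = 0."""
    H0 = eps * r                    # thermal disk scale height
    H_eta = h_eta_over_h0 * H0      # diffusivity scale height, >= H0
    return alpha_m1 * v_A0 * H0 * math.exp(-2.0 * z**2 / H_eta**2)

# The profile is symmetric about the equatorial plane by construction,
# so it cannot adapt to a disk whose midplane drifts away from z = 0:
assert eta_p_global(10.0, 2.5) == eta_p_global(10.0, -2.5)
```

This rigidity is exactly why a local prescription is needed for truly asymmetric disk evolution, as discussed below.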
As in paper I, we allow for a diffusivity scale height $H_{\eta}$ larger than the thermal scale height $H_0$ (see discussion in paper I). In fact, \citet{2010MNRAS.405...41G}, who investigated the MRI-induced turbulence of accretion disks by high-resolution box simulations, finds a level of turbulence {\em increasing} with disk height. His simulations show a maximum level of turbulence at about 2-3 disk pressure scale heights, which is in good agreement with our model approach. More recent simulations by \citet{2011MNRAS.416..361B, 2013ApJ...764...66S} indicate similar scale heights for the turbulent stresses. In Eq.~\ref{eq:magdiff_global}, both $v_{\rm A}$ and $c_{\rm S}$ can be chosen time-independent (as in paper I), or evolving in time (as e.g. in \citealt{2010A&A...512A..82M}). For the sake of comparison, in the present bipolar reference simulation (denoted by sb1) we apply a magnetic diffusivity profile $f_1(r,z)$ fixed in time. \begin{figure*} \centering \includegraphics[width=4.4cm]{fig3a.pdf} \includegraphics[width=4.4cm]{fig3b.pdf} \includegraphics[width=4.4cm]{fig3c.pdf} \includegraphics[width=4.4cm]{fig3d.pdf} \caption{Time evolution of the bipolar jet-disk structure for simulation sb2, applying a fixed-in-time diffusivity profile Eq.~\ref{eq:magdiff_global}, and evolving from an asymmetric initial state with different thermal scale heights for the upper and lower disk hemisphere, $\epsilon_{\rm up}=0.15$ and $\epsilon_{\rm down}=0.1$. We show the evolution for time $t = 0, 100, 1000, 2000$ of the mass density (colors) and the poloidal magnetic field (lines), i.e. contours of poloidal magnetic flux $\Psi$ with flux contours $\Psi = 0.01, 0.03, 0.06, 0.1$, $0.15, 0.2, 0.26, 0.35, 0.45$, $0.55, 0.65, 0.75, 0.85$, $0.95, 1.1, 1.3, 1.5, 1.7$.
} \label{fig:bipo1_fix_case2} \end{figure*} \begin{figure*} \centering \includegraphics[width=5.8cm]{fig4a.pdf} \includegraphics[width=5.8cm]{fig4b.pdf} \includegraphics[width=5.8cm]{fig4c.pdf} \caption{Time evolution of the mass fluxes for simulation sb2. Shown is the evolution of the accretion rate and the mass ejection rates from the upper (solid) and lower (dashed) disk surfaces (all in code units). Ejection rates (middle) are measured in the control volumes with $r_1=1.5$ and $r_2 = 10.0$, while the accretion rate (top) is integrated at $r=10$. For comparison we show the asymptotic (vertical) mass fluxes through the jet, integrated from $r_1=2.0$ to $r_2 = 50.0$ at $z= -75, 75$.} \label{fig:bicase2_massfluxes} \end{figure*} \begin{figure} \centering \includegraphics[width=6cm]{fig5a.pdf} \includegraphics[width=6cm]{fig5b.pdf} \includegraphics[width=6cm]{fig5c.pdf} \includegraphics[width=6cm]{fig5d.pdf} \caption{Disk evolution for the jet-disk system sb2. Shown is the radial velocity $v_r(r,z)$ for the dynamical time steps $t= 40, 100, 1000, 2000$ (from top), indicating the asymmetric evolution of the disk and the disk wind. } \label{bicase2_innerdisk1} \end{figure} \subsection{A local prescription of magnetic diffusivity} In the present paper we investigate an asymmetric evolution of the disk-jet structure in both hemispheres. Therefore, we need to apply a new, truly bipolar setup in which the leading parameters for the diffusivity profile are not defined with respect to the equatorial plane anymore. It is clear that for an asymmetric disk evolution, a hypothetical disk mid-plane will not coincide with the equatorial plane. Therefore, in order to follow the spatial evolution of the asymmetric disk structure, it is essential to apply a {\em local} prescription of diffusivity. Among other options, one possibility is to relate the magnetic diffusivity to the local pressure or density.
One may assume that the turbulent Alfv\'enic pressure is proportional to the thermal pressure, as discussed e.g. by \citet{1997ApJ...482..712O}. In particular, \citet{2002A&A...395.1045F} suggested that if the magnetic diffusivity is interrelated to the turbulent {\em Alfv\'enic pressure}, the magnetic diffusivity profile will follow a power law $\eta_{\rm m} \sim \rho^{1/3}$. In this case the diffusivity is proportional to the local sound speed, $\eta_{\rm m} \propto c_{\rm S}$. We may generalize this to the power law \begin{equation} \eta_{\rm p}= \alpha_{\rm m ,2} f_2(r,z) \equiv \alpha_{\rm m ,2} \rho^{\Gamma}. \label{eq:magdiff_cemel} \end{equation} The profile with $\Gamma \simeq 1/3$ results in a relatively broad vertical diffusivity profile, wider than the previously used exponential profile Eq.~\ref{eq:magdiff_global}. It has been shown that this will impact the jet acceleration and collimation \citep{2002A&A...395.1045F}, maybe more than it affects the launching mechanism itself. For comparison we have also applied different power laws, such as $\Gamma = 2/3$, or even steeper profiles. The drawback of the simple profile Eq.~\ref{eq:magdiff_cemel} is that the diffusivity {\em decreases} with radius along the disk. The outer disk material is therefore strongly coupled to the magnetic field, leading to super-efficient angular momentum removal, rapid accretion, and, thus, a short lifetime of the outer disk. On the other hand, in the case of MRI-driven turbulence, the MRI activity is expected to cease for large radii, where ``dead zones'' for active accretion may exist \citep{1996ApJ...457..355G, 2011MNRAS.416..361B, 2013ApJ...764...66S}. We have favored another option for the magnetic diffusivity, which allows for a disk diffusivity {\em increasing} with radius, \begin{equation} \eta_{\rm p}(r,z) = \alpha_{\rm m,3}\, f_3(r,z) \equiv \alpha_{\rm m,3}\, \tilde{H}_{\rm L}(r,z) \frac{1}{1 + \frac{\tilde{H}_{\rm L}(r,z)}{\tilde{H}_0}
}, \label{eq:magdiff_local} \end{equation} Thus, we apply a density-weighted ``local disk scale height'' $\tilde{H}_{\rm L}(r,z)$, mainly following the local sound speed in the gas flow, \begin{equation} \tilde{H}_{\rm L}(r,z) = \rho(r,z)^{\sigma_{\rho}} \sqrt{ \gamma \frac{P(r,z)}{\rho(r,z)} } \frac{1}{v_{\rm Kep}(r)} r^{\sigma_{r}}, \end{equation} and a quenching term in order to avoid numerically problematic diffusivities. We typically choose $\tilde{H}_0 = 0.5$, $\sigma_{\rho} = 3/2$, $\sigma_{r} = 3/2$, or $\tilde{H}_0 = 0.1$, $\sigma_{\rho} = 3/2$, $\sigma_{r} = 5/2$ (simulation run cb14). Essentially, both prescriptions for the magnetic diffusivity, Eqs.~\ref{eq:magdiff_cemel} and \ref{eq:magdiff_local}, allow for a smooth transition from accretion to ejection, and also allow us to follow the changes in the local disk structure. As the outflow density decreases along the streamlines, the outflow diffusivity decreases as well. For the low densities towards the asymptotic outflow, ideal MHD is approached. A physical motivation can be the following. Turbulently diffusive disk material is lifted from the disk into the corona, and is further accelerated along the outflow while the turbulent motions decay. In paper I we have estimated the scale height where the turbulence will be damped to be several disk thermal scale heights (see our discussion above citing \citealt{2010MNRAS.405...41G}). \section{Test case - launching symmetric bipolar jets} We first present jet launching simulations resulting in symmetric bipolar jets. The first example is the evolution of jets following a magnetic diffusivity prescription fixed in time and space (case sb1). This case serves as the reference simulation for this paper, and also allows for a comparison to the one-hemispheric simulations of paper I. This inflow-outflow evolution is shown in Fig.~\ref{fig:bipo1_fix_case1}.
As we see, the outflow evolves perfectly symmetrically in both hemispheres for almost 3000 dynamical time steps, until the outer disk starts to deviate from symmetry due to numerical effects. However, even for these late time steps, the inner disk, which is the main jet launching area, is still highly symmetric. In particular, this can be seen in the time evolution of the mass fluxes. Figure \ref{fig:bipo1_fix_case1_mass} shows the accretion rate and the ejection rates into the two hemispheres. The ejection rates are integrated from $r=2$ to $r=20$ at $z = \pm 3\,H(r)$, while the accretion rate is integrated from $z=-3H$ to $z=3H$ at $r=10$. It is not surprising that, due to the symmetric diffusivity profile which remains fixed in time, together with symmetric initial and boundary conditions, we find a symmetric evolution of both the disk and the outflows. The symmetric bipolar jet structure we obtain serves mainly as a test case for the numerical setup, in particular for the sink boundary conditions. While the symmetric evolution is expected on physical grounds, even a small numerical flaw in the setup would quickly lead to asymmetry. In comparison to our one-hemispheric simulation in paper I (reference run case1), the mass fluxes we measure now are quite similar. As accretion rate for the bipolar simulation we measure $M_{\rm acc}(t=2000) \simeq 0.03$, about twice the value $M_{\rm acc}(t=2000) \simeq 0.015$ of the previous reference case1, as expected, since the accretion flow now feeds both hemispheres. The outflow rates for the bipolar simulation are $M_{\rm ejec}(t=2000)\simeq 0.01$ in each direction, which agrees nicely with the $M_{\rm ejec}(t=2000) \simeq 0.007$ of paper I. However, due to the smaller grid extension, the disk mass reservoir is smaller, leading to a faster decay of the disk mass (see the outer disk structure at $t=3000$). Therefore, disk accretion and mass ejection decay faster as well.
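The quoted rates can be condensed into an ejection-to-accretion efficiency of the launching region; a quick consistency check, using only the numbers given above:

```python
# Mass fluxes at t = 2000 in code units, as quoted in the text
M_acc = 0.03         # accretion rate of bipolar run sb1, measured at r = 10
M_ej_up = 0.01       # ejection rate through the upper disk surface
M_ej_down = 0.01     # ejection rate through the lower disk surface

# Two-sided ejection-to-accretion efficiency of the bipolar run
efficiency = (M_ej_up + M_ej_down) / M_acc
print(f"sb1:     2 M_ej / M_acc = {efficiency:.2f}")

# One-hemispheric reference run (case1) of paper I, for comparison
eff_paper1 = 0.007 / 0.015
print(f"paper I:   M_ej / M_acc = {eff_paper1:.2f}")
```

About two thirds of the accreted mass is thus redirected into the two outflows in run sb1, compared to about one half in the one-hemispheric reference run of paper I.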
\section{Bipolar jets from asymmetric disks} Here we present simulations in which we have disturbed the internal hemispheric disk symmetry. We have applied two options for disturbing the disk symmetry - either by prescribing a {\em global} asymmetric initial state, or by injecting a {\em localized} overpressure (an explosion at a certain time). For both alternatives we apply the symmetric magnetic diffusivity prescription Eq.~\ref{eq:magdiff_global}, constant in time. \subsection{An asymmetric disk scale height} Option I is to prescribe an initial disk structure with a global pressure asymmetry between the two disk hemispheres. In our model, we achieve this by applying a different thermal disk scale height for the initial disk in each hemisphere. In simulation sb2 we have applied $\epsilon = H/r = 0.15$ for the upper hemisphere and $\epsilon = H/r = 0.1$ for the lower hemisphere. Consequently, we have a density and a pressure jump across the equatorial plane, $\Delta P /P = 0.2$. The disk evolves asymmetrically right from the beginning, developing a warped structure\footnote{Note, however, the axisymmetric setup.} along the midplane (Fig.~\ref{fig:bipo1_fix_case2}). A series of warps is visible along the disk, with warp amplitudes of a few local disk scale heights. After about 1000 dynamical time steps the warp amplitudes start to decrease - first along the inner disk, while the outer disk remains in a warped state. The disk asymmetry is reflected in the jet evolution. Along with the initial asymmetric disk evolution, the jets launched from the inner disk are asymmetric as well. This is clearly visible in the poloidal magnetic field structure (Fig.~\ref{fig:bipo1_fix_case2}), but also in the mass fluxes we measure. We also note a different time scale in the jet propagation.
The upper jet reaches the grid boundary earlier than the lower jet, which is delayed by about 10\% (note that this obviously depends on the grid size, and happens rather early, at $t < 50$). Figure \ref{fig:bicase2_massfluxes} shows the time evolution of the mass fluxes. Comparing both jet fluxes we find that at early stages, $t<700$, the lower outflow carries about 80\% of the mass load of the upper outflow. From $t\simeq 700-1500$ an asymmetric inflow-outflow system is established. The differences in mass flux are now about 10\%. After $t\simeq 1500$, the variation in the outflow rates decreases, and the inner disk has established a symmetric structure. The same behavior is also visible in the velocity distribution. Figure \ref{bicase2_innerdisk1} shows the complex velocity field of the inner disk. Accretion starts first in the innermost disk regions. The disk asymmetry is well reflected in the $v_r$-distribution. However, after $t=1000$ the system evolves into a symmetric geometry. We also see the highest accretion velocities in the upper disk layers. One may denote this as {\em layered accretion}; however, due to the higher density in the lower disk layers, the radial mass flux is almost evenly distributed over the disk height. We believe that the disk evolution into a symmetric state has two reasons: firstly, the restoring force of the symmetric gravitational potential, and secondly, the symmetric and time-independent prescription of the magnetic diffusivity. The latter is interesting since it is a second-order effect only, as the magnetic diffusivity does not provide a force term in the MHD equations which could directly re-configure the disk structure. The diffusion time scale $\tau_{\eta} = \Delta z^2 /\eta \simeq H^2/\eta$ is about 100 dynamical time scales for $\eta \simeq 0.01$, $H \simeq 1$ at radius $r =10$. So far we have concentrated on the inflow-outflow dynamics close to the jet launching region.
We now consider the evolution further downstream in the outflows. This is interesting, as it is this part of the outflow which is in principle accessible to observations. Figure \ref{fig:bicase2_massfluxes} (bottom) shows the mass fluxes of jet and counter-jet far from the source, integrated from $r = 2$ to $r = 50$ at $z = 75$ and $z = -75$, respectively. We find that the mass flux asymmetry of about 5\% at the launching region propagates to the asymptotic region, where we find a similar mass flux difference. The time lag of about $\Delta t = 300$ (from $t \simeq 900$ to $t \simeq 1200$) between the launching region and the asymptotic domain can be explained by the propagation time of the accelerating outflow, with velocities $v_z \simeq 0.2-0.8$, over a distance $\Delta z = 75$. The maximum jet velocity is achieved along a narrow cone between $r=10$ and $r=20$ (at $z=100$). The outflow velocity of the lower jet is about $1.2$, while for the upper jet it is $1.1$ times the Keplerian speed at the inner disk radius. The bulk of the mass flux of the outflow is, however, located between $r=15$ and $r=30$; it is a factor of 4 higher than that of the fast flow and moves with a speed $v_z \simeq 0.4$. \begin{figure} \centering \includegraphics[width=6cm]{fig6a.pdf} \includegraphics[width=6cm]{fig6b.pdf} \includegraphics[width=6cm]{fig6c.pdf} \includegraphics[width=6cm]{fig6d.pdf} \includegraphics[width=6cm]{fig6e.pdf} \caption{Time evolution of the inner disk accretion for simulation sb3, applying the fixed-in-time diffusivity distribution Eq.~\ref{eq:magdiff_global}. An overpressure is added at $t=400$ in the upper disk hemisphere at radius $r = 12$, lasting for $\Delta t = 20$, i.e. a few inner disk rotations. Shown is the evolution of the mass density (color) for the dynamical times $t = 400, 404, 420, 600, 2800$.
} \label{fig:bicase3-v1} \end{figure} \begin{figure*} \centering \includegraphics[width=3.5cm]{fig7a.pdf} \includegraphics[width=3.5cm]{fig7b.pdf} \includegraphics[width=3.5cm]{fig7c.pdf} \includegraphics[width=3.5cm]{fig7d.pdf} \includegraphics[width=3.5cm]{fig7e.pdf} \caption{Time evolution of the bipolar jet-disk structure for simulation sb3 with the diffusivity distribution fixed in time, Eq.~\ref{eq:magdiff_global}, but with a localized overpressure added at $t=400$ for $\Delta t = 20$. See Fig.~\ref{fig:bicase3-v1} for a higher-resolution view of the inner disk area. We show the evolution for times $t = 0, 100, 1000, 2000, 3000$ of the mass density (color) and the poloidal magnetic field (lines), i.e. contours of the magnetic flux, with flux levels $\Psi = 0.01, 0.03, 0.06, 0.1, 0.15, 0.2$, $0.26, 0.35, 0.45, 0.55$, $0.65, 0.75, 0.85, 0.95$, $1.1, 1.3, 1.5, 1.7$. } \label{fig:bicase3-rho-mag} \end{figure*} \begin{figure*} \centering \includegraphics[width=5.9cm]{fig8a.pdf} \includegraphics[width=5.9cm]{fig8b.pdf} \includegraphics[width=5.9cm]{fig8c.pdf} \caption{Time evolution of the mass fluxes for simulation sb3. Shown is the evolution of the accretion rate and the mass ejection rates from the upper (solid) and lower (dashed) disk surfaces (all in code units). Ejection rates (middle) are measured in the control volumes with $r_1=1.5$ and $r_2 = 10.0$, while the accretion rate (top) is integrated at $r=5$. For comparison we show the asymptotic (vertical) mass fluxes through the jet, integrated from $r_1=2.0$ to $r_2 = 50.0$ at $z= -75, 75$.} \label{fig:bicase3_massfluxes} \end{figure*} \subsection{A localized disk asymmetry in one disk hemisphere} We start simulation sb3 with a symmetric initial disk structure. As for reference case sb1, the outflows evolve into a symmetric jet-counter jet structure (Fig.~\ref{fig:bicase3-rho-mag}).
However, at $t=400$, when a quasi-steady state of the inner inflow-outflow structure is reached, we disturb the symmetry of the disk structure by inserting a localized overpressure in the upper disk hemisphere, similar to an explosion (see Fig.~\ref{fig:bicase3-v1}). This injection is localized within a box of size ($\Delta r \times \Delta z) = (1.5 \times 0.4)$ located at $(r,z) = (11.25, 1.2)$, and is switched on for $\Delta t = 20$, corresponding to 0.1 of an orbital period at this radius. The injected material has on average a 20 times higher density and a 2000 times higher pressure compared to the ambient disk material. The injected material disturbs the disk symmetry as it expands across the disk. The disturbance is slowly advected into the jet launching region. However, we observe that the disk asymmetry decays faster than it is advected along the disk into the jet launching area at small disk radii. This is easy to understand: the expansion proceeds roughly at the sound speed, while the advection happens at the sub-sonic speed $v_r \simeq \epsilon v_{\rm Kep}$. Figure \ref{fig:bicase3-v1} shows the evolution of the disk accretion velocity. The expansion following the "explosion" first penetrates the whole disk in the vertical direction, before the over-density is advected slowly inwards. Thus, the symmetry of the inner disk is not really affected by the explosion in the upper hemisphere at $r=12$. The dominant outflow from the inner disk is only weakly affected. When the size of the launching area reaches the explosion site, the asymmetry has almost disappeared. Up to $t=1000$ the mass fluxes show an asymmetry on a 10\%-level. Also a slight structural asymmetry is visible (see the $6^{\rm th}$ flux surface contour in Fig.~\ref{fig:bicase3-rho-mag}), which has propagated from the disk surface to a distance $z \simeq 70$ at this time.
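The quoted injection duration can be checked against the local Keplerian orbit (a minimal sketch; code units with $GM = 1$ are an assumption here):

```python
# Check that the injection duration Delta t = 20 at r = 11.25 corresponds to
# roughly 0.1 of the local orbital period (Keplerian rotation, GM = 1 assumed).
import math

r_inj = 11.25
P_orb = 2.0 * math.pi * r_inj**1.5   # local orbital period, ~237 code units
fraction = 20.0 / P_orb
print(round(fraction, 2))            # ~0.08, i.e. about a tenth of an orbit
```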
That part of the outflow which originates from the initially asymmetric part of the disk (where the blob was injected) indeed starts out asymmetric. As the mass flux launched from larger disk radii is small compared to the jet launched from the inner part, the total jet mass fluxes into the two hemispheres differ only marginally. Figure \ref{fig:bicase3_massfluxes} shows the time evolution of the mass fluxes. The ejection rates are integrated along the disk surface between $r=1.5$ and $r=10$, thus inside the radius where the explosion happens. Still, the explosion disturbs the launching site such that we see a 5-10\% effect in the outflow mass fluxes. This effect is somewhat delayed with respect to the explosion time since the disturbance needs time to be advected inwards. The asymptotic outflow rates, integrated from $r=2$ to $r=50$ at $z=\pm 75$, show only marginal differences, however. Note that the flux surface passing through $(r,z) = (50,75)$ anchors at $r=5$ in the disk, and thus in a weakly disturbed region. In summary, although the initially symmetric disk is clearly disturbed by the asymmetric explosion, the asymmetry decays faster than it propagates to the inner jet launching site. Thus, the asymptotic, collimated jet, which originates in the inner disk area, is only marginally asymmetric, especially since the main mass flux is launched along the innermost field lines. \begin{figure*} \centering \includegraphics[width=5.5cm]{fig9a.pdf} \includegraphics[width=5.5cm]{fig9b.pdf} \includegraphics[width=5.5cm]{fig9c.pdf} \caption{Time evolution of the inner disk structure for simulation run cb14. Here the diffusivity profile is prescribed by the local Eq.~\ref{eq:magdiff_local}. Shown is the evolution of the magnetic diffusivity (color) and the poloidal magnetic field (contours of poloidal magnetic flux $\Psi(r,z)$) for the dynamical time steps $t = 50, 500, 1000$ (from left).
} \label{fig:cf10-eta} \end{figure*} \begin{figure} \centering \includegraphics[width=8.0cm]{fig10a.pdf} \includegraphics[width=8.0cm]{fig10b.pdf} \caption{Time evolution of the jet mass flux rates for simulation cb14. Shown are the launching mass flux measured at 3 disk scale heights (top) and the asymptotic fluxes (bottom), integrated at $z= \pm 50$ from $r= 2$ to $r=40$. } \label{fig:cf10-rates} \end{figure} \section{A local magnetic diffusivity model and bipolar jet launching} In order to allow the disk and jet to follow a truly asymmetric evolution, without the restoring effects of a symmetric, global magnetic diffusivity distribution, we have applied a {\em local} description of diffusivity which follows the local evolution of disk and outflow (see Tab.~\ref{tbl:bicases}). The results of this section are somewhat preliminary, as a physically self-consistent parametrisation for a local magnetic (turbulent) diffusivity is not available. In order to investigate the main features of the local approach, we have applied two diffusivity distributions. One follows Eq.~\ref{eq:magdiff_cemel}, the other one refers to Eq.~\ref{eq:magdiff_local}. The main difficulty one faces with a local description is a feedback mechanism such that low densities lead to a lower diffusivity, hence a stronger matter-field coupling, hence more efficient angular momentum removal and faster accretion, which in turn leads to even lower densities. Moreover, the local prescription Eq.~\ref{eq:magdiff_cemel} gives a diffusivity profile decreasing with radius, contrary to the usual choice in the literature, Eq.~\ref{eq:magdiff_global}, which is also realized in Eq.~\ref{eq:magdiff_local}. We find that the feedback mechanism is most efficient in simulations with a strong coupling between diffusivity and density, such as a diffusivity profile Eq.~\ref{eq:magdiff_cemel}. In extreme cases we see that the feedback mechanism may lead to a temporary gap opening over the innermost disk radii.
As a result, the accretion in this area is strongly episodic, and, subsequently, so is the mass ejection. We will present a detailed investigation of this feedback mechanism in a forthcoming paper. Applying a more sophisticated prescription of the magnetic diffusivity following Eq.~\ref{eq:magdiff_local}, the overall accretion-ejection evolution is more similar to the picture established by the simulations using a global diffusivity profile. The fundamental difference is, however, the longer-lasting and more persistent asymmetry in the disk and the outflows. Figure \ref{fig:cf10-eta} shows the time evolution of the magnetic diffusivity and the magnetic field for a simulation with a more sophisticated local setup of the magnetic diffusivity following Eq.~\ref{eq:magdiff_local}. We clearly see how the magnetic diffusivity follows the structure of the disk and the outflow. The disk "{warping}" is seen in the diffusivity distribution as well. Mass loading and matter-field coupling depend on the local disk properties (defined by Eq.~\ref{eq:magdiff_local}). Since the diffusivity profile is broader in the vertical direction, mass loading and angular momentum removal are more efficient, establishing higher accretion and outflow rates, in agreement with our results in paper I. The outflow mass fluxes develop a clearly asymmetric structure. Figure \ref{fig:cf10-rates} shows the mass fluxes integrated from $r=1.5$ to $r = 10$, along a surface $z=0.3 r$ parallel to the initial disk surface. Jet and counter jet injection mass flow rates differ substantially, by about 30\%, over at least 1000 time steps. The asymptotic jet mass fluxes, integrated from $r=2$ to $r = 40$ along surfaces $z = \pm 50$, are somewhat lower (due to the fact that some of the material injected into the outflow leaves the grid in the radial direction), and still differ on a 15\%-level. \section{Impact of the environment} This section is devoted to the question whether jet asymmetries are due to external or internal interaction.
It seems clear that the conditions in the ambient gas the jet is penetrating will certainly affect the expansion and the collimation on the large scales. Here we consider a model setup in which the bipolar outflow is launched intrinsically symmetric from a (symmetric) disk into an asymmetric star-disk corona. We do this by prescribing a different initial coronal density (and thus pressure) in the two hemispheres, simply choosing $\delta_{\rm up} = 10^{-4}$ and $\delta_{\rm down} = 10^{-3}$. Figure \ref{fig:bipo1-fix-case5} shows the time evolution of density and magnetic field for such a setup. As expected, the initial outflow is highly asymmetric, more so than the outflows disturbed by intrinsic disk asymmetries. However, the asymmetry is clearly transient. As soon as the initial coronal material is swept out of the computational domain, the accretion-ejection system returns to hemispheric symmetry. This is well visible in the mass flux evolution (Fig.~\ref{fig:bicase5_massfluxes}), which shows a drastic asymmetry during the first hundreds of time steps. Again it is interesting to see various time lags in the flow evolution. Firstly, there is the time lag between the launching time of the initial asymmetry and the arrival time of asymmetric features in the asymptotic region. Secondly, there is the time lag between the time step when the disk-outflow system has returned to symmetry and the time when the outflow symmetry has reached the asymptotic jet. Note that in these simulations the disk structure itself stays rather symmetric, in contrast to the simulations discussed above, where we see the disk evolving into a warped structure. However, we can see a back-reaction of the asymmetric outflow onto the disk structure in this approach. The two asymmetric jets drive a different electric current system in each hemisphere. Both current systems are connected within the disk; the resulting MHD forces therefore slightly distort the symmetric hydrodynamic disk structure.
The deviation from symmetry is, however, not as strong as for the previous cases in which we start with an initially asymmetric disk. \begin{figure*} \centering \includegraphics[width=3.5cm]{fig11a.pdf} \includegraphics[width=3.5cm]{fig11b.pdf} \includegraphics[width=3.5cm]{fig11c.pdf} \includegraphics[width=3.5cm]{fig11d.pdf} \includegraphics[width=3.5cm]{fig11e.pdf} \caption{Time evolution of the bipolar jet-disk structure for simulation sb4. Here the outflow is launched into hemispheres with different density contrast. Shown is the evolution of mass density (color) and the poloidal magnetic field (contours of poloidal magnetic flux $\Psi(r,z)$) for the dynamical time steps $t = 0, 100, 500, 1000, 3000$. } \label{fig:bipo1-fix-case5} \end{figure*} \begin{figure*} \centering \includegraphics[width=5.9cm]{fig12a.pdf} \includegraphics[width=5.9cm]{fig12b.pdf} \includegraphics[width=5.9cm]{fig12c.pdf} \caption{Time evolution of the mass fluxes for simulation sb4. Shown is the evolution of the accretion rate and the mass ejection rates from the upper (solid) and lower (dashed) disk surfaces (all in code units). Ejection rates (middle) are measured in the control volumes with $r_1=1.5$ and $r_2 = 10.0$, while the accretion rate (top) is integrated at $r=5$. For comparison we show the asymptotic (vertical) mass fluxes through the jet, integrated from $r_1 = 1.5$ to $r_2 = 40.0$ at $z= -80, 80$.} \label{fig:bicase5_massfluxes} \end{figure*} \section{Resolution study} Finally, we briefly present example results of our resolution study. Figure \ref{fig:resolution} shows the inner part of the disk at the dynamical time step $t=130$, when the inner disk evolution is rather violent. We compare simulation cb14, applying our standard resolution of $\Delta r = 0.0417 $, $\Delta z = 0.0333$ in the disk, with simulation cb16, applying a 3 times higher resolution, $\Delta r = 0.0165 $, $\Delta z = 0.00824$.
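The grid spacings above directly set the admissible time step for the diffusive terms, $\Delta t_{\eta} \leq (\Delta x)^2/\eta$; a quick estimate (with $\eta = 0.01$ as an assumed, illustrative diffusivity value):

```python
# Diffusive time-step limit dt <= dx^2 / eta for the two vertical grid
# spacings of the resolution study (eta = 0.01 is an assumed typical value).
def dt_diffusive(dx, eta=0.01):
    return dx**2 / eta

dt_low = dt_diffusive(0.0333)      # standard resolution
dt_high = dt_diffusive(0.00824)    # high-resolution run
print(round(dt_low / dt_high, 1))  # ~16: far smaller steps at high resolution
```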
Although the higher-resolution run shows somewhat more sub-structure, such as internal shocks or a more peaked diffusivity distribution for $r < 3$, the main features of the disk dynamics are just the same. In particular the disk height is similar, as well as the structure and opening angle of the magnetic flux surfaces. Also the radially structured features of the outflow close to the disk are very similar in both hemispheres. The overall mass fluxes measured in the simulations are similar as well; however, the jet-counter jet asymmetry is somewhat more pronounced in the high-resolution run. We note that high-resolution simulations are particularly difficult to manage in diffusive MHD since the diffusive time stepping condition $\Delta t_{\eta} \leq (\Delta x)^2 / \eta$ must be satisfied. \begin{figure} \centering \includegraphics[width=9.0cm]{fig13a.pdf} \includegraphics[width=8.0cm]{fig13b.pdf} \caption{Resolution study for our setup. Subset of the innermost disk-jet area, which is most critical to resolution issues. Shown is the magnetic diffusivity distribution following Eq.~\ref{eq:magdiff_local} for simulation run cb14 (low resolution, top) compared with run cb16 (high resolution, bottom). } \label{fig:resolution} \end{figure} \section{Conclusions} We have presented results of MHD simulations investigating the launching of jets and outflows from a magnetically diffusive disk in Keplerian rotation. The time evolution of the accretion disk structure is self-consistently taken into account. The simulations are performed in axisymmetry applying the MHD code PLUTO. Based on paper I, which studied how magnetic diffusivity and magnetization affect the disk and outflow properties, such as the mass and angular momentum fluxes, jet collimation, or jet radius, the present paper investigates the hemispheric symmetry of outflows. We have set up a numerical scheme that can treat the outflows from an equatorial disk in both bipolar directions.
The setup has been carefully checked against numerical artifacts triggering asymmetry in the disk-outflow evolution. We disturb the disk-outflow bipolar symmetry applying several approaches. Bipolar symmetry is re-achieved in the long term by gravitational forces, in particular the component vertical to the disk, and (in some cases) by a symmetric magnetic diffusivity prescription. In particular, we have obtained the following results. (1) A test case with a symmetric setup gave a perfectly symmetric bipolar evolution of disk and outflow for several thousand rotations, until numerical effects from the outer disk boundary condition start to disturb the symmetry of the outermost disk. The measured mass fluxes compare well with the one-hemispheric simulations in paper I. (2) We then applied various options to disturb the outflow bipolar symmetry. First we prescribed an initially asymmetric disk, applying a different pressure scale height in the two disk hemispheres, $\epsilon_{\rm up} = 0.1$, $\epsilon_{\rm down} = 0.15$. In general the disk structure evolves into a warped configuration, with warp amplitudes of a few initial disk scale heights. Electric currents are driven across the equatorial plane. The mass fluxes of the resulting outflows differ by about 10\% over less than 1000 time steps, until a symmetric launching is achieved again. The outflow rates at larger distance from the source differ similarly; however, the asymmetry lasts longer, simply because of the propagation time of the material launched from the disk. (3) We then started the simulation with a symmetric initial setup, but disturbed the symmetric disk structure by a localized (in time and space) explosion at the time when a symmetric outflow is already well established. As for (2), an asymmetric outflow arises after the injection, however with a time delay due to the propagation time of the asymmetric injection along the disk towards the main jet launching area.
In the long term a symmetric outflow is re-established. (4) We then investigated the disk-jet evolution applying a {\em local} prescription of magnetic diffusivity $\eta = \eta(r,z,t)$ following the local disk evolution. We find that (as discussed in paper I) the distribution of magnetic diffusivity substantially affects the disk evolution (and subsequently the outflow evolution), as it governs the coupling between matter and field, and thus the angular momentum evolution, the disk accretion, the magneto-centrifugal acceleration, and also the mass loading of the outflow. We first applied a simple power-law diffusivity distribution as motivated by earlier papers. We observed a strong feedback such that for low densities an almost ideal MHD situation leads to super-efficient accretion with weak outflows and strongly decaying disk mass. These features deserve further attention, which will be the focus of a forthcoming paper. (5) For a more sophisticated magnetic diffusivity distribution following the density-weighted {\em local} sound speed, and an increasing diffusivity with radius, the accretion evolution is more persistent. We were able to follow the accretion-ejection process over many thousands of dynamical time steps. The most interesting result is that the bipolar asymmetry of jet and counter jet is {\em long-lasting}. We find that in this case the warped structure of the disk survives many dynamical time scales also in the inner disk. We interpret this as due to the lack of a symmetric diffusivity distribution. The same restoring forces of gravity are present as in the case of a symmetric diffusivity profile. However, the matter is more directly coupled to the distorted magnetic field, which in principle opposes the return to symmetry. In the end we observe a persistent difference in the jet-counter jet mass fluxes, up to 30\% and lasting longer than the simulation run time.
(6) In order to compare internal and external effects, we also investigated the launch of a jet outflow from a symmetric disk into an asymmetric ambient gas distribution. As expected, the initial outflow is strongly asymmetric, with 20\% different mass fluxes for jet and counter jet; however, as soon as the outflow has penetrated the asymmetric corona (i.e. when the outflow has left the grid and the initial coronal material has been swept out of the domain), the outflow returns to symmetry (after about 2000 dynamical time steps). In comparison to the simulations discussed above, the initial disk structure is symmetric and does not evolve into a warped structure. \acknowledgements We thank Andrea Mignone and the PLUTO team for the possibility to use their code. S.S. acknowledges the hospitality of the Max Planck Institute for Astronomy. The simulations were performed on the THEO cluster of the Max Planck Institute for Astronomy. This work was financed by the SFB 881, subproject B4, of the German science foundation DFG, and partly by a scholarship of the Ministry of Science, Research, and Technology of Iran. \bibliographystyle{apj}
\section{Introduction} With the rapid progress of quantum technologies, the design of efficient protocols to control quantum systems and of numerical methods to describe them has quickly moved to the forefront of current research. To achieve a better performance, a crucial element is the ability to perform adiabatic transformations, i.e. transformations between states that are adiabatically connected \cite{kolodrubetz_geometry_2017}. For example, quantum annealing and adiabatic quantum computation are based on an adiabatic process transforming a simple initial ground state into a final non-trivial eigenstate, and were shown to be a universal tool for quantum computation \cite{albash_adiabatic_2018}. Likewise, any quantum gate operation can be designed using an adiabatic protocol \cite{aharonov_adiabatic_2007,nielsen_chuang_2010}. The experimental preparation of equilibrium states in isolated or nearly-isolated systems such as cold atoms or NV centers is often achieved by adiabatic transformations of the Hamiltonian, starting from a simple initial state. In some cases, including Floquet-engineered systems \cite{bukov_universal_2015,dalessio_long-time_2014,goldman_periodically_2014}, such a procedure is not only convenient but is actually required, since these systems do not naturally thermalize by interacting with their environment. In the context of thermodynamics, adiabatic (reversible) processes are also of crucial importance. They allow one to minimize the dissipative losses associated with an increase of entropy and achieve the maximal possible efficiency of energy conversion, e.g.~in heat engines and refrigerators \cite{jarzynski_equalities_2011,campo_more_2014}. On the theoretical side, adiabatic transformations underlie many concepts including the Schrieffer-Wolff transformation \cite{schrieffer_relation_1966,bravyi_schriefferwolff_2011,wurtz_variational_2020}, and the dressing of quasiparticles by interactions underlying e.g.
Fermi liquid theory \cite{landau_statistical_1980}. Such transformations not only allow us to theoretically understand the properties of low-energy Hamiltonians, but also provide a convenient tool to greatly improve the efficiency of numerical methods, allowing one to focus on particular subspaces of interest \cite{wurtz_variational_2020}. A standard limitation of our ability to use adiabatic transformations is that they, almost by definition, have to be extremely slow. In many-body interacting systems, unless we are interested in the ground state of a gapped system, the necessary time scales grow exponentially with the system size \cite{jarzynski_geometric_1995,kolodrubetz_geometry_2017}. A similar exponential slowing down is required if we are interested in following a ground state which either crosses a first-order phase transition \cite{de_grandi_adiabatic_2010,del_campo_assisted_2012}, enters a quantum glass regime \cite{mezard_spin_1987,nishimori_statistical_2001}, or follows from an annealing protocol solving a hard computational problem~\cite{farhi_science_2001}. From the computational point of view, strict upper bounds on the rate of parameter change result in heavy numerical costs. On the experimental side, they lead to a very slow state preparation and large energy processing times. Moreover, the necessary long time scales are generally inaccessible in experimental setups. Systems cannot be perfectly isolated from their environment, leading to decoherence and noise which can destroy the state or erase the information that adiabaticity is trying to preserve. Rather recently, it was realized that this problem can be circumvented and adiabatic transformations can be sped up, in principle arbitrarily, by adding an additional term to the Hamiltonian, suppressing all dynamical/diabatic transitions. Such ideas were first introduced in 2003 by M. Demirplak and S. Rice \cite{demirplak_adiabatic_2003} and independently in 2009 by M.
Berry \cite{berry_transitionless_2009} and were subsequently termed counterdiabatic (CD) or transitionless driving. The topic of counterdiabatic driving and the related field of shortcuts to adiabaticity has recently gained tremendous attention in both experimental and theoretical literature \cite{del_campo_shortcuts_2013,guery-odelin_shortcuts_2019,torrontegui_chapter_2013,baksic_speeding_2016,claeys_floquet-engineering_2019,petiziol_accelerating_2019,petiziol_fast_2018,theis_counteracting_2018,vepsalainen_optimal_2018,zhou_floquet-engineered_2019,del_campo_focus_2019}. In counterdiabatic protocols one applies an additional term to the Hamiltonian, proportional to the generator of adiabatic transformations, the so-called adiabatic gauge potential (AGP). This extra term suppresses all diabatic (non-adiabatic) excitations/losses. The main difficulty of this approach is that the AGP is generally highly non-local. Furthermore, the AGP is not only useful in counterdiabatic driving, but also contains a wealth of information on the geometry of eigenstates and diabatic response \cite{kolodrubetz_geometry_2017} and serves as a very sensitive probe of quantum chaos~\cite{mohit_2020}. The exact AGP is local only in some special situations, including symmetry transformations or transformations of the ground state of a gapped system \cite{hastings_quasiadiabatic_2005,bravyi_short_2011,bachmann_automorphic_2012}. Fortunately, even if the exact AGP is generally out of reach, it was recently realized that in some specific instances we can find an approximate yet accurate local AGP using a variational minimization \cite{sels_minimizing_2017,kolodrubetz_geometry_2017,claeys_floquet-engineering_2019}. The resulting local AGP was shown to be highly efficient both in solving computationally difficult problems~\cite{Hartmann_2019, passarelli_2020} and performing efficient Schrieffer-Wolff transformations \cite{wurtz_variational_2020, wurtz_emergent_2020}.
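The variational construction can be illustrated on a two-level toy model (our own minimal example, not the many-body VAGP studied in this work): for $H(\lambda) = \lambda\sigma_z + \Delta\sigma_x$ and the single-operator ansatz $\mathcal{A} = \alpha\sigma_y$, minimizing the Hilbert-Schmidt norm of $G = \partial_\lambda H + i[\mathcal{A}, H]$ over $\alpha$ yields the exact AGP of this model, $\alpha^\ast = -\Delta/[2(\lambda^2 + \Delta^2)]$ (up to sign conventions):

```python
# Variational AGP for a two-level system: minimize Tr[G^2] with
# G = dH/dlambda + i*[A, H] over the single-operator ansatz A = alpha*sigma_y.
# A brute-force scan recovers the analytic minimizer
# alpha* = -Delta / (2*(lambda^2 + Delta^2)).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def agp_norm2(alpha, lmbd, delta):
    H = lmbd * sz + delta * sx
    dH = sz                          # derivative of H with respect to lambda
    A = alpha * sy
    G = dH + 1j * (A @ H - H @ A)    # deviation from exact transport
    return np.real(np.trace(G @ G))

lmbd, delta = 0.3, 1.0
alphas = np.linspace(-1.0, 1.0, 20001)
best = alphas[np.argmin([agp_norm2(a, lmbd, delta) for a in alphas])]
exact = -delta / (2.0 * (lmbd**2 + delta**2))
```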
Still, several general and unanswered questions remain: \emph{(i) When do such local approximations apply? (ii) Which are the optimal protocols for local adiabatic evolution? (iii) Can we learn which states are most heavily affected by diabatic effects from these local approximations and is it possible to identify the states for which dissipation is minimal?} In this work, we first focus on finding an optimal path in the space of the system's parameters to design local protocols for adiabatic state preparation. Very often, physical systems are controlled by multiple parameters, e.g. pressure, temperature, chemical potential, external electric and magnetic fields in thermodynamics or single-spin controls and two-spin interactions in quantum control. While the order in which these parameters are changed will not matter if everything happens perfectly adiabatically, the diabatic effects can vary drastically depending on how these parameters are tuned. It is then natural to ask for the optimal path in the space of parameters, minimizing diabatic effects. This will be the focus of this work. For concreteness, we will consider protocols satisfying the time-dependent Schr\"odinger equation (we set $\hbar=1$ throughout the text), \begin{equation} i\frac{\partial}{\partial t} \left| \psi(t) \right\rangle = H(\vec \lambda(t))\left| \psi(t) \right\rangle, \label{time dep eq} \end{equation} where the Hamiltonian depends on a set of time-dependent control parameters $\vec{\lambda}(t)$ and we initialize the system at $t=0$ in a stationary eigenstate of $H(\vec \lambda(0))$ (all our results immediately extend to mixed initial states). The question is then how to vary $\vec{\lambda}(t)$ such that the state remains close to an instantaneous eigenstate of $H(\vec{\lambda}(t))$. To answer this question, we analyze the adiabatic landscape of a fairly generic non-integrable 1D Ising model characterized by two independent couplings (cf. Eq.~\eqref{TFI}).
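A minimal numerical sketch of the evolution under Eq.~\eqref{time dep eq} (illustrative only, not the method used for the results below): the state is propagated by the exact exponential of the instantaneous Hamiltonian over small time steps, with $\hbar = 1$.

```python
# Step-wise integration of the time-dependent Schrodinger equation (hbar = 1):
# |psi(t+dt)> = exp(-i H(lambda(t)) dt) |psi(t)>, with the matrix exponential
# built from the eigendecomposition of the Hermitian instantaneous Hamiltonian.
import numpy as np

def propagator(H, dt):
    """exp(-i H dt) for a Hermitian matrix H."""
    E, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * E * dt)) @ V.conj().T

def evolve(psi0, H_of_lmbd, lmbd_of_t, t_final, dt=1e-2):
    psi = np.asarray(psi0, dtype=complex).copy()
    for n in range(int(round(t_final / dt))):
        psi = propagator(H_of_lmbd(lmbd_of_t(n * dt)), dt) @ psi
    return psi

# usage: a spin-1/2 in a slowly rotating field follows its ground state
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = lambda lmbd: np.cos(lmbd) * sz + np.sin(lmbd) * sx
psi = evolve(np.array([0.0, 1.0]), H, lambda t: 0.01 * t, t_final=10.0)
```

For a slow rotation rate the final state retains a large overlap with the instantaneous ground state, as expected from the adiabatic theorem.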
Specifically, we show that the variational adiabatic gauge potential (VAGP), which gives the best local approximation to the exact AGP (see Eqs.~\eqref{eq:def_eig} and \eqref{eq:def_AGP}), forms a two-dimensional vector space, and the directions where the norm of the VAGP is minimal define the optimal paths minimizing diabatic effects. We mainly focus on infinite temperature states, where the equilibrium properties of the system are completely featureless. Nevertheless, the problem of adiabatic continuation remains well defined and highly nontrivial. We find that the evolution along the optimal direction is efficient; that is, eigenstates which are drawn from the middle of the spectrum remain close to the instantaneous eigenstates, maintaining small energy variance. As we will show below (see also Refs.~\cite{claeys_floquet-engineering_2019,mohit_2020}) the AGP can be expressed through the long-time limit of non-equal time correlation functions of the operators conjugate to the coupling. Therefore, they cannot be analyzed by the methods of equilibrium statistical mechanics. Our findings thus imply that temperature plays a much smaller role in adiabatic transformations than in equilibrium settings. Let us now introduce the Hamiltonian that we will analyze in this work, describing the quantum Ising model in the presence of a longitudinal and transverse field, as \begin{align} H = J \sum_{i} \sigma_i^z\sigma_{i+1}^z + h \sum_{i} \sigma_i^z + g \sum_i \sigma_i^x, \end{align} and we introduce a shorthand notation that is convenient for translationally-invariant systems \begin{align} H = J {Z}{Z} + h {Z} + g {X} \label{TFI}, \end{align} with \[ ZZ\equiv \sum_{i=1}^L\sigma^z_i\sigma^z_{i+1},\quad Z \equiv \sum_{i=1}^L\sigma^z_i, \quad X\equiv \sum_{i=1}^L\sigma^x_i, \] and so on. Fixing the coupling in front of the Ising interaction $ZZ$ to be unity, $J=1$, $h$ and $g$ will be taken as control parameters throughout this paper. 
The main results of our paper are summarized in Fig.~\ref{Flow Diagram sphere}: each point represents a choice of couplings defining a Hamiltonian, and the lines show the optimal adiabatic directions presented as a flow diagram. As will be discussed later, this diagram has a very rich structure and is in many respects similar to the standard equilibrium phase diagrams (except that, as already pointed out, it corresponds to an infinite temperature). Let us now summarize the most essential findings reflected in this figure, which will be explained in detail in the paper. \begin{figure}[ht!] \includegraphics[width=1.\columnwidth]{Figure1} \caption{Flow diagram indicating the optimal path for quantum control using a 3-body (purple) and 5-body ansatz (red) for a two-dimensional parameter space $(h,g)$. The horizontal and vertical axes are $h$ and $g$, respectively, with the two poles of the sphere given by $(h,g)=(0,0)$ [(a)] and $(\infty,\infty)$ [(d)]. Source flows can be observed at $(h,g)=(0,0)$ [(a)] and $(2,0)$ [(c)], where the optimal direction is approximately the radial one. The norm of the variational gauge potential is highly anisotropic: near the source flows the norm is small and nearly system-size independent along the optimal directions, while increasing drastically in the orthogonal direction and diverging exactly at the points (a) and (c). For the 5-body ansatz an additional singular point appears at $(h,g)=(1,0)$ [(b)], strongly disrupting the optimal directions in its vicinity.} \label{Flow Diagram sphere} \end{figure} \begin{itemize} \item Along the $h$-axis the flow diagram contains singularities, corresponding to Hamiltonians with exponentially-large degeneracies in the energy spectrum, which we call macroscopic degeneracies. These singularities serve as sources/sinks of adiabatic flows and play a similar role to critical points in equilibrium phase diagrams. 
\item Close to these singularities, the VAGP becomes infinitely anisotropic, with highly-anisotropic regions extending far away from the singularities. This high anisotropy implies that optimal directions, along which local adiabatic transformations are highly efficient, remain well defined. Such optimal directions define paths with minimal dissipation and maximum fidelity for state preparation. \item Near the singular points, there are special many-body ``dark'' states annihilated by the diverging part of the VAGP. These states exist throughout the entire energy spectrum; similar to the anisotropic regions, these states are highly robust and extend deep into the ergodic regimes, bearing many parallels with the recently-discovered quantum scars and the eigenstates of constrained models \cite{bernien_probing_2017,turner_weak_2018,turner_quantum_2018,moudgalya_exact_2018,sugiura_many-body_2019,khemani_signatures_2019,james_nonthermal_2019,ho_periodic_2019,choi_emergent_2019}. While these states form an exponentially-small fraction of the total Hilbert space, their total number can still be exponentially large; as they are immune to the usual dissipation, they can be efficiently prepared both numerically and experimentally. \item The optimal adiabatic directions allow us to define adiabatic flows similar to the renormalization group flows, as shown in the figure, and these flows in turn define adiabatically-connected families of Hamiltonians. The norm of the exact AGP is equivalent to the Fubini-Study metric defining the distance between eigenstates of adiabatically-connected Hamiltonians \cite{kolodrubetz_geometry_2017}. Therefore, these flows can be interpreted as lines approximately minimizing the local distance between eigenstates (or more accurately between energy shells) of different Hamiltonians. Along these flows, both states and operators can be dressed to a very good accuracy under the unitary transformations generated by the local VAGP.
In particular, such directions are characterized by the existence of nearly-conserved operators, which are locally-dressed operators conjugate to the coupling along these directions. \item Near the singular points, the VAGP diverges in all directions except for the optimal one. However, the divergent part of the VAGP is a well-defined local operator, implying that a local dressing can be used to efficiently perform adiabatic rotations near these singularities. Combined with the fact that all adiabatic flows terminate at one of these singularities, we arrive at the interesting conclusion that any optimal adiabatic path between two generic points goes through one of these singularities. In other words, the system first has to be brought to the singular point, then a local rotation needs to be performed, before going to the target point along a different flow line. Importantly, such a path can always be found locally by following the optimal direction of the adiabatic flow. \item The optimal directions generally depend on the support/size of the variational ansatz (see the top and bottom halves in Fig.~\ref{Flow Diagram sphere}), i.e. the support of the operator generating approximate adiabatic transformations. New singularities appear in the higher-order variational ansatz with an increased local support, reflecting higher-order divergences in the perturbative expansion of the AGP. These singularities arise from the degeneracies associated with higher-order interactions and appear at rational couplings, bearing many similarities to the divergences appearing in both KAM theory~\cite{Wayne_notes} and locator expansions~\cite{scardicchio_2017}. The emergence of higher-order singularities indicates that it is not possible to improve the local dressing, either by adding additional local terms to the CD protocol or by slowing down the ramping rate in the absence of CD driving, without abruptly altering the path near these new singularities.
\item The adiabatic flow diagram remains well-defined even at infinite temperature, where no structure exists in the equilibrium state according to statistical mechanics. Interestingly, many of its features persist at all temperatures, all the way down to the ground state at zero temperature. \end{itemize} We confirm these general findings with numerical simulations for the non-integrable 1D Ising model described by the Hamiltonian~\eqref{TFI}. Our results can have a broad range of applications in various problems, beyond simply finding optimal paths for annealing or state preparation. In particular, they can be used to find efficient local conservation laws and corresponding ``most-integrable'' directions, to find the nearest integrable (simple) points that are locally connected to a Hamiltonian of interest, to define the most efficient ways of obtaining effective low-energy theories starting from a noninteracting model, and so on. This paper is organized as follows: In Sec.~\ref{sec:VAGP}, we introduce the VAGP and define the optimal adiabatic directions. Applications to approximate CD driving and slowest operators are also explained there. Sec.~\ref{sec:flow diagram} is the highlight of this paper, where we obtain the flow diagram that defines the optimal directions at each point of the coupling space. We demonstrate that both for conventional adiabatic driving and for the approximate CD protocols, state preparation along the optimal paths shows a much better performance than along the orthogonal directions. We explain that the flows terminate/start at special sources/sinks, where the VAGP develops divergences in the orthogonal directions, becoming infinitely anisotropic, and show how these singularities arise from the perturbative expansion of the exact AGP. We then explain the emergence of special dark states unaffected by the singular part of the VAGP.
In Sec.~\ref{sec:bigger ansatz}, we study how the VAGP depends on the size of the variational ansatz and explain the emergence of new singularities near rational values of $h$. We then use the VAGP to construct approximate local conserved operators and analyze their lifetimes in Sec.~\ref{sec: conserved operator}. Details of the perturbative expansion are given in Sec.~\ref{sec:perturbative} and Sec.~\ref{sec:conclusion} is reserved for conclusions. \section{Variational Adiabatic Gauge Potential}\label{sec:VAGP} In this section we will give a brief introduction to the concept of the (variational) adiabatic gauge potential, emphasizing its structure as a vector in a system with multiple controls (tunable parameters). Much of this discussion can be found in earlier papers \cite{sels_minimizing_2017,kolodrubetz_geometry_2017,claeys_floquet-engineering_2019}, but is included here in order to be self-contained and to make an explicit connection of the VAGP with slow operators \cite{kim_slowest_2015,michailidis_slow_2018}, operator spreading \cite{nahum_operator_2018,khemani_operator_2018,von_keyserlingk_operator_2018,gopalakrishnan_hydrodynamics_2018,swingle_unscrambling_2018,parker_universal_2019,avdoshkin2019euclidean}, and emergent conservation laws \cite{mierzejewski_approximate_2015}, which will be relevant for the presented flow diagram. \subsection{Theoretical background} Let us consider a family of Hamiltonians $H(\vec \lambda)$, where $\vec \lambda$ specifies the space of available couplings or controls. Any protocol corresponds to a time-dependent choice of $\vec{\lambda}(t)$, with an adiabatic protocol corresponding to a vanishing time-derivative $|\dot{\vec{\lambda}}(t)|$.
The effects of time-dependent couplings are most clearly illustrated in the instantaneous (co-moving) eigenstates of the Hamiltonian $|n(\vec \lambda)\rangle$, satisfying \begin{equation}\label{eq:def_eig} H(\vec \lambda)|n(\vec \lambda)\rangle = \epsilon_n(\vec \lambda) |n(\vec \lambda)\rangle. \end{equation} Any change in the control parameters corresponds to a change in the eigenstates, and one can formally define the adiabatic gauge potential (AGP) as the Hermitian operator $\vec{\mathcal{A}}(\vec \lambda)$ generating these basis changes \cite{kolodrubetz_geometry_2017}: \begin{equation}\label{eq:def_AGP} i \partial_{j} |n(\vec \lambda)\rangle=\mathcal{A}_j (\vec \lambda) |n(\vec \lambda)\rangle, \end{equation} in which $\partial_{j}$ is the partial derivative w.r.t. $\lambda_j$. Note that, since eigenstates are only defined up to a phase (or more general rotations in the presence of degeneracies), the AGP is not uniquely defined and supports a gauge freedom. We will be interested in the time evolution \eqref{time dep eq} of an initial pure state $|\psi(t=0)\rangle$ governed by a time-dependent Hamiltonian $H(\vec{\lambda}(t))$, where the only explicit time dependence is through the control parameters~\footnote{Our discussion equally applies to the evolution of mixed states}. Expanding this state in the co-moving basis \begin{equation} |\psi(t)\rangle=\sum_n a_n(t) |n(\vec \lambda(t))\rangle, \end{equation} it is easy to check that the time evolution in this new basis is governed by the moving Hamiltonian \begin{align}\label{eq:mfHam} H_m(t)& = H(\vec \lambda(t))-\sum_j \dot \lambda_j \mathcal{A}_j (\vec \lambda (t)), \nonumber\\ & = H(\vec{\lambda}(t))-\dot{\vec{\lambda}}(t) \cdot \vec{\mathcal{A}}(\vec{\lambda}(t)).
\end{align} Specifically, \ \begin{align} \label{comoving} & i \dot{a}_n(t) =\sum_{l} H_m^{n l}(t)\, a_l (t),\\ &H_m^{nl}(t)= \langle n(\vec{\lambda}(t))|H_m(t)|l(\vec{\lambda}(t))\rangle, \nonumber \end{align} which takes the form of a regular matrix representation of the Schr\"odinger equation, but with time-dependent basis states, which are accounted for by the second term in Eq.~\eqref{eq:mfHam}. In the limit $\dot{\vec{\lambda}} \to 0$ this additional term vanishes such that there are no transitions between instantaneous eigenstates of $H(\vec{\lambda})$. At non-vanishing $|\dot{\vec{\lambda}}|$ the extra term in the moving Hamiltonian, proportional to the AGP, cannot be neglected. Since $H(\vec{\lambda})$ is by construction diagonal in the co-moving frame, all diabatic excitations/losses are generated by the off-diagonal elements of the AGP. Following Ref.~\cite{kolodrubetz_geometry_2017}, Eq.~\eqref{eq:def_AGP} can be recast as an operator equation \begin{align} \left[H, G_j(\vec{\mathcal{A}}) \right] = 0, \label{equation exact gp} \end{align} in which \begin{align} G_j(\vec{\mathcal{A}}) \equiv \partial_{j} H + {i }[\mathcal{A}_j,H]. \label{eq:G_def} \end{align} The matrix $G_j(\vec{\mathcal{A}})$ is diagonal in the eigenbasis of $H$ and its diagonal matrix elements are given by $\partial_j \epsilon_n(\vec \lambda)$, the generalized forces conjugate to $\lambda_j$. In other words, one can view any infinitesimal deformation of the Hamiltonian along the $\lambda_j$ direction $\partial_j H$ as consisting of a spectrum change encoded in $G_j$ and an eigenbasis rotation encoded in $\mathcal{A}_j$. Eq.~\eqref{equation exact gp} remains well-defined in both the classical and thermodynamic limits. 
However, with the exception of symmetry transformations/integrable systems, the solutions to this equation are generally unstable to infinitesimal perturbations and might not even exist in either of these limits~\cite{jarzynski_geometric_1995,kolodrubetz_geometry_2017, mohit_2020}. Therefore, finding approximate local gauge potentials is essential to circumvent this problem. One goal of this paper is to convey that, even though the exact AGP might be ill-defined, such local approximations can be well-defined and meaningful. A particularly powerful approach to finding approximate solutions is the variational method. It is based on the observation that Eq.~\eqref{equation exact gp} can be interpreted as the minimization condition for the auxiliary action $S(\vec{\mathcal A})$ \cite{sels_minimizing_2017} \begin{align} {\delta S \over \delta \mathcal{A}_j} = 0, \quad \textrm{with}\quad S\equiv \sum_j {\rm Tr}[G_j(\vec{\mathcal A})^\dagger G_j(\vec{\mathcal A})]. \label{equation variational} \end{align} Approximate solutions of Eq.~\eqref{equation exact gp} can be found by choosing a specific subset of operators as an ansatz for the AGP and finding the minimum of the action. We call the resulting solution the (local) variational adiabatic gauge potential (VAGP). Also note that the action for the VAGP in Eq.~\eqref{equation variational} can be interpreted as the action at infinite temperature. In principle, it can be extended to finite temperatures through the introduction of a thermal state $\exp\left[-\beta H\right]$ in $S$ (see Ref.~\cite{sels_minimizing_2017}), although this strongly complicates the resulting minimization. In this paper we focus on variational manifolds consisting of all local operators with a given support (see Sec. \ref{subsec:opt_adiabatic_dir} for details). One can develop a similar expansion based on nested commutators of $\partial_{j} H$ and $H$ \cite{claeys_floquet-engineering_2019}. 
We checked that this second expansion leads to very similar conclusions. Despite being an approximate solution, as we discuss below, the local VAGP can be used to determine highly nontrivial properties of the system. Let us mention a few of them. \emph{Approximate counterdiabatic driving}. -- The notion of counterdiabatic (CD) driving immediately follows from this derivation, since the exact solution of $\vec{\mathcal{A}}$ can be used to completely suppress energy dissipation by evolving a system with the CD Hamiltonian including an additional term $\dot{\vec{\lambda}}\cdot\vec{\mathcal{A}}(\vec{\lambda})$, \begin{align} H_{\rm CD}(t) = H(\vec{\lambda}(t))+\dot{\vec{\lambda}}(t)\cdot\vec{\mathcal{A}}(\vec{\lambda}(t)). \label{Ham CDD} \end{align} Representing the evolution in the co-moving frame of $H(\vec{\lambda}(t))$, the additional counterdiabatic term cancels, such that the moving frame Hamiltonian is exactly given by $H(\vec{\lambda}(t))$, which is diagonal and hence does not lead to any excitations or dissipation. Namely, starting from any energy eigenstate $|\psi(t=0)\rangle = |n(\vec{\lambda}(0))\rangle$ the state at later times remains an instantaneous eigenstate $|n(\vec{\lambda}(t))\rangle$. In the limit of an infinitely fast rate of change $|\dot {\vec \lambda}|\to\infty$, the AGP dominates, and the resulting evolution can be seen as a pure dressing of the initial state. We will refer to a protocol corresponding to $ H(\vec{\lambda}(t))$, where no CD term is present, as the unassisted protocol. While the exact AGP generally cannot be realized in many-body systems, the use of local approximations from the variational minimization has already been shown to lead to a significant suppression of transitions \cite{sels_minimizing_2017,claeys_floquet-engineering_2019, Hartmann_2019, tamiro_2019, passarelli_2020}. As such, the availability of an accurate local VAGP can also be used to reduce dissipation and design efficient annealing protocols. 
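To make the variational principle of Eq.~\eqref{equation variational} concrete, the sketch below minimizes the quadratic action for a small illustrative basis of time-reversal-odd Pauli strings with support up to two sites. The basis choice, couplings, and helper names are our own assumptions, not those of the paper:

```python
import numpy as np

I2 = np.eye(2)
PAULI = {'1': I2,
         'x': np.array([[0, 1], [1, 0]], dtype=complex),
         'y': np.array([[0, -1j], [1j, 0]], dtype=complex),
         'z': np.array([[1, 0], [0, -1]], dtype=complex)}

def string_op(labels, L):
    """Zero-momentum sum of a Pauli string over an L-site periodic chain."""
    out = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for p in range(L):
        site = ['1'] * L
        for k, s in enumerate(labels):
            site[(p + k) % L] = s
        term = np.array([[1.0 + 0j]])
        for s in site:
            term = np.kron(term, PAULI[s])
        out += term
    return out

L = 6
H = string_op('zz', L) + 0.5 * string_op('z', L) + 0.5 * string_op('x', L)
dH = string_op('z', L)                       # deformation along h

# Hermitian, time-reversal-odd strings with support <= 2 (odd number of y's)
ansatz = [string_op(s, L) for s in ('y', 'yz', 'zy', 'yx', 'xy')]

# G = dH + i*sum_n c_n [O_n, H]; S = Tr[G^2] is quadratic in c -> linear solve
iC = [1j * (O @ H - H @ O) for O in ansatz]  # i[O_n, H], Hermitian
M = np.array([[np.trace(a @ b).real for b in iC] for a in iC])
b = np.array([np.trace(dH @ c).real for c in iC])
coef = -np.linalg.lstsq(M, b, rcond=None)[0]

G = dH + sum(c * op for c, op in zip(coef, iC))
S0, S = np.trace(dH @ dH).real, np.trace(G @ G).real
assert S < S0       # the variational AGP strictly lowers the action
```

Because the action is quadratic in the coefficients $c_n$, the minimization reduces to a single linear solve, which is what makes the method cheap even for large variational manifolds.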
\emph{Approximate state dressing}. -- Starting from an initial eigenstate of the instantaneous Hamiltonian, counterdiabatic driving can be interpreted as interpolating between two limits: $\dot{\vec{\lambda}}\to 0$ returns adiabatic state preparation, whereas $\dot{\vec{\lambda}} \to \infty$ dresses the initial state with the (approximate) gauge potential. Namely, in this limit the Schr\"odinger equation reduces to \begin{align} i \partial_t \left|\psi(t)\right\rangle = \dot{\vec{\lambda}}(t)\cdot\vec{\mathcal{A}}(\vec{\lambda}(t))\left|\psi(t)\right\rangle. \end{align} For an exact AGP, $\left|\psi(t)\right\rangle = |\psi(\vec{\lambda}(t))\rangle$ and this equation reduces to \begin{align} i \partial_{\vec{\lambda}} |\psi(\vec{\lambda})\rangle = \vec{\mathcal{A}}(\vec{\lambda})|\psi(\vec{\lambda})\rangle. \end{align} This corresponds to a (quasi-)adiabatic dressing of the initial state \cite{hastings_quasiadiabatic_2005,bachmann_automorphic_2012, wurtz_variational_2020, wurtz_emergent_2020}. The possibility of such dressing with a (quasi-)local $\mathcal{A}$ is a crucial ingredient in classifying topological phases, where all ground states within a given phase can be adiabatically connected using a local dressing. \emph{Operator spreading.} -- A formal solution to Eq.~\eqref{equation exact gp} can be found using Lehmann's representation as \begin{equation} \mathcal{A}_j=-\lim_{\epsilon \to 0^+}{1 \over 2}\int_{-\infty}^{\infty} dt\ {\rm sgn}(t)\ \mathrm e^{-\epsilon |t|}\left( {\partial_{j}} H\right)(t), \label{eq:A_Heisenberg} \end{equation} where \begin{equation} \left({\partial_{j}} H\right)(t)\equiv \mathrm e^{i H t} \left(\partial_{j} H \right) \mathrm e^{-i H t} \end{equation} is the operator conjugate to the parameter $\lambda_j$, $\partial_{j}H$, in the Heisenberg representation w.r.t. the instantaneous Hamiltonian $H$. For classical Hamiltonian systems, this representation was first derived by Jarzynski \cite{jarzynski_geometric_1995}.
As mentioned before, the exact solution is highly sensitive to the choice of $\partial_{j}H$ and the limit $\epsilon \to 0$ will generally diverge in chaotic systems. Keeping $\epsilon$ finite then corresponds to finding an approximate AGP, which will be local for a local $\partial_{j}H$ due to the finite support of $\left({\partial_{j}} H\right)(t)$ at finite times, following recent results on operator spreading (e.g.~\cite{swingle_unscrambling_2018}) and Lieb-Robinson bounds \cite{Lieb:1972aa}. This representation has also been combined with the variational principle to find an efficient variational ansatz in chaotic many-body systems \cite{claeys_floquet-engineering_2019}. \emph{Conservation laws and slowest operators.} -- \label{sec:conservation_laws} A local AGP immediately implies an additional local conservation law, since $G_j(\vec{\mathcal{A}})$ by definition commutes with the Hamiltonian. Minimizing the action then corresponds to obtaining a `slowest operator' \cite{kim_slowest_2015}, minimizing the commutator with the Hamiltonian (setting the time scale for thermalization), which then becomes an exact conserved quantity if the local VAGP becomes an exact AGP. Interestingly, if we consider the representation of the AGP through Eq.~\eqref{eq:A_Heisenberg} with finite $\epsilon$, the corresponding $G_j(\vec{\mathcal{A}})$ exactly coincides with the approximately-conserved operator obtained by the time-averaging of $\left({\partial_{j}} H\right)(t)$ introduced in Ref.~\cite{mierzejewski_approximate_2015}. In particular, using Eq.~\eqref{eq:A_Heisenberg} it is easy to check that \begin{equation} G_j(\vec{\mathcal{A}})= \overline {\left({\partial_{j}} H\right)}\equiv {\epsilon\over 2}\int_{-\infty}^{\infty} dt\ \mathrm e^{-\epsilon |t|} \left({\partial_{j}} H\right)(t), \end{equation} namely, it is the part of $\partial_{j}H$ that is conserved and does not decay with time. 
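These identities are easy to verify in a small system: in the energy eigenbasis, the $\epsilon$-regularized AGP of Eq.~\eqref{eq:A_Heisenberg} and the time average $\overline{\left({\partial_{j}} H\right)}$ take simple closed forms. The following exact-diagonalization sketch uses illustrative helper names and couplings of our own choosing:

```python
import numpy as np

PAULI = {'1': np.eye(2),
         'x': np.array([[0, 1], [1, 0]], dtype=complex),
         'y': np.array([[0, -1j], [1j, 0]], dtype=complex),
         'z': np.array([[1, 0], [0, -1]], dtype=complex)}

def string_op(labels, L):
    """Zero-momentum sum of a Pauli string over an L-site periodic chain."""
    out = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for p in range(L):
        site = ['1'] * L
        for k, s in enumerate(labels):
            site[(p + k) % L] = s
        term = np.array([[1.0 + 0j]])
        for s in site:
            term = np.kron(term, PAULI[s])
        out += term
    return out

L = 6
H = string_op('zz', L) + 0.5 * string_op('z', L) + 0.5 * string_op('x', L)
dH = string_op('z', L)

E, V = np.linalg.eigh(H)
dHe = V.conj().T @ dH @ V                      # dH in the energy eigenbasis
w = E[:, None] - E[None, :]                    # omega_{nm} = eps_n - eps_m

eps = 1e-3
A = -1j * w / (w ** 2 + eps ** 2) * dHe        # regularized AGP, Eq. (A_Heisenberg)
Gbar = eps ** 2 / (w ** 2 + eps ** 2) * dHe    # time-averaged operator

# G = dH + i[A, H] coincides with the time average ...
Hd = np.diag(E)
G = dHe + 1j * (A @ Hd - Hd @ A)
assert np.allclose(G, Gbar)

# ... and the time average is approximately conserved: ||[H, Gbar]|| = O(eps)
comm = Hd @ Gbar - Gbar @ Hd
assert np.abs(comm).max() < 10 * eps * np.abs(dHe).max()
```

As $\epsilon\to 0$ only the matrix elements between (nearly) degenerate states survive in $G$, recovering the diagonal (conserved) part of $\partial_j H$.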
\subsection{Optimal adiabatic directions} \label{subsec:opt_adiabatic_dir} From Eq.~\eqref{eq:mfHam} it can be seen that all diabatic transitions are induced by the AGP. For a time-dependent change $\dot{\vec{\lambda}} = |\dot{\lambda}|\, \vec{\mathsf n }_\lambda$, with a fixed rate of change $|\dot{\lambda}|$ along a direction set by a unit vector $\vec{\mathsf n }_\lambda$, these transitions can be expected to be maximally suppressed along directions where the norm of $\mathcal{A}_{\lambda}= \vec{\mathsf n }_\lambda \cdot \vec{\mathcal{A}}$ is minimal. In the same way that the gap between the ground state and the first excited state sets the time scale for quantum annealing, the norm of the AGP along a certain direction sets the scale for the rate of change of the control parameter $|\dot \lambda|$: for small $||\mathcal{A}_\lambda||$ the control parameter can be changed rather fast without inducing large diabatic effects, whereas for large $||\mathcal{A}_\lambda||$ even slow deformations of the Hamiltonian immediately lead to diabatic transitions. While the local VAGP is not exact, it contains information about transitions through local interactions, which are often the most damaging because they can lead to a large energy transfer. We will demonstrate below that this is indeed the case. Given a multi-dimensional space of control parameters, one can thus set the optimal direction as the direction for which the norm of the VAGP is minimal. In principle, one can define different norms, so the minimization procedure is not unique. For example, one could choose norms tailored for particular states, e.g. the ground state. In this work we will use the Frobenius (L2) trace norm, equivalent to the common infinite-temperature norm. This norm has the advantage that it can be easily calculated in large systems (including the thermodynamic limit) without any need to diagonalize the Hamiltonian.
As such, the actual minimization is particularly straightforward. Remarkably, it was observed in Refs. \cite{sels_minimizing_2017,claeys_floquet-engineering_2019, Hartmann_2019, passarelli_2020} that this infinite-temperature norm still provides excellent results even when considering e.g. only dissipation from the ground state. Colloquially, using the Frobenius norm to find the VAGP is similar to optimizing an ice cream recipe inside a very hot oven and then applying this recipe inside a freezer to efficiently prepare the ice cream. Indeed, this procedure works remarkably well in various systems. Rather than keeping the discussion maximally general, we will focus on a two-dimensional parameter space with controls set by $h$ and $g$ (see Eq.~\eqref{TFI}), such that $\vec \lambda=(h,g)$ and analyze infinitesimal deformations $(h+\delta \cos \varphi , g+\delta \sin \varphi )$ with an infinitesimal $\delta$, such that $\vec{\mathsf n }_\lambda = (\cos \varphi, \sin \varphi)$. The generalization of this methodology to more parameters is straightforward. Crucially, the action $S$ defined in Eq.~\eqref{equation variational} is quadratic in the variational parameters, such that the minimization will give rise to a set of linear equations and the VAGP in an arbitrary direction will be a linear combination of the solutions corresponding to $\delta h$ ($\varphi=0$) and to $\delta g$ ($\varphi=\pi/2$). We can write \begin{align} \mathcal{A}_{\lambda}(\varphi) \equiv \vec{\mathsf n }_\lambda \cdot \vec{\mathcal{A}}(\lambda)= \mathcal{A}_h \cos \varphi+ \mathcal{A}_g \sin \varphi, \label{AGP linear} \end{align} in which $\mathcal{A}_h$ and $\mathcal{A}_g$ minimize the actions $S_h$ and $S_g$, respectively. The Hamiltonian is set by the parameters $\vec{\lambda}$, as denoted in the subscript (where we dropped the vector notation), while the argument denotes the direction in which this Hamiltonian is varied.
Defining \begin{align} \tan 2\alpha = \frac{{\rm Tr}[\mathcal{A}_h^\dagger \mathcal{A}_g]+{\rm Tr}[\mathcal{A}_h \mathcal{A}_g^{\dagger}]}{{\rm Tr}[\mathcal{A}_h^\dagger \mathcal{A}_h]-{\rm Tr}[\mathcal{A}_g^\dagger \mathcal{A}_g]}, \end{align} it can easily be checked that the norm of the VAGP is minimal for $\varphi=\alpha \pm \pi/2$, $\alpha \in [-\pi/4,\pi/4]$, and maximal in the orthogonal directions $\varphi =\alpha$ and $\alpha+\pi$ if $||\mathcal{A}_g|| > || \mathcal{A}_h||$, while in the other case the extrema are exchanged (see also Appendix \ref{app:derive_opt_dir}). We will call these directions optimal and orthogonal respectively. In the following sections, we will analyze the geometric structure of these directions and the resulting anisotropy as a function of $(g,h)$. Note that this also highlights that the directions set by $\varphi$ and $\varphi+\pi$ are equivalent since they correspond to the same perturbation, only with a different sign (which does not influence the norm of the VAGP). For translationally-invariant spin-$1/2$ systems of size $L$ with periodic boundary conditions, like those described by the Hamiltonian~\eqref{TFI}, we define the $k$-body operator space $\mathcal{H}_k$, $k<L$, as the zero-momentum space of all operators having support of up to $k$ sites, where we will choose strings of Pauli matrices as basis operators: $\mathcal{H}_k = {\rm span}(S_k)$, with \begin{align} S_k= \{O_n| \ O_n=\sum_{p=1}^{L} \sigma_{p}^{s_1} \sigma_{p+1}^{s_2}\cdots \sigma_{p+k-1}^{s_k} \}, \end{align} where the index $n$ stands for the set $\{s_1,\dots,s_k\}$ and $\sigma^s_i$ is one of the Pauli operators $\{\sigma^x,\sigma^y,\sigma^z,1\}$ acting on the site $i$. To avoid double-counting the identity operator is excluded from the right boundary, i.e. ${s_k} \neq 1$. We will use a local variational ansatz with a fixed support: \begin{align} \mathcal{A}_\lambda(\varphi)=\sum_{O_n\in S_k } c_n(\vec{\lambda},\varphi) O_n. 
\label{variational ansatz} \end{align} We call this the $k$-body ansatz of the variational calculation, and solve Eq.~\eqref{equation variational} with the ansatz~\eqref{variational ansatz}. Since all operators $O_n$ are traceless and orthogonal, satisfying ${\rm Tr} (O_n O_m)/L=\mathcal D \delta_{nm}$, where $\mathcal D=2^L$ is the Hilbert space dimension, the minimization problem is straightforward and the solution is formally given by \begin{align} \mathcal{A}_\lambda(\varphi)=-i\, {\rm ad}_{P_kHP_k}^{-1} \left(\vec{\mathsf n }_\lambda \cdot \partial_{\vec{\lambda}} {H}\right), \label{solution variational} \end{align} where ${\rm ad}_{P_kHP_k} {\mathcal{A}} \equiv [{P}_k{H}{P}_k,\mathcal{A}]$, ${\rm ad}_{P_kHP_k}^{-1}$ is the pseudo-inverse of ${\rm ad}_{P_kHP_k}$, and ${P}_k$ is a super-operator which projects an operator onto $\mathcal{H}_k$. In the limit where this operator basis is complete, we can consider e.g. projectors on eigenstates as basis operators, which returns the formal solution \begin{equation}\label{eq:mat_el_A} \mathcal{A}_{\lambda}(\varphi) = i \sum_{m\neq n} \left| m \right\rangle \frac{ \left\langle m | \vec{\mathsf n }_\lambda \cdot \partial_{\vec{\lambda}} {H} |n \right\rangle }{\epsilon_n-\epsilon_m}\left\langle n\right|, \end{equation} which can be checked to be equivalent to Eq.~\eqref{eq:A_Heisenberg}. \section{Adiabatic flow diagram of the quantum Ising model with local VAGP} \label{sec:flow diagram} In this section we will discuss in detail the flow diagram and the emerging physical implications for a particular, but fairly generic, quantum Ising model, which we introduced earlier in Eq.~\eqref{TFI}. We will first analyze this diagram using the VAGP obtained within the lowest-order approximation, which already yields non-trivial results. Namely, we will consider a variational manifold with support up to three sites for the VAGP.
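The basis $S_k$ entering this variational manifold is straightforward to enumerate. The sketch below builds the $k=3$ strings (identity allowed everywhere except the right boundary) and checks the orthogonality relation used in the minimization; the code and the choice $L=6$ are illustrative, not from the paper:

```python
import numpy as np
from itertools import product

PAULI = {'1': np.eye(2),
         'x': np.array([[0, 1], [1, 0]], dtype=complex),
         'y': np.array([[0, -1j], [1j, 0]], dtype=complex),
         'z': np.array([[1, 0], [0, -1]], dtype=complex)}

def string_op(labels, L):
    """Zero-momentum sum of a Pauli string over an L-site periodic chain."""
    out = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for p in range(L):
        site = ['1'] * L
        for k, s in enumerate(labels):
            site[(p + k) % L] = s
        term = np.array([[1.0 + 0j]])
        for s in site:
            term = np.kron(term, PAULI[s])
        out += term
    return out

L, k = 6, 3
# strings of length k with s_k != 1; shorter strings enter padded with
# identities on the left, so each operator appears exactly once
strings = [''.join(s) for s in product('1xyz', repeat=k) if s[-1] != '1']
assert len(strings) == 3 * 4 ** (k - 1)   # 48 basis operators for k = 3

ops = [string_op(s, L) for s in strings]
gram = np.array([[np.trace(a @ b).real for b in ops] for a in ops])
assert np.allclose(gram, L * 2 ** L * np.eye(len(ops)))  # Tr(O_n O_m) = L 2^L d_nm
```

The diagonal Gram matrix is what makes the projected linear problem for the coefficients $c_n$ well conditioned.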
The motivation for this ansatz is that, as we discuss below, it reproduces the leading-order behavior and the most important singularities of the exact AGP near the strongest macroscopic degeneracy points. These singularities underlie several key properties of the adiabatic flows and allow us to reveal the origin of special dark weakly-thermalizing states similar to those found in e.g. Ref.~\cite{wurtz_emergent_2020}. In the next section, we will then show how the results of this section are affected by adding terms with a larger support into the variational manifold. Before discussing our findings, let us mention a few properties of the Ising model that will be relevant later in the paper. \begin{itemize} \item There are two integrable lines corresponding to i) $g=0$: the so-called classical Ising model with strictly local integrals of motion ($z$-magnetization for each spin) and ii) $h=0$: the transverse field Ising model, which maps to free fermions through the Jordan-Wigner transformation and which has quasi-local integrals of motion constructed from fermion bilinears \cite{calabrese_introduction_2016,essler_quench_2016}. There is an additional trivially-integrable point corresponding to $\sqrt{h^2+g^2}\to\infty$, which describes noninteracting spins. Away from these points the model is believed to be chaotic, satisfying the eigenstate thermalization hypothesis (ETH) \cite{kim_testing_2014}. \item The ground state of the Ising model undergoes a quantum phase transition from an antiferromagnet at small magnetic field to a paramagnet at large magnetic field \cite{sachdev_2011}. On the integrable lines, the critical line separating the two phases terminates at the points $(h,g) = (2,0)$ and $(0,1)$. We note that changing the sign of the $ZZ$ coupling moves this phase transition line from the ground state to the most excited state. Therefore, this sign does not affect our ``infinite temperature'' flow diagram.
\item The ``classical Ising'' line $g=0$ additionally contains macroscopic (exponential) degeneracies of the spectrum at any rational value of the longitudinal field $h$. In particular, at $h=0$ and $H=ZZ$, any configuration with the same number of domain walls has the same energy, e.g. $\left| \dots\uparrow \uparrow \downarrow \dots \right\rangle$ and $\left|\dots \uparrow \downarrow \downarrow \dots \right\rangle$. At $h=2$ and $H=ZZ+2Z$, any local spin flip from a local ``down'' to ``up'' state that creates two domain walls does not change the energy of the system, e.g. $\left|\dots \downarrow \downarrow \downarrow \dots \right\rangle$ and $\left|\dots \downarrow \uparrow \downarrow \dots \right\rangle$ are degenerate. In a similar way, at other rational points of $h$ one can always find many combinations of spin flips leaving the energy of the system invariant. Finally, the $h\to\infty$ point is also macroscopically degenerate: the energy does not change under arbitrary spin flips preserving total magnetization. \end{itemize} \subsection{Flow diagram for the 3-body variational ansatz} As mentioned in Section~\ref{sec:VAGP}, one can systematically define the adiabatic flow diagram by following the directions of the minimal norm of the VAGP. The resulting diagram with respect to the couplings $(h, g)$ as obtained within the 3-body variational ansatz for the VAGP is shown in the bottom half of Fig.~\ref{Flow Diagram sphere} as well as in Fig.~\ref{Flow Diagram}. Note that, on the one hand, the representation of the diagram on a sphere is more natural since all the Hamiltonians with large magnetic field are equivalent to each other up to trivial spin rotation and correspond to the same point in Fig.~\ref{Flow Diagram sphere}. On the other hand, the ``Cartesian'' representation shown in Fig.~\ref{Flow Diagram} is easier to visualize in the most interesting regime where neither $h$ nor $g$ are too large. 
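The macroscopic degeneracies along the classical line $g=0$ discussed above can be verified by direct enumeration of the diagonal spectrum of $H = ZZ + hZ$. The sketch below is illustrative; the system size and the generic field value are our own choices:

```python
from itertools import product
from collections import Counter

def max_degeneracy(h, L):
    """Largest degeneracy in the spectrum of H = ZZ + h*Z (the g = 0 line)."""
    counts = Counter()
    for spins in product((1, -1), repeat=L):
        zz = sum(spins[i] * spins[(i + 1) % L] for i in range(L))
        counts[round(zz + h * sum(spins), 10)] += 1
    return max(counts.values())

L = 10
d0 = max_degeneracy(0.0, L)    # resonant: energy depends only on domain walls
d2 = max_degeneracy(2.0, L)    # resonant: spin flips creating two walls are free
dgen = max_degeneracy(0.7, L)  # a field with no low-order resonance at this size

# h = 0 and h = 2 host macroscopic degeneracies absent at a generic field
assert d0 > dgen and d2 > dgen
```

At $h=0$ the largest class collects all configurations with a fixed number of domain walls (e.g. $2\binom{10}{4}=420$ states at $L=10$), while at generic $h$ the classes are further split by the magnetization, illustrating the special role of the rational points.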
One can observe that the optimal flows form radial patterns centered around singularities at $(h,g)=(0,0)$ and at $(2,0)$ (as well as near $\sqrt{h^2+g^2}\to\infty$ in the spherical representation). Interestingly, these singularities lie at the endpoints of any adiabatic flow: if we start at any generic point $(h,g)$ and follow the optimal adiabatic direction, we will end up in one of these singularities. Likewise, these singular points are good starting points for quantum state preparation in e.g. quantum annealing protocols, because any point of the control space $(h,g)$ can be reached by starting at either of these singularities. At first sight this result seems surprising: these singular points are clearly the points corresponding to large macroscopic degeneracies, where adiabatic transformations are ill-defined. Indeed, our common understanding of adiabatic transformations suggests that one should avoid situations with closing gaps between eigenstates. Thus, naively, one should generally avoid such singular points. As we will show, this reasoning only applies to the orthogonal azimuthal directions, where the norm of the VAGP becomes divergent and strong diabatic effects come into play. However, such divergences remain suppressed in the radial directions. Let us also point out that at the singular points the Hamiltonian splits into a sum of mutually commuting terms, such that its eigenstates are factorisable and thus easy to prepare. The radial flow near $h=0$ implies that the optimal deformation of the Hamiltonian is along the instantaneous magnetic field, $(\delta h, \delta g)\propto (h,g)$. Intuitively, one can understand this result using the domain wall picture: at small magnetic fields one can think about the Ising model as a weakly interacting gas of domain walls separating regions of positive and negative magnetization. The number of the domain walls is conserved by the $ZZ$-interaction. 
In this manifold of states the $Z$-magnetic field plays the role of an effective linear potential and the $X$-magnetic field plays the role of the domain-wall hopping amplitude. The two terms can be combined into an effective non-interacting Hamiltonian describing these domain walls. The radial deformation of $h$ and $g$ then amounts to a simultaneous rescaling of these two parameters of the effective Hamiltonian, which does not induce diabatic transitions between the eigenstates. Similar considerations apply to the other singularity at $(2,0)$, where the effective Hamiltonian becomes the PXP model~\cite{Turner_2018} with $h-2$ playing the role of the potential and $g$ playing the role of the magnetic field. At the third degenerate point, at infinite magnetic field, the radial deformation is trivially the most adiabatic direction, since it simply amounts to rescaling the full Hamiltonian. We emphasize that, while this intuition can generally be justified by considering low-energy effective Hamiltonians, the optimal directions remain well-defined for all eigenstates. We justify this conclusion below by analytically constructing the VAGP near these points, where the radial directions are explicitly shown to be non-singular. \begin{figure}[t] \begin{center} \includegraphics[width=1.\columnwidth]{Figure2_2} \caption{The flow diagram indicating the optimal direction at each point for the 3-body variational ansatz. Each point in this diagram corresponds to a Hamiltonian set by $(h,g)$ and the arrows denote the optimal direction for deformations $(\delta h, \delta g)$. Colors represent the norm of the VAGP along these optimal directions. 
Source flows are clearly visible at $(h,g)=(0,0)$ and $(2,0)$.} \label{Flow Diagram} \end{center} \end{figure} \subsection{State preparation along the optimal flow directions} \label{Sec better state} \begin{figure*} \includegraphics[width=0.9\textwidth]{Figure3} \caption{Energy variance of a generic initial eigenstate evolved along the optimal direction with finite $\dot{\lambda}$ [top row (a,b)] and with infinite $\dot{\lambda}$ [bottom row (c,d)] along either the optimal direction [left column (a,c)] or the orthogonal one [right column (b,d)]. Full lines show the energy variance during the protocol as function of $\lambda(t)$, where the blue lines represent the unassisted protocol ($k=0$) and the other lines represent CD driving with $k$-body VAGPs \eqref{Ham CDD}. The inset details the energy variance at the end of the protocol as a function of $k$. The end points for both protocols are given by $(h,g) = (0.5,0.5)$, with the optimal protocol starting from $(\epsilon,\epsilon)$ and the orthogonal one from $(1-\epsilon,\epsilon)$ with $\epsilon = 10^{-2}$ (see also inset of Fig.~\ref{fig:Evardotlam}). System size is $L=12$. In the unassisted protocol, the energy variance is already much smaller along the optimal directions than along the orthogonal directions. Applying local $2$-body CD driving, the energy variance drastically reduces even more along the optimal direction, while it only gradually decreases in the orthogonal direction. At infinite $\dot{\lambda}$ the state along the optimal direction can similarly be accurately approximated by a $2$-body dressing of the initial state, whereas the accuracy along the orthogonal direction only gradually increases.} \label{EneVar Optimal} \end{figure*} Before discussing the emergent features of the flow diagram in more detail, let us immediately analyze its implications for quantum state preparation. 
All calculations and presented diagrams hold at the operator level, so it is natural to first ask whether the (operator) flows for the VAGP are representative of similar flows in the context of quantum state preparation, where only a single eigenstate is relevant. Second, when such state preparation is assisted by local counterdiabatic driving using the VAGP, a follow-up question is whether the optimal directions for the VAGP are also the ones where the approximate counterdiabatic driving is maximally effective. Here, we will present numerical evidence suggesting a positive answer to both questions. Since we are not necessarily interested in the ground state and will consider excited states, a good measure for the proximity of any prepared state $| \psi \rangle$ to an eigenstate of the instantaneous Hamiltonian is the energy variance $\delta E$: \begin{equation} \delta E^2=\langle \psi | H(\vec \lambda)^2 |\psi\rangle-\langle \psi| H (\vec \lambda) |\psi\rangle^2, \label{eq:energy_var} \end{equation} where $|\psi\rangle$ is the state prepared according to a protocol following a particular, e.g. optimal, path. If the system is prepared in an exact eigenstate of $H(\vec \lambda)$, this energy variance clearly reduces to zero, whereas a non-zero value indicates how strongly this state has mixed with different-energy eigenstates. We will consider unassisted state preparation protocols and approximate CDD protocols, where the adiabatic evolution is assisted by the strictly local VAGP. In both cases we will compare different paths in control space. For the CDD protocols we numerically solve the Schr\"odinger equation using the Hamiltonian~\eqref{Ham CDD} along a given path $\vec\lambda(t)$ with $\vec{\mathcal{A}}({\vec \lambda})$ replaced by its variationally-obtained approximation.
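The energy-variance diagnostic of Eq.~\eqref{eq:energy_var} is straightforward to evaluate numerically; the following minimal sketch (our own, for illustration) checks its two defining properties on a random Hermitian matrix:

```python
# Energy variance <psi|H^2|psi> - <psi|H|psi>^2 for a normalized state psi.
import numpy as np

def energy_variance(H, psi):
    Hpsi = H @ psi
    mean = np.vdot(psi, Hpsi).real
    # For Hermitian H, <psi|H^2|psi> = ||H psi||^2.
    return np.vdot(Hpsi, Hpsi).real - mean ** 2

# Sanity check: the variance vanishes in any exact eigenstate and is
# positive for a superposition of different-energy eigenstates.
rng = np.random.default_rng(0)
A = rng.normal(size=(16, 16))
H = (A + A.T) / 2
vals, vecs = np.linalg.eigh(H)
assert abs(energy_variance(H, vecs[:, 3])) < 1e-10
mix = (vecs[:, 0] + vecs[:, 1]) / np.sqrt(2)
assert energy_variance(H, mix) > 0
```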
The initial state is chosen to be one of the eigenstates of the initial Hamiltonian $H$ near the middle of the spectrum, and we then compute the energy variance at the final value of $\vec \lambda$ according to Eq.~\eqref{eq:energy_var}. While the results are presented for a single (generic) eigenstate, we checked that these are representative for most eigenstates (exceptions will be discussed in Sec.~\ref{Sec dark states}). All protocols are characterized by the total time duration $T$, where the limit of large $T$ corresponds to adiabatic evolution, while the limit of small $T$ corresponds to an instantaneous quench for the unassisted protocol and to a dressing of the initial state with the VAGP for the CDD protocol. We choose a smooth protocol to help eliminate diabatic effects at the protocol boundaries~\cite{kolodrubetz_geometry_2017} \begin{align} \lambda(t) = \sin^2 \left( {\pi\over 2} \sin^2\left({\pi t\over 2T}\right) \right), \quad t\in[0,T] \label{CDD drive sinsin}, \end{align} interpolating from $\lambda(0)=0$ to $\lambda(T)=1$, where we set the total protocol duration $T=2$ for concreteness, and take $(h(t),g(t)) = (h(0),g(0))+\lambda(t)\left[(h(T),g(T))-(h(0),g(0))\right]$. However, we checked that all the presented results remain qualitatively similar for other time dependences. In Fig.~\ref{EneVar Optimal} we present the resulting energy variance of the final state for different preparation protocols with the same final Hamiltonian but different initial Hamiltonians, corresponding to different directions of state preparation. For the optimal protocol, the initial point is chosen as $(h,g)=(0+\epsilon,0+\epsilon)$, with a small $\epsilon=0.01$ lifting the degeneracies of the eigenstates; the Hamiltonian is then deformed linearly along the radial direction to the final point $(h,g)=(0.5,0.5)$ (cf. green line in the inset of Fig.~\ref{fig:Evardotlam}).
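The schedule of Eq.~\eqref{CDD drive sinsin} can be sketched as follows (our own illustration); its first derivative vanishes at both boundaries, which is what suppresses diabatic effects there:

```python
# Smooth ramp lambda(t) = sin^2( (pi/2) sin^2(pi t / (2T)) ), t in [0, T],
# and the corresponding linear interpolation of the couplings (h, g).
import numpy as np

def schedule(t, T):
    return np.sin(0.5 * np.pi * np.sin(0.5 * np.pi * t / T) ** 2) ** 2

def couplings(t, T, start, end):
    lam = schedule(t, T)
    return tuple(s + lam * (e - s) for s, e in zip(start, end))

T = 2.0
assert schedule(0.0, T) == 0.0
assert abs(schedule(T, T) - 1.0) < 1e-12
# lambda varies as t^4 near t = 0, so the ramp starts (and ends) smoothly:
assert schedule(1e-6, T) / 1e-6 < 1e-5
# The couplings interpolate between the protocol end points:
h, g = couplings(T, T, (0.01, 0.01), (0.5, 0.5))
assert abs(h - 0.5) < 1e-12 and abs(g - 0.5) < 1e-12
```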
This can be contrasted with the state preparation protocol along the orthogonal direction, taking as initial control parameters $(1-\epsilon, \epsilon)$ and again linearly deforming the Hamiltonian to the same final point $(0.5,0.5)$ (cf. red line). We checked that starting from another point along the orthogonal direction, namely $(\epsilon, 1-\epsilon)$, leads to similar results (cf. Fig.~\ref{fig:app:state preparation}). \begin{figure} \includegraphics[width=1.\columnwidth]{Figure4} \caption{Final energy variance of a generic initial eigenstate as function of protocol rate for unassisted adiabatic state preparation (see also Fig.~\ref{EneVar Optimal}). The end point is given by $(0.5,0.5)$ and initial points are given by $(\epsilon,\epsilon)$ (optimal), $(1-\epsilon,\epsilon)$ and $(\epsilon,1-\epsilon)$ (both orthogonal), as also shown in the inset. The optimal path always outperforms the orthogonal ones.} \label{fig:Evardotlam} \end{figure} In order to compare the unassisted protocols, we consider a linear ramp $\dot{\lambda} = 1/T$ and present the final energy variance for different ramp rates along different directions in Fig.~\ref{fig:Evardotlam}. It is clear that the protocol along the optimal direction generally has an energy variance that is orders of magnitude smaller than the energy variance along the orthogonal direction. Moreover, when increasing $T$ (nearing adiabaticity), the energy variance for the optimal path decreases much faster, as indicated by the steeper slope in the log-log scale. Interestingly, for evolution along the sub-optimal direction starting at $(1-\epsilon, \epsilon)$, the energy variance does not decrease in the interval $0.01\leq 1/T \leq 0.1$, indicating a complicated landscape of energy level crossings. A similar situation occurs, for example, in Floquet systems \cite{weinberg_adiabatic_2017}. Still, we checked that eventually the energy variance starts decreasing again for $1/T\leq 0.005$.
To improve on the unassisted protocol, we use the calculated VAGP for approximate local CDD (see Eq.~\eqref{Ham CDD}); panels (a) and (c) in Fig.~\ref{EneVar Optimal} show the energy variance for the CDD protocols along the optimal direction with either finite duration $T=2$ (a) or infinitely fast $T\to 0$ (c), which effectively corresponds to dressing the initial state with the VAGP. Different colors correspond to a different size of the variational ansatz for the VAGP, with the unassisted protocol included as reference. Panels (b) and (d) show related results for state preparation along the orthogonal direction. Again, it is clear from the plot that the energy variance is generally smaller for state preparation along the optimal direction. Moreover, including (approximate) local counterdiabatic terms drastically reduces the energy variance along the optimal direction. We note that along the optimal direction the $1$-body VAGP is found to be exactly zero; therefore the results for $k=0$ and $k=1$ completely overlap. While including the approximate counterdiabatic term along the orthogonal direction also systematically reduces the energy variance with increasing ansatz size, its effect is not as pronounced as along the optimal direction. \subsection{Asymptotic behavior of the VAGP near singular points} \label{Sec Anisotropy} \begin{figure} \includegraphics[width=1.0 \columnwidth]{Figure5} \caption{The norm of the VAGP for the 3-body ansatz with different $h$ and $g$. The vertical axis is the norm and the horizontal axis is $r=\sqrt{h^2+g^2}$. The ratio between $h$ and $g$ is fixed to satisfy $g=0.2\, h$. The norm in the optimal direction (blue) is nearly constant for small $r$.
By contrast, the norm along the orthogonal direction (red) diverges as $O(1/r)$ as $r$ approaches zero.} \label{Divergence2} \end{figure} From the structure of adiabatic flows shown in Figs.~\ref{Flow Diagram sphere} and~\ref{Flow Diagram}, it is clear that the points $(0,0)$ and $(2,0)$ play a special role, serving as sources/sinks of these flows. As already mentioned, these points also correspond to Hamiltonians with macroscopic (exponential) degeneracies in their energy spectrum. As will be discussed in this section, these points control many important properties of the AGP, including the large anisotropy between optimal and orthogonal directions and the existence of special dark/non-thermal states far from the edges of the spectrum. In order to understand these properties, we consider perturbative expansions of the exact AGP near these two singular points. The full formalism will be developed in Sec.~\ref{sec:perturbative}, and here we will focus on the leading-order terms only. Near $(h,g)=(0,0)$, the dominant term in the perturbative expansion is given by \begin{align}\label{eq:agp_expansion} \mathcal A_{\lambda}(\varphi) \approx & \frac{1}{r} {{\sin\varphi \cos\theta-\cos\varphi \sin\theta}\over 4 \cos^2\theta} (Y-ZYZ)+\dots \end{align} with $r = |\vec{\lambda}|= \sqrt{g^2+h^2}$. Here the angle $\theta$ characterizes the magnetic field in the Hamiltonian $H(h,g)$ through $(h,g) = (r \cos \theta, r \sin\theta)$, whereas the angle $\varphi$ characterizes the direction in which this magnetic field is perturbed, $(\delta h,\delta g) \propto (\cos \varphi, \sin \varphi)$. From Eq.~\eqref{eq:agp_expansion} it is clear that the AGP diverges at $(0,0)$ for a general $\varphi$. However, along the radial direction $\varphi=\theta$ the singular term exactly vanishes, indicating that the radial direction is the optimal one. It is also evident that the anisotropy between the optimal and orthogonal directions diverges near this singularity.
This perturbative expansion also highlights that the variational ansatz for the VAGP minimally requires 3-body terms in order to correctly capture the singularity and the corresponding anisotropy. The increasing anisotropy as the magnetic field goes to zero is clearly visible in the 3-body VAGP, as illustrated in Fig.~\ref{Divergence2}. In this plot we show the norm of the 3-body VAGP along the optimal and orthogonal directions as a function of $r$ at a fixed angle $\theta=\arctan(0.2)$, such that $g=0.2\ h$. The lines are fits to the constant (optimal) and $1/r$ (orthogonal) asymptotes expected from perturbation theory. Interestingly, the perturbative scaling of the norm of the VAGP extends up to a relatively large value of the coupling $r=0.4$, such that the effects from the singular point can remain important deep into the ergodic regime of the flow diagram. In Appendix \ref{app:scalingterms}, the individual weights of the terms in the expansion are compared with the scalings from perturbation theory, and it is confirmed that the dominant terms are of the form \eqref{eq:agp_expansion}. The operator divergence can immediately be connected to the eigenstate structure of the Hamiltonian at $(0,0)$. As already noted, the energy of the model only depends on the number of domain walls, leading to macroscopic degeneracies in the eigenspectrum. The operator $Y-ZYZ$ can be seen as a `dressed' version of the spin flip operator $Y$, which however only creates a spin flip if it does not change the number of domain walls, connecting the degenerate eigenstates. These macroscopic degeneracies in $H$ and their splitting by the perturbation effectively dominate the perturbative AGP and lead to well-defined local terms.
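The fact that $Y-ZYZ$ only flips spins without changing the number of domain walls can be verified directly, using the identity $(Y_j - Z_{j-1}Y_jZ_{j+1})\left|s\right\rangle = (1-s_{j-1}s_{j+1})\,Y_j\left|s\right\rangle$ on a classical configuration $\left|s\right\rangle$. A small self-contained sketch (our own illustration):

```python
# Check that sum_j (Y_j - Z_{j-1} Y_j Z_{j+1}) connects only configurations
# with the same number of domain walls (periodic chain, bit j set = spin up).
L, dim = 8, 2 ** 8

def spin(state, j):
    return 1 if (state >> (j % L)) & 1 else -1

def domain_walls(state):
    return sum(spin(state, j) != spin(state, j + 1) for j in range(L))

for state in range(dim):
    ndw = domain_walls(state)
    for j in range(L):
        # The Z factors contribute s_{j-1} s_{j+1}, so the two terms cancel
        # unless the neighbours of site j differ, i.e. unless the flip of
        # spin j merely moves a domain wall by one site.
        if 1 - spin(state, j - 1) * spin(state, j + 1) != 0:
            flipped = state ^ (1 << j)
            assert domain_walls(flipped) == ndw
```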
A very similar structure emerges near the second singularity $(2,0)$, where the perturbative expansion of the exact AGP yields (see again Sec.~\ref{sec:perturbative}) \begin{equation}\label{eq:pertPYP} \mathcal{A}_{\lambda}({\varphi}) \approx \frac{\sin \varphi \cos \theta-\cos\varphi \sin \theta}{8r \cos^2\theta} PYP +\dots, \end{equation} where now $r = \sqrt{(h-2)^2+g^2}$ and $\varphi$ is again the angle characterizing the deformation $\vec {\delta \lambda}$. We introduced the notation $P$ for the projector onto the spin-down state along the $z$-direction. In the extended notation, the $PYP$ term reads \begin{equation} PYP=\frac{1}{4}\sum_j \left({1-\sigma_{j-1}^z}\right) {\sigma_{j}^y} \left( {1-\sigma_{j+1}^z} \right). \end{equation} Just as near the $(0,0)$ singularity, the AGP diverges as $r\to 0$ except in the radial direction $\varphi=\theta$. Therefore the AGP again becomes infinitely anisotropic in the limit $r\to 0$. This singularity is precisely reflected in the flow diagram indicating that the optimal directions are radial. Interestingly, and not accidentally, the leading-order singularity of the AGP is nothing but the generator of spin rotations of the effective low-energy $PXP$ model emerging near the $(2,0)$ point~\cite{Turner_2018}. This model was already shown to satisfy highly unusual properties, including the existence of weakly thermalizing quantum scar states~\cite{Turner_2018} and the existence of nearby integrable deformations of the Hamiltonian~\cite{Khemani_2019}. In the next section, we will show that some (and probably all) unusual properties of this model are encoded in the exact AGP and can be observed in its local variational approximation. Since the AGP was recently shown to generate the effective Schrieffer-Wolff Hamiltonian, it is also worth noting that the effective PXP Hamiltonian can be obtained by performing the Schrieffer-Wolff transformation using the VAGP \cite{wurtz_variational_2020}.
\subsection{Many-body dark states} \label{Sec dark states} \begin{figure*} \includegraphics[width=0.8\textwidth]{Figure6} \caption{Energy variance of a dark (a) and a bright (b) eigenstate as function of $\lambda$. Blue lines represent the unassisted protocol ($k=0$) and the other lines represent CD driving with a $k$-body VAGP using a sin-square ramp \eqref{CDD drive sinsin}. Inset details the final energy variance as function of $k$. Note the different vertical scales in both figures. The starting point of the protocol is $(h,g) = (2,0)$ and the final point is $(h,g)=(2,0.5)$. System size is $L=12$. Initial states are the ``dark state'' $\state{\psi_1}=\state{\uparrow \uparrow \downarrow \downarrow \uparrow \uparrow \downarrow \downarrow \uparrow \uparrow \downarrow \downarrow }$ (a) and the N\'eel state $\state{\psi_2}=\state{\uparrow \downarrow\uparrow \downarrow \uparrow \downarrow \uparrow \downarrow \uparrow \downarrow \uparrow \downarrow }$ (b). Even for the unassisted protocol, the final energy variance is much smaller for $\state{\psi_1}$ than for $\state{\psi_2}$. Introducing a local counterdiabatic term rapidly decreases the final energy variance in the dark states, whereas the energy variance remains largely unchanged in the N\'eel state (see insets).} \label{Dark states} \end{figure*} The local structure of singularities of the AGP near the macroscopically degenerate points also allows for the existence of special states that are simultaneously eigenstates of the Hamiltonian and are annihilated by (or are possibly other eigenstates of) the leading divergent part of the AGP. From Eq.~\eqref{eq:mfHam} it is clear that such states should be largely immune to {\em any} time-dependent protocols $\vec \lambda(t)$. They are thus approximately {\em dark states}. Let us start by analyzing such states near the singularity at $(2,0)$.
From Eq.~\eqref{eq:pertPYP} it follows that the divergent part of the AGP in any direction except the radial one scales as \[ \mathcal{A}_{\rm s}\propto {1\over r}PYP. \] We can readily see that $\mathcal{A}_{\rm s}$ has many zero eigenstates that are simultaneous eigenstates of $H$ at $r=0$. An example of such a state is \begin{align} \state{\psi_1}=\state{\uparrow \uparrow \downarrow \downarrow \uparrow \uparrow \downarrow \downarrow \uparrow \uparrow \downarrow \downarrow }. \end{align} There are (exponentially) many other such dark states, which can e.g. be obtained by increasing the length of the domains of $|\uparrow\rangle$ spins. From Eqs.~\eqref{eq:mfHam} and \eqref{comoving}, the time evolution of such a $\state{\psi_1}$ in the co-moving basis under an arbitrary time-dependent protocol is given by \begin{align} \label{eq:schr_eq_dark_states} i{\partial \over \partial t}\state{\psi_1} &=(H-\dot{\lambda} \mathcal A_{\rm n})\state{\psi_1}, \end{align} where $\mathcal A_{\rm n}$ is the remaining non-divergent part of the AGP, defined through \begin{align} \mathcal{A}_{\lambda} \state{\psi_1} = \left(\mathcal{A}_{\rm s}+\mathcal{A}_{\rm n}\right)\state{\psi_1} = \mathcal{A}_{\rm n}\state{\psi_1}. \end{align} We see that the state $|\psi_1\rangle$ is unaffected by the term $\mathcal{A}_{\rm s}$, the main source of diabatic excitations in general states, and is thus only weakly excited. Because this statement is general and is not tuned to the details of the protocol $\vec\lambda(t)$, this state approximately behaves as a many-body dark state. The remaining non-divergent terms $\mathcal{A}_{\rm n}$ entering Eq.~\eqref{eq:schr_eq_dark_states} can be further suppressed by means of local CDD. As we show in Sec.~\ref{sec:perturbative}, $\mathcal A_{\rm n}$ has a well defined expansion in terms of local operators and thus the dark states only acquire local dressing near singularities and remain highly nonthermal (with e.g.
low entanglement entropy) even far from the singularity, in the ergodic regime. To demonstrate the advantage of the many-body dark state in the context of quantum state preparation, we consider a CD protocol with the VAGP, starting at the singular point $(2,0)$ and subsequently increasing the transverse magnetic field up to the point $(2,0.5)$. We consider two scenarios, starting with two different initial states, a dark state $\state{\psi_1}$ and a bright (non-dark) N\'eel state that is not annihilated by the singular part of the AGP: \begin{align} \state{\psi_2}=\state{\uparrow \downarrow\uparrow \downarrow \uparrow \downarrow \uparrow \downarrow \uparrow \downarrow \uparrow \downarrow } . \end{align} We choose the protocol given by Eq.~\eqref{CDD drive sinsin} with protocol duration $T=1$. The results of the simulations are shown in Fig.~\ref{Dark states}. In Appendix \ref{app:darkstates}, we analyze the dressing of less symmetric dark and bright initial states, and show that they exhibit a very similar qualitative behavior. Even for the unassisted protocol (blue lines), we can already see in the figure that the energy variance of the dressed dark state is a factor of $20$ smaller than that of the bright state. This ratio quickly increases if we increase the protocol duration. The difference between the dark and bright states becomes even more pronounced in the presence of the local CD term. We see that the energy variance of the bright N\'eel state $\state{\psi_2}$ is almost unaffected by the counterdiabatic term, only decreasing from $1.496$ to $1.468$ as we go from the unassisted protocol to the CDD with the $3$-body ansatz. On the other hand, the energy variance of the prepared dark state reduces from $0.085$ (unassisted protocol) to $0.001$ ($3$-body CDD). Such a small energy variance implies that the prepared state is very close to an eigenstate of the system.
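The annihilation of the dark state by the $PYP$ operator is easy to confirm by exact construction on a small chain. The sketch below is our own illustration, using $L=8$ instead of $L=12$ for brevity:

```python
# Build PYP = sum_j P_{j-1} Y_j P_{j+1} with P = (1 - sigma^z)/2 (projector
# onto spin-down) on a periodic chain and apply it to product states.
import numpy as np

L = 8
I2 = np.eye(2)
Y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
P = np.diag([0.0, 1.0])  # basis ordering: index 0 = up, index 1 = down

def op_at(ops):
    """Tensor product with the given single-site operators, identity elsewhere."""
    M = np.array([[1.0]])
    for j in range(L):
        M = np.kron(M, ops.get(j, I2))
    return M

PYP = sum(op_at({(j - 1) % L: P, j: Y, (j + 1) % L: P}) for j in range(L))

def basis_state(pattern):
    v = np.array([1.0])
    for c in pattern:
        v = np.kron(v, [1.0, 0.0] if c == 'u' else [0.0, 1.0])
    return v

psi1 = basis_state('uudduudd')  # period-4 dark state: no site has two down neighbours
psi2 = basis_state('udududud')  # Neel state: every up spin has two down neighbours
assert np.linalg.norm(PYP @ psi1) < 1e-12
assert np.linalg.norm(PYP @ psi2) > 1.0
```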
The fact that this state is prepared in a short time $T=1$ using a local CD Hamiltonian also implies that this state is nonthermal, e.g. it exhibits area law entanglement. It is easy to check that the dark states, i.e. the zero-energy eigenstates of the $PYP$ Hamiltonian, are simultaneously the zero-energy eigenstates of the low-energy effective $PXP$ Hamiltonian. Interestingly, the AGP allows us to find these special states without prior knowledge of the effective Hamiltonian. One can similarly analyze the structure of the AGP near the other singularity at $(0,0)$. From Eq.~\eqref{eq:agp_expansion} it follows that the divergent part of the AGP is given by \[ \mathcal{A}_{\rm s} \propto Y-ZYZ. \] This operator clearly annihilates two pairs of states: i) fully-polarized states $\state{\uparrow \uparrow\dots \uparrow \uparrow}$ and $\state{\downarrow \downarrow\dots\downarrow\downarrow}$ and ii) the two N\'eel states $\state{\uparrow \downarrow\uparrow \downarrow\dots \uparrow \downarrow}$ and $\state{\downarrow\uparrow \downarrow\uparrow\dots \downarrow \uparrow}$. The two N\'eel states are clearly the degenerate ground states, such that it is not surprising that they can be efficiently dressed locally as we introduce a nonzero finite magnetic field. The two ferromagnetic states are the most excited states, i.e. the states with maximal energy. As we turn on the $Z$-magnetic field, one of the polarized states remains the most excited state -- it is again not surprising that this state can be locally dressed. However, the second polarized state quickly enters the energy continuum and yet, because it is annihilated by $\mathcal A_s$, it only weakly hybridizes with other states and remains highly non-thermal. This dark state was recently discovered in Ref.~\cite{wurtz_emergent_2020} (cf. Fig. 4 there) as a state with anomalously low entanglement. 
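The zero modes of the divergent operator near $(0,0)$ can likewise be found by direct construction (again our own sketch, for $L=8$): $\sum_j (Y_j - Z_{j-1}Y_jZ_{j+1})$ annihilates both fully-polarized states and both N\'eel states, while a generic configuration with domain walls is not annihilated:

```python
import numpy as np

L = 8
I2 = np.eye(2)
Y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
Z = np.diag([1.0, -1.0])

def op_at(ops):
    """Tensor product with the given single-site operators, identity elsewhere."""
    M = np.array([[1.0]])
    for j in range(L):
        M = np.kron(M, ops.get(j, I2))
    return M

# A_s ~ sum_j (Y_j - Z_{j-1} Y_j Z_{j+1}) on a periodic chain
A = sum(op_at({j: Y}) - op_at({(j - 1) % L: Z, j: Y, (j + 1) % L: Z})
        for j in range(L))

def basis_state(pattern):
    v = np.array([1.0])
    for c in pattern:
        v = np.kron(v, [1.0, 0.0] if c == 'u' else [0.0, 1.0])
    return v

for zero_mode in ('uuuuuuuu', 'dddddddd', 'udududud', 'dudududu'):
    assert np.linalg.norm(A @ basis_state(zero_mode)) < 1e-12
assert np.linalg.norm(A @ basis_state('uuuudddd')) > 1.0
```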
Interestingly, in this case the ground and most excited states can be immediately determined as the zero states of the AGP, without any need to diagonalize the full Hamiltonian. \section{Flow diagram with the higher order variational ansatz}\label{sec:bigger ansatz} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Figure7} \caption{The norm of the VAGP for different ansatz sizes $k$. The upper figure corresponds to $(h,g)=(5/3,1/10)$ and the lower figure to $(1/3, 1/3)$. The blue (red) points are the norms of the AGP in the optimal (orthogonal) direction. The AGP shows a high degree of anisotropy for $k\gtrsim 3$.} \label{Optimal Directions} \end{center} \end{figure} \subsection{Scaling of the VAGP norm with the ansatz size} Having analyzed the emerging adiabatic flow diagram within the 3-body variational ansatz, we now consider what happens when we increase the support of the VAGP to more than three sites. First, we study how the norm of the VAGP changes with increasing ansatz size $k$. A slow increase of $||\mathcal A_\lambda||$ with $k$ indicates that increasing the support of the ansatz only has a small effect on the VAGP, such that its local approximation is stable and accurate. Conversely, a fast increase of $||\mathcal A_\lambda ||$ with ansatz size would indicate that the exact AGP is highly non-local and the local variational ansatz is not very stable. In Fig.~\ref{Optimal Directions}, we analyze the norms of the VAGP in the optimal (blue) and orthogonal (red) directions at two different sets of couplings: $(5/3,1/10)$ (top) and $(1/3, 1/3)$ (bottom). The first point is close to the $g=0$ classical Ising line and relatively far from the singular points, whose structure is explained below. The second point is dominated by its proximity to the $(0,0)$ singularity, but it is not too close to it. In both cases we observe a large anisotropy between the optimal and orthogonal directions starting from $k=3$.
In particular, we see that the norm of the VAGP in the orthogonal direction rapidly increases to a large value as $k$ reaches $3$ and then remains relatively flat for the first set of couplings, and increases more gradually with $k$ for the second set of couplings. In both cases the AGP norm in the optimal direction increases slowly with $k$. As we will show below, when we keep increasing the ansatz size, new singularities affecting the VAGP start to emerge. These singularities can discontinuously change the optimal direction, at the same time drastically reducing the anisotropy of the AGP. \subsection{Emergence of new singular points} \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{Figure8} \caption{The flow diagram indicating the optimal direction at each point for the 8-body ansatz. Each point in this diagram corresponds to a Hamiltonian set by $(h,g)$ and the arrows denote the optimal direction for deformations $(\delta h, \delta g)$. The color now represents the logarithm of the ratio of the norm in the optimal direction over that in the orthogonal direction, ranging from blue (highly anisotropic) to yellow (nearly isotropic). Source flows are clearly visible not just at $(h,g)=(0,0)$ and $(2,0)$, but also at $(1,0)$ and $(2/3,0)$. } \label{Flow diagram 7body} \end{center} \end{figure} In Sec.~\ref{sec:perturbative} and Appendix~\ref{app:perturbative}, where we discuss the perturbative expansion of the AGP for small values of $g$, we show that new singularities emerge in correspondence with the degenerate points along the line $g=0$ when increasing the support of the VAGP ansatz. For example, in the second-order approximation a new singularity at $h=1$ appears, in the third-order a singularity appears at $h=2/3$, etc. These singularities correspond to correlated rearrangements of spins leaving the energy of the unperturbed Hamiltonian invariant, which correspondingly involve longer and longer strings of operators in the AGP.
In other words, distinguishing degenerate states from each other through local operators requires operators with increasing support, which will arise at higher orders in the perturbative expansion. For example, the leading-order singular term near $(h,g)=(1,0)$ reads (see Appendix~\ref{app:perturbative}): \begin{align}\label{eq:Adiv_h1} \mathcal{A}_\lambda(\varphi) =& \frac{\sin \theta}{32 \cos^2\theta} \left(\sin \theta \cos \varphi - 2 \cos \theta \sin \varphi\right)\nonumber\\ &\quad \times P(XY+YX)P+\dots, \end{align} where we now parametrize the magnetic field in the Hamiltonian as $(h,g) = (1+r\cos\theta, r \sin \theta)$ and the direction in which we perturb is again given by $(\delta h, \delta g) \propto (\cos\varphi, \sin \varphi)$. One can readily see that this singularity is not radial and only develops around $\theta=\pi/2$. It is weaker than the previously-analyzed singularities at $(0,0)$ and $(2,0)$ due to the absence of the $1/r$ divergent prefactor (cf. Eqs.~\eqref{eq:agp_expansion} and~\eqref{eq:pertPYP}), so the divergence is confined to a narrow angular region. The optimal direction near $\theta=\pi/2$ is again the one where the divergent part of $\mathcal{A}_\lambda(\varphi)$ vanishes, corresponding to $\cot\varphi = 2\cot \theta$, which implies that $\delta\varphi\approx 2\delta \theta$, where $\delta \varphi=\pi/2-\varphi,\; \delta\theta=\pi/2-\theta$. Hence, the optimal direction is no longer radial, except exactly at the singularity, where $\theta=\pi/2$. Since the operator part of the diverging contribution to the AGP contains four-body operators, this singularity will only manifest in the VAGP if we use a $4$-body ansatz or higher. This is exactly what is shown in Fig.~\ref{Flow Diagram sphere}, where the flow diagram for the 5-body ansatz contains sources/sinks at both the 3-body singularities $(0,0)$ and $(2,0)$ and the additional singularity $(1,0)$.
\begin{table}[t] \begin{tabular}{|c |c | c|} \hline $h$ & Operator & Degenerate states\\ \hline 0 & $Y-ZYZ$ & $\state{\cdots\uparrow \uparrow \downarrow\cdots} \leftrightarrow \state{\cdots\uparrow \downarrow \downarrow \cdots}$ \\ \hline 2 & $PYP$ & $\state{\cdots\downarrow \uparrow \downarrow\cdots} \leftrightarrow \state{\cdots\downarrow \downarrow \downarrow\cdots}$ \\ \hline 1 & $P(XY+YX)P$ & $\state{\cdots\downarrow \uparrow \uparrow \downarrow\cdots} \leftrightarrow \state{\cdots\downarrow \downarrow \downarrow \downarrow\cdots}$ \\ \hline $2\over3$ & $P(YXX+XYX$ & $\state{\cdots\downarrow \uparrow \uparrow \uparrow \downarrow\cdots} \leftrightarrow \state{\cdots\downarrow \downarrow \downarrow \downarrow \downarrow\cdots}$ \\ & $\ \ \ \ \ \ +XXY-YYY)P$ &\\ \hline $\vdots$ &$\vdots$ &$\vdots$ \\ \hline \end{tabular} \caption{Singular contribution to the VAGP at different singular points $(h,0)$ and corresponding spin flips conserving the energy. Operators with increasing support lead to weaker divergences appearing in higher-order perturbative terms at rational values of $h$.} \label{table divergent terms} \end{table} Increasing the support of the ansatz will lead to additional singularities, which can be captured in higher-order terms in the perturbative expansion. As such, higher-order singularities will become even more suppressed in orders of $r$, such that they will manifest themselves only some distance away from the degenerate $g=0$ line. In Fig.~\ref{Flow diagram 7body}, we show the flow diagram for the 8-body variational ansatz. The arrows again indicate the optimal directions, and the color now represents the anisotropy, i.e. the ratio of the VAGP norm along the optimal and the orthogonal directions, with blue indicating a higher anisotropy. New singularities at $h=1$ and $h=2/3$ become visible in this plot, accompanied by additional, non-radial, structures around them.
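The pattern of energy-conserving spin flips in Table~\ref{table divergent terms} can be checked classically: flipping a block of $k$ consecutive down-spins up inside an all-down background changes the bond energy by $-4$ and the field energy by $+2kh$, so the classical energy of $H=ZZ+hZ$ is invariant exactly at $h=2/k$. A sketch of our own (the entry $k=4$, $h=1/2$ extrapolates the table's series one step further):

```python
# Classical check: for H = sum_j s_j s_{j+1} + h sum_j s_j (periodic chain),
# turning k consecutive down-spins up inside an all-down background is an
# energy-conserving move exactly at h = 2/k.
from fractions import Fraction

L = 12

def energy(s, h):
    return sum(s[j] * s[(j + 1) % L] for j in range(L)) + h * sum(s)

for k in (1, 2, 3, 4):          # h = 2, 1, 2/3, 1/2, ...
    h = Fraction(2, k)          # exact rational arithmetic
    before = [-1] * L
    after = list(before)
    after[1:1 + k] = [1] * k    # ... down [up]*k down ...
    assert energy(after, h) == energy(before, h)
    # away from h = 2/k the same move costs energy:
    assert energy(after, h + 1) != energy(before, h + 1)
```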
Clearly, the leading-order singular term can be singled out either perturbatively or variationally. As argued above, the corresponding operators should connect states that are exactly degenerate at the corresponding singular point. In Table~\ref{table divergent terms} we summarize these leading-order operators and illustrate how they connect degenerate states through correlated spin flips, inducing both the macroscopic degeneracies in the eigenspectrum and the divergences in the VAGP. We note an interesting feature following from Fig.~\ref{Flow diagram 7body}: as we increase the size of the variational ansatz, in some regions the optimal direction can switch. This is most clearly visible near the point $h=1$ and small $g$. Within the 3-body ansatz, the optimal direction is nearly horizontal (cf. Fig.~\ref{Flow Diagram}), while in the higher-body ansatz ($k>4$) the optimal direction is nearly vertical (cf. Fig.~\ref{Flow diagram 7body}). This discontinuity indicates that it is impossible to improve the accuracy of the VAGP in the horizontal direction by increasing the support of the variational ansatz: the new singularity prevents us from doing so. The only way to continue improving local state preparation is to change the direction. It is clear that such a sudden change should introduce some ambiguity in finding the optimal path in the space of couplings in the vicinity of the singularity. Indeed, we see that regions of small anisotropy surround the singularity at $(1,0)$ -- in such regions the difference between the optimal and the orthogonal directions is less pronounced. \section{VAGP and approximately conserved operators}\label{sec: conserved operator} As we discussed above, the VAGP for deformations along the direction $\lambda_j$ is found by minimizing the norm of the operator $G_j$ (cf. Eqs.~\eqref{eq:G_def} and~\eqref{equation variational}). If the VAGP is exact, then $G_j$ is a conserved operator conjugate to the direction $\lambda_j$. 
However, for an approximate VAGP, $G_j$ is only approximately conserved because it has a non-zero commutator with the Hamiltonian. It is clear that the norm of the commutator $[G_j,H]$ is a measure of the accuracy of this approximate conservation law: the smaller the norm, the better the conservation law. In some sense, this norm serves as a proxy for the magnitude of the difference between the exact and the local variational AGP. If this difference is small, we can simultaneously implement accurate local counterdiabatic driving and construct a local nearly-conserved operator. These qualitative considerations are indeed correct, as we show below by analyzing the accuracy of such conservation laws in the optimal directions at different couplings and different ansatz sizes. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.8\textwidth]{Figure9.pdf} \caption{Inverse lifetime $\Gamma_j$ for nearly-conserved operators constructed using the VAGP along the optimal directions for $k=3,5,7$. Increasing the support of the VAGP increases the lifetime. The maxima are observed near the quantum critical point $(0,1)$ and singular point $(2,0)$, which indicates that the infinite temperature VAGP is still ``aware" of the zero temperature quantum critical behavior. Inverse lifetime along (a) a line with fixed $g$, (b) a line with fixed $h$, and (c) the line $g=h$.} \label{[HG]} \end{center} \end{figure*} A more convenient and physical measure characterizing the accuracy of the conservation law is the lifetime of $G_j$ measured in an eigenstate $\left|n\right\rangle$ of the Hamiltonian. The latter can be computed from the short-time expansion of the connected non-equal time correlation function~\cite{kim_slowest_2015}: \begin{align} &{1\over 2} \langle n | G_j(t) G_j(0) + G_j(0) G_j (t) |n\rangle_c \nonumber\\ &= \langle n| G_j^2(0) |n\rangle_c-{t^2\over 2} |\langle n | [H, G_j]^2 |n\rangle_c|+\mathcal O(t^4).
\end{align} From this expansion one can define a state-averaged normalized decay rate (inverse lifetime) for the operator $G_j$ as \begin{align} \Gamma_j^2={\left|{\rm Tr}\left[[H,G_j]^2\right]\right| \over {\rm Tr}[G_j^2]} = \frac{||[H,G_j]||^2}{||G_j||^2}. \label{eq [HG]} \end{align} A small decay rate indicates that the operator $G_j$ is nearly conserved, at least up to times of the order $1/\Gamma_j$. For the exact AGP, obviously, $\Gamma_j=0$. In Fig.~\ref{[HG]} we show the inverse lifetimes of the operators $G_j$ computed in the optimal direction, i.e. the direction shown by arrows in Fig.~\ref{Flow diagram 7body}, as a function of (i) $h$ at fixed $g=0.2$ [panel (a)]; (ii) $g$ at fixed $h=0.15$ [panel (b)]; and (iii) the total magnetic field along the diagonal direction $h=g$ [panel (c)]. Different lines on each panel refer to different ansatz sizes. In all cases we chose the direction $\lambda_j$ to be the optimal one for the corresponding ansatz size. In panel (a), showing $\Gamma_j$ as a function of $h$ at a fixed small value of $g$, we see several characteristic features. First of all, it is clear that increasing the ansatz size increases the lifetime of the nearly-conserved operators. Furthermore, the decay rate exhibits non-monotonic peaks at the singular points of the AGP. As we increase the ansatz size, $\Gamma_j$ becomes more sensitive to the higher-order singularities. Thus the effect of the singularity near $h=2$ is very strong at $k=3$, i.e.~at the 3-body ansatz level, but becomes very small for larger $k$. This picture is consistent with our previous analysis, suggesting that the divergent contributions to the VAGP, corresponding to leading singularities, are local and as such can be eliminated by the local VAGP. Higher-order singularities then require a VAGP with increasing support. Another very interesting feature emerges if we analyze the dependence of $\Gamma_j$ on $g$ at fixed small $h=0.15$ [panel (b)].
Namely, the decay rate exhibits a clear maximum near $g=1$, corresponding to the quantum critical point at zero temperature~\cite{sachdev_2011}. Interestingly, the maximum in $\Gamma$ is clearly pronounced despite the fact that we analyze the operator lifetimes at infinite temperature, where static observables do not exhibit any signatures associated with criticality, consistent with recent results from Ref.~\cite{wurtz_emergent_2020}. At $h=0$, i.e.~in the limit of the integrable transverse field Ising model, this result is known from prior work~\cite{del_campo_assisted_2012, kolodrubetz_geometry_2017}. The plot shown in Fig.~\ref{[HG]} suggests that, even if the integrability is broken, the maximum of $\Gamma_j$ remains well-defined and again highlights how temperature plays a much less important role when we define quantum criticality through the diabatic response encoded in the AGP. \section{Perturbative expansion} \label{sec:perturbative} In this final section we present a derivation of the divergences appearing in the VAGP by developing a perturbative expansion of the exact AGP in $g$ near $g=0$, i.e. near the classical Ising limit, using the integral representation of the AGP given by Eq.~\eqref{eq:A_Heisenberg}. We only sketch the derivation here and provide some key results; further details of all calculations can be found in Appendix~\ref{app:perturbative}. We will denote the Hamiltonian at the solvable point $g=0$ as $H_0 = ZZ + h Z$ and find a perturbative expansion for $\mathcal{A}_{\lambda}$ at $H=H_0+gX$ in powers of $g$ for general $\partial_{\lambda} H$, \begin{align} \mathcal{A}_{\lambda} = \mathcal{A}_{\lambda}^{(0)}+g \mathcal{A}_{\lambda}^{(1)}+ \mathcal{O}(g^2).
\end{align} The zeroth-order contribution can be found by setting $g=0$ in Eq.~\eqref{eq:A_Heisenberg}, \begin{align} \label{eq:A_Interaction_0} \mathcal A_\lambda^{(0)} = -{1\over 2}\lim_{\epsilon\to 0^+} \int_{-\infty}^{\infty} dt\, {\rm sgn}(t)\, \mathrm e^{-\epsilon |t|} (\partial_\lambda H^{(0)})(t), \end{align} where any time-dependence is taken to be in the interaction picture, $(\partial_\lambda H^{(0)})(t) \equiv e^{iH_0t}(\partial_{\lambda}H) e^{-iH_0t}$. The next order can be found by taking the derivative w.r.t.~$g$ in Eq.~\eqref{eq:A_Heisenberg}, \begin{align} \label{eq:A_Interaction_1} \mathcal A_\lambda^{(1)} = -\frac{i}{2}\lim_{\epsilon\to 0^+} \int_{-\infty}^{\infty} dt\, {\rm sgn}(t)\, \mathrm e^{-\epsilon |t|} \left[\chi(t),(\partial_{\lambda}H^{(0)})(t)\right], \end{align} where \begin{equation} \chi(t) = \int_{0}^t d\tau\, X(\tau), \quad X(t) = e^{iH_0t}X e^{-iH_0t}. \label{eq:X_t_def} \end{equation} In order to simplify the notation, we use $X(t)$ instead of $X^{(0)}(t)$. Higher-order terms can be found by taking higher-order derivatives of Eq.~\eqref{eq:A_Heisenberg}, leading to an iterative evaluation scheme. We will only analyze the first two orders here. We will separately calculate the dominant terms for $\partial_{\lambda}H = X$ and $\partial_{\lambda}H=Z$, yielding $\mathcal{A}_g$ and $\mathcal{A}_h$ respectively. Given a general perturbation $(\delta h, \delta g) \propto (\cos\varphi, \sin \varphi)$, we can write $\mathcal{A}_{\lambda}(\varphi) = \cos\varphi \mathcal{A}_h + \sin \varphi \mathcal{A}_g$. Given that $H_0=ZZ+hZ$, in the interaction picture $Z^{(0)}(t) = Z$ is time-independent, and hence $\mathcal{A}_h^{(0)}=0$. For $\mathcal{A}_g$, we first need to evaluate $X(t)$, which can be done analytically (see Eq.~\eqref{eq:X(t)_Ising} and Appendix~\ref{app:perturbative}). It represents a sum of eight different independent operators with support up to $k=3$ and time-dependent coefficients.
For $h\neq 0,2$ the integral of $X(t)$ is well behaved in the limit $\epsilon\to 0$ and we can find \begin{align} \mathcal A_g^{(0)}=&{1\over 2h}{2-h^2\over 4-h^2} Y+{1\over 2 (4-h^2)} (YZ+ZY) \nonumber\\ &\qquad-{1\over h (4-h^2)} ZYZ. \label{eq:A_g_0} \end{align} This expression clearly diverges at $h=0$ and $h=2$. Collecting the diverging terms near these singularities, we recover the expressions quoted earlier (Eqs.~\eqref{eq:agp_expansion} and~\eqref{eq:pertPYP}) in the limit $\varphi\to \pi/2$ and $\theta\to 0$. Exactly at the singular points the divergent terms commute with the Hamiltonian $H_0$ and can be subtracted from the AGP. This sudden discontinuity is not accidental, since the direction along $g$ becomes exactly radial, and hence optimal, at the singular points. The cancellation of divergences also follows from Eq.~\eqref{eq:X(t)_Ising} and arises from the fact that the limits $\epsilon\to 0$ and $h\to 0,2$ do not commute. An explicit evaluation of Eq.~\eqref{eq:A_Interaction_0} at $h=0$ yields \begin{align} \mathcal A_g^{(0)}={1\over 8}(YZ+ZY). \end{align} Similarly, at $h=2$ we find \begin{align} \mathcal A_g^{(0)}={5\over 32} Y+{1\over 32} (YZ+ZY)-{3\over 32} ZYZ. \end{align} The first non-vanishing contribution to $\mathcal{A}_{h}$ is $\mathcal{A}_h^{(1)}$, which can be immediately obtained from Eq.~\eqref{eq:A_Interaction_1} (see again Appendix~\ref{app:perturbative} for details): \begin{eqnarray} \mathcal{A}_h^{(1)} = -\frac{1}{(h^2-4)^{2}}\Bigg(\frac{h^4-2h^2+8}{2h^2}Y +\frac{3h^2-4}{h^2} ZYZ\nonumber\\ - h (ZY+YZ)\Bigg). \quad \label{eq:A_h_1} \end{eqnarray} In a similar fashion, one can compute an exact analytic expression for $\mathcal{A}_g^{(1)}$, showing the emergence of the new singularity at $h=1$. This expression is rather long, so it is only given explicitly in Appendix~\ref{app:perturbative}.
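Both results admit a direct numerical consistency check (ours, independent of the derivation): at $g=0$ the exact AGP has matrix elements $\langle m|\mathcal A_g|n\rangle = -i\langle m|X|n\rangle/(E_m-E_n)$, which follows from Eq.~\eqref{eq:A_Interaction_0}, so the operator $G_g=\partial_g H + i[\mathcal A_g^{(0)},H_0]$ must commute with $H_0$. The sketch below (a $6$-site periodic chain and numpy are assumed) checks this, and also that the per-site coefficient vectors of Eqs.~\eqref{eq:A_g_0} and~\eqref{eq:A_h_1} reproduce the leading-order optimal angle quoted below in Eq.~\eqref{phi analytic}:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def tsum(local_ops, N):
    """Translation-invariant sum of a string of local operators on an N-site ring."""
    total = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(N):
        placed = {(i + k) % N: o for k, o in enumerate(local_ops)}
        term = np.eye(1, dtype=complex)
        for site in range(N):
            term = np.kron(term, placed.get(site, I2))
        total += term
    return total

N, h = 6, 0.7                                 # generic h away from 0 and 2
H0 = tsum([Z, Z], N) + h * tsum([Z], N)       # classical point g = 0
Xt = tsum([X], N)                             # dH/dg

# Coefficients of A_g^(0), Eq. (eq:A_g_0)
c1 = (2 - h**2) / (2 * h * (4 - h**2))
c2 = 1 / (2 * (4 - h**2))
c3 = -1 / (h * (4 - h**2))
A = (c1 * tsum([Y], N) + c2 * (tsum([Y, Z], N) + tsum([Z, Y], N))
     + c3 * tsum([Z, Y, Z], N))

# If A is the exact AGP at g = 0, then G = dH/dg + i[A, H0] commutes with H0
# (here G in fact vanishes, since dH/dg is purely off-diagonal at g = 0).
G = Xt + 1j * (A @ H0 - H0 @ A)
print(np.linalg.norm(G))                      # machine-precision zero

# Leading-order optimal angle: minimize ||cos(p) g A_h^(1) + sin(p) A_g^(0)||
# using per-site coefficient vectors in the orthogonal basis {Y, YZ, ZY, ZYZ}.
v = np.array([c1, c2, c2, c3])                # A_g^(0)
d = (h**2 - 4)**2
u = np.array([-(h**4 - 2*h**2 + 8) / (2 * h**2 * d),
              h / d, h / d,
              -(3*h**2 - 4) / (h**2 * d)])    # A_h^(1), Eq. (eq:A_h_1)
lhs = -2 * np.dot(u, v) / np.dot(v, v)        # coefficient of g in tan(2 phi)
rhs = 2 * (h**6 + 24*h**2 - 32) / (h * (h**2 - 4) * (h**4 - 2*h**2 + 8))
print(abs(lhs - rhs))                         # agrees with the closed form
```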
Interestingly, while formally $\mathcal A_h^{(1)}$ is obtained as a higher-order term than $\mathcal A_g^{(0)}$, it contains the same type of singularities at $h=0$ and $h=2$. Moreover, it only contains terms with support of up to three sites, so both contributions appear in, e.g., the $3$-body variational ansatz. Physically, $\mathcal A_h^{(1)}$ plays the same role as $\mathcal A_g^{(0)}$ because both appear as the leading non-vanishing contributions to the AGP in the perturbative expansion. For this reason it suffices to analyze the following ``leading order'' perturbative AGP: \begin{equation} \mathcal A_{\lambda}(\varphi) \approx \cos\varphi\, g\, \mathcal A_h^{(1)} +\sin\varphi\, \mathcal A_g^{(0)}. \label{eq:A_gh_0} \end{equation} As we will show next, the AGP in this form allows us to understand key features of the adiabatic flows near the singularities at $(0,0)$ and $(2,0)$. Using Eqs.~\eqref{eq:A_g_0} and~\eqref{eq:A_h_1}, we can minimize the norm of the perturbative AGP~\eqref{eq:A_gh_0} with respect to $\varphi$ and find the optimal direction as \begin{align} \tan(2\varphi) =\frac{2 g \left(h^6+24 h^2-32\right)}{h (h^2-4) \left(h^4-2 h^2+8\right)} + O(g^2). \label{phi analytic} \end{align} The corresponding perturbative flow diagram is shown in Fig.~\ref{fig flow analytic}. It closely resembles the variational flow diagram obtained for the $3$-body variational ansatz (cf. Fig.~\ref{Flow Diagram}), confirming that the (numerically straightforward) variational approach is able to identify the most important local contributions to the AGP. It is easy to check that from Eq.~\eqref{eq:A_gh_0} we can recover the asymptotic behavior of the AGP close to the singularities at $(0,0)$ and $(2,0)$ (cf.~Eqs.~\eqref{eq:agp_expansion} and~\eqref{eq:pertPYP}). \begin{figure}[ht] \begin{center} \includegraphics[width=1.\columnwidth]{Figure10} \caption{Flow diagram of the first-order perturbative calculation as given by Eq.~\eqref{phi analytic}.
Two sources/sinks of the flows are observed at $(h,g)=(0,0)$ and $(2,0)$, reproducing the results shown in Fig.~\ref{Flow Diagram}.} \label{fig flow analytic} \end{center} \end{figure} \section{Conclusions}\label{sec:conclusion} We developed a general approach for analyzing the adiabatic landscape in systems described by a family of Hamiltonians characterized by several controls (couplings). This approach is based on minimizing the norm of the local variational adiabatic gauge potential, which serves as the local generator of adiabatic transformations. We applied this method to a one-dimensional Ising model in the presence of both transverse and longitudinal fields. In this model we determined the optimal directions as those where the norm of the adiabatic gauge potential is minimal, which can be used to immediately define continuous paths along which diabatic effects are suppressed (cf.~Fig.~\ref{Flow Diagram sphere}). Along these optimal paths one can design highly efficient local and experimentally feasible counterdiabatic driving protocols. These paths are also useful for various other applications, including finding local nearly-conserved operators, dressed elementary excitations such as quasiparticles or domain walls (see also Refs.~\cite{wurtz_variational_2020, wurtz_emergent_2020}), constructing effective Hamiltonians via the Schrieffer-Wolff transformation, numerically computing approximate eigenstates using efficient numerical methods such as the DMRG-X algorithm~\cite{khemani_dmrgx_2017}, designing optimal paths for quantum annealing protocols, suppressing dissipative losses in thermal machines and more. Interestingly, finding these optimal paths does not require diagonalizing the Hamiltonian of the system either exactly or approximately and can be done even in the thermodynamic limit. 
We found that these optimal paths always start/terminate at the points corresponding to Hamiltonians exhibiting macroscopic degeneracies of the spectrum, which play a role similar to that of quantum critical points in equilibrium phase diagrams. As we approach these singularities, the anisotropy between the optimal and the orthogonal directions diverges. The most divergent contributions to the adiabatic gauge potential are local and can be singled out either perturbatively or variationally. As the support of the variational gauge potential is increased, additional (weaker) divergences start to emerge, strongly affecting the flow diagram in their vicinity. Close to these singularities we can identify special dark states: mutual eigenstates of the Hamiltonian at the singular point and the divergent part of the adiabatic gauge potential. These dark states are highly robust against various time-dependent perturbations and can be efficiently locally dressed by the non-divergent part of the VAGP. They persist deep in the ergodic regime, extending far away from the singularities. Physically, these dressed dark states correspond to spin configurations that can remain non-thermal for extremely long times. Our method provides a general prescription for finding such non-thermal states in interacting systems. Finally, we showed that the optimal directions are associated with the existence of local nearly-conserved operators. Thus there is an interesting and direct connection between our ability to perform efficient local adiabatic transformations along particular directions and the existence of long-lived operators, which are locally dressed deformations of the Hamiltonian along these optimal directions. \begin{acknowledgements} The authors thank Dries Sels, Maksym Serbyn, and Jonathan Wurtz for useful discussions and valuable comments. S.S. was supported by JSPS Overseas Research Fellowships (201860254). P.W.C.
gratefully acknowledges support from a Francqui Foundation Fellowship from the Belgian American Educational Foundation (BAEF), Boston University's Condensed Matter Theory Visitors program, and EPSRC Grant No. EP/P034616/1. A.D. was supported by a grant of the Russian Science Foundation (Project No.~17-12-01587). A.P. was supported by the NSF Grant DMR-1813499 and the AFOSR Grant FA9550-16-1-03. This research was supported in part by the International Centre for Theoretical Sciences (ICTS) during a visit to participate in the program ``Thermalization, Many body localization and Hydrodynamics'' (Code: ICTS/hydrodynamics2019/11). \end{acknowledgements} \onecolumngrid \newpage
\section{Introduction} Let $\Xsp$ be a $2$-dimensional manifold, possibly non-connected and possibly with boundary, and $\Partition$ be a one-dimensional foliation on $\Xsp$. We will say that $\Partition$ belongs to class $\classFol$ if it satisfies the following three conditions. \begin{enumerate} \item\label{fol:enum:leaves_are_closed} Each leaf $\omega$ of $\Partition$ is a closed subset of $\Xsp$. \item\label{fol:enum:bnd_consists_of_leaves} Every connected component $\omega$ of $\partial\Xsp$ is a leaf of $\Partition$. \item\label{fol:enum:loctriv_fol} Let $\omega \in \Partition$ be a leaf, and $J=[0,1)$ if $\omega\subset\partial\Xsp$, and $J=(-1,1)$ otherwise. Then there exists an open neighborhood $\Usp$ of $\omega$ and a homeomorphism $\phi:\bR\times J \to \Usp$ such that $\phi(\bR\times 0) = \omega$ and $\phi(\bR\times t)$ is a leaf of $\Partition$ for all $t\in J$, see Figure~\ref{fig:foliation_classF}. \end{enumerate} Roughly speaking, \emph{a $1$-dimensional foliation} $\Partition$ is a partition of $\Xsp$ which looks like a partition of $\bR^2$ into parallel lines \emph{near each point $x\in\Xsp$}. Then $\Partition$ belongs to class $\classFol$ whenever it looks like a partition of $\bR^2$ into parallel lines \emph{near each leaf $\omega\in\Partition$}. In particular, each leaf of $\Partition$ is homeomorphic to $\bR$. \begin{figure}[ht] \begin{tabular}{ccc} \includegraphics[height=1.3cm]{foliation_loc_triv_a} & \qquad \qquad & \includegraphics[height=1.3cm]{foliation_loc_triv_b} \\ (a) leaf in the boundary & & (b) leaf in the interior \end{tabular} \caption{}\label{fig:foliation_classF} \end{figure} \vspace*{-2mm} \begin{definition} Let $\Xsp_i$ be a surface with a foliation $\Partition_i$, $i=1,2$. Then a homeomorphism $h:\Xsp_1 \to \Xsp_2$ will be called \emph{foliated} if it maps leaves of $\Partition_1$ onto leaves of $\Partition_2$. In this case we will also write $h: (\Xsp_1,\Partition_1) \to (\Xsp_2,\Partition_2)$.
\end{definition} The aim of the present paper is to describe the topological structure of foliations belonging to class $\classFol$ up to foliated homeomorphisms, see Theorem~\ref{th:open_strips} below. Such foliations on the plane were studied by W.~Kaplan~\cite{Kaplan:DJM:1940} and they appear as foliations by level sets of pseudoharmonic functions on $\bR^2$, see W.~Kaplan~\cite[Theorem~42]{Kaplan:DJM:1940}, W.~Boothby~\cite{Boothby:AJM_1:1951}, \cite{Boothby:AJM_2:1951}, M.~Morse and J.~Jenkins~\cite{JenkinsMorse:AJM:1952}, M.~Morse~\cite{Morse:FM:1952}. We will improve Kaplan's construction and extend it to foliations on arbitrary surfaces. The topological structure of singular foliations on surfaces, in particular foliations by orbits of flows, was studied by A.~Andronov and L.~Pontryagin~\cite{AndronovPontryagin:DANSSSR:1937}, M.~Peixoto~\cite{Peixoto:Top:1962}, \cite{Peixoto:Top:1963}, S.~Aranson and V.~Grines~\cite{AransonGrines:MatSb:1973, AransonGrines:UMN:1986}, I.~Bronstein and I.~Nikolayev~\cite{BronsteinNikolayev:TA:1997}, S.~Aranson, E.~Zhuzhoma, and V.~Medvedev~\cite{AransonZhuzhomaMedvedev:MatSb:1997}, L.~Plachta~\cite{Plachta:fol1:TA:2003, Plachta:fol2:MMFMP:2001,Plachta:fol3:MMFMP:2001}, A.~Oshemkov and V.~Sharko~\cite{OshemkovSharko:MatSb:1998}, S.~Aranson, V.~Grines and V.~Kaimanovich~\cite{AransonGrinesKaimanovich:JDCS:2003}, M.~Farber~\cite{Farber:AMSP:2004}, N.~Budnytska and O.~Prishlyak~\cite{BudnytskaPryshlyak:UMJ:2009}, N.~Budnyts'ka and T.~Rybalkina~\cite{BudnytskaRybalkina:UMJ:2012} and many others. The results of the paper can also be applied to singular foliations without non-closed leaves on surfaces by removing singularities. This will be done in subsequent papers by the authors. \smallskip \subsection*{Special leaves} Suppose $\Partition$ is a foliation of class $\classFol$ on a surface $\Xsp$. Let $\Ysp = \Xsp/\Partition$ be the space of leaves, and $\prj:\Xsp\to\Ysp$ be the corresponding quotient map.
Endow $\Ysp$ with the \emph{quotient topology}, so a subset $\Vsp\subset\Ysp$ is open if and only if its preimage $\prj^{-1}(\Vsp)$ is open in $\Xsp$. For a subset $\Usp\subset\Xsp$ its \emph{saturation}, $S(\Usp)$, with respect to $\Partition$ is the union of all leaves of $\Partition$ intersecting $\Usp$. Equivalently, $S(\Usp) = \prj^{-1}(\prj(\Usp))$. Since each leaf of $\Partition$ is a closed subset of $\Xsp$, it follows that $\Ysp$ is a $T_1$-space. However, in general, $\Ysp$ is not a Hausdorff space. \begin{lemma}\label{lm:classFol_prop} If $\Partition\in\classFol$ then the projection map $\prj:\Xsp\to\Ysp$ is open. \end{lemma} \begin{proof} We have to prove that for each open $\Vsp \subset \Xsp$ its saturation $S(\Vsp)$ is open as well. Thus for each $x\in S(\Vsp)$ we need to find an open saturated subset $\Wsp$ such that $x\in\Wsp=S(\Wsp) \subset S(\Vsp)$. Let $\omega$ be the leaf containing $x$. Put $J=[0,1)$ whenever $\omega\subset\partial\Xsp$ and $J=(-1,1)$ otherwise. Then by definition of class $\classFol$ there exists a foliated homeomorphism $\phi:\bR\times J \to \Usp$ such that $\phi^{-1}(x)=(t,0) \in \bR\times 0$ for some $t\in\bR$. Then $\phi^{-1}(\Vsp\cap\Usp)$ is an open neighborhood of $(t,0)$, whence there exists $\eps>0$ such that if we denote $K = J \cap (-\eps,\eps)$, then $t\times K \subset \phi^{-1}(\Vsp\cap\Usp)$. But $K$ is open in $J$, whence $\bR\times K$ is open in $\bR\times J$. Therefore $\phi(\bR\times K)$ is saturated and open in $\Usp$ which in turn is open in $\Xsp$. Hence $\phi(\bR\times K)$ is open in $\Xsp$ and $x\in\phi(\bR\times K) \subset S(\Vsp)$. Therefore $S(\Vsp)$ is open in $\Xsp$. \end{proof} \begin{definition} Let $\omega$ be a leaf of $\Partition$ and $y=\prj(\omega)\in\Ysp$.
We will say that $\omega$ is a \emph{special} leaf and $y$ is a \emph{special} point of $\Ysp$ whenever $\Ysp$ is not Hausdorff at $y$, that is $y \not = \cap_{y\in \Vsp} \overline{\Vsp}$, where $\Vsp$ runs over all open neighborhoods of $y$. \end{definition} \begin{example}\label{exmp:special_leaves}\rm Consider the foliation on $\bR^2$ shown in Figure~\ref{fig:example:special_leafs}(a). It is split by the bold leaves $\alpha$, $\beta$, $\gamma$, and $\delta$ into five ``strips'' $A$, $B$, $C$, $D$, $E$ foliated by ``parallel'' lines, see Figure~\ref{fig:example:special_leafs}(b). Moreover, the space of leaves $\Ysp$ has the structure as in Figure~\ref{fig:example:special_leafs}(c), where bold lines correspond to strips, and thin lines just indicate that $\alpha$ belongs to the closures of $A$ and $B$, $\beta$ belongs to the closures of $B$ and $C$ and so on. In particular, $\Ysp$ loses the Hausdorff property at $\alpha$, $\beta$, $\gamma$, and $\delta$. More precisely, the subspace $\Ysp\setminus\{\alpha,\beta,\gamma,\delta\}$ is Hausdorff, however each neighborhood of $\alpha$ intersects each neighborhood of $\beta$, and the same holds for the pairs $\{\beta, \gamma\}$ and $\{\gamma, \delta\}$. Therefore the leaves $\alpha$, $\beta$, $\gamma$ and $\delta$ are special.
\end{example} \begin{figure}[ht] \begin{tabular}{ccccc} \includegraphics[height=1.7cm]{example_special_leafs} & \qquad & \includegraphics[height=1.7cm]{example_special_leafs_strips} & \qquad & \includegraphics[height=1.7cm]{example_space_of_leaves} \\ (a) Foliation $\Partition$ & & (b) Strips decomposition & & (c) Space of leaves $\Ysp = \Xsp/\Partition$ \end{tabular} \caption{}\label{fig:example:special_leafs} \end{figure} \begin{definition} A subset $\strip \subset \bR^2$ will be called a \emph{model strip} if there exist $a<b$ such that \begin{enumerate} \item[\rm(1)] $\bR\times(a,b) \ \subset \ \strip \ \subset \ \bR\times[a,b]$; \item[\rm(2)] the intersection $\strip \ \cap \ \bR\times\{a,b\}$ is a disjoint union of open intervals. \end{enumerate} Put \begin{align*} \partial_{-} \strip &= \strip \cap (\bR \times \lbrace a \rbrace), & \partial_{+} \strip &= \strip \cap (\bR \times \lbrace b \rbrace), & \partial S &= \partial_{-} S \cup \partial_{+} S. \end{align*} A model strip $\bR\times(a,b)$ will be called \emph{open}. \end{definition} Each model strip $\strip$ admits a natural $1$-dimensional foliation into parallel lines $\bR\times t$ and boundary intervals from $\partial\strip$. We will call this foliation \emph{canonical}. The following lemma implies that this foliation belongs to class $\classFol$. \begin{lemma}\label{lm:leaf_shrinking}{\rm (e.g.~\cite{MaksymenkoPolulyakh:PGC:2015}).} Let $a<b \in \bR$, $\Xsp = \bR^2\setminus\bigl( ((-\infty,a]\cup[b,+\infty))\times 0 \bigr)$, and $\eps>0$. Then there exists a homeomorphism $\phi:\bR^2\to\Xsp$ such that \begin{enumerate} \item[\rm(a)] $\phi$ is fixed outside $\bR\times(-\eps,\eps)$; \item[\rm(b)] $\phi$ preserves foliations by horizontal lines, that is $\phi(\bR\times t) = \bR\times t$ for $t\not=0$ and $\phi(\bR\times 0) = (a,b)\times 0$, see Figure~\ref{fig:leaf_shrinking}.
\end{enumerate} \end{lemma} \begin{figure}[ht] \includegraphics[height=1.5cm]{leaf_shrinking} \caption{}\label{fig:leaf_shrinking} \end{figure} \begin{example} The foliation in Example~\ref{exmp:special_leaves} splits into five model strips such that \begin{align*} &A \ \cong \ E \ \cong \ \bR\times(0,1) \ \bigcup \ (0,1) \times 1, \\ & B \ \cong \ C \ \cong \ D \ \cong \ \bR\times(0,1) \ \bigcup \ \bigl( (0,1) \cup (2,3) \bigr) \times 1. \end{align*} \end{example} Let $\bR\times[-1,1]$ be a model strip, $\phi_{+}, \phi_{-}:\bR\times\{-1\} \to \bR\times\{+1\}$ be two homeomorphisms given by \begin{align*} \phi_{+}(t,-1) &= (t,1), & \phi_{-}(t,-1) &= (-t,1), \end{align*} for $t\in\bR$, and $C = \bR\times[-1,1] / \phi_{+}$ and $M = \bR\times[-1,1] / \phi_{-}$ be the quotient spaces. Thus $C$ (resp. $M$) is obtained from $\bR\times[-1,1]$ by identifying its boundary lines via an orientation-preserving (resp. orientation-reversing) homeomorphism. Therefore $C$ is a cylinder and $M$ is a M\"obius band. Moreover, the canonical foliation on $\bR\times[-1,1]$ yields certain foliations $\Partition_C$ and $\Partition_M$ on $C$ and $M$ respectively also belonging to class $\classFol$. We will call $C$ a \emph{standard cylinder} and $M$ a \emph{standard M\"obius band}. \subsection*{Foliation associated with a regular function.} A continuous function $f:\bR^2\to\bR$ will be called \emph{regular} whenever for each $z\in\bR^2$ there are local coordinates $(u,v)$ in which $z=(0,0)$ and $f(u,v) = u+\mathrm{const}$. It follows that the partition $\Partition$ of $\bR^2$ into connected components of level-sets $f^{-1}(t)$, $t\in\bR$, of $f$ is a \emph{foliation} in the usual sense, i.e.\! it is \emph{locally} homeomorphic with a partition of $\bR^2$ into parallel lines. We will say that $\Partition$ is a \emph{foliation associated with $f$}. Notice that \emph{$f$ has no local extrema}, whence all leaves of $\Partition$ are homeomorphic with $\bR$.
Indeed, if $\Partition$ has a closed leaf $\omega$, then by the Jordan curve theorem $\omega$ bounds a $2$-disk. Since $f$ is constant on $\omega$, it must have a local extremum inside that disk, which gives a contradiction. Let $J\subset\bR$ be a connected subset, i.e.\! an open, closed, or half-closed interval. Then by a \emph{cross-section} $\crosssect:J\to\bR^2$ of $\Partition$ we will mean a continuous path intersecting each leaf at most once. It easily follows that $\crosssect$ is a cross-section if and only if the composition $f\circ\crosssect:J\to\bR$ is strictly monotone. By a \emph{saturation of a cross-section} $\crosssect:J\to\bR^2$ we will mean the saturation of its image $S(\crosssect(J))$ and denote it simply by $S(\crosssect)$, cf.~\cite[\S1.4]{Kaplan:DJM:1940}. Kaplan~\cite[Theorem~30]{Kaplan:DJM:1940} proved that for a cross-section $\crosssect:[a,b]\to\bR^2$ of $\Partition$ its saturation $S(\crosssect)$ is foliated homeomorphic with $\bR\times[a,b]$ foliated by parallel lines. However, this result can be misleading, since $S(\crosssect)$ is not necessarily a closed subset of $\Xsp$. For instance, consider the foliation in Figure~\ref{fig:example:special_leafs}(b). Let $\sigma:[a,b]\to\bR^2$ be a cross-section passing through the special leaf $\alpha$ and such that $\sigma(a)\in A$ and $\sigma(b) \in B$. Then $\overline{S(\crosssect)} \setminus S(\crosssect) = \beta$. \subsection*{Kaplan's construction.} In~\cite[Theorem~29]{Kaplan:DJM:1940} W.~Kaplan has shown that the foliation $\Partition$ associated with a regular function $f$ belongs to class $\classFol$.
In fact, he associated to $\Partition$ a family of pairs $\xi = \{(\omega_i,\crosssect_i)\}_{i=-a}^{b}$ for some $a,b\in\bN\cup\{\infty\}$, where \begin{itemize} \item [(i)] $\omega_i$ is a leaf, which is special for $i\not=0$; \item[(ii)] $\crosssect_0:(-1,1)\to\bR^2$, $\crosssect_i:[0,1)\to\bR^2$ for $i>0$, and $\crosssect_i:(-1,0]\to\bR^2$ for $i<0$ are certain proper cross-sections of $\Partition$; \item[(iii)] $\crosssect_{i}(0) \in \omega_i$ for all $i$, \begin{align*} \crosssect_i[0,1) \ \cap \ S \Bigl( \mathop{\cup}\limits_{j=0}^{i-1}\crosssect_j[0,1) \Bigr) &= \crosssect_{i}(0), & i>0, \\ \crosssect_i(-1,0] \ \cap \ S \Bigl( \mathop{\cup}\limits_{j=i+1}^{0}\crosssect_j(-1,0] \Bigr) &= \crosssect_{i}(0), & i<0. \end{align*} \end{itemize} Kaplan proved that $\xi$ determines $\Partition$ up to a foliated homeomorphism. As noted above $S(\crosssect_{0})$ is foliated homeomorphic with $\bR\times(0,1)$ while $S(\crosssect_{i})$, $i\not=0$, is foliated homeomorphic with a strip $\bR\times[0,1)$. Therefore the family $\xi$ determines an at most countable family of strips $\{\Vsp_i = S(\crosssect_i)\}$ such that $\Vsp_{i+1}$ is glued to $\Vsp_i$ along the interval $\omega_i$ in their boundaries. Kaplan's aim was to make the family of such strips as small as possible, see the first paragraph of~\cite[Section~3.1]{Kaplan:DJM:1940}. However, the construction of the family $\xi$ then becomes ambiguous and depends on a particular choice of special leaves and cross-sections. This is illustrated in Figure~\ref{fig:example:Kaplan_construction}(b), where two such families for the same foliation are presented.
\begin{figure}[ht] \begin{tabular}{ccc} \includegraphics[height=3cm]{fig_intro_01} & \qquad\qquad & \includegraphics[height=3.3cm]{fig_intro_02} \\ (a) Foliation & & (b) Two distinct maximal families of cross-sections \end{tabular} \caption{}\label{fig:example:Kaplan_construction} \end{figure} On the other hand, cutting $\bR^2$ along special leaves is an unambiguous procedure and it gives a canonical decomposition of $\bR^2$. In the present paper we extend Kaplan's results to foliations $\Partition$ from class $\classFol$ on arbitrary surfaces $\Xsp$ and describe the topological structure of connected components of $\Xsp\setminus\Sigma$ and their closures, where $\Sigma$ is the union of all special leaves of $\Partition$. \begin{theorem}\label{th:open_strips} Let $\Xsp$ be a connected $2$-dimensional manifold and $\Partition$ be a foliation on $\Xsp$ belonging to class $\classFol$. Suppose that the family $\Sigma$ of all special leaves of $\Partition$ is locally finite, and let $\Qsp$ be a connected component of $\Xsp\setminus(\Sigma \cup \partial\Xsp)$. Then the following statements hold true. \begin{enumerate} \item\label{th:open_strips:Q} $\Qsp$ is foliated homeomorphic either with a standard cylinder $C$ or a standard M\"obius band $M$ or an open model strip $\bR\times(-1,1)$. Moreover, in the first two cases $\Qsp = \Xsp$. \item\label{th:open_strips:closure_of_Q} Suppose $\Qsp$ is foliated homeomorphic with an open model strip. Fix any foliated homeomorphism $\phi:\bR\times(-1,1)\to\Qsp$ and denote \begin{align*} \Qmin &= \phi\bigl(\bR\times(-1,0]\bigr), & \Qmax &= \phi\bigl(\bR\times[0,1)\bigr). \end{align*} Then the closures $\overline{\Qmin}$ and $\overline{\Qmax}$ are foliated homeomorphic to some model strips. \end{enumerate} \end{theorem} This theorem implies that the topological structure of the foliation $\Partition\in\classFol$ is uniquely determined by the combinatorics of gluing model strips.
Also notice that the intersection $(\overline{\Qmin}\cap\overline{\Qmax}) \setminus \phi(\bR\times0)$ can be non-empty, whence one cannot expect that $\overline{\Qsp}=\overline{\Qmin}\cup\overline{\Qmax}$ is homeomorphic with a model strip. The proof of Theorem~\ref{th:open_strips} will be given in \S\ref{sect:proof:1:th:open_strips} and~\ref{sect:proof:2:th:open_strips}. \section{Special points of non-Hausdorff spaces}\label{sect:spec_points} Throughout this section, let $\Ysp$ be a topological space. \begin{definition}\label{def:spec_point} Let $y\in \Ysp$ and $\beta_{y}$ be the family of all neighborhoods of $y$. Then the following set \[ \bnd{y} := \mathop{\cap}\limits_{\Vsp\in\beta_{y}} \overline{\Vsp} \] will be called the \emph{Hausdorff closure} of $y$. We will say that $y$ is a \emph{special} point of $\Ysp$ whenever $y\not=\bnd{y}$. The set of all special points of $\Ysp$ will be denoted by $\YspecPtSet$. \end{definition} Notice that $\Ysp$ is Hausdorff if and only if $y=\bnd{y}$ for all $y\in\Ysp$, i.e.\! when $\YspecPtSet=\varnothing$. For example, if $\Ysp$ is the line with two origins, i.e.\! the quotient of $\bR\times\{0,1\}$ by the identifications $(x,0)\sim(x,1)$ for $x\not=0$, then the two origins are special points of $\Ysp$, and the Hausdorff closure of each of them consists of both origins. \begin{lemma}\label{lm:spec_points} \begin{enumerate} \item\label{enum:lm:prop:symm_pt_bnd1} Let $y,z\in\Ysp$. Then $y \in \bnd{z}$ if and only if $z\in \bnd{y}$, however, in general, $\bnd{y} \not=\bnd{z}$. \item\label{enum:lm:prop:img_of_bnd_pt1} Let $f:\Ysp \to \Zsp$ be a continuous map into a Hausdorff topological space $\Zsp$. Then $f(\bnd{y}) = f(y)$ for all $y\in\Ysp$. \item\label{enum:lm:prop:compl_to_spec_Hausdorff1} The set $\Ysp\setminus\YspecPtSet$ of all non-special points is Hausdorff. \end{enumerate} \end{lemma} \begin{proof} (\ref{enum:lm:prop:symm_pt_bnd1}) Suppose $y\in\bnd{z} = \mathop{\cap}\limits_{\Vsp\in\beta_{z}} \overline{\Vsp}$, that is $y$ belongs to the closure of each neighborhood of $z$, which means in turn that every neighborhood of $y$ intersects every neighborhood of $z$. The latter property is symmetric with respect to $y$ and $z$, whence $z\in\bnd{y}$ as well.
\smallskip (\ref{enum:lm:prop:img_of_bnd_pt1}) Suppose $z\in \bnd{y}$ but $f(y) \not= f(z)$. Since $\Zsp$ is Hausdorff, there exist open disjoint neighborhoods $\Wsp_{f(y)}$ and $\Wsp_{f(z)}$ of the points $f(y)$ and $f(z)$. But then their preimages $\Vsp_y = f^{-1}(\Wsp_{f(y)})$ and $\Vsp_z = f^{-1}(\Wsp_{f(z)})$ are disjoint open neighborhoods of $y$ and $z$ respectively. Hence $z$ cannot belong to $\bnd{y}$, which contradicts the assumption. \smallskip (\ref{enum:lm:prop:compl_to_spec_Hausdorff1}) Let $y,z\in\Ysp\setminus\YspecPtSet$ be two distinct points. Then $\bnd{y}=\{y\}$ and $z\not=y$, whence there exists a neighborhood $\Vsp_y$ of $y$ such that $z\not\in\overline{\Vsp_y}$. Therefore $\Vsp_y$ and $\Vsp_z := \Ysp\setminus\overline{\Vsp_y}$ are disjoint neighborhoods of $y$ and $z$ respectively. This implies that $\Ysp\setminus\YspecPtSet$ is Hausdorff. \end{proof} \subsection*{Non-Hausdorff one-dimensional manifolds} Let $\Ysp$ be a $T_1$-topological space locally homeomorphic with open sets of $[0,1)$. Notice that we allow $\Ysp$ to be non-Hausdorff. Then, as usual, the set of points having an open neighborhood homeomorphic with $(0,1)$ will be denoted by $\Int{\Ysp}$ and called the \emph{interior} of $\Ysp$, while its complement $\partial\Ysp:=\Ysp\setminus\Int{\Ysp}$ will be called the \emph{boundary} of $\Ysp$. \begin{lemma}\label{lm:connected_components_of_nonspec_pts} Suppose that the set $\YspecPtSet$ of special points of $\Ysp$ is locally finite. Then every connected component $\Wsp$ of $\Ysp\setminus\YspecPtSet$ is open in $\Ysp$ and is homeomorphic with one of the following spaces: $[0,1)$, $(0,1)$, $[0,1]$, $S^1$. In the last two cases, i.e.\! when $\Wsp$ is compact, $\Wsp$ is a connected component of $\Ysp$. Every connected component of $\Ysp\setminus(\YspecPtSet\cup\partial\Ysp)$ is homeomorphic with $(0,1)$. \end{lemma} \begin{proof} Since $\Ysp$ is a $T_1$-space, every point $y\in\Ysp$ is a closed subset.
Also since $\YspecPtSet$ is locally finite, it follows that $\YspecPtSet$ is a closed subset, whence by~(\ref{enum:lm:prop:compl_to_spec_Hausdorff1}) of Lemma~\ref{lm:spec_points} $\Ysp\setminus\YspecPtSet$ is a Hausdorff topological space locally homeomorphic with $[0,1)$. Hence every connected component $\Wsp$ of $\Ysp\setminus\YspecPtSet$ is a one-dimensional manifold and so it is homeomorphic with one of the spaces $[0,1)$, $(0,1)$, $[0,1]$, $S^1$. Moreover, since $\Ysp\setminus\YspecPtSet$ is locally connected, we obtain that $\Wsp$ is open in $\Ysp\setminus\YspecPtSet$ and therefore in $\Ysp$ as well. Suppose $\Wsp$ is compact, i.e.\! it is homeomorphic either with $[0,1]$ or with $S^1$. Let us show that then $\Wsp$ is also closed in $\Ysp$. This will imply that $\Wsp$ is a connected component of $\Ysp$. Let $\{y_i\}_{i\in\bN} \subset \Wsp$ be a sequence converging to some $z\in\Ysp$. We should prove that $z\in\Wsp$. Since $\Wsp$ is compact, that sequence contains a subsequence converging to some $y\in\Wsp$. Hence if $\Vsp_y$ and $\Vsp_z$ are any two open neighborhoods of $y$ and $z$ respectively, then there exists $n>0$ such that $y_n\in\Vsp_y\cap\Vsp_z$. Thus $\Vsp_y\cap\Vsp_z\not=\varnothing$, which implies that $z\in\bnd{y} = \{y\}$, that is $z=y\in\Wsp$. We leave the last statement to the reader. \end{proof} Suppose $\Ysp$ is connected and not homeomorphic with a circle. Let $\{ \Wsp_{\ai} \}_{\ai\in\aSet}$ be the family of all connected components of $\Ysp\setminus(\YspecPtSet \cup \partial\Ysp)$. Then, due to Lemma~\ref{lm:connected_components_of_nonspec_pts}, for each $\ai\in\aSet$ there exists a homeomorphism $\phi_{\ai}:(-1,1) \to \Wsp_{\ai}$. Consider the following collection of subsets: \begin{align*} \bSet &= \bigl\{ \phi_{\ai}(-1,-\onehalf], \ \phi_{\ai}[\onehalf, 1) \bigr\}_{\ai\in\aSet}\,. \end{align*} Let $\Jint\in\bSet$.
Then we denote $\JOint := \phi_{\ai}(-1,-\onehalf)$ if $\Jint= \phi_{\ai}(-1,-\onehalf]$, and $\JOint:=\phi_{\ai}(\onehalf,1)$ if $\Jint= \phi_{\ai}[\onehalf,1)$ for some $\ai\in\aSet$. Thus each $\Jint\in\bSet$ is homeomorphic with a half-open segment $[0,1)$, and $\JOint$ is the subset of $\Jint$ corresponding to $(0,1)$. \begin{lemma}\label{lm:nbh_spec_points} Let $y\in\partial\Ysp$. Then there exists a unique $\Jint\in\bSet$ such that $y\in\overline{\Jint}$. In this case $\Vsp:=\{y\} \cup \JOint$ is an open neighborhood of $y$ and there exists a homeomorphism $\psi:[0,1] \to \{y\} \cup \Jint$ such that $\psi(0)=y$ and $\psi[0,1)=\Vsp$. Suppose $y\in\YspecPtSet\setminus\partial\Ysp$. Then there exist two distinct elements $\JA, \JB\in\bSet$ such that $y\in\overline{\JA}\cap\overline{\JB}$ and $y\not\in\overline{\JC}$ for all other $\JC\in\bSet$. Moreover, the set $\Vsp:=\JOA \cup \{y\} \cup \JOB$ is an open neighborhood of $y$ and there exists a homeomorphism $\mu:[-1,1] \to \JA \cup \{y\} \cup \JB$ such that $\mu(0)=y$ and $\mu(-1,1)=\Vsp$. \end{lemma} \begin{proof} We will consider only the case $y\in\YspecPtSet\setminus\partial\Ysp$. Notice that the family $\{\phi_{\ai}[-1,1]\}_{\ai\in\aSet}$ is locally finite and consists of closed sets. Therefore its union $\Zsp = \mathop{\cup}\limits_{\ai\in\aSet} \phi_{\ai}[-1,1]$ is closed. Hence the set $T = (\YspecPtSet\setminus\{y\}) \cup \partial\Ysp \cup \Zsp$ is closed and does not contain $y$. Therefore there exists a neighborhood $\Jj\subset \Ysp\setminus T$ of $y$ and a homeomorphism $\mu: [-\onehalf,\onehalf] \to \Jj$ such that $\mu(0)=y$ and $\mu(-\onehalf,\onehalf)$ is an open neighborhood of $y$. Notice that $\Jj\setminus\{y\}$ consists of exactly two connected components $\Ac=\mu[-\onehalf,0)$ and $\Bc=\mu(0,\onehalf]$, and is contained in $\Ysp\setminus \bigl( \YspecPtSet \cup \partial\Ysp \cup \Zsp\bigr) = \mathop{\cup}\limits_{\Jint\in\bSet}\JOint$.
Hence $\Ac \subset \JOA$ and $\Bc \subset \JOB$ for some $\JA, \JB\in\bSet$, see Figure~\ref{fig:spec_pt_nbh}. \begin{figure}[ht] \includegraphics[height=2.3cm]{spec_pt_nbh} \caption{}\label{fig:spec_pt_nbh} \end{figure} Moreover, every neighborhood of $y$ intersects both $\Ac$ and $\Bc$ and therefore both $\JA$ and $\JB$. Hence $y\in\overline{\JA}\cap\overline{\JB}$ and $y\not\in\overline{\JC}$ for all other $\JC\in\bSet$ distinct from $\JA$ and $\JB$. Fix any homeomorphisms $\ahom:[-1,0) \to \JA$ and $\bhom:(0,1] \to \JB$. Notice that $\Ac$ is not contained in any compact subset $P$ of $\JA$, since otherwise $y \in\overline{\Ac} \subset P \subset \JA$, which contradicts the assumption that $y\not\in\JA$. This implies that $\ahom^{-1}(\Ac) = [a,0) \subset (-1,0)$, where $a=\ahom^{-1}\circ\mu(-\onehalf)\in(-1,0)$. By the same arguments, $\bhom^{-1}(\Bc) = (0,b]\subset (0,1)$, where $b=\bhom^{-1}\circ\mu(\onehalf)\in(0,1)$. \begin{sublemma} $\JA \not=\JB$. \end{sublemma} \begin{proof} If $\JA=\JB$, then we have a homeomorphism $\chom=\bhom^{-1}\circ\ahom:[-1,0)\to(0,1]$. Hence there exists $c \in(a,0)$ such that $c'=\chom(c) \in (0,b)$. Then $\ahom(c) \in \Ac$ and $\bhom\circ\chom(c)\in\Bc$. But $\bhom\circ\chom(c) = \ahom(c)$, and so $\Ac\cap\Bc\not=\varnothing$, which contradicts the fact that $\Ac$ and $\Bc$ are disjoint. \end{proof} Now fix arbitrary orientation preserving homeomorphisms $\aahom:[-1,-\onehalf]\to[-1,a]$ and $\bbhom:[\onehalf,1]\to[b,1]$ and define the map $\psi:[-1,1] \to \JA \cup \{y\} \cup \JB$ by the formula \[ \psi(t) = \begin{cases} \ahom\circ \aahom(t), & t\in[-1, -\onehalf], \\ \mu(t), & t\in[-\onehalf, \onehalf],\\ \bhom\circ \bbhom(t), & t\in[\onehalf,1]. \end{cases} \] One easily checks that $\psi$ is the required homeomorphism. \end{proof} \section{Partitions} Let $\Xsp$ be a topological space, $\Partition$ be a partition of $\Xsp$, $\Ysp = \Xsp/\Partition$ be the quotient space, and $\prj:\Xsp\to\Ysp$ be the corresponding quotient map.
We will endow $\Ysp$ with the \emph{factor topology}, so a subset $\Vsp \subset \Ysp$ is open if and only if its preimage $\prj^{-1}(\Vsp)$ is open in $\Xsp$. A \emph{saturation} $S(\Usp)$ of a subset $\Usp\subset \Xsp$ with respect to $\Partition$ is the union of all $\elem \in\Partition$ such that $\elem\cap \Usp\not=\varnothing$. Equivalently, $S(\Usp) = \prj^{-1}(\prj(\Usp))$. A subset $\Usp$ is \emph{saturated} if $\Usp = S(\Usp)$. Evidently, if $A \cap S(B) = \varnothing$, then $S(A) \cap S(B)=\varnothing$ as well. \begin{lemma}\label{lm:prop} \begin{enumerate} \item\label{enum:lm:prop:T1_space} $\Ysp$ is a $T_1$-space if and only if each element $\elem\in\Partition$ is closed. \item\label{enum:lm:prop:prj_open_equiv_conditions} The following conditions are equivalent: \begin{enumerate} \item\label{enum:lm:prop:e:prj_open} the map $\prj:\Xsp\to\Ysp$ is open; \item\label{enum:lm:prop:e:saturation_is_open} for each $x\in\Xsp$ there exists an open neighborhood $\Usp$ whose saturation $S(\Usp)$ is open; \item\label{enum:lm:prop:e:cover_with_open_restr} there exists an open cover $\beta=\{\Usp_i\}_{i\in\Lambda}$ of $\Xsp$ such that for each $i\in\Lambda$ the restriction $\prj|_{\Usp_i}: \Usp_i\to\prj(\Usp_i)$ is an open map. \end{enumerate} \item\label{enum:lm:prop:prj_open} If $\prj$ is open then for each saturated subset $B$ we have that \begin{gather} \label{equ:X_setminus_ovrSA} \Xsp\setminus\overline{B} = S(\Xsp\setminus\overline{B}), \\ \label{equ:image_of_closure_B} \prj(\overline{B}) = \overline{\prj(B)}. \end{gather} In particular, $\overline{S(A)}$ and $\Xsp\setminus\overline{S(A)}$ are saturated for each subset $A \subset \Xsp$. \item\label{enum:lm:prop:loc_finite_families} Let $\beta = \{\Wsp_i\}_{i\in\Lambda}$ be a family of subsets of $\Ysp$, and $\alpha = \{\prj^{-1}(\Wsp_i)\}_{i\in\Lambda}$ be the corresponding family of their preimages in $\Xsp$. If $\beta$ is locally finite, then so is $\alpha$.
Conversely, if $\alpha$ is locally finite and $\prj$ is open then $\beta$ is locally finite as well. \item\label{enum:lm:prop:loc_finite_families_nbh} Suppose $\Xsp$ is a normal topological space and $\alpha = \{\elem_i\}_{i\in\bN}$ is a locally finite family of mutually disjoint closed subsets of $\Xsp$. Then for each $i\in\bN$ there exists a neighborhood $\Usp_i$ of $\elem_i$ such that $\overline{\Usp_i}\cap\overline{\Usp_j}=\varnothing$ for $i\not=j$. \item\label{enum:lm:prop:homeomorphism} Let $f:A \to B$ be a bijection between topological spaces. Suppose that $\{\Ksp_i\}_{i\in\Lambda}$ is a locally finite cover of $A$ by closed sets. If each of the restrictions $f|_{\Ksp_i}: \Ksp_i\to B$ is continuous, then $f$ is continuous itself. Moreover, suppose the family $\{f(\Ksp_i)\}_{i\in\Lambda}$ is locally finite, $f(\Ksp_i)$ is closed in $B$, and the restriction $f|_{\Ksp_i}: \Ksp_i\to f(\Ksp_i)$ is a homeomorphism for each $i\in\Lambda$. Then $f$ is a homeomorphism. \end{enumerate} \end{lemma} \begin{proof} Statements (\ref{enum:lm:prop:T1_space}), (\ref{enum:lm:prop:prj_open_equiv_conditions}), and~(\ref{enum:lm:prop:homeomorphism}) are easy and we leave them to the reader. \smallskip (\ref{enum:lm:prop:prj_open}) Suppose $\prj$ is an open map and let $B \subset \Xsp$ be a saturated subset. Then $\Xsp\setminus B$ is also saturated, i.e.\! $S(\Xsp\setminus B)=\Xsp\setminus B$, and so \[ \Xsp\setminus\overline{B} \ \subset \ S(\Xsp\setminus\overline{B}) \ \subset \ S(\Xsp\setminus B) = \Xsp\setminus B. \] Hence \[ \overline{B} \ \supset \ \Xsp\setminus S(\Xsp\setminus\overline{B}) \ \supset \ B. \] As $\Xsp\setminus\overline{B}$ is open, $S(\Xsp\setminus\overline{B})$ is open as well, and therefore $\Xsp\setminus S(\Xsp\setminus\overline{B})$ is a closed subset containing $B$. Therefore it must contain the closure $\overline{B}$, hence $\overline{B} = \Xsp\setminus S(\Xsp\setminus\overline{B})$, which implies~\eqref{equ:X_setminus_ovrSA}.
\smallskip Let us prove~\eqref{equ:image_of_closure_B}. Since $\prj$ is continuous, $\prj^{-1}(\overline{\prj(B)})$ is a closed subset containing $B$. Therefore it contains $\overline{B}$, and so $\prj(\overline{B}) \subset \overline{\prj(B)}$. Conversely, by~\eqref{equ:X_setminus_ovrSA}, $\overline{B}$ is saturated and closed. Therefore, by definition of the quotient topology, $\prj(\overline{B})$ is a closed subset and it contains $\prj(B)$. Hence it also contains $\overline{\prj(B)}$, i.e.\! $\prj(\overline{B}) \supset \overline{\prj(B)}$. \smallskip (\ref{enum:lm:prop:loc_finite_families}) Suppose $\beta$ is a locally finite family and $x\in \Xsp$. We should find a neighborhood $\Usp$ of $x$ which intersects only finitely many elements from $\alpha$. Let $y=\prj(x)$. Since $\beta$ is locally finite, there exists a neighborhood $\Vsp$ of $y$ intersecting only finitely many elements $\Wsp_{i_1},\ldots,\Wsp_{i_k} \in \beta$. Then $\prj^{-1}(\Vsp)$ is an open neighborhood of $x$ intersecting only the elements $\prj^{-1}(\Wsp_{i_1}),\ldots,\prj^{-1}(\Wsp_{i_k})$ of $\alpha$. Conversely, suppose $\alpha$ is locally finite and $\prj$ is open. Let $y\in\Ysp$ and $x\in\Xsp$ be such that $\prj(x)=y$. Then there exists a neighborhood $\Usp$ of $x$ intersecting only finitely many elements, say $\prj^{-1}(\Wsp_{i_1}),\ldots,\prj^{-1}(\Wsp_{i_k})$, of $\alpha$. Therefore its saturation $S(\Usp) = \prj^{-1}(\prj(\Usp))$ also intersects only $\prj^{-1}(\Wsp_{i_1}),\ldots,\prj^{-1}(\Wsp_{i_k})$. Since $\prj$ is open, the image $\prj(\Usp)$ is an open neighborhood of $y$. We claim that $\prj(\Usp)$ intersects only the elements $\Wsp_{i_1},\ldots,\Wsp_{i_k}$ of $\beta$. Indeed, if $\prj(\Usp) \cap \Wsp_i \not=\varnothing$ for some $i\in \Lambda$, then $\prj^{-1}(\prj(\Usp)) \cap \prj^{-1}(\Wsp_i)\not=\varnothing$ which is possible only when $i \in \{i_1,\ldots,i_k\}$.
\smallskip (\ref{enum:lm:prop:loc_finite_families_nbh}) For each $i\in\bN$ consider the subfamily $\alpha_i = \{ \elem_j \}_{j\geq i}$ of $\alpha$, so $\alpha=\alpha_1$ and $\alpha_{i+1} \subset \alpha_i$ for all $i\in\bN$. Then each $\alpha_{i}$ is locally finite as well, and therefore the union $A_i = \mathop{\cup}\limits_{j=i}^{\infty} \elem_j$ is a closed subset of $\Xsp$. Since $\Xsp$ is normal and $\elem_1$ and $A_2$ are mutually disjoint and closed, there exists an open neighborhood $\Usp_1$ of $\elem_1$ such that $\overline{\Usp_1} \cap A_2 = \varnothing$. Then $\elem_2$ and $\overline{\Usp_1} \cup A_3$ are mutually disjoint and closed, whence there exists an open neighborhood $\Usp_2$ of $\elem_2$ whose closure does not intersect $\overline{\Usp_1} \cup A_3$. Repeating these arguments, we construct for each $i\in\bN$ an open neighborhood $\Usp_i$ of $\elem_i$ such that $\overline{\Usp_i}$ does not intersect $\bigl(\cup_{j=1}^{i-1} \overline{\Usp_j}\bigr)\cup A_{i+1}$. Then $\overline{\Usp_i} \cap \overline{\Usp_j} = \varnothing$ for all $i\not=j\in\bN$. \end{proof} \begin{definition}\label{def:loc_triv_partition} We will say that a partition $\Partition$ is \emph{locally trivial} if for each $\omega\in\Partition$ there exist an open neighborhood $\Usp$ of $\omega$, a topological space $J$, a point $t_0\in J$, and a homeomorphism $\phi:\omega\times J \to \Usp$ such that $\phi(\omega \times t)$ is an element of $\Partition$ for all $t\in J$ and $\phi(x,t_0) = x$ for all $x\in\omega$. \end{definition} In particular, a foliation belonging to class $\classFol$ is a locally trivial partition.
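To illustrate Definition~\ref{def:loc_triv_partition}, here is a simple example (it is not used in the sequel). Let $\Partition$ be the partition of $\bR^2$ into the vertical lines $\omega_s = s\times\bR$, $s\in\bR$. For $\omega = \omega_0$ one can take $\Usp=\bR^2$, $J=\bR$, $t_0=0$, and
\[
\phi:\omega_0\times\bR\to\bR^2,
\qquad
\phi\bigl((0,y),\,t\bigr) = (t,y).
\]
Then $\phi$ is a homeomorphism, $\phi(\omega_0\times t) = \omega_t \in \Partition$ for all $t\in\bR$, and $\phi(x,0)=x$ for all $x\in\omega_0$, so $\Partition$ is locally trivial. In this case the quotient map is the projection $(x,y)\mapsto x$ onto $\Ysp\cong\bR$, which is open, and $\Partition$ has no special elements.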
Notice that in the notation of Definition~\ref{def:loc_triv_partition} $\Usp$ is saturated and open in $\Xsp$, whence its image $\Vsp =\prj(\Usp)$ is open in $\Ysp$ and we have the following commutative diagram: \begin{equation}\label{equ:reformulation_loc_triv_partition} \xymatrix{ \omega\times J \ar[rr]^-{\phi}_-{\cong} \ar[d]_-{q_2} && \Usp = \prj^{-1}(\Vsp) \ar[d]^-{\prj} \\ J \ar[rr]^-{\xi}_-{1-1} && \Vsp } \end{equation} where $q_2$ is the projection onto the second factor and $\xi$ is the induced one-to-one continuous map, which is not necessarily a homeomorphism. \begin{lemma}\label{lm:loc_triv_relations} The following conditions are equivalent: \begin{enumerate} \item\label{lm:loc_triv_relations:enum:prj_is_ltfibr} the quotient map $\prj:\Xsp\to\Ysp$ is a locally trivial fibration; \item\label{lm:loc_triv_relations:enum:partit_lt__prj_open} the partition $\Partition$ is locally trivial and the quotient map $\prj:\Xsp\to\Ysp$ is open. \end{enumerate} \end{lemma} \begin{proof} \eqref{lm:loc_triv_relations:enum:prj_is_ltfibr}$\Rightarrow$\eqref{lm:loc_triv_relations:enum:partit_lt__prj_open}. Suppose $\prj$ is a locally trivial fibration. We claim that then $\Partition$ is locally trivial. Indeed, let $\omega\in\Partition$ and $y=\prj(\omega)\in\Ysp$. Since $\prj$ is a locally trivial fibration, there exist a neighborhood $\Vsp$ of $y$ and the following commutative diagram: \begin{equation}\label{equ:reformulation_loc_triv_fibration} \xymatrix{ \omega\times \Vsp \ar[rr]^-{\phi}_-{\cong} \ar[d]_-{q_2} && \Usp_{\omega} = \prj^{-1}(\Vsp) \ar[d]^-{\prj} \\ \Vsp \ar[rr]^-{\xi=\mathrm{id}_{\Vsp}} && \Vsp } \end{equation} in which $\phi$ is a homeomorphism. This diagram coincides with~\eqref{equ:reformulation_loc_triv_partition} for $J=\Vsp$, and therefore $\Partition$ is a locally trivial partition. Let us prove that $\prj$ is an open map. Notice that in Diagram~\eqref{equ:reformulation_loc_triv_fibration} $q_2$ is an open map as a coordinate projection.
Since $\phi$ is a homeomorphism, it follows that the restriction $\prj|_{\Usp_{\omega}}$ is an open map as well. But then $\beta=\{\Usp_{\omega}\}_{\omega\in\Partition}$ is an open cover of $\Xsp$ such that each restriction $\prj|_{\Usp_{\omega}}$ is open. Therefore by~(\ref{enum:lm:prop:prj_open_equiv_conditions}) of Lemma~\ref{lm:prop} $\prj$ is open. \eqref{lm:loc_triv_relations:enum:partit_lt__prj_open}$\Rightarrow$\eqref{lm:loc_triv_relations:enum:prj_is_ltfibr}. Suppose $\prj$ is an open map and $\Partition$ is locally trivial. We claim that then in~\eqref{equ:reformulation_loc_triv_partition} the map $\xi$ is open, and therefore it is a homeomorphism. This will imply that $\prj$ is a locally trivial fibration. Let $T \subset J$ be an open subset. Then $\phi\circ q_2^{-1}(T)$ is open in $\Usp$. Since $\prj$ is open, we get that $\xi(T) = \prj\circ\phi\circ q_2^{-1}(T)$ is open in $\Vsp$. Thus $\xi$ is an open map. \end{proof} \begin{definition}\label{def:spec_element} An element $\omega \in \Partition$ will be called \emph{special} if its image $y=\prj(\omega)\in\Ysp$ is a special point of $\Ysp$, i.e. $y\not=\bnd{y}:=\mathop{\cap}\limits_{\Vsp\in\beta_{y}} \overline{\Vsp}$, where $\beta_{y}$ is the family of all neighborhoods of $y$, see Definition~\ref{def:spec_point}. Let also \begin{align*} \bnd{\omega} &= \underset{N(\omega)} {\bigcap} \overline{S\left( N(\omega) \right)}, & \bnds{\omega} &= \underset{N_{S}(\omega)} {\bigcap} \overline{N_{S}(\omega)}, \end{align*} where $N(\omega)$ runs over \emph{all open} neighborhoods of $\omega$ and $N_S(\omega)$ runs over \emph{all saturated open} neighborhoods of $\omega$. \end{definition} \begin{lemma}\label{lm:special_elements} Let $\omega\in\Partition$ and $y = \prj(\omega)$. Then \begin{align*} \bnd{\omega} & \ \subset \ \bnds{\omega} \ \subset \ \prj^{-1}(\bnd{y}). \end{align*} If $\prj$ is an open map, then \begin{align*} \bnd{\omega} &= \bnds{\omega} = \prj^{-1}(\bnd{y}), & \prj(\bnds{\omega}) &= \bnd{y}. 
\end{align*} \end{lemma} \begin{proof} First we establish relations between $\bnd{\omega}$ and $\bnds{\omega}$. Notice that the family $\mathcal{A} = \{ S(N(\omega)) \}$ of saturations of all open neighborhoods of $\omega$ includes the family $\mathcal{B} = \{ N_S(\omega) \}$ of all saturated open neighborhoods of $\omega$. Therefore the intersection $\bnd{\omega}$ of the larger family $\mathcal{A}$ is contained in the intersection $\bnds{\omega}$ of the smaller family $\mathcal{B}$, that is $\bnd{\omega} \ \subset \ \bnds{\omega}$. If $\prj$ is an open map, so that the saturation of each open set is open, then $\mathcal{A} = \mathcal{B}$, and therefore $\bnd{\omega}=\bnds{\omega}$. \smallskip Now we will describe relationships between $\bnds{\omega}$ and $\bnd{y}$. By definition of the quotient topology on $\Ysp$ the map $\prj$ induces a bijection between the families $\mathcal{B}$ and $\beta_{y}$. Moreover, if $N_S(\omega) \in \mathcal{B}$ is an open saturated neighborhood of $\omega$ and $\Vsp = \prj(N_S(\omega))$ is an open neighborhood of $y$, then, due to continuity of $\prj$, we have that $\prj(\overline{N_S(\omega)}) \subset \overline{\Vsp}$. Hence $\prj(\bnds{\omega}) \subset \bnd{y}$, that is $\bnds{\omega} \ \subset \ \prj^{-1}(\bnd{y})$. If $\prj$ is open, then, due to~\eqref{equ:image_of_closure_B}, $\prj(\overline{N_S(\omega)}) = \overline{\prj(N_S(\omega))} = \overline{\Vsp}$, whence \begin{equation}\label{equ:p_bnd_omega} \prj(\bnds{\omega}) = \prj\Bigl(\,\bigcap_{N_{S}(\omega)\in\mathcal{B}}\overline{N_{S}(\omega)} \,\Bigr) = \prj\Bigl(\,\bigcap_{V \in \beta_{y}} \prj^{-1}(\overline{V}) \,\Bigr) = \bigcap_{V \in \beta_{y}} \overline{V} = \bnd{y}. \end{equation} Finally, as $\bnds{\omega}$ is saturated being an intersection of saturated sets, it follows from~\eqref{equ:p_bnd_omega} that $\bnds{\omega} = \prj^{-1}(\bnd{y})$.
\end{proof} \begin{lemma}\label{lm:local_prop} Suppose that the following conditions hold true: \begin{enumerate} \item[\rm(a)] $\prj:\Xsp\to\Ysp$ is a locally trivial fibration with fiber $\bR$; \item[\rm(b)] the set $\Sigma$ of special elements of $\Partition$ is locally finite; \item[\rm(c)] $\Ysp$ is a $T_1$-space locally homeomorphic with open subsets of $[0,1)$. \end{enumerate} Then every connected component $\Qsp$ of $\Xsp\setminus\Sigma$ is open in $\Xsp$ and is foliated homeomorphic with one of the following five striped surfaces: the model strips $\bR\times(0,1)$, $\bR\times[0,1)$, $\bR\times[0,1]$, the standard cylinder $C$, or the standard M\"obius band $M$. Moreover, in the last three cases, $\Qsp$ is also closed in $\Xsp$. \end{lemma} \begin{proof} By (a) and Lemma~\ref{lm:loc_triv_relations} $\prj$ is an open map. Therefore by Lemma~\ref{lm:special_elements} $\Sigma=\prj^{-1}(\YspecPtSet)$ and $\prj(\Sigma)=\YspecPtSet$, where $\YspecPtSet$ is the set of special points of $\Ysp$. Then by (b) and~(\ref{enum:lm:prop:loc_finite_families}) of Lemma~\ref{lm:prop} $\YspecPtSet$ is also a locally finite family of points. Due to (c) each point in $\Ysp$ is closed, whence $\YspecPtSet$ is closed in $\Ysp$. Let $\Wsp$ be a connected component of $\Ysp\setminus\YspecPtSet$. Then $\Wsp$ is open in $\Ysp$ and both open and closed in $\Ysp\setminus\YspecPtSet$. Therefore $\Qsp = \prj^{-1}(\Wsp)$ is open in $\Xsp$ and both open and closed in $\Xsp\setminus\Sigma$, i.e.\! $\Qsp$ is a connected component of $\Xsp\setminus\Sigma$. Moreover, due to (a), the restriction $\prj:\Qsp \to\Wsp$ is a locally trivial fibration with fiber $\bR$, and by Lemma~\ref{lm:connected_components_of_nonspec_pts} $\Wsp$ is homeomorphic with one of the following spaces: $(0,1)$, $[0,1)$, $[0,1]$, $S^1$.
Therefore in the first three cases (when $\Wsp$ is contractible) $\Qsp$ is fiber-wise homeomorphic to the product $\bR\times\Wsp$, and in the last case, when $\Wsp\cong S^1$, $\Qsp$ is fiber-wise homeomorphic either with the standard cylinder $C$ or with the standard M\"obius band $M$. It remains to show that every connected component $\Qsp$ of $\Xsp\setminus\Sigma$ can be represented as $\Qsp = \prj^{-1}(\Wsp)$ for some connected component $\Wsp$ of $\Ysp\setminus\YspecPtSet$. Let $\Wsp = \prj(\Qsp)$. We claim that $\Wsp$ is a connected component of $\Ysp\setminus\YspecPtSet$. Indeed, let $\Wsp'$ be the connected component of $\Ysp\setminus\YspecPtSet$ containing $\Wsp$. Then, as noted above, $\prj^{-1}(\Wsp')$ is connected and contains $\Qsp$; since $\Qsp$ is a connected component of $\Xsp\setminus\Sigma$, it follows that $\Qsp = \prj^{-1}(\Wsp')$, and so $\Wsp = \Wsp'$. \end{proof} \section{Proof of~\eqref{th:open_strips:Q} of Theorem~\ref{th:open_strips}}\label{sect:proof:1:th:open_strips} Let $\Xsp$ be a $2$-dimensional manifold and $\Partition$ be a $1$-dimensional foliation on $\Xsp$ belonging to class $\classFol$ and such that the set $\Sigma$ of special leaves of $\Partition$ is locally finite. Let also $\Ysp=\Xsp/\Partition$ be the space of leaves endowed with the corresponding factor topology and $\prj:\Xsp\to\Ysp$ be the factor map. We claim that $\prj$ satisfies conditions (a)--(c) of Lemma~\ref{lm:local_prop}. Indeed, by Lemma~\ref{lm:classFol_prop} $\prj$ is open, and by Lemma~\ref{lm:loc_triv_relations} it is a locally trivial fibration with fiber $\bR$, so condition (a) holds. Condition (b) holds by assumption and condition (c) directly follows from the definition of the class $\classFol$. Therefore by Lemma~\ref{lm:local_prop} every connected component of $\Xsp\setminus\Sigma$ is foliated homeomorphic with one of the spaces: $\bR\times(0,1)$, $\bR\times[0,1)$, $\bR\times[0,1]$, $C$, $M$.
Applying the above result to the surface $\Xsp\setminus\partial\Xsp$ we get that every connected component of $\Xsp\setminus(\Sigma\cup\partial\Xsp)$ is foliated homeomorphic with one of the spaces: $\bR\times(0,1)$, $C$, or $M$. Statement~\eqref{th:open_strips:Q} of Theorem~\ref{th:open_strips} is proved. \section{Trapezoids}\label{sect:trapezoids} The results of this section will be used in the proof of statement~\eqref{th:open_strips:closure_of_Q} of Theorem~\ref{th:open_strips}. Let $c<d$ and $\alpha,\beta:(c,d] \to \bR$ be two continuous functions such that $\alpha(y) < \beta(y)$ for all $y\in(c,d]$. Then the subset \[\trap = \{ (x,y) \in \bR^2 \mid \alpha(y) \leq x \leq \beta(y), \ c< y\leq d \}\] will be called a \emph{half open trapezoid} or simply a \emph{trapezoid}. In this case $[\alpha(d), \beta(d)] \times d$ is the \emph{upper base} of $\trap$, $d$ is the \emph{level} of the upper base, $d-c$ is the \emph{altitude} of $\trap$, and the set \[ \roof(\trap) := \{ (\alpha(y), y) \}_{y\in(c,d]} \ \cup \ [\alpha(d), \beta(d)] \times d \ \cup \ \{ (\beta(y), y) \}_{y\in(c,d]} \] is the \emph{roof} of $\trap$, see Figure~\ref{fig:trapezoid}a). \begin{figure}[ht] \begin{tabular}{ccc} \includegraphics[height=2.5cm]{trapezoid} & \qquad \qquad & \includegraphics[height=2.5cm]{trap_construction} \\ a) Half open trapezoid & & b) Construction of trapezoid \end{tabular} \caption{Half open trapezoid}\label{fig:trapezoid} \end{figure} Notice that if $\phi:\bR\times(c,d] \to \bR\times(c,d]$ is a homeomorphism preserving the second coordinate, i.e. $\phi(\bR\times y) =\bR\times y$ for all $y\in(c,d]$, then $\phi(\trap)$ is a trapezoid as well. In general, $\alpha$ and $\beta$ can be unbounded or have no limits as $y\to c+0$. Suppose, in addition, that there exist finite or infinite limits \begin{align*} \lim\limits_{y\to c+0} \alpha(y) &= a, & \lim\limits_{y\to c+0} \beta(y) &= b \end{align*} such that $a<b$. Then $(a,b)\times c$ will be called the \emph{(lower) base} of $\trap$.
If $a$ and $b$ are finite numbers, then $\trap$ will be called a \emph{trapezoid with bounded base}, and the set \[\overline{\trap} = \trap \cup [a,b]\times c\] will be called a \emph{closed} trapezoid. In particular, if $\alpha$ and $\beta$ are constant functions, then the trapezoid $\trap$ will be called a \emph{rectangle}. \begin{lemma}\label{lemma:graph_neighborhood} Let $J=(a,b)\times 0 \subset \bR^2$ be an open interval, $N = J \ \cup \ \bigl(\bR \times(0,+\infty)\bigr)$, and $U$ be an open neighborhood of $J$ in $N$. Then there exists a half open trapezoid $\trap \subset U$ with base $J$, see Figure~\ref{fig:trapezoid}b). \end{lemma} \begin{proof} Fix any two sequences $\{a_i\}_{i=0}^{\infty}, \{b_i\}_{i=0}^{\infty} \subset (a,b)$ such that $\lim\limits_{i\to\infty} a_i= a$, $\lim\limits_{i\to\infty} b_i = b$, and \[ \cdots < a_{i+1} < a_i < \cdots < a_0 < b_0 < \cdots < b_i < b_{i+1} < \cdots \] Let also $J_i = [a_i,b_i]\times 0$. Since $U$ is an open neighborhood of $J_i$ and $J_i$ is compact, there exists $r_i>0$ such that $[a_i,b_i] \times [0,r_i] \subset U$. One can assume that $\lim\limits_{i\to\infty} r_i = 0$ and $\{r_i\}$ is strictly decreasing. Now let $\alpha:(a,a_0]\to(0,+\infty)$ and $\beta:[b_0,b)\to(0,+\infty)$ be the unique continuous functions such that for each $i\geq0$ \begin{itemize} \item[(i)] $\alpha(a_i) = \beta(b_i) = r_{i+1}$; \item[(ii)] the restrictions $\alpha|_{[a_{i+1},a_i]}$ and $\beta|_{[b_{i},b_{i+1}]}$ are linear. \end{itemize} Then one easily checks that the functions $\alpha$ and $\beta$ are strictly monotone and their inverses $\alpha^{-1}$ and $\beta^{-1}$ determine a half open trapezoid $\trap\subset U$ with base $J$. \end{proof} \begin{proposition}\label{prop:trap_make_rectangles} Let $\trap_i \subset \bR\times(c,d]$, $i\in\bN$, be half open trapezoids with upper bases at levels $d_i \in(c,d]$ such that $\trap_i\cap\trap_j = \varnothing$ for $i\not=j$ and $\lim\limits_{i \to \infty} d_i = c$.
Then there exists a homeomorphism $\eta:\bR\times(c,d]\to\bR\times(c,d]$ such that \begin{itemize} \item[(i)] $\eta(\bR\times y) = \bR\times y$ for all $y\in(c,d]$; \item[(ii)] $\eta(\trap_i)$ is a half open rectangle. \end{itemize} \end{proposition} \begin{proof} We need the following three lemmas. It will be convenient to say that for a function $f:[a,b]\to\bR$ its \emph{graph} is the subset $\{(f(y), y) \mid y\in [a,b]\} \ \subset \ \bR^2$, so we just swap the coordinates with respect to the usual definition. In particular, for $q\in\bR$ a vertical segment $q\times[a,b]$ can be regarded as the graph of the constant function $y\mapsto q$, $y\in[a,b]$. \begin{sublemma}\label{lm:funcction_uk}{\rm (cf.~\cite[Lemma~6.1.1]{Maksymenko:BSM:2006})}. Let $\Delta_k = \{ (y_1,\ldots,y_k) \in \bR^k \mid y_1<y_2<\ldots<y_k \}$ and $q_1 < q_2 <\ldots < q_k \in \bR$. Then there exists a $C^{\infty}$ function $u_k:\bR\times\Delta_k\to\bR$ having the following properties: \begin{itemize} \item[\rm(a)] the correspondence $x \mapsto u_k(x; y_1,\ldots,y_k)$ is an orientation preserving homeomorphism $\bR\to\bR$ for all $(y_1,\ldots,y_k)\in\Delta_k$; \item[\rm(b)] $u_k(x; q_1,\ldots,q_k) = x$ for all $x\in\bR$; \item[\rm(c)] $u_k(y_i; y_1,\ldots,y_k) = q_i$ for $i=1,\ldots,k$. \end{itemize} \end{sublemma} \begin{proof} The construction of $u_k$ is similar to \cite[Lemma~6.1.1]{Maksymenko:BSM:2006}. For instance, one can set \begin{align*} u_1(x; y_1) &= x - y_1 + q_1, & u_2(x; y_1,y_2) &= q_1 + \frac{q_2-q_1}{y_2-y_1} (x - y_1). \end{align*} We leave the details to the reader. \end{proof} \begin{sublemma}\label{lm:rectification_finite} Let $\func_i: (c, s] \to \bR$, $i = 1, \ldots, k$, be a finite family of continuous functions such that $\func_i(y) \not= \func_j(y)$ whenever $i\not=j$ and $y \in (c, s]$.
Then there exists a homeomorphism $\phi: \bR\times(c,s]\to \bR\times(c,s]$ such that \begin{itemize} \item[\rm(1)] $\phi(\bR\times y) = \bR\times y$ for all $y \in (c,s]$; \item[\rm(2)] $\phi$ is fixed on $\bR\times s$; \item[\rm(3)] $\phi$ maps the graph $\{(\func_i(y), y) \mid y\in(c,s]\}$ of $\func_i$, $i \in \{1, \ldots, k\}$, onto the vertical segment $\func_i(s) \times (c, s]$. \end{itemize} \end{sublemma} \begin{proof} One can assume that $\func_i < \func_j$ for $i<j$. Let $u_k:\bR\times\Delta_k\to\bR$ be a function from Lemma~\ref{lm:funcction_uk} constructed for the numbers $q_i = \func_i(s)$, $i \in \{1, \ldots, k\}$. Then a homeomorphism $\phi$ satisfying (1)--(3) can be defined by the following formula: \[ \phi(x,y) = \bigl( u_k(x; \func_1(y), \ldots,\func_k(y)), \ y \bigr). \] Indeed, (1) is evident. Moreover, due to property (b) of $u_k$ we have that \[ \phi(x,s)= \bigl( u_k(x; \func_1(s), \ldots,\func_k(s)), \ s \bigr) = (x,s) \] which proves (2). Finally, by property (c) of $u_k$ \[ \phi(\func_i(y),y)=\bigl( u_k(\func_i(y); \func_1(y), \ldots,\func_k(y)), \ y \bigr) = (\func_i(s), y), \] so (3) is also satisfied. \end{proof} \begin{sublemma}\label{lemma:rectification_inifinite} Let $\{\dd_i\}_{i\in\bN} \subset (c,d]$ be a sequence with $\lim\limits_{i \to \infty} \dd_i = c$, and for each $i\in\bN$ let $\func_i:(c, \dd_i]\to\bR$ be a continuous function such that the graphs of $\func_i$ and $\func_j$ are mutually disjoint for $i\not= j$. Then there exists a homeomorphism $\eta:\bR\times(c,d]\to\bR\times(c,d]$ such that \begin{itemize} \item[(i)] $\eta(\bR\times y) = \bR\times y$ for all $y\in(c,d]$; \item[(ii)] $\eta$ maps the graph $\{(\func_i(y), y) \mid y\in(c,\dd_i] \}$ of $\func_i$ onto a vertical segment $q_i \times(c,\dd_i]$ for some $q_i\in\bR$, $i\in\bN$. \end{itemize} \end{sublemma} \begin{proof} One can assume, in addition, that $\{\dd_i\}_{i\in\bN}$ is non-increasing.
Let us remove repeated elements from $\{\dd_i\}_{i\in\bN}$ and denote the obtained sequence by $\{s_i\}_{i\in\bN}$. Thus there is an increasing sequence of indices $1 = j_1 < j_2 < \cdots < j_n < \cdots$ such that \[s_i = \dd_{j_i} = \dd_{j_{i}+1} = \cdots = \dd_{j_{i+1} - 1} \ > \ s_{i+1} = \dd_{j_{i+1}} = \cdots . \] Then by Lemma~\ref{lm:rectification_finite} there exists a homeomorphism $\phi_1:\bR\times(c,s_1]\to\bR\times(c,s_1]$ preserving the second coordinate and sending the graphs of the functions $\func_{j_1},\ldots,\func_{j_2-1}$ onto vertical segments. Let us extend $\phi_1$ by the identity on $\bR\times[s_1,d]$ to a homeomorphism of all of $\bR\times(c,d]$. Denote by $\func_i^1$ the image of the graph of $\func_i$ under $\phi_1$. Then again there exists a homeomorphism $\phi_2:\bR\times(c,d]\to\bR\times(c,d]$ preserving the second coordinate, fixed on $\bR\times[s_2,d]$, and sending the graphs $\func^1_{j_1},\ldots,\func^1_{j_3-1}$ onto vertical segments. Hence the composition $\phi_2\circ\phi_1$ preserves the second coordinate and sends the graphs of the functions $\func_{j_1},\ldots,\func_{j_3-1}$ onto vertical segments. Denote by $\func^2_{i}$ the image of the graph of $\func_i$ under $\phi_2\circ\phi_1$. Continuing by similar arguments, we construct an infinite family of homeomorphisms $\phi_1,\ldots,\phi_k,\ldots$ of $\bR\times(c,d]$ such that each $\phi_k$ preserves the second coordinate, is fixed on $\bR\times[s_k,d]$, and the composition $\phi_k\circ\cdots\circ\phi_1$ sends the graphs of the functions $\func_{j_1},\ldots,\func_{j_{k+1}-1}$ onto vertical segments. Since $\lim\limits_{i\to\infty} s_i = c$, it follows that the infinite composition \[ \eta = \cdots \circ \phi_m \circ\phi_{m-1} \circ \cdots \circ \phi_1:\bR\times(c,d]\to\bR\times(c,d] \] is a well defined homeomorphism satisfying the statement of the lemma. \end{proof} To deduce Proposition~\ref{prop:trap_make_rectangles} assume that $\trap_i$ is defined by functions $\alpha_i,\beta_i:(c,d_i]\to\bR$.
Denote $\func_{2i-1} = \alpha_i$ and $\func_{2i} = \beta_i$. Then the existence of $\eta$ is guaranteed by Lemma~\ref{lemma:rectification_inifinite}. \end{proof} \subsection*{Level-preserving homeomorphisms between trapezoids.} \sloppy Let $q_2:\bR^2\to\bR$, $q_2(x,y)=y$, be the standard projection onto the second coordinate and \begin{align*} \strap &= \{ (x,y) \in \bR^2 \mid \alpha(y) \leq x \leq \beta(y), \ a < y \leq b \}, \\ \trap &= \{ (x,y) \in \bR^2 \mid \gamma(y) \leq x \leq \delta(y), \ c < y \leq d \} \end{align*} be two trapezoids with finite bases, where $\alpha,\beta:(a,b]\to\bR$ and $\gamma,\delta:(c,d]\to\bR$ are continuous functions such that $\alpha<\beta$ and $\gamma<\delta$. \fussy Let $A \subset \strap$ and $B\subset\trap$ be two subsets. Then a map $\xi: A \to B$ will be called \emph{level-preserving} whenever \[ q_2\circ \xi(x,y) = q_2\circ \xi(x',y) \] for all $x,x',y$ such that $(x,y), (x',y)\in A$. \begin{lemma}\label{lm:level_pres_homeo_of_roofs} Every level-preserving homeomorphism $\xi:\roof(\strap) \to \roof(\trap)$ between roofs of trapezoids extends to a level-preserving homeomorphism $\xi:\strap \to \trap$. Moreover, if $\strap$ and $\trap$ have finite bases, then $\xi$ also extends to a level-preserving homeomorphism $\xi:\overline{\strap} \to \overline{\trap}$ between their closures. \end{lemma} \begin{proof} As $\xi$ is level-preserving, we have a well defined homeomorphism $\sigma:(a,b]\to(c,d]$ given by $\sigma(y) = q_2\circ \xi(\alpha(y),y)$. Then $\xi$ extends to a homeomorphism $\strap\to\trap$ by \[ \xi(x,y) = \left( \gamma(\sigma(y)) + \frac{\delta(\sigma(y))-\gamma(\sigma(y))}{\beta(y)-\alpha(y)} (x-\alpha(y)), \ \sigma(y) \right).
\] Moreover, if in addition $\strap$ and $\trap$ have finite bases, so $\alpha$ and $\beta$ are defined and continuous on $[a,b]$ and $\gamma$ and $\delta$ are defined and continuous on $[c,d]$, then the same formulas define homeomorphisms $\sigma:[a,b]\to[c,d]$ and $\xi:\overline{\strap}\to\overline{\trap}$. \end{proof} \section{Proof of~\eqref{th:open_strips:closure_of_Q} of Theorem~\ref{th:open_strips}}\label{sect:proof:2:th:open_strips} Let $\Partition$ be a partition on $\Xsp$ of class $\classFol$ such that the family $\Sigma$ of all special leaves is locally finite. Let also $\bar{\Sigma} = \Sigma \cup \partial\Xsp$ be the union of all special and boundary leaves of $\Xsp$, $\Qsp$ be a connected component of $\Xsp\setminus\bar{\Sigma}$ homeomorphic with an open model strip, and $\phi:\bR\times(-1,1)\to\Qsp$ be a foliated homeomorphism. Denote, see Figure~\ref{fig:q_properties}: \begin{align} \label{equ:notations_for_q} \Qmin &= \phi\bigl(\bR\times(-1,0]\bigr), & \Ksp &= \phi\bigl(\bR\times 0\bigr), & \Qmax &= \phi\bigl(\bR\times[0,1)\bigr). \end{align} We shall prove that the closures $\overline{\Qmin}$ and $\overline{\Qmax}$ are foliated homeomorphic to some model strips. It suffices to prove this only for $\overline{\Qmin}$. \begin{figure}[ht] \includegraphics[height=1.4cm]{q_properties} \caption{}\label{fig:q_properties} \end{figure} \begin{lemma}\label{lm:Qmin_omega} {\rm 1)}~$\overline{\Qmin} \setminus \Qsp = \overline{\Qmin} \setminus \Qmin$ \ and \ $\overline{\Qmax} \setminus \Qsp = \overline{\Qmax} \setminus \Qmax$. {\rm 2)}~ $\overline{\Qsp} \setminus \Qsp= (\overline{\Qmin} \setminus \Qmin) \cup (\overline{\Qmax} \setminus \Qmax) \ \subset \ \bar{\Sigma}$. {\rm 3)}~Let $\omega$ be a leaf in $\overline{\Qmin} \setminus \Qmin$, $J = (-1,1)\times 0 \subset \bR^2$, and $N = J \ \cup \ \bR \times (0, 1]$. \begin{enumerate} \item[\rm(a)] Then $\Qsp \cup\omega$ is open in $\overline{\Qsp}$ and $\Qmin\cup\omega$ is open in $\overline{\Qmin}$.
\item[\rm(b)] There exists a foliated homeomorphism $\psi:N \to \Qmin \cup \omega$ such that $\psi(J) = \omega$. \item[\rm(c)] Let $\Usp \subset \Xsp$ be an open neighborhood of $\omega$ and $T\subset \psi^{-1}(\Usp)$ be a subset with compact closure such that \begin{align*} &\overline{\Usp} \ \cap \ (\overline{\Qmin}\setminus\Qmin) \ = \ \omega, & J \ \subset \ (\overline{T}\setminus T) \ \subset \ \bR\times 0. \end{align*} Then $\psi(T \cup J)$ is closed in $\Xsp$. In particular, if $T$ is a trapezoid with base $J$, then $\psi(T \cup J)$ is closed in $\Xsp$. \end{enumerate} \end{lemma} \begin{proof} 1)~Denote $\iQmin = \Qmin \setminus \Ksp$ and $\iQmax = \Qmax \setminus \Ksp$. Then $\Ksp\subset \overline{\iQmin}$, so \[\overline{\Qmin} = \overline{\iQmin \cup \Ksp} = \overline{\iQmin} \cup \Ksp =\overline{\iQmin}.\] Moreover, as $\iQmin$ and $\iQmax$ are open in $\Xsp$ and disjoint, we get that $\overline{\Qmin} \cap \iQmax = \overline{\iQmin} \cap \iQmax = \varnothing$, whence $\overline{\Qmin} \setminus \Qsp = \overline{\Qmin} \setminus (\Qmin \cup \iQmax) = \overline{\Qmin} \setminus \Qmin$. The proof for $\Qmax$ is similar. \smallskip 2) It follows from (1) that $\overline{\Qsp} \setminus \Qsp = (\overline{\Qmin} \setminus \Qsp) \cup (\overline{\Qmax} \setminus \Qsp) = (\overline{\Qmin} \setminus \Qmin) \cup (\overline{\Qmax} \setminus \Qmax)$. Let us prove that $\overline{\Qsp} \setminus \Qsp\subset\bar{\Sigma}$. Suppose $\overline{\Qsp}\setminus\Qsp \not\subset \bar{\Sigma}$. Then there exists a connected component $P$ of $\Xsp\setminus\bar{\Sigma}$ distinct from $\Qsp$ and such that $\overline{\Qsp}\cap P \not=\varnothing$. But $P$ is open in $\Xsp$, whence $P\cap\Qsp\not=\varnothing$ and so $P=\Qsp$, which contradicts the assumption. \smallskip 3a) Notice that the family $\bar{\Sigma}\setminus\{\omega\}$, like $\bar{\Sigma}$ itself, is locally finite.
Therefore the set \[ \Wsp := \Xsp\setminus(\bar{\Sigma}\setminus\omega) = (\Xsp\setminus\bar{\Sigma})\cup\omega \] is open in $\Xsp$. Due to 2), $\Qsp = \overline{\Qsp} \cap (\Xsp\setminus\bar{\Sigma})$, whence $\Qsp \cup \omega = \overline{\Qsp} \cap \bigl( (\Xsp\setminus\bar{\Sigma})\cup\omega \bigr) = \overline{\Qsp} \cap \Wsp$ is open in $\overline{\Qsp}$. Similarly, due to 1), $\Qmin=\overline{\Qmin} \cap \Qsp$, whence $\Qmin\cup\omega = \overline{\Qmin} \cap \overline{\Qsp} \cap \bigl( (\Xsp\setminus\bar{\Sigma})\cup\omega \bigr) = \overline{\Qmin} \cap \Wsp$ is open in $\overline{\Qmin}$. \smallskip 3b) Notice that $\Qmin\cup\omega$ is saturated and by Lemma~\ref{lm:nbh_spec_points} $\prj(\Qmin\cup\omega)$ is homeomorphic with $[0,1]$. Since $\prj:\Qmin\cup\omega\to\prj(\Qmin\cup\omega)$ is a locally trivial fibration with fiber $\bR$, we obtain that $\Qmin\cup\omega$ is foliated homeomorphic with $\bR\times[0,1]$ and therefore with $N$. \smallskip 3c) It suffices to prove that $\psi(T\cup J)$ is closed in $\overline{\Qmin}$; since $\overline{\Qmin}$ is a closed subset of $\Xsp$, this will imply that $\psi(T\cup J)$ is closed in $\Xsp$ as well. Let $\{z_i\}_{i\in\bN} \subset \psi(T\cup J)$ be a sequence converging to some $z\in\overline{\Qmin}$. We shall prove that $z\in\psi(T\cup J)$ as well. Let $(x_i,y_i)=\psi^{-1}(z_i) \in T\cup J$. Since $\overline{T}$ is compact, one can assume that $\{(x_i,y_i)\}$ converges to some $(\bar{x},\bar{y}) \in \overline{T}$. If $(\bar{x},\bar{y}) \in T\cup J$, then $z=\lim\limits_{i\to\infty} z_i = \lim\limits_{i\to\infty} \psi(x_i,y_i) = \psi(\bar{x},\bar{y}) \in \psi(T\cup J)$. Otherwise, we have that $(\bar{x},\bar{y}) \in \overline{T}\setminus (T\cup J) \subset \bR\times 0$, so $\bar{y}=0$, and thus $\lim\limits_{i\to\infty} y_i = \bar{y} =0$. This implies that $z\not\in\Qmin = \psi\bigl(\bR\times(0,1]\bigr)$. Hence $z \in \overline{\Usp} \cap(\overline{\Qmin}\setminus\Qmin) = \omega = \psi(J) \subset \psi(T\cup J)$.
\end{proof} Due to (\ref{enum:lm:prop:loc_finite_families_nbh}) of Lemma~\ref{lm:prop} there exists a family $\cU = \{U_\omega\}_{\omega \in \bar{\Sigma}}$ of neighborhoods of elements of $\bar{\Sigma}$ such that the closures of elements of $\cU$ are pairwise disjoint in $\Xsp$. Let $\{\omega_i \}_{i\in\Lambda}$ be all the leaves contained in $\overline{\Qmin}\setminus\Qmin$. Then $\Lambda$ is an at most countable set and one can assume that either $\Lambda=\{1,\ldots,k\}$ for some finite $k$ or $\Lambda=\bN$. By Lemma~\ref{lm:Qmin_omega} for each $i\in\Lambda$ there exists a foliated homeomorphism $\psi_i:N \to \Qmin \cup \omega_i$ such that $\psi_i(J) = \omega_i$. Then $\psi_{i}^{-1}(U_{\omega_i})$ is an open neighborhood of $J = (-1,1)\times 0$, whence by Lemma~\ref{lemma:graph_neighborhood} there exists a trapezoid $\trap_{i} \,\subset\, \psi_{i}^{-1}(U_{\omega_i}) \, \cap \, \bR\times(0,1)$ with base $J$. Put \[ \widehat{\trap}_i = \trap_i \cup J. \] Then by Lemma~\ref{lm:Qmin_omega} $\psi_i(\widehat{\trap}_i)$ is closed in $\Xsp$. \begin{figure}[ht] \includegraphics[height=2.9cm]{proof2} \caption{}\label{fig:proof_2} \end{figure} Denote $\strap_{i} = \phi^{-1} \circ \psi_{i}(\trap_{i})$. Then $\{\strap_{i} \mid i\in\Lambda\}$ is a family of trapezoids in $\bR\times(-1,0]$. Assume that the upper base of $\strap_i$ is contained in $\bR\times \dd_i$ for some $\dd_i\in (-1,0)$. If $\Lambda$ is infinite, then decreasing, if necessary, $\trap_{i}$ (and therefore $\strap_i$), one can assume that $\lim\limits_{i\to\infty} \dd_i = -1$. Then by Proposition~\ref{prop:trap_make_rectangles} one can change $\phi$ so that $\strap_i = [a_i, b_i] \times (-1, \dd_i]$ is a ``half open rectangle'' for some $a_i,b_i\in\bR$. Then $[a_i,b_i] \cap [a_j,b_j]=\varnothing$ for $i\not=j\in\Lambda$. Let also $J_i = (a_i,b_i) \times \{-1\}$ and \[ M \ := \ \bR\times(-1,0] \ \cup \ \bigcup\limits_{i\in\Lambda} J_i. \] Then $M$ is a \emph{half model strip}.
Our aim is to construct a foliated homeomorphism between $M$ and $\overline{\Qmin}$. Denote $\widehat{\strap}_i = \strap_i \cup J_i$, $i\in\Lambda$, and \[ \Zsp \, := \, M \,\setminus\, \mathop{\cup}\limits_{i\in\Lambda} \bigl(\widehat{\strap}_i\setminus\roof(\strap_i)\bigr) \ \subset \ \bR\times(-1,0]. \] \begin{lemma}\label{lm:hat_strapi_is_loc_finite} $\{\Zsp\}\cup \{\widehat{\strap}_i\}_{i\in\Lambda}$ is a locally finite cover of $M$ by closed sets. \end{lemma} \begin{proof} It is evident that $\widehat{\strap}_i$ is closed in $M$. Moreover, $\widehat{\strap}_i\setminus\roof(\strap_i)$ is open in $M$, whence $\Zsp$ is closed in $M$ as well. Therefore it remains only to show that each $z=(x,y)\in M$ has an open neighborhood $\Vsp$ intersecting only finitely many elements $\widehat{\strap}_i$. If $y=-1$, then $z\in(a_i,b_i)\times\{-1\} \subset \widehat{\strap}_i$ for some $i\in\Lambda$. Hence $\Vsp=\widehat{\strap}_i\setminus\roof(\strap_i)$ is an open neighborhood of $z$ in $M$ intersecting only $\widehat{\strap}_i$. Suppose that $y>-1$. Fix any $t$ such that $-1<t<y$. Then $\Vsp = \bR\times(t,0]$ is an open neighborhood of $z$ in $M$. By assumption $\lim\limits_{i\to\infty}\dd_i = -1$, whence there exists $n>0$ such that $-1<\dd_i<t$ for all $i>n$, and so $\widehat{\strap}_i\cap\Vsp=\varnothing$ for all $i>n$. \end{proof} \begin{lemma}\label{lm:psii_trap_is_loc_finite} $\{\phi(\Zsp)\}\cup \{\psi_i(\widehat{\trap}_i)\}_{i\in\Lambda}$ is a locally finite cover of $\overline{\Qmin}$ by closed sets. \end{lemma} \begin{proof} By 3c) of Lemma~\ref{lm:Qmin_omega} each $\psi_i(\widehat{\trap}_i)$ is closed in $\Xsp$.
Furthermore, \begin{align*} \phi(\Zsp) &= \phi\Bigl( M \,\setminus\, \mathop{\cup}\limits_{i\in\Lambda}\, \bigl( \widehat{\strap}_i\setminus\roof(\strap_i) \bigr) \Bigr) = \overline{\Qmin} \,\setminus\, \mathop{\cup}\limits_{i\in\Lambda}\, \psi_i\bigl(\widehat{\trap}_i\setminus\roof(\trap_i)\bigr), \end{align*} and it is also evident that $\widehat{\trap}_i\setminus\roof(\trap_i)$ is open in $N$. But due to 3b) of Lemma~\ref{lm:Qmin_omega} $\psi_i$ is a homeomorphism of $N$ onto the open subset $\Qmin\cup\omega_i$ of $\overline{\Qmin}$. Therefore $\psi_i(\widehat{\trap}_i\setminus\roof(\trap_i))$ is open in $\overline{\Qmin}$, whence $\phi(\Zsp)$ is closed in $\overline{\Qmin}$. It remains to show that $\{\psi_i(\widehat{\trap}_i)\}_{i\in\Lambda}$ is a locally finite family. Let $q\in\overline{\Qmin}$. If $q\in\omega_i$, then $U_{\omega_i}$ is an open neighborhood of $q$ intersecting only one set $\psi_i(\widehat{\trap}_i)$. Suppose $q\in\Qmin$ and let $z=(x,y) = \phi^{-1}(q) \in \bR\times(-1,0] \subset M$. Then by Lemma~\ref{lm:hat_strapi_is_loc_finite} there exists an open neighborhood $\Vsp$ of $z$ in $\bR\times(-1,0]$ intersecting only finitely many $\widehat{\strap}_i$. But the map $\phi:\bR\times(-1,0] \to \Qmin$ is a homeomorphism, whence $\phi(\Vsp)$ is an open neighborhood of $q$ in $\Qmin$ intersecting only finitely many $\psi_i(\widehat{\trap}_i) = \phi(\strap_i) \cup\omega_i$. \end{proof} Notice that the composition $\psi_i^{-1}\circ\phi|_{\strap_i}: \strap_i\to\trap_i$ is a level-preserving homeomorphism, however in general it cannot be extended to a homeomorphism between their bases. Nevertheless, $\psi_i^{-1}\circ\phi$ yields a level-preserving homeomorphism $\roof(\strap_i)\to \roof(\trap_i)$, and therefore by Lemma~\ref{lm:level_pres_homeo_of_roofs} it extends to a level-preserving homeomorphism $\xi_i:\overline{\strap_i} \to \overline{\trap_i}$.
Now define the following map $\eta:M \to \overline{\Qmin}$ by \[ \eta(z) = \begin{cases} \psi_{i} \circ \xi_i(z), & z\in \widehat{\strap}_i \ \text{for some } i\in\Lambda, \\ \phi(z), & z\in\Zsp. \end{cases} \] We claim that $\eta$ is the required homeomorphism. First note that $\eta$ is well defined: on the intersection $\widehat{\strap}_i \cap \Zsp = \roof(\strap_i)$ we have $\psi_i \circ \xi_i = \psi_i \circ \psi_i^{-1}\circ\phi = \phi$, since $\xi_i$ extends the level-preserving homeomorphism $\psi_i^{-1}\circ\phi:\roof(\strap_i)\to\roof(\trap_i)$. Moreover, evidently, $\eta$ is a bijection. Furthermore, by Lemma~\ref{lm:hat_strapi_is_loc_finite} $\{\Zsp\}\cup \{\widehat{\strap}_i\}_{i\in\Lambda}$ is a locally finite closed cover of $M$, and by Lemma~\ref{lm:psii_trap_is_loc_finite} their images $\{\phi(\Zsp)\} \cup \{\psi_i(\widehat{\trap}_i)\}_{i\in\Lambda}$ constitute a locally finite closed cover of $\overline{\Qmin}$. Finally, the restrictions $\eta|_{\Zsp}$ and $\eta|_{\widehat{\strap}_i}$ are homeomorphisms. Then by~(\ref{enum:lm:prop:homeomorphism}) of Lemma~\ref{lm:prop} $\eta$ is a homeomorphism. This completes the proof of statement~\eqref{th:open_strips:closure_of_Q} of Theorem~\ref{th:open_strips}.
\section{Introduction} High resolution lithography is a central part of the semiconductor industry. Industrial lithography is usually mask based, to ensure a high production speed. In mask based lithography a photonic beam projects a direct image of a mask onto a substrate. By superposition of images from several masks combined with etching and metal deposition processing steps, micro-chips are created. The lithography resolution is one of the most important parameters in determining how small integrated circuits can be made (thickness of wires etc.) and the size of the circuits ultimately determines the speed of the electronic components~\cite{Borkar11}. In standard photolithography the wavelength of the light being used determines the resolution: the smaller the wavelength, the higher the resolution. The present industrial photolithography standard is the immersion scanner using a \SI{193}{nm} light source in a high numerical aperture medium~\cite{ITRS}. The industry is currently implementing the next generation of lithography devices, extreme ultraviolet lithography (EUV) based on a \SI{13.5}{nm} wavelength light source, which together with immersion techniques and multiple patterning is expected to be able to scale down to critical dimensions below \SI{14}{\nano\meter}~\cite{ITRS}. Atom lithography has been suggested as a possible future step for high resolution lithography. The de Broglie wavelength of thermal atoms is much smaller than the wavelength of optical photons. For helium atoms at thermal energies between \SI{20}{meV} and \SI{60}{meV}, the corresponding wavelength is \SI{0.1}{nm} or less. This makes atom beams in principle a very attractive candidate for pattern generation. One approach in atom lithography is to use a beam of metastable atoms for the pattern generation~\cite{Berggren95,Baldwin05}. When an atom hits the substrate it decays and the energy of the metastable state is transferred to the substrate. 
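The wavelength range quoted above follows directly from the de~Broglie relation $\lambda = h/p$ with $p = \sqrt{2mE}$. The short script below is a quick numerical check (a sketch only; the constants are standard CODATA values and are not taken from this paper):

```python
import math

H = 6.62607015e-34     # Planck constant [J s]
M_HE = 6.6464731e-27   # mass of a 4He atom [kg]
EV = 1.602176634e-19   # 1 eV in joules


def de_broglie(energy_mev):
    """de Broglie wavelength [m] of a He atom with kinetic energy in meV."""
    e_joule = energy_mev * 1e-3 * EV
    p = math.sqrt(2.0 * M_HE * e_joule)  # momentum from E = p^2 / (2m)
    return H / p


for e in (20, 60):
    print(f"{e} meV -> {de_broglie(e) * 1e9:.3f} nm")
```

The result is about \SI{0.10}{nm} at \SI{20}{meV} and \SI{0.06}{nm} at \SI{60}{meV}, consistent with the sub-\SI{0.1}{nm} figure used in the text.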
Various difficulties have prevented atom lithography from being used industrially. A major problem has been to create coherent atom beams. This will be discussed in more detail later in this section. Another problem is that low energy metastable atoms do not penetrate solid materials, so it is not clear how mask based atom lithography can be performed. This problem was in principle already solved more than 20 years ago, using binary holography. Binary holography was originally developed by Lohmann and Paris~\cite{Lohmann67} to shape incident electromagnetic beams and it uses masks made out of a set of regions that are either completely transparent or opaque. Originally the method was developed to create holograms for electromagnetic waves using a computer, and the procedure is often referred to as computer generated holography~(CGH) in the literature. Due to the de~Broglie wavelength associated with a matter wave, the method also works for atom beams. Using the method, an approximation of any arbitrary pattern can be generated on a substrate by using a mask that represents a Fourier transform of the desired pattern. The work of Lohmann and Paris was used by Onoe and Kaneko~\cite{Onoe79} to develop grid based binary holography (GBH). In GBH the masks are based on a grid of identical holes that are either completely open or closed. Masks produced this way can also be used to approximate arbitrarily shaped beams. Using GBH, Fujita~\textit{et al.}~\cite{Fujita96} successfully created patterns using a beam of metastable Ne atoms and a silicon nitride mask with \SI{30}{nm} holes created by electron lithography and reactive ion etching. Provided the incident beam is coherent and of a wavelength significantly smaller than the grid period, the resolution limit in GBH is given by the grid period.
With present day electron lithography technology, patterns can be made at a resolution of \SI{5}{nm} or less~\cite{Manfrinato13}, which means that atom based lithography in the sub-\SI{10}{nm} regime could be possible. However, this still has to be demonstrated. After the work of Fujita~\textit{et al.}~\cite{Fujita96}, several experiments have been performed on the manipulation of atom and molecular beams with ``optical'' elements created out of free standing, solid structures. In 2012, a Fresnel zone plate etched into a silicon nitride membrane was used to focus a neutral helium beam down to a spot size of sub-micron diameter~\cite{Eder12}. In this study it was found that the resolution was limited by the velocity spread of the beam~\cite{Eder12}. Further work by the same group has shown focusing with a photon-sieve structure~\cite{Eder15}. More recently a diffraction grating for molecular interferometry was created out of a graphene membrane~\cite{Arndt15}~(see also~\cite{Pritchard09}). This year helium diffraction from a micron scale structure was achieved for the first time~\cite{Nesse17}. As mentioned above, the main problem for atom lithography has been to create coherent beams. Significant progress has been made here within the last few years. The spatial coherence of a molecular beam is determined by the velocity distribution~\cite{Patton2006}. Recently a beam of metastable helium atoms with an average wavelength of around \SI{0.06}{\nano\meter} and a very narrow wavelength distribution $\lambda/\delta\lambda = 200$ was produced using a pulsed source~\cite{Even2015}. The ultimate coherent beam would seem to be a Bose--Einstein condensate (BEC). A BEC of metastable atoms, which brings an assembly of atoms into exactly the same energy state, was created some years back~\cite{Robert01,Santos01}. Recently, Zeilinger~\textit{et al.} generated a beam of BEC metastable helium atoms; that is, a perfectly coherent beam of metastable atoms~\cite{Zeilinger14}.
However, the Fraunhofer diffraction formula does not apply to a BEC~\cite{Fouda2016}. Furthermore, the de Broglie wavelength of a falling BEC of helium is very large, about \SI{30}{\nano\meter} after a drop of half a meter. For high resolution lithography one wants to use small wavelengths. One can think of experimental ways to get around this, for example by moving the mask relative to the BEC so that the BEC wavelength relative to the mask gets smaller, but in any case, considerable amendments would have to be made to the theory we present here. The holographic structure of the mask means that the influence of local mask errors is less prominent in the final pattern. This makes GBH interesting for photon and mask based electron lithography. Furthermore, the exposed areas are more evenly distributed across the whole mask, which makes it thermally more stable. This is particularly an issue in EUV lithography. For photonic lithography applications the hole structure can be placed on a suitable substrate. In this paper the work of Onoe and Kaneko~\cite{Onoe79} is further developed. We investigate the contrast and robustness of the patterns created by the hologram masks and provide an algorithm that can be used to select the open hole fraction of a mask over a large range. From a mask fabrication point of view the most desirable is a solution that minimizes the number of holes, but from a chip fabrication point of view it may be better to maximize the number of holes, in order to reduce the heating of the mask. Like Onoe and Kaneko, we work in the Fraunhofer diffraction limit. For a real application a lens may need to be introduced to bring the pattern closer and to increase resolution. Several experiments have been carried out focusing molecular beams using zone plates~\cite{Eder2017,Eder15,Koch2008,Doak1999,Carnal1991} or electromagnetic~\cite{Gardner2017} lenses and this principle could be applied here.
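The \SI{30}{\nano\meter} estimate above can be reproduced in the same spirit: an atom falling freely from rest through a height $h_{\mathrm{drop}}$ reaches $v=\sqrt{2gh_{\mathrm{drop}}}$, giving $\lambda = h/(mv)$. A minimal numerical sketch (standard values of $g$ and the $^4$He mass are assumed; they are not taken from this paper):

```python
import math

H = 6.62607015e-34    # Planck constant [J s]
M_HE = 6.6464731e-27  # mass of a 4He atom [kg]
G = 9.81              # gravitational acceleration [m/s^2]


def wavelength_after_drop(height_m):
    """de Broglie wavelength [m] of a He atom after free fall from rest."""
    v = math.sqrt(2.0 * G * height_m)  # speed gained in the drop
    return H / (M_HE * v)


print(f"{wavelength_after_drop(0.5) * 1e9:.1f} nm")  # roughly 30 nm
```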
\smallskip The rest of this paper is organized as follows: first the theoretical framework of GBH is presented, followed by a discussion of the specific approach to mask generation; then the new method that allows for an arbitrary choice of the open fraction of the mask is presented. We also look at parts of the mask generation method that have not previously been discussed and highlight parameters that are important to the properties of the final mask. The method is then applied to some illustrative examples. Finally we explore the contrast of masks made with different open hole fractions, and characterize the robustness of the masks. \section{\label{sec:GBBH}Grid-based binary holograms} \begin{figure*} \centering\includegraphics[width=1.2\columnwidth]{system.pdf} \caption{\label{fig:system}Diagram of the system setup. Plane waves are entering the binary mask at normal incidence. The desired pattern is produced at an angle of transmission $\theta_t$ away from the normal direction in the screen plane. This is one way to encode different phases. The central circle represents the zeroth order diffraction peak, and the two other circles represent the first order diffraction peaks in the direction perpendicular to our patterns. These locations are not encoded for in the mask and show random intensity distributions. In a lithography setup they could be blocked before hitting the screen. } \end{figure*} \subsection{Theoretical formulation} Holograms are constructed to impose a given amplitude and phase on different parts of a field interacting with them~\cite{Goodman05}. A binary hologram is a special type of hologram where the mask is binary, that is, it is made from parts that are either completely open or closed to the incident field. When the open and closed sections of the binary mask form a grid, then one talks about GBH.
Consider the geometry depicted in Fig.~\ref{fig:system}; here a beam is incident normally onto a binary mask and a pattern is observed on a screen located a distance $d$ behind it. In the following, it will be assumed that the front and back surfaces of the mask and the surface of the screen all are planar and parallel. A coordinate system is defined so that the back surface of the mask coincides with the plane $x_3 = 0$ and the screen is located a distance $x_3 = d > 0$ behind it. The incident beam is modeled by the incident plane scalar wave \begin{equation} \psi_0(\mathbf{x}) = \exp(\mathrm{i}\mathbf{k}\cdot\mathbf{x}) = \exp(\mathrm{i}kx_3), \end{equation} where the incident wave vector is given by $\mathbf{k} = k\hat{\mathbf{x}}_3$, with $k = 2\pi/\lambda$ where $\lambda$ denotes the wavelength of the incident beam; in the case of a matter wave (atoms, molecules or electrons), $\lambda$ represents the de Broglie wavelength. The presence of the mask will transform the incident field at the front side of the mask into the ``mask field'' \begin{align} \psi_M(\mathbf{x}_\|) &= T \psi_0(\mathbf{x})\rvert_{x_3 = -\tau}, \label{eq:psim} \end{align} at the back side of the mask where $\tau\ll d$ is the thickness of the mask and $\mathbf{x}_\| = (x_1,x_2,0)$ represents a point in the $x_1x_2$ plane --- the ``mask plane''. In writing Eq.~\eqref{eq:psim}, we have introduced the transfer operator (or function), $T$, that encodes the details of the mask, such as its finite thickness and binary nature, and relates the field at the front side of the mask to the field at the back side of the mask. In the simplest case, we have an ideal mask with a small thickness $\tau$ so that the operator $T$ equals the binary function \begin{align} T_m(\mathbf{x}_\|) &= \begin{cases} 1 & \text{ if }\mathbf{x}_\| \in \text{hole} \\ 0 & \text{ otherwise} \end{cases}. 
\end{align} When it is applied to the incoming field at $x_3=-\tau\approx 0$ one gets \begin{align} \psi_M(\mathbf{x})\rvert_{x_3\approx 0} = \psi_M(\mathbf{x}_\|\rvert 0) = T_m(\mathbf{x}_\|)\psi_0(\mathbf{x}_\|). \label{eq:psim2} \end{align} \begin{widetext} Let $\mathbf{x}'_\parallel$ denote a point on the surface of the screen $x_3=d$. Following standard diffraction theory, the field in the screen plane can be written as $[\psi_s(\mathbf{x}')\rvert_{x_3=d} = \psi_s(\mathbf{x}'_\|\rvert d)]$~\cite{Goodman05,Adams94} \begin{align} \psi_s(\mathbf{x}'_\|\rvert d) &= -\frac{\mathrm{i}}{2\pi} \frac{k}{d} \exp{ (\mathrm{i}kd) } \exp{ \left( \frac{\mathrm{i}}{2}\frac{k}{d} x_\parallel'^2 \right) } \int\mathrm{d}^2x_\parallel\, \psi_M(\mathbf{x}_\parallel | 0) \exp{ \left( -\mathrm{i}\frac{k}{d} \mathbf{x}'_\| \cdot \mathbf{x}_\| \right)}, \label{eq:fraunhofer} \end{align} which is the Fraunhofer zone limit of the Rayleigh-Sommerfeld diffraction formula. This expression is valid when the screen plane is many wavelengths away from the mask plane ($d\gg\lambda$)~\cite{Goodman05}. A quantitative criterion for the validity of Eq.~\eqref{eq:fraunhofer} is $d > kL_m^2/\pi$ where $L_m$ represents the lateral extent of the mask~\cite{Goodman05}. To obtain the field closer to the mask, the Fresnel approximation or the full Rayleigh-Sommerfeld diffraction formula can be employed. In the current work we will only use the Fraunhofer approximation because of its simplicity, but the GBH method presented later can also be applied to mask fields found by other methods. Equation~\eqref{eq:fraunhofer} together with Eq.~\eqref{eq:psim2} makes it possible to find the response from a binary mask at a plane a long distance behind the mask. However, what we really want is to construct a suitable mask for a given final pattern in the screen plane (see Fig.~\ref{fig:system}).
Therefore we now turn the problem around, and find an expression for the field just behind the mask given a target intensity in a screen plane far behind the mask. We call this field $\psi_m(\mathbf{x}_\parallel|0)$ to differentiate it from the mask field $\psi_M(\mathbf{x}_\parallel|0)$ created by the incident field and the transfer function of the binary mask given by Eq.~\eqref{eq:psim2}. In order to determine the unknown mask field $\psi_m(\mathbf{x}_\parallel|0)$ from the knowledge of the field in the screen plane, $\psi_s(\mathbf{x}_\parallel'| d)$, one starts by noting that the integral that appears in Eq.~\eqref{eq:fraunhofer} is the Fourier transform of $\psi_m(\mathbf{x}_\parallel|0)$ evaluated at the wave vector $\mathbf{K}_\parallel=(k/d)\mathbf{x}_\parallel'$. By taking the inverse Fourier transform of both sides of Eq.~\eqref{eq:fraunhofer} and using that $\mathrm{d}^2K_\parallel = (k/d)^2 \mathrm{d}^2x_\parallel'$, one readily finds \begin{align} \psi_m(\mathbf{x}_\parallel | 0) &= \frac{\mathrm{i}}{2\pi} \frac{k}{d} \exp{\left(-\mathrm{i}kd\right)} \int\mathrm{d}^2{x}_\parallel'\, \exp \left(-\frac{\mathrm{i}}{2} \frac{k}{d} x_\parallel'^2 \right) \psi_s(\mathbf{x}_\parallel'\rvert d) \exp{\left( \mathrm{i}\frac{k}{d} \mathbf{x}'_\parallel \cdot \mathbf{x}_\parallel \right)}. \label{eq:inverse_fraunhofer} \end{align} \end{widetext} Equation~\eqref{eq:inverse_fraunhofer} states that the mask field $\psi_m$ is the inverse Fourier transform of the field in the screen plane $\psi_s$ times a propagating factor depending on the mask-screen separation. \begin{figure} \centering\includegraphics[width=0.8\columnwidth]{target.pdf} \caption{\label{fig:target} Example of a discretized target pattern with $N_s~\times~N_s$ points. These points are indexed from $1$ to $N_s$ in each direction, which corresponds to real space coordinates given by Eq.~\eqref{fig:target_coordinates}. 
} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{gridex.pdf} \caption{(Color online) (a) Illustration of the subcell-structure with three open subcells (white cells) assuming $S=3$. The different rows have different phases $\phi = (2\pi r)/S$, with $r=1,\mathellipsis,S$, associated with them. Each open subcell leads to a contribution to the overall field $\Psi_m$ from the cell in the direction $\exp(\mathrm{i}\phi)$. The field corresponding to the configuration shown is therefore $\Psi_m = \left[2+\exp\left({\mathrm{i}2\pi/3}\right)\right]/3$. This must be scaled by the incident field. (b) All points in the complex plane that can be realized by a subcell configuration. The thick vectors (blue) correspond to one open subcell from each row. The thin vector (red) represents the contribution to the field from the open subcells shown in panel (a). The amplitude of the vectors must be scaled so that the available discretization region covers the amplitude range of the mask field $\Psi_m(\mathbf{n})$.} \label{fig:gridex} \end{figure} If the \textit{desired} intensity distribution in the screen plane $x_3=d$ is denoted $I_s(\mathbf{x}_\parallel'\rvert d)$, then the field in this plane can be written in the form [$I_s=|\psi_s|^2$] \begin{align} \label{eq:desired_screen_field} \psi_s(\mathbf{x}'_\parallel| d) &= \sqrt{I_s(\mathbf{x}'_\parallel | d)} \exp\left[\mathrm{i}\Phi(\mathbf{x}'_\| \rvert d)\right], \end{align} where $\Phi(\mathbf{x}'_\|\rvert d)$ is a random phase function that is independent of the choice made for the intensity $I_s$. Without loss of generality, we will in the following assume the phase function to be spatially uncorrelated and uniformly distributed on the interval $[0, 2\pi)$. An example target pattern is shown in Fig.~\ref{fig:target}. Two levels of gray are used to denote inside and outside of the pattern.
In most cases the absolute intensity level of the target pattern $I_s(\mathbf{x}'_\parallel | d)$ is not important; the main concern is the contrast, that is, the relative difference between the intensity inside and outside of the pattern. Therefore an intensity-rescaled image serves as our target pattern for the GBH. As a result of Eq.~\eqref{eq:desired_screen_field}, the field $\psi_m(\mathbf{x}_\parallel | 0)$ will have a complex phase variation over the (back) surface of the mask. Ideally, the field $\psi_M(\mathbf{x}_\parallel | 0 )$, which is related to the incident field via Eq.~\eqref{eq:psim2}, should equal $\psi_m(\mathbf{x}_\parallel | 0 )$. However, at normal incidence, this is not possible since the operator $T_m$ does not change the phase of the field on which it operates, only its amplitude. In order to allow for a changing phase in $\psi_M(\mathbf{x}_\parallel | 0 )$ over the surface of the mask, one could, for instance, let the incident field impinge \textit{non-normally} ($\theta_0\neq\ang{0}$) onto this surface. The resulting phase variation of $\psi_0(\mathbf{x}_\parallel)$ across the surface then makes a varying phase available for $\psi_M(\mathbf{x}_\parallel | 0 )$. An alternative (but equivalent) approach is to keep $\theta_0=\ang{0}$ for the incident field, but form the image in a region of the screen plane that corresponds to a \textit{non-zero} polar angle of transmission $\theta_t$. This situation is presented in Fig.~\ref{fig:system}. In the former approach the phase difference is due to the difference in propagation path of the incident field, while in the latter approach, it is due to the path difference of the transmitted field. Hence, the two approaches are in principle equivalent, but it should be noted that in the latter case $\psi_M(\mathbf{x}_\parallel | 0 )$ is different from $\psi_m(\mathbf{x}_\parallel | 0 )$.
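As a small illustration of Eq.~\eqref{eq:desired_screen_field}, the target field can be generated from an intensity image as in the sketch below (the function name and seeding convention are our own choices):

```python
import numpy as np

def target_screen_field(I_s, seed=None):
    """Desired screen-plane field: amplitude sqrt(I_s) multiplied by a
    spatially uncorrelated random phase factor, uniform on [0, 2*pi)."""
    rng = np.random.default_rng(seed)
    Phi = rng.uniform(0.0, 2.0 * np.pi, size=np.shape(I_s))
    return np.sqrt(I_s) * np.exp(1j * Phi)
```

Because the phase is uncorrelated from point to point, the intensity of the field is exactly the prescribed $I_s$ while the phase carries no information about the pattern.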
The latter approach has the advantage that if an image is formed in the region around the angles of transmission $(\theta_t,\phi_t)$, where $\phi_t$ denotes the azimuthal angle of transmission, then a similar image will be formed in the direction $(\theta_t,\phi_t+\ang{180})$, only rotated by \ang{180} relative to the first image; see Fig.~\ref{fig:system} (where $\phi_t=\pm\ang{90}$). In this work we use the second approach since for a given mask structure it results in two (or more) images, and the case of normal incidence is usually easier to handle experimentally. The details of this approach will be discussed in the following subsection. It should be mentioned that a possible third alternative is a combination of the former two, but this is technically more complicated and will not be discussed here. \subsection{Numerical construction of GBHs} Equations~\eqref{eq:inverse_fraunhofer} and \eqref{eq:desired_screen_field} allow us to calculate the mask field $\psi_m(\mathbf{x}_\parallel | 0)$ required to produce a given intensity in the screen plane. However, when using a binary mask, a general mask field cannot be realized with arbitrarily high precision; instead only an approximate solution is expected, and the purpose of this subsection is to describe how an approximate mask field can be constructed given the intensity distribution (the ``pattern'') in the screen plane [see Eq.~\eqref{eq:desired_screen_field}]. The procedure to construct the desired mask starts by discretizing the target intensity pattern $I_s(\mathbf{x}_\parallel' | d)$ in the screen plane.
If this pattern in the screen plane fits inside a square region of sides $L_s$, we discretize $I_s(\mathbf{x}_\parallel' | d)$ onto a grid of $N_s \times N_s$ points of coordinates \begin{subequations} \label{eq:screen_plane_discretization} \begin{align} \mathbf{x}_\parallel'(\mathbf{n}) &= \big(x_1'(n_1), x_2'(n_2), 0 \big) \end{align} where ($i=1,2$) \begin{align} x_i'(n_i) &= -\frac{L_s}{2} + \left(n_i - \frac{1}{2}\right) \Delta x_s, \label{fig:target_coordinates} \end{align} \end{subequations} with $n_{i}=1,\ldots,N_s$ and $\Delta x_s = L_s/N_s$. See example target pattern in Fig.~\ref{fig:target}. To obtain the discretization in the mask plane, one recalls from Eq.~\eqref{eq:inverse_fraunhofer} that $\psi_m(\mathbf{x}_\parallel | 0)$ is expressed as the inverse Fourier transform of a function containing $\psi_s(\mathbf{x}_\parallel'|d)$ [or $I_s(\mathbf{x}_\parallel'|d)$]. Hence, the discretization in the plane of the mask (direct space) is determined from the assumed discretization in the screen plane (Fourier space) [Eq.~\eqref{eq:screen_plane_discretization}]. Let $\Delta x_m$ and $L_m$ denote the discretization interval and spatial extent of direct space, respectively. Since the relevant Fourier variable is $\mathbf{K}_\parallel = (k/d)\mathbf{x}_\parallel'$, the Nyquist frequency and sampling interval (for both orthogonal directions) of Fourier space are given by~\cite{Book:NumericalRecipies} $K_\star=\pi/\Delta x_m= L_sk/(2d)$ and $\Delta K=2\pi/L_m = k\Delta x_s/d$, respectively. Thus, one is led to conclude that \begin{align} \label{eq:mask-plane-discretiatin-parameter} \Delta x_m &= \frac{d}{k} \frac{2\pi}{L_s}, & L_m &= N_s \Delta x_m, \end{align} so the discretization of the mask plane becomes \begin{subequations} \label{eq:mask_plane_discretization} \begin{align} \mathbf{x}_\parallel(\mathbf{n}) &= \big( x_1(n_1), x_2(n_2), 0 \big), \end{align} where \begin{align} x_i(n_i) &= -\frac{L_m}{2} + \left(n_i - \frac{1}{2}\right) \Delta x_m. 
\end{align} \end{subequations} With the use of Eqs.~\eqref{eq:screen_plane_discretization} and \eqref{eq:mask_plane_discretization}, the discrete version of Eq.~\eqref{eq:inverse_fraunhofer} can be written in the form \begin{widetext} \begin{align} \psi_m\left(\mathbf{x}_\parallel(\mathbf{n}) | 0 \right) &= \frac{\mathrm{i}}{2\pi} \frac{k}{d} \exp{\left(-\mathrm{i}kd\right)} \sum\limits_{\mathbf{n'}} \Delta{x}_s^2\, \exp \left(-\frac{\mathrm{i}}{2} \frac{k}{d}x_\parallel'^2(\mathbf{n}') \right) \psi_s(\mathbf{x}_\parallel'(\mathbf{n}') | d) \exp{\left( \mathrm{i}\frac{k}{d} \mathbf{x}'_\|(\mathbf{n}') \cdot \mathbf{x}_\|(\mathbf{n}) \right)}. \label{eq:discretized_mask} \end{align} \end{widetext} In this and later equations it is implicitly understood that $\psi_s(\mathbf{x}_\parallel' | d)$ is given in terms of Eq.~\eqref{eq:desired_screen_field}. Following the terminology previously introduced by Onoe and Kaneko~\cite{Onoe79}, a \textit{cell} is defined as the square region of the mask plane centered at $\mathbf{x}_\parallel(\mathbf{n})$ and having sides $\Delta x_m$. The aim is to create a scheme which, within the restrictions of binary masks, can be used to generate masks that approximately give rise to the desired mask field for each cell. Several methods have been proposed in the literature for the purpose of approximating the field from a single cell. Here we adopt the method introduced by Onoe and Kaneko~\cite{Onoe79} for the construction of grid-based binary holograms --- what they call \textit{pure binary holograms}. This method operates on the principle of creating a \textit{subgrid} within each cell whose structure encodes the desired $\psi_m$ that is associated with that cell for well-defined angles of transmission $(\theta_t,\phi_t)$ to be specified below; cf. Figs.~\ref{fig:system} and \ref{fig:gridex}(a). To see how the procedure works, we will, for reasons of simplicity, assume a plane wave incident normally onto the surface of the mask.
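Equation~\eqref{eq:discretized_mask} can be evaluated directly for modest grid sizes. The sketch below (names and structure are our own; an FFT should be used for large $N_s$) builds the centered grids of Eqs.~\eqref{eq:screen_plane_discretization} and \eqref{eq:mask_plane_discretization} and performs the double sum as two separable matrix products.

```python
import numpy as np

def mask_field(psi_s, L_s, k, d):
    """Mask field just behind the mask from the sampled screen field
    psi_s (N x N over a square of side L_s), by direct evaluation of the
    discretized inverse Fraunhofer sum.  O(N^3); for clarity only."""
    N = psi_s.shape[0]
    dx_s = L_s / N
    xs = -L_s / 2 + (np.arange(1, N + 1) - 0.5) * dx_s       # screen grid
    dx_m = (d / k) * 2 * np.pi / L_s                         # mask sampling interval
    xm = -N * dx_m / 2 + (np.arange(1, N + 1) - 0.5) * dx_m  # mask grid
    X1p, X2p = np.meshgrid(xs, xs, indexing="ij")
    # quadratic phase and cell area applied to the screen field
    weighted = np.exp(-0.5j * (k / d) * (X1p**2 + X2p**2)) * psi_s * dx_s**2
    # separable kernel exp(i (k/d) x'_i x_i) applied along each axis
    E = np.exp(1j * (k / d) * np.outer(xs, xm))              # index order (n', n)
    out = E.T @ weighted @ E
    return (1j / (2 * np.pi)) * (k / d) * np.exp(-1j * k * d) * out, dx_m
```

Since $(k/d)\,\Delta x_s \Delta x_m = 2\pi/N$, the kernel matrix is (up to a factor $\sqrt{N}$) unitary, so the transform conserves energy between the two planes.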
For a given location in the screen plane, $\mathbf{x}_\parallel'$, the phase difference, at this point on the screen, between the diffracted fields from each of the open subcells is determined by the geometrical distance between each subcell and the point $\mathbf{x}_\parallel'$ on the screen. Since one in principle wants to be able to assign an arbitrary phase to the field of each cell, the phase difference over a cell should be $2\pi$ (at least). This corresponds to a path difference of $\lambda$, which is satisfied for waves propagating in the direction defined by the polar angle of transmission [see Fig.~\ref{fig:system}] \begin{subequations} \label{eq:pattern_angles_of_transmission} \begin{align} \theta_t\approx \theta= \sin^{-1}\left(\frac{\lambda}{\Delta x_m}\right) = \sin^{-1}\left(\frac{L_s}{d}\right). \label{eq:pattern_angle} \end{align} Without loss of generality it will in the following be assumed, consistent with the illustration in Fig.~\ref{fig:system}, that the desired patterns are formed along the $x_2$-axis, i.e. the azimuthal angles of transmission for the directions where the images are formed will be \begin{align} \phi_t\approx \phi=\pm \ang{90}. \label{eq:pattern_angle-phi} \end{align} \end{subequations} It should be noted that the angles of transmission $(\theta,\phi)$ are identical to the directions of the first-order diffraction peak from a grating of period $\Delta x_m$ when illuminated at normal incidence by a wave of wavelength $\lambda$. Note also that a phase change that is a multiple of $2\pi$ can also be made to work. Let the subgrid that is defined inside each grid cell be an $S \times S$ square lattice~\footnote{It is easy to extend the procedure to the case of a different subdivision along the two axes, but this will not be discussed here.} where $S$ denotes a positive integer [Fig.~\ref{fig:gridex}(a)].
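As a quick numerical check of Eq.~\eqref{eq:pattern_angle} (a sketch; the function name is ours), the two expressions for the angle coincide once $\Delta x_m$ is taken from the mask-plane sampling relation $\Delta x_m = (d/k)(2\pi/L_s)$:

```python
import numpy as np

def pattern_angle(lam, dx_m):
    """Polar angle of the first diffraction order; requires a cell at
    least one wavelength wide (lam <= dx_m)."""
    if lam > dx_m:
        raise ValueError("wavelength must not exceed the cell width")
    return np.arcsin(lam / dx_m)

# With dx_m = (d/k) * 2*pi/L_s, arcsin(lam/dx_m) equals arcsin(L_s/d):
lam, L_s, d = 0.5, 10.0, 1000.0
k = 2 * np.pi / lam
dx_m = (d / k) * 2 * np.pi / L_s
```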
In total $S$ different values for the phase of the field from each cell can be assigned in such a way that the minimum separation between these values is $2\pi/S$ [Fig.~\ref{fig:gridex}(a)]. When the images are formed along the $x_2$-axis, each row of the subgrid corresponds to a separate phase. Since there are $S$ subcells with the same phase, a contribution to the overall field with this phase can be chosen with a discretized amplitude of $S$ steps. Figure~\ref{fig:gridex}(a) exemplifies the case $S=3$, which is the smallest value for $S$ that gives rise to phase differences that are not multiples of $\pi$ and leads to non-parallel vectors in the complex plane [Fig.~\ref{fig:gridex}(b)]. Moreover, Fig.~\ref{fig:gridex}(a) also presents the subcell structure for a given cell and the different relative phases associated with each row of the subcell structure. For instance, opening the central subcell seen in Fig.~\ref{fig:gridex}(a) will contribute a normalized term $\exp(\mathrm{i}4\pi/3)/3$ to the field. We have dropped the incident field and hole size from this expression since these factors will only affect the intensity and not the contrast of the final pattern. Opening one subcell from each of the three rows gives a contribution to the field that in the complex plane can be represented by the three thick blue vectors seen in Fig.~\ref{fig:gridex}(b) --- these vectors form a (hexagonal) basis (for the field) and opening more subcells will cause more steps to be taken in the directions of these vectors. Figure~\ref{fig:gridex}(b) shows as filled circles \textit{all} possible points in the complex plane reachable by different combinations of open subcells.
Each row of the subgrid lattice represents a direction and a possible step in the complex plane for the field; opening $s\in\{0,1,\ldots,S\}$ subcells from row $r$ of the subgrid lattice corresponds to taking $s$ steps in the complex plane of the field along the direction vector corresponding to row $r$ [one of the thick blue vectors in Fig.~\ref{fig:gridex}(b)]. The absolute amplitude of these steps can be chosen by changing the intensity of the incident field, but only the difference between the levels is important for pattern generation. In this way, the contribution to the mask field from a single cell can be calculated as a sum over its open subcells \begin{align} \Psi_m(\mathbf{n}) &= \frac{1}{S}\sum_{r=1}^{S} h\big(r|\mathbf{n}\big) \exp\left(\mathrm{i}\frac{2\pi}{S}r\right), \label{eq:cell-contrib} \end{align} where $h(r|\mathbf{n})$ is the number of open subcells in row $r$ of the cell centered at $\mathbf{x}_\parallel(\mathbf{n})$. Strictly speaking, in writing Eq.~\eqref{eq:cell-contrib} we have again neglected the incoming field. The function $h(r|\mathbf{n})$ must be chosen so that \begin{align} \Psi_m(\mathbf{n}) \approx \psi_m(\mathbf{x}_\parallel(\mathbf{n})|0) \end{align} is satisfied with the least error; therefore, the procedure of creating a mask is reduced to selecting an optimal configuration of open subcells. This is an optimization problem, and an efficient solution will be discussed in Sec.~\ref{sec:numerical_details}. Only the relative amplitudes and phases between the components of $\psi_m$ must be approximated by $\Psi_m$. The mask field can therefore be rescaled in different ways before being approximated by $\Psi_m$. This will also be discussed further in Sec.~\ref{sec:numerical_details}. Any subcell from a given row of cell $\mathbf{n}$ gives an equal contribution to the field $\Psi_m(\mathbf{n})$. Therefore, it does not matter which of the subcells one opens in a row in order to give a contribution to $\Psi_m(\mathbf{n})$.
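A direct transcription of Eq.~\eqref{eq:cell-contrib} (a sketch; the function name is ours) reproduces the $S=3$ example of Fig.~\ref{fig:gridex}(a):

```python
import numpy as np

def cell_field(h, S):
    """Contribution of a single cell: h[r-1] is the number of open
    subcells in row r, which carries phase 2*pi*r/S; the incident field
    is neglected, as in the text."""
    r = np.arange(1, S + 1)
    return np.sum(np.asarray(h) * np.exp(2j * np.pi * r / S)) / S
```

With two open subcells in the row of phase $2\pi$ and one in the row of phase $2\pi/3$, one recovers the value $\Psi_m = [2+\exp(\mathrm{i}2\pi/3)]/3$ quoted in the caption of Fig.~\ref{fig:gridex}.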
This choice can therefore be made by uniform random selection over the subcells in the row. Due to the subdivision of the cells, the discretization of the mask plane is changed, but the subcells are only used to help create an approximate field for the entire mask, so the scaling relation between the mask and the pattern in the screen plane is still given by Eq.~\eqref{eq:mask-plane-discretiatin-parameter}. The target patterns are formed around the angles of transmission $(\theta, \phi)$ defined in Eq.~\eqref{eq:pattern_angles_of_transmission}. The vertical coordinate of the center positions of the patterns is therefore \begin{align} \label{eq:size_and_offset} x_2' = \pm d\tan(\theta). \end{align} This construction leads to a criterion for the size of the cells. Equation~\eqref{eq:pattern_angle} tells us that the method only works when the wavelength $\lambda$ is smaller than the cell width $\Delta x_m$. If the wavelength is much smaller than the cell size, the angle at which the pattern is formed will be too small and the patterns will overlap with the specular peak as well as with each other. To get good results it is therefore important to have comparable wavelength and cell size. \section{Open-fraction optimization} \begin{figure*} \centering \includegraphics[width=\textwidth]{gridx4.pdf} \caption{Illustration of a cell with a $4\times 4$ subcell structure ($S=4$). The different rows have different phases $\Phi = 2\pi r/S$, with $r=1,2,3,4$. Each open subcell leads to a contribution to the overall field $\Psi_m$ for the cell in the direction of $\exp(\mathrm{i}\Phi)$. The field from this example cell is therefore $\Psi_m = (2+\mathrm{i})/4$. Two possible paths to the desired value $\Psi_m$ are shown.
The left cell takes the direct path A, while the right cell takes the longer path B, and therefore has a larger fraction of open subcells, while producing the same overall field $\Psi_m$.} \label{fig:gridex4} \end{figure*} In Sec.~\ref{sec:GBBH} we presented a method that can be used to make GBHs that approximately reproduce an arbitrary pattern. The method describes which positions in a grid of possible holes should be open and which should be closed. In a typical application of this technique a physical mask structure will be produced from this theoretical grid with open holes of a certain shape and size. The physical mask is then illuminated by an electromagnetic or a matter-wave beam. Physical realizations of masks will not be discussed here, but it would be important that the holes fall on the grid positions and that their shapes are similar. Both the manufacturing step and the illumination step have certain limitations. Depending on whether the manufacturing process is done by milling or by an additive process, the ideal mask would have few or many holes. During illumination the open area of the mask should ideally be made as large as possible to minimize heating of the mask. Because of the way that the GBH method is structured there are many different grid patterns that produce similar patterns in the screen plane. We call these masks equivalent since they produce the same or close to the same pattern. Many of the equivalent masks are the result of using a different realization of the phase $\Phi$ in Eq.~\eqref{eq:desired_screen_field} as well as the choice of which subcells to open specifically in each row. Another option for selecting between very different, but still theoretically equivalent masks is possible if the number of subdivisions $S$ for each direction of a cell is chosen to be an even number.
Under this assumption, pairs of rows in the subcell structure correspond to vectors that are parallel but point in the opposite direction in the complex plane for the field [see Fig.~\ref{fig:gridex4}]. This observation allows us (for even $S$) to rewrite Eq.~\eqref{eq:cell-contrib} in the following form \begin{align} \Psi_m(\mathbf{n}) &= \frac{1}{S} \sum_{r=1}^{S/2} \left[ h\big(r|\mathbf{n}\big) - h\big(S/2+r|\mathbf{n}\big) \right] \exp\left(\mathrm{i}\frac{2\pi}{S}r\right). \label{eq:Psi_m_sum_rewritten} \end{align} Several selections for $h(r|\mathbf{n})$ can produce identical results for $\Psi_m(\mathbf{n})$. The reason is that it is no longer just the individual contributions from the rows that matter, but the difference between pairs of rows. Several different paths in the complex plane lead to exactly the same point, and therefore, to the same field $\Psi_m(\mathbf{n})$. This situation is illustrated in Fig.~\ref{fig:gridex4} where the value $S=4$ is assumed. In this figure two paths are considered --- called Path~A and Path~B --- and they have identical 2nd and 4th rows but different 1st and 3rd rows (counted from the bottom). Even though the numbers of open subcells in the 1st and 3rd rows are different for the two cells, it is readily confirmed from Eq.~\eqref{eq:Psi_m_sum_rewritten} that they produce an identical contribution to the field from the $r=1$ term. This can be understood by viewing the two cells as paths in the complex plane for the field [Fig.~\ref{fig:gridex4}(c)]. It should be noted that for the example given in Fig.~\ref{fig:gridex4}, the subcell structure of rows~\num{2} and \num{4} was deliberately set to be the same for the two cells; alternatively, we could have chosen configurations for the two cells corresponding to more extreme open-to-closed subcell ratios. If we only choose to open the minimum number of subcells in a single row in each pair of rows that produce opposite contributions, we get a minimally open mask.
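The equivalence expressed by Eq.~\eqref{eq:Psi_m_sum_rewritten} is easy to verify numerically. In the sketch below (our own example configurations, not taken from the figure), two $S=4$ cells with equal row differences $h(r)-h(S/2+r)$ but very different open-fractions yield the same field $(2+\mathrm{i})/4$:

```python
import numpy as np

def cell_field(h, S):
    # Eq. (cell-contrib): row r carries phase 2*pi*r/S
    r = np.arange(1, S + 1)
    return np.sum(np.asarray(h) * np.exp(2j * np.pi * r / S)) / S

# h[r-1] = open subcells in row r; rows (1,3) and (2,4) form opposite pairs.
h_min = [1, 0, 0, 2]   # 3 open subcells:  h1-h3 = 1, h2-h4 = -2
h_alt = [3, 1, 2, 3]   # 9 open subcells: the same pair differences
```

Only the differences within each pair of opposite rows enter the field, which is what makes the open-fraction a free parameter.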
This is the choice made for Path A in Fig.~\ref{fig:gridex4}. If we open as many subcells as possible for each pair of rows, similar to what was done for the pair of rows~\num{1} and \num{3}, we get a maximally open solution. When changing from the minimum solution to the maximum for all the cells in a mask, the fraction of open subcells goes from $f$ to $(1-f)$. When using the minimum solution, corresponding to going directly to the desired value in the complex plane, we will at most open half of the subcells. The reason is that we want either a positive or a negative contribution from each cell and are only opening subcells in a single row of each pair of rows. If we assume that the complex values of the desired mask field $\Psi_m$ are uniformly distributed across the available discretization area, then an equal number of points falls at every position along both the real and imaginary axes. Each of these positions on each axis is encoded with between \num{0} and \num{4} open subcells from a total of \num{8} possible for the pair of rows. This leads to an open-fraction of \SI{27.8}{\percent} when using the minimum solution. By using the maximum method, this changes to \SI{72.2}{\percent}. These numbers are based on a completely uniform random pattern; since real masks encode patterns with correlations, the minimum open-fraction will in reality be different. It is also possible to choose an open-fraction anywhere between $f$ and $(1-f)$. This is done by not opening all the possible subcells. The cell shown in Fig.~\ref{fig:gridex4}(b) is an example of a possible intermediate choice where we have only opened the maximum number of subcells in one of the pairs of rows. We would get a similar overall field if we open more subcells in rows \num{2} and \num{4}.
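The \SI{27.8}{\percent} figure can be checked by averaging over the assumed uniform distribution (a sketch of the counting argument; the variable names are ours):

```python
import numpy as np

# One pair of opposite rows for S = 4 has 8 subcells and encodes the
# 9 reachable levels -4..4 along one axis; the minimum solution opens
# |level| subcells, the maximum solution opens 8 - |level|.
levels = np.arange(-4, 5)
f_min = np.abs(levels).mean() / 8    # fraction opened, minimum solution
f_max = 1.0 - f_min                  # fraction opened, maximum solution
```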
\section{\label{sec:numerical_details}Numerical details} \begin{figure}[tbp] \centering \includegraphics[width=0.75\columnwidth]{gridscale.pdf} \caption{\label{fig:gridscale} Illustration of differently scaled mask fields. Both figures show the same \num{1000} values from the same mask field, but (a) uses $\gamma_\mathrm{max}$ for scaling and (b) uses $\gamma_\eta\rvert_{\eta=1}$. Some points fall outside the plot in (b) and have been removed.} \end{figure} To use the presented method for generating binary holograms, we start with the pattern we want to create in the screen plane. This pattern only represents the desired intensity distribution and does not contain any phase information. We find the amplitude by taking the square root of the intensity pattern and multiply by a random phase function in accordance with Eq.~\eqref{eq:desired_screen_field} to get the field we want to construct in the screen plane. The random phase leads to a more even mask field with a more uniform amplitude, which is easier to construct with GBH. The field that we need to approximate with the hologram in each cell is then given by Eq.~\eqref{eq:discretized_mask}, which can be computed efficiently using a fast Fourier transform~\cite{Book:NumericalRecipies}. Before the mask can be constructed, one needs to scale the amplitude of the mask field $\Psi_m(\mathbf{n})$ so that it fits within the region of the complex plane that our approximation scheme can cover, which for simplicity is defined as going from \num{-1} to \num{1} along both the real and imaginary axes in Eq.~\eqref{eq:Psi_m_sum_rewritten}. This is done by dividing the amplitude of the field by a factor $\gamma$. The rescale factor can be defined in a number of different ways.
One approach is to make sure that all the values of $\Psi_m(\mathbf{n})$ actually fall within the square region available for the approximation, and this means defining the rescale factor as \begin{equation} \gamma_\mathrm{max} = \max|\Psi_m(\mathbf{n})|. \label{eq:rescale_maximum} \end{equation} This rescaling sets the largest amplitude in the field to one, which guarantees that no value falls outside the available region in the complex plane. Figure~\ref{fig:gridscale}(a) shows an example of the desired mask field for a test pattern as points in the complex plane. From Fig.~\ref{fig:gridscale}(a) we can see that the scaling option presented in Eq.~\eqref{eq:rescale_maximum} leaves parts of the available region in the complex plane uncovered by a typical pattern. Another approach to rescaling the pattern would focus on filling the available region as evenly as possible, but this will typically leave some values outside the region that is available for the approximation scheme. This sort of scaling could be defined as \begin{equation} \gamma_\eta = \eta\left\langle |\Psi_m(\mathbf{n})| \right\rangle_\mathbf{n}, \label{eq:rescale_mean} \end{equation} where $\langle\cdot\rangle_\mathbf{n}$ defines an average over the values of the discrete mask points and $\eta$ is a parameter to tune the scaling. Figure~\ref{fig:gridscale}(b) shows a mask field scaled by $\gamma_\eta$ for $\eta=1$. When the desired mask field is rescaled in this way, the final intensity of the pattern may change, depending on, among other things, the incident field $\psi_0$ and the hole size, but the pattern will still be the same. To perform the approximation with GBH, we need to find a choice for $h(r|\mathbf{n})$ so that the result in Eq.~\eqref{eq:cell-contrib} comes close to the required value for $\Psi_m(\mathbf{n})$. Figure~\ref{fig:gridex} illustrates the situation for $S=3$ and shows the resulting field for a cell for all choices of $h(r|\mathbf{n})$.
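The two scale factors of Eqs.~\eqref{eq:rescale_maximum} and \eqref{eq:rescale_mean} can be sketched as follows (the function name and interface are our own choices):

```python
import numpy as np

def rescale(Psi, eta=None):
    """Divide the mask field by gamma_max (eta=None) or by
    gamma_eta = eta * <|Psi|> (eta given)."""
    if eta is None:
        gamma = np.abs(Psi).max()      # Eq. (rescale_maximum)
    else:
        gamma = eta * np.abs(Psi).mean()  # Eq. (rescale_mean)
    return Psi / gamma
```

With $\gamma_\mathrm{max}$ the largest amplitude becomes exactly one; with $\gamma_\eta$ the mean amplitude becomes $1/\eta$, so outliers may fall outside the unit square, as in Fig.~\ref{fig:gridscale}(b).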
When the desired mask field has been rescaled to fit within the available approximation region, the selection of the appropriate $h(r|\mathbf{n})$ can be performed. This can also be done in several different ways. The simplest approach can be taken when $S=4$. In this configuration the contributions from the different rows are aligned with the axes, and it is simple to take the real and imaginary parts of the field and clamp them to the closest possible solution given by one or the other row pair, since the two pairs correspond to two perpendicular directions in the complex plane. This will leave us with two rows that are always completely closed, and two that are potentially open. This is therefore the minimum solution described in the previous section. A maximum open-fraction is found by opening the rest of the subcells in the rows where some are already potentially open, and then opening the same number of subcells at random in the corresponding row. To find a mask with a specific open-fraction, one approach is to pairwise open or close subcells at random in pairs of rows within a randomly selected cell until the desired open-fraction is reached. For an arbitrary value of $S$, a way to find the optimal approximation is to precompute $\Psi_m(\mathbf{n})$ for a cell using Eq.~\eqref{eq:cell-contrib} given all possible combinations of choices for $h(r|\mathbf{n})$ and store the results in a search tree. We can then search through all the options and pick the closest approximation for each cell. The number of possible options grows very rapidly with $S$. This procedure leads to the number of open subcells that are required for each row in each individual cell of the mask. Which subcells to open can then be selected at random. These methods leave us with one possible realization of a mask for a given target pattern. This grid structure tells us where we need holes on a square grid to create a GBH mask for our pattern.
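For $S=4$ the clamping step described above can be sketched as follows (a sketch with our own naming and sign conventions: rows 4 and 2 encode the real part, rows 1 and 3 the imaginary part, and the mask field is assumed rescaled to the unit square):

```python
import numpy as np

def clamp_counts_S4(Psi):
    """Minimum-open row counts for S = 4.  Returns an integer array h of
    shape (4,) + Psi.shape, with h[r-1] the number of open subcells in
    row r (row phases pi/2, pi, 3*pi/2, 2*pi for r = 1..4)."""
    S = 4
    re = np.clip(np.rint(S * np.real(Psi)), -S, S).astype(int)
    im = np.clip(np.rint(S * np.imag(Psi)), -S, S).astype(int)
    h = np.zeros((S,) + np.shape(Psi), dtype=int)
    h[3] = np.maximum(re, 0)    # row 4 (phase 2*pi):   positive real part
    h[1] = np.maximum(-re, 0)   # row 2 (phase pi):     negative real part
    h[0] = np.maximum(im, 0)    # row 1 (phase pi/2):   positive imaginary part
    h[2] = np.maximum(-im, 0)   # row 3 (phase 3*pi/2): negative imaginary part
    return h
```

Reconstructing via Eq.~\eqref{eq:cell-contrib} then returns the clamped value $[h_4-h_2+\mathrm{i}(h_1-h_3)]/4$, and each pair of opposite rows has at most one row open, as required by the minimum solution.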
The mask can be manufactured physically by creating an array of holes with similar geometry in a film that is suitable to block our incident wave. It is also possible to check the masks by performing simulations of the effect of the mask. This can either be done through a direct calculation based on Eq.~\eqref{eq:fraunhofer} or by performing a more physically accurate simulation that takes into account the final mask geometry. The direct calculation is convenient since it is mainly a Fourier transform of the mask structure that can be performed using an FFT. The physical mask geometry can be taken into account by either performing the Fourier transform on a large discretized version of the mask geometry or by taking the superposition of analytic diffraction solutions for the individual holes in the mask. \section{Examples} \begin{figure*}[tbp] \centering \includegraphics[width=\textwidth]{F_figure_comb.pdf} \caption{\label{fig:F_min_max_diff}Simulation results for masks created for the same target pattern. The $I^\mathrm{min}_s$ figure shows the result from a minimally opened mask, while the $I^\mathrm{max}_s$ figure shows the result from a maximally opened mask. The difference figure shows the difference $|I^\mathrm{min}_s - I^\mathrm{max}_s|$ between the two versions. All axes are the same and the intensity values are comparable, in arbitrary units. Parts of the specular peak for $I^\mathrm{max}_s$ are saturated above the scale.} \end{figure*} Examples will now be given of different GBH masks generated by the method described in the preceding section, and their performance will be evaluated by simulation. First we will look qualitatively at how we can adjust the open-fraction of the mask, and see what impact this has on the generated pattern. Then we will perform a more quantitative investigation, where we investigate the behavior of both the contrast and the error tolerance of the masks as we force different open-fractions.
Finally we will study the contrast and the error tolerance of masks generated with different scaling of the mask fields. The simulations presented in the following sections were all performed for binary masks generated with subdivision $S=4$ since this allows us to generate the mask without large lookup tables and it is the smallest subdivision that allows for easy adjustment of the open-fraction of a mask by opening and closing hole pairs. The intensity patterns created by the masks were found using the direct method of Fraunhofer propagation described by Eq.~\eqref{eq:fraunhofer}. Unless otherwise specified, the mask patterns were rescaled using the maximum scale factor $\gamma_\mathrm{max}$ given by Eq.~\eqref{eq:rescale_maximum}, while $\gamma_\eta$ was used in cases where $\eta$ is given. \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{mask_figure.pdf} \caption{\label{fig:sbs-masks}Parts of the masks (size $40 \times 40$ subcells) used to generate the two intensity patterns presented in Fig.~\ref{fig:F_min_max_diff}: (a) mask with the minimum number of holes (open subcells); (b) mask with the maximum number of holes. Light (dark) colored squares represent open (closed) subcells. Gray lines show cell boundaries.} \end{figure} Figure~\ref{fig:F_min_max_diff} presents the central area of two simulated intensity patterns in the screen plane. The masks used to obtain the patterns in the screen plane depicted in Figs.~\ref{fig:F_min_max_diff}(a) and \ref{fig:F_min_max_diff}(b) were generated by assuming the same target pattern: an image of the character \textit{F}. The width and height of the character were approximately a third of the width and height of the pattern area. The image used for the target had a resolution of \num{600}$\times$\num{600} pixels. When using a scheme with $S=4$ subdivisions, this leads to a mask with \num{2400}$\times$\num{2400} subcells.
For the $I^\text{min}_s$-pattern the mask was created with the minimum number of open subcells, while for the $I^\text{max}_s$-pattern the maximum number of subcells was opened. Figure~\ref{fig:sbs-masks} shows cutouts of the same corner of the two masks used to generate the results in Figs.~\ref{fig:F_min_max_diff}(a) and \ref{fig:F_min_max_diff}(b). The minimally open mask had in this case \SI{7.3}{\percent} open subcells, while the maximally open mask had \SI{92.7}{\percent} open subcells. These numbers are not necessarily the same for other mask realizations because of the random phase that is applied in the beginning. These two masks were produced using the same random phase for the target field. The number of open subcells is inverted when going from the minimum to the maximum solution, but the masks themselves are not necessarily inverted. A closer inspection of the two cutouts in Fig.~\ref{fig:sbs-masks} makes it apparent that the number of open subcells in a cell is inverted, but which subcells are opened or closed in a particular row is picked at random. Figures~\ref{fig:F_min_max_diff}(a) and \ref{fig:F_min_max_diff}(b) show the intensity distribution behind the mask with the minimum and the maximum number of open holes, respectively. The resulting intensity patterns $I^\text{min}_s$ and $I^\text{max}_s$ above and below the position of the center specular peak are almost identical. The absolute difference between them is presented in Fig.~\ref{fig:F_min_max_diff}(c). The main difference is a speckle pattern around the borders of the pattern. The speckle noise is higher towards the edges than at the center of the patterns. This noise depends on the random selections made when constructing the two different masks. This leads to visible noise in the difference image shown in Fig.~\ref{fig:F_min_max_diff}(c). This effect is most pronounced along the vertical axis, with noise primarily at the top and bottom of the pattern.
The reason is most likely that this is the direction away from the central maximum in which the pattern is formed, and that the deviation from the path length assumed in the approximations is at its largest there. If the mask is directly inverted when going from a minimum to a maximum solution, so that the random choices are identical, the intensity patterns become identical, with the same noise. The results shown in Figs.~\ref{fig:F_min_max_diff}(a) and \ref{fig:F_min_max_diff}(b) are comparable, and the colorbar is adjusted to show the strongest intensity observed in the pattern area of either of the two realizations. The specular point at the center is much stronger than the rest of the figure and oversaturates the colorbar. Opening a larger area of the mask does not affect the patterns that are generated; the extra intensity goes into the specular point. This point is a single pixel in these simulations, and the intensity value of this pixel changes by several orders of magnitude when going from the minimum to the maximum solution. If the simulations took the actual geometry of the holes into account, the central peak would broaden due to diffraction and there would be a visible difference between the central areas of the two patterns. \begin{figure*}[tbp] \centering \includegraphics[width=\textwidth]{owl_figure_comb.pdf} \caption{\label{fig:owl_min_max_diff}Similar to Fig.~\ref{fig:F_min_max_diff}, but with a more complex test pattern. Only the intensity in the target area where the pattern is formed is shown.} \end{figure*} Figure~\ref{fig:owl_min_max_diff} shows results similar to those presented in Fig.~\ref{fig:F_min_max_diff}, but for a much more complex pattern of an owl. The target image still had a resolution of \num{600}$\times$\num{600} pixels. In this case the minimal mask had \SI{7.6}{\percent} open subcells, while the maximal mask had \SI{92.4}{\percent}.
As with the intensity patterns in Fig.~\ref{fig:F_min_max_diff}, the difference between the minimum and maximum patterns lies primarily in the noise at the edges of the pattern [Fig.~\ref{fig:owl_min_max_diff}(c)] due to the randomly chosen phase. Note that the images in Fig.~\ref{fig:owl_min_max_diff} are cutouts of the full simulation results, showing only the region where the pattern is formed. \section{Contrast measurements} According to Eq.~\eqref{eq:Psi_m_sum_rewritten}, the pattern formed from masks with an even number of subcell rows (i.e., an even value of $S$) will only depend on the difference between the number of open subcells in the row pairs. This is an approximation, and to investigate how well it holds we need to compare the patterns generated from several equivalent masks with different open-fractions. The goal of the GBH method is to create sharp, arbitrary patterns. How well different masks perform this task can be quantified by measuring the contrast of the final pattern. We define the contrast as \begin{equation} \alpha = \frac{|\bar{I}_\text{in}-\bar{I}_\text{out}|}{|\bar{I}_\text{in}+\bar{I}_\text{out}|}, \end{equation} where $\bar{I}_\text{in}$ is the average intensity inside our pattern and $\bar{I}_\text{out}$ is the average intensity outside [see Fig.~\ref{fig:target}]. We only look at the intensity that falls inside the area predicted by Eq.~\eqref{eq:size_and_offset}, and regard the parts of the pattern area we want to expose as inside our pattern and the other parts as outside. This definition only works for binary patterns with no intermediate values. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{alpha_vs_openpart_new.pdf} \caption{\label{fig:alpha_vs_openpart_sbs_a}Contrast $\alpha$ of test patterns as a function of the open-fraction of the masks used when making the patterns.
Average intensities are based on \num{10} mask realizations.} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{openfrac_sbs.pdf} \caption{\label{fig:alpha_vs_openpart_sbs_b}Intensity patterns created from masks of open-fractions 0.1, 0.5 and 0.9 (left to right).} \end{figure} Several different masks corresponding to the F-pattern shown in Fig.~\ref{fig:target} were generated for open-fractions in the range from \SI{10}{\percent} to \SI{90}{\percent}. The resulting contrast measurements $\alpha$ are shown in Fig.~\ref{fig:alpha_vs_openpart_sbs_a} as a function of the open-fraction. The mean intensities were found from the intensity patterns generated by \num{10} different masks for each open-fraction level. These masks were generated using the same scaling $\gamma_\mathrm{max}$, but with different random phases. Figure~\ref{fig:alpha_vs_openpart_sbs_a} shows that the contrast is highest for masks close to the minimum or maximum number of open subcells and decreases towards an open-fraction of \SI{50}{\percent}. Figure~\ref{fig:alpha_vs_openpart_sbs_b} shows three intensity patterns created by masks of open-fraction \SI{10}{\percent}, \SI{50}{\percent} and \SI{90}{\percent}, respectively. When the open-fraction is around \SI{50}{\percent}, there is a clear increase in the mean intensity outside of the pattern. Similar behavior is also observed for other patterns, and when using more complex methods for calculating the intensity pattern that take the hole geometry into account. Inspection of Fig.~\ref{fig:alpha_vs_openpart_sbs_b} and several other similar test patterns shows that the noise around the edges of the target area increases as we increase the open-fraction. Test patterns where a large part of the pattern occupies the area next to the edges behave differently from what is shown in Fig.~\ref{fig:alpha_vs_openpart_sbs_a}.
The reason is that there is a much larger amount of noise in the inside area when it is located along the edge. The approximations used when generating the masks hold better close to the center of the pattern area, which could explain why the intensity patterns in general become noisier in regions away from the center. For masks with an open-fraction of \SI{50}{\percent}, many extra holes have been added to the pattern. According to the approximations these should cancel perfectly, but in practice this is not the case. For the masks with minimum open-fraction, all of the open subcells directly contribute to forming the pattern. For masks that have \SI{50}{\percent} open subcells, extra pairs that might not cancel completely have been added, resulting in a higher amount of noise. For open-fractions above \SI{50}{\percent}, the contrast improves again as the mask moves towards what is effectively an inverted version of the minimum configuration [see Fig.~\ref{fig:alpha_vs_openpart_sbs_a}]. For masks with the maximum open-fraction, the cancellations work better since there is now a much higher number of open subcells. \subsection{Tolerance to errors} \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{alpha_vs_err_new.pdf} \caption{\label{fig:alpha_vs_err}Contrast $\alpha$ as a function of the probability $\epsilon$ of having an error in a subcell. The different symbols correspond to masks created with several different initial open-fractions. Series with complementary open-fractions ($f$ and $1-f$) overlap. Each data point is based on the mean contrast $\alpha$ from intensity patterns computed for a series of mask realizations.} \end{figure} We now turn to the discussion of the error tolerance of the generated masks. If we introduce errors in the masks by randomly flipping open and closed subcells with a certain error probability, we can use the change in the contrast as a measure of the robustness of the mask.
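The contrast metric and the error-injection procedure above can be sketched as follows (our own illustrative Python, not the authors' code; the function names and the boolean region mask are assumptions):

```python
import numpy as np

def contrast(intensity, inside):
    """Contrast alpha = |I_in - I_out| / |I_in + I_out|.

    intensity: 2D array restricted to the predicted pattern area.
    inside:    boolean array, True for the pixels we want to expose.
    """
    i_in = intensity[inside].mean()
    i_out = intensity[~inside].mean()
    return abs(i_in - i_out) / abs(i_in + i_out)

def inject_errors(mask, eps, rng=None):
    """Flip each subcell (open <-> closed) independently with probability eps."""
    rng = np.random.default_rng() if rng is None else rng
    flips = rng.random(mask.shape) < eps
    return np.where(flips, 1 - mask, mask)
```

A robustness curve such as Fig.~\ref{fig:alpha_vs_err} then amounts to sweeping `eps`, recomputing the intensity pattern of the perturbed mask, and averaging `contrast` over several mask realizations.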
Figure~\ref{fig:alpha_vs_err} shows the robustness for masks created with several different open-fractions. As in Fig.~\ref{fig:alpha_vs_openpart_sbs_a}, the behavior is symmetric around an open-fraction of \SI{50}{\percent}, with pairs of overlapping curves in the figure. As the error probability increases, the contrast decreases for all the different masks, which is the expected behavior. When we introduce errors, we move the mask towards a random hole pattern on a grid. This means that the contrast should not go towards zero, but should converge towards the value associated with the response of a random mask. We see from Fig.~\ref{fig:alpha_vs_err} that the spread in robustness goes down as we increase the error probability. The \SI{10}{\percent} and \SI{90}{\percent} masks start with the highest contrast, but also decrease faster than any of the open-fractions in between. The contrast of the pattern corresponding to the mask created with \SI{50}{\percent} open subcells is more robust as errors are introduced: it falls off more slowly than for all other open-fractions, probably because it started with a high amount of noise and thus the worst contrast of all the open-fraction choices. The behavior observed in Fig.~\ref{fig:alpha_vs_err} continues if we raise the error probability above \num{0.15}, with the contrast for the other open-fractions converging towards that of the \SI{50}{\percent} open case. Figure~\ref{fig:F_sbs} presents a comparison of patterns generated using different masks. The left and right columns show the response from masks with \SI{10}{\percent} and \SI{50}{\percent} open subcells, respectively. The two rows show masks with error probabilities $\epsilon=0.0$ and $0.1$.
\begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{F_sbs.pdf} \caption{\label{fig:F_sbs} Simulated response from masks made with open-fractions of \num{0.1} (first column) and \num{0.5} (second column) and error probabilities \num{0.0} (first row) and~\num{0.1} (second row). All other parameters were held constant when generating the different masks. The intensity values are comparable and in arbitrary units.} \end{figure} \section{Scaling mask pattern} \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{alpha_vs_eta_vs_of_new.pdf} \caption{\label{fig:eta_of_alpha} Masks generated with the minimum method for different field scalings $\gamma_\eta$ show a variation in (a) the open-fraction of the subcells and (b) the contrast $\alpha$ as a function of the scaling parameter $\eta$. (c) Contrast $\alpha$ as a function of the open-fraction controlled by the change in the mask field scaling $\gamma_\eta$.} \end{figure} We will now investigate the dependence of the contrast and the tolerance to errors for masks created from an initial field that is scaled in such a way that not all possible values fit within the range representable by the discretized GBH masks [see Eq.~\eqref{eq:rescale_mean}]. To this end, we will use a procedure very similar to the one used in the previous section, for instance, to generate the results of Fig.~\ref{fig:F_min_max_diff}. The only difference is that the rescale step is now performed using $\gamma_\eta$ given by Eq.~\eqref{eq:rescale_mean} for a range of $\eta$. When $\eta=4$ the new rescaling factor is approximately equal to the old one, $\gamma_\eta \approx \gamma_\text{max}$. All of the results presented here were generated using the minimum method. Figure~\ref{fig:eta_of_alpha} presents both the open-fraction of subcells and the contrast for a series of different scaling parameters $\eta$.
Figure~\ref{fig:eta_of_alpha}(a) shows that for a smaller scaling parameter $\eta$ the open-fraction of the mask is larger. The reason is that with a smaller $\eta$ a larger fraction of the components of the mask field lie in the outer part of, or outside, the discretization region. At the same time there is also a change in the measured contrast for the patterns created by these masks, as shown in Fig.~\ref{fig:eta_of_alpha}(b). The variation in the contrast is smaller than the variation observed in the previous section, but the results show a smooth behavior and a maximum contrast at $\eta = 1.2$. When the value of $\eta$ is taken to be smaller than \num{4}, the rescale factor becomes larger and the components of the mask field better fill out the discretization region. This leads to a more even use of the available levels and a higher accuracy when representing a large part of the field values, but also a limited scale that does not have the range to accurately represent the larger amplitudes. As we decrease $\eta$ from \num{4}, the contrast initially improves as the mask pattern is better encoded. When $\eta < 1$, points with amplitude equal to the mean are shifted to the top of the encodable region, a larger part of the mask values are clamped to the size of the region and not represented properly, and the contrast of the produced pattern drops again. Figure~\ref{fig:eta_of_alpha}(c) removes the direct dependence on $\eta$ by combining the results of Figs.~\ref{fig:eta_of_alpha}(a) and \ref{fig:eta_of_alpha}(b), creating a plot comparable to Fig.~\ref{fig:alpha_vs_openpart_sbs_a}. This figure shows a very different behavior from what we saw previously. Now the change in open-fraction is connected to a change in how the approximation scheme is utilized. The change in contrast is not as large as in the previous sections, but there is a clear trend showing a maximum contrast at an open-fraction close to \num{0.25}.
This scaling gives a very uniform distribution of mask field points in the approximation region, and thus represents a solution that comes close to the best utilization of the approximation scheme. Comparing the results of Fig.~\ref{fig:alpha_vs_openpart_sbs_a} to Fig.~\ref{fig:eta_of_alpha}(c) shows that the better way of changing the open-fraction of subcells in a mask is to change how the discretization scheme in the GBH approximation is used, as opposed to adding additional open subcells by using the different solutions to Eq.~\eqref{eq:Psi_m_sum_rewritten}. This allows the open-fraction to be changed over almost the entire range between \num{0.0} and \num{0.5}, while still keeping a contrast similar to or better than that of the original solution investigated using $\gamma_\text{max}$. Additional holes can still be added to change the open-fraction and thereby extend this scheme to open-fractions above \num{0.5}. These results are not necessarily similar for all target patterns. Different patterns lead to mask fields that are differently distributed in the complex plane. The field scaling that produces the most uniform utilization of the available approximation region will therefore depend on the pattern. However, several mask patterns were investigated, and they all showed similar behavior. The optimal contrast for a given pattern and desired open-fraction was found by scanning over the parameter space of interest. A more sophisticated optimization technique could also have been used, but we did not explore this. \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{alpha_vs_err_eta_new.pdf} \caption{\label{fig:alpha_err_eta} Contrast $\alpha$ as a function of the probability of having an error in a subcell of the mask, for masks created with different scalings $\eta$.
Each data point is based on the mean contrast $\alpha$ from intensity patterns computed for a series of mask realizations.} \end{figure} Figure~\ref{fig:alpha_err_eta} presents the robustness of masks generated with different $\gamma_\eta$. This figure is similar to Fig.~\ref{fig:alpha_vs_err}, but shows the contrast behavior up to the higher error probability of \num{0.25}. The series with $\eta=4.0$ is close to the best cases presented previously using the maximum scaling; this is as expected since $\gamma_\text{max} \approx \gamma_{\eta}|_{\eta=4.0}$ for this test pattern. The series corresponding to scaling parameters $\eta$ that better utilize the available approximation region perform better. For series with $\eta < 1.5$ the contrast is above \num{0.9} when \SI{10}{\percent} of the subcells are flipped (i.e., $\epsilon=0.1$). These values of $\eta$ correspond to masks with an open-fraction from just under \num{0.2} up to almost \num{0.5}. This high level of robustness shows that using more subcells to encode the signal not only improves the initial contrast, it also improves the robustness of the masks to errors in which subcells are opened or closed at random. \section{Conclusion and future work} We have presented a new technique that extends the grid-based binary holography method by Onoe and Kaneko~\cite{Onoe79}. The new technique lets us freely choose between grids with a low or high number of open subcells. This allows the grid-based holography method to be adjusted to better fit specific user needs, whether this means tuning the open hole fraction down for faster production and more rigid masks, or tuning it up to prevent problems such as heating of the mask during exposure.
The technique can be extended to work for any selected open-fraction between the two extreme solutions shown, and investigations into the contrast of the patterns produced with different masks lead us to the conclusion that choosing an intermediary open-fraction between the extrema is best done by scaling the initial field. This scaling impacts both the contrast of the generated patterns and the open-fraction of the generated masks, leading to a tradeoff between these two parameters when selecting the scaling for generating masks. All results presented here work in the Fraunhofer diffraction limit. Future work should include the integration of lenses to reduce the size of the optical system and improve resolution. \begin{acknowledgments} The authors gratefully acknowledge support from the Research Council of Norway, Fripro Project 213453 and Forny Project 234159. The research of I.S. was supported in part by the Research Council of Norway Contract No. 216699 and The French National Research Agency~(ANR) under contract ANR-15-CHIN-0003-01. \end{acknowledgments} \nocite{apsrev41Control} \bibliographystyle{apsrev4-1}
\section{Introduction} \begin{figure*}[t] \centering \vspace{-0.05in} \includegraphics[width=.9\linewidth]{images/radar-v2.pdf} \vspace{-0.1in} \caption{Evaluation tracks of BiBench. Our benchmark evaluates the performance of binarization algorithms on a range of comprehensive evaluation tracks, including: ``Learning Task", ``Neural Architecture", ``Corruption Robustness", ``Training Consumption", ``Theoretical Complexity", and ``Hardware Inference".} \label{fig:ovewview} \end{figure*} The rise of deep learning has led to a persistent tension between ever-larger models and the limitations of deployment resources. Compression technologies have been widely studied to address this issue, including quantization~\citep{gong2014compressing,wu2016quantized,Vanhoucke2011ImprovingTS,gupta2015deep}, pruning~\citep{han2015learning, han2016deep, he2017channel}, distillation~\citep{hinton2015distilling,xu2018training,chen2018darkrank,Yim2017A,zagoruyko2017paying}, lightweight architecture design~\citep{mobilenet,mobilenet_v2,shufflenet,shufflenet_v2}, and low-rank decomposition~\citep{denton2014exploiting,lebedev2015speeding,jaderberg2014speeding,lebedev2016fast}. These technologies are essential for the practical application of deep learning. As a compression approach that reduces the bit-width to 1-bit, network binarization is regarded as the most aggressive quantization technology~\citep{rusci2020memory,choukroun2019low,ijcai2022p603,shang2022network,zhang2022pokebnn,bethge2020meliusnet,bethge2019back,martinez2019training,helwegen2019latent}. Binarized models take little storage and memory and accelerate inference through efficient bitwise operations. Compared to other compression technologies like pruning and architecture design, network binarization has strong topological generality, as it applies only to parameters.
As a result, it is widely studied in academic research as a standalone compression technique rather than just a 1-bit specialization of quantization~\citep{Gong:iccv19,gholamisurvey}. Some state-of-the-art (SOTA) binarization algorithms have even achieved full-precision performance with binarized models on large-scale tasks~\citep{deng2009imagenet,liu2020reactnet}. However, existing network binarization is still far from practical, and two worrisome trends appear in current research: \textbf{\textit{Trend-1}: Accuracy comparison scope is limited.} In recent research, several image classification tasks (such as CIFAR-10 and ImageNet) have become standard options for comparing accuracy among different binarization algorithms. While this helps to clearly and fairly compare accuracy performance, it causes most binarization algorithms to be engineered only for image inputs (2D visual modality), and their insights and conclusions are rarely verified in other modalities and tasks. The use of such homogeneous tasks also hinders a comprehensive evaluation from an architectural perspective. Furthermore, data noise, such as corruption~\citep{hendrycks2018benchmarking}, is a common problem on low-cost edge devices and is widely studied in compression, but is hardly considered in existing binarization algorithms. \textbf{\textit{Trend-2}: Efficiency analysis remains theoretical.} Network binarization is widely recognized for its significant storage and computation savings, with theoretical savings of up to 32$\times$ and 64$\times$ for convolutions, respectively~\citep{rastegari2016xnor,bai2021binarybert}. However, these efficiency claims lack experimental evidence due to the lack of hardware library support for deploying binarized models on real-world hardware.
Additionally, the training efficiency of binarization algorithms is often ignored in current research, leading to negative phenomena during the training of binary networks, such as increased demand for computation resources and time, sensitivity to hyperparameters, and the need for detailed optimization tuning. This paper presents \textbf{BiBench}, a network \textbf{bi}narization \textbf{bench}mark designed to evaluate binarization algorithms comprehensively in terms of accuracy and efficiency (Table~\ref{tab:overview}). Using BiBench, we select 8 representative binarization algorithms that are extensively influential and function at the operator level (details and selection criteria are in Appendix~\ref{app:algo}) and benchmark them on 9 deep learning datasets, 13 neural architectures, 2 deployment libraries, 14 hardware chips, and various hyperparameter settings. We invested approximately 4 GPU-years of computation time in creating BiBench, intending to promote a comprehensive evaluation of network binarization from both accuracy and efficiency perspectives. We also provide an in-depth analysis of the benchmark results, uncovering insights and offering suggestions for designing practical binarization algorithms.
\section{Background} \label{sec:background} \begin{table*}[t] \vspace{-0.05in} \begin{center} \renewcommand\arraystretch{0.9} \caption{ Comparison between BiBench and existing binarization works along evaluation tracks.} \label{tab:overview} \vspace{-0.15in} \setlength{\tabcolsep}{4.2mm} {\begin{threeparttable} \fontsize{8.pt}{\baselineskip}\selectfont \begin{tabular}{lccccccccc} \toprule \multirow{2.4}{*}{Algorithm} & \multicolumn{3}{c}{Technique} & \multicolumn{3}{c}{Accurate Binarization} & \multicolumn{3}{c}{Efficient Binarization} \\ \cmidrule(r){2-4}\cmidrule(r){5-7}\cmidrule(r){8-10} & $s$ & $\tau$ & $g$ & {\#Task} & {\#Arch} & Robust & Train & Comp & Infer\\ \midrule BNN~\citep{courbariaux2016binarized} & $\times$ & $\times$ & $\surd$ & 3 & 3 & * & $\surd$ & $\surd$ & $\surd$ \\ XNOR~\citep{rastegari2016xnor} & $\surd$ & $\times$ & $\times$ & 2 & 3 & * & $\surd$ & $\surd$ & $\surd$ \\ DoReFa~\citep{Dorefa-Net} & $\surd$ & $\times$ & $\times$ & 2 & 2 & * & $\times$ & $\surd$ & $\times$ \\ Bi-Real~\citep{liu2018bi} & $\times$ & $\times$ & $\surd$ & 1 & 2 & $\times$ & $\times$ & $\surd$ & $\times$ \\ XNOR++~\citep{bulat2019xnor} & $\surd$ & $\times$ & $\times$ & 1 & 2 & $\times$ & $\times$ & $\times$ & $\times$ \\ ReActNet~\citep{liu2020reactnet} & $\times$ & $\surd$ & $\times$ & 1 & 2 & $\times$ & $\times$ & $\surd$ & $\times$ \\ ReCU~\citep{xu2021recu} & $\times$ & $\surd$ & $\surd$ & 2 & 4 & $\times$ & $\times$ & $\times$ & $\times$ \\ FDA~\citep{xu2021learning} & $\times$ & $\times$ & $\surd$ & 1 & 6 & $\times$ & $\times$ & $\times$ & $\times$ \\ \midrule \textit{Our Benchmark (\textbf{BiBench})} & $\surd$ & $\surd$ & $\surd$ & \textbf{9} & \textbf{13} & \textbf{$\surd$} & \textbf{$\surd$} & \textbf{$\surd$} & \textbf{$\surd$} \\% \bottomrule \end{tabular} \begin{tablenotes} \item[1] ``$\surd$" and ``$\times$" indicates the track is considered in the original binarization algorithm, while ``*" indicates only being studied in other related studies. 
``$s$", ``$\tau$", or ``$g$" indicate the ``scaling factor", ``parameter redistribution", or ``gradient approximation" techniques used in the algorithm, respectively. We also present a more detailed list of these techniques (Table~\ref{tab:algos_detail}) for the binarization algorithms in Appendix~\ref{app:algo}. \end{tablenotes} \end{threeparttable}} \end{center} \vspace{-5pt} \end{table*} \subsection{Network Binarization} \label{subsec:NetworkBinarization} Binarization compresses weights $\boldsymbol{w}\in \mathbb{R}^{c_\text{in}\times c_\text{out}\times k\times k}$ and activations $\boldsymbol{a}\in \mathbb{R}^{c_\text{in}\times w\times h}$ to 1-bit in the computationally dense convolution, where $c_\text{in}$, $k$, $c_\text{out}$, $w$, and $h$ denote the input channel, kernel size, output channel, input width, and input height. The computation can be expressed as \begin{equation} \label{eq:quantized_net} \boldsymbol o = \alpha\operatorname{popcount}\left(\operatorname{xnor}\left(\operatorname{sign}(\boldsymbol{a}), \operatorname{sign}(\boldsymbol{w})\right)\right), \end{equation} where $\boldsymbol{o}$ denotes the outputs and $\alpha\in \mathbb{R}^{c_\text{out}}$ denotes the optional scaling factor calculated as $\alpha=\frac{\|w\|}{n}$~\citep{courbariaux2016binarized,rastegari2016xnor}, and $\operatorname{xnor}$ and $\operatorname{popcount}$ are bitwise instructions defined in~\citep{arm64,x86}. The $\operatorname{popcount}$ instruction counts the number of bits with the ``one" value in the input vector and writes the result to the targeted register. While binarized parameters of networks offer significant compression and acceleration benefits, their limited representation can lead to decreased accuracy. Various algorithms have been proposed to address this issue and improve the accuracy of binarized networks~\citep{yuan2021comprehensive}.
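To illustrate Eq.~\eqref{eq:quantized_net}, a minimal sketch of the 1-bit inner product for a single output channel follows (our own illustrative Python over $\pm 1$ vectors, not a hardware kernel; we assume $\alpha=\|w\|_1/n$, and we make explicit the affine map $2\cdot\operatorname{popcount}-n$ that turns the popcount into a signed dot product):

```python
import numpy as np

def binarized_dot(a, w):
    """XNOR-popcount inner product between binarized activations and weights.

    sign() maps each value to {-1, +1}; encoding +1 as bit 1 and -1 as
    bit 0 turns the +/-1 dot product into bit operations: xnor marks the
    positions where the signs agree, popcount counts them, and the dot
    product equals (#agreements) - (#disagreements) = 2 * popcount - n.
    """
    a_bits = np.asarray(a) >= 0          # sign(a) encoded as bits
    w_bits = np.asarray(w) >= 0          # sign(w) encoded as bits
    n = a_bits.size
    popcount = np.count_nonzero(~(a_bits ^ w_bits))  # xnor + popcount
    alpha = np.abs(w).mean()             # assumed scaling: ||w||_1 / n
    return alpha * (2 * popcount - n)
```

This reproduces `alpha * np.dot(np.sign(a), np.sign(w))` for nonzero inputs; real deployments pack 32 or 64 signs per machine word and use hardware xnor/popcount instructions.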
The majority of binarization algorithms aim to improve binarized operators (as Eq.~(\ref{eq:quantized_net}) shows), which play a crucial role in the optimization and hardware efficiency of binarized models~\citep{alizadeh2018a,geiger2020larq}. These operator improvements are also flexible across different neural architectures and learning tasks, demonstrating the generalizability of bit-width compression~\citep{wang2020bidet,ijcai2022p603,zhao2022pb}. Our BiBench considers 8 extensively influential binarization algorithms that focus on improving operators and can be broadly classified into three categories: scaling factors, parameter redistribution, and gradient approximation~\citep{courbariaux2016binarized,rastegari2016xnor,Dorefa-Net,liu2018bi,bulat2019xnor,liu2020reactnet,xu2021recu,xu2021learning}. Note that for selected binarization algorithms, the techniques requiring specified local structures or training pipelines are excluded for fairness, \textit{i.e.}, the bi-real shortcut of Bi-Real~\citep{Bi-Real} and duplicate activation of ReActNet~\citep{liu2020reactnet} in CNN neural architectures. See Appendix~\ref{app:algo} for more information on the algorithms in our BiBench. \subsection{Challenges for Binarization} Since around 2015, network binarization has garnered significant attention in various fields of research, including but not limited to vision and language understanding. However, several challenges still arise during the production and deployment of binarized networks in practice. The goal of binarization production is to train accurate binarized networks that are resource-efficient. Some recent studies have demonstrated that the performance of binarization algorithms on image classification tasks may not always generalize to other learning tasks and neural architectures~\citep{qin2020bipointnet,wang2020bidet,qin2021bibert,liu2022bit}. 
In order to achieve higher accuracy, some binarization algorithms may require several times more training resources than full-precision networks. Ideally, binarized networks should be hardware-friendly and robust when deployed on edge devices. However, most mainstream inference libraries do not currently support the deployment of binarized networks on hardware~\citep{tensorrt,acl,snpe}, which limits the performance of existing binarization algorithms in practice. In addition, the data collected by low-cost devices in natural edge scenarios is often of low quality and may even be corrupted, which can negatively impact the robustness of binarized models~\citep{lin2018defensive,ye2019adversarial,cygert2021robustness}. However, most existing binarization algorithms do not consider corruption robustness in their design. \section{BiBench: Tracks and Metrics for Network Binarization} In this section, we present BiBench, a benchmark for accurate and efficient network binarization. Our evaluation consists of 6 tracks and corresponding metrics, as shown in Figure~\ref{fig:ovewview}, which address the practical challenges of producing and deploying binarized networks. Higher scores on these metrics indicate better performance. \subsection{Towards Accurate Binarization} \label{subsec:TowardsAccurateBinarization} In our BiBench, the evaluation tracks for accurate binarization are ``Learning Task", ``Neural Architecture" (for production), and ``Corruption Robustness" (for deployment). \ding{172}\ \textbf{Learning Task}. We comprehensively evaluate network binarization algorithms using 9 learning tasks across 4 different data modalities. For the widely evaluated 2D visual modality tasks, we include image classification on the CIFAR-10~\citep{CIFAR} and ImageNet~\citep{krizhevsky2012imagenet} datasets, as well as object detection on the PASCAL VOC~\citep{hoiem2009pascal} and COCO~\citep{lin2014microsoft} datasets.
In the 3D visual modality tasks, we evaluate the algorithms on the ModelNet40 classification~\citep{wu20153d} and ShapeNet segmentation~\citep{chang2015shapenet} datasets of 3D point clouds. For the textual modality tasks, we use the natural language understanding tasks in the GLUE benchmark~\citep{wang2018glue}. For the speech modality tasks, we evaluate the algorithms on the Speech Commands KWS dataset~\citep{warden2018speech}. See Appendix~\ref{app:task} for more details on the tasks and datasets. To evaluate the performance of a binarization algorithm on this track, we use the accuracy of full-precision models as a baseline and calculate the mean relative accuracy over all architectures on each task. The Overall Metric (OM) for this track is then calculated as the quadratic mean of the relative accuracies across all tasks~\citep{curtis2000quadratic}: \begin{equation} \label{eq:track1} \textrm{OM}_\textrm{task}=\sqrt{\frac{1}{N}\sum\limits_{i=1}^N\mathbb{E}^2\left(\frac{\boldsymbol{A}_{\textrm{task}_i}^{bi}}{\boldsymbol{A}_{\textrm{task}_i}}\right)}, \end{equation} where $\boldsymbol{A}_{\textrm{task}_i}^{bi}$ and $\boldsymbol{A}_{\textrm{task}_i}$ denote the accuracy of the binarized and full-precision models on the $i$-th task, respectively, $N$ is the number of tasks, and $\mathbb{E}(\cdot)$ is the mean operation. Note that the quadratic mean form is used uniformly in BiBench to unify all overall metrics of the tracks, which helps prevent certain poor performers from disproportionately influencing the metric and allows for a more accurate measure of overall performance on each track. \ding{173}\ \textbf{Neural Architecture}. We evaluate various neural architectures, including mainstream CNN-based, transformer-based, and MLP-based architectures, to assess the generalizability of binarization algorithms from the perspective of neural architecture.
Specifically, we use standard ResNet-18/20/34~\citep{he2016deep} and VGG~\citep{simonyan2015very} to evaluate CNN architectures, and apply the Faster-RCNN~\citep{NIPS2015_5638} and SSD300~\citep{liu2016ssd} frameworks as detectors. To evaluate transformer-based architectures, we binarize BERT-Tiny4/Tiny6/Base~\citep{kenton2019bert} with the bi-attention mechanism~\citep{qin2021bibert} for convergence. We also evaluate MLP-based architectures, including PointNet$_\text{vanilla}$ and PointNet~\citep{qi2017pointnet} with the EMA aggregator~\citep{qin2020bipointnet}, FSMN~\citep{zhang2015feedforward}, and Deep-FSMN~\citep{zhang2018deep}, due to their linear unit composition. Detailed descriptions of these architectures can be found in Appendix~\ref{app:arch}. Similar to the overall metric for the learning task track, we build the overall metric for the neural architecture track: \begin{equation} \scriptsize \textrm{OM}_\textrm{arch}=\sqrt{\frac{1}{3}\left(\mathbb{E}^2\left(\frac{\boldsymbol{A}_{\textrm{CNN}}^{bi}}{\boldsymbol{A}_{\textrm{CNN}}}\right)+\mathbb{E}^2\left(\frac{\boldsymbol{A}_{\textrm{Transformer}}^{bi}}{\boldsymbol{A}_{\textrm{Transformer}}}\right)+\mathbb{E}^2\left(\frac{\boldsymbol{A}_{\textrm{MLP}}^{bi}}{\boldsymbol{A}_{\textrm{MLP}}}\right)\right) }. \end{equation} \ding{174}\ \textbf{Corruption Robustness}. The corruption robustness of binarized models at deployment is important for handling scenarios such as perceptual device damage, a common issue with low-cost equipment in real-world implementations. To assess the robustness of binarized models to corruption of 2D visual data, we evaluate algorithms on the CIFAR10-C~\citep{hendrycks2018benchmarking} benchmark.
Therefore, we evaluate the performance of binarization algorithms on corrupted data compared to normal data using the corruption generalization gap~\citep{zhang2022delving}: \begin{equation} \boldsymbol{G}_{\textrm{task}_i} = \boldsymbol{A}_{\textrm{task}_i}^{\textrm{norm}} - \boldsymbol{A}_{\textrm{task}_i}^{\textrm{corr}}, \end{equation} where $\boldsymbol{A}^{\textrm{corr}}_{\textrm{task}_i}$ and $\boldsymbol{A}^{\textrm{norm}}_{\textrm{task}_i}$ denote the accuracy results under all architectures on the $i$-th corruption task and the corresponding normal task, respectively. The overall metric on this track is then calculated by \begin{equation} \textrm{OM}_\textrm{robust}=\sqrt{\frac{1}{C}\sum\limits_{i=1}^C\mathbb{E}^2\left(\frac{\boldsymbol{G}_{\textrm{task}_i}}{\boldsymbol{G}_{\textrm{task}_i}^{bi}}\right) }, \end{equation} where $C$ is the number of corruption tasks. \subsection{Towards Efficient Binarization} We evaluate the efficiency of network binarization in terms of ``Training Consumption" for production, and ``Theoretical Complexity" and ``Hardware Inference" for deployment. \ding{175}\ \textbf{Training Consumption}. We consider the training resources occupied and the hyperparameter sensitivity of binarization algorithms, which impact the cost of a single training run and of the overall tuning process. To evaluate the ease of tuning binarization algorithms to optimal performance, we train their binarized networks with various hyperparameter settings, including different learning rates, learning rate schedulers, optimizers, and even random seeds. We align the epochs for binarized and full-precision networks and compare their resource consumption and training time. The metric used to evaluate the training consumption track is based on both training time and sensitivity to hyperparameters.
For one binarization algorithm, we have \begin{equation} \textrm{OM}_\textrm{train}=\sqrt{\frac{1}{2}\left(\mathbb{E}^2\left(\frac{\boldsymbol{T}_\textrm{train}}{\boldsymbol{T}_\textrm{train}^{bi}}\right) +\mathbb{E}^2\left(\frac{\operatorname{std}(\boldsymbol{A}_\textrm{hyper})}{\operatorname{std}(\boldsymbol{A}^{bi}_\textrm{hyper})}\right)\right)}, \end{equation} where $\boldsymbol{T}_\textrm{train}$ is the set of times spent on single training runs, $\boldsymbol{A}_\textrm{hyper}$ is the set of results obtained using different hyperparameter configurations, and $\operatorname{std}(\cdot)$ calculates standard deviation values. \ding{176}\ \textbf{Theoretical Complexity}. To evaluate complexity, we compute the compression and speedup ratios before and after binarization on architectures such as ResNet18. The evaluation metric is based on the model size (MB) and computational floating-point operations (FLOPs) saved during inference. Binarized parameters occupy 1/32 the storage of their 32-bit floating-point counterparts~\citep{rastegari2016xnor}. Binarized operations, where a 1-bit weight is multiplied by a 1-bit activation, take approximately 1/64 FLOPs on a CPU with a 64-bit instruction size~\citep{Dorefa-Net,liu2018bi,li2019additive}.
The compression ratio $\boldsymbol{r}_{c}$ and speedup ratio $\boldsymbol{r}_{s}$ are \begin{equation} \begin{aligned} \boldsymbol{r}_{c}=&\frac{|\boldsymbol{M}|_{\ell0}}{\frac{1}{32}\left(|\boldsymbol{M}|_{\ell0}-|\boldsymbol{\hat{M}}|_{\ell0}\right)+|\boldsymbol{\hat{M}}|_{\ell0}},\\ \boldsymbol{r}_{s}=&\frac{\operatorname{FLOPs}_{\boldsymbol{M}}}{\frac{1}{64}\left(\operatorname{FLOPs}_{\boldsymbol{M}}-\operatorname{FLOPs}_{\boldsymbol{\hat{M}}}\right)+\operatorname{FLOPs}_{\boldsymbol{\hat{M}}}}, \end{aligned} \end{equation} where $\boldsymbol{M}$ and $\boldsymbol{\hat{M}}$ denote the full-precision parameters of the original model and those remaining in the binarized model, respectively, and $\operatorname{FLOPs}_{\boldsymbol{M}}$ and $\operatorname{FLOPs}_{\boldsymbol{\hat{M}}}$ represent the computation related to these parameters. The overall metric for theoretical complexity is \begin{equation} \textrm{OM}_\textrm{comp}=\sqrt{\frac{1}{2}\left(\mathbb{E}^2(\boldsymbol{r}_{c})+\mathbb{E}^2(\boldsymbol{r}_{s})\right)}. \end{equation} \ding{177}\ \textbf{Hardware Inference}. As binarization is not widely supported in hardware deployment, only two inference libraries, Larq's Compute Engine~\citep{geiger2020larq} and JD's daBNN~\citep{zhang2019dabnn}, can deploy and evaluate binarized models on ARM hardware in practice. We focus on ARM CPU inference on mainstream hardware for edge scenarios, such as HUAWEI Kirin, Qualcomm Snapdragon, Apple M1, MediaTek Dimensity, and Raspberry Pi (details in Appendix~\ref{app:hardware}).
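As a concrete illustration of the theoretical-complexity track, the compression ratio $\boldsymbol{r}_{c}$ and speedup ratio $\boldsymbol{r}_{s}$ defined above can be sketched in a few lines of Python; the parameter and FLOP counts below are hypothetical placeholders for a ResNet-18-like model, not measured values.

```python
def compression_ratio(total_params, fp_params_kept):
    """r_c: binarized parameters occupy 1/32 the storage of FP32 ones."""
    binarized = total_params - fp_params_kept
    return total_params / (binarized / 32 + fp_params_kept)

def speedup_ratio(total_flops, fp_flops_kept):
    """r_s: binary multiply-accumulates take ~1/64 FLOPs on a 64-bit CPU."""
    binarized = total_flops - fp_flops_kept
    return total_flops / (binarized / 64 + fp_flops_kept)

# Hypothetical counts (illustration only): ~11.7M parameters and
# ~1.8G FLOPs, with ~0.1M params / ~0.1G FLOPs (e.g., first and last
# layers) kept in full precision.
r_c = compression_ratio(11.7e6, 0.1e6)   # roughly 25x smaller
r_s = speedup_ratio(1.8e9, 0.1e9)        # roughly 14x fewer FLOPs
```

Keeping even a small fraction of parameters in full precision caps both ratios well below the ideal 32$\times$ and 64$\times$, which is why algorithms differ mainly in how many floating-point scaling factors they retain.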
For a given binarization algorithm, we use the savings in storage and inference time under different inference libraries and hardware as evaluation metrics: \begin{equation} \textrm{OM}_\textrm{infer}=\sqrt{\frac{1}{2}\left(\mathbb{E}^2\left(\frac{\boldsymbol{T}_\textrm{infer}}{\boldsymbol{T}_\textrm{infer}^{bi}}\right) +\mathbb{E}^2\left(\frac{\boldsymbol{S}_\textrm{infer}}{\boldsymbol{S}_\textrm{infer}^{bi}}\right)\right)}, \end{equation} where $\boldsymbol{T}_\textrm{infer}$ is the inference time and $\boldsymbol{S}_\textrm{infer}$ is the storage used on different devices. \section{BiBench Implementation} \label{sec:bibench_eval} This section presents the implementation details and the training and inference pipelines of BiBench. \noindent\textbf{Implementation details.} BiBench is implemented using the PyTorch~\citep{paszke2019pytorch} package. The definitions of the binarized operators are contained in individual, separate files, enabling the flexible replacement of the corresponding operators in the original model when evaluating different tasks and architectures. For deployment, well-trained binarized models of a particular binarization algorithm are exported to the Open Neural Network Exchange (ONNX) format~\citep{onnxruntime} and provided as input to the appropriate inference libraries (if applicable for the algorithm). \noindent\textbf{Training and inference pipelines.} \textit{Hyperparameters}: Binarized networks are trained for the same number of epochs as their full-precision counterparts. Following the results in Section~\ref{sec:consumption}, we use the Adam optimizer for all binarized models to ensure stable convergence. The default initial learning rate is $10^{-3}$ (or $0.1\times$ the default learning rate), and the learning rate scheduler is CosineAnnealingLR~\citep{loshchilov2017sgdr}.
\textit{Architecture}: BiBench follows the original architectures of full-precision models, binarizing their convolution, linear, and multiplication units with the selected binarization algorithms. Hardtanh is uniformly used as the activation function to prevent all-one features. \textit{Pretraining}: All binarization algorithms use finetuning. For each one, all binarized models are initialized using the same pre-trained model for the specific neural architecture and learning task to eliminate inconsistency at initialization. \section{BiBench Evaluation and Analysis} This section presents and analyzes the evaluation results in BiBench. The main accuracy results are in Table~\ref{tab:acc1}, and the efficiency results are in Table~\ref{tab:eff}. Additional details can be found in Appendix~\ref{sec:FullResults}. \subsection{Accuracy Tracks} The accuracy results for network binarization are presented in Table~\ref{tab:acc1}. These results were obtained using the metrics defined in Section~\ref{subsec:TowardsAccurateBinarization} for each accuracy-related track. \subsubsection{Learning Task: Performance Varies Greatly by Algorithms and Modalities} We present the evaluation results of binarization on various learning tasks. In addition to the overall metric $\textrm{OM}_\textrm{task}$, we also provide the relative accuracy of binarized networks compared to their full-precision counterparts. \textbf{The impact of binarized operators is crucial and significant}. With fully unified training pipelines and architectures, a substantial variation in performance appears among binarization algorithms across every learning task. For example, the SOTA FDA algorithm exhibits a 21.3\% improvement in accuracy on GLUE datasets compared to XNOR++, and the difference is even greater at 33.0\% on ShapeNet between XNOR and ReCU. This suggests that binarized operators play a crucial role in the learning task track, and their importance is also confirmed in other tracks. 
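As a concrete illustration of how these overall numbers are aggregated, the quadratic-mean metric $\textrm{OM}_\textrm{task}$ defined in Section~\ref{subsec:TowardsAccurateBinarization} can be sketched in a few lines; the per-task relative accuracies below are hypothetical placeholders, not benchmark results.

```python
import math

def overall_metric(relative_scores):
    """Quadratic mean of the per-task mean relative accuracies
    E(A_bi / A_fp), one entry per task."""
    n = len(relative_scores)
    return math.sqrt(sum(s * s for s in relative_scores) / n)

# Hypothetical mean relative accuracies on three tasks (illustration only).
om = overall_metric([0.92, 0.85, 0.78])
```

Since the quadratic mean is never below the arithmetic mean, a single weak task cannot drag the overall metric down disproportionately, matching the design intent stated for the track metrics.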
\textbf{Binarization algorithms vary greatly under different data modalities}. When comparing various learning tasks, it is notable that binarized networks suffer a significant drop in accuracy on the language understanding GLUE benchmark but can approach full-precision performance on the ModelNet40 point cloud classification task. This and similar phenomena suggest that the direct transfer of binarization insights across learning tasks is non-trivial. For overall performance, both ReCU and ReActNet have high accuracy across various learning tasks. While ReCU performs best on most individual tasks, ReActNet ultimately stands out in the overall metric comparison. Both algorithms apply reparameterization in the forward propagation and gradient approximation in the backward propagation. \subsubsection{Neural Architecture: Binarization on Transformers Is Challenging} \textbf{Binarization exhibits a clear advantage on CNN- and MLP-based architectures compared to transformer-based ones}. Advanced binarization algorithms can achieve 78\%-86\% of full-precision accuracy on CNNs, and binarized networks with MLP architectures can even approach full-precision performance (\textit{e.g.}, Bi-Real 87.83\%). In contrast, transformer-based architectures suffer significant performance degradation when binarized, and none of the algorithms achieve an overall accuracy metric higher than 70\%. These results indicate that transformer-based architectures, with their unique attention mechanisms, require specialized binarization designs rather than direct binarization. The overall winner on the architecture track is the FDA algorithm, which performs best on both CNNs and transformers. The evaluation of these two tracks shows that binarization algorithms that use statistical channel-wise scaling factors and custom gradient approximation, such as FDA and ReActNet, have some degree of stability advantage. 
\begin{figure*}[t] \centering \vspace{-0.05in} \includegraphics[width=0.95\linewidth]{images/bar-v6.pdf} \vspace{-0.15in} \caption{Comparisons of accuracy under different training settings. } \label{fig:ablation} \end{figure*} \subsubsection{Corruption Robustness: Binarization Is Robust to Corruption} \textbf{Binarized networks can approach full-precision-level robustness to corruption}. Interestingly, binarized networks demonstrate robustness comparable to their full-precision counterparts when evaluated on corrupted data. Evaluation results on the CIFAR10-C dataset reveal that binarized networks perform similarly to full-precision networks on typical 2D image corruption tasks. In some cases, such as ReCU and XNOR-Net, binarized networks even outperform their full-precision counterparts. A binarized network typically requires little additional design or supervision to match the corruption robustness of its full-precision counterpart. As such, robustness to corruption appears to be a general property of binarized networks rather than a specific characteristic of certain algorithms. \subsection{Efficiency Tracks} We analyze the efficiency metrics of training consumption, theoretical complexity, and hardware inference (Table~\ref{tab:eff}). \subsubsection{Training Consumption: Binarization Could Be Stable yet Generally Expensive} \label{sec:consumption} We thoroughly examine the training cost of binarization algorithms on ResNet18 for CIFAR10 and present the sensitivity and training time results for different binarization algorithms in Table~\ref{tab:eff} and Figure~\ref{fig:speed_compute}, respectively. \textbf{``Binarization$\not=$sensitivity": existing techniques can stabilize binarization-aware training}.
It is commonly believed that the training of binarized networks is more sensitive to training settings than that of full-precision networks, due to the representation limitations and gradient approximation errors introduced by the high degree of discretization. However, we find that the hyperparameter sensitivities of existing binarization algorithms are polarized: some are even more hyperparameter-stable than full-precision training, while others fluctuate greatly. This variation stems from the different techniques used by the binarized operators of these algorithms. Hyperparameter-stable binarization algorithms often share the following characteristics: (1) \textit{channel-wise scaling factors} based on learning or statistics; (2) \textit{soft approximation} to reduce gradient error. These stable algorithms may not necessarily outperform others, but they can simplify the tuning process in production and provide reliable accuracy with a single training run. The preferred hyperparameter settings for stable binarized-network training are also clear. Statistical results in Figure~\ref{fig:ablation} show that training with the Adam optimizer, a learning rate equal to that of the full-precision network (1$\times$), and the CosineAnnealingLR scheduler is more stable than other settings. Based on this, we adopt this setting as part of the standard training pipelines when evaluating binarization. \textbf{Soft approximation in binarization leads to a significant increase in training time}. Comparing the time consumed by each binarization algorithm, we find that the training time of algorithms using custom gradient approximation techniques, such as Bi-Real and ReActNet, increases significantly. The training-time metric of FDA even drops to 20.62\%, meaning its training takes almost 5$\times$ as long as that of a full-precision network.
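The hyperparameter-sensitivity term of $\textrm{OM}_\textrm{train}$ can be sketched as follows; the accuracy lists are hypothetical placeholders for runs under different optimizers, learning rates, schedulers, and seeds, not measured results.

```python
from statistics import pstdev

def sensitivity_ratio(fp_accuracies, bi_accuracies):
    """std(A_hyper) / std(A_hyper^bi): the accuracy spread of the
    full-precision network across hyperparameter configurations,
    divided by the spread of the binarized network. Values near or
    above 1 indicate hyperparameter-stable binarization-aware training."""
    return pstdev(fp_accuracies) / pstdev(bi_accuracies)

# Hypothetical accuracies over four hyperparameter configurations
# (illustration only, not benchmark results).
fp      = [94.1, 94.3, 93.8, 94.0]  # full-precision baseline
stable  = [92.0, 92.2, 91.9, 92.1]  # a hyperparameter-stable algorithm
fragile = [91.5, 85.0, 90.2, 78.3]  # a hyperparameter-sensitive one
```

A fragile algorithm's large spread drives its ratio far below 1, which lowers its overall training-consumption score even if its best single run is competitive.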
\subsubsection{Theoretical Complexity: Different Algorithms Have Similar Complexity} \textbf{There is only a minor difference in theoretical complexity among binarization algorithms}. The leading cause of the difference in compression rate is each algorithm's definition of the static scaling factor. For example, BNN does not use any factors and has the highest compression. In terms of theoretical acceleration, the main difference comes from two factors: removing static scaling factors also improves the theoretical speedup, while real-time re-scaling and mean-shifting of activations add extra computation, as in the case of ReActNet, which reduces the speedup by $0.11\times$. In general, the theoretical complexity of each method is similar, with overall metrics in the range of $[12.71, 12.94]$. These results suggest that binarization algorithms should have similar inference efficiency. \begin{table*}[t] \renewcommand\arraystretch{0.8} \centering \vspace{-0.05in} \caption{Deployment capability of different inference libraries on real hardware.} \label{tab:cap} \vspace{-0.15in} \setlength{\tabcolsep}{5.mm} {\fontsize{8.pt}{\baselineskip}\selectfont \begin{tabular}{lllcccc} \toprule Infer. Lib. & Provider & $s$ Granularity & $s$ Form & Fold BN & Act. Re-scaling & Act. Mean-shifting \\ \midrule Larq & Larq & Channel-wise & FP32 & $\surd$ & $\times$ & $\surd$ \\ daBNN & JD & Channel-wise & FP32 & $\surd$ & $\times$ & $\times$ \\ \midrule Algorithm & Deployable & $s$ Granularity & $s$ Form & Fold BN & Act. Re-scaling & Act.
Mean-shifting \\ \midrule BNN & {$\surd$} & N/A & N/A & N/A & $\times$ & $\times$ \\ XNOR & {$\times$} & Channel-wise & FP32 & $\surd$ & $\surd$ & $\times$ \\ DoReFa & {$\surd$} & Channel-wise & FP32 & $\surd$ & $\times$ & $\times$ \\ Bi-Real & {$\surd$} & Channel-wise & FP32 & $\surd$ & $\times$ & $\times$ \\ XNOR++ & {$\times$} & Spatial-wise & FP32 & $\times$ & $\times$ & $\times$ \\ ReActNet & {$\surd$} & Channel-wise & FP32 & $\surd$ & $\times$ & $\surd$ \\ ReCU & {$\surd$} & Channel-wise & FP32 & $\surd$ & $\times$ & $\times$ \\ FDA & {$\surd$} & Channel-wise & FP32 & $\surd$ & $\times$ & $\times$ \\ \bottomrule \end{tabular}} \end{table*} \subsubsection{Hardware Inference: Immense Potential on Edge Devices Despite Limited Support} The hardware inference track stands out from the others in that it provides valuable insights into the practicalities of deploying binarization in real-world settings. \textbf{Limited inference libraries lead to almost fixed paradigms of binarization deployment.} The availability of open-source inference libraries that support the deployment of binarization algorithms on hardware is quite limited. After investigating the existing options, we found that only Larq~\citep{geiger2020larq} and daBNN~\citep{zhang2019dabnn} offer complete deployment pipelines, and both primarily support deployment on ARM devices. As shown in Table~\ref{tab:cap}, both libraries support channel-wise scaling factors in floating-point form that must be fused into the Batch Normalization (BN) layer. However, neither supports dynamic activation statistics or re-scaling during inference. Larq additionally supports mean-shifting activations with a fixed bias. These limitations in the available inference libraries' deployment capabilities have significantly impacted the practicality of deploying binarization algorithms.
For example, the scale factor shape of XNOR++ caused its deployment to fail, and XNOR also failed due to its activation re-scaling technique. These constraints have resulted in a situation where the vast majority of binarization methods have almost identical inference performance, with the mean-shifting operation of ReActNet on activations having only a slight impact on efficiency. As a result, binarized models must adhere to fixed deployment paradigms and exhibit almost identical efficiency. \textbf{Born for the edge: more promising for lower-power edge computing}. After evaluating the performance of binarized models on a range of different chips, we found that the average speedup of binarization was higher on chips with lower computing power (Figure~\ref{fig:speed_compute}). This counter-intuitive result is likely because higher-performance chips gain more acceleration from multi-threading when running floating-point models, leading to a relatively smaller speedup for binarized models on these chips. In contrast, binarization technology is particularly effective on edge chips with lower performance and cost; its extreme compression and acceleration capabilities can enable the deployment of advanced neural networks at the edge. These findings suggest that binarization is well-suited for low-power, cost-sensitive edge devices. \begin{figure} \vspace{-0.1in} \begin{center} \includegraphics[width=.95\linewidth]{images/speed-compute-v1.pdf} \end{center} \vspace{-0.1in} \caption{The lower the chip's computing power, the higher the inference speedup of deployed binarized models.} \label{fig:speed_compute} \vspace{-10pt} \end{figure} \subsection{Suggested Paradigm of Binarization Algorithm} Based on our evaluation and analysis, we propose the following paradigm for achieving accurate and efficient network binarization with existing techniques: (1) \textbf{Soft gradient approximation} presents great potential.
It improves performance by increasing training cost rather than deployment cost, and all accuracy-winning algorithms (\textit{i.e.}, ReActNet, ReCU, and FDA) adopt it. By further exploring this technique, FDA outperforms previous algorithms on the architecture track, indicating the great potential of soft gradient approximation. (2) \textbf{Channel-wise scaling factors} are currently the optimal option for binarization. The gain from floating-point scaling factors is demonstrated in the accuracy tracks, while deployability considerations limit their form to channel-wise; this represents a balanced trade-off between accuracy and efficiency. (3) \textbf{Pre-binarization parameter redistributing} is an optional but beneficial operation that can be implemented as mean-shifting of weights or activations before binarization. Our findings indicate that this technique can significantly enhance accuracy with little added inference cost, as seen in ReActNet and ReCU. It is important to note that, despite the insights gained from benchmarking on the evaluation tracks, \textbf{none of the binarization techniques or algorithms work well across all scenarios so far}. Further research is needed to overcome the current limitations and mutual restrictions between production and deployment, and to develop binarization algorithms that consider both deployability and efficiency. Additionally, it would be helpful for inference libraries to support more advanced binarized operators. In the future, the focus of binarization research should be on addressing these issues. \section{Discussion} In this paper, we propose BiBench, a versatile and comprehensive benchmark toward the fundamentals of network binarization. BiBench covers 8 network binarization algorithms, 9 deep learning datasets (including one corruption benchmark), 13 different neural architectures, 2 deployment libraries, 14 real-world hardware platforms, and various hyperparameter settings.
Across these scopes, we develop evaluation tracks that measure accuracy under multiple conditions and efficiency when deployed on actual hardware. Through its benchmark results and analysis, BiBench summarizes an empirically optimized paradigm with several critical considerations for designing accurate and efficient binarization algorithms. BiBench aims to provide a comprehensive and unbiased resource for researchers and practitioners working on model binarization. We hope BiBench can facilitate a fair comparison of algorithms through a systematic investigation with metrics that reflect the fundamental requirements for model binarization, and serve as a foundation for applying this technology in broader and more practical scenarios. \section{Details of Benchmark} \subsection{Details of Binarization Algorithm} \label{app:algo} \textbf{General Binarization}: Previous research has deemed lower bit-width quantization methods more aggressive~\citep{rusci2020memory,choukroun2019low,ijcai2022p603}, as they often provide higher compression and faster processing at the cost of lower accuracy. Among all quantization techniques, 1-bit quantization (binarization) is considered the most aggressive~\citep{ijcai2022p603}, as it poses significant accuracy challenges but offers the greatest compression and speed benefits. \textit{Training}. When training a binarized model, the sign function is commonly used in the forward pass, and gradient approximations such as STE are applied during the backward pass to enable the model to be trained. Since the parameters are quantized to binary, network binarization methods typically use a simple sign function as the quantizer rather than the same quantizer as in multi-bit (2-8 bit) quantization~\citep{Gong:iccv19,gholamisurvey}.
Specifically, as \cite{Gong:iccv19} describes, for multi-bit uniform quantization, given the bit width $b$ and a floating-point activation/weight $\boldsymbol{x}$ falling in the range $(l, u)$, the complete quantization-dequantization process of uniform quantization can be defined as \begin{equation} Q_U(\boldsymbol{x})=\operatorname{round}\left(\frac{\boldsymbol{x}}{\Delta}\right) \Delta, \end{equation} where the original range $(l, u)$ is divided into $2^b-1$ intervals $P_i$, $i \in \{0, 1, \cdots, 2^b-1\}$, and $\Delta = \frac{u-l}{2^b-1}$ is the interval length. When $b=1$, $Q_U(\boldsymbol{x})$ reduces to the sign function, and the binary function is expressed as \begin{equation} Q_B(\boldsymbol{x}) = \operatorname{sign}(\boldsymbol{x}). \end{equation} Therefore, binarization can be regarded formally as the 1-bit specialization of quantization. \textit{Deployment}. For efficient deployment on real-world hardware, binarized parameters are grouped in blocks of 32 and processed simultaneously using 32-bit instructions, which is the key principle for achieving acceleration. To execute binarized computations efficiently, instructions such as XNOR (or the combination of EOR and NOT) and popcount are used to enable the deployment of binarized networks on real-world hardware. The XNOR (exclusive-NOR) gate is the combination of an XOR gate and an inverter, and XOR (also known as EOR) is a common instruction that has long been available in the assembly instruction sets of all target platforms. The popcount instruction, or Population Count per byte, counts the number of bits with a specific value in each vector element in the source register, stores the result in a vector, and writes it to the destination register~\citep{arm64}. This instruction is used to accelerate the inference of binarized networks~\citep{BNN,rastegari2016xnor} and is widely supported by various hardware; see the definitions of popcount for ARM and x86 in \citep{arm64} and \citep{x86}, respectively.
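To make the deployment principle concrete, the following pure-Python sketch mimics the XNOR + popcount pipeline on packed bit-masks; a real inference library would instead use SIMD registers and hardware popcount instructions, so this is an illustration of the arithmetic, not an actual kernel.

```python
def pack(bits):
    """Pack a {-1, +1} vector into an integer bit-mask (+1 -> 1, -1 -> 0),
    mimicking how 32/64 binarized values share one machine word."""
    word = 0
    for i, b in enumerate(bits):
        if b > 0:
            word |= 1 << i
    return word

def binary_dot(x_bits, w_bits):
    """Dot product of two {-1, +1} vectors via XNOR and popcount:
    dot = 2 * popcount(~(x ^ w)) - n, since XNOR marks matching lanes."""
    n = len(x_bits)
    mask = (1 << n) - 1             # keep only the n valid lanes
    xnor = ~(pack(x_bits) ^ pack(w_bits)) & mask
    matches = bin(xnor).count("1")  # popcount
    return 2 * matches - n

x = [+1, -1, +1, +1, -1]
w = [+1, +1, +1, -1, -1]
assert binary_dot(x, w) == sum(a * b for a, b in zip(x, w))
```

The identity $\text{dot} = 2\cdot\text{popcount}(\operatorname{XNOR}(x, w)) - n$ is exactly what lets one machine word replace 32 or 64 floating-point multiply-accumulates.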
\textit{Comparison with other compression techniques}. Current network compression technologies primarily focus on reducing the size and computation of full-precision models. Knowledge distillation, for instance, guides the training of small (student) models using the intermediate features and/or soft outputs of large (teacher) models~\citep{hinton2015distilling,xu2018training,chen2018darkrank,Yim2017A,zagoruyko2017paying}. Model pruning~\citep{han2015learning, han2016deep, he2017channel} and low-rank decomposition~\citep{denton2014exploiting,lebedev2015speeding,jaderberg2014speeding,lebedev2016fast} also reduce network parameters and computation via pruning and low-rank approximation. Compact model design, on the other hand, creates a compact model from scratch~\citep{mobilenet,mobilenet_v2,shufflenet,shufflenet_v2}. While these compression techniques effectively decrease the number of parameters, the compressed models still use 32-bit floating-point numbers, which leaves scope for additional compression via model quantization/binarization. Compared with multi-bit (2-8 bit) model quantization, which compresses parameters to integers~\citep{gong2014compressing,wu2016quantized,Vanhoucke2011ImprovingTS,gupta2015deep}, binarization directly applies the sign function to compress the model to a more compact 1-bit form~\citep{rusci2020memory,choukroun2019low,ijcai2022p603,shang2022network,qin2020forward}. Additionally, due to the use of binary parameters, bitwise operations (XNOR and popcount) can be applied during inference at deployment instead of the integer multiply-add operations of 2-8 bit quantization. As a result, binarization is considered more hardware-efficient and can achieve greater speedup than multi-bit quantization.
\textbf{Selection Rules}: When creating BiBench, we considered various binarization algorithms with enhanced operator techniques in binarization research; the timeline of the considered algorithms is shown in Figure~\ref{fig:timeline}, and their details are listed in Table~\ref{tab:all_algos}. We follow two general rules when selecting algorithms for our benchmark: (1) \textbf{The selected algorithms should function on binarized operators}, which are the fundamental components of binarized networks (as discussed in Section~\ref{subsec:NetworkBinarization}). We exclude algorithms and techniques that require specific local structures or training pipelines to ensure a fair comparison. (2) \textbf{The selected algorithms should have an extensive influence to be representative}, \textit{i.e.}, be selected from widely adopted algorithms or the most advanced ones. Specifically, we chose algorithms based on the following detailed criteria to ensure representativeness and fairness in evaluations: Operator Techniques (Yes/No), Year, Conference, Citations (up to 2023/01/25), Open-source availability (Yes/No), and Specified Structure / Training-pipeline requirements (Yes/No/Optional). We analyze the techniques proposed in these works. Following the general rules mentioned above, all considered binarization algorithms should make significant contributions to the improvement of the binarized operator (Operator Techniques: Yes) and should not include techniques that are bound to specific architectures or training pipelines, so that they can complete all the evaluations of the learning task, neural architecture, and training consumption tracks in BiBench (Specified Structure / Training-pipeline: No/Optional, where Optional means such techniques are included but can be fully decoupled from the binarized operator). We also consider the impact and reproducibility of these works.
We prioritized works with more than 100 citations, which are more widely discussed and compared in binarization research and thus have higher impact. Works from 2021 onward are regarded as SOTA binarization algorithms and are also prioritized. Furthermore, we prefer works with official open-source implementations for reproducibility. Based on the above selections, eight binarization algorithms, \textit{i.e.}, BNN, XNOR-Net, DoReFa-Net, Bi-Real Net, XNOR-Net++, ReActNet, FDA, and ReCU, stand out and are fully evaluated by our BiBench. \begin{table}[t] \caption{The considered operator-level binarization algorithms and our final selections in BiBench. Bold means that the algorithm has an advantage in that column.} \label{tab:all_algos} \vspace{-0.1in} \setlength{\tabcolsep}{1.8mm}{ \begin{tabular}{llccccc} \toprule {Algorithm} & {Year} & {Conference} & {\tabincell{c}{Citation\\(2023/01/25)}} & {\tabincell{c}{Operator\\Techniques}} & {\tabincell{c}{Open\\Source}} & {\tabincell{c}{Specified Structure /\\Training-pipeline}} \\ \midrule {BitwiseNN}~\citep{kim2016bitwise} & {2016} & ICMLW & \textbf{274} & \textbf{Yes} & No & \textbf{No} \\ \rowcolor{blue!10}\textbf{DoReFa}~\citep{Dorefa-Net} & 2016 & ArXiv & \textbf{1831} & \textbf{Yes} & \textbf{Yes} & \textbf{No} \\ \rowcolor{blue!10}\textbf{XNOR-Net}~\citep{rastegari2016xnor} & 2016 & ECCV & \textbf{4474} & \textbf{Yes} & \textbf{Yes} & \textbf{No} \\ \rowcolor{blue!10}\textbf{BNN}~\citep{BinaryNet} & {2016} & NeurIPS & \textbf{2804} & \textbf{Yes} & \textbf{Yes} & \textbf{No} \\ {LBCNN}~\citep{juefei2017local} & 2017 & CVPR & \textbf{257} & \textbf{Yes} & \textbf{Yes} & Yes \\ {LAB}~\citep{hou2017loss} & 2017 & ICLR & \textbf{204} & \textbf{Yes} & \textbf{Yes} & Yes \\ {ABC-Net}~\citep{ABC-Net} & 2017 & NeurIPS & \textbf{599} & \textbf{Yes} & \textbf{Yes} & Yes \\ {DBF}~\citep{tseng2018deterministic} & 2018 & IJCAI & 10 & \textbf{Yes} & No & Yes \\ {MCNs}~\citep{wang2018modulated} & 2018 &
CVPR & 30 & \textbf{Yes} & No & Yes \\ {SBDs}~\citep{hu2018training} & 2018 & ECCV & 93 & \textbf{Yes} & No & \textbf{No} \\ \rowcolor{blue!10}\textbf{Bi-Real Net}~\citep{Bi-Real} & 2018 & ECCV & \textbf{412} & \textbf{Yes} & \textbf{Yes} & \textbf{Opt} \\ PCNN~\citep{gu2019projection} & {2019} & AAAI & 68 & \textbf{Yes} & No & Yes \\ CI-BCNN~\citep{wang2019learning} & {2019} & CVPR & 90 & \textbf{Yes} & \textbf{Yes} & Yes \\ \rowcolor{blue!10}\textbf{XNOR-Net++}~\citep{bulat2019xnor} & 2019 & BMVC & \textbf{131} & \textbf{Yes} & \textbf{Yes} & \textbf{No} \\ ProxyBNN~\citep{he2020proxybnn} & 2020 & ECCV & 16 & \textbf{Yes} & No & Yes \\ Si-BNN~\citep{wang2020sparsity} & 2020 & AAAI & 28 & \textbf{Yes} & No & \textbf{No} \\ EBNN~\citep{bulat2020high} & 2020 & ICLR & 38 & \textbf{Yes} & \textbf{Yes} & Yes \\ RBNN~\citep{lin2020rotated} & 2020 & NeurIPS & 79 & \textbf{Yes} & \textbf{Yes} & \textbf{No} \\ \rowcolor{blue!10}\textbf{ReActNet}~\citep{liu2020reactnet} & 2020 & ECCV & \textbf{182} & \textbf{Yes} & \textbf{Yes} & \textbf{Opt} \\ SA-BNN~\citep{liu2021sa} & \textbf{2021} & AAAI & 7 & \textbf{Yes} & No & \textbf{No} \\ S$^2$-BNN~\citep{shen2021s2} & \textbf{2021} & CVPR & 11 & \textbf{Yes} & \textbf{Yes} & {Yes} \\ MPT~\citep{diffenderfer2021multi} & \textbf{2021} & ICLR & 43 & \textbf{Yes} & \textbf{Yes} & Yes \\ \rowcolor{blue!10}\textbf{FDA}~\citep{xu2021learning} & \tabincell{l}{\textbf{2021}} & NeurIPS & 18 & \textbf{Yes} & \textbf{Yes} & \textbf{No} \\ \rowcolor{blue!10}\textbf{ReCU}~\citep{xu2021recu} & \textbf{2021} & ICCV & 27 & \textbf{Yes} & \textbf{Yes} & \textbf{No} \\ LCR-BNN~\citep{shang2022lipschitz} & \textbf{2022} & ECCV & {1} & \textbf{Yes} & \textbf{Yes} & Yes \\ PokeBNN~\citep{zhang2022pokebnn} & \textbf{2022} & CVPR & 6 & \textbf{Yes} & \textbf{Yes} & {Yes} \\ \bottomrule \end{tabular}} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{images/timeline.pdf} \vspace{-0.15in} \caption{Timeline of the 
operator-level binarization algorithms we have considered. The algorithms selected for BiBench are in bold, and citations are counted as of 2023/01/25.} \label{fig:timeline} \end{figure} \textbf{Algorithm Details}: \textbf{BNN}~\citep{courbariaux2016binarized}: During training, BNN uses the straight-through estimator (STE) to calculate the gradient $\boldsymbol{g_{x}}$, which takes the saturation effect into account: \begin{equation} \mathtt{sign}(\boldsymbol{x})= \begin{cases} +1,& \mathrm{if} \ \boldsymbol x \ge 0\\ -1,& \mathrm{otherwise} \end{cases}\qquad \boldsymbol{g_{x}}= \begin{cases} \boldsymbol{g_b},& \mathrm{if} \ \boldsymbol x \in \left(-1, 1\right)\\ 0,& \mathrm{otherwise}. \end{cases} \end{equation} During inference, the computation is expressed as \begin{equation} \boldsymbol o = \operatorname{sign}(\boldsymbol{a}) \circledast \operatorname{sign}(\boldsymbol{w}), \end{equation} where $\circledast$ indicates a convolution implemented with XNOR and bitcount operations. \textbf{XNOR-Net}~\citep{rastegari2016xnor}: XNOR-Net obtains the channel-wise scaling factors $\boldsymbol \alpha=\frac{\left\|\boldsymbol{w}\right\|_{\ell 1}}{n}$ for the weight, where $n$ is the number of elements in $\boldsymbol{w}$, and $\boldsymbol{K}$ contains the scaling factors $\beta$ of all sub-tensors in the activation $\boldsymbol{a}$. The convolution between activation $\boldsymbol{a}$ and weight $\boldsymbol{w}$ can then be approximated mainly with binary operations: \begin{equation} \label{eq:xnor-net} \boldsymbol o = (\operatorname{sign}(\boldsymbol{a}) \circledast \operatorname{sign}(\boldsymbol{w})) \odot \boldsymbol{K} \boldsymbol \alpha, \end{equation} where $\boldsymbol{w} \in \mathbb{R}^{c \times w \times h}$ and $\boldsymbol{a} \in \mathbb{R}^{c \times w_{\text {in }} \times h_{\text {in }}}$ denote the weight and input tensors, respectively. The STE is also applied in the backward propagation during training.
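As a concrete illustration, the sign forward pass, the STE backward mask, and an XNOR-Net-style channel-wise scale can be sketched in a few lines of NumPy (a toy dense-layer version, not a real XNOR/bitcount convolution kernel; the function names are ours):

```python
import numpy as np

def binarize(x):
    # Forward: sign(x), with sign(0) mapped to +1 as in BNN
    return np.where(x >= 0, 1.0, -1.0)

def ste_backward(x, grad_out):
    # Backward: the straight-through estimator passes the gradient
    # only where the input lies inside the clipping range (-1, 1)
    return grad_out * (np.abs(x) < 1)

def xnor_linear(a, w):
    # XNOR-Net-style binary product with a per-output-channel scale
    # alpha = mean(|w|); a deployed kernel would use XNOR + bitcount
    alpha = np.abs(w).mean(axis=1, keepdims=True)
    return (binarize(w) @ binarize(a)) * alpha
```

The scale `alpha` is what recovers most of the magnitude information lost by binarization, which is why XNOR-Net improves over plain BNN at negligible extra cost.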
\textbf{DoReFa-Net}~\citep{Dorefa-Net}: DoReFa-Net applies the following forward computation for $1$-bit weights and activations: \begin{equation} \label{eq:dorefa} \boldsymbol o = (\operatorname{sign}(\boldsymbol{a}) \circledast \operatorname{sign}(\boldsymbol{w})) \odot \boldsymbol \alpha. \end{equation} The STE is also applied in the backward propagation with the full-precision gradient. \textbf{Bi-Real Net}~\citep{liu2018bi}: Bi-Real Net proposes a piecewise polynomial function as the gradient approximation function: \begin{equation} \operatorname{bireal}\left(\boldsymbol{a}\right)=\left\{\begin{array}{lr} -1 & \text { if } \boldsymbol{a}<-1 \\ 2 \boldsymbol{a}+\boldsymbol{a}^2 & \text { if }-1 \leqslant \boldsymbol{a}<0 \\ 2 \boldsymbol{a}-\boldsymbol{a}^2 & \text { if } 0 \leqslant \boldsymbol{a}<1 \\ 1 & \text { otherwise } \end{array}, \quad \frac{\partial \operatorname{bireal}\left(\boldsymbol{a}\right)}{\partial \boldsymbol{a}}= \begin{cases}2+2 \boldsymbol{a} & \text { if }-1 \leqslant \boldsymbol{a}<0 \\ 2-2 \boldsymbol{a} & \text { if } 0 \leqslant \boldsymbol{a}<1 \\ 0 & \text { otherwise }\end{cases}\right. . \end{equation} The forward propagation of Bi-Real Net is the same as Eq.~(\ref{eq:dorefa}).
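The piecewise definitions above translate directly into code; a NumPy sketch (our naming) of Bi-Real Net's approximation of sign and its derivative:

```python
import numpy as np

def bireal(a):
    # Piecewise polynomial approximation of sign(a)
    return np.where(a < -1, -1.0,
           np.where(a < 0, 2 * a + a ** 2,
           np.where(a < 1, 2 * a - a ** 2, 1.0)))

def bireal_grad(a):
    # Its derivative: 2 + 2a on [-1, 0), 2 - 2a on [0, 1), 0 elsewhere
    g = np.zeros_like(a, dtype=float)
    neg = (a >= -1) & (a < 0)
    pos = (a >= 0) & (a < 1)
    g[neg] = 2 + 2 * a[neg]
    g[pos] = 2 - 2 * a[pos]
    return g
```

Unlike the STE's hard window, this gradient shrinks smoothly to zero at the interval edges, which is the closer match to the derivative of the binarization function that Bi-Real Net argues for.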
\textbf{XNOR-Net++}~\citep{bulat2019xnor}: XNOR-Net++ proposes to re-formulate Eq.~(\ref{eq:xnor-net}) as: \begin{equation} \boldsymbol{o} = (\operatorname{sign}(\boldsymbol{a}) \circledast \operatorname{sign}(\boldsymbol{w})) \odot \boldsymbol \Gamma, \end{equation} and in our experiments we adopt $\boldsymbol \Gamma$ in the following form (which achieves the best performance in the original paper): \begin{equation} \boldsymbol \Gamma=\boldsymbol \alpha \otimes \boldsymbol \beta \otimes \boldsymbol \gamma, \quad \boldsymbol \alpha \in \mathbb{R}^{\boldsymbol{o}}, \boldsymbol \beta \in \mathbb{R}^{h_{\text {out }}}, \boldsymbol \gamma \in \mathbb{R}^{w_{\text {out }}}, \end{equation} where $\boldsymbol \alpha$, $\boldsymbol \beta$, and $\boldsymbol \gamma$ are learnable during training. \textbf{ReActNet}~\citep{liu2020reactnet}: ReActNet defines RSign as a binarization function with channel-wise learnable thresholds: \begin{equation} \boldsymbol{x}=\operatorname{rsign}\left(\boldsymbol{x}\right)=\left\{\begin{array}{ll} +1, & \text { if } \boldsymbol{x}>\boldsymbol \alpha \\ -1, & \text { if } \boldsymbol{x} \leq \boldsymbol \alpha \end{array} ,\right. \end{equation} where $\boldsymbol \alpha$ is a learnable coefficient controlling the threshold. The forward propagation is \begin{equation} \boldsymbol o = (\operatorname{rsign}(\boldsymbol{a}) \circledast \operatorname{sign}(\boldsymbol{w})) \odot \boldsymbol \alpha. \end{equation} \textbf{ReCU}~\citep{xu2021recu}: As described in the original paper, ReCU is formulated as \begin{equation} \operatorname{recu}(\boldsymbol{w})=\max \left(\min \left(\boldsymbol{w}, Q_{(\tau)}\right), Q_{(1-\tau)}\right), \end{equation} where $Q_{(\tau)}$ and $Q_{(1-\tau)}$ denote the $\tau$ and $1-\tau$ quantiles of $\boldsymbol{w}$, respectively. All other implementation details strictly follow the original paper and official code.
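For reference, RSign thresholding and the ReCU quantile clamp can be sketched as follows (NumPy toy versions under our naming; the default $\tau$ is illustrative, not necessarily the value used in BiBench):

```python
import numpy as np

def rsign(x, alpha):
    # ReActNet's RSign: binarize against a learnable threshold alpha
    return np.where(x > alpha, 1.0, -1.0)

def recu(w, tau=0.99):
    # ReCU: clamp weights into the [Q_(1-tau), Q_(tau)] quantile range
    # (assumes tau > 0.5) before binarization, reviving "dead" weights
    lo, hi = np.quantile(w, 1 - tau), np.quantile(w, tau)
    return np.clip(w, lo, hi)
```

Shifting the threshold in `rsign` lets each channel rebalance how many activations map to $+1$ versus $-1$, which is the distribution-reshaping effect ReActNet attributes its gains to.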
\textbf{FDA}~\citep{xu2021learning}: FDA computes the gradient in the backward propagation as: \begin{equation} \frac{\partial \ell}{\partial \mathbf{t}}=\frac{\partial \ell}{\partial \boldsymbol{o}} \boldsymbol{w}_2^{\top} \odot\left(\left(\mathbf{t} \boldsymbol{w}_1\right) \geq 0\right) \boldsymbol{w}_1^{\top} +\frac{\partial \ell}{\partial \boldsymbol{o}} \eta^{\prime}(\mathbf{t}) +\frac{\partial \ell}{\partial \boldsymbol{o}} \odot \frac{4 \omega}{\pi} \sum_{i=0}^n \cos ((2 i+1) \omega \mathbf{t}), \end{equation} where $\frac{\partial \ell}{\partial \boldsymbol{o}}$ is the gradient from the upper layers, $\odot$ represents element-wise multiplication, and $\frac{\partial \ell}{\partial \mathbf{t}}$ is the partial gradient on $\mathbf{t}$ that is propagated backward to the preceding layer. $\boldsymbol{w}_1$ and $\boldsymbol{w}_2$ are the weights of the original model and the noise adaptation module, respectively, which FDA updates as \begin{equation} \frac{\partial \ell}{\partial \boldsymbol{w}_1}=\mathbf{t}^{\top} \frac{\partial \ell}{\partial \boldsymbol{o}} \boldsymbol{w}_2^{\top} \odot\left(\left(\mathbf{t} \boldsymbol{w}_1\right) \geq 0\right),\qquad \frac{\partial \ell}{\partial \boldsymbol{w}_2}=\sigma\left(\mathbf{t} \boldsymbol{w}_1\right)^{\top} \frac{\partial \ell}{\partial \boldsymbol{o}}. \end{equation} In Table~\ref{tab:algos_detail}, we detail the techniques included in the selected binarization algorithms.
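The Fourier-series view behind FDA's cosine term can be sketched numerically: $\operatorname{sign}(t)$ is approximated by a truncated square-wave series whose derivative is exactly the $\frac{4\omega}{\pi}\sum_{i=0}^{n}\cos((2i+1)\omega \mathbf{t})$ factor above (NumPy toy under our naming; $\omega$ and $n$ are illustrative values):

```python
import numpy as np

def fda_sign_series(t, omega=1.0, n=10):
    # Truncated Fourier series of the square wave used by FDA
    # to approximate sign(t): (4/pi) * sum sin((2i+1)*omega*t)/(2i+1)
    k = 2 * np.arange(n + 1) + 1
    return (4 / np.pi) * np.sum(np.sin(np.outer(t, k) * omega) / k, axis=1)

def fda_sign_series_grad(t, omega=1.0, n=10):
    # Its derivative, the (4*omega/pi) * sum cos((2i+1)*omega*t)
    # term appearing in FDA's backward pass
    k = 2 * np.arange(n + 1) + 1
    return (4 * omega / np.pi) * np.sum(np.cos(np.outer(t, k) * omega), axis=1)
```

Because the series is smooth and differentiable everywhere, it supplies an informative gradient where the true sign function would give zero.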
\begin{table}[!ht] \centering \caption{The details of the selected binarization algorithms in BiBench.} \label{tab:algos_detail} \vspace{-0.1in} \setlength{\tabcolsep}{2.5mm}{ \begin{threeparttable} \begin{tabular}{lcccccc} \toprule \multirow{2}{*}{Algorithm} & \multicolumn{2}{c}{Scaling Factor} & \multicolumn{2}{c}{Parameter Redistribution} & \multicolumn{2}{c}{Gradient Approximation} \\ ~ & weight & activation & weight & activation & weight & activation \\ \midrule BNN & w/o & w/o & w/o & w/o & STE & STE \\ \midrule XNOR & \tabincell{c}{Statistics\\by Channel} & \tabincell{c}{Statistics\\by Channel} & w/o & w/o & STE & STE \\ \midrule DoReFa & \tabincell{c}{Statistics\\by Layer} & w/o & w/o & w/o & STE & STE \\ \midrule Bi-Real & \tabincell{c}{Statistics\\by Channel} & w/o & w/o & w/o & STE & \tabincell{c}{Differentiable\\Piecewise\\Polynomial\\Function} \\ \midrule XNOR++ & \tabincell{c}{Learned by\\Custom-size\\($o\times h_\text{out} \times w_\text{out}$)} & w/o & w/o & w/o & STE & STE \\ \midrule ReActNet & \tabincell{c}{Statistics\\by Channel} & w/o & w/o & w/o & STE & \tabincell{c}{Differentiable\\Piecewise\\Polynomial\\Function} \\ \midrule ReCU & \tabincell{c}{Statistics\\by Channel} & w/o & \tabincell{c}{balancing\\(mean-shifting)} & w/o & \tabincell{c}{Rectified\\Clamp Unit} & \tabincell{c}{Rectified\\Clamp Unit} \\ \midrule FDA & \tabincell{c}{Statistics\\by Channel} & w/o & w/o & mean-shifting & \tabincell{c}{Decomposing\\Sign with\\Fourier Series} & \tabincell{c}{Decomposing\\Sign with\\Fourier Series} \\ \bottomrule \end{tabular} \begin{tablenotes} \item[1] ``STE" indicates the Straight Through Estimator, and ``w/o" means no special technique is used. \end{tablenotes} \end{threeparttable}} \end{table} \subsection{Details of Learning Tasks} \label{app:task} \textbf{Selection Rules}: To evaluate the performance of the binarization algorithm in a wide range of learning tasks, we must select a variety of representative tasks. 
First, we choose representative perception modalities in deep learning: (2D/3D) vision, text, and speech. These modalities have seen rapid progress and broad impact, so we select specific tasks and datasets within them. Specifically, (1) in the 2D vision modality, we select the essential image classification task and the prevalent object detection task, with CIFAR10 and ImageNet as datasets for the former and Pascal VOC and COCO for the latter; ImageNet and COCO are more challenging and significant, while CIFAR10 and Pascal VOC are more fundamental. For the other modalities, binarization remains challenging even on the field's fundamental tasks and datasets, since related binarization studies are few: (2) in the 3D vision modality, we select the basic ModelNet40 point cloud classification dataset to evaluate binarization performance; point cloud classification is regarded as one of the most fundamental and widely studied tasks in 3D point cloud research. (3) In the text modality, we adopt the General Language Understanding Evaluation (GLUE) benchmark, usually recognized as the most popular suite, which includes nine sentence- or sentence-pair language understanding tasks. (4) In the speech modality, we choose keyword spotting as the base task, specifically the Google Speech Commands classification dataset. Based on the above rules, we have selected a series of challenging and representative tasks for BiBench to evaluate binarization comprehensively, from which we obtain a series of reliable and informative conclusions. \textbf{Dataset Details:} \textbf{CIFAR10}~\citep{CIFAR}: The CIFAR-10 dataset (Canadian Institute For Advanced Research) is a collection of images commonly used to train machine learning and computer vision algorithms, and it is widely used for image classification. It contains 60,000 color images, each measuring 32$\times$32 pixels.
All images are categorized into 10 classes: airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. Each class has 6,000 images, of which 5,000 are for training and 1,000 for testing. The evaluation metric of the CIFAR-10 dataset is accuracy, defined as: \begin{equation} Accuracy = \frac{TP+TN}{TP+TN+FP+FN}, \end{equation} where \textit{TP} (True Positive) means cases correctly identified as positive, \textit{TN} (True Negative) means cases correctly identified as negative, \textit{FP} (False Positive) means cases incorrectly identified as positive, and \textit{FN} (False Negative) means cases incorrectly identified as negative. Accuracy is thus the proportion of correctly classified cases (\textit{TP} and \textit{TN}) among all evaluated cases. \textbf{ImageNet}~\citep{krizhevsky2012imagenet}: ImageNet is a dataset of over 15 million labeled high-resolution images belonging to roughly 22,000 categories. The images were collected from the web and labeled by human labelers using Amazon Mechanical Turk, a crowd-sourced image labeling service. The ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) was established in 2010 as part of the Pascal Visual Object Challenge. ILSVRC uses a subset of ImageNet, with about 1,000 images in each of its 1,000 categories; in total, there are approximately 1.2 million training images, 50,000 validation images, and 150,000 testing images. ImageNet also uses accuracy, as defined above, to evaluate predictions. \textbf{Pascal VOC07}~\citep{hoiem2009pascal}: The PASCAL Visual Object Classes 2007 (VOC07) dataset contains 20 object categories spanning vehicles, household objects, animals, and others: airplane, bicycle, boat, bus, car, motorbike, train, bottle, chair, dining table, potted plant, sofa, TV/monitor, bird, cat, cow, dog, horse, sheep, and person.
As a benchmark for object detection, semantic segmentation, and object classification, this dataset contains pixel-level segmentation annotations, bounding box annotations, and object class annotations. The VOC07 dataset uses mean average precision ($mAP$) to evaluate results, defined as: \begin{equation} mAP = \frac{1}{n} \sum_{k=1}^{k=n} AP_k \end{equation} where $AP_k$ denotes the average precision of the $k$-th category, computed as the area under the precision-recall curve: \begin{equation} AP_k = \int^1_0 p_k(r)dr. \end{equation} Specifically for VOC07, we apply the 11-point interpolated $AP$, which divides the recall axis into $\{ 0.0, 0.1, \dots, 1.0 \}$ and averages the interpolated precision values at these 11 recall levels: \begin{align} AP = \frac{1}{11} \sum_{r \in \{ 0.0, \dots, 1.0 \}} p_{\textrm{interp}}(r). \end{align} The interpolated precision at recall level $r$ is the maximum precision attained at any recall to the right of $r$: \begin{equation} p_{\textrm{interp}}(r) = \max_{\tilde{r} \geq r} p(\tilde{r}). \end{equation} \textbf{COCO17}~\citep{lin2014microsoft}: The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset consisting of 328K images. Following community feedback, the 2017 release changed the training/validation split from 83K/41K to 118K/5K; the images and annotations themselves are unchanged. The 2017 test set is a subset of 41K images from the 2015 test set, and an additional 123K images are included in the unannotated set. The COCO17 dataset also uses mean average precision ($mAP$), as defined above for PASCAL VOC07. \textbf{ModelNet40}~\citep{wu20153d}: The ModelNet40 dataset contains point clouds of synthetic objects.
As the most widely used benchmark for point cloud analysis, ModelNet40 is popular due to its diverse categories, clean shapes, and well-constructed dataset. In the original ModelNet40, 12,311 CAD-generated meshes are divided into 40 categories, of which 9,843 are for training and 2,468 for testing. Point clouds are uniformly sampled from the mesh surfaces, then moved to the origin and scaled into a unit sphere. The ModelNet40 dataset also uses accuracy as its metric, as defined above for CIFAR10. \textbf{ShapeNet}~\citep{chang2015shapenet}: ShapeNet is a large-scale repository of 3D CAD models developed by researchers from Stanford University, Princeton University, and the Toyota Technological Institute at Chicago, USA. Organized by WordNet hypernym-hyponym relationships, the repository contains over 3 million models, with 220,000 of them classified into 3,135 classes. There are 31,693 meshes in the ShapeNet Parts subset, divided into 16 object categories (\textit{e.g.}, tables, chairs, planes). Each shape contains 2-5 parts (with 50 part classes in total). \textbf{GLUE}~\citep{wang2018glue}: The General Language Understanding Evaluation (GLUE) benchmark is a collection of nine natural language understanding tasks, including the single-sentence tasks CoLA and SST-2, the similarity and paraphrasing tasks MRPC, STS-B, and QQP, and the natural language inference tasks MNLI, QNLI, RTE, and WNLI. Among them, SST-2, MRPC, QQP, MNLI, QNLI, RTE, and WNLI use accuracy as their metric, as defined for CIFAR10. CoLA is measured by the Matthews Correlation Coefficient (MCC), which is better suited to binary classification when the numbers of positive and negative samples are extremely unbalanced: \begin{equation} MCC = \frac{TP \times TN - FP \times FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}.
\end{equation} And STS-B is measured by the Pearson and Spearman correlation coefficients: \begin{equation} r_{\textit{Pearson}} = \frac{1}{n-1} \sum^n_{i=1} \left( \frac{X_i - \bar{X}}{s_X} \right) \left( \frac{Y_i - \bar{Y}}{s_Y} \right), \quad r_{\textit{Spearman}} = 1-\frac{6\sum d^2_i}{n(n^2-1)}, \end{equation} where $n$ is the number of observations, $s_X$ and $s_Y$ are the sample standard deviations of $X$ and $Y$, respectively, and $d_i$ is the difference between the ranks of corresponding observations. \textbf{SpeechCom.}~\citep{warden2018speech}: SpeechCom provides a collection of audio recordings of spoken words for training and evaluation. Its primary goal is to support building and testing small models that detect when a single word from a set of ten target words is spoken, while producing as few false positives as possible on background noise or unrelated speech. The accuracy metric for SpeechCom is the same as for CIFAR10. \textbf{CIFAR10-C}~\citep{hendrycks2018benchmarking}: CIFAR10-C is a dataset generated by applying 15 common corruptions and 4 extra corruptions to the test images of the CIFAR10 dataset. It benchmarks the fragility of classifiers under corruption, covering noise, blur, weather, and digital effects. Each type of corruption has five levels of severity, resulting in 75 distinct corruptions. We report the accuracy of classifiers under each corruption and each severity level, and we use the mean and relative corruption errors as metrics. Denote the error rate of $\textit{Network}$ under $\textit{Settings}$ as $E^{\textit{Network}}_{\textit{Settings}}$; a classifier's performance is aggregated across the five severities of each corruption type.
The Corruption Error for a given corruption type $\textit{Corruption}$ is computed with the formula: \begin{equation} CE^{\textit{Network}}_{\textit{Corruption}} = \sum^5_{s=1} E^{\textit{Network}}_{s, \textit{Corruption}} / \sum^5_{s=1} E^{\textit{AlexNet}}_{s, \textit{Corruption}}. \end{equation} To make Corruption Errors comparable across corruption types, the difficulty is adjusted by dividing by AlexNet's errors. \subsection{Details of Neural Architectures} \label{app:arch} \textbf{ResNet}~\citep{he2016deep}: Residual Networks, or ResNets, learn residual functions with reference to the layer inputs instead of learning unreferenced functions. Instead of making stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. There is empirical evidence that these networks are easier to optimize and can gain accuracy from considerably increased depth. \textbf{VGG}~\citep{simonyan2015very}: VGG is a classical convolutional neural network architecture, proposed through an analysis of how to increase the depth of such networks. It is characterized by its simplicity: the network utilizes small 3$\times$3 filters, and the only other components are pooling layers and a fully connected layer. \textbf{MobileNetV2}~\citep{mobilenet_v2}: MobileNetV2 is a convolutional neural network architecture that performs well on mobile devices. The model has an inverted residual structure with residual connections between the bottleneck layers, and its intermediate expansion layer employs lightweight depthwise convolutions to filter features as a source of nonlinearity. The architecture begins with an initial layer of 32 convolution filters, followed by 19 residual bottleneck layers. \textbf{Faster-RCNN}~\citep{NIPS2015_5638}: Faster R-CNN is an object detection model that improves on Fast R-CNN by combining a region proposal network (RPN) with the CNN model.
The RPN shares full-image convolutional features with the detection network, enabling nearly cost-free region proposals. A fully convolutional network simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to produce high-quality region proposals, which tell the unified network where to look. Sharing their convolutional features allows the RPN and Fast R-CNN to be combined into a single network. Faster R-CNN thus consists of two modules: the first is a deep, fully convolutional network that proposes regions, and the second is the detector that uses the proposals to produce the final prediction boxes. \textbf{SSD}~\citep{liu2016ssd}: SSD is a single-stage object detection method that discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. During prediction, each default box is adjusted to better match the shape of the object based on its scores for each object category. In addition, the network naturally handles objects of different sizes by combining predictions from multiple feature maps with different resolutions. \textbf{BERT}~\citep{kenton2019bert}: BERT, or Bidirectional Encoder Representations from Transformers, improves upon standard Transformers by removing the unidirectionality constraint through a masked language model (MLM) pre-training objective. By masking some tokens of the input, the masked language model attempts to predict the original vocabulary id of each masked word based solely on its context. Unlike a left-to-right language model, the MLM objective enables the representation to integrate both the left and right contexts, which facilitates pre-training a deep bidirectional Transformer. Additionally, BERT uses a next-sentence prediction task that pre-trains text-pair representations alongside the masked language model.
Note that we replace the directly binarized attention with a bi-attention mechanism to prevent the model from completely crashing~\citep{qin2021bibert}. \textbf{PointNet}~\citep{qi2017pointnet}: PointNet is a unified architecture for applications ranging from object classification and part segmentation to scene semantic parsing. The architecture directly takes point clouds as input and outputs either class labels for the entire input or per-point segment/part labels. \textbf{PointNet-Vanilla} is a variant of PointNet that drops the T-Net module. For all PointNet models, we apply EMA-Max~\citep{qin2020bipointnet} as the aggregator, because directly following the max-pooling aggregator causes the binarized PointNets to fail to converge. \textbf{FSMN}~\citep{zhang2015feedforward}: The feedforward sequential memory network (FSMN) is a novel neural network structure for modeling long-term dependency in time series without using recurrent feedback. It is a standard fully connected feedforward neural network containing learnable memory blocks. As a short-term memory mechanism, the memory blocks encode long context information through a tapped-delay-line structure. \textbf{Deep-FSMN}~\citep{zhang2018deep}: The Deep-FSMN architecture is an improved feedforward sequential memory network (FSMN) with skip connections between memory blocks in adjacent layers. The skip connections allow information to be transferred across layers, so the gradient vanishing problem can be avoided when building very deep structures. \subsection{Details of Hardware} \label{app:hardware} \textbf{Hisilicon Kirin}~\citep{kirin}: Kirin is a series of ARM-based systems-on-a-chip (SoCs) produced by HiSilicon. Their products include Kirin 970, Kirin 980, Kirin 985, \textit{etc.} \textbf{MediaTek Dimensity}~\citep{mediatek}: Dimensity is a series of ARM-based systems-on-a-chip (SoCs) produced by MediaTek.
Their products include Dimensity 820, Dimensity 8100, Dimensity 9000, \textit{etc.} \textbf{Qualcomm Snapdragon}~\citep{singh2014evolution}: Snapdragon is a family of mobile system-on-a-chip (SoC) processor architectures provided by Qualcomm. The original Snapdragon CPU core, Scorpion, was similar to the ARM Cortex-A8 core based upon the ARMv7 instruction set, but was enhanced with SIMD operations for higher performance. Later Qualcomm Snapdragon processors are based on the Krait architecture and are equipped with an integrated LTE modem, providing seamless connectivity across 2G, 3G, and LTE networks. \textbf{Raspberry Pi}~\citep{raspberrypi}: Raspberry Pi is a series of small single-board computers (SBCs) developed in the United Kingdom by the Raspberry Pi Foundation in association with Broadcom. Raspberry Pi was originally designed to promote the teaching of basic computer science in schools and in developing countries. As a result of its low cost, modularity, and open design, it is used in many applications, such as weather monitoring, and is sold outside the intended market. It is widely used by computer hobbyists and electronics enthusiasts due to its adoption of the HDMI and USB standards. \textbf{Apple M1}~\citep{applem1}: Apple M1 is a series of ARM-based systems-on-a-chip (SoCs) designed by Apple Inc. as central processing units (CPUs) and graphics processing units (GPUs) for Macintosh desktops and notebooks, as well as iPad Pro and iPad Air tablets. Apple introduced the M1 chip in November 2020, followed by the professional-oriented M1 Pro and M1 Max chips in 2021, and launched the M1 Ultra, which combines two M1 Max chips in a single package, in 2022. The M1 Max is a higher-performance version of the M1 Pro, with more GPU cores and higher memory bandwidth.
\section{Full Results} \label{sec:FullResults} \subsection{Evaluation Results of All Tracks} Tables~\ref{tab:appendix-vision-acc}-\ref{tab:appendix-shapenet-acc} show the accuracy of different binarization algorithms on 2D and 3D vision tasks: CIFAR10, ImageNet, PASCAL VOC07, and COCO14 for 2D vision, and ModelNet40 for 3D vision. For each task, we cover several representative model architectures and binarize them with the binarization algorithms mentioned above. We also evaluate binarization algorithms on language and speech tasks, testing TinyBERT (4 layers and 6 layers) on the GLUE benchmark and FSMN and D-FSMN on the SpeechCommand dataset; results are listed in Table~\ref{tab:appendix-language-acc}. To evaluate the corruption robustness of binarized models, we show results on the CIFAR10-C benchmark, which measures robustness to common perturbations, in Table~\ref{tab:appendix-corr-acc} and Table~\ref{tab:appendix-corr-acc2}. It includes 15 kinds of noise, blur, weather, and digital corruption, each with five levels of severity. The sensitivity to hyperparameters during training is shown in Tables~\ref{tab:appendix-sensitivity-acc1}-\ref{tab:appendix-sensitivity-acc2}. For each binarization algorithm, we use the SGD or Adam optimizer, 1$\times$ or 0.1$\times$ of the original learning rate, a cosine or step learning-rate scheduler, and 200 training epochs. Each case is run five times to assess training stability, and we report the mean and standard deviation (std) of accuracy; the best accuracy and the lowest std for each binarization algorithm are bolded. We conduct comprehensive deployment and inference on various kinds of hardware, including the Kirin series (970, 980, 985, 990, and 9000E), the Dimensity series (820 and 9000), the Snapdragon series (855+, 870, and 888), Raspberry Pi (3B+ and 4B), and the Apple M1 series (M1 and M1 Max).
Limited by framework support, we can only test BNN and ReActNet with the Larq compute engine and only BNN with daBNN. We convert models to enable actual inference on real hardware: ResNet18/34 and VGG-Small on Larq, and only ResNet18/34 on daBNN. We test 1, 2, 4, and 8 threads for each device, and additionally 16 threads for Apple Silicon on Larq; daBNN only supports single-threaded inference. Results are showcased in Table~\ref{app:tab:eff1}-\ref{app:tab:eff4}. \begin{table}[t] \renewcommand\arraystretch{2.7} \centering \caption{Accuracy on 2D and 3D Vision Tasks.} \label{tab:appendix-vision-acc} \vspace{-0.1in} \setlength{\tabcolsep}{1.5mm}{ \begin{tabular}{lllccccccccccc} \toprule \multirow{2}{*}{Task} & & \multirow{2}{*}{Arch.} & \multirow{2}{*}{FP32} & \multicolumn{8}{c}{Binarization Algorithm} \\ & & & & {BNN} & {XNOR} & {DoReFa} & {Bi-Real} & {XNOR++} & {ReActNet} & {ReCU} & {FDA} \\ \midrule \multirow{4}{*}{CIFAR10} & & ResNet20 & 91.99 & 85.31 & 85.53 & 85.18 & 85.56 & 85.41 & 86.18 & 86.42 & 86.38 \\ & & ResNet18 & 94.82 & 89.69 & 91.4 & 91.55 & 91.20 & 90.04 & 91.55 & 92.79 & 90.42 \\ & & ResNet34 & 95.34 & 90.82 & 89.58 & 90.95 & 92.50 & 90.59 & 92.69 & 93.64 & 89.59 \\ & & VGG-Small & 93.80 & 89.66 & 89.65 & 89.66 & 90.25 & 89.34 & 90.27 & 90.84 & 89.48 \\ \midrule {ImageNet} & & ResNet18 & 69.90 & 52.99 & 53.99 & 53.55 & 54.79 & 52.43 & 54.97 & 54.51 & 54.63 \\ \midrule \multirow{2}{*}{VOC07} & & Faster-RCNN & 76.06 & 58.54 & 56.75 & 58.07 & 60.90 & 56.60 & 61.90 & 62.10 & 60.10 \\ & & SSD300 & 77.34 & 9.09 & 33.72 & 30.70 & 31.90 & 9.41 & 38.41 & 9.80 & 43.68 \\ \midrule {COCO14} & & Faster-RCNN & 27.20 & 21.20 & 20.50 & 21.30 & 22.20 & 21.60 & 22.80 & 23.30 & 22.40 \\ \midrule \multirow{2}{*}{ModelNet40} & & PointNet$_\textrm{vanilla}$ & 86.80 & 85.13 & 83.47 & 85.21 & 85.37 & 85.66 & 85.13 & 85.21 & 85.49\\ & & PointNet & 88.20 & 9.08 & 80.75 & 78.77 & 77.71 & 63.25 & 76.50 & 81.12 & 79.62 \\ \bottomrule \end{tabular}}
\end{table} \begin{table}[t] \renewcommand\arraystretch{2.7} \centering \caption{Accuracy on ShapeNet dataset.} \label{tab:appendix-shapenet-acc} \vspace{-0.1in} \setlength{\tabcolsep}{1.mm} \begin{tabular}{llllccccccccccc} \toprule \multirow{2}{*}{Task} & \multirow{2}{*}{Arch.} & \multirow{2}{*}{Category} & \multirow{2}{*}{FP32} & \multicolumn{8}{c}{Binarization Algorithm} \\ & & & & {BNN} & {XNOR} & {DoReFa} & {Bi-Real} & {XNOR++} & {ReActNet} & {ReCU} & {FDA} \\ \midrule \multirow{17}{*}{ShapeNet} & \multirow{17}{*}{PointNet} & Airplane & 83.7 & 37.5 & 74.14 & 67.2 & 67.61 & 30.36 & 66.12 & 31.61 & 65.34 \\ ~ & ~ & Bag & 79.6 & 44.2 & 49 & 55.34 & 47.11 & 37.44 & 50.28 & 38.58 & 48.62 \\ ~ & ~ & Cap & 92.3 & 44.3 & 73.32 & 51.21 & 61.41 & 40.37 & 56.73 & 40.13 & 56 \\ ~ & ~ & Car & 76.8 & 24.3 & 55.27 & 52.24 & 49.39 & 24.07 & 49.11 & 23.92 & 58.5 \\ ~ & ~ & Chair & 90.9 & 61.6 & 85.62 & 83.96 & 83.6 & 41.89 & 83.83 & 41.5 & 83.27 \\ ~ & ~ & Earphone & 70.2 & 38.5 & 30.97 & 34.94 & 35.24 & 26.3 & 36.72 & 23.01 & 34.46 \\ ~ & ~ & Guitar & 91.1 & 32.9 & 69.17 & 67.9 & 65.99 & 23.45 & 64.18 & 28.38 & 78.69 \\ ~ & ~ & Knife & 85.7 & 43 & 78.16 & 76.16 & 75.53 & 37.62 & 75.01 & 38.81 & 77.07 \\ ~ & ~ & Lamp & 82 & 51.2 & 69 & 68.75 & 60 & 49.45 & 66.13 & 48.41 & 67.45 \\ ~ & ~ & Laptop & 95.5 & 49.4 & 93.29 & 92.93 & 92.79 & 41.89 & 92.93 & 42.28 & 93.66 \\ ~ & ~ & Motorbike & 64.4 & 16.3 & 19.04 & 18.88 & 18.69 & 13.18 & 18.59 & 11.26 & 20.38 \\ ~ & ~ & Mug & 93.6 & 49.1 & 64.32 & 53.56 & 52.01 & 47.58 & 52.51 & 46.83 & 53.48 \\ ~ & ~ & Pistol & 80.8 & 25.5 & 62.29 & 59.15 & 51.43 & 26.96 & 53.79 & 27.81 & 62.61 \\ ~ & ~ & Rocket & 54.4 & 26.9 & 30.95 & 27.92 & 26.61 & 22.38 & 26.01 & 19.32 & 23.08 \\ ~ & ~ & Skateboard & 70.7 & 41.2 & 45.7 & 50.15 & 43.78 & 28.63 & 43.74 & 26.71 & 45.81 \\ ~ & ~ & Table & 81.4 & 51.3 & 73.68 & 72.69 & 69.72 & 45.74 & 69.56 & 45.21 & 73.45 \\ ~ & ~ & Overall & 80.81875 & 39.82 & 60.87 & 58.31 & 56.31 & 33.58 & 56.58 & 33.36 & 58.68 
\\ \bottomrule \end{tabular}} \end{table} \begin{table}[h] \renewcommand\arraystretch{1.5} \centering \caption{Accuracy on Language and Speech Tasks.} \label{tab:appendix-language-acc} \vspace{-0.1in} \setlength{\tabcolsep}{1.2mm} \resizebox{\linewidth}{!}{\begin{tabular}{lllcccccccccccc} \toprule \multicolumn{2}{l}{\multirow{2}{*}{Task}} & \multirow{2}{*}{Arch.} & \multirow{2}{*}{FP32} & \multicolumn{8}{c}{Binarization Algorithm} \\ & & & & {BNN} & {XNOR} & {DoReFa} & {Bi-Real} & {XNOR++} & {ReActNet} & {ReCU} & {FDA} \\ \midrule \multirow{27}{*}{GLUE} & \multirow{3}{*}{MNLI-m} & BERT-Tiny$_\textrm{4L}$ & 82.81 & 36.90 & 41.20 & 52.31 & 55.09 & 37.27 & 55.52 & 38.55 & 59.41 \\ & & BERT-Tiny$_\textrm{6L}$ & 84.76 & 37.01 & 51.17 & 63.09 & 66.81 & 37.98 & 66.47 & 37.95 & 68.46 \\ & & BERT-Base & 84.88 & 35.45 & 41.40 & 60.67 & 62.47 & 35.45 & 60.22 & 35.45 & 63.49 \\ \cmidrule{2-15} & \multirow{3}{*}{MNLI-mm} & BERT-Tiny$_\textrm{4L}$ & 83.08 & 36.54 & 41.55 & 53.01 & 55.57 & 36.07 & 55.89 & 37.62 & 59.76 \\ & & BERT-Tiny$_\textrm{6L}$ & 84.42 & 36.47 & 50.92 & 63.87 & 66.82 & 38.11 & 67.64 & 36.91 & 68.98 \\ & & BERT-Base & 85.45 & 35.22 & 41.18 & 60.96 & 63.17 & 35.22 & 61.19 & 35.22 & 63.72 \\ \cmidrule{2-15} & \multirow{3}{*}{QQP} & BERT-Tiny$_\textrm{4L}$ & 90.47 & 66.19 & 73.69 & 75.79 & 77.38 & 64.97 & 76.92 & 67.32 & 78.92\\ & & BERT-Tiny$_\textrm{6L}$ & 85.98 & 63.18 & 78.90 & 80.93 & 82.42 & 63.19 & 82.95 & 63.3 & 83.19\\ & & BERT-Base & 91.51 & 63.18 & 71.93 & 77.07 & 80.01 & 63.18 & 81.16 & 63.18 & 83.26 \\ \cmidrule{2-15} & \multirow{3}{*}{QNLI} & BERT-Tiny$_\textrm{4L}$ & 87.46 & 51.71 & 60.59 & 61.15 & 61.92 & 52.79 & 62.67 & 53.99 & 62.29 \\ & & BERT-Tiny$_\textrm{6L}$ & 90.79 & 52.22 & 62.75 & 66.88 & 69.72 & 51.84 & 70.27 & 51.32 & 72.72 \\ & & BERT-Base & 92.14 & 51.8 & 60.29 & 70.78 & 70.14 & 54.07 & 69.44 & 51.87 & 72.43 \\ \cmidrule{2-15} & \multirow{3}{*}{SST-2} & BERT-Tiny$_\textrm{4L}$ & 92.43 & 52.98 & 79.93 & 82.45 & 84.06 & 54.01 & 84.17 & 54.24 & 86.12\\ & &
BERT-Tiny$_\textrm{6L}$ & 90.25 & 58.14 & 84.74 & 86.23 & 87.73 & 69.38 & 87.95 & 52.40 & 87.72 \\ & & BERT-Base & 93.23 & 52.29 & 78.78 & 86.01 & 86.35 & 53.32 & 84.4 & 52.40 & 87.93 \\ \cmidrule{2-15} & \multirow{3}{*}{CoLA} & BERT-Tiny$_\textrm{4L}$ & 49.61 & 6.55 & 7.22 & 12.69 & 16.86 & 0 & 14.71 & 6.25 & 17.80 \\ & & BERT-Tiny$_\textrm{6L}$ & 54.17 & 2.57 & 12.57 & 15.97 & 17.94 & 0 & 15.24 & 2.24 & 22.21 \\ & & BERT-Base & 59.71 & 4.63 & 0 & 4.74 & 15.95 & 0 & 4.63 & 0.40 & 4.63 \\ \cmidrule{2-15} & \multirow{3}{*}{STS-B} & BERT-Tiny$_\textrm{4L}$ & 86.35 & 4.31 & 18.05 & 18.74 & 22.65 & 7.45 & 22.73 & 8.20 & 27.56 \\ & & BERT-Tiny$_\textrm{6L}$ & 89.79 & 1.04 & 14.72 & 22.31 & 24.59 & 5.70 & 23.40 & 8.22 & 37.15 \\ & & BERT-Base & 90.06 & 6.94 & 12.19 & 18.26 & 20.76 & 4.99 & 8.73 & 6.59 & 10.14 \\ \cmidrule{2-15} & \multirow{3}{*}{MRPC} & BERT-Tiny$_\textrm{4L}$ & 85.50 & 68.30 & 71.74 & 71.99 & 71.74 & 68.30 & 71.74 & 71.25 & 71.49 \\ & & BERT-Tiny$_\textrm{6L}$ & 87.71 & 68.30 & 70.76 & 71.74 & 71.49 & 68.30 & 71.74 & 69.04 & 71.74 \\ & & BERT-Base & 86.24 & 68.30 & 68.3 & 70.02 & 70.27 & 68.30 & 71.25 & 68.30 & 69.04 \\ \cmidrule{2-15} & \multirow{3}{*}{RTE} & BERT-Tiny$_\textrm{4L}$ & 65.34 & 56.31 & 53.43 & 56.31 & 55.59 & 54.15 & 57.76 & 61.01 & 59.20 \\ & & BERT-Tiny$_\textrm{6L}$ & 68.95 & 56.31 & 54.51 & 54.51 & 58.12 & 49.09 & 53.43 & 58.84 & 54.87 \\ & & BERT-Base & 72.20 & 53.43 & 57.04 & 55.23 & 54.51 & 54.87 & 54.51 & 55.23 & 55.23 \\ \midrule \multicolumn{2}{l}{\multirow{2}{*}{Speech Commands}} & FSMN & 94.89 & 56.45 & 56.45 & 68.65 & 73.60 & 75.04 & 73.80 & 56.45 & 56.45 \\ & & D-FSMN & 97.51 & 88.32 & 92.03 & 78.92 & 85.11 & 56.77 & 83.80 & 92.11 & 93.91\\ \bottomrule \end{tabular}} \end{table} \begin{table}[h] \renewcommand\arraystretch{0.98} \centering \caption{Results for Robustness Corruption on CIFAR10-C Dataset with Different Binarization Algorithms (1/2). 
} \label{tab:appendix-corr-acc} \vspace{-0.1in} \setlength{\tabcolsep}{2.5mm} \resizebox{\linewidth}{!}{\begin{tabular}{llrrrrrrrrrrrr} \toprule \multirow{2}{*}{Noise} & \multirow{2}{*}{FP32} & \multicolumn{8}{c}{Binarization Algorithm} \\ & & {BNN} & {XNOR} & {DoReFa} & {Bi-Real} & {XNOR++} & {ReActNet} & {ReCU} & {FDA} \\ \midrule Origin & 94.82 & 89.69 & 91.40 & 91.55 & 91.20 & 90.04 & 91.55 & 92.79 & 90.42 \\ gaussian\_noise-1 & 78.23 & 74.22 & 76.00 & 74.97 & 74.95 & 75.07 & 75.15 & 78.25 & 77.36\\ gaussian\_noise-2 & 56.72 & 56.73 & 62.44 & 55.94 & 58.33 & 57.52 & 55.97 & 61.32 & 60.48 \\ gaussian\_noise-3 & 36.93 & 42.69 & 47.58 & 39.56 & 43.47 & 40.79 & 37.99 & 43.32 & 44.26 \\ gaussian\_noise-4 & 31.03 & 38.35 & 41.43 & 33.24 & 36.65 & 34.68 & 31.47 & 35.91 & 37.30 \\ gaussian\_noise-5 & 25.54 & 34.05 & 36.22 & 28.66 & 31.78 & 30.13 & 25.49 & 30.19 & 32.09 \\ impulse\_noise-1 & 82.54 & 84.57 & 86.94 & 84.68 & 87.30 & 85.72 & 86.89 & 88.73 & 85.8 \\ impulse\_noise-2 & 70.12 & 77.13 & 80.74 & 77.35 & 81.14 & 79.62 & 80.25 & 82.80 & 80.3 \\ impulse\_noise-3 & 59.88 & 70.58 & 75.01 & 69.20 & 74.59 & 71.82 & 72.16 & 76.05 & 72.6\\ impulse\_noise-4 & 40.59 & 54.42 & 59.39 & 49.48 & 56.66 & 52.61 & 49.79 & 58.44 & 56.45 \\ impulse\_noise-5 & 26.03 & 39.86 & 41.54 & 32.72 & 37.28 & 35.12 & 28.42 & 38.26 & 39.98\\ shot\_noise-1 & 85.75 & 81.51 & 81.31 & 81.84 & 81.58 & 80.66 & 81.88 & 83.98 & 82.42 \\ shot\_noise-2 & 76.61 & 72.04 & 74.02 & 72.21 & 72.81 & 73.03 & 72.32 & 76.70 & 75.1\\ shot\_noise-3 & 52.21 & 53.90 & 57.08 & 50.66 & 54.59 & 53.76 & 51.31 & 57.22 & 56.56 \\ shot\_noise-4 & 44.13 & 47.58 & 51.29 & 43.59 & 48.36 & 46.64 & 44.21 & 48.78 & 48.91 \\ shot\_noise-5 & 32.73 & 39.93 & 40.79 & 33.80 & 38.50 & 36.47 & 31.79 & 36.46 & 37.8\\ speckle\_noise-1 & 86.30 & 81.29 & 81.94 & 80.93 & 80.77 & 81.14 & 82.17 & 84.17 & 82.62\\ speckle\_noise-2 & 71.94 & 68.07 & 70.14 & 67.5 & 69.22 & 69.35 & 68.26 & 72.94 & 71.70\\ speckle\_noise-3 & 64.47 & 62.12 & 64.13 & 60.24 & 63.44 & 62.50 & 61.14 &
66.89 & 64.27\\ speckle\_noise-4 & 49.81 & 51.93 & 53.77 & 47.93 & 52.75 & 50.59 & 48.39 & 54.13 & 52.40 \\ speckle\_noise-5 & 38.70 & 44.25 & 43.60 & 38.65 & 43.16 & 42.09 & 37.78 & 42.13 & 42.57\\ gaussian\_blur-1 & 94.17 & 89.03 & 90.5 & 89.33 & 90.56 & 89.00 & 91.05 & 92.16 & 89.33\\ gaussian\_blur-2 & 87.04 & 78.3 & 81.98 & 78.81 & 80.42 & 77.75 & 81.20 & 84.80 & 78.93\\ gaussian\_blur-3 & 75.15 & 67.74 & 68.27 & 67.67 & 68.16 & 64.54 & 67.42 & 73.62 & 66.29\\ gaussian\_blur-4 & 59.5 & 55.17 & 53.63 & 55.74 & 54.08 & 52.44 & 52.72 & 60.32 & 53.37\\ gaussian\_blur-5 & 36.03 & 37.31 & 33.96 & 37.50 & 37.54 & 36.77 & 34.08 & 39.22 & 34.93\\ defocus\_blur-1 & 94.2 & 88.73 & 91.06 & 89.1 & 90.32 & 88.91 & 90.92 & 91.98 & 89.58\\ defocus\_blur-2 & 92.75 & 85.97 & 88.99 & 86.59 & 88.31 & 85.58 & 87.91 & 90.47 & 87.01\\ defocus\_blur-3 & 87.38 & 79.02 & 82.43 & 78.88 & 80.71 & 77.58 & 80.88 & 84.85 & 79.52\\ defocus\_blur-4 & 76.99 & 69.13 & 71.02 & 68.29 & 68.33 & 65.96 & 68.42 & 74.40 & 68.22\\ defocus\_blur-5 & 52.09 & 48.85 & 51.99 & 48.82 & 49.17 & 48.45 & 46.92 & 55.70 & 48.27 \\ glass\_blur-1 & 54.93 & 56.57 & 51.72 & 57.94 & 56.78 & 57.29 & 56.27 & 58.82 & 58.92 \\ glass\_blur-2 & 56.37 & 57.93 & 53.46 & 60.42 & 59.21 & 59.32 & 58.03 & 60.25 & 60.56 \\ glass\_blur-3 & 59.21 & 61.43 & 56.98 & 64.11 & 61.72 & 62.41 & 60.39 & 62.84 & 63.32 \\ glass\_blur-4 & 45.65 & 46.50 & 42.72 & 48.48 & 47.19 & 47.83 & 46.88 & 49.09 & 49.23 \\ glass\_blur-5 & 49.19 & 49.52 & 46.40 & 52.06 & 49.83 & 50.02 & 49.08 & 51.14 & 51.82 \\ motion\_blur-1 & 89.40 & 81.57 & 83.27 & 82.00 & 83.11 & 81.48 & 84.19 & 86.21 & 82.83 \\ motion\_blur-2 & 81.95 & 71.52 & 74.71 & 73.38 & 72.48 & 70.99 & 74.35 & 77.75 & 74.09 \\ motion\_blur-3 & 72.48 & 61.87 & 66.21 & 63.86 & 63.39 & 61.57 & 63.85 & 68.31 & 64.36 \\ motion\_blur-4 & 72.79 & 62.40 & 66.18 & 63.94 & 62.84 & 62.03 & 64.54 & 67.88 & 64.13 \\ motion\_blur-5 & 63.91 & 54.14 & 57.98 & 56.07 & 55.90 & 54.35 & 55.60 & 59.71 & 56.65 \\
zoom\_blur-1 & 87.36 & 78.25 & 81.31 & 78.56 & 79.45 & 77.20 & 80.22 & 83.69 & 78.55 \\ zoom\_blur-2 & 83.89 & 74.84 & 77.73 & 75.39 & 75.88 & 72.83 & 75.74 & 80.46 & 74.72 \\ zoom\_blur-3 & 77.73 & 69.00 & 70.98 & 68.81 & 69.03 & 66.56 & 68.34 & 74.33 & 68.21 \\ zoom\_blur-4 & 71.39 & 64.12 & 65.21 & 63.79 & 62.81 & 61.01 & 62.47 & 68.67 & 62.58 \\ zoom\_blur-5 & 60.60 & 55.15 & 55.83 & 55.4 & 54.38 & 52.66 & 52.24 & 59.51 & 53.94 \\ brightness-1 & 94.31 & 89.29 & 90.84 & 89.53 & 90.8 & 89.30 & 90.97 & 92.06 & 89.74 \\ brightness-2 & 94.03 & 88.25 & 90.42 & 88.71 & 89.66 & 88.50 & 90.64 & 91.64 & 88.77 \\ brightness-3 & 93.53 & 87.40 & 89.38 & 87.38 & 89.17 & 87.31 & 89.63 & 90.72 & 87.84 \\ brightness-4 & 92.74 & 85.45 & 88.27 & 86.12 & 87.78 & 85.44 & 88.16 & 89.58 & 86.39 \\ brightness-5 & 90.36 & 80.95 & 85.22 & 81.65 & 83.79 & 80.99 & 84.45 & 86.53 & 82.04 \\ \bottomrule \end{tabular}} \end{table} \begin{table}[h] \renewcommand\arraystretch{1.05} \centering \caption{Results for Robustness Corruption on CIFAR10-C Dataset with Different Binarization Algorithms (2/2).} \label{tab:appendix-corr-acc2} \vspace{-0.1in} \setlength{\tabcolsep}{2.35mm} \resizebox{\linewidth}{!}{\begin{tabular}{llrrrrrrrrrrrr} \toprule \multirow{2}{*}{Noise} & \multirow{2}{*}{FP32} & \multicolumn{8}{c}{Binarization Algorithm} \\ & & {BNN} & {XNOR} & {DoReFa} & {Bi-Real} & {XNOR++} & {ReActNet} & {ReCU} & {FDA} \\ \midrule fog-1 & 94.04 & 88.17 & 90.89 & 88.84 & 89.91 & 88.51 & 90.84 & 92.08 & 89.43 \\ fog-2 & 93.03 & 84.58 & 88.85 & 85.48 & 87.26 & 84.77 & 88 & 89.87 & 86.76 \\ fog-3 & 90.69 & 78.07 & 85.2 & 80.07 & 83.32 & 78.94 & 83.77 & 86.82 & 82.78 \\ fog-4 & 86.72 & 69.56 & 78.92 & 72.27 & 75.89 & 71.01 & 77.96 & 81.56 & 75.62 \\ fog-5 & 68.6 & 49.04 & 53.9 & 52.33 & 52.88 & 49.68 & 57.67 & 62.29 & 55.18 \\ frost-1 & 89.97 & 83.66 & 84.7 & 84.07 & 85.76 & 83.64 & 85.51 & 87.75 & 84.85 \\ frost-2 & 84.42 & 77.88 & 78.97 & 77.47 & 78.96 & 77.26 & 79.14 & 81.4 & 79.16 \\ frost-3 & 74.85 & 67.67 & 69.3 & 67.14 & 68.76 &
66.03 & 69.58 & 72.54 & 70.14 \\ frost-4 & 73.32 & 65.93 & 67.14 & 65.97 & 68.52 & 65.37 & 67.93 & 71.44 & 69.41 \\ frost-5 & 62.13 & 55.02 & 56.67 & 55.62 & 56.77 & 54.26 & 57.69 & 61.11 & 59.88 \\ snow-1 & 89.26 & 84.58 & 85.44 & 84.59 & 86.43 & 85.07 & 85.95 & 87.82 & 85.67 \\ snow-2 & 78.96 & 72.01 & 73.37 & 73.14 & 73.42 & 72.43 & 73.65 & 78.05 & 73.84 \\ snow-3 & 82.85 & 75.6 & 76.84 & 75.74 & 76.95 & 75.74 & 77.76 & 79.98 & 75.95 \\ snow-4 & 80.29 & 70.84 & 72.56 & 71.93 & 72.39 & 71.25 & 72.97 & 76.12 & 72.16 \\ snow-5 & 74.94 & 63.85 & 66.94 & 65.9 & 65.71 & 64.24 & 66.29 & 70.33 & 66.53 \\ contrast-1 & 93.82 & 87.23 & 89.96 & 87.93 & 89.68 & 87.88 & 90.16 & 91.53 & 88.84 \\ contrast-2 & 90.53 & 76.02 & 84 & 77.27 & 82.1 & 76.3 & 82.8 & 86.02 & 81.37 \\ contrast-3 & 85.84 & 63.97 & 77.1 & 65.77 & 72.77 & 64.18 & 74.62 & 79.30 & 71.89 \\ contrast-4 & 75.08 & 44.07 & 62.37 & 47.35 & 55.57 & 44.94 & 59.87 & 65.72 & 55.14\\ contrast-5 & 29.36 & 20 & 25.67 & 20.18 & 22.18 & 21.04 & 25.28 & 25.66 & 24.04 \\ elastic\_transform-1 & 89.97 & 82.97 & 84.5 & 83.38 & 84.74 & 82.48 & 84.42 & 86.54 & 83.25\\ elastic\_transform-2 & 89.43 & 82.12 & 84.79 & 82.44 & 84.07 & 81.97 & 84.61 & 86.20 & 83.15 \\ elastic\_transform-3 & 85.52 & 77.56 & 80.71 & 77.92 & 79.11 & 77 & 79.53 & 82.27 & 78.35 \\ elastic\_transform-4 & 79.48 & 73.75 & 74.83 & 73.92 & 73.41 & 72.77 & 73.98 & 77.53 & 74.17 \\ elastic\_transform-5 & 75.02 & 70.97 & 70.22 & 71.36 & 71.03 & 71.63 & 71.65 & 75.31 & 71.49 \\ jpeg\_compression-1 & 87.36 & 83.28 & 83.93 & 84.07 & 83.83 & 83.77 & 84.3 & 85.65 & 83.63 \\ jpeg\_compression-2 & 81.68 & 80.09 & 79.66 & 79.77 & 80.03 & 80.29 & 80.59 & 81.66 & 79.8 \\ jpeg\_compression-3 & 79.98 & 78.55 & 78.21 & 78.32 & 78.27 & 78.99 & 78.51 & 79.94 & 78.44 \\ jpeg\_compression-4 & 77.17 & 77.12 & 75.78 & 77.44 & 77.04 & 77.46 & 77.08 & 77.67 & 76.71 \\ jpeg\_compression-5 & 73.85 & 74.51 & 73.04 & 74.65 & 74.13 & 75.26 & 74.16 & 74.85 & 74.19 \\ pixelate-1 & 92.57 & 86.97 & 88.19
& 87.39 & 88.73 & 87.26 \\ pixelate-2 & 88.23 & 81.91 & 80.95 & 82.37 & 82.82 & 81.93 & 81.99 & 83.80 & 80.95 \\ pixelate-3 & 84 & 78.4 & 75.25 & 78.63 & 78.03 & 77.15 & 77.28 & 78.89 & 75.30 \\ pixelate-4 & 68.51 & 64.11 & 58.11 & 62.49 & 60.89 & 61.43 & 60.06 & 61.71 & 59.95 \\ pixelate-5 & 50.57 & 50.68 & 44.77 & 48.18 & 45.44 & 46.74 & 43.27 & 47.29 & 45.62 \\ saturate-1 & 92.41 & 84.98 & 88.38 & 85.38 & 87.26 & 85.39 & 88.23 & 89.34 & 86.82 \\ saturate-2 & 90.12 & 80.74 & 85.26 & 81.57 & 82.68 & 80.74 & 84.22 & 86.06 & 83.09 \\ saturate-3 & 93.83 & 87.89 & 90.45 & 88.12 & 89.53 & 88.03 & 90.15 & 90.97 & 88.73 \\ saturate-4 & 91.61 & 82.5 & 86.88 & 82.6 & 84.66 & 82.44 & 86.04 & 87.56 & 83.70 \\ saturate-5 & 87.48 & 76.03 & 82.76 & 75.53 & 78.3 & 75.85 & 80.62 & 82.64 & 77.00 \\ spatter-1 & 91.17 & 87.5 & 89.75 & 87.83 & 89.34 & 87.9 & 89.42 & 91.00 & 88.98 \\ spatter-2 & 85.2 & 83.85 & 85.98 & 83.52 & 85.64 & 84.72 & 85.88 & 87.59 & 85.00 \\ spatter-3 & 80.63 & 77.94 & 80.33 & 77.95 & 80.19 & 79.41 & 80.07 & 82.65 & 79.50 \\ spatter-4 & 94.68 & 84.57 & 86.71 & 84.77 & 86.51 & 85.14 & 86.72 & 88.22 & 85.32 \\ spatter-5 & 74.07 & 78.85 & 80.77 & 78.71 & 80.94 & 79.71 & 80.51 & 83.14 & 79.48 \\ \midrule Overall & 74.11 & 69.43 & 71.51 & 69.36 & 70.76 & 69.09 & 70.31 & 73.56 & 70.70 \\ \bottomrule \end{tabular}} \end{table} \begin{table}[h] \renewcommand\arraystretch{1.2} \centering \caption{Sensitivity to Hyperparameters in Training (1/2).} \label{tab:appendix-sensitivity-acc1} \vspace{-0.1in} \setlength{\tabcolsep}{1.6mm} \resizebox{\linewidth}{!}{\begin{tabular}{lccrrccccccc} \toprule {Algorithm} & {Epoch} & {Optimizer} & {Learning Rate} & {Scheduler} & {Acc.} & {Acc.$_{1}$} & {Acc.$_{2}$} & {Acc.$_{3}$} & {Acc.$_{4}$} & {mean} & {std}\\ \midrule \multirow{8}{*}{FP32} & 200 & SGD & 0.1 & cosine & 94.58 & 94.6 & 95.05 & 94.64 & 94.84 & 94.74 & 0.20 \\ ~ & 200 & SGD & 0.1 & step & 92.63 & 92.42 & 92.15 & 92.62 & 92.38 & 92.44 & 0.20 \\ ~ & 200 & SGD & 0.01 & cosine
& 92.23 & 91.76 & 91.76 & 91.99 & 92.17 & 91.98 & 0.22 \\ ~ & 200 & SGD & 0.01 & step & 83.94 & 83.50 & 82.80 & 84.13 & 83.89 & 83.65 & 0.53 \\ ~ & 200 & Adam & 0.001 & cosine & 93.51 & 92.94 & 93.12 & 93.35 & 92.86 & 93.16 & 0.27 \\ ~ & 200 & Adam & 0.001 & step & 93.37 & 93.15 & 93.32 & 93.41 & 93.35 & 93.32 & 0.10 \\ ~ & 200 & Adam & 0.0001 & cosine & 89.97 & 89.92 & 89.96 & 89.9 & 89.92 & 89.93 & 0.03 \\ ~ & 200 & Adam & 0.0001 & step & 90.57 & 89.91 & 90.43 & 90.25 & 90.31 & 90.29 & 0.25 \\ \midrule \multirow{8}{*}{BNN} & 200 & SGD & 0.1 & cosine & 87.62 & 87.53 & 87.99 & 88.86 & 87.84 & 87.97 & 0.53 \\ ~ & 200 & SGD & 0.1 & step & 70.87 & 73.86 & 71.83 & 73.1 & 72.87 & 72.51 & 1.17 \\ ~ & 200 & SGD & 0.01 & cosine & 73.52 & 72.62 & 72.82 & 71.14 & 72.59 & 72.54 & 0.87 \\ ~ & 200 & SGD & 0.01 & step & 52.85 & 51.77 & 52.00 & 52.34 & 53.14 & 52.42 & 0.57 \\ ~ & 200 & Adam & 0.001 & cosine & 88.76 & 88.99 & 88.67 & 88.84 & 88.81 & 88.81 & 0.12 \\ ~ & 200 & Adam & 0.001 & step & 88.85 & 89.34 & 88.77 & 89.02 & 89.00 & 89.00 & 0.22 \\ ~ & 200 & Adam & 0.0001 & cosine & 83.46 & 83.09 & 83.20 & 83.70 & 83.20 & 83.33 & 0.25 \\ ~ & 200 & Adam & 0.0001 & step & 84.08 & 84.11 & 84.20 & 84.31 & 83.56 & 84.05 & 0.29 \\ \midrule \multirow{8}{*}{XNOR} & 200 & SGD & 0.1 & cosine & 91.83 & 91.99 & 91.87 & 92.01 & 91.56 & 91.85 & 0.18 \\ ~ & 200 & SGD & 0.1 & step & 90.02 & 90.01 & 90.12 & 89.82 & 89.7 & 89.93 & 0.17 \\ ~ & 200 & SGD & 0.01 & cosine & 90.09 & 89.68 & 90.01 & 89.97 & 90.00 & 89.95 & 0.16 \\ ~ & 200 & SGD & 0.01 & step & 86.86 & 86.66 & 87.21 & 86.98 & 86.61 & 86.86 & 0.24 \\ ~ & 200 & Adam & 0.001 & cosine & 89.39 & 89.81 & 89.73 & 89.91 & 89.75 & 89.72 & 0.20 \\ ~ & 200 & Adam & 0.001 & step & 89.92 & 89.79 & 89.73 & 90.01 & 89.61 & 89.81 & 0.16 \\ ~ & 200 & Adam & 0.0001 & cosine & 86.18 & 86.29 & 87.03 & 86.36 & 86.62 & 86.50 & 0.34 \\ ~ & 200 & Adam & 0.0001 & step & 86.32 & 87.04 & 86.68 & 86.99 & 87.18 & 86.84 & 0.34 \\ \midrule \multirow{8}{*}{DoReFa} & 
200 & SGD & 0.1 & cosine & 85.64 & 85.67 & 85.89 & 86.00 & 85.79 & 85.80 & 0.15 \\ ~ & 200 & SGD & 0.1 & step & 86.95 & 86.98 & 86.69 & 86.62 & 86.65 & 86.78 & 0.17 \\ ~ & 200 & SGD & 0.01 & cosine & 86.56 & 86.59 & 86.52 & 86.69 & 86.88 & 86.65 & 0.14 \\ ~ & 200 & SGD & 0.01 & step & 78.76 & 79.97 & 80.73 & 79.94 & 80.47 & 79.97 & 0.76 \\ ~ & 200 & Adam & 0.001 & cosine & 88.85 & 89.06 & 88.92 & 88.87 & 88.75 & 88.89 & 0.11 \\ ~ & 200 & Adam & 0.001 & step & 89.08 & 89.16 & 88.93 & 89.23 & 88.84 & 89.05 & 0.16 \\ ~ & 200 & Adam & 0.0001 & cosine & 83.56 & 83.17 & 83.65 & 83.60 & 83.66 & 83.53 & 0.20 \\ ~ & 200 & Adam & 0.0001 & step & 83.70 & 83.74 & 84.27 & 84.19 & 84.01 & 83.98 & 0.26 \\ \midrule \multirow{8}{*}{Bi-Real} & 200 & SGD & 0.1 & cosine & 87.55 & 87.81 & 88.06 & 87.30 & 87.88 & 87.72 & 0.30 \\ ~ & 200 & SGD & 0.1 & step & 87.95 & 88.35 & 88.13 & 87.73 & 88.25 & 88.08 & 0.25 \\ ~ & 200 & SGD & 0.01 & cosine & 87.76 & 87.93 & 87.73 & 87.72 & 87.64 & 87.76 & 0.11 \\ ~ & 200 & SGD & 0.01 & step & 83.75 & 82.91 & 82.82 & 82.91 & 83.39 & 83.16 & 0.40 \\ ~ & 200 & Adam & 0.001 & cosine & 88.78 & 89.15 & 89.06 & 89.00 & 89.2 & 89.04 & 0.16 \\ ~ & 200 & Adam & 0.001 & step & 88.89 & 88.98 & 88.78 & 89.11 & 89.05 & 88.96 & 0.13 \\ ~ & 200 & Adam & 0.0001 & cosine & 83.96 & 84.17 & 84.37 & 83.54 & 84.07 & 84.02 & 0.31 \\ ~ & 200 & Adam & 0.0001 & step & 84.63 & 84.48 & 84.32 & 84.75 & 84.29 & 84.49 & 0.20 \\ \bottomrule \end{tabular}} \end{table} \begin{table}[h] \renewcommand\arraystretch{1.6} \centering \caption{Sensitivity to Hyperparameters in Training (2/2).} \label{tab:appendix-sensitivity-acc2} \vspace{-0.1in} \setlength{\tabcolsep}{1.6mm} \resizebox{\linewidth}{!}{\begin{tabular}{lrrrrrrrrrrr} \toprule {Algorithm} & {Epoch} & {Optimizer} & {Learning Rate} & {Scheduler} & {Acc.} & {Acc.$_{1}$} & {Acc.$_{2}$} & {Acc.$_{3}$} & {Acc.$_{4}$} & {mean} & {std}\\ \midrule \multirow{8}{*}{XNOR++} & 200 & SGD & 0.1 & cosine & 87.82 & 88.42 & 88.12 & 88.55 & 88.19 & 88.22 & 0.28 \\ ~ & 200
& SGD & 0.1 & step & 73.55 & 73.11 & 75.06 & 74.05 & 73.78 & 73.91 & 0.73 \\ ~ & 200 & SGD & 0.01 & cosine & 74.03 & 75.06 & 73.64 & 74.53 & 74.71 & 74.39 & 0.56 \\ ~ & 200 & SGD & 0.01 & step & 53.55 & 54.16 & 54.01 & 52.91 & 54.36 & 53.80 & 0.58 \\ ~ & 200 & Adam & 0.001 & cosine & 88.77 & 88.65 & 89.10 & 88.61 & 88.81 & 88.79 & 0.19 \\ ~ & 200 & Adam & 0.001 & step & 89.18 & 89.05 & 89.27 & 88.93 & 89.00 & 89.09 & 0.14 \\ ~ & 200 & Adam & 0.0001 & cosine & 83.86 & 83.49 & 83.56 & 83.16 & 83.62 & 83.54 & 0.25 \\ ~ & 200 & Adam & 0.0001 & step & 83.46 & 83.77 & 84.40 & 84.06 & 83.82 & 83.90 & 0.35 \\ \midrule \multirow{8}{*}{ReActNet} & 200 & SGD & 0.1 & cosine & 88.60 & 88.53 & 88.38 & 88.48 & 88.89 & 88.58 & 0.19 \\ ~ & 200 & SGD & 0.1 & step & 88.42 & 88.01 & 88.10 & 88.02 & 88.43 & 88.20 & 0.21 \\ ~ & 200 & SGD & 0.01 & cosine & 87.75 & 87.86 & 88.00 & 87.80 & 88.02 & 87.89 & 0.12 \\ ~ & 200 & SGD & 0.01 & step & 83.29 & 82.89 & 83.65 & 83.76 & 83.27 & 83.37 & 0.35 \\ ~ & 200 & Adam & 0.001 & cosine & 89.47 & 89.29 & 89.01 & 89.05 & 89.14 & 89.19 & 0.19 \\ ~ & 200 & Adam & 0.001 & step & 89.27 & 89.74 & 89.48 & 89.40 & 89.39 & 89.46 & 0.18 \\ ~ & 200 & Adam & 0.0001 & cosine & 84.65 & 84.93 & 84.48 & 84.65 & 84.67 & 84.68 & 0.16 \\ ~ & 200 & Adam & 0.0001 & step & 84.69 & 84.55 & 84.93 & 84.94 & 85.38 & 84.90 & 0.32 \\ \midrule \multirow{8}{*}{ReCU} & 200 & SGD & 0.1 & cosine & 91.72 & 91.94 & 91.68 & 91.69 & 91.81 & 91.77 & 0.11 \\ ~ & 200 & SGD & 0.1 & step & 87.73 & 88.14 & 87.81 & 88.02 & 87.91 & 87.92 & 0.16 \\ ~ & 200 & SGD & 0.01 & cosine & 87.32 & 87.28 & 87.53 & 87.48 & 87.32 & 87.39 & 0.11 \\ ~ & 200 & SGD & 0.01 & step & 71.86 & 71.72 & 71.78 & 72.26 & 71.59 & 71.84 & 0.25 \\ ~ & 200 & Adam & 0.001 & cosine & 88.24 & 89.98 & 88.26 & 88.48 & 88.13 & 88.62 & 0.77 \\ ~ & 200 & Adam & 0.001 & step & 88.36 & 88.48 & 88.55 & 88.42 & 88.63 & 88.49 & 0.11 \\ ~ & 200 & Adam & 0.0001 & cosine & 80.07 & 81.10 & 80.62 & 81.09 & 79.95 & 80.57 & 0.55 \\ ~ & 200 & 
Adam & 0.0001 & step & 81.26 & 81.42 & 81.08 & 81.58 & 81.69 & 81.41 & 0.24 \\ \midrule \multirow{8}{*}{FDA} & 200 & SGD & 0.1 & cosine & 89.69 & 89.59 & 89.56 & 89.53 & 89.65 & 89.60 & 0.07 \\ ~ & 200 & SGD & 0.1 & step & 80.38 & 80.34 & 80.83 & 80.52 & 80.52 & 80.52 & 0.19 \\ ~ & 200 & SGD & 0.01 & cosine & 80.72 & 80.93 & 80.89 & 80.70 & 80.79 & 80.81 & 0.10 \\ ~ & 200 & SGD & 0.01 & step & 63.41 & 62.85 & 63.04 & 63.04 & 63.14 & 63.10 & 0.20 \\ ~ & 200 & Adam & 0.001 & cosine & 89.70 & 89.57 & 89.57 & 89.80 & 89.76 & 89.68 & 0.11 \\ ~ & 200 & Adam & 0.001 & step & 89.84 & 89.85 & 90.10 & 89.79 & 90.01 & 89.92 & 0.13 \\ ~ & 200 & Adam & 0.0001 & cosine & 89.59 & 89.10 & 89.34 & 89.31 & 89.51 & 89.37 & 0.19 \\ ~ & 200 & Adam & 0.0001 & step & 89.52 & 89.59 & 89.52 & 89.64 & 89.58 & 89.57 & 0.05 \\ \bottomrule \end{tabular}} \end{table} \begin{table}[h] \renewcommand\arraystretch{0.92} \centering \caption{Inference Efficiency on Hardware (1/4).} \label{app:tab:eff1} \vspace{-0.1in} \setlength{\tabcolsep}{3.3mm} \resizebox{\linewidth}{!}{\begin{tabular}{lllcrrrrrrrr} \toprule \multirow{2}{*}{Hardware} & \multirow{2}{*}{Threads} & \multirow{2}{*}{Arch.} & \multicolumn{3}{c}{Larq} & \multicolumn{2}{c}{daBNN} \\ & & & {FP32} & {BNN} & {ReAct} & {FP32} & {BNN}\\ \midrule \multirow{13}{*}{Kirin 970} &\multirow{3}{*}{1} & ResNet18 & 716.427 & 123.263 & 126.457 & 427.585 & 72.585 & \\ & & ResNet34 & 1449.67 & 159.615 & 171.227 & 836.321 & 124.091 & \\ & & VGG-Small & 242.443 & 14.833 & 16.401 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{2} & ResNet18 & 372.642 & 72.697 & 78.605 & -- & -- & \\ & & ResNet34 & 732.355 & 96.711 & 108.41 & -- & -- & \\ & & VGG-Small & 121.91 & 10.304 & 11.935 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{4} & ResNet18 & 191.517 & 42.986 & 47.182 & -- & -- & \\ & & ResNet34 & 367.891 & 61.413 & 73.101 & -- & -- & \\ & & VGG-Small & 57.721 & 8.72 & 8.387 & -- & -- & \\ \cmidrule{2-8} & \multirow{3}{*}{8} & ResNet18 & 96.937 & 37.457 & 56.017 & -- & -- & \\ & &
ResNet34 & 212.982 & 53.809 & 67.667 & -- & -- & \\ & & VGG-Small & 33.647 & 18.649 & 19.818 & -- & -- & \\ \midrule \multirow{13}{*}{Kirin 980} &\multirow{3}{*}{1} & ResNet18 & 307.624 & 49.009 & 50.018 & 158.363 & 31.803 & \\ & & ResNet34 & 507.734 & 71.909 & 74.920 & 308.537 & 53.031 & \\ & & VGG-Small & 83.163 & 7.772 & 8.215 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{2} & ResNet18 & 187.494 & 52.057 & 54.285 & -- & -- & \\ & & ResNet34 & 367.853 & 57.336 & 60.483 & -- & -- & \\ & & VGG-Small & 49.264 & 6.116 & 5.604 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{4} & ResNet18 & 104.076 & 29.556 & 35.539 & -- & -- & \\ & & ResNet34 & 202.173 & 31.324 & 35.911 & -- & -- & \\ & & VGG-Small & 22.690 & 3.147 & 3.291 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{8} & ResNet18 & 60.307 & 45.683 & 56.416 & -- & -- & \\ & & ResNet34 & 120.738 & 60.758 & 86.887 & -- & -- & \\ & & VGG-Small & 18.147 & 21.688 & 23.350 & -- & -- & \\ \midrule \multirow{13}{*}{Kirin 985} &\multirow{3}{*}{1} & ResNet18 & 173.238 & 27.429 & 30.626 & 164.556 & 34.528 & \\ & & ResNet34 & 438.971 & 58.165 & 60.885 & 323.439 & 57.808 & \\ & & VGG-Small & 70.797 & 6.147 & 6.796 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{2} & ResNet18 & 103.621 & 25.672 & 35.477 & -- & -- & \\ & & ResNet34 & 327.416 & 53.949 & 62.865 & -- & -- & \\ & & VGG-Small & 55.328 & 5.955 & 6.243 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{4} & ResNet18 & 92.387 & 26.728 & 34.778 & -- & -- & \\ & & ResNet34 & 184.050 & 39.881 & 52.153 & -- & -- & \\ & & VGG-Small & 28.076 & 8.919 & 14.795 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{8} & ResNet18 & 130.972 & 82.772 & 89.766 & -- & -- & \\ & & ResNet34 & 227.504 & 119.586 & 143.958 & -- & -- & \\ & & VGG-Small & 44.339 & 34.034 & 43.816 & -- & -- & \\ \midrule \multirow{13}{*}{Kirin 990} &\multirow{3}{*}{1} & ResNet18 & 114.238 & 21.235 & 22.066 & 144.205 & 29.239 & \\ & & ResNet34 & 227.043 & 31.545 & 32.821 & 275.502 & 49.476 & \\ & & VGG-Small & 38.118 & 
3.338 & 3.482 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{2} & ResNet18 & 59.329 & 13.911 & 14.179 & -- & -- & \\ & & ResNet34 & 116.822 & 23.452 & 22.770 & -- & -- & \\ & & VGG-Small & 20.055 & 2.080 & 2.194 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{4} & ResNet18 & 38.403 & 10.280 & 12.208 & -- & -- & \\ & & ResNet34 & 81.273 & 15.570 & 17.727 & -- & -- & \\ & & VGG-Small & 13.508 & 1.542 & 1.760 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{8} & ResNet18 & 37.703 & 25.360 & 31.365 & -- & -- & \\ & & ResNet34 & 78.753 & 34.884 & 39.363 & -- & -- & \\ & & VGG-Small & 12.707 & 14.414 & 21.749 & -- & -- & \\ \bottomrule \end{tabular}} \end{table} \begin{table}[h] \renewcommand\arraystretch{0.92} \centering \caption{Inference Efficiency on Hardware (2/4).} \label{app:tab:eff2} \vspace{-0.1in} \setlength{\tabcolsep}{3.mm} \resizebox{\linewidth}{!}{\begin{tabular}{lllcrrrrrrrr} \toprule \multirow{2}{*}{Hardware} & \multirow{2}{*}{Threads} & \multirow{2}{*}{Arch.} & \multicolumn{3}{c}{Larq} & \multicolumn{2}{c}{daBNN} \\ & & & {FP32} & {BNN} & {ReAct} & {FP32} & {BNN}\\ \midrule \multirow{13}{*}{Kirin 9000E} &\multirow{3}{*}{1} & ResNet18 & 118.059 & 19.865 & 20.547 & 129.270 & 24.781 & \\ & & ResNet34 & 236.047 & 31.822 & 32.575 & 250.680 & 42.134 & \\ & & VGG-Small & 39.114 & 3.595 & 3.832 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{2} & ResNet18 & 68.351 & 16.821 & 16.115 & -- & -- & \\ & & ResNet34 & 133.671 & 25.061 & 24.660 & -- & -- & \\ & & VGG-Small & 23.018 & 2.684 & 2.598 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{4} & ResNet18 & 45.592 & 17.452 & 18.847 & -- & -- & \\ & & ResNet34 & 91.648 & 23.395 & 28.022 & -- & -- & \\ & & VGG-Small & 14.360 & 2.762 & 2.782 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{8} & ResNet18 & 43.363 & 61.263 & 42.328 & -- & -- & \\ & & ResNet34 & 89.405 & 70.232 & 93.558 & -- & -- & \\ & & VGG-Small & 19.070 & 17.351 & 23.825 & -- & -- & \\ \midrule \multirow{13}{*}{Dimensity 820} &\multirow{3}{*}{1} & ResNet18 & 158.835 & 32.636 & 34.912
& 323.035 & 63.471 & \\ & & ResNet34 & 328.020 & 57.133 & 60.807 & 629.493 & 102.443 & \\ & & VGG-Small & 82.417 & 5.958 & 6.420 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{2} & ResNet18 & 122.167 & 29.306 & 34.384 & -- & -- & \\ & & ResNet34 & 250.088 & 43.306 & 50.143 & -- & -- & \\ & & VGG-Small & 51.320 & 4.670 & 5.053 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{4} & ResNet18 & 94.636 & 21.850 & 30.027 & -- & -- & \\ & & ResNet34 & 177.757 & 33.809 & 40.816 & -- & -- & \\ & & VGG-Small & 45.056 & 4.223 & 4.546 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{8} & ResNet18 & 90.210 & 45.357 & 61.981 & -- & -- & \\ & & ResNet34 & 166.989 & 68.444 & 74.286 & -- & -- & \\ & & VGG-Small & 32.971 & 21.344 & 23.706 & -- & -- & \\ \midrule \multirow{13}{*}{Dimensity 9000} &\multirow{3}{*}{1} & ResNet18 & 106.388 & 21.023 & 24.770 & 148.690 & 29.030 & \\ & & ResNet34 & 210.665 & 32.841 & 34.590 & 284.438 & 49.854 & \\ & & VGG-Small & 42.057 & 4.410 & 5.530 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{2} & ResNet18 & 81.606 & 22.661 & 27.050 & -- & -- & \\ & & ResNet34 & 143.349 & 27.666 & 37.910 & -- & -- & \\ & & VGG-Small & 26.512 & 2.273 & 2.410 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{4} & ResNet18 & 51.421 & 13.079 & 15.200 & -- & -- & \\ & & ResNet34 & 100.249 & 23.314 & 25.920 & -- & -- & \\ & & VGG-Small & 17.735 & 3.015 & 3.770 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{8} & ResNet18 & 43.355 & 24.939 & 30.740 & -- & -- & \\ & & ResNet34 & 84.182 & 30.212 & 39.990 & -- & -- & \\ & & VGG-Small & 14.857 & 14.258 & 17.540 & -- & -- & \\ \midrule \multirow{13}{*}{Snapdragon 855+} &\multirow{3}{*}{1} & ResNet18 & 90.430 & 19.769 & 20.530 & 163.293 & 31.174 & \\ & & ResNet34 & 186.694 & 29.126 & 30.512 & 298.882 & 49.948 & \\ & & VGG-Small & 29.735 & 3.153 & 3.259 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{2} & ResNet18 & 58.510 & 25.780 & 26.331 & -- & -- & \\ & & ResNet34 & 124.580 & 31.023 & 32.646 & -- & -- & \\ & & VGG-Small & 19.408 & 2.258 
& 2.471 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{4} & ResNet18 & 39.269 & 19.865 & 23.297 & -- & -- & \\ & & ResNet34 & 82.180 & 30.248 & 31.387 & -- & -- & \\ & & VGG-Small & 13.566 & 2.032 & 2.359 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{8} & ResNet18 & 36.630 & 49.060 & 85.861 & -- & -- & \\ & & ResNet34 & 73.513 & 41.131 & 88.101 & -- & -- & \\ & & VGG-Small & 12.860 & 17.828 & 23.489 & -- & -- & \\ \bottomrule \end{tabular}} \end{table} \begin{table}[h] \renewcommand\arraystretch{0.92} \centering \caption{Inference Efficiency on Hardware (3/4).} \label{app:tab:eff3} \vspace{-0.1in} \setlength{\tabcolsep}{2.7mm} \resizebox{\linewidth}{!}{\begin{tabular}{lllcrrrrrrrr} \toprule \multirow{2}{*}{Hardware} & \multirow{2}{*}{Threads} & \multirow{2}{*}{Arch.} & \multicolumn{3}{c}{Larq} & \multicolumn{2}{c}{daBNN} \\ & & & {FP32} & {BNN} & {ReAct} & {FP32} & {BNN}\\ \midrule \multirow{13}{*}{Snapdragon 870} &\multirow{3}{*}{1} & ResNet18 & 88.145 & 16.527 & 17.020 & 126.762 & 25.240 & \\ & & ResNet34 & 185.468 & 25.488 & 26.195 & 237.361 & 41.440 & \\ & & VGG-Small & 30.318 & 2.851 & 2.964 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{2} & ResNet18 & 63.829 & 18.351 & 19.575 & -- & -- & \\ & & ResNet34 & 159.174 & 25.352 & 26.340 & -- & -- & \\ & & VGG-Small & 27.669 & 2.094 & 2.308 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{4} & ResNet18 & 42.796 & 17.578 & 21.083 & -- & -- & \\ & & ResNet34 & 89.960 & 25.816 & 27.201 & -- & -- & \\ & & VGG-Small & 14.829 & 2.614 & 2.215 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{8} & ResNet18 & 46.798 & 19.192 & 28.579 & -- & -- & \\ & & ResNet34 & 97.834 & 25.060 & 32.863 & -- & -- & \\ & & VGG-Small & 16.799 & 9.717 & 17.293 & -- & -- & \\ \midrule \multirow{13}{*}{Snapdragon 888} &\multirow{3}{*}{1} & ResNet18 & 77.205 & 15.547 & 16.111 & 123.618 & 25.240 & \\ & & ResNet34 & 152.887 & 22.906 & 23.893 & 234.972 & 41.648 & \\ & & VGG-Small & 25.133 & 2.410 & 2.543 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{2} & ResNet18 & 46.297 &
19.309 & 19.321 & -- & -- & \\ & & ResNet34 & 93.615 & 20.473 & 22.489 & -- & -- & \\ & & VGG-Small & 16.001 & 1.920 & 2.213 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{4} & ResNet18 & 33.524 & 13.699 & 14.332 & -- & -- & \\ & & ResNet34 & 67.495 & 19.020 & 21.157 & -- & -- & \\ & & VGG-Small & 11.743 & 2.882 & 2.768 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{8} & ResNet18 & 33.761 & 26.108 & 58.989 & -- & -- & \\ & & ResNet34 & 67.876 & 37.018 & 61.315 & -- & -- & \\ & & VGG-Small & 11.752 & 27.615 & 16.774 & -- & -- & \\ \midrule \multirow{13}{*}{Raspberry Pi 3B+} &\multirow{3}{*}{1} & ResNet18 & 740.509 & 158.732 & 175.256 & 1460.723 & 241.713 & \\ & & ResNet34 & 1536.915 & 240.606 & 254.810 & 2774.888 & 435.170 & \\ & & VGG-Small & 257.079 & 24.479 & 25.790 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{2} & ResNet18 & 667.012 & 143.716 & 106.894 & -- & -- & \\ & & ResNet34 & 933.149 & 144.287 & 158.868 & -- & -- & \\ & & VGG-Small & 145.427 & 14.503 & 15.628 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{4} & ResNet18 & 562.567 & 108.585 & 116.640 & -- & -- & \\ & & ResNet34 & 976.223 & 159.258 & 183.698 & -- & -- & \\ & & VGG-Small & 191.470 & 10.839 & 10.196 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{8} & ResNet18 & 877.026 & 279.660 & 356.239 & -- & -- & \\ & & ResNet34 & 1638.035 & 389.924 & 485.260 & -- & -- & \\ & & VGG-Small & 399.338 & 110.448 & 142.978 & -- & -- & \\ \midrule \multirow{13}{*}{Raspberry Pi 4B} &\multirow{3}{*}{1} & ResNet18 & 448.744 & 80.822 & 82.380 & 688.838 & 120.348 & \\ & & ResNet34 & 897.735 & 112.837 & 119.536 & 1362.893 & 209.276 & \\ & & VGG-Small & 150.814 & 11.177 & 12.024 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{2} & ResNet18 & 261.861 & 49.079 & 55.279 & -- & -- & \\ & & ResNet34 & 525.735 & 67.480 & 79.468 & -- & -- & \\ & & VGG-Small & 89.284 & 6.647 & 7.882 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{4} & ResNet18 & 270.191 & 36.331 & 45.903 & -- & -- & \\ & & ResNet34 & 572.423 & 53.866 & 70.841 & --
& -- & \\ & & VGG-Small & 90.650 & 5.056 & 6.167 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{8} & ResNet18 & 466.585 & 168.844 & 226.771 & -- & -- & \\ & & ResNet34 & 879.375 & 264.638 & 319.789 & -- & -- & \\ & & VGG-Small & 216.439 & 114.064 & 162.118 & -- & -- & \\ \bottomrule \end{tabular}} \end{table} \begin{table}[h] \renewcommand\arraystretch{1.5} \centering \caption{Inference Efficiency on Hardware (4/4).} \label{app:tab:eff4} \vspace{-0.1in} \setlength{\tabcolsep}{3.4mm} \resizebox{\linewidth}{!}{\begin{tabular}{lllcrrrrrrrr} \toprule \multirow{2}{*}{Hardware} & \multirow{2}{*}{Threads} & \multirow{2}{*}{Arch.} & \multicolumn{3}{c}{Larq} & \multicolumn{2}{c}{daBNN} \\ & & & {FP32} & {BNN} & {ReAct} & {FP32} & {BNN}\\ \midrule \multirow{16}{*}{Apple M1} &\multirow{3}{*}{1} & ResNet18 & 44.334 & 8.219 & 8.355 & -- & -- & \\ & & ResNet34 & 88.334 & 12.505 & 12.771 & -- & -- & \\ & & VGG-Small & 14.093 & 1.446 & 1.465 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{2} & ResNet18 & 24.775 & 5.037 & 5.194 & -- & -- & \\ & & ResNet34 & 47.179 & 7.425 & 7.690 & -- & -- & \\ & & VGG-Small & 7.398 & 0.829 & 0.854 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{4} & ResNet18 & 18.612 & 3.448 & 3.671 & -- & -- & \\ & & ResNet34 & 27.515 & 4.965 & 5.254 & -- & -- & \\ & & VGG-Small & 4.294 & 0.526 & 0.551 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{8} & ResNet18 & 16.653 & 5.035 & 6.003 & -- & -- & \\ & & ResNet34 & 27.680 & 6.445 & 6.953 & -- & -- & \\ & & VGG-Small & 3.996 & 0.735 & 0.712 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{16} & ResNet18 & 90.323 & 70.697 & 73.729 & -- & -- & \\ & & ResNet34 & 162.057 & 130.907 & 125.362 & -- & -- & \\ & & VGG-Small & 25.366 & 23.050 & 23.194 & -- & -- & \\ \midrule \multirow{16}{*}{Apple M1 Max} &\multirow{3}{*}{1} & ResNet18 & 46.053 & 8.653 & 8.486 & -- & -- & \\ & & ResNet34 & 91.861 & 12.593 & 13.039 & -- & -- & \\ & & VGG-Small & 14.285 & 1.454 & 1.488 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{2} & ResNet18 & 25.039 & 5.450 & 5.361
& -- & -- & \\ & & ResNet34 & 51.860 & 7.579 & 8.925 & -- & -- & \\ & & VGG-Small & 7.657 & 0.855 & 0.896 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{4} & ResNet18 & 14.708 & 3.625 & 3.888 & -- & -- & \\ & & ResNet34 & 27.933 & 5.266 & 6.021 & -- & -- & \\ & & VGG-Small & 4.292 & 0.576 & 0.620 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{8} & ResNet18 & 10.660 & 3.718 & 4.510 & -- & -- & \\ & & ResNet34 & 18.988 & 4.745 & 5.457 & -- & -- & \\ & & VGG-Small & 3.432 & 0.560 & 0.629 & -- & -- & \\ \cmidrule{2-8} &\multirow{3}{*}{16} & ResNet18 & 60.500 & 47.727 & 53.900 & -- & -- & \\ & & ResNet34 & 120.449 & 91.464 & 96.356 & -- & -- & \\ & & VGG-Small & 21.354 & 13.868 & 15.311 & -- & -- & \\ \bottomrule \end{tabular} \end{table} \begin{table}[!t] \centering \renewcommand\arraystretch{3} \caption{Accuracy and efficiency comparison among multi-bit quantization (2\&8-bits), pruning, and binarization.} \vspace{-0.1in} \label{app:tab:other_tech} \resizebox{\linewidth}{!} {\begin{tabular}{lllll} \toprule ~ & \multicolumn{2}{c}{Accuracy} & \multicolumn{2}{c}{Efficiency} \\ ~ & CIFAR10-Res18 & CIFAR10-Res20 & FLOPs (Res20, G) & Param (Res20, K) \\ \midrule FP32 & 94.82 & 91.99 & 13.61 & 11690 \\ \midrule Binary (overall) & 91.08 (-3.74) & 85.74 (-6.25) & 1.105 (12.32$\times$) & 884 (13.22$\times$) \\ \midrule DoReFa-INT2~\citep{Dorefa-Net} & 92.71 & 87.56 & 1.686 & 1681 \\ PACT-INT2~\citep{PACT} & 92.98 & 88.12 & 1.686 & 1681 \\ LSQ-INT2~\citep{esser2019learned} & 93.11 & 89.26 & 1.686 & 1681 \\ INT2 (overall) & 92.93 (-1.89) & 88.31 (-3.68) & 1.686 (8.07$\times$) & 1681 (6.95$\times$) \\ \midrule DoReFa-INT8~\citep{Dorefa-Net} & 94.79 & 91.83 & 7.278 & 4067 \\ PACT-INT8~\citep{PACT} & 94.80 & 91.87 & 7.278 & 4067 \\ LSQ-INT8~\citep{esser2019learned} & 94.78 & 91.95 & 7.278 & 4067 \\ INT8 (overall) & 94.79 (-0.03) & 91.88 (-0.11) & 7.278 (1.87$\times$) & 4067 (2.87$\times$) \\ \midrule \citep{li2016pruning} & 94.47 & 91.32 & 9.527 & 9936 \\ ResRep~\citep{ding2021resrep}
& 94.53 & 91.76 & 6.805 & 7247 \\ Pruning (overall) & 94.50 (-0.32) & 91.54 (-0.45) & 8.166 (1.67$\times$) & 8591 (1.36$\times$) \\ \bottomrule \end{tabular}} \end{table} \subsection{Comparison Results against Other Compression Technologies} To highlight the characteristics of network binarization, we further evaluated representative multi-bit quantization algorithms~\citep{Dorefa-Net,PACT,esser2019learned} (INT2 and INT8) and pruning algorithms~\citep{li2016pruning,ding2021resrep}, and compared their accuracy and efficiency metrics with those of network binarization (Table~\ref{app:tab:other_tech}). Overall, the results confirm the intuitive conclusion that different compression approaches trade accuracy for efficiency. The ultra-low bit-width of network binarization brings acceleration and compression beyond multi-bit quantization and pruning: binarization achieves a 12.32$\times$ FLOPs saving, whereas INT8 quantization achieves only 1.87$\times$. However, binarization also introduces the largest accuracy drop among all compression approaches (\textit{e.g.}, 85.74 for binary CIFAR10-Res20 vs. 91.54 for pruning). These results show that network binarization pursues the most radical efficiency improvement among existing compression approaches and is oriented towards deployment on edge devices, which is consistent with the conclusions in BiBench.
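As a quick arithmetic check, the saving factors in Table~\ref{app:tab:other_tech} are simply the FP32 cost divided by the compressed cost; a minimal sketch with the values copied from the table:

```python
# Sanity check of the saving factors quoted in Table app:tab:other_tech
# (ResNet20 on CIFAR-10); all input numbers are copied from the table.
fp32_flops_g, fp32_params_k = 13.61, 11690  # FP32 baseline: GFLOPs, params (K)

methods = {
    "Binary":  (1.105, 884),
    "INT2":    (1.686, 1681),
    "INT8":    (7.278, 4067),
    "Pruning": (8.166, 8591),
}

for name, (flops_g, params_k) in methods.items():
    print(f"{name:8s} FLOPs saving {fp32_flops_g / flops_g:5.2f}x, "
          f"param saving {fp32_params_k / params_k:5.2f}x")
```

Running it reproduces the bracketed factors in the table (12.32$\times$/13.22$\times$ for binarization, 1.87$\times$/2.87$\times$ for INT8, and so on).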
\section{Introduction} It is indeed not a novel idea to suggest that all the distant objects may be affected by the gravitational lensing of the matter clumps between the sources and the observer. Three decades ago, Barnothy and Barnothy (1968) proposed that all the quasars were nothing but the gravitationally magnified images of Seyfert galactic nuclei. Press and Gunn (1973) showed that the probability of the occurrence of gravitational lensing in an $\Omega=1$ universe is nearly unity. Unlike the previous speculations, which apparently lacked convincing observational and theoretical support, the current argument is based on the numerous and unprecedented deep galaxy surveys which have revealed a considerably large population of faint galaxies (Metcalfe et al. 1996, and references therein). Using the surface number density of faint galaxies down to $R=26$, $1.93\times10^{5}/\Box^{\circ}$, Fried (1997) derived the projected mean distance between galaxies to be $8^{\prime\prime}.2$, which led him to the conclusion that all the high redshift ($z>1$) objects are moderately magnified by a factor of 1.1--1.5 due to gravitational lensing by the intervening galaxies. Indeed, this was a natural and plausible consequence, provided that all the faint galaxies were at $z\approx0.5$ and had a mean velocity dispersion $\sigma\approx200$--$300$ km s$^{-1}$. Nonetheless, spectroscopic redshifts have not been available for most of the faint galaxies to date. That is, we do not yet know where these faint galaxies are. For instance, the faint blue galaxies might be star-forming galaxies at a moderate redshift of $z\sim0.4$ (e.g. Broadhurst et al. 1992) or at high redshifts of $z\sim2$ (Metcalfe et al. 1996). While the dispute regarding the merging rate has existed for several years, it is generally agreed that galaxy mergers may play an important role in the formation and evolution of galaxies.
At least, the merging model provides a good fit to the faint galaxy number counts. It is particularly noted that merging significantly alters the redshift and velocity dispersion distributions of galaxies. What is the optical depth to gravitational lensing for a distant source if the redshift and velocity dispersion distributions of the faint galaxies predicted by galaxy merging are employed? Can the sky be fully covered by the lensing cross-sections of galaxies if faint galaxies are neither peaked at $z\sim0.5$ nor distributed randomly in redshift space? We would like to answer these questions by modeling the galaxy matter distribution as the simplest singular isothermal sphere and the galaxy evolution as merging. Rix et al. (1994) and Mao \& Kochanek (1994) have presented a sophisticated treatment of how galaxy mergers affect the various aspects of statistical lensing. Here we focus on the specific issue of the lensing covering of galaxies over the sky. \section{Galaxy lenses as a result of merging} Following the idea of Broadhurst et al. (1992), we assume that galaxy merging only increases the galaxy number with increasing lookback time, whilst maintaining the proportion of different types (E, S0, S) of galaxies, their respective K-corrections and luminosity function shapes. Under these hypotheses, the present-day galaxy luminosity function can be written as \begin{equation} \phi_i(L_0,0)dL=\phi^*(L_0/L_i^*)^{-\alpha}\exp(-L_0/L_i^*)d(L_0/L_i^*), \end{equation} where $i$ indicates the $i$-th morphological type of galaxies: $i$=(E, S0, S). The luminosity function at redshift $z$ in the merging model is thus \begin{eqnarray} \phi_i(L,z)=\phi_i(L_0,0)f(z),\\ f(z)=\exp\{-Q[(1+z)^{-\beta}-1]/\beta\}. \end{eqnarray} Here $f(z)$ represents the time-dependence of the evolution of the luminosity function, $Q$ is the merging rate and $\beta$ is the ratio of the Hubble time $H_0^{-1}$ to the age of the universe.
The galaxy luminosity $L$ at $z$ is related to both the merging rate and the star formation history. For a matter dominated flat universe of $\Omega_0=1$, $\beta=1.5$, while matching the galaxy number counts gives roughly $Q\approx4$. This scenario of galaxy merging can account for both the redshift distribution and the number counts of galaxies at optical and near-infrared wavelengths (Broadhurst et al. 1992). If we further model the galactic halo by an isothermal sphere which is characterized solely by its velocity dispersion $\sigma$, $\sigma$ at $z$ will be reduced by a factor of $f(z)^{\nu}$ with respect to its present-day value $\sigma_0$, since the galaxy mass as a result of merging would decrease with lookback time. In particular, $\nu$ is close to $1/4$ (Rix et al. 1994). The surface number density of faint galaxies to $R=26$ obtained by Fried (1997) from the deep observations of the fields around three quasars is $1.93\times10^{5}/\Box^{\circ}$, in good agreement with the previous surveys (e.g. Metcalfe et al. 1996). This yields a mean alignment distance of $4^{\prime\prime}.1$ between the line-of-sight and the faint galaxies, i.e. $7.3$ $h^{-1}$ kpc in linear size if the galaxy is at $z=0.5$, where $h$ is the Hubble constant in units of $100$ km/s/Mpc. Indeed, assuming that the faint galaxies seen at $R=26$ have a mean velocity dispersion of 200 km/s and are located at $z\sim0.5$, we can easily estimate that any background sources at $z\sim1$ will be gravitationally magnified by a factor of $\mu>1.1$. So, Fried (1997) argued that it is a purely observational fact that the distant objects must be lensed by foreground galaxies.
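To illustrate how strongly this evolution operates, a minimal sketch (assuming the fiducial values $Q=4$, $\beta=1.5$, $\nu=1/4$ quoted above) evaluates $f(z)$ and the corresponding velocity-dispersion reduction at $z=0.5$:

```python
import math

# Fiducial merging-model parameters quoted in the text.
Q, BETA, NU = 4.0, 1.5, 0.25

def f(z):
    """Number-density enhancement factor of the merging model, eq.(3)."""
    return math.exp(-Q * ((1.0 + z) ** (-BETA) - 1.0) / BETA)

z = 0.5
print(f"f({z}) = {f(z):.2f} (boost in galaxy number)")
print(f"velocity-dispersion reduction f^nu = {f(z) ** NU:.2f}")
# The per-lens cross-section scales as sigma^4, i.e. as f^(-4*nu); with
# nu = 1/4 the number boost f cancels it exactly: f^(1 - 4*nu) = f^0 = 1.
```

The comment at the end anticipates the cancellation that makes the total optical depth insensitive to merging.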
Using the empirical relation between the luminosity $L_0$ and central velocity dispersion $\sigma_0$ of local galaxies, $L_0/L_i^*=(\sigma_0/\sigma_i^*)^{g_i}$ with $\sigma^*_i=(225,206,144)$ km/s and $g_i=(4,4,2.6)$ for $i=(E,S0,S)$ galaxies (see Fukugita \& Turner 1991), we can compute from eq.(1) the morphological composition \{$\gamma_i$\} of galaxies by requiring $\sigma_0=200$ km/s. It turns out that $\{E:S0:S\}=(62,37,1)$, i.e., the galaxies with $\sigma_0>200$ km/s following the Schechter luminosity function eq.(1) are mainly composed of the E/S0 populations. As numerous surveys have shown that spirals constitute the majority of galaxies in the universe ($\sim70\%$), the oversimplified assumption of Fried (1997) of a velocity dispersion of $200 - 300$ km/s for all the faint galaxies overestimates their contributions to gravitational lensing. Furthermore, the velocity dispersion of distant galaxies becomes smaller relative to that of local galaxies as a result of galaxy merging, which also leads to a decrease of the lensing magnification. As a consequence, if the faint galaxies observed by Fried (1997) are $L^*$ spirals at $z\approx0.5$, the lensing magnification of a background source at $z=1$ would reduce to $\mu\approx1.03$. \section{Lensing cross-sections} We now estimate the total lensing cross-sections of galaxies with redshifts ranging from 0 to $z_s$ for the distant sources like quasars. For simplicity, we still employ the singular isothermal sphere for the galaxy matter distribution and the evolutionary scenario of galaxy merging. The lensing cross-section for magnification greater than $\mu$ by a single galaxy at $z_d$ is simply $\pi\theta_E^2/(\mu-1)^2$, where $\theta_E=4\pi[\sigma(z_d)^2/c^2](D_dD_{ds}/D_s)$ is the Einstein radius with $D_{d}$, $D_{ds}$ and $D_s$ being the angular diameter distances to the galaxy, to the distant source and from the galaxy to the source, respectively.
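To make the single-lens numbers concrete, a minimal sketch (assuming an Einstein--de Sitter, $\Omega_0=1$, universe for the angular diameter distances) evaluates $\theta_E$ for Fried's fiducial lens, a $\sigma=200$ km/s galaxy at $z_d=0.5$ in front of a source at $z_s=1$:

```python
import math

C_KM_S = 2.998e5  # speed of light [km/s]

def d_ang(z1, z2):
    """Angular diameter distance from z1 to z2 in an Einstein-de Sitter
    (Omega_0 = 1) universe, in units of c/H_0."""
    return (2.0 / (1.0 + z2)) * (1.0 / math.sqrt(1.0 + z1)
                                 - 1.0 / math.sqrt(1.0 + z2))

def einstein_radius_arcsec(sigma_km_s, z_d, z_s):
    """SIS Einstein radius theta_E = 4*pi*(sigma/c)^2 * D_ds/D_s."""
    theta_rad = (4.0 * math.pi * (sigma_km_s / C_KM_S) ** 2
                 * d_ang(z_d, z_s) / d_ang(0.0, z_s))
    return math.degrees(theta_rad) * 3600.0

# Fried's fiducial lens: sigma = 200 km/s at z_d = 0.5, source at z_s = 1.
theta_e = einstein_radius_arcsec(200.0, 0.5, 1.0)
# In the single-image regime mu = 1 + theta_E/beta, so the cross-section
# pi*theta_E^2/(mu-1)^2 corresponds to beta < theta_E/(mu-1), i.e.
# beta < 10*theta_E for mu > 1.1.
print(f"theta_E = {theta_e:.2f} arcsec; mu > 1.1 within {10 * theta_e:.1f} arcsec")
```

This gives $\theta_E\approx0.43\arcsec$, so $\mu>1.1$ within roughly $4.3\arcsec$ of the lens, comparable to the $4^{\prime\prime}.1$ mean alignment distance quoted above, consistent with Fried's estimate.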
The total lensing cross-section by all the galaxies is \begin{equation} p(z_s,>\mu)=F\;T(z_s)\;\frac{1}{(\mu-1)^2}, \end{equation} in which \begin{equation} F\equiv 16\pi^3\frac{\phi^*}{cH_0^3}\displaystyle\sum_i \gamma_i{{\sigma_i}^*}^4 \Gamma(-\alpha+4/g_i+1), \end{equation} and \begin{equation} T(z_s)\equiv \int_0^{z_s}(1+z_d)^3 \left(\frac{\tilde{D}_d\tilde{D}_{ds}}{\tilde{D}_s}\right)^2 f(z_d)^{1-4\nu}d\tilde{r}_{prop}, \end{equation} where the symbols with a tilde are the corresponding quantities in units of $c/H_0$, and $d\tilde{r}_{prop}$ is the proper distance within $dz$ of $z$. Except for the factor of $1/(\mu-1)^2$, eq.(4) is identical to eq.(6) of Rix et al. (1994) for $\Omega_0=1$, in which they concluded that the total optical depth to multiple images is quite insensitive to merging. This can be easily shown in terms of eq.(4) by noticing that $\nu\approx1/4$. Taking $\nu=1/4$ and the galaxy morphological composition $\{\gamma_i\}= (12\%,19\%,69\%)$ (Postman \& Geller 1984) and utilizing the numerical result of $F$ found by Fukugita \& Turner (1991) and the result of $T$ found by Turner et al. (1984), we have \begin{equation} p(z_s,>\mu)=0.047\times\frac{1}{(\mu-1)^2}\frac{4}{15} \frac{[(1+z_s)^{1/2}-1]^3}{(1+z_s)^{3/2}}. \end{equation} A straightforward computation yields $p(1,>1.1)=3\%$ and $p(2,>1.1)=10\%$, i.e., the total lensing cross-sections of $\mu>1.1$ by all the galaxies even to $z_s=2$ cannot cover the whole sky at all! It is important to note that this conclusion is independent of the limiting magnitudes of the surveys which may reveal a remarkably high surface density of galaxies. Also, our computation has probably overestimated the lensing cross-sections of galaxies in the sense that a biasing factor of $\sqrt{1.5}$ between the velocity dispersion of stars and of dark matter particles is employed by Fukugita \& Turner (1991) in obtaining $F$ for E/S0 galaxies. If such a correction of velocity biasing is unnecessary (e.g.
Kochanek 1994), the total lensing cross-section of galaxies $p(z_s,>1.1)$ reduces to $(2\%,6\%)$ for $z_s=(1,2)$. \section{Conclusion} The merging model provides an increasing galaxy number and a decreasing galaxy mass with lookback time, which can relatively easily account for the observed high surface number density and the redshift distribution of galaxies in the deep surveys (Broadhurst et al. 1992). At least, it works equally well as other models (see, for example, Yoshii \& Sato 1992; Metcalfe et al. 1996). In the scenario of galaxy mergers, the gravitational lensing of distant sources (e.g. quasars) by galaxies is affected by the following two factors: (1) there will be more galaxies acting as lenses as one goes back in time; (2) the galaxy masses, and equivalently the galaxy velocity dispersions, will decrease with lookback time. The first factor alters significantly the galaxy redshift distributions and enhances the lensing amplitude, while the second one reduces the lensing cross-sections. A combination of these two factors gives rise to an optical depth to gravitational lensing that is roughly independent of the galaxy mergers [eqs.(4)-(6); see also Rix et al. 1994; Mao \& Kochanek 1994]. As a consequence, despite the fact that a considerably high surface number density of faint galaxies is detected in the deep surveys, the total lensing cross-sections of galaxies towards a distant source are still rather small, and can never fully cover our sky up to $z=2$. The claim that all the high redshift ($z>1$) objects are moderately magnified by galaxies (Fried 1997) arises from the oversimplified assumptions about the galaxy redshifts and velocity dispersions ($200 - 300$ km s$^{-1}$) at high redshifts. We find that the maximum lensing covering by galaxies to $z=2$ is only $10\%$, and this number is likely to reduce to $6\%$ for a more realistic galaxy distribution.
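The covering fractions quoted above follow directly from eq.(7); a minimal numerical check:

```python
def covering_fraction(z_s, mu):
    """Evaluate eq.(7): total lensing cross-section p(z_s, >mu) for
    Omega_0 = 1, nu = 1/4 and the adopted galaxy mix."""
    t = (4.0 / 15.0) * ((1.0 + z_s) ** 0.5 - 1.0) ** 3 / (1.0 + z_s) ** 1.5
    return 0.047 * t / (mu - 1.0) ** 2

for z_s in (1.0, 2.0):
    print(f"p({z_s:.0f}, >1.1) = {covering_fraction(z_s, 1.1):.1%}")
# -> p(1, >1.1) ~ 3.1% and p(2, >1.1) ~ 9.5%, i.e. the quoted 3% and 10%.
```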
Other more sophisticated models of galaxy evolution should be employed in order to give a better estimate of the lensing covering by the faint galaxies over the sky. \begin{acknowledgements} We gratefully acknowledge an anonymous referee for helpful criticisms. This work was supported by the National Science Foundation of China. \end{acknowledgements}
\section{Introduction} \label{s:int} The inhibition of convection by the magnetic field in sunspots is an obvious reason to expect that the temperature and magnetic field are coupled. The magnetic field magnitude $B$ and continuum intensity $I_{\rm c}$, from which the brightness temperature $T_{\rm b}$ can be derived, were measured at different positions in one or more sunspots by several authors. \cite{Abdus71} first made a scatter plot of $I_{\rm c}$ versus $B$, based on observations of the spectral line Fe {\sc i} 630.25 nm. \cite{GurHo81} claimed to obtain a linear relation between $I_{\rm c}$ and $B$ but did not attempt to interpret it. \citeauthor{Marti90} (\citeyear{Marti90}, \citeyear{Marti93}) suggested studying the relation $T_{\rm b} \sim B^2$ relevant to the horizontal pressure balance in sunspots. \cite{Kopp92} for the first time utilised the infrared line Fe {\sc i} 1564.85 nm with Land\'e~factor $g = 3$ to study this relation and found that it was linear only in the umbra but not in the whole sunspot. The line Fe {\sc i} 1564.85 nm seems to be very suitable for studying the coupling of $I_{\rm c}$ or $T_{\rm b}$ with $B$. The line is only moderately sensitive to the temperature and it is formed in the low photosphere at a height of about $h = 110$ km above $\tau_{500} = 1$ \citep{Bruls91}, while a nearby continuum is formed at $h = -30$ km \citep{Verna81}. It splits completely already at $B = 500$ G, and larger field-strength magnitudes are simply proportional to the wavelength difference between the peaks of its Zeeman $\sigma$-components \citep{Solan92}. In spite of the fact that the line suffers from some blends in the umbra, which, together with broadened wings at low temperatures, reduces the accuracy of the measured umbral field strength \citep{Rueed95}, it can be used to measure $B$ directly without a need for line-inversion methods.
Another advantage is that in the infrared, both the scattered light caused by the telescope and instrument and the image degradation by seeing are lower than in the visible region. In the works that followed, the infrared line Fe {\sc i} 1564.85 nm was used to derive magnetic information either from direct measurements of the $I$ and/or $V$ Stokes profiles \citep{Livin02,Penn03,Reza12} or by means of line-inversion methods \citep{Solan93,Mathe04,Jaegg12}. Other authors \citep{Balth93,Stanc97,Leona08} utilised spectral lines in the visible region and line-inversion methods. Scatter plots of $B$ versus $I_{\rm c}$ or $T_{\rm b}$ derived from measurements including the umbra together with the penumbra \citep{Kopp92,Solan93,Balth93,Stanc97,Mathe04} have a typical non-linear shape: an umbral part with increasing $B$ at the lowest intensities (temperatures), an umbra-penumbra transition with a relatively small change of $B$ in a wide intensity range, and a penumbral part characterised by a substantial decrease of $B$. A recent survey of numerous spots made by \cite{Jaegg12} presents a similar picture. The authors interpret the non-linear $B - T_{\rm b}$ relation in the umbra as a consequence of molecular (H$_2$) formation, reducing the number density of particles and the gas pressure in the coolest parts, so that a stronger magnetic field is necessary to maintain the horizontal pressure balance. We extend the previous studies by high spatial resolution full-Stokes observations that make it possible for the first time to study the temperature -- magnetic field relation separately in the umbra, light bridges, and the penumbra. We also take advantage of existing radiative MHD numerical simulations of sunspots (\citealp{Rempe12}, in particular) and compare our results with synthetic data.
\section{Observations and Data Analysis} \label{s:oda} \subsection{Observations} \label{ss:obs} The data were acquired on 11 May 2015, from 09:30 to 11:30 UT with the {\it GREGOR Infrared Spectrograph} (GRIS, \citealp{Colla12}) attached to the 1.5 m solar telescope {\it GREGOR} \citep{Schmi12}. The telescope is equipped with an adaptive optics system \citep{Berke12}, which corrects aberrations due to the seeing and telescope, and an image rotator, compensating the solar image rotation caused by the {\it GREGOR's} alt-azimuth mount. GRIS full-Stokes scans in the spectral region centred at the Fe {\sc i} 1564.85 nm line were obtained by moving the spectrograph slit in 300--400 steps 0.13\arcsec wide, which is also the resolution element of GRIS along the slit. The wavelength sampling in this region was 3.95 pm, the exposure time 100 ms, and each polarization state was accumulated three times to increase the signal-to-noise ratio. The polarimetric calibration was made using the telescope model and the {\it GREGOR} polarimetric calibration unit \citep{Hofma12}. The continuum intensity was measured in a line-free region at $\lambda = 1564.23$ nm. Our target was the large and complex active region NOAA 12339, which was composed of four large sunspots (two leading and two following) and many smaller spots. It was located near the disc centre and started to decay on the day of observations. We selected the three best scans for further analysis: 007 (09:31--09:47 UT, leading spot 1), 016 (10:51--11:03 UT, leading spot 2), and 018 (11:10--11:24 UT, following spot 1). The heliocentric angle of all the spots was equal to 15\degr ($\mu = 0.97$) at the times when their scans were acquired. The field of view of the scanned maps varied from 61\arcsec $\times$~52\arcsec to 61\arcsec $\times$~39\arcsec according to the number of scanning steps.
During the observations, the adaptive optics worked throughout all scans, indicating seeing characterised by a Fried parameter of $r_0 = 10$--15~cm. Because the wavefront sensor worked at $\lambda = 500$ nm and $r_0 \sim \lambda^{6/5}$, the Fried parameter at $\lambda = 1565$ nm was approximately 40--60~cm. The spatial resolution of 0.5\arcsec was estimated from the smallest features seen in the continuum map. \subsection{Scattered Light} \label{ss:sca} The observed intensity suffers from parasitic light scattered in the atmosphere, the optical system of the telescope, and the spectrograph. The influence of the atmosphere in the near infrared is small and its major part is compensated by the adaptive optics. The spectral scattered light ({\it i.e.} in the $\lambda$-direction) introduced by the spectrograph mainly influences line depths in Stokes $I$ and it can be neglected for $Q,\, U,\, V$ because their continua are at zero. Generally, the effect of spectral scattered light is not important for measurements of line splitting, shifts, and $I$-continuum intensity, and we can neglect it in our analysis. The most important contribution is the spatial scattered light in the telescope and spectrograph. Its 2D point-spread function (PSF) was determined for {\it GREGOR} and GRIS under seeing conditions similar to ours by \cite{Borre16}. It was composed of two Gaussian functions, which described the scattering from narrow and wide angles: \begin{equation} \label{e:psf} {\rm PSF}(x,y) = p_{\rm n}G_{\rm n}(x,y,\sigma_{\rm n}) + p_{\rm w}G_{\rm w}(x,y,\sigma_{\rm w}) \,.
\end{equation} Parameters of the wide-angle term $p_{\rm w} = 0.2$ and $\sigma_{\rm w} = 20\arcsec$ were derived under the assumption that when the magnetic-field vector {\textit {\textbf B}} is parallel with the line of sight (hereafter LOS), which occurs at some places inside the umbra, the observed central component of the full-split line Fe~{\sc i} 1564.85~nm is formed completely by scattering of light from the surrounding penumbra and granulation. This term corresponds to scattered light in the telescope's optics. Parameters of the narrow-angle term $p_{\rm n} = 0.8$ and $\sigma_{\rm n} = 0.18\arcsec$ were derived from a spectral scan of a pinhole array inserted in the science focus of the telescope. This term corresponds to the scatter inside the spectrograph. We deconvolved the observed $I$-continuum maps with this PSF in the Fourier domain, using the Wiener optimum restoration filter $F$ ({\it e.g.} \citealp{Pratt78}) in the form \begin{equation} \label{e:wie} F(\omega) = \frac{C+1}{C} \frac{{\rm MTF}(\omega)}{{\rm MTF}^2(\omega) + 1/C} \,, \end{equation} where MTF (modulation transfer function) is the Fourier image of the PSF, $\omega$ is the spatial frequency in a radially symmetric MTF, and $C = 30$ is a parameter depending on the signal-to-noise ratio. An increase of this parameter raises the contribution of high spatial frequencies in the restoration, including noise. Its value was found empirically following the rules that i) no part of the real information was suppressed and ii) the noise was not enhanced. This way we obtained the deconvolved maps of $I_{\rm c}$ for all scans. \subsection{Brightness Temperature} \label{ss:tem} The brightness temperature $T_{\rm b}$ can be easily obtained from the deconvolved $I_{\rm c}$ maps normalised to the mean continuum intensity of the undisturbed photospheric granulation, using the Planck function \citep{Solan93,Mathe04}. Following these authors, we adopt a reference quiet-Sun temperature of 7058 K.
This value corresponds to the temperature at $\tau_{1600} = 1$ in the photospheric reference model of \cite{Maltb86}. Due to the opacity minimum at 1600 nm, this optical depth refers to deeper and hotter layers than $\tau_{500} = 1$. The obtained function $T_{\rm b}(I_{\rm c})$ for the continuum at $\lambda = 1564.2$ nm is approximately linear in the $I_{\rm c}$ range from 0.5 to 1.1. \subsection{Magnetic Field Measurement} \label{ss:mag} The Fe {\sc i} 1564.85 nm normal Zeeman triplet shows a complete split for $B > 500$~G \citep{Solan92}. We can use the well-known formula \begin{equation} \label{e:bee} B = \frac{\Delta \lambda}{4.67\times 10^{-5} \, g\lambda_0^2} \ \ {\rm [G, cm]} \end{equation} to calculate the magnetic field strength magnitude from the displacement $\Delta \lambda$ of $\sigma$-components from the central wavelength $\lambda_0$. To reduce the effect of noise in the spectra as much as possible, we measure the line split $s = 2 \Delta \lambda$ using a 7-point parabolic fit to the extrema in the Stokes profiles $\overline{QU} = \sqrt{Q^2 + U^2}$ and/or $V$ as follows: We calculate the total circular and linear polarizations $P_{\rm cir} = \int |V(\lambda)| \, {\rm d}\lambda$ and $P_{\rm lin} = \int \overline{QU}(\lambda) \, {\rm d}\lambda$ in a 0.35~nm wide region around the central wavelength. Then the splits $s_{V}$ and/or $s_{\,\overline{QU}}$ are measured from the $V$ and/or $\overline{QU}$ profiles only when $P_{\rm cir}$ and/or $P_{\rm lin}$ are higher than 0.5, respectively. The total polarization values serve as weights in the final calculation of the split $s$, so that \begin{equation} \label{e:spl} \Delta \lambda = s/2 = \frac{P_{\rm cir} \, s_{V} + {P_{\rm lin} \, s_{\,\overline{QU}}}}{2(P_{\rm cir} + P_{\rm lin})} \,. \end{equation} One resolution element in $\Delta \lambda$ corresponds to 58 G and it is further refined by the parabolic fit to the extrema. 
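The conversion from measured splitting to field strength in eq.(3), and the quoted 58 G per sampling step, can be checked with a short sketch (all constants are taken from the text):

```python
# Field strength from the complete Zeeman splitting of Fe I 1564.85 nm,
# eq.(3): B = delta_lambda / (4.67e-5 * g * lambda_0^2)  [G, cm].
G_LANDE = 3.0              # Lande factor of the line
LAMBDA0_CM = 1564.85e-7    # central wavelength in cm

def field_strength_gauss(delta_lambda_cm):
    return delta_lambda_cm / (4.67e-5 * G_LANDE * LAMBDA0_CM ** 2)

# The split s = 2*delta_lambda is measured with the 3.95 pm wavelength
# sampling, so one sampling step changes delta_lambda by half a step:
step_cm = 3.95e-10  # 3.95 pm expressed in cm
print(f"one sampling step ~ {field_strength_gauss(step_cm / 2.0):.0f} G")
```

This reproduces the $\approx$58 G per resolution element stated above, before the sub-pixel refinement by the parabolic fit.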
The magnetic field inclination $\gamma$ related to the LOS direction can be determined directly as $\gamma = \arctan(\sqrt{Q_{\rm max}^2 + U_{\rm max}^2}/V_{\rm max})$. According to \cite{Solan92}, the ratio $\sqrt{Q_{\rm max}^2 + U_{\rm max}^2}/V_{\rm max}$ for the line Fe {\sc i} 1564.85 nm depends only on $\gamma$ and not on $B$ for $B > 1000$ G outside the umbra and $B > 2500$ G in the umbra. We adopted a conservative condition of $B > 1500$ G for the $\gamma$ calculations, being aware of a possible underestimate of $\gamma$ in the umbra. The fact that the $Q, \, U, \, V$ profiles are not corrected for scattered light is another source of uncertainty, leading to a possible overestimate of $\gamma$. Thus we have to consider the LOS inclination values only as a rough estimate, particularly in the umbra. \section{Results} \label{s:res} \begin{figure} \centerline{\includegraphics[width=1.0\textwidth]{Fig1.eps}} \caption{Continuum intensity maps, corrected for scattered light, obtained from the scans 007, 016, and 018. Contours delineate three types of masks that distinguish the umbra (green), light bridges (red), and the penumbra (black). The purple line is a contour of $B = 1400$ G, considered as an approximate boundary between the inner and outer penumbra. The vertical size of the images is 61\arcsec.} \label{f:masks} \end{figure} Regions of interest, which are the umbra, light bridges, and penumbra, were defined by binary masks that were drawn by hand using the $I_{\rm c}$ maps corrected for scattered light (Figure~\ref{f:masks}). These masks also served to exclude problematic areas caused by scanning errors and dirt on the spectrograph slit. Scatter plots of $B$ versus $I_{\rm c}$ and $T_{\rm b}$ for all three spots and all regions of interest together are shown in Figure~\ref{f:biall}, left panel. The points, corresponding to different positions in the fields of view, are green in umbrae, red in light bridges, and black in penumbrae.
Scatter plots made separately for individual sunspots (not shown) have a very similar shape. Solid lines represent mean values of $B$ at points with $I_{\rm c}$ falling into 0.01 wide bins of the continuum intensity histograms. A bin must contain at least 200 points for the mean value to be calculated. We also computed standard deviations $\sigma$ characterising the scatter of $B$ of individual points in each bin (dashed lines). Colours of the lines are black for the umbra, yellow for light bridges, and purple for the penumbra. Because the ranges of the umbra, light bridges, and penumbra partially overlap, contours of the relative densities of points in the scatter plots are depicted for better clarity in the right panel of the figure. The relative densities are normalised to their maxima. Compared to the previous studies, our results have a substantially larger scatter due to resolved fine structures; our spatial resolution is at least twice as high as that of \cite{Mathe04}. In the umbra, the magnetic field strength varies between 1600 and 3100 G with a typical scatter of $\pm 150$ G. The normalised continuum intensity (temperature) ranges from 0.53 (5100 K) to 0.85 (6400 K) and the typical scatter is $\pm 200$~K, caused by numerous umbral dots. The $B - I_{\rm c}$ relation is definitely non-linear and its curvature even increases in the corresponding $B^2 - T_{\rm b}$ plot (not shown). A possible reason for the deviation from linearity was discussed by \cite{Jaegg12}. Our results are consistent with those of \cite{Kopp92}, who made measurements at various positions inside six sunspots. \cite{Livin02}, \cite{Reza12}, and \cite{Watson14} concentrated on the darkest points in multiple umbrae. Our results are in good agreement with the first work, and $B$ measured by us in the whole umbra is weaker by 200--300~G compared to the latter two. A compilation of results published by different authors can be seen in Figure~7 of \cite{Penn03}.
Light bridges, with magnetic field strengths between 1300 and 2500~G and continuum intensities (temperatures) from 0.62 (5500 K) to 0.97 (6900 K), show a $B - I_{\rm c}$ relation very similar to that of the umbra, slightly shifted to higher continuum intensities and temperatures. The scatter of points is larger than in the umbra, $\pm 200$~G in $B$ and $\pm 250$~K in $T$. \begin{figure} \centerline{\includegraphics[width=1.0\textwidth]{Fig2.eps}} \caption{{\it Left:} Scatter plot of $B$ versus $I_{\rm c}$ and $T_{\rm b}$ in the umbra (green, 27826 points), light bridges (red, 9901 points), and the penumbra (black, 113218 points) for all three sunspots together. Solid lines show average values in the umbra (black), light bridges (yellow), and penumbra (purple), together with dashed lines of $\pm 1\sigma$, which characterise the scatter of individual points. {\it Right:} Density contours of the scatter plot. Density values are normalised to their maxima and contour levels are annotated in the plot. The horizontal dashed line marks $B = 1400$ G (cf. Figure~\ref{f:masks}).} \label{f:biall} \end{figure} The $B - I_{\rm c}$ relation in the penumbra is more complex. The density plot in Figure~\ref{f:biall} shows a dense cloud of points below $B = 1400$~G and an extension towards higher magnetic field strengths (1400--2300~G) and lower continuum intensities and temperatures (0.70--0.97, 5800--6900~K). The contour $B = 1400$~G plotted in Figure~\ref{f:masks} indicates that this value may be considered as an approximate boundary between the inner and outer penumbra. The cloud characterised by a weaker magnetic field (700--1400~G) and higher continuum intensity and temperature (0.83--1.01, 6400--7100~K) corresponds to the outer penumbra. The magnetic field strength rises quite steeply towards the inner penumbra, while the temperature at the continuum formation height diminishes only slightly.
For $I_{\rm c} < 0.83$ in the inner penumbra, the $B - I_{\rm c}$ relation is similar to that of the umbra and nearly identical with that of light bridges (Figure~\ref{f:biall}, left panel). The large scatter of intensities and temperatures is clearly due to the presence of bright and dark penumbral filaments. \begin{figure} \centerline{\includegraphics[width=1.0\textwidth]{Fig3.eps}} \caption{Scatter-plot densities of $B$ versus $I_{\rm c}$ and $T_{\rm b}$ ({\it left}) and $\gamma$ versus $I_{\rm c}$ and $T_{\rm b}$ ({\it right}), separately for the umbra ({\it top}), light bridges ({\it middle}), and the penumbra ({\it bottom}). The contours represent observed data for all three spots, while the grey-scale clouds depict in a linear scale the relative densities of scatter plots obtained from numerical simulations. The contour levels are as in Figure~\ref{f:biall} in the left column and 0.1--0.7, step 0.2 in the right column. Note that $\gamma$ was measured only in regions where $B > 1500$ G.} \label{f:psep} \end{figure} Figure~\ref{f:psep} shows separate density plots of the umbra, light bridges, and penumbra for all three sunspots together. The left column depicts density contours of the $B - I_{\rm c}(T_{\rm b})$ relation identical to those in Figure~\ref{f:biall} and the right column presents density contours of the magnetic field LOS inclination $\gamma$ versus $I_{\rm c}$. The plots include 27826 measured points in the umbra, 9901 in light bridges, and 113218 in the penumbra. Because $\gamma$ was measured only in locations with $B > 1500$~G, the number of points in the inclination plots was reduced to 9374 in light bridges and 31749 in the penumbra, so that the whole outer penumbra is missing. Note that the $\gamma - I_{\rm c}$ scatter-plot densities were obtained for a LOS reference angle of 15\degr from the normal.
The umbra has a large scatter of points, but the density plot shows an increase of $\gamma$ with $I_{\rm c}$, which is an effect of the increasing field inclination towards the edge of the umbra and in umbral dots \citep{Socas04}. The LOS inclination in light bridges is similar to that in the umbra. Most of the points are scattered between 0\degr and 50\degr and the average is around 20\degr. \cite{Jurca06} found a higher value of $\gamma = 40$\degr connected with a magnetic canopy above light bridges at $h \approx 200$~km above $\tau_{500} =1$. The line Fe~{\sc i} 1564.85 nm, however, is formed 100~km below this canopy, where the magnetic field vector can still be almost vertical. In the inner penumbra, the individual values are spread over 10--90\degr, similar to the results of \cite{Mathe04}. \section{Comparison with a Synthetic Sunspot} \label{s:syn} It is interesting to compare our observational results with analogous data derived from numerical simulations of a sunspot, which are free of instrumental effects, scattered light, and spectral line blends, and provide a much higher spatial resolution. We used the numerical model of a sunspot located at the disc centre \citep{Rempe12}, particularly {\tt slab\_12x8x12km\_ng}\footnote{See {\tt http://download.hao.ucar.edu/pub/rempel/sunspot\_models/Fine\_Structure/}.}, which simulates a 6144 km wide strip of the umbra, penumbra, and surrounding granulation with a horizontal sampling of $12 \times 12$~km. This data set was derived by M. Rempel from a grey radiative transfer sunspot simulation at originally $16\times 16\times 12$ km resolution in a $49.152\times 49.152\times 6.144$ Mm ($x, y, z$) domain, evolving for one hour from an initial state defined by other models with lower resolution. The top half of that domain was regridded to $12\times 12\times 8$ km resolution and evolved for another 15 minutes (10 minutes grey and the last 5 minutes non-grey).
A snapshot of this model was utilised by \cite{Borre14} to compute full-Stokes synthetic spectra in the 1.5 $\mu$m region, employing the synthesis module of the SIR code (Stokes Inversion based on Response functions; \citealp{Basil92}). These forward-modelled spectra were used for comparison with our observations. Splits of the synthetic $V$ and $\overline{QU}$ profiles of the Fe~{\sc i} 1564.85~nm line as well as the LOS inclination were measured in the same way as in the observations (Section~\ref{ss:mag}). The umbra and penumbra were distinguished by binary masks drawn by hand in the synthetic continuum map. Light bridges do not exist in the synthetic sunspot. Relations between the magnetic field strength/inclination and continuum intensity/temperature based on the synthetic data are displayed in Figure~\ref{f:psep} in the form of grey-scale density plots. Scales of the plots are linear, from zero to the maximum density. We can see from the figure that in the umbra, the synthetic $B$ is generally higher by approximately 1000~G, $I_{\rm c}$ is lower by 0.15, and the scatter of points is larger than in the observations. In the penumbra, the differences are smaller but still significant: 700~G, 0.08, and the scatter is substantially larger. Atomic and molecular blends may reduce the observed $B$ in the umbra, and light scattered in the Earth's atmosphere (we corrected only for the instrumental stray light) can raise the observed continuum intensity. Moreover, highly magnetic features obtained by numerical simulations in the umbra and penumbra increase the simulated $B$, but they are too small to be detected in observations with a resolution of 350~km. The sunspot simulation also has some photometric limitations. According to the notes accompanying Rempel's models, ``the luminosity of the quiet Sun surrounding the sunspot is not exactly solar'', being off by a few percent.
It is possible that the value used for the quiet-Sun intensity in the model is spuriously increased by bright magnetic elements in this highly resolved simulation, which reduces the normalised intensity. There is also a question of how accurate the intensities of this model are, since the model evolved for only one hour at this resolution and the non-grey radiative transfer was switched on only during the last five minutes. Since the spatial resolution of the simulations is much better than that of the observations, we suggest that the scatter of points increases with increasing spatial resolution. This is already seen when we compare our results with previous works of lower resolution. The simulations are fifteen times finer than our observations and the corresponding scatter is much larger. The $B - I_{\rm c}$ scatter can thus be considered an intrinsic attribute of sunspot fine structures. It is difficult to compare the observed LOS magnetic field inclination with the local-reference-frame inclination in the simulations. However, because the LOS angle is only 15\degr in our observations, a tentative comparison may be interesting. In the umbra, both clouds of points are quite similar when we disregard the differences in $I_{\rm c}$ caused by the scattered light in our observations and possible photometric inaccuracies of the simulations. The trend of increasing inclination with increasing $I_{\rm c}$ is common to both observed and synthetic data. In the penumbra, the points with $\gamma < 30$\degr are absent in the simulations. They probably appear in the observations due to the projection into the LOS system. We also have to keep in mind that the inclination was measured only in the inner penumbra.
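The suggested resolution dependence of the scatter can be illustrated by a toy experiment (purely synthetic data, not the simulation itself): block-averaging a map with small-scale fluctuations, which mimics observing at a coarser resolution, systematically shrinks the measured scatter around the large-scale trend.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "high-resolution" map: a smooth radial gradient plus small-scale noise.
n = 512
yy, xx = np.mgrid[0:n, 0:n]
rad = np.hypot(xx - n / 2, yy - n / 2) / n
smooth = 3000.0 - 1500.0 * rad                      # large-scale trend [G]
field = smooth + 200.0 * rng.standard_normal((n, n))  # plus fine structure

def block_average(img, k):
    """Average k x k pixel blocks, mimicking a k-times coarser resolution."""
    m = img.shape[0] // k
    return img[:m * k, :m * k].reshape(m, k, m, k).mean(axis=(1, 3))

# Residual scatter around the large-scale trend at decreasing resolution.
scatters = [np.std(block_average(field, k) - block_average(smooth, k))
            for k in (1, 4, 16)]
```

The residual scatter falls roughly as $1/k$ for uncorrelated fine structure, consistent with the qualitative argument above.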
\section{Discussion and Conclusions} \label{s:dis} We repeat the classical exercise of finding the temperature -- magnetic field relation in sunspots in the infrared region, using the simplest direct methods to measure the magnetic field strength and inclination. Our results are consistent with previous works, showing the typical shape of the $B - T_{\rm b}$ relation (see Section~\ref{s:int}). Thanks to the high spatial resolution provided by {\it GREGOR}, we are able to treat the umbra, penumbra, and light bridges separately. While the authors of previous works, {\it e.g.} \cite{Stanc97}, \cite{Mathe04}, and \cite{Jaegg12}, utilised an intensity threshold to separate the umbra and penumbra, we used manually drawn boundaries based on visible structures. Thanks to this morphological criterion we obtained overlaps in the properties of the umbra, penumbra, and light bridges. All these structures have a common overlap in the range $1700\, {\rm G}<B< 2100\, {\rm G}$ and \mbox{$5800\, {\rm K}<T_{\rm b}<6300\, {\rm K}$} (see Figure~\ref{f:biall}). In our results, this range corresponds to the ``umbra-penumbra transition'' in the typical $B - T_{\rm b}$ plot. Despite the large scatter caused by fluctuations of temperature and magnetic field strength in fine-scale structures, which is also typical for the synthetic sunspot, there are general trends in the $B - T_{\rm b}$ relation represented by the mean values plotted in Figure~\ref{f:biall}. These trends are very similar in the umbra, light bridges, and the inner penumbra in the range of the common overlap. This indicates that the interaction of the magnetic field with moving plasma, which is the cause of sunspot cooling, has the same character in the inner parts of spots, where the magnetic field is not strongly inclined to the normal. We find a tentative separation value $B = 1400$ G between the inner and outer penumbra.
The outer penumbra shows a generally weaker magnetic field strength compared to the inner penumbra at equal temperatures. These facts can be explained in terms of the ``interlocking comb'' magnetic structure of the penumbra (\citealp{ThoWe08}, Chapter 5.2, and references therein), where at least two magnetic-field systems are expected. The first one, which can be considered a continuation of the umbral magnetic field, has less inclined field lines that extend high into the atmosphere. Its magnetic field is stronger and its filamentary structure (spines) weakens in the outer penumbra. In the inner penumbra, it is related to dark filaments \citep{Borre11}. The second system, which is connected with the Evershed flow, has a weaker magnetic field with steeply inclined, nearly horizontal field lines and a limited vertical extent. The corresponding penumbral filaments are bright in the inner penumbra and dark in the outer one. The largest intensity contrast between the bright and dark penumbral filaments is seen in the inner penumbra close to the umbral border, where bright penumbral grains \citep{Mulle73} are formed at the tips of the bright filaments. When observed in the low photosphere (at $h \approx 100$ km), the umbra, light bridges, and inner penumbra are dominated by the first, strong, and less inclined magnetic field system. Intensity differences in the inner penumbra are large due to the high contrast of penumbral filaments. The outer penumbra, however, is dominated by the nearly horizontal magnetic field at this height, and the field strength is considerably weaker because i) it decreases with increasing distance from the umbra and ii) the second field system is generally weaker than the first one \citep{Langh05}. The contrast of penumbral filaments is lower than in the vicinity of the umbra, so that the intensity differences are smaller than in the inner penumbra.
The statistical analysis of the temperature -- magnetic field relation is complementary to the state-of-the-art methods of spectropolarimetric inversions and numerical simulations, which are currently used to study sunspots. It can serve to verify their results and even to reveal some new details. Our approach made it possible to compare the individual $B - T_{\rm b}$ relations obtained for different sunspot structures, and we find a strong similarity between the umbra, light bridges, and inner penumbra, while the outer penumbra with $B < 1400$ G shows a different behaviour. This can be explained by the interlocking comb magnetic structure. Some questions remain open: Do other sunspots with different sizes, brightnesses, and phases of evolution show a different $B - T_{\rm b}$ relationship? And does it vary with the solar cycle? \begin{acks} This work was supported by the grant 14-04338S of the Czech Science Foundation, the FP-7 Capacities Project No. 312495 SOLARNET, and the institutional support RVO:67985815 of the Czech Academy of Sciences. R.R. acknowledges financial support by the Spanish Ministry of Economy and Competitiveness through the project AYA2014-60476-P. We thank J.~M. Borrero for synthetic spectra computed in the frame of the international working group Extracting Information from Spectropolarimetric Observations: Comparison of Inversion Codes at the International Space Science Institute (ISSI) in Bern (Switzerland). We use data provided by M. Rempel at the National Center for Atmospheric Research (NCAR). The National Center for Atmospheric Research is sponsored by the National Science Foundation.
The 1.5-meter {\it GREGOR} solar telescope was built by a German consortium under the leadership of the Kiepenheuer Institute for Solar Physics in Freiburg with the Leibniz Institute for Astrophysics Potsdam, the Institute of Astrophysics G\"ottingen, and the Max Planck Institute for Solar System Research in G\"ottingen as partners, and with contributions by the Instituto de Astrof\'isica de Canarias and the Astronomical Institute of the Czech Academy of Sciences. We thank the referee for comments leading to a substantial improvement of the paper. \noindent {\bf Conflict of Interest:} The authors declare that they have no conflict of interest. \end{acks} \bibliographystyle{spr-mp-sola}
\section{Introduction} The application of stochastic geometry to the modeling and analysis of wireless networks has attracted a lot of attention in recent years. It has enabled a new framework, the \emph{transmission capacity} (TC) framework, which has led to many profound results on wireless networks (cf. \cite{weber10,ganti09}). The advantage of using a spatial model to describe the node positions rather than assuming a deterministic network topology is two-fold: First, such a probabilistic approach decouples the performance analysis from the \emph{actual} topology, thereby increasing the generality of results. Second, it provides powerful means for network optimization, especially for highly dynamic networks, where interference is (unpredictably) fast-varying. With few exceptions, the node positions are mostly modeled as a \emph{stationary} point process. Stationarity is a desirable property, allowing analytically tractable computations and, more importantly, representing a key requirement for applying the definition of TC. Even though the stationarity assumption has not really narrowed the range of obtainable insights, it entails some shortcomings for the analysis of wireless networks:\\ \textbf{Infinite networks:} Stationarity implies that the network is infinitely large, as opposed to real deployments with a finite number of nodes.\\ \textbf{No border effects:} Border effects are inherently neglected in infinite networks. However, border effects cause heterogeneity in the nodes' capabilities depending on their location, i.e., being dis-/connected, interference-/noise-limited, etc.\\ \textbf{Infinite interference for free-space path loss:} For stationary node distributions in the plane and a path loss exponent $\alpha=2$, the interference is infinite almost surely (a.s.) \cite{weber10}, resulting in a TC of zero.
More specifically, stationary models lose their accuracy as the path loss exponent decreases, due to the fact that infinitely many nodes contribute to the interference.\\ \textbf{Application of the TC:} As already mentioned, the TC applies only to stationary networks. When the node distribution is non-stationary, this metric must be modified to take into account heterogeneous node deployments. In reality, wireless ad hoc networks always exhibit a heterogeneous node distribution. The most obvious example is perhaps when the nodes are distributed in a \emph{bounded} region. In such a network, the interference situation near the center will significantly differ from that at the border. Besides this simple example, more complex deployments are often found in practice, e.g., wireless sensor networks created by airdrop \cite{akyildiz02}, spontaneous formation of hot spots \cite{feeney01}, etc. \subsection{Contribution} We extend the existing framework by relaxing the requirements on the node distribution, requiring \emph{isotropy only}. More specifically, we have the following results: \begin{itemize} \item The interference and outage statistics for slotted Aloha with $\alpha=2$ and $\alpha=4$ are derived as a function of the receiver position and the spatial shape of the node distribution. We consider a path loss plus block fading model. As for the outage statistics, we focus on Rayleigh fading. We show how known results for the stationary case arise from our results as special cases. \item Two global metrics, namely the differential TC and the average sum throughput, that take into account heterogeneous node deployments are proposed. While the former metric is a refinement of the TC, the latter quantifies the first-order overall network efficiency. \end{itemize} \subsection{Related work} Stationary models with heterogeneous node deployment have already been investigated.
Specifically, Poisson-cluster \cite{ganti09_1} and Mat\'{e}rn hard-core models \cite{baccelli09} have been studied, as they are well-suited for analyzing more sophisticated medium access control (MAC) schemes. Treated as \emph{general motion-invariant} processes, these and similar models were further analyzed in \cite{ganti11,ganti09,giacomelli11} in a unifying way. In \cite{govindasamy11}, a non-stationary and isotropic node distribution was assumed for analyzing multi-antenna receivers. While the analysis showed that the shape of the spatial distribution has a considerable impact on link performance, the scenario was limited to the case of a receiver located at the origin. \section{Network model}\label{sec:model} We consider a wireless ad hoc network with nodes \emph{isotropically} distributed in $\mathbb{R}^2$. The MAC employed by the nodes is slotted Aloha. In a randomly chosen slot, some nodes wish to transmit a packet. We assume that the set of transmitters $\{\mathsf{x}\}$ follows an isotropic Poisson point process (PPP) $\Phi_{\text{t}}:=\{\mathsf{x}\}$ on $\mathbb{R}^2$ with intensity $\lambda(x)$, where $x\in\mathbb{R}^2$. Due to the isotropy of $\Phi_{\text{t}}$, $\lambda(x)$ is rotation-invariant and depends only on the Euclidean norm $\|x\|$, i.e., $\lambda(x)=\lambda(\|x\|e^{j\phi})=\lambda(\|x\|)$, $\phi\in[0,2\pi)$. When working with polar coordinates, we will use the notation $\lambda(r)$, where $r:=\|x\|$. Following \cite{baccelli09}, we can describe $\lambda(r)$ as the resulting intensity after \emph{distance-dependent} thinning of a stationary PPP of intensity $\lambda$, i.e., \begin{IEEEeqnarray}{c} \lambda(r):=\lambda F(r),\IEEEeqnarraynumspace \end{IEEEeqnarray} where $F(r)$ is called the \emph{shape function}, as it reflects the spatial shape of $\Phi_{\text{t}}$. We pose the following restrictions on $F(r)$: \begin{enumerate}[(i)] \item Positiveness: $F(r)\geq0$ for all $r\geq0$. \item Normalization: $\max_{r}\{F(r)\}=1$.
\end{enumerate} The restrictions (i) and (ii) are necessary to ensure that $\lambda(r)$ is non-negative and bounded by $\lambda$ everywhere. We assume that each transmitter $\mathsf{x}$ has an intended receiver $\mathsf{y}$ randomly located at a fixed distance $d$. From the random translation Theorem \cite{baccelli09} it follows that the set of receivers $\{\mathsf{y}\}$ forms an isotropic PPP $\Phi_{\text{r}}:=\{\mathsf{y}\}$ on $\mathbb{R}^2$ with intensity $\lambda(x)$ as well. The fixed distance assumption is commonly accepted, see \cite{weber10}. However, we will relax this assumption in Section \ref{sec:applications}. We consider a path loss plus block fading channel with independent and identically distributed (i.i.d.) fading coefficients. The power path loss between two positions $x,y\in\mathbb{R}^2$ is given by $\ell(\|x-y\|):=(c+\|x-y\|^{\alpha})^{-1}$ with path loss exponent $\alpha$. The parameter $c>0$ ensures boundedness of $\ell$. The power fading coefficient between a transmitter at $x$ and a receiver at $y$ is given by $\mathsf{g}_{xy}$, where $\mathbb{E}\left[\mathsf{g}_{xy}\right]=1$ for all $x,y\in\mathbb{R}^2$. We further place a receiver at $y_{0}\in\mathbb{R}^2$ and an intended transmitter at an arbitrary position $x_{0}\in\mathbb{R}^2$ with distance $d$ to $y_{0}$.
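A realisation of such an isotropic PPP with intensity $\lambda(r)=\lambda F(r)$ can be generated by the distance-dependent thinning described above: sample a homogeneous PPP of intensity $\lambda$ and retain each point with probability $F(r)$, which is valid because of restriction (ii). A minimal sketch (the Gaussian shape function and all numerical values are illustrative choices only):

```python
import numpy as np

def sample_isotropic_ppp(lam, F, R, rng):
    """Sample an isotropic PPP with intensity lam * F(r) inside a disc of
    radius R by thinning a homogeneous PPP of intensity lam (valid as F <= 1)."""
    n = rng.poisson(lam * np.pi * R ** 2)   # homogeneous point count
    r = R * np.sqrt(rng.random(n))          # radii: uniform over the disc area
    phi = 2.0 * np.pi * rng.random(n)       # angles: isotropy
    keep = rng.random(n) < F(r)             # distance-dependent thinning
    return r[keep], phi[keep]

rng = np.random.default_rng(7)
sigma = 3.0
F = lambda r: np.exp(-r ** 2 / (2.0 * sigma ** 2))  # example shape, F(0) = 1

# Sanity check: the mean retained count equals the intensity measure
# lam * int_0^R F(r) 2 pi r dr, which is analytic for the Gaussian shape.
lam, R, reps = 4.0, 15.0, 500
mean_count = np.mean([sample_isotropic_ppp(lam, F, R, rng)[0].size
                      for _ in range(reps)])
expected = lam * 2.0 * np.pi * sigma ** 2 * (1.0 - np.exp(-R ** 2 / (2.0 * sigma ** 2)))
```

The thinning construction is what makes the interpretation of $F(r)$ as a spatial shape concrete: the retained points form exactly the isotropic PPP of the network model.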
The pair $x_{0}\to y_{0}$ is called the \emph{reference pair}, as it will allow us to measure the (spatially-averaged) link performance for receivers at distance $\|y_{0}\|$ from the origin. Assuming fixed-power transmissions for all nodes, the instantaneous signal-to-interference-plus-noise ratio (SINR) at the reference receiver $y_{0}$ is given by: \begin{IEEEeqnarray}{rCl} \mathsf{SINR}(y_{0})&:=&\frac{\mathsf{g}_{x_{0}y_{0}}}{\eta+\ell(d)^{-1}\mathsf{I}(y_{0})},\IEEEeqnarraynumspace\label{eq:sinr} \end{IEEEeqnarray} where $\eta$ is the average noise-to-signal ratio and \begin{IEEEeqnarray}{c} \mathsf{I}(y_{0}):=\sum\limits_{ \mathclap{\mathsf{x}\in\Phi_{\text{t}}\setminus\{x_{0}\}}}\mathsf{g}_{\mathsf{x}y_{0}}\ell(\|\mathsf{x}-y_{0}\|) \end{IEEEeqnarray} is the interference power. We assume strong channel coding, i.e., the outage event is a steep function of the SINR. The outage probability (OP) at the reference pair $x_{0}\to y_{0}$ is then given by the reduced Palm probability \begin{IEEEeqnarray}{rCl}\label{eq:outage} q(y_{0}):=\mathbb{P}^{!x_{0}}\left(\mathsf{SINR}(y_{0})<\beta\right), \end{IEEEeqnarray} where $\beta$ is a modulation and coding specific SINR threshold. \section{Interference analysis}\label{sec:interference} \newcounter{tmpeqnnum} \begin{figure*}[!b] \normalsize \hrulefill \setcounter{tmpeqnnum}{\value{equation}} \begin{IEEEeqnarray}{c} \setcounter{equation}{12} A_{4}(y_{0},c):=\frac{\pi}{2\sqrt{c}}\left(F(r)\,\text{arctan}\frac{2\text{Re}\{\kappa(r,c,y_{0})\}}{1-|\kappa(r,c,y_{0})|^2} \bigg\vert_{r=0}^{\infty}-\int_{0}^{\infty} f(r)\,\text{arctan}\frac{2\text{Re}\{\kappa(r,c,y_{0})\}}{1-|\kappa(r,c,y_{0})|^2}\,\mathrm dr\right)\label{eq:A4} \end{IEEEeqnarray} \setcounter{equation}{\value{tmpeqnnum}} \end{figure*} We now study the interference statistics at the reference receiver at $y_{0}$.
First, we note two integral identities taken from \cite{gradshteyn07}: \begin{identity}\label{lemma:integral_id1} If $a>|b|$, $a,b\in\mathbb{R}$, \begin{IEEEeqnarray}{c} \int_{0}^{\pi}\frac{\mathrm d\phi}{(a+b\cos\phi)^{n+1}}=\frac{\pi\,P_{n}\left(\frac{a}{\sqrt{a^2-b^2}}\right)}{(a^2-b^2)^{\frac{n+1}{2}}},\IEEEeqnarraynumspace\label{eq:integral_id1} \end{IEEEeqnarray} where $P_{n}(x)$ is the $n^{\text{th}}$-Legendre polynomial. \end{identity} \begin{identity}\label{lemma:integral_id2} Let $a_{1},a_{2},a_{3}\in\mathbb{R}$, $R:=a_{1}+a_{2}t^2+a_{3}t^4$, $\Delta=4a_{1}a_{3}-a_{2}^2$ and $a_{3}>0$. Using the substitution $t\to t^2$, we have \begin{IEEEeqnarray}{c} \int\frac{2t\sqrt{a_{3}}\,\mathrm dt}{\sqrt{a_{1}+a_{2}t^2+a_{3}t^4}}=\begin{cases} \log\frac{2\sqrt{a_{3}R}+2a_{3}t^2+a_{2}}{\sqrt{\Delta}},& a_{3}>0\\ \text{arcsinh}\frac{2a_{3}t^2+a_{2}}{\sqrt{\Delta}}, & \Delta>0\\ \log(2a_{3}t^2+a_{2}), & \Delta=0. \end{cases}\IEEEeqnarraynumspace\label{eq:integral_id2} \end{IEEEeqnarray} \end{identity} We are now in a position to derive the first moment of the interference at $y_{0}$. \begin{theorem}\label{theorem:moment1} Let $f(r):=\mathrm d F(r)/\mathrm d r$, $c>0$ and $\alpha=2$.
If $\lim\limits_{r\to\infty}F(r)r^{\nu} < \infty$ for some $\nu>0$, then \begin{IEEEeqnarray}{c} \mathbb{E}^{!x_{0}}\left[\mathsf{I}(y_{0})\right] = \lambda A_{2}(y_{0},c)<\infty,\label{eq:exp_int2a} \end{IEEEeqnarray} where the \emph{interference-driving} function $A_{2}(y_{0},c)$ is given by \begin{IEEEeqnarray}{rCl} A_{2}(y_{0},c)&:=&F(0)\,\text{arcsinh}\frac{y_{0}^2-c}{2y_{0}\sqrt{c}}\IEEEnonumber\\ &&+\int_{0}^{\infty}f(r)\,\text{arcsinh}\frac{y_{0}^2-r^2-c}{2y_{0}\sqrt{c}}\,\mathrm dr.\IEEEeqnarraynumspace\label{eq:A2} \end{IEEEeqnarray} \end{theorem} \begin{IEEEproof} We write \begin{IEEEeqnarray}{rCl} \mathbb{E}^{!x_{0}}\left[\mathsf{I}(y_{0})\right] &=& \lambda\int_{\mathbb{R}^2} \mathbb{E}\left[\mathsf{g}_{xy_{0}}\right]\ell(\|x-y_{0}\|) F(x) \,\mathrm dx\IEEEnonumber \end{IEEEeqnarray} which follows from Campbell's Theorem and Slivnyak's Theorem \cite{stoyan95}, and from the i.i.d. property of $\mathsf{g}_{\mathsf{x}y_{0}}$. Applying Identities \ref{lemma:integral_id1} and \ref{lemma:integral_id2} yields the result. \end{IEEEproof} The function $A_{2}(y_{0},c)$ in (\ref{eq:A2}) has an interesting interpretation: $A_{2}(y_{0},c)$ can be described as the interference field associated with the origin $o$, from which the remaining interference adds up differentially.
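The Campbell-theorem step of this proof is easy to check numerically. The sketch below (the Gaussian shape function and all parameter values are illustrative assumptions) compares a Monte Carlo estimate of $\mathbb{E}^{!x_{0}}[\mathsf{I}(y_{0})]$ with a direct quadrature of $\lambda\int_{\mathbb{R}^2}\ell(\|x-y_{0}\|)F(\|x\|)\,\mathrm dx$; since $\mathbb{E}[\mathsf{g}]=1$, fading does not affect the mean and is omitted.

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(3)
lam, sigma, c, y0 = 1.0, 2.0, 1.0, 1.0   # illustrative parameters, alpha = 2
F = lambda r: np.exp(-r ** 2 / (2.0 * sigma ** 2))   # example shape function

# Monte Carlo: average interference at distance y0 from the origin over many
# realisations of the thinned PPP (truncation R = 5*sigma is safe since F ~ 0).
reps, R = 4000, 10.0
counts = rng.poisson(lam * np.pi * R ** 2, reps)
idx = np.repeat(np.arange(reps), counts)
tot = counts.sum()
r = R * np.sqrt(rng.random(tot))
phi = 2.0 * np.pi * rng.random(tot)
keep = rng.random(tot) < F(r)                         # distance-dependent thinning
d2 = r ** 2 + y0 ** 2 - 2.0 * r * y0 * np.cos(phi)    # squared distance to y0
contrib = np.where(keep, 1.0 / (c + d2), 0.0)         # path loss l(d) = 1/(c + d^2)
mc_mean = np.bincount(idx, weights=contrib, minlength=reps).mean()

# Quadrature of lam * int l(|x - y0|) F(|x|) dx, with the angular integral done
# analytically: int_0^{2 pi} dphi / (a + b cos phi) = 2 pi / sqrt(a^2 - b^2).
integrand = lambda s: 2.0 * np.pi * s * F(s) / np.sqrt(
    (c + s ** 2 + y0 ** 2) ** 2 - 4.0 * s ** 2 * y0 ** 2)
th_mean = lam * quad(integrand, 0.0, np.inf)[0]
```

Both estimates agree to within the Monte Carlo error, illustrating that the mean interference is finite despite the infinite support of the node distribution.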
\begin{corollary} Summary of some special cases of Theorem \ref{theorem:moment1}: \begin{enumerate} \item When we assume $F(0)=1$ and $f(r)\leq 0$ for all $r\in\mathbb{R}_{+}$, $F(r)$ can be interpreted as a complementary cumulative distribution function (CCDF) with respect to a \emph{random} distance $\mathsf{r}$ to the origin, yielding \begin{IEEEeqnarray}{c} A_{2}(y_{0},c) = \text{arcsinh}\frac{y_{0}^2-c}{2y_{0}\sqrt{c}}-\mathbb{E}\left[\text{arcsinh}\frac{y_{0}^2-\mathsf{r}^2-c}{2y_{0}\sqrt{c}}\right].\IEEEnonumber \end{IEEEeqnarray} \item Letting $\|y_{0}\|\to 0$, we further have \begin{IEEEeqnarray}{c} A_{2}(0,c) = \log(1/2c)+\mathbb{E}\left[\log(2(\mathsf{r}^2+c))\right].\IEEEnonumber \end{IEEEeqnarray} \item Letting $c\to0$, we have $\mathbb{E}\left[\mathsf{I}(y_{0})\right]=\infty$, which is due to the resulting singularity of $\ell(\|x-y_{0}\|)$ at $x=y_{0}$, cf. \cite{ganti09}. \item Sparse network ($0<\lim_{r\to\infty}F(r)r^{\nu}<\infty$, $0<\nu\leq 2$): Remarkably, $\int_{0}^{\infty}rF(r)\mathrm dr=\infty$ but $\mathbb{E}^{!x_{0}}\left[\mathsf{I}(y_{0})\right]<\infty$. \item Dense network ($0<\lim_{r\to\infty}F(r)r^{\nu}<\infty$, $\nu\to0$): As expected \cite{ganti09}, $\mathbb{E}^{!x_{0}}\left[\mathsf{I}(y_{0})\right]=\infty$. \end{enumerate} \end{corollary} 1) has an interesting interpretation as well: The expectation can be seen as \emph{averaging} the differential interference over $\mathsf{r}$. Such an interpretation may be appropriate when analyzing networks with a priori unknown or fast-varying spatial configurations, for which a CCDF is then used to model their spatial shape. 4) implies $\mathsf{I}(y_{0})<\infty$ a.s. although infinitely many nodes contribute to the interference on average. Note that 5) includes the homogeneous case with $F(r)=1$ ($f(r)=0$). We now extend the findings of Theorem \ref{theorem:moment1}. First, we need the following Lemma. \begin{lemma}\label{lemma:pbz} Let $a_{1},a_{2}\in\mathbb{R}$, $a_{1}>0$.
Then, \begin{IEEEeqnarray}{rCl} &&\hspace{-2cm}\int\int_{0}^{\pi}\frac{2t\,\mathrm d\phi\,\mathrm d t}{a_{1}+(t^2+a_{2}^2-2ta_{2}\cos\phi)^2}\IEEEnonumber\\ &\hspace{1cm}=&\frac{\pi}{2\sqrt{a_{1}}}\text{arctan}\frac{2\text{Re}\{\kappa(t,a_{1},a_{2})\}}{1-|\kappa(t,a_{1},a_{2})|^2},\IEEEeqnarraynumspace\label{eq:pbz} \end{IEEEeqnarray} where \begin{IEEEeqnarray}{c} \kappa(t,a_{1},a_{2}):=\frac{t^2-a_{2}^2-j\sqrt{a_{1}}}{\sqrt{(\sqrt{a_{1}}+j(t^2+a_{2}^2))^2+4t^2a_{2}^2}}. \end{IEEEeqnarray} \end{lemma} \begin{IEEEproof} The basic idea is to decompose the integrand into partial fractions and to apply Identity \ref{lemma:integral_id1} and \ref{lemma:integral_id2}, yielding (\ref{eq:pbz}) after some algebraic manipulations. Note that according to \cite{gradshteyn07}, (\ref{eq:integral_id1}) and (\ref{eq:integral_id2}) hold only for real-valued parameters. However, they were verified to hold also for complex-valued parameters. \end{IEEEproof} \begin{theorem}\label{theorem:moment2} Let $f(r):=\mathrm d F(r)/\mathrm d r$, $c>0$ and $\alpha=4$. Then, \begin{IEEEeqnarray}{c} \mathbb{E}^{!x_{0}}\left[\mathsf{I}(y_{0})\right] = \lambda\,A_{4}(y_{0},c)<\infty,\IEEEeqnarraynumspace\label{eq:exp_int2} \end{IEEEeqnarray} where $A_{4}(y_{0},c)$ is given by (\ref{eq:A4}) below. \end{theorem} \setcounter{equation}{12} \begin{IEEEproof} The proof is analogous to the proof of Theorem \ref{theorem:moment1} and uses the integral identity of Lemma \ref{lemma:pbz}. We further make use of (ii) in Section \ref{sec:model} to show that $\lim_{r\to\infty}F(r)<\infty$. \end{IEEEproof} \begin{corollary}\label{col:comp1} Summary of some special cases of Theorem \ref{theorem:moment2}: \begin{enumerate} \item Case $c\to0$: By taking the limit $\lim_{c\to0}A_{4}(y_{0},c)$ in (\ref{eq:A4}), we observe that $\mathbb{E}^{!x_{0}}\left[\mathsf{I}(y_{0})\right]=\infty$, cf. 3) in Corollary \ref{col:comp1}. \item Homogeneous case: Let $F(r)=1$. 
Then, $f(r)=0$ and \begin{IEEEeqnarray}{c} \lim\limits_{r\to a}\arctan\frac{2\text{Re}\{\kappa(r,c,y_{0})\}}{1-|\kappa(r,c,y_{0})|^2}= \begin{cases} -\frac{\pi}{2}, & a=0\\ \frac{\pi}{2}, & a=\infty, \end{cases}\IEEEnonumber \end{IEEEeqnarray} yielding $\mathbb{E}^{!x_{0}}\left[\mathsf{I}(y_{0})\right]=\lambda\frac{\pi^2}{2\sqrt{c}}$ as expected, cf. \cite{ganti09}. \end{enumerate} All results of this Corollary are consistent with the literature. \end{corollary} The first moment of the interference is useful for bounding the interference distribution in the path-loss-only scenario. In the case of Rayleigh fading channels, the Laplace transform of $\mathsf{I}(y_{0})$, i.e., $\mathcal{L}_{\mathsf{I}(y_{0})}(s):=\mathbb{E}\left[\exp\{-s\mathsf{I}(y_{0})\}\right]$, is of significant importance, since it allows one to obtain the OP in closed form. When treating the case $\alpha=2$, we will always assume that $F(r)$ satisfies the additional condition of Theorem \ref{theorem:moment1}. \begin{theorem}\label{theorem:laplace_int} For $\mathsf{g}_{xy}\sim\text{Exp}(1)$ for all $x,y\in\mathbb{R}^2$ (Rayleigh fading), the Laplace transform of $\mathsf{I}(y_{0})$ at $y_{0}\in\mathbb{R}^2$ is \begin{IEEEeqnarray}{c} \mathcal{L}_{\mathsf{I}(y_{0})}(s)=\exp\left\{-\lambda s \,A_{\alpha}(y_{0},s+c)\right\},\IEEEeqnarraynumspace\label{eq:laplace} \end{IEEEeqnarray} for the cases $\alpha=2$ and $\alpha=4$, where $A_{2}(y_{0},c)$ is given by (\ref{eq:A2}) and $A_{4}(y_{0},c)$ is given by (\ref{eq:A4}).
\end{theorem} \begin{IEEEproof} We write \begin{IEEEeqnarray}{rCl} \mathcal{L}_{\mathsf{I}(y_{0})}(s)&\overset{(a)}{=}&\mathbb{E}^{!x_{0}}_{\Phi_{\text{t}}}\left[\prod\limits_{\mathsf{x}\in\Phi_{\text{t}}} \mathbb{E}_{\mathsf{g}_{\mathsf{x}y_{0}}}\left[\exp\left\{-s\mathsf{g}_{\mathsf{x}y_{0}}\ell(\|\mathsf{x}-y_{0}\|)\right\}\right]\right]\IEEEnonumber\\ &\overset{(b)}{=}&\exp\left\{-\int_{\mathbb{R}^2} \left(1-\mathcal{L}_{\mathsf{g}}\left(s\ell(\|x-y_{0}\|)\right)\right)\,\lambda(x)\mathrm dx\right\},\IEEEnonumber \end{IEEEeqnarray} where (a) follows from algebraic manipulations and the i.i.d. property of the $\mathsf{g}_{\mathsf{x}y_{0}}$, and (b) follows from the probability generating functional and the Laplace functional of a PPP \cite{baccelli09}. After noting that $\mathcal{L}_{\mathsf{g}}(s)=(1+s)^{-1}$ for $\mathsf{g}\sim\text{Exp}(1)$, the integral is computed using Identities \ref{lemma:integral_id1} and \ref{lemma:integral_id2}, and Lemma \ref{lemma:pbz}. \end{IEEEproof} Note that step (a) in the proof holds for general point processes, and approximation techniques for computing the right-hand side already exist \cite{ganti10}; step (b), however, holds for PPPs only. \begin{corollary}\label{col:comp2} Setting $F(r)=1$ for all $r\in\mathbb{R}_{+}$ and $c=0$, we obtain the well-known result for the homogeneous case with $\alpha=4$ \cite{ganti09}: $\mathcal{L}_{\mathsf{I}(y_{0})}(s)=\exp\{-\lambda\frac{\pi^2}{2} \sqrt{s}\}$. \end{corollary} \section{Outage and Local Throughput} \subsection{Outage probability} We now study the OP for the reference pair $x_{0}\to y_{0}$. In order to broadly discuss the impact of the spatial shape on the performance, we focus on the Rayleigh fading scenario. For other channel models, the interference moments derived in Section \ref{sec:interference} can be used to effectively bound the OP, e.g., using the Markov inequality \cite{weber10}. We do not expect additional insights from considering other channel models.
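Corollary \ref{col:comp2} can be verified by simulation; the following sketch (all numerical values are illustrative; nodes beyond the truncation radius $R$ contribute negligibly) estimates $\mathbb{E}[\mathrm e^{-s\mathsf{I}(y_{0})}]$ for the homogeneous case with Rayleigh fading, $c=0$ and $\alpha=4$, and compares it with the closed form:

```python
import numpy as np

rng = np.random.default_rng(5)
lam, s, R, reps = 0.2, 1.0, 30.0, 8000   # illustrative parameters

# Homogeneous PPP in a disc of radius R around the receiver; nodes beyond R
# add only O(lam * pi / R^2) to the mean interference and are neglected.
counts = rng.poisson(lam * np.pi * R ** 2, reps)
idx = np.repeat(np.arange(reps), counts)
tot = counts.sum()
d = R * np.sqrt(rng.random(tot))         # distances to the receiver
g = rng.exponential(1.0, tot)            # Rayleigh fading power gains
contrib = g / d ** 4                     # c = 0, alpha = 4
I = np.bincount(idx, weights=contrib, minlength=reps)
mc = np.mean(np.exp(-s * I))             # Monte Carlo Laplace transform

th = np.exp(-lam * np.pi ** 2 / 2.0 * np.sqrt(s))   # closed form
```

The Monte Carlo estimate matches the closed form to within the simulation error, as expected from Slivnyak's Theorem (the reduced Palm distribution of a PPP coincides with its ordinary distribution).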
\begin{theorem}\label{theorem:op} The OP for the Rayleigh fading scenario, for both $\alpha=2$ and $\alpha=4$, is given by \begin{IEEEeqnarray}{c} q(y_{0})=1-\mathcal{L}_{\mathsf{I}(y_{0})}\left(\beta(c+d^{\alpha})\right)\exp\left\{-\beta\eta\right\}.\IEEEeqnarraynumspace\label{eq:op} \end{IEEEeqnarray} \end{theorem} \begin{IEEEproof} It is well-known that the OP for Aloha MAC and exponentially distributed power gains $\mathsf{g}_{xy}$ can be written in terms of the Laplace transform of the interference \cite{baccelli09,ganti09}: We condition (\ref{eq:sinr}) on $\Phi$ and evaluate the OP first with respect to $\mathsf{g}_{x_{0}y_{0}}$. We finally use (\ref{eq:laplace}) with $s=\beta(c+d^{\alpha})$. \end{IEEEproof} By means of (\ref{eq:op}) in Theorem \ref{theorem:op} we can now measure the OP for Rayleigh fading at an arbitrary location for an arbitrary spatial shape function $F(r)$ satisfying the given restrictions. Fig. \ref{fig:op} shows $q(\|y_{0}\|)$ vs. $\|y_{0}\|$ for $\alpha=2$ and $\alpha=4$, thereby confirming the analysis. It can further be observed how the network ``moves'' from the interference-limited to the power-limited regime with increasing $\|y_{0}\|$. To highlight the accuracy of the model, we compare the OP from Theorem \ref{theorem:op} to a straightforward way of approximating the OP, which consists of assuming that the intensity $\lambda(x)$ is approximately constant around $y_{0}$. The OP can then be described as in the homogeneous case \cite{ganti09}, except for the intensity in the exponential term being modulated by $F(y_{0})$, i.e., $\tilde{q}(y_{0}):=1-\exp\{-F(y_{0})\lambda\pi^2 d^2\beta^{\frac{2}{\alpha}}\frac{2}{\alpha}\csc\frac{2\pi}{\alpha}\}\approx q(y_{0})$. We will now study the logarithmic ratio of exact to approximate success probability, i.e., $\gamma:=\log\frac{1-q(y_{0})}{1-\tilde{q}(y_{0})}$. \begin{corollary} Let $c=0$.
The ratio $\gamma$ for $\alpha=4$ is given by \begin{IEEEeqnarray}{c} \gamma=\lambda d^{2}\sqrt{\beta}\left(\tfrac{\pi^2}{2}F(y_{0})-d^{2}\sqrt{\beta} A_{4}(y_{0},\beta d^{4})\right).\IEEEeqnarraynumspace \end{IEEEeqnarray} \end{corollary} Fig. \ref{fig:gamma} shows the ratio $\gamma$ together with the shape function $F(r)$ for different receiver positions $y_{0}$. $F(r)$ was chosen such that the network exhibits a communication hotspot, with the density of active nodes slowly decaying between $r=70$ and $r=500$ until it becomes approximately zero. One can see that the approximation is not satisfactory, especially in the transition region, where border effects come into play. \subsection{Local throughput} We now propose two local throughput metrics that are suitable for non-stationary wireless ad hoc networks. \begin{definition}[Differential transmission capacity (DTC)] The DTC is defined as the maximal density of concurrent transmissions in an \emph{infinitesimal} region around the point $x\in\mathbb{R}^2$ subject to an OP constraint $\epsilon$, i.e., \begin{IEEEeqnarray}{c} c(x,\epsilon):=\lambda(x,\epsilon)(1-\epsilon).\label{eq:def_dtc} \end{IEEEeqnarray} \end{definition} The TC and its differential counterpart have similar meaning, except that the latter is \emph{position-dependent}: For a given spatial shape $F(r)$ and target OP $\epsilon$, $c(x,\epsilon)$ yields the TC in a region $\mathrm dx$. Hence, the DTC implicitly takes into account the spatial shape of the node distribution. For Rayleigh fading, $c(x,\epsilon)$ is obtained by solving (\ref{eq:op}) for $\lambda$. Like the TC, the DTC can be used for comparing different transmission protocols.
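As an illustration of how the DTC can be evaluated, the sketch below solves for $\lambda$ using the locally-homogeneous approximation $\tilde{q}$ from above together with the noise factor $e^{-\beta\eta}$ from (\ref{eq:op}). It is only a rough surrogate (exact evaluation would invert (\ref{eq:op}) through $A_{\alpha}$, and Fig. \ref{fig:gamma} shows the approximation degrades near the border); the function name is our own:

```python
import math

def dtc_homogeneous_approx(F_x, eps, d, beta, alpha, eta=0.0):
    """DTC c(x, eps) = lambda(x, eps) * (1 - eps) under the approximation
    1 - eps ~ exp(-beta*eta) * exp(-lam * F(x) pi^2 d^2 beta^(2/alpha) (2/alpha) csc(2pi/alpha)).
    Caution: the csc term diverges as alpha -> 2, so this sketch is for alpha > 2 only."""
    k = F_x * math.pi ** 2 * d ** 2 * beta ** (2 / alpha) * (2 / alpha) / math.sin(2 * math.pi / alpha)
    lam = -(math.log(1 - eps) + beta * eta) / k
    return max(0.0, lam) * (1 - eps)   # no feasible density if noise alone exceeds eps
```

For $F(x)=1$, $\epsilon=0.1$, $d=10$, $\beta=0.5$, $\alpha=4$ and negligible noise, this gives a supportable density $\lambda\approx 3\cdot 10^{-4}$.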
\begin{definition}[Average sum throughput (AST)] The AST is defined as the ratio of average number of successful transmissions to average number of simultaneous transmissions, i.e., \begin{IEEEeqnarray}{c} \Omega:=\frac{\mathbb{E}\Big[\sum\limits_{\mathsf{x}\in\Phi_{\text{t}}}\mathds{1}_{\{\mathsf{x}\text{ successful}\}}\Big]}{\mathbb{E}\Big[\sum\limits_{\mathsf{x}\in\Phi_{\text{t}}}\mathds{1}_{\{\mathsf{x}\in\mathbb{R}^2\}}\Big]}. \end{IEEEeqnarray} \end{definition} \begin{figure}[t] \psfrag{tag1}[c][c]{\small{$\eta=0$}} \psfrag{tag2}[c][c]{\small{$\eta=0.1$}} \psfrag{tag3}[c][c]{\small{$\|y_{0}\|$}} \psfrag{tag4}[c][c]{\small{$q(y_{0})$}} \psfrag{tag5tag5}{\footnotesize{$\alpha=2$}} \psfrag{tag6}{\footnotesize{$\alpha=4$}} \centering \includegraphics[width=0.47\textwidth]{figure_op.eps} \caption{$q(y_{0})$ vs. $\|y_{0}\|$ for $F(r)=\exp\{-(r/100)^{3}\}$, $d=10$, $\beta=0.5$, $c=1$, $\lambda=0.001$. Marks represent the simulation results.} \label{fig:op}\vspace{-0.2cm} \end{figure} \begin{figure}[t] \psfrag{tagx}[c][c]{\small{$\|y_{0}\|$}} \psfrag{tagy1}[c][c]{\small{$\gamma$}} \psfrag{tagy2}[c][c]{\small{$F(\|y_{0}\|)$}} \psfrag{tag3tag3}{\footnotesize{$\gamma$}} \psfrag{data2}{\footnotesize{$F(\|y_{0}\|)$}} \centering \includegraphics[width=0.47\textwidth]{figure_approx.eps} \caption{$\gamma$ and $F(\|y_{0}\|)$ vs. $\|y_{0}\|$ for $\alpha=4$, $\beta=1$, $d=10$, $c=1$.}\vspace{-0.45cm} \label{fig:gamma} \end{figure} The AST quantifies the first order overall efficiency of the network on the MAC layer. While the DTC highlights the spatial dynamics of the local throughput, the AST yields a single figure of merit. In essence, the AST counts the number of successful transmissions, thereby integrating over the spatial dynamics. 
Note that the success function $\mathds{1}_{\{\mathsf{x}\text{ successful}\}}$, indicating that transmitter $\mathsf{x}$ has been successful, can be chosen arbitrarily to include additional outage-inducing effects, e.g., energy limitations, disconnectivity, or secrecy outage. \begin{theorem}\label{theorem:fc} Let $\lim_{r\to\infty}F(r)r^{\nu}<\infty$ for some $\nu>2$. With the underlying network model and success function $\mathds{1}_{\{\mathsf{SINR}(y)\geq\beta\}}$, the AST $\Omega$ can be computed as\vspace{-0.1cm} \begin{IEEEeqnarray}{c} \Omega=\frac{\int_{0}^{\infty}r(1-q(r))\,F(r)\,\mathrm dr}{\int_{0}^{\infty}r F(r)\,\mathrm dr}. \end{IEEEeqnarray} \end{theorem} \begin{IEEEproof} Since the denominator directly follows from Campbell's Theorem, we focus on the numerator and write \begin{IEEEeqnarray}{rCl} &&\mathbb{E}\left[\sum\limits_{\mathsf{x}\in\Phi_{\text{t}}}\mathds{1}_{\{\mathsf{x}\text{ successful}\}}\right]\overset{(a)}{=}\int_{\mathbb{R}^2} \mathbb{E}^{!x} \left[\mathds{1}_{\{x\text{ successful}\}}\right]\lambda(x)\,\mathrm dx\IEEEnonumber\\ &&\overset{(b)}{=}\int_{\mathbb{R}^2}\int_{\mathbb{R}^2} \mathbb{E}^{!x}_{\Phi_{\text{t}}} \left[\mathds{1}_{\{\mathsf{SINR}(y)\geq\beta\}}\right]\mathbb{P}\left(\mathsf{y}=y|x\right)\,\mathrm dy\,\lambda(x)\,\mathrm dx\IEEEnonumber\\ &&\overset{(c)}{=}\int_{\mathbb{R}^2}\left(\int_{\mathbb{R}^2} \mathbb{E}^{!x}_{\Phi_{\text{t}}} \left[\mathds{1}_{\{\mathsf{SINR}(y)\geq\beta\}}\right]\mathbb{P}\left(\mathsf{y}=y|x\right)\lambda(x)\,\mathrm dx\right)\mathrm dy\IEEEnonumber\\ &&\overset{(d)}{=}\int_{\mathbb{R}^2}\left(\int_{\mathbb{R}^2} \mathbb{P}^{!x}_{\Phi_{\text{t}}} \left(\mathsf{SINR}(y)\geq\beta\right)\mathbb{P}\left(\mathsf{y}=y|x\right)\,\lambda(x)\,\mathrm dx\right)\mathrm dy\IEEEnonumber\\ &&\overset{(e)}{=}\int_{\mathbb{R}^2}(1-q(y))\left(\int_{\mathbb{R}^2} \mathbb{P}\left(\mathsf{y}=y|x\right)\,\lambda(x)\,\mathrm dx\right)\mathrm dy\IEEEnonumber\\ &&=\int_{\mathbb{R}^2}(1-q(y))\lambda(y)\,\mathrm dy.\IEEEnonumber \end{IEEEeqnarray} (a) is due to Campbell's Theorem \cite{stoyan95}. (b) is obtained by noting that a transmitter $x$ is successful if the intended receiver at $y$ is not in outage. From Section \ref{sec:model}, we know that $y$ is placed by random translation of $x$ according to some probability kernel $\mathbb{P}\left(\mathsf{y}=y|x\right)$. (c) follows from Tonelli's Theorem \cite{bauer92} and (d) follows from $\mathbb{E}\left[\mathds{1}_{\{X\in A\}}\right]=\mathbb{P}\left(X\in A\right)$. (e) follows from (\ref{eq:op}) and the fact that $q(y)$ is independent of $x$. \end{IEEEproof} \section{Applications of the Model}\label{sec:applications} \subsection{Short-range inhibition} Besides slotted Aloha, other MAC protocols, such as CSMA/CA or local FDMA, are promising techniques for reducing excessive interference generated by nodes within \emph{short range}. To study ad hoc networks with such inhibition mechanisms while ensuring analytical tractability, powerful methods based on non-homogeneous Poisson approximation have been used \cite{hunter10,baccelli09,tanbourgi11_2}. When such protocols are \emph{transmitter-initiated}, e.g., transmitter sensing for CSMA or transmitter orthogonalization for FDMA, the resulting spatial distribution of interferers becomes inhomogeneous and approximately isotropic around the transmitter $x$, while the interference field at the intended receiver $y$ will depend on the distance $\|x-y\|$. Hence, our model can also be applied to such modeling problems and is not limited to Aloha MAC. \subsection{Network optimization} Consider the following situation: Let the set $\{\mathsf{y}\}$ of \emph{potential} receivers be distributed as an isotropic PPP $\Phi_{\text{r}}$ of intensity $\lambda_{\text{r}}(r)=\lambda_{\text{r}} F(r)$. Assume that $\Phi_{\text{t}}$ and $\Phi_{\text{r}}$ are independent.
That is, the set of all nodes follows a PPP, e.g., a sensor network created by airdrop, and connectivity at distance $d$ is no longer guaranteed for every node. We further assume that the routing protocol employs a nearest-neighbor strategy, i.e., transmitters aim at minimizing $d$. For points distributed as a PPP, the CDF $F_{\mathsf{d}}(d)$ of the distance $\mathsf{d}$ between a point and its nearest neighbor is well-known, see \cite{baccelli09}. We would like to know the optimal SINR threshold $\beta$ such that the product $\log_{2}(1+\beta)\,\Omega(\beta)$ with success function $\mathds{1}_{\{\mathsf{x}\text{ successful}\}}\mathds{1}_{\{\mathsf{x}\text{ connected}\}}$ is maximized. This corresponds to maximizing the expected \emph{sum rate}, i.e., \begin{IEEEeqnarray}{rCl} \beta^{\ast}&=&\arg\max\limits_{\beta}\left\{\log_{2}(1+\beta)\,\Omega(\beta)\right\}\IEEEnonumber\\ &\overset{(a)}{\approx}& \arg\max\limits_{\beta}\left\{\log_{2}(1+\beta)\hspace{-0.1cm}\int_{0}^{\infty}\hspace{-0.3cm} r \hspace{-0.1cm}\int_{0}^{\infty}\hspace{-0.3cm} (1-q(r,\beta,d))F_{\mathsf{d}}(\mathrm dd) \mathrm dr\right\}\hspace{-0.1cm},\label{eq:opt}\IEEEnonumber \end{IEEEeqnarray} where we altered the notation $\Omega\,{\to}\,\Omega(\beta)$ and $q(r)\,{\to}\, q(r,\beta,d)$ to point out the functional dependencies. (a) follows from \begin{IEEEeqnarray}{c} \mathbb{E}\left[q(\|\mathsf{y}_{i}\|)|\mathsf{x}_{i}=x_{i}\right] \approx q(\|x_{i}\|),\IEEEnonumber \end{IEEEeqnarray} which essentially approximates the interference field at a receiver $y_{i}$ by the interference field at the associated transmitter $x_{i}$. This approximation is reasonable for high $\lambda_{\text{r}}$ and/or moderate slopes of $F(r)$. Fig. \ref{fig:opt} shows $\log_{2}(1+\beta)\,\Omega(\beta)$ vs. $\beta$. As can be seen, optimizing over $\beta$ yields large improvements.
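Numerically, the optimization above reduces to a one-dimensional grid search once the OP is available. The skeleton below is our own illustrative sketch: `q` is any outage callable $q(r,\beta)$ (with the nearest-neighbor distance already marginalized out), and the $\beta$-independent denominator of $\Omega$ is dropped since it does not affect the argmax:

```python
import math

def expected_sum_rate(beta, q, F, r_max=1000.0, n=2000):
    """Objective log2(1+beta) * Int_0^rmax r (1 - q(r, beta)) F(r) dr,
    i.e. the numerator of log2(1+beta)*Omega(beta) up to a constant factor."""
    h = r_max / n
    acc = 0.0
    for i in range(n):
        r = (i + 0.5) * h  # midpoint quadrature over the radius
        acc += r * (1.0 - q(r, beta)) * F(r)
    return math.log2(1 + beta) * acc * h

def optimize_beta(q, F, betas):
    """Grid search for beta* = argmax log2(1+beta) * Omega(beta)."""
    return max(betas, key=lambda b: expected_sum_rate(b, q, F))
```

For instance, with the toy outage model $q(r,\beta)=1-e^{-\beta}$ the optimum lands near $\beta^{\ast}\approx 0.76$, where $\ln(1+\beta^{\ast})=1/(1+\beta^{\ast})$.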
\begin{figure}[t] \psfrag{tag4}[c][c]{\small{$\beta$ dB}} \psfrag{tag5}[c][c]{\small{$\log_{2}(1+\beta)\,\Omega(\beta)$}} \psfrag{tag1tag1tag1tag1tag1tag1tag1tag1tag}{\tiny{$\eta_{10}=-8$ dB, $\lambda=10^{-3}$, $\lambda_{\text{r}}=10^{-2}$}} \psfrag{tag2}{\tiny{$\eta_{10}=-14$ dB, $\lambda=10^{-2}$, $\lambda_{\text{r}}=10^{-2}$}} \psfrag{tag3}{\tiny{$\eta_{10}=-18$ dB, $\lambda=3\cdot 10^{-4}$, $\lambda_{\text{r}}=10^{-3}$}} \centering \includegraphics[width=0.47\textwidth]{figure_AST_sim.eps} \caption{$\log_{2}(1+\beta)\,\Omega(\beta)$ vs. $\beta$ for $\alpha=2$. $\eta_{10}$ denotes $\eta$ at a distance $d=10$. $F(r)=\exp\{-r/250\}$. Marks represent simulation results.} \label{fig:opt}\vspace{-0.4cm} \end{figure} \section{Concluding Remarks} We extended prior work on the modeling and analysis of wireless networks by assuming an isotropic but not necessarily stationary spatial distribution of nodes. We derived, for slotted Aloha, the interference and outage statistics as a function of the receiver position and the shape of the spatial node distribution. The case $\alpha=2$, which previously could not be treated due to the stationarity assumption, was studied in detail. For $\alpha=4$, we also obtained closed-form results, from which known results arise as special cases. We proposed two metrics for measuring local throughput in non-stationary and finite networks and discussed possible applications of our model. \vspace{-0.00cm} \section*{Acknowledgements} The authors gratefully acknowledge that their work is partially supported within the priority program 1397 ``COIN'' under grant No. JO 258/21-1 by the German Research Foundation (DFG). \vspace{-0.3cm} \bibliographystyle{IEEEtran}
\section{Introduction} Within reinforcement learning (RL), off-policy evaluation (OPE) is the task of estimating the value of a given evaluation policy, using data collected by interaction with the environment under a different behavior policy \citep{sutton2018reinforcement, precup2000eligibility}. OPE is particularly valuable when interaction and experimentation with the environment are expensive, risky, or unethical---for example, in healthcare or with self-driving cars. However, despite recent interest and progress, state-of-the-art OPE methods still often fail to differentiate between obviously good and obviously bad policies, e.g. in healthcare \citep{gottesman2018evaluating}. Most of the OPE literature focuses on sub-problems such as improving asymptotic sample efficiency or bounding the error of OPE estimators for the value of a policy. However, while these bounds are theoretically sound, they are often too conservative to be useful in practice (though see e.g.~\citet{thomas2019} for an exception). This is not surprising, as there is a theoretical limit to the statistical information contained in a given dataset, no matter which estimation technique is used. Furthermore, many of the common assumptions underlying these theoretical guarantees are usually not met in practice: observational healthcare data, for example, often contains many unobserved confounders \citep{gottesman2019guidelines}. Given the limitations of OPE, we argue that in high-stakes scenarios domain experts should be integrated into the evaluation process in order to provide useful, actionable results. For example, senior clinicians may be able to provide insights that reduce the uncertainty in our value estimates. In this light, the explicit integration of expert knowledge into the OPE pipeline is a natural way for researchers to receive feedback and continually update their policies until one can make a responsible decision about whether to pursue gathering prospective data.
The question is then what information can humans provide that might help assess and potentially improve our confidence in an OPE estimate? In this work, we consider how human input could improve our confidence in the recently proposed OPE estimator, fitted Q-evaluation (FQE) \cite{le2019batch}, as well as importance sampling (IS) methods. We develop an efficient approach to identify the most influential transitions in a batch of observational data, that is, transitions whose removal would have large effects on the OPE estimate. By presenting these influential transitions to a domain expert and verifying that they are indeed representative of the data, we can increase our confidence that our estimated evaluation policy value is not dependent on outliers, confounded observations, or measurement errors. The main contributions of this work are: \begin{itemize} \item \emph{Conceptual}: We develop a framework for using influence functions to interpret OPE, and discuss the types of questions which can be shared with domain experts to use their expertise in debugging OPE. \item \emph{Technical}: We develop computationally efficient algorithms to compute the exact influence functions for several IS estimators as well as two broad function classes for FQE: kernel-based functions and linear functions. \item \emph{Empirical}: We demonstrate the potential benefits of influence analysis for interpreting OPE on a cancer simulator, and present results of analysis together with practicing clinicians of OPE for management of acute hypotension from a real intensive care unit (ICU) dataset. \end{itemize} \section{Related work} The OPE problem in RL has been studied extensively. Works fall into two main categories: importance sampling (e.g. \citet{precup2000eligibility, jiang2015doubly}) and model-based (often referred to as the direct method), which can be further subdivided into modeling the environment dynamics (e.g. 
\citet{hanna2017bootstrapping, gottesman2019combining}), and directly modeling the value function (e.g. \citet{le2019batch}). Some of these works provide bounds on the estimation errors (e.g. \citet{thomas2015high, dann2018policy}). We emphasize, however, that for most real-world applications these bounds are either too conservative to be useful or rely on assumptions which are usually violated. While there has been considerable recent progress in interpretable machine learning and machine learning with humans in the loop (e.g. \citet{tamuz2011adaptively, lage2018human}), to our knowledge, there has been little work that considers human interaction in the context of OPE. \citet{oberst2019counterfactual} proposed framing the OPE problem as a structural causal model, which enabled them to identify trajectories where the predicted counterfactual trajectories under an evaluation policy differs substantially from the observed data collected under the behavior policy. However, that work does not give guidance on what part of the trajectory might require closer scrutiny, nor can it use human input for additional refinement. Finally, the notion of influence that we use throughout this work has a long history in statistics as a technique for evaluating the robustness of estimators \citep{cook1980characterizations}. Recently, an approximate version of influence for complex black-box models was presented in \citet{koh2017understanding}, and they demonstrated how influence functions can make machine learning methods more interpretable. In the context of optimal control and RL, influence functions were first introduced by \citet{munos2002variable} to aid in online optimization of policies. However, their definition of influence as a change in the value function caused by perturbations of the reward at a specific state is quite different from ours. 
\section{Background} \paragraph{Notation} A Markov Decision Process (MDP) is a tuple $\langle \mathcal{X}, \mathcal{A}, P_T, P_R, P_0, \gamma \rangle$, where $\mathcal{X}$, $\mathcal{A}$ and $\gamma$ are the state space, action space, and the discount factor, respectively. The next-state transition and reward distributions are given by $P_T(\cdot | x, a)$ and $P_R(\cdot | x, a)$ respectively, and $P_0(x)$ is the initial state distribution. The state and action spaces could be either discrete or continuous, and the transition and reward functions may be either stochastic or deterministic. A dataset is composed of a set of $N$ observed transitions $\mathcal{D} = \{ (x^{(n)}, a^{(n)} ,r^{(n)} ,x'^{(n)}) \}_{n=1}^N$, and we use $\tau^{(n)}$ to denote a single transition. The subset $\mathcal{D}_0 \subseteq \mathcal{D}$ denotes initial transitions from which $P_0$ can be estimated. Note that although we treat all data points as observed transitions, in most practical applications data is collected in the form of trajectories rather than individual transitions. A policy is a function $\pi : \mathcal{X} \times \mathcal{A} \rightarrow \left[0, 1 \right]$ that gives the probability of taking each action at a given state $(\sum_{a \in \mathcal{A}} \pi(a|x) = 1)$. The value of a policy is the expected return collected by following the policy, $v^\pi \coloneqq \mathrm{E} [ g_T | a_t \sim \pi]$, where expectations are taken with respect to the MDP and $g_T \coloneqq \sum_{t=0}^T \gamma^t r_t$ denotes the total trajectory return (sum of discounted rewards). The state-action value function $q^{\pi}(x, a)$ is the expected return for taking action $a$ at state $x$, and afterwards following $\pi$ in selecting future actions. The goal of off-policy evaluation is to estimate the value of an \emph{evaluation} policy, $\pi_e$, using data collected under a different \emph{behavior} policy, $\pi_b$.
In this work, we are only interested in estimating $v^{\pi_e}$ and $q^{\pi_e}$, and will therefore drop the superscript for brevity. We will also limit ourselves to deterministic evaluation policies. For the purpose of kernel-based value function approximation, we define a distance metric, $d((x^{(i)}, a^{(i)}),(x^{(j)}, a^{(j)}))$ over $\mathcal{X} \times \mathcal{A}$. In this work, for discrete action spaces, we will assume $d((x^{(i)}, a^{(i)}),(x^{(j)}, a^{(j)})) = \infty$ when $a^{(i)} \neq a^{(j)}$, but this is not required for any of the derivations. \paragraph{Fitted Q-Evaluation} Fitted Q-Evaluation \citep{le2019batch} models the q-function of $\pi_e$ and can be thought of as dynamic programming on an observational dataset to compute the value of a given evaluation policy. It is similar to the better-known fitted Q-iteration method (FQI) \citep{ernst2005tree}, except it is performed offline on observational data, and the target is used for evaluation of a given policy rather than for optimization. FQE performs a sequence of supervised learning steps where the inputs are state-action pairs, and the targets at each iteration are given by $y_i(x, a) = r + \gamma \hat{q}_{i-1}(x', \pi_e(x'))$, where $\hat{q}_{i-1}(x, a)$ is the estimator (from a function class $\mathcal{F}$) that best estimates $y_{i-1}(x, a)$. For more information, see \citet{le2019batch}. \paragraph{Importance sampling} A popular class of OPE estimators consists of IS methods. These methods estimate the value of a policy by taking a sample average of trajectory returns, properly weighted to account for the difference between $\pi_b$ and $\pi_e$. The standard IS estimator is unbiased but has high variance, and there are many variants of this estimator which trade off bias and variance. For more information see \citep{precup2000eligibility, jiang2015doubly, thomas2016data}.
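The standard trajectory-wise IS estimator can be sketched in a few lines (our own illustration, not code from the cited works; weights multiply per step and the return is the discounted reward sum):

```python
def is_estimate(trajectories, pi_e, pi_b, gamma):
    """Ordinary (trajectory-wise) importance sampling sketch.
    trajectories: lists of (x, a, r) steps; pi_e/pi_b: (a, x) -> probability."""
    total = 0.0
    for traj in trajectories:
        w, g = 1.0, 0.0
        for t, (x, a, r) in enumerate(traj):
            w *= pi_e(a, x) / pi_b(a, x)   # cumulative importance weight
            g += gamma ** t * r            # discounted return
        total += w * g
    return total / len(trajectories)
```

Trajectories whose actions are impossible under $\pi_e$ receive zero weight, while the matching ones are upweighted by $1/\pi_b$, which is the source of the estimator's high variance at long horizons.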
\section{OPE diagnostics using influence functions} \subsection{Definition of the influence} We aim to make OPE interpretable and easy to debug by identifying transitions in the data which are highly influential on the estimated policy value. We define the \emph{total influence} of transition $\tau^{(j)}$ as the change in the value estimate if $\tau^{(j)}$ was removed: \begin{equation} I_j \equiv \hat{v}_{-j} - \hat{v}, \end{equation} where $\hat{v}_{-j}$ is the value estimate using the same dataset after removal of $\tau^{(j)}$. In general, for any function of the data $f(\mathcal{D})$ we will use $f(\mathcal{D}_{-j}) \equiv f_{-j}$ to denote the value of $f$ computed for the dataset after removal of $\tau^{(j)}$. Another quantity of interest is the change in the estimated value of $q(x^{(i)}, a^{(i)})$ as a result of removing $\tau^{(j)}$, which we call the \emph{individual influence}: \begin{equation} I_{i, j} \equiv \hat{q}_{-j}(x^{(i)}, a^{(i)}) - \hat{q}(x^{(i)}, a^{(i)}). \end{equation} The total influence of $\tau^{(j)}$ can be computed by averaging its individual influences over the set $\mathcal{D}^*_0$ of all initial state-action transitions in which $a=\pi_e(x)$: \begin{equation} I_j = \frac{1}{|\mathcal{D}^*_0|} \sum_{i \in \mathcal{D}^*_0} I_{i, j}. \end{equation} As we are interested in the robustness of our evaluation, we can normalize the absolute value of the influence of $\tau^{(j)}$ by the estimated value of the policy to provide a more intuitive notion of overall importance: \begin{equation} \tilde{I}_j \equiv \frac{|I_j|}{|\hat{v}|}. \end{equation} \subsection{Diagnosing OPE estimation} With the above definitions of influence functions, we now formulate and discuss guidelines for diagnosing the OPE process for potential problems. \paragraph{No influential transitions: OPE appears reliable.} As a first diagnostic, we check that none of the transitions influence the OPE estimate by more than a specified influence threshold $\tilde{I}_C$, i.e. 
for all $j$ we have $\tilde{I}_j \leq \tilde{I}_C$. In such a case we would output that, to the extent that low influences suggests the OPE is stable, the evaluation appears reliable. That said, we emphasize that our proposed method for evaluating OPE methods is not exhaustive, and there could be many other ways in which OPE could fail. \paragraph{Influential transitions: a human can help.} When there are several influential transitions in the data (defined as transitions whose influence is larger than $\tilde{I}_C$), we present them to domain experts to determine whether they are representative, that is, taking action $a$ in state $x$ is likely to result in transition to $x'$. If the domain experts can validate all influential transitions, we can still have some confidence in the validity of the OPE. If any influential transitions are flagged as unrepresentative or artefacts, we have several options: (1) Declare the OPE as unreliable; (2) Remove the suspect influential transitions from the data and recompute the OPE; (3) Caveat the OPE results as valid only for a subset of initial states that do not rely on that problematic transition. In situations where there is a large number of influential transitions, manual review by experts may be infeasible. As such, it is necessary to present as few transitions as possible while still presenting enough to ensure that any potential artefacts in the data and/or the OPE process are accounted for. In practice, we find it is common to observe a sequence of influential transitions where removing any single transition has the same effect as removing the entire sequence. An example of this is shown schematically in Figure \ref{fig:influential_sequence}. An entire sequence marked in blue and red leads to a region of high reward, and so all transitions in that sequence will have high influence. 
The whole influential sequence appears very different from the rest of the data, and a domain expert might flag it as an outlier to be removed. However, we can present the expert with only the red transition and capture the influence of the blue transitions as well, reducing the number of suspect examples to be manually reviewed. \paragraph{Influential transitions: policy is unevaluatable.} When an influential transition, $\tau^{(j)}$, has no nearest neighbors to $(x'^{(j)}, \pi_e(x'^{(j)}))$, we can determine that the evaluation policy cannot be evaluated, even without review by a domain expert. This claim is a result of the fact that such a situation represents reliance of the OPE on transitions for which there is no overlap between the actions observed in the data and the evaluation policy. However, while the evaluation policy is not evaluatable, the influential ``dead-end'' transitions may still inform experts of what data is required for evaluation to be feasible. It should be noted that the applicability of the diagnostics methods discussed above may change depending on whether the FQE function class is parametric or nonparametric. All function classes lend themselves to highlighting of highly influential transitions. However, the notion of stringing together sequences of neighbors, or looking for red flags in the form of influential transitions with no neighbors to their $(x', \pi_e(x'))$ state action pairs only makes sense for nonparametric models. In the case of parametric models, the notion of neighbors is less important as the influence of removing a transition manifests as a change to the learned parameters which affects the value estimates for the entire domain simultaneously. In contrast, for nonparametric methods, removing a transition locally changes the value of neighboring transitions and propagates through the entire domain through the sequential nature of the environment. 
While we derive efficient ways to compute the influence for both parametric and nonparametric function classes, in the empirical section of this paper we present results for nonparametric kernel-based estimators to demonstrate all diagnostics. \begin{figure}[t] \centering \includegraphics[width=0.3\textwidth]{influential_sequence.pdf} \caption{\textbf{Schematic of an influential sequence.} All transitions in the sequence leading to a high reward have high influence, but flagging just the red transition for inspection will capture the influence of the blue ones as well.}\label{fig:influential_sequence} \end{figure} \subsection{Influence analysis for importance sampling} The approach of using influence analysis to assess the validity of the OPE can be naturally extended to IS methods, with a few small changes. Most IS methods use entire trajectories rather than individual transitions as their basic data input, and therefore for IS we would compute the influence of trajectories rather than transitions. This also implies that we cannot identify obviously unevaluatable policies as described in the previous section. Lastly, it should be noted that for IS the influence is determined not only by the return of a trajectory but also by the importance weights, which may grow exponentially with the horizon. \section{Efficient computation of influence functions} \label{sec:computation_of_influence} A key technical challenge in performing the proposed influence analysis in OPE is computing the influences efficiently. The brute-force approach of removing a transition and recomputing the OPE estimate is clearly infeasible for all but tiny problems, as it requires refitting $N$ models. The computation of influences in RL is also significantly more challenging than in static one-step prediction tasks, as a change in the value of one state has a ripple effect on all other states reachable from it.
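For reference, the brute-force computation follows directly from the definitions; `estimate_value` below is a placeholder for any OPE fitting routine that maps a dataset to $\hat{v}$ (the interface is our own sketch):

```python
def influences_brute_force(transitions, estimate_value):
    """Total influences I_j = v_hat_{-j} - v_hat by leave-one-out refitting.
    estimate_value: callable mapping a list of transitions to a scalar v_hat.
    Requires N refits, hence only feasible on small problems."""
    v_hat = estimate_value(transitions)
    influences = []
    for j in range(len(transitions)):
        v_minus_j = estimate_value(transitions[:j] + transitions[j + 1:])
        influences.append(v_minus_j - v_hat)
    return v_hat, influences

def flag_influential(v_hat, influences, threshold):
    """Indices whose normalized influence |I_j| / |v_hat| exceeds the threshold I_C."""
    return [j for j, I in enumerate(influences) if abs(I) / abs(v_hat) > threshold]
```

This quadratic-cost baseline is what the closed-form derivations in this section replace.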
We describe computationally efficient methods to compute the influence functions in two classes of FQE: kernel-based and linear least squares, as well as several popular IS estimators. Unlike previous works (e.g. \citep{koh2017understanding}) that approximate the influence function for a broad class of black-box functions, we provide closed-form, analytic solutions for the exact influence function for a broad range of OPE methods. \subsection{Kernel-Based FQE} \label{sec:kernel_based_fqe} In kernel-based FQE, the function class we choose for estimating the value function of $\pi_e$ at a point in state-action space is based on similar observations within that space. For simplicity, in the main body of this work we estimate the value function as an average of all its neighbors within a ball of radius $R$, i.e. \begin{align} \hat{q}(x, a) = \frac{1}{N_{(x, a)}} \sum_{i} \hat{q}(x^{(i)}, a^{(i)}) \end{align} where the summation is performed over all $(x^{(i)}, a^{(i)})$ such that $d((x^{(i)}, a^{(i)}),(x, a)) < R$ and $N_{(x, a)}$ is the number of such points. Extension to general kernel functions is straightforward. We introduce a matrix formulation for performing FQE which allows for efficient computation of the influence functions. \paragraph{Matrix formulation of nearest-neighbors based FQE.} We define $\Delta_{i j}$ as the event that the starting state-action of $\tau^{(j)}$ is a neighbor of the starting state-action of $\tau^{(i)}$, i.e. $d((x^{(i)}, a^{(i)}), (x^{(j)}, a^{(j)})) < R$. Similarly, we define $\Delta_{i' j}$ as the event that the starting state-action of $\tau^{(j)}$ is a neighbor of the next-state and corresponding $\pi_e$ action of $\tau^{(i)}$, i.e. $d((x'^{(i)}, \pi_e(x'^{(i)})), (x^{(j)}, a^{(j)})) < R$. We also define the counts for numbers of neighbors of transitions as $N_i = \sum_{j=1}^N \mathbb{I} (\Delta_{i j})$ and $N_{i'} = \sum_{j=1}^N \mathbb{I} (\Delta_{i' j})$, where $\mathbb{I}(e)$ is the indicator function.
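These events and counts can be tabulated directly from a batch of transitions. The brute-force $O(N^2)$ sketch below is our own illustration (transitions as $(x, a, r, x')$ tuples with non-terminal $x'$, a deterministic $\pi_e$, and any metric $d$ that returns $\infty$ across different actions):

```python
def neighbor_structure(transitions, pi_e, dist, R):
    """Tabulate the events Delta_{ij}, Delta_{i'j} and counts N_i, N_{i'}."""
    N = len(transitions)
    sa = [(t[0], t[1]) for t in transitions]             # (x^(i), a^(i))
    sa_next = [(t[3], pi_e(t[3])) for t in transitions]  # (x'^(i), pi_e(x'^(i)))
    delta = [[dist(sa[i], sa[j]) < R for j in range(N)] for i in range(N)]
    delta_p = [[dist(sa_next[i], sa[j]) < R for j in range(N)] for i in range(N)]
    counts = [sum(row) for row in delta]        # N_i
    counts_p = [sum(row) for row in delta_p]    # N_{i'}
    return delta, delta_p, counts, counts_p
```

A zero in `counts_p` immediately flags a transition whose $(x'^{(j)}, \pi_e(x'^{(j)}))$ pair has no neighbors, i.e. the dead-end case from the diagnostics above.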
To perform nearest-neighbors FQE using matrix multiplications, we first construct two nearest-neighbors matrices: one linking each state-action pair to its neighboring state-action pairs, and one linking each next-state paired with its subsequent action under $\pi_e$ to neighboring state-action pairs. Formally: \begin{equation} \mathbf{M}_{i j} = \frac{\mathbb{I} (\Delta_{i j})}{N_i}; \quad \mathbf{M}'_{i j} = \frac{\mathbb{I} (\Delta_{i' j})}{N_{i'}}. \end{equation} The $N \times N$ matrices $\mathbf{M}$ and $\mathbf{M}'$ can be easily computed from the data, and are used to compute the value function for all state-action pairs using the following proposition, the proof of which is given in Appendix \ref{appendix:proof_matrix_fqe}. \begin{proposition} \label{prop:matrix_fqe} For all transitions in the dataset, the values for corresponding state-action pairs are given by \begin{align} \mathbf{\hat{q}}'_t &= \left( \sum_{t'=1}^t \gamma^{t'-1} \mathbf{M}'^{t'} \right) \mathbf{r} \equiv \mathbf{\Phi}'_t \mathbf{r} \label{eq:q_prime_estimate_main} \\ \mathbf{\hat{q}}_t &= \mathbf{M} \left( \sum_{t'=1}^t \left( \gamma \mathbf{M}' \right)^{t'-1} \right) \mathbf{r} \equiv \mathbf{\Phi}_t \mathbf{r}. \label{eq:q_estimate} \end{align} where $\hat{q}'_{t, i}$ and $\hat{q}_{t, i}$ are the estimated policy values at $(x'^{(i)}, \pi_e(x'^{(i)}))$ and $(x^{(i)}, a^{(i)})$, respectively, for $\tau^{(i)}$. \end{proposition} In future derivations, we will drop the time dependence of $\mathbf{\Phi}$ and $\hat{\mathbf{q}}$ on $t$. This is justified when there are well-defined ends of trajectories with no nearest neighbors (or equivalently, trajectories end in an absorbing state), and the number of iterations in the FQE is larger than the longest trajectory. \paragraph{Influence function computation.} Removal of a transition $\tau^{(j)}$ from the dataset can affect $\hat{q}_i$ in two ways. First, $\hat{q}_i$ is a mean over all of its neighbors, indexed by $k$, of $r^{(k)} + \gamma \hat{q}'_k$.
Thus if $(x^{(j)}, a^{(j)})$ is one of the $M^{-1}_{ij}$ neighbors of $(x^{(i)}, a^{(i)})$, removing it from the dataset will change the value of $\hat{q}_i$ by $\frac{\hat{q}_{i} - \left( r^{(j)} + \gamma \hat{q}'_{j} \right)}{M_{ij}^{-1} - 1}$. The special case of $M^{-1}_{ij} = 1$ does not pose a problem in the denominator: given that $i \neq j$ and every transition is a neighbor of itself, if $(x^{(j)}, a^{(j)})$ is a neighbor of $(x^{(i)}, a^{(i)})$, then $M^{-1}_{ij} \geq 2$. The second way in which removing $\tau^{(j)}$ influences $\hat{q}_i$ is through its effect on intermediary transitions. Removal of $\tau^{(j)}$ changes the estimated value $\hat{q}'_k$ of every $(x'^{(k)}, \pi_e(x'^{(k)}))$ that $(x^{(j)}, a^{(j)})$ is a neighbor of by $\frac{\hat{q}'_{k} - \left( r^{(j)} + \gamma \hat{q}'_{j} \right)}{M'^{-1}_{kj} - 1}$. Multiplying this difference by $\gamma$ yields the difference in $\hat{q}_{k}$ due to removal of $\tau^{(j)}$. A change in the value of $\hat{q}_{k}$ is identical in its effect on the value estimation to changing $r^{(k)}$, a change which is mediated to $\hat{q}_i$ through $\Phi_{ik}$. In the special case that $(x^{(j)}, a^{(j)})$ is the only neighbor of $(x'^{(k)}, \pi_e(x'^{(k)}))$, the value estimate $\hat{q}'_k$ changes from $\hat{q}_j$ to zero. Combining the two ways in which removal of $\tau^{(j)}$ changes the estimated value $\hat{q}_i$ yields the individual influence: \begin{align} I_{i, j} &= \mathbb{I}(\Delta_{ij}) \frac{\hat{q}_{i} - \left( r^{(j)} + \gamma \hat{q}'_{j} \right)}{M_{ij}^{-1} - 1} + \sum_{k:\Delta_{k'j}} I_{i, j}^{(k)}, \end{align} where we define \begin{align} I_{i, j}^{(k)} = \begin{cases} \gamma \Phi_{ik} \frac{\hat{q}'_{k} - \left( r^{(j)} + \gamma \hat{q}'_j \right)}{M'^{-1}_{kj} - 1} & M'^{-1}_{kj} > 1 \\ \gamma \Phi_{ik} \hat{q}_j & M'^{-1}_{kj} = 1.
\end{cases} \end{align} \paragraph{Computational complexity.} The matrix formulation of kernel-based FQE allows us to compute an individual influence in constant time, making influence analysis of the entire dataset possible in $\mathcal{O}(N |\mathcal{D}^*_0|)$ time. Furthermore, the sparsity of $\mathbf{M}$ and $\mathbf{M}'$ allows the FQE itself to be done in $\mathcal{O}(N^2 T)$. See Appendix \ref{appendix:kernel_fqe_complexity} for a full discussion. \subsection{Linear Least Squares FQE} In linear least squares FQE, the policy value function $\hat{q}(x,a)$ is approximated by a linear function $\hat{q}(x,a) = \pmb{\psi}(x,a)^\top \mathbf{w}$ where $\pmb{\psi}(x,a)$ is a $D$-dimensional feature vector for a state-action pair. Let $\mathbf{\Psi} \in \mathbb{R}^{N \times D}$ be the sample matrix of $\pmb{\psi}(x,a)$. Define the vector $\pmb{\psi}_{\pi}(x) = \pmb{\psi}(x, \pi_e(x))$ and let $\mathbf{\Psi}_p \in \mathbb{R}^{N \times D}$ be the sample matrix of $\pmb{\psi}_{\pi}(x')$. The least-squares solution of $\mathbf{w}$ is $(\mathbf{\Psi}^\top \mathbf{\Psi} - \gamma \mathbf{\Psi}^\top \mathbf{\Psi}_p)^{-1}\mathbf{\Psi}^\top \mathbf{r}$ (see Appendix \ref{prop:ls_fqe} for the full derivation). Let $\mathbf{w}_{-j}$ be the solution of linear least squares FQE after removing $\tau^{(j)}$, and let $\mathbf{\Psi}_{-j}$, $\mathbf{r}_{-j}$, and $\mathbf{\Psi}_{p,-j}$ be the corresponding matrices and vectors without $\tau^{(j)}$. Then, $\mathbf{w}_{-j} = (\mathbf{\Psi}_{-j}^\top \mathbf{\Psi}_{-j} - \gamma \mathbf{\Psi}_{-j}^\top \mathbf{\Psi}_{p,-j})^{-1}\mathbf{\Psi}_{-j}^\top \mathbf{r}_{-j}$. The key challenge of computing the influence function is computing $\mathbf{w}_{-j}$ in an efficient manner that avoids recomputing a costly matrix inverse for each $j$.
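The repeated inversion can be avoided with two rank-one (Sherman--Morrison) updates per removed transition. A minimal numpy sketch of this strategy, with random features standing in for $\pmb{\psi}$ and the discount $\gamma$ kept explicit rather than folded into $\mathbf{\Psi}_p$:

```python
import numpy as np

def lsfqe_solution(Psi, Psi_p, r, gamma):
    """Least-squares FQE: w = (Psi^T Psi - gamma Psi^T Psi_p)^{-1} Psi^T r.
    Rows of Psi are psi(x_i, a_i); rows of Psi_p are psi(x'_i, pi_e(x'_i))."""
    C = Psi.T @ Psi - gamma * (Psi.T @ Psi_p)
    return np.linalg.solve(C, Psi.T @ r)

def loo_solutions(Psi, Psi_p, r, gamma):
    """All leave-one-out solutions w_{-j} via two Sherman-Morrison rank-one
    updates of C^{-1} per transition: O(N D^2) after a single D x D
    inversion, instead of N full refits."""
    N, D = Psi.shape
    C = Psi.T @ Psi - gamma * (Psi.T @ Psi_p)
    Cinv = np.linalg.inv(C)
    b = Psi.T @ r                      # independent of j; cached once
    W = np.empty((N, D))
    for j in range(N):
        psi, psi_p = Psi[j], Psi_p[j]
        # B_j = (C - psi psi^T)^{-1}; note C is not symmetric in general
        u = Cinv @ psi
        B = Cinv + np.outer(u, psi @ Cinv) / (1.0 - psi @ u)
        # (C_{-j})^{-1} = (B_j^{-1} + gamma psi psi_p^T)^{-1}
        v = B @ psi
        Cmj_inv = B - gamma * np.outer(v, psi_p @ B) / (1.0 + gamma * (psi_p @ v))
        W[j] = Cmj_inv @ (b - r[j] * psi)
    return W
```

The individual influence then follows as $I_{i,j} = \pmb{\psi}(x^{(i)}, a^{(i)})^\top(\mathbf{w}_{-j} - \mathbf{w})$.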
Let $\mathbf{C}_{-j} = (\mathbf{\Psi}_{-j}^\top \mathbf{\Psi}_{-j} - \gamma \mathbf{\Psi}_{-j}^\top \mathbf{\Psi}_{p,-j})$ and $\mathbf{C} = (\mathbf{\Psi}^\top \mathbf{\Psi} - \gamma \mathbf{\Psi}^\top \mathbf{\Psi}_p)$. We compute $\mathbf{w}_{-j}$ as follows: \begin{align} \mathbf{B}_{j} &\leftarrow \mathbf{C}^{-1} + \frac{\mathbf{C}^{-1} \pmb{\psi}_j \pmb{\psi}_j^\top \mathbf{C}^{-1}}{1- \pmb{\psi}_{j}^{\top}\mathbf{C}^{-1}\pmb{\psi}_{j}}\\ \left(\mathbf{C}_{-j}\right)^{-1} &\leftarrow \mathbf{B}_{j} - \frac{\gamma \mathbf{B}_{j} \pmb{\psi}_j \pmb{\psi}_{\pi,j}^\top \mathbf{B}_{j}}{1+ \gamma \pmb{\psi}_{\pi,j}^{\top}\mathbf{B}_{j}\pmb{\psi}_{j}} \\ \mathbf{w}_{-j} &\leftarrow \left(\mathbf{C}_{-j}\right)^{-1}\left( \mathbf{\Psi}^\top \mathbf{r} - r^{(j)} \pmb{\psi}_j \right) \end{align} The proof of correctness is in Proposition \ref{prop:ls_fqe_influence} in Appendix \ref{appendix:ls_fqe}. The individual influence function is then simply: \begin{align} I_{i,j} = \pmb{\psi}(x^{(i)},a^{(i)})^\top (\mathbf{w}_{-j} - \mathbf{w}). \end{align} \paragraph{Computational complexity.} The bottleneck of computing $\mathbf{w}_{-j}$ is the matrix multiplication of $D \times D$ matrices, which takes at most $\mathcal{O}(D^{3})$. All the other matrix multiplications involving size $N$, e.g. $\mathbf{\Psi}^\top \mathbf{r}$, do not depend on $j$ and can be cached from the original OPE. Thus, the overall complexity for computing $I_{i,j}$ for all $i$ and $j$ is $\mathcal{O}(ND^{3})$. Assuming $N>D$, the complexity of the original OPE algorithm is $\mathcal{O}(ND^2)$, where the bottleneck is computing $\mathbf{\Psi}^\top \mathbf{\Psi}$. \subsection{Importance Sampling} IS methods are essentially weighted averages over returns of trajectories, and therefore computing the total influence of a trajectory in a dataset can easily be performed in constant time, as long as certain values are cached.
For example, the influence of the $j^{th}$ trajectory for standard IS is \begin{equation} I_j = \frac{1}{N-1} \left( \hat{v} - w_{0:T}^{(j)} g_T^{(j)} \right), \end{equation} where $N$ is the number of trajectories, and $w_{0:T}^{(j)}$ and $g_T^{(j)}$ are the IS weight and return of the $j^{th}$ trajectory, respectively. In Appendix \ref{appendix:is_computation} we present the derivation of the influence for IS, WIS, PDIS, DR and WDR estimators. \section{Illustration of influence functions in a sequential setting} \label{sec:intuition} We now demonstrate and give intuition for how the influence behaves in an RL setting. For the demonstrations and experiments presented throughout the rest of the paper we use the kernel-based FQE method. Several factors determine the influence of a transition. For transitions to be influential they must have actions which are possible under the evaluation policy and form links in sequences which result in returns different from the expected value. Furthermore, transitions will be more influential the fewer neighbors they have. To demonstrate this intuition we present in Figure \ref{fig:intuition} trajectories from a 2D continuous navigational domain\footnote{Code for reproducing the results in this paper can be found at https://github.com/dtak/interpretable\_ope\_public.git}. The agent starts at the origin and takes noisy steps of length $1$ at $45^{\circ}$ to the axes. The reward for a given transition is a function of the state and has the shape of a Gaussian centered along the approximate path of the agent, represented as the background heat map in Figure \ref{fig:intuition} (top), where observed transitions are drawn as black line segments. Because distances for the FQE are computed in the state-action space, in this example all actions in the data are the same to allow for distances to be visualized in 2D.
To illustrate how influence is larger for transitions with few neighbors, we removed most of the transitions in two regions (denoted II and III), and compared the distribution of influences in these regions with influences in a data-dense region (denoted I). Figure \ref{fig:intuition} (bottom) shows the distribution over 200 experiments (in each experiment, new data is generated) of the influences of transitions in the different regions. The influence is much higher for transitions in sparse regions with few neighbors, as can be seen by comparing the distributions in regions I and II. This is a desired property, as in analysis of the OPE process, we would like to be able to present domain experts with transitions that have few neighbors, where the sampling variance of a particular transition could have a large effect on evaluation. In region III, despite the fact that the observations examined also have very few neighbors, their influence is extremely low, as they do not lead to any regions where rewards are gained by the agent. \begin{figure}[t] \centering \subfigure{\includegraphics[width=0.35\textwidth]{intuition_for_influence.pdf}} \subfigure{\includegraphics[width=0.35\textwidth]{intuition_for_influence_boxplots.pdf}} \caption{\textbf{Conceptual demonstration on a 2D domain.} For transitions in the data to have high influence, they must agree with the evaluation policy and lead to rewarding regions in the state-action space.
Additionally, the influence of transitions decreases with the number of their close neighbors.} \label{fig:intuition} \end{figure} \section{Experiments} \subsection{Medical cancer simulator} \label{sec:exp_cancer} \begin{figure}[t] \centering \subfigure[No influential transitions]{\includegraphics[width=0.4\textwidth]{cancer_no_influential_transitions_partial.pdf}} \subfigure[Dead end sequence]{\includegraphics[width=0.4\textwidth]{cancer_dead_end_partial.pdf}} \subfigure[Highlighted reliable transitions]{\includegraphics[width=0.4\textwidth]{cancer_influential_good_transitions_partial.pdf}} \subfigure[Highlighted problematic transitions]{\includegraphics[width=0.4\textwidth]{cancer_recognizable_bad_transitions_partial.pdf}} \caption{\textbf{Influence analysis for simulated cancer data.} Analysis of synthetic cancer simulations demonstrates how influence analysis yields different diagnostics of the OPE process.} \label{fig:cancer} \end{figure} \begin{figure}[t] \centering \subfigure[Influence Distribution]{\includegraphics[width=0.35\textwidth]{is_influence_histogram.pdf}} \subfigure[IS]{\includegraphics[width=0.4\textwidth]{is_influence_is.pdf}} \subfigure[WIS]{\includegraphics[width=0.4\textwidth]{is_influence_wis.pdf}} \subfigure[PDIS]{\includegraphics[width=0.4\textwidth]{is_influence_pdis.pdf}} \caption{\textbf{IS influence analysis for simulated cancer data.} For the same dataset, different estimators have different influence distributions, and for each estimator different trajectories have high influence.} \label{fig:is} \end{figure} To demonstrate the different ways in which influence analysis can allow domain experts to either increase our confidence in the validity of OPE or identify instances where it is invalid, we first present results on a simulator of cancer dynamics.
The 4-dimensional states of the simulator approximate the dynamics of tumor growth, with actions consisting of administration of chemotherapy, and each timestep representing one month. See \citet{ribba2012tumor} for details. In Figure \ref{fig:cancer} we present four cases in which we attempt to evaluate the policy of treating a patient for 15 months and then discontinuing chemotherapy until the end of treatment at 30 months. Each subplot in Figure \ref{fig:cancer} shows two of the four state variables as a function of time, under different conditions which might make evaluation more difficult, such as difference in behavior policy or stochasticity in the environment. The heavy black line represents the expectation of each state dimension at each time-step under the evaluation policy, while the grey lines represent observed transitions under the behavior policy, which is $\epsilon$-greedy with respect to the evaluation policy. In all figures, we highlight in red all influential transitions our method would have flagged for review by domain experts $(\tilde{I}_C = 0.05)$. \paragraph{Case 1: OPE seems reliable.} Figure \ref{fig:cancer}(a) represents a typical example where the OPE can easily be trusted. Despite the large difference between the evaluation and behavior policy $(\epsilon = 0.3)$, enough trajectories have been observed in the data to allow for proper evaluation, and no transition is flagged as being too influential. The value estimation error in this example is less than $1\%$ and our method correctly labels this dataset as reliable. \paragraph{Case 2: Unevaluatable.} Figure \ref{fig:cancer}(b) is similar in experimental conditions to (a) ($\epsilon = 0.3$ and deterministic transitions), but with less collected data, so that the observations needed to properly estimate the dynamics are not in the data. This can be seen by the lack of overlap between the observed transitions and the expected trajectory, and results in a $38\%$ value estimation error.
In real life we will not know what the expected trajectory under the evaluation policy looks like, and therefore will not be able to make the comparison and detect the lack of overlap between transitions under the evaluation and behavior policies. However, our method highlights a very influential sequence which terminates at a dead-end, and thus will correctly flag this dataset as not sufficient for evaluation. Our method in this case is confident enough to dismiss the results of evaluation without the need for domain experts, but can still inform experts on what type of data is lacking in order for evaluation to be feasible. \paragraph{Case 3: Humans might help.} In Figures \ref{fig:cancer}(c-d), $\epsilon = 0.3$, but the dynamics have different levels of stochasticity. The less stochastic dynamics in Figure~\ref{fig:cancer}(c) allow for relatively accurate evaluation ($8\%$ error), but our method identifies several influential transitions which must be presented to a domain expert. These transitions lie on the expected trajectory, and thus a clinician would verify that they represent a typical response of a patient to treatment. This is an example in which our method would allow a domain expert to verify the validity of the evaluation by examining the flagged influential transitions. Conversely, in Figure~\ref{fig:cancer}(d) some extreme outliers lead to a large estimation error ($23\%$ error). The influential transitions identified by our method are exactly those which start close to the expected trajectory but deviate significantly from the expected dynamics. A domain expert presented with these transitions would easily be able to note that the OPE heavily relies on atypical patients and rightly dismiss the validity of evaluation.
To summarize, we demonstrated that analysis of influences can either validate or invalidate the evaluation without the need for domain experts, and in intermediate cases present domain experts with the correct queries required to gain confidence in the evaluation results or dismiss them. \paragraph{Influence analysis for IS - Influence is a method-specific quantity.} In Figure \ref{fig:is} we present influence analysis results for the cancer environment, with different importance sampling methods. Unlike the FQE experiment where we performed influence analysis of the same estimator for different datasets, here we analyze the same dataset for three different OPE estimators: IS, WIS, and PDIS. In Figure \ref{fig:is} (a) we plot the distribution of the influence of all trajectories in the data, and see that the distributions are qualitatively different for each estimator. Furthermore, in Figure~\ref{fig:is} (b-d) we highlight the 5 most influential trajectories for each estimator, and see that they are different for each estimator. The key point we wish to highlight is that influence analysis identifies features of the interaction between a dataset and an estimator, and not of the data alone. This makes sense, as different OPE methods are robust or sensitive to different types of noise or artefacts in the data. \subsection{Analysis of real ICU data - MIMIC-III} \begin{figure*}[t] \centering \includegraphics[width=0.43\textwidth]{ID-226662_trans-8.pdf} \includegraphics[width=0.43\textwidth]{ID-233975_trans-10.pdf} \caption{Influence analysis on our real-world dataset discovered six transitions in the evaluation dataset that were especially influential on our OPE. We display two of them in this figure, see Appendix \ref{sec:additional-mimic-results} for the remaining four.} \label{fig:mimic} \end{figure*} To show how influence analysis can help debug OPE for a challenging healthcare task, we consider the management of acutely hypotensive patients in the ICU.
Hypotension is associated with high morbidity and mortality \citep{jones2006emergency}, but management of these patients is not standardized as ICU patients are heterogeneous. Within critical care, there is scant high-quality evidence from randomized controlled trials to inform treatment guidelines \citep{de2018unexplained,girbes2019time}, which provides an opportunity for RL to help learn better treatment strategies. In collaboration with an intensivist, we use influence analysis to identify potential artefacts when performing OPE on a clinical dataset of acutely hypotensive patients. \paragraph{Data and evaluation policy.} Our data source is a subset of the publicly available MIMIC-III dataset \citep{johnson2016mimic}. See Appendix \ref{appendix:mimic_details} for full details of the data preprocessing. Our final dataset consists of 346 patient trajectories (6777 transitions) for learning a policy and another 346 trajectories (6863 transitions) for evaluation of the policy via OPE and influence analysis. Our state space consists of 29 relevant clinical variables, summarizing current physiological condition and past actions. The two main treatments for hypotension are administration of an intravenous (IV) fluid bolus or initiation of vasopressors. We bin doses of each treatment into 4 categories for ``none'', ``low'', ``medium'', and ``high'', so that the full action space consists of 16 discrete actions. Each reward is a function of the next blood pressure (MAP) and takes values in $[-1,0]$. As an evaluation policy, we use the most common action of a state's 50 nearest neighbors. This setup is equivalent to constructing a decision assistance tool for clinicians by recommending the common practice action for patients, and using OPE combined with influence analysis to estimate the efficacy of such a tool. See Appendix \ref{appendix:mimic_details} for more details on how we set up the RL problem formulation, and for the kernel function used to compute nearest-neighbors.
\paragraph{Presenting queries to a practicing intensivist.} Running influence analysis flags 6 influential transitions $(\tilde{I}_C = 0.05)$. We show 2 of these transitions in Figure \ref{fig:mimic} and the rest in Appendix \ref{sec:additional-mimic-results}. While this analysis highlights individual transitions, the figures display additional context before and after the suspect transition to help the clinician understand what might be going on. In Figure \ref{fig:mimic}, each column shows a transition flagged by influence analysis. The top two rows show actions taken (actual treatments in the top row and binned actions in the second row). The remaining three rows show the most important state variables that inform the clinicians' decisions: blood pressure (MAP), urine output, and level of consciousness (GCS). For these three variables, the abnormal range is shaded in red, with the blood pressure shading darker, highlighting its direct relationship with the reward. Vertical grey lines represent timesteps, and the highlighted influential transition is shaded in grey. \paragraph{Outcome: Identifying and removing an influential, buggy measurement.} The two transitions in Figure \ref{fig:mimic} highlight potential problems in the dataset that have a large influence. In the first transition (left), a large drop in blood pressure is observed at the starting time of this transition, potentially indicating a dangerous hypotensive state. Surprisingly, the patient received no treatment, and this unusual transition has a 29\% influence on the OPE estimate. Given additional context just before and after the transition, showing otherwise stable MAP and GCS (patient was conscious and alert) as well as a normal urine output, the intensivist determined the single low MAP value was likely either a measurement error or a clinically insignificant transient episode of hypotension.
After correcting the outlier MAP measurement to its most recent normal value (80mmHg) and then rerunning FQE and the influence analysis, the transition no longer has high influence and was not flagged. \paragraph{Outcome: Identifying and correcting a temporal misalignment.} The second highlighted transition (right) features a sudden drop in GCS and worsening MAP values, indicating a sudden deterioration of the patient's state, but treatment is not administered until the next timestep. The intensivist attributed this finding to a time stamp recording error. Again, influence analysis identified an inconsistency in the original data which had undue impact on evaluation. After correcting the inconsistency by shifting the two fluid treatments back by one timestep each, we found that the transition no longer had high influence and was not flagged. \section{Discussion} A key aim of this paper is to formulate a framework for using domain expertise to help in evaluating the trustworthiness of OPE methods for noisy and confounded observational data. The motivation for this research direction is the intersection of two realities: for messy real-world applications, the data itself might never be enough; and domain experts will always need to be involved in the integration of decision support tools, so we should incorporate their expertise into the evaluation process. We showcased influence analysis as one way of performing this task for value-based and IS OPE, but emphasize that such measures can and should be incorporated into other methods as well. For example, when modeling the dynamics in model-based OPE, the results can be tested for their agreement with expert intuition. We stress that research to integrate human input into OPE methods to increase their reliability complements, and does not replace, the approaches for estimating error bounds and uncertainties over the errors of OPE estimates. 
The fact that traditional theoretical error bounds rely so heavily on assumptions which are generally impossible to verify from the data alone highlights the need for other techniques for gauging to what extent these assumptions hold.
\section{\label{sec:Intro}Introduction} Coherent processes in atoms and molecules yield many interesting and practical phenomena such as coherent population trapping~\cite{Arimondo1996}, lasing without inversion~\cite{Kocharovskaya1992}, and electromagnetically induced transparency (EIT)~\cite{Marangos1998}. Pioneering EIT experiments employed alkali metals due to their simple electronic level structure and long-lived coherence, but coherent processes have recently been investigated in other systems such as quantum dots~\cite{Xu2008}, nanoplasmonics~\cite{Liu2009}, superconducting circuits~\cite{Kelly2010}, metamaterials~\cite{Papasimakis2008,Zhang2008}, and optomechanics~\cite{SMC+11}. EIT is also observed for classical coupled oscillators, e.g.\ inductively or capacitively coupled electrical resonator circuits~\cite{a:LambRetherford1951,a:Nussenzveig2002}. EIT systems could enable new practical applications of coherent processes, but the lack of the time-scale separations characteristic of alkalis~\cite{FleischhauerMarangos2005} obfuscates signatures of coherent processes. Here we focus on EIT, where transparency is induced coherently by a pump field even if the pump is arbitrarily weak. EIT is crucial for optically-controlled slowing of light~\cite{Hau1999} and optical storage~\cite{Philips2001} and is achieved by Fano interference~\cite{Fano1961} between two atomic transitions. Without Fano interference, EIT is simply Autler-Townes splitting (ATS), equivalently the ac-Stark effect~\cite{AutlerTownes1955}, corresponding to a doublet structure in the atomic-absorption profile that requires strong pumping. In essence, both EIT and ATS lead to the presence of a transparency window due to electromagnetic (EM) pumping, but the mechanisms are entirely different. Here we introduce an objective test for use on empirical data to discern EIT from ATS in any experiment.
This test is based on Akaike weights for the models~\cite{BurnhamAnderson2002} and reveals whether EIT or ATS has been observed or whether the operating conditions make the data inconclusive. Fano's seminal study of two nearly-resonant modes decaying via a common channel differed from the prevalent normal-mode analyses at the time: he showed that this shared decay channel yields additional cross-coupling between modes mediated by the common reservoir, which explained the anomalous asymmetric lineshape for electrons scattering from helium~\cite{Fano1961}. In fact any response that combines multiple modes can have Fano interference, which can be extremely sharp and highly sensitive to variability in the system~\cite{Hao2008}. Harris and Imamo\u{g}lu showed that hybrid ``atom+field'' modes in the dressed-state formalism interact with the same reservoir and hence readily satisfy the Fano interference conditions~\cite{ImamogluHarris89}, thereby producing a transparency window in the absorption profile~$A(\delta)$ for~$\delta$ the two-photon detuning frequency. This effect was originally demonstrated for a $\Lambda$-type three-level atom (TLA) with energy levels~$|a\rangle$, $|b\rangle$, and $|c\rangle$ and judiciously chosen rates as shown in Fig.~\ref{fig:LS}(a). Dressed-state frequency separation is proportional to the pump-field Rabi frequency~$\Omega$, and this separation yields ATS in the absence of Fano interference. Fano interference is negligible for large~$\Omega$, but the response must transition smoothly from ATS to EIT as $\Omega$ decreases and the dressed states begin to merge, thereby strengthening the Fano interference effect. Under EIT conditions, complete transparency holds even in the weak-pump limit.
\begin{figure} \includegraphics[width=\columnwidth]{Fig1.eps} \caption{(Color online) (a)~$\Lambda$-type TLA with probe (pump) driving field with Rabi frequency~$\alpha$ ($\Omega$), which probes (drives) the $|a\rangle\leftrightarrow|b\rangle$ ($|a\rangle\leftrightarrow|c\rangle$) transition. (b-d)~Absorption~$A$ vs.\ two-photon detuning~$\delta$ (red dots) for resonant ($\Delta=0$) pump with $\Gamma_\text{ab}=1$, $\Gamma_\text{bc}=0.1$ and various $\Omega$ with best fits to $A_\text{EIT}(C_+,C_-,\gamma_+,\gamma_-)$ (blue solid) and $A_\text{ATS}(C,\gamma,\delta_{0})$ (green dashed) models calculated for (b)~weak $\Omega$ with good fit to $A_\text{EIT}(2.14,1.89,0.581,0.520)$ and poor fit to closest $A_\text{ATS}(0.532,0.633,0.712)$, (c)~intermediate $\Omega$ with poor fit to closest $A_\text{ATS}(0.472,0.512,1.03)$ as well as $A_\text{EIT}(88.3,88.3,0.75,0.752)$, and (d)~strong $\Omega$ with poor fit to closest $A_\text{EIT}(1.3\times 10^3,1.3\times 10^3,2.92,2.92)$ and good fit to $A_\text{ATS}(0.499,0.521,3.05)$. } \label{fig:LS} \end{figure} There are four TLAs: $\Lambda$, V, and two ladder ($\Xi$) cascade systems with upper- and lower-level driving, respectively. Only $\Lambda$- and upper-level-driven $\Xi$ TLAs exhibit Fano interference-induced suppression of absorption~\cite{LeeScully2000}. For simplicity, we focus on the $\Lambda$ TLA to show how the decaying dressed-states formalism yields distinctive absorption profiles characteristic of EIT and ATS~\cite{AnisimovKocharovskaya2008,a:Abi-Salloum2010}, but our approach to discern EIT from ATS is independent of the choice of TLA so directly applicable to upper-level-driven $\Xi$-type TLA. We use a semiclassical description with decay and dephasing rates manually inserted. The EM response to the probe is proportional to the probe-induced excited coherence corresponding to the off-diagonal TLA density matrix element~$\sigma_\text{ab}$. 
The steady-state solution to linear order in the probe electric field has all population in~$|b\rangle$, so the excited coherence at the probed transition depends only on dephasing rates~$\Gamma_{\text{ab}}$ and~$\Gamma_{\text{bc}}$: $\sigma _{\text{ab}}=\alpha/[\delta +\Delta -\text{i}\Gamma_{\text{ab}}-\Omega^2/(\delta -\text{i}\Gamma _{\text{bc}})]$, with~$\Delta$ the one-photon detuning and~$\alpha$ the probe Rabi frequency~\cite{AnisimovKocharovskaya2008}. Linear absorption~$A\propto\text{Im}(\sigma_{\text{ab}})$, shown in Figs.~\ref{fig:LS}(b-d), has spectral poles $\delta _\pm=-\Delta/2+\text{i}(\Gamma _{\text{ab}}+\Gamma _{\text{bc}})/2\pm[\Omega^2+(\Delta-\text{i}\Gamma_{\text{ab}}+\text{i}\Gamma_{\text{bc}})^2/4]^{1/2}$, which produce resonant contributions to atomic response, $A_{\pm}=S_{\pm}/(\delta-\delta_{\pm})$, with strengths $S_{\pm}=\pm(\delta_{\pm}-\text{i}\Gamma_\text{bc})/(\delta_+-\delta_-)$. These resonant contributions can be attributed to ``decaying-dressed states''~\cite{AnisimovKocharovskaya2008} with frequencies and dephasing rates given by Re$(\delta_{\pm})$ and Im$(\delta_{\pm})$, respectively. Decaying-dressed states arise from the interaction between dressed states with eigenenergies $-\Delta/2\pm(\Omega^2+\Delta^2/4)^{1/2}$ and two reservoirs with decay rates $\Gamma_\text{ab}$ and $\Gamma_\text{bc}$. This interaction is affected by the pump in two ways: separating the dressed states and exciting the $|a\rangle\leftrightarrow|c\rangle$ transition needed for destructive Fano interference with the $|a\rangle\leftrightarrow|b\rangle$ reservoir. Unfortunately, the excited $|a\rangle\leftrightarrow|c\rangle$ transition interacts with the $|b\rangle\leftrightarrow|c\rangle$ reservoir, a contribution which is always positive and thus counteracts absorption suppression. Finally, one-photon detuning further separates the dressed states, thereby weakening Fano interference.
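As a numerical illustration of the expression for $\sigma_\text{ab}$ above (a sketch; the rate values are arbitrary and chosen only to match the regimes discussed in this paper):

```python
def absorption(delta, Omega, Delta=0.0, G_ab=1.0, G_bc=0.1, alpha=1.0):
    """Linear absorption A proportional to Im(sigma_ab) for the
    Lambda-type TLA, with
    sigma_ab = alpha / [delta + Delta - i*G_ab - Omega^2 / (delta - i*G_bc)]."""
    sigma_ab = alpha / (delta + Delta - 1j * G_ab - Omega**2 / (delta - 1j * G_bc))
    return sigma_ab.imag
```

At line center ($\delta=\Delta=0$) this reduces to $A(0)=\alpha\Gamma_\text{bc}/(\Gamma_\text{ab}\Gamma_\text{bc}+\Omega^2)$, so any nonzero pump opens a transparency window at two-photon resonance.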
Strong Fano interference, hence strong EIT, occurs for resonant driving ($\Delta=0$), where the spectral poles exist in three $\Omega$-regions: (i)~the dressed states share a reservoir for $\Omega\leq\Omega_\text{EIT}\equiv\left(\Gamma_{\text{ab}}-\Gamma_{\text{bc}}\right)/2$, (ii)~the dressed states decay into distinct reservoirs for $\Omega\gg\Gamma_\text{ab}$, and (iii)~an intermediate regime where the dressed-state reservoirs are only partially distinct. In $\Omega$-region (i), $\text{Re}(\delta _{\pm })=0=\text{Im}(S_{\pm})$ so the absorption profile comprises two Lorentzians centered at the origin, one broad and positive and the other narrow and negative: $A_\text{EIT}=C^2_+/(\gamma_+^2+\delta^2)-C^2_-/(\gamma_-^2+\delta^2)$. Hence, low-power pump-induced transparency, where Fano interference dominates, has a transparency window without splitting~\cite{AnisimovKocharovskaya2008}. In the strong-pump $\Omega$-region (ii), $\delta_\pm\approx\pm\Omega+\text{i}(\Gamma_{\text{ab}}+\Gamma _\text{bc})/2$ and $S_{\pm}\approx1/2$ so $A_\text{ATS} = C^2[1/(\gamma^2+\left(\delta -\delta _0\right)^2)+1/(\gamma^2+\left(\delta +\delta _0\right)^2)]$, corresponding to the sum of two equal-width Lorentzians shifted from the origin by $\pm\delta_0$ with $\delta_0\approx\Omega$. Figures~\ref{fig:LS}(b-d) demonstrate how well these EIT and ATS models fit calculated absorption profiles, but an objective criterion is needed to discern the best model or whether the data are inconclusive. Akaike's Information Criterion (AIC) identifies the most informative model based on the Kullback-Leibler divergence (relative entropy), which is the average logarithmic difference between two distributions with respect to the first distribution. AIC quantifies the information lost when model~$A_i$ with $K_i$ fitting parameters is used to fit actual data: $I_i=-2\log\mathcal{L}_i+2K_i$, with $\mathcal{L}_i$ the maximum likelihood for model~$A_i$ and penalty $2K_i$ for the number of fitting parameters~\cite{BurnhamAnderson2002}.
\begin{figure} \includegraphics[width=\columnwidth]{Fig2.eps} \caption{(Color online) (a)~Akaike weights vs.\ Rabi frequency for the TLA in Fig.~\ref{fig:LS} showing a sharp transition at~$\Omega_\text{AIC}$ from the EIT model (blue solid) to the ATS model (green dashed); (b)~Transition boundary~$\Omega_\text{AIC}$ with corresponding transparency values vs.\ $\Gamma_\text{bc}$. } \label{fig:AWsGbc} \end{figure} We demonstrate AIC-based testing by fitting an absorption data set $D=\{A(\delta_j);|\delta_j|\le 5\}$, sampled in steps of $\Delta\delta_j=0.05$, for the TLA in Fig.~\ref{fig:LS}(a) to the models $A_\text{EIT}$ and $A_\text{ATS}$ using the NonlinearModelFit function in Mathematica\texttrademark, which can calculate AIC. The relative likelihood of model~$A_i$ out of~$n$ models is its Akaike weight $w_i=\text{e}^{-I_i/2}/\sum_{k=1}^n\text{e}^{-I_k/2}$, depicted in Fig.~\ref{fig:AWsGbc}(a). This figure shows that, based on AIC, the EIT model explains the data with 100\% likelihood for all $\Omega<\Omega_{\rm AIC}=0.86$. Figure~\ref{fig:AWsGbc}(b) shows that increasing $\Gamma_\text{bc}$ reduces the EIT threshold~$\Omega_\text{AIC}$ and thus guides the design of EIT experiments. Testing for EIT is affected by the fact that experiments have additional complexities, such as one-photon detuning or more than three energy levels, but these complexities do not negate the validity of our test; rather, they just make it harder to \emph{pass} the EIT test. Consequently, one can construct and test more general models that accommodate these extra features because AIC allows relative testing between any number of models. The corresponding signatures of Fano interference in generalized models can be identified, thus revealing genuine EIT effects. A more important issue in working with experimental data sets $D=\{A(\delta_{j})\}$ is that experiments are noisy, so each run produces a different data set, say $D_\ell$, with many data points measured.
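The Akaike weights defined above are straightforward to evaluate from a set of AIC values; a small Python sketch (the subtraction of the minimum AIC is only for numerical stability and cancels in the ratio):

```python
import math

def akaike_weights(aic):
    # w_i = exp(-I_i/2) / sum_k exp(-I_k/2); shifting all I_i by min(I)
    # leaves the weights unchanged but avoids underflow for large AIC values
    best = min(aic)
    rel = [math.exp(-(I - best) / 2) for I in aic]
    total = sum(rel)
    return [r / total for r in rel]
```

A model whose AIC exceeds the best model's by 2 is a factor of e less likely; a difference of several tens renders its weight effectively zero, which is why the weights in Fig.~2(a) are nearly binary.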
In turn, the Akaike weight reveals the likelihood of describing a data set $D_\ell$; this likelihood becomes binary (0 or 1), hence conclusive, for large data sets as shown in Fig.~\ref{fig:AWsGbc}(a). Consequently, one will conclusively say after each run which model pertains, but, because of noise, this conclusion could vary from run to run. Intuitively, the best model should be picked more often; however, experimental data are not reported on a per-run basis. Experimental data are typically reported as mean values with error bars representing the confidence interval for the data. Hence we need to adapt AIC-based testing to the way experimental data are reported. Akaike's information according to the least-squares analysis is $I=N\log(\hat{\sigma}^2)+2K$ for $\hat{\sigma}^2=\sum_{j=1}^{N}\hat{\epsilon}_j^2/N$ and $\hat{\epsilon}^2_j$ the estimated residuals from the fitted model \cite{BurnhamAnderson2002}. Technical noise, however, blurs the distinction between models $\{A_i\}$, causing Akaike's information to become $I=N\log(\hat{\sigma}^2+\hat{\sigma}^2_\text{exp})+2K$, with the aforementioned consequences. Hence, we propose a fitness test for Akaike's information obtained from reported experimental data. Our fitness test uses a per-point (mean) AIC contribution $\bar{I}=I/N$ to calculate a per-point weight for the $i^{\rm th}$ model: $\bar{w}_i=\exp(-\bar{I}_i/2)/\sum_{k=1}^n\exp(-\bar{I}_k/2)$. These unnormalized per-point weights $\exp(-\bar{I}_i/2)$ converge to $1/\sqrt{\hat{\sigma}^2_i}$ for large data sets; in the case of noisy data, this yields equal per-point weights for all models, as expected intuitively.
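A minimal sketch of the proposed per-point weights, using the least-squares form $\bar{I}_i=\log\hat{\sigma}_i^2+2K_i/N$ (the function name and inputs are our illustrative choices):

```python
import math

def per_point_weights(sigma2, K, N):
    # sigma2[i]: mean squared residual of model i; K[i]: number of fit
    # parameters of model i; Ibar = I/N = log(sigma2) + 2K/N
    Ibar = [math.log(s2) + 2.0 * k / N for s2, k in zip(sigma2, K)]
    best = min(Ibar)
    rel = [math.exp(-(ib - best) / 2) for ib in Ibar]
    total = sum(rel)
    return [r / total for r in rel]
```

For large $N$ the parameter penalty $2K/N$ vanishes and the unnormalized weights tend to $1/\hat{\sigma}_i$, so two models drowned in the same technical noise ($\hat{\sigma}_1^2\approx\hat{\sigma}_2^2$) receive equal per-point weights, as stated above.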
\begin{figure} \includegraphics{Fig3.eps} \caption{(Color online) (a)~Per-point weights~$\bar{w}_i$ for the conditions of Fig.~\ref{fig:LS} as a function of pump-field Rabi frequency~$\Omega$ illustrate three distinct regions: $\Omega<\Omega_\text{EIT}=0.45$, where the EIT model (blue) dominates unconditionally; $0.45<\Omega<0.86$, where the ATS model (green) shows non-zero likelihood; $\Omega>0.86$, where the ATS model dominates. The presence of Gaussian noise with standard deviation $\sigma=0.1$ (red dots) [$\sigma=0.01$ (burgundy dots)] affects the per-point weights for the EIT and ATS models, leading to the absence of unconditional dominance by the EIT model. (b)~In the weak-pump limit and a poor signal-to-noise ratio, both models are equally likely to fit data (red dots).} \label{fig:AWs} \end{figure} We simulate a noisy absorption profile by generating data $D_{\ell}$ according to $\left\langle A(\delta_j)\right\rangle=(1+\xi)A(\delta_j)$ for $\xi$ randomly chosen from the normal distribution $\exp\left[-x^2/2 \sigma^2\right]/\sqrt{2 \pi} \sigma$. Figure~\ref{fig:AWs}(a) shows our per-point weights for generated data with no noise, small noise, and moderate noise for the conditions of Fig.~\ref{fig:LS}. In the noise-free case with $\Omega<\Omega_\text{EIT}=0.45$, the ATS model fails and has per-point weight $\bar{w}_2=0$; beyond the EIT threshold $\Omega_\text{EIT}$, the per-point weight for ATS starts to increase, with both models describing the absorption profile equally well at $\Omega_{\rm AIC}=0.86$. This agrees with intuition about fitting models, especially the continuous trade-off between models in the intermediate regime.
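The noise prescription $\langle A(\delta_j)\rangle=(1+\xi)A(\delta_j)$ used here amounts to one line of Python; a sketch (the seed argument is our addition, for reproducibility):

```python
import random

def noisy_profile(A, sigma, seed=None):
    # multiply each point by (1 + xi), with xi drawn from N(0, sigma^2)
    rng = random.Random(seed)
    return [(1.0 + rng.gauss(0.0, sigma)) * a for a in A]
```

With $\sigma=0$ the profile is returned unchanged, and averaging many noisy realizations recovers the underlying profile, as expected for multiplicative zero-mean noise.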
It is also intuitive to expect that under noisy conditions and weak pump, $\Omega^2<\Omega^2_{\sigma}=2\sigma\Gamma_\text{ab}\Gamma_\text{bc}/(1-2\sigma)$, induced transparency is buried in noise, $1-\text{Im}[\sigma_{\text{ab}}(\delta=0,\Omega)]/\text{Im}[\sigma_{\text{ab}}(\delta=0,\Omega=0)]<2\sigma$, and both models account for the absorption profile equally well [see Fig.~\ref{fig:AWs}(b)]. Consequently, at $\Omega=0$ and any amount of noise, per-point weights are equal to 0.5 and results are inconclusive. Increasing the pump field, however, favors the EIT model until it gives way to ATS dominance for pump strength greater than $\Omega_{\rm AIC}$. Therefore, a convincing EIT demonstration requires suppression of technical noise to the point that our per-point weights become well separated. We apply our theory to the recent observation of induced transmission (i.e.\ transparency), reported as EIT, for an open transmission line of a superconducting circuit with a single flux-type artificial atom (`flux qubit')~\cite{AbdumalikovTsai2010}. In contrast to the $\Lambda$-type TLA discussed here, a flux qubit driven/probed by microwave fields, which are polarized and confined to one dimension, presents a nearly lossless upper-pumped $\Xi$ system. Nevertheless, EIT testing of this observation is straightforward, with absorption being effectively replaced by reflection, since their analysis shows that the transmission coefficient agrees with the electromagnetic response for a TLA: $t=1-(\gamma_\text{ab}/2)/[\Gamma_\text{ab}+i\delta+\Omega^2/(\Gamma_\text{bc}+i\delta)]$ with our Rabi frequency~$\Omega$ being half their Rabi frequency~\cite{AbdumalikovTsai2010}. Induced transparency is evident from calculating Re$(t)$ for the probe field in the presence of the control field. Their system has population relaxation rate $\gamma_\text{ab}/2\pi=11$~MHz and dephasing rates $\Gamma_\text{ab}/2\pi=7.2$~MHz and $\Gamma_\text{bc}=0.96\Gamma_\text{ab}$.
Therefore, the transparency window appears for a control field amplitude of $\Omega/2\pi=6$~MHz, which exceeds $\Omega_\text{EIT}/2\pi=0.15$~MHz, so the experiment operates in a region where demonstrating Fano interference must be inconclusive. \begin{figure} \includegraphics{Fig4.eps} \caption{(Color online) Transmission Re$(t)$ vs.\ two-photon detuning~$\delta$ for (a)~theoretical curve (red dots) with parameters taken from Ref.~\cite{AbdumalikovTsai2010} and control-field amplitude $\Omega/2\pi=6$~MHz compared to the best-fit $A_\text{EIT}(25.4, 24.2, 6.36, 6.15)$ (blue solid) and $A_\text{ATS}(4.42, 7.1, 6.1)$ (green dashes) and (b)~actual experimental data from Ref.~\cite{AbdumalikovTsai2010} (red dots) vs.\ best-fit $A_\text{EIT}(11.8, 9.08, 6.77, 5.66)$ (blue solid) and $A_\text{ATS}(4.59, 7.29, 5.49)$ (green dashes). } \label{fig:exper} \end{figure} In fact, the theoretical transmission curve based on the reported parameters, shown in Fig.~\ref{fig:exper}(a), is indistinguishable from the best-fit ATS model and clearly distinct from the EIT model. This is further corroborated by our per-point weight, which yields $\bar{w}_1=0.03$, implying that the result is far from EIT. Whereas the reported induced transparency suffices for switching of propagating waves in a superconducting circuit~\cite{AbdumalikovTsai2010}, our objective test shows conclusively that they demonstrated ATS and definitely not EIT. Due to noise, however, the actual experimental data shown in Fig.~\ref{fig:exper}(b) differ from the theoretical prediction discussed above and shown in Fig.~\ref{fig:exper}(a), so the reported data set neither conclusively shows EIT nor rules it out. That is, optimal choices of~$A_\text{EIT}$ and~$A_\text{ATS}$ seem to fit the data equally well. Yet, there is a slight preference for ATS according to our per-point weight criterion, $\bar{w}_1=0.48$ and $\bar{w}_2=0.52$, in the weak-field limit with obvious favoring of ATS in the strong-field regime.
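The quoted EIT boundary follows directly from $\Omega_\text{EIT}=(\Gamma_\text{ab}-\Gamma_\text{bc})/2$ with the reported flux-qubit rates; a quick numerical cross-check (all numbers are the rates over $2\pi$ in MHz, taken from the text):

```python
def omega_eit(Gab, Gbc):
    # region-(i) boundary: Omega_EIT = (Gamma_ab - Gamma_bc) / 2
    return (Gab - Gbc) / 2.0

Gab = 7.2            # MHz, dephasing rate Gamma_ab / 2pi (reported value)
Gbc = 0.96 * Gab     # MHz, Gamma_bc = 0.96 Gamma_ab (reported value)
Omega = 6.0          # MHz, control-field amplitude used in the experiment

print(omega_eit(Gab, Gbc))           # ~0.144 MHz, i.e. the ~0.15 MHz quoted
print(Omega / omega_eit(Gab, Gbc))   # the drive exceeds the boundary ~40-fold
```

The experiment therefore sits deep in the strong-pump region (ii), which is why the ATS model wins the per-point-weight comparison.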
In conclusion, we propose an objective way to discern ATS vs.\ EIT from experimental data obtained in systems that demonstrate a smooth transition from ATS to EIT through three qualitative regions as the strength of the driving field~$\Omega$ decreases. The Akaike weight provides a rigorous criterion to ascertain from each data set which of EIT or ATS pertains. We have introduced a per-point weight that accommodates experimental noise and readily indicates whether EIT or ATS pertains, or else that the experiment is inconclusive. In essence, our test seeks direct evidence of Fano interference, which is manifested as a negative Lorentzian in the absorption spectrum accompanied by the absence of splitting, and which ATS lacks. Akaike's information criterion, combined with our per-point weights, allows for testing arbitrarily many models simultaneously. Thus the data can be tested against a more complicated model, which takes care of additional levels, one-photon detunings, as well as inhomogeneous broadenings, with greater likelihood of a conclusive result even in the presence of noise, and we have made clear that the sought-after EIT signature is Fano interference, which should appear as narrow negative Lorentzians in the data. Our test of EIT vs.\ ATS is especially important if operation in the weak-pump regime is necessary for applications such as sensing, so that the data unambiguously reveal whether the requisite conditions have been met. EIT is now being demonstrated in a multitude of experimental systems, and a proper test, applicable to any EIT-type experiment, is needed. We have provided such a test. \acknowledgments BCS acknowledges valuable discussions with A.\ Abdumalikov, Y.\ Nakamura and P.\ Nussenzweig, and is especially grateful to A.\ Abdumalikov for providing data to test their superconducting-circuit induced-transparency experiment.
BCS has financial support from NSERC, \emph{i}CORE and a CIFAR Fellowship. PMA and JPD acknowledge support from ARO, DoE, FQXi, NSF, and the Northrop-Grumman Corporation.
\section{Introduction} Cataclysmic variables (CVs) are binary systems where a white dwarf (WD) accretes material from a secondary star which is filling its Roche lobe (see Warner 1995 for a review). Recent theoretical works (e.g. Howell et al. 2001 and Kolb 2001) predict that the majority of the CVs should be very old and evolved systems. These systems are characterized by short orbital periods (i.e. $P<$2 hr), low intrinsic luminosity ($M_V>11$ mag), and evolved low mass secondary stars (see also Howell et al. 1995). However, the number of currently known/observed short orbital period systems does not match the expectation. Thus, in recent years, many surveys have taken place to fill such a gap (e.g. SDSS by Szkody et al. 2002, HQS by G\"ansicke et al. 2002, FSVS by Groot et al. 2003, etc). At the same time, new ideas (e.g., the need to go even fainter) and different accretion scenarios (e.g. Spruit and Taam 2001, Dubus et al. 2002) have been presented and analyzed. H$\alpha$0242-2802 (hereafter H$\alpha$0242) was a CV candidate within the UK-Schmidt survey (Davenhall et al. 2001). It was confirmed as a CV by Howell et al. (2002), who observed it spectroscopically and reported a spectrum very similar to that of WZ~Sge. Indeed, the authors also advanced the hypothesis that H$\alpha$0242 belongs to the same dwarf nova subclass of Tremendous Outburst Amplitude Dwarf novae (TOADs, see Howell et al. 1995 for a review on TOADs). An interesting implication of this would be that H$\alpha$0242 is quite high above the galactic plane ($z=250$ pc) and resides in the old disk--halo population. Here, we present time-series spectroscopy obtained with VLT+FORS2 with the aim of determining the orbital period, the system parameters, and the emission line characteristics, and of comparing such values with those of WZ~Sge, the prototype object of the very old evolved CV population (see Howell et al. 2004).
We present our spectroscopic observations in Sec.~2, the data analysis in Sec.~3, and our conclusions in Sec.~4. \section{Observations and data reduction} H$\alpha$0242 was observed at VLT+FORS2 using a series of 5 min exposures over 4 consecutive hours. An 8-m class telescope is mandatory in order to perform time resolved spectroscopy of faint targets (the $B$ mag of H$\alpha$0242 is $\sim$19; see Howell et al. 2002), and determine the radial velocity curve of systems having orbital periods around 2 hr. We used FORS2 in Multi Object Spectroscopy (MOS) mode with the holographic grism 1400V. We preferred the MOS mode to the long slit spectroscopy (LSS), in order to cover a bluer wavelength range\footnote{Namely $\lambda\lambda$ 4270-5550, rather than the standard $\lambda\lambda$4560-5860.}. Indeed, the bluer wavelength coverage allowed us to observe the Doppler-broadened Balmer lines H$\beta$ and H$\gamma$ (see Fig.~1). The slit width was set to 1'' and together with the grism 1400V provided a dispersion of 0.63 \AA/pix. \begin{table} \label{log} \begin{center} \scriptsize \caption{The log of observations.} \begin{tabular}{cc} date of obs. & 2002/09/10 \\ UT start & 05:41\\ UT end & 09:52 \\ average seeing & 0.7 \\ average transparency & CLR-THN\\ number of spectra & 40 \\ exptime per spectrum & 300 sec \\ instrument & FORS2\\ instr. set up & MOS/SR/no filter/GRIS 1400V\\ read out mode & 200kHz, 2x2, 1.25 (speed, binning, gain)\\ \end{tabular} \end{center} \end{table} \begin{figure*} \centering \rotatebox{-90}{\includegraphics[width=14cm,]{2349fig1.eps}} \caption{The average spectrum of H$\alpha$0242. The flux is given in erg sec$^{-1}$ cm$^{-2}$ \AA$^{-1}$, but has not been slit or sky corrected and thus does not correspond to an absolute calibration.} \label{f1} \end{figure*} The log of observations is presented in Table~1. A standard star with the same instrument set up was observed one month later, allowing us to place the spectra on a relative flux scale.
Fortunately, absolute flux measurements are not needed for the analysis presented in this paper. The data reduction was performed through standard IRAF routines within the packages {\it ccdproc}, {\it apex}, and {\it onedspec}. \section{Data analysis} \subsection{Overview/General properties} We plot in Fig.~1 the average spectrum of H$\alpha$0242. The continuum shape is not as blue as in the previous observation of Howell et al. (2002), and this is probably an artifact of the calibration. The emission lines which are visible in the spectrum are the two Balmer lines H$\beta$ and H$\gamma$ and the FeII lines from multiplets 37, 38, 42 (the strongest), and 49. Also visible are the HeI lines $\lambda$4471 and $\lambda$4713. The observation of optical FeII emission has already been reported in the literature, but it has never received sufficient attention. We briefly discuss FeII emission and its formation mechanism in CVs in Sec.~3.6. Here, we note that all the emission lines have a similar shape. They are double peaked with a deep central absorption core which extends below the continuum. This is a signature of high orbital inclination. The B/R ratio of the {\it average spectrum} (Fig.~1) appears greater than 1 for the Balmer lines and $<$1 for the FeII. This is, at least in part, an artifact of the flux calibration, which introduces a red continuum. Indeed, on one hand, the {\it normalized average spectrum} shows that B/R$>$1 in the Balmer lines and B/R$\sim$1 in the FeII (42) emission lines. On the other hand, the analysis of the {\it single spectra} shows that the Balmer, HeI, and FeII lines follow a similar modulation throughout the orbit. This is well shown by the trailed spectrograms that we present further below (see Fig.~5). A possible explanation for the different B/R ratio between the Balmer and the FeII lines in the {\it average spectrum} may be a larger fractional contribution of the hot spot in the Balmer lines when it is blue-shifted.
We also observed a weak emission from the HeII $\lambda$4686. The HeII line is not readily visible in the single spectra, nor in the average one. It is visible only in the trailed spectrogram, where it produces a pure S-wave (see Fig.~5 and Sec.~3.4). In the following sections we will analyze the spectra searching for the orbital period of the binary system, and applying the usual analysis for the accretion disk emission lines (RV measurements, trailed spectra, etc.). We will pay particular attention to the comparison between WZ~Sge and H$\alpha$0242. \subsection{Period search} \begin{figure*} \centering \rotatebox{-90}{\includegraphics[width=14cm,]{2349fig2.eps}} \caption{The continuum and Balmer emission line light curves. From top to bottom: the continuum flux measured at 5500\AA, H$\beta$ emission line flux, and H$\gamma$ emission line flux. } \label{f2} \end{figure*} We searched for the binary system orbital period applying the Phase Dispersion Minimization method (PDM method, Stellingwerf 1978) to different emission line features\footnote{We measured in particular: the position of the red and the blue peak in the emission line, the position of the central absorption, and the line flux barycenter.}. We did not discover any statistically significant period. However, we found clear evidence for the orbital period by plotting the light curve of the continuum flux measured at 5500\AA \ (see Fig.~2). Indeed, the light curve is characterized by a deep eclipse of almost 2 magnitudes with a periodicity of 107 min. Woudt et al. (2004) determined the same orbital period through time resolved photometry. We also found a periodicity of 106 min from the radial velocity measurements of the Balmer emission lines through the double Gaussian fit method, which was developed by Shafter (1983, and references therein).
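For reference, the PDM statistic minimized in the period search can be sketched in a few lines of Python (bin count and variance estimator are illustrative choices, not those of a standard implementation):

```python
import math

def pdm_theta(times, values, period, nbins=10):
    # fold the light curve at the trial period, then compute
    # theta = (pooled within-bin variance) / (total variance);
    # true periods produce pronounced minima of theta
    n = len(values)
    mean = sum(values) / n
    tot_var = sum((v - mean) ** 2 for v in values) / (n - 1)
    bins = [[] for _ in range(nbins)]
    for t, v in zip(times, values):
        phase = (t / period) % 1.0
        bins[min(int(phase * nbins), nbins - 1)].append(v)
    num = den = 0.0
    for b in bins:
        if len(b) > 1:
            m = sum(b) / len(b)
            num += sum((v - m) ** 2 for v in b)
            den += len(b) - 1
    return (num / den) / tot_var
```

Folding a noiseless sinusoid at its true period yields $\theta\ll1$, while a wrong trial period scatters the folded points and drives $\theta$ toward unity, which is how the peak width in the $(\theta, P)$ plot constrains the period uncertainty.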
Within this paper, we will adopt the period of 107 min and will phase our spectra assuming as $T_0$ the time of the observed mid-eclipse minimum\footnote{Due to the time resolution of our spectra, the ``true'' minimum could be deeper.}. Our approximate ephemeris is: $HJD=2452527.89275(\pm 0.00395)+0.0743055(\pm 0.0017361)E$, where the uncertainty on the time $T_0$ corresponds to the time resolution of our data points, which is 341 sec, i.e. the sum of the exposure time and the readout time of the mosaic CCD. The uncertainty on the period (150 sec) was derived both from measuring the width of the period peak in the standard PDM ($\theta$, $P$) plot and from an estimate based on the eclipse light curves in Fig.~2. \begin{figure}[h] \centering \rotatebox{-90}{\includegraphics[width=6.8cm]{2349fig3.eps}} \caption{The diagnostic diagram. Square symbols are for the H$\gamma$ data points; circles are for H$\beta$ data points. Filled symbols mark the best fit data points. } \label{f3} \end{figure} \subsection{Radial velocity curves} Knowing the orbital period we can phase the spectra and the radial velocity measurements to derive a few parameters characterizing the binary system. We can then produce trailed spectrograms and Doppler maps to qualitatively analyze the evolution of the line profile and the emission components. In principle, radial velocity measurements of the accretion disk emission lines provide valid information on the white dwarf orbital motion. However, this does not appear to be the case for many CVs and, in particular, the short orbital period systems (e.g. WZ~Sge, Mason et al. 2000 and references therein, and V893 Sco, Mason et al. 2001). The exact causes of such behavior are not well known, nor is there a model capable of explaining why the radial velocity curves from different emission lines yield discordant system parameters.
Since we are not yet able to quantify the discrepancy between the derived quantities and the corresponding real values, it is still worthwhile measuring and fitting the radial velocity curves of the emission lines and determining coarse values which reasonably constrain the WD Keplerian velocity and the systemic velocity. Of course, given our uncertainties, readers should be cautious in their interpretation. We measured the radial velocity of the Balmer lines using the double Gaussian fit method as defined by Shafter (1983, and references therein). We used narrow Gaussian FWHM (2-4\AA), and Gaussian separation steps of equal size. The diagnostic diagram in Fig.~3 shows the fitting parameters corresponding to steps of 4 \AA. The best fit for the H$\beta$ radial velocity curve is found at a Gaussian separation of 60 \AA, while the best fit for the H$\gamma$ radial velocity curve is at a separation of 47 \AA. We list in Table~2 the best fit parameters and plot in Fig.~4 (two top panels) the radial velocity curves. As expected and already observed in other short orbital period systems, the two Balmer lines do not produce consistent values for the WD Keplerian velocity and the systemic velocity, which indeed differ by more than $3\sigma$. In addition, we may note that the two radial velocity curves consist of largely scattered points. We believe that one possible explanation can be found in the fact that the line profile is largely variable across different orbital cycles (see Fig.~5 and Sec.~3.4). Also, the larger scatter of the velocity measurements in the H$\gamma$ line is probably due to the smaller Gaussian separation, which, in principle, can be biased by the hot spot S-wave. However, a larger separation does not produce a better result due to the noisy wings of the H$\gamma$ emission line. We will thus adopt a white dwarf Keplerian velocity of 99 km/sec (from just the H$\beta$ line) throughout the present work.
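The best-fit parameters of the kind listed in Table~2 come from fitting a circular-orbit sinusoid $V(\phi)=\gamma+a\cos2\pi\phi+b\sin2\pi\phi$, with semi-amplitude $K=\sqrt{a^2+b^2}$; since this is linear in $(\gamma,a,b)$, the normal equations suffice. A self-contained Python sketch (the parametrization and function name are our illustrative choices, not the paper's code):

```python
import math

def fit_rv(phases, vels):
    # linear least squares for V = gamma + a*cos(2 pi phi) + b*sin(2 pi phi)
    X = [[1.0, math.cos(2*math.pi*p), math.sin(2*math.pi*p)] for p in phases]
    A = [[sum(x[j]*x[k] for x in X) for k in range(3)] for j in range(3)]
    r = [sum(x[j]*v for x, v in zip(X, vels)) for j in range(3)]
    for c in range(3):                      # Gaussian elimination with pivoting
        p = max(range(c, 3), key=lambda q: abs(A[q][c]))
        A[c], A[p] = A[p], A[c]
        r[c], r[p] = r[p], r[c]
        for q in range(c + 1, 3):
            f = A[q][c] / A[c][c]
            for k in range(c, 3):
                A[q][k] -= f * A[c][k]
            r[q] -= f * r[c]
    sol = [0.0, 0.0, 0.0]
    for q in (2, 1, 0):
        sol[q] = (r[q] - sum(A[q][k]*sol[k] for k in range(q+1, 3))) / A[q][q]
    gamma, a, b = sol
    return gamma, math.hypot(a, b)          # systemic velocity, K amplitude
```

Feeding it synthetic velocities with $\gamma=53$ km/s and $K=99$ km/s (the H$\beta$ values) recovers both parameters exactly, so any scatter in the recovered values reflects the data, not the fit.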
Since the systemic velocity $\gamma$ resulting from the application of the Gaussian fit method is the most unreliable parameter (see Tappert 1999 for details), we will adopt the value of $\gamma=22$ km/sec derived below. The HeII emission is visible only as an S-wave in the trailed spectrogram (see Fig.~5); thus, the only way to measure its radial velocities was by cursor position and visual inspection. \begin{figure*} \centering \rotatebox{-90}{\includegraphics[width=14cm,]{2349fig4.eps}} \caption{The radial velocity curves. From top to bottom: 1) H$\beta$ accretion disk emission, 2) H$\gamma$ accretion disk emission, 3) HeII hot spot emission. } \label{f4} \end{figure*} \begin{table}[h] \label{rvpar} \begin{center} \scriptsize \caption{Best fit radial velocity curve parameters for the H Balmer lines and HeII $\lambda$4686.} \begin{tabular}{cccc} em. line & $\gamma$ (km sec$^{-1}$) & K$_1$ (km sec$^{-1}$) & $\phi_{R/B}$ \\ & & & \\ H$\beta$ & 53$\pm$6 & 99$\pm$9 & 0.056$\pm$0.014 \\ H$\gamma$ & 18$\pm$7 & 71$\pm$10 & 0.053$\pm$0.025 \\ HeII & 22$\pm$3 & 680$\pm$4 & 0.44$\pm$1E-3 \\ \end{tabular} \end{center} \end{table} The radial velocity curve is plotted in Fig.~4, while its best fit parameters are reported in Table~2. The derived $K_1$ value corresponds, to a first approximation, to the Keplerian velocity of the outer edge of the accretion disk. Indeed, the real value should be larger, as the WD (instantaneous) radial velocity subtracts off from each spectrum/measurement. However, the derived value $K_1=680$ km/sec matches fairly well the half peak separation (HPS) measured on the Balmer lines: $\langle HPS \rangle = 650 \pm 46$ km/sec. The HPS is also a good estimate of the Keplerian velocity at the outer accretion disk edge. We used the value of K=680 km/sec to determine the accretion disk size.
Assuming a WD mass of 0.64 $M_\odot$ (see Sec.~3.5) we find $r_d\sim 1.8\times 10^{10}$ cm\footnote{The slightly smaller velocity derived from the HPS produces a larger accretion disk radius of $r_d\sim 1.98\times 10^{10}$ cm. However, we believe that the average spectrum is affected by both the orbital motion and the hot spot emission, which reduce the peak separation.}. The derived value of $\phi_{R/B}=0.44$ is consistent with the standard hot spot position at an angle of $\sim$50-60 deg from the line connecting the two star centers of mass. The HeII $\gamma$ value, instead, corresponds to just the systemic velocity, within our assumptions. This is the value adopted in our next analysis. \subsection{Trailed spectra and Doppler maps} \begin{figure*} \centering \rotatebox{-90}{\includegraphics[width=13cm,]{2349fig5.ps}} \rotatebox{-90}{\includegraphics[width=13cm,]{2349fig6.ps}} \caption{Trailed spectrograms: top panel from left to right: H$\gamma$, H$\beta$, and HeII $\lambda$4686 emission lines; bottom panel: FeII 42 emission lines. } \label{f5} \end{figure*} Trailed spectrograms and Doppler maps of the observed emission lines are a common tool to qualitatively analyze the line forming region. We thus plot in Fig.~5 the trailed spectrograms of the Balmer lines and the HeII $\lambda$4686 (top panels of Fig.~5), as well as of the FeII (42) lines (bottom panels of Fig.~5). All the emission lines clearly show evidence for an eclipse, which implies a high orbital inclination of the binary system (with possibly total eclipse of the white dwarf itself). The Balmer and the FeII lines are double peaked, i.e. they form in the upper atmosphere/corona of the accretion disk. They also show only weak evidence of the hot spot emission, which indeed is visible only in the phase ranges 0.25$\div$0.30 and 0.75$\div$0.85, i.e. when it is seen from the ``outside'' and the ``inside'' respectively (see Mason et al. 2000).
We also note that the Balmer emission lines vary in width and intensity both across the orbit and across different orbital cycles (alternate spectra in the trailed spectrograms belong to two distinct orbital cycles). The same behavior is not evident in the FeII lines, but we cannot say whether this is an effect of the smaller S/N or rather an intrinsic property of the system. On the other hand, the HeII emission shows no evidence for the accretion disk contribution and produces just a clear S-wave, which is the signature of pure hot spot emission. This can be explained by the fact that the high ionization emission lines can form only at high temperatures, as in the hot spot region where the stream of in-falling gas from the secondary star hits the outer edge of the accretion disk. The Balmer, FeII, and HeII lines were also used to produce back projected Doppler maps. The maps confirm our previous conclusions about the emission lines from the Balmer series and low ionization elements. They mostly form in the accretion disk, while the hot spot fractional contribution is small and washed out by the disk emission\footnote{We may stress that this was not at all the case in WZ~Sge, where the hot spot emission was contributing up to 50\% of the line flux.}. The high ionization emission line from HeII is confirmed to form in the impact region within the accretion disk. We plot on top of each Doppler map the Roche lobe geometry and the stream trajectory derived for H$\alpha$0242 (see Sec.~3.5). Also, the circle drawn on top of the H and FeII Doppler maps fits by eye the bulk of the accretion disk emission and corresponds to 0.55 of the white dwarf Roche lobe radius ($r_{L1}$). In the case of the HeII Doppler map we draw the circle at 0.76 $r_{L1}$, as derived from the computation in Sec.~3.5.
\begin{figure*} \centering \rotatebox{-90}{\includegraphics[width=4.3cm,]{2349fig7.ps}} \rotatebox{-90}{\includegraphics[width=4.3cm,]{2349fig8.ps}} \rotatebox{-90}{\includegraphics[width=4.3cm,]{2349fig9.ps}} \rotatebox{-90}{\includegraphics[width=4.3cm,]{2349fig10.ps}} \rotatebox{-90}{\includegraphics[width=4.3cm,]{2349fig11.ps}} \rotatebox{-90}{\includegraphics[width=4.3cm,]{2349fig12.ps}} \caption{Back projected Doppler maps: from left to right, top to bottom: H$\gamma$, H$\beta$, HeII $\lambda$4686, FeII(42) $\lambda\lambda$4923, 5018, and 5169. See text for details. } \label{f6} \end{figure*} \subsection{System parameters and geometry} In the present section we will make use of the previous observations and results to constrain the system geometry. The continuum light curve at 5500 \AA \ shows a 2 mag deep eclipse, which is comparable to that observed in Z~Cha and OY~Car (e.g. Ritter and Kolb 2004). We infer, to a first approximation, that the orbital inclination of H$\alpha$0242 must be similar to that of these two eclipsing systems, hence $i\sim82^\circ$. Howell and Skidmore (2002) present an $M_2$-$P$ relation which can be used to predict the mass of the secondary star under both hypotheses of a pre- and a post-orbital-period-minimum system. In particular, we found a secondary star of mass $M_2=0.17 M_\odot$ and radius $R_2=0.19 R_\odot$ in the case of a pre-orbital-period-minimum system, and a secondary star of mass $M_2=0.03 M_\odot$ and radius $R_2=0.11 R_\odot$ in the case of a system which has already evolved past the orbital period minimum. Knowing the orbital inclination and the mass of the secondary star, we can solve the secondary star mass function (i.e. equation 2.79 in Warner 1995) for $M_1$ (the white dwarf mass). We derive a primary mass of either 0.64 $M_\odot$ or $0.03 M_\odot$, depending on H$\alpha$0242 being a pre- or post-orbital period minimum system.
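The quoted primary masses can be reproduced by solving the secondary mass function $PK_1^3/(2\pi G)=M_2^3\sin^3 i/(M_1+M_2)^2$ numerically with the adopted $K_1=99$ km/s, $P=107$ min and $i\sim82^\circ$; a hedged Python sketch (cgs constants and bisection bounds are our choices):

```python
import math

G = 6.674e-8          # gravitational constant, cgs
MSUN = 1.989e33       # solar mass, g

def mass_function(P, K1):
    # f = P K1^3 / (2 pi G) = M2^3 sin^3(i) / (M1 + M2)^2
    return P * K1**3 / (2 * math.pi * G)

def solve_m1(P, K1, M2, incl_deg):
    # bisection: the left-hand side below decreases monotonically with M1
    f = mass_function(P, K1)
    s3 = math.sin(math.radians(incl_deg)) ** 3
    lo, hi = 0.0, 100 * MSUN
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if M2**3 * s3 / (mid + M2) ** 2 > f:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# pre-minimum secondary (0.17 Msun) vs. post-minimum secondary (0.03 Msun)
m1_pre = solve_m1(107 * 60, 99e5, 0.17 * MSUN, 82) / MSUN
m1_post = solve_m1(107 * 60, 99e5, 0.03 * MSUN, 82) / MSUN
```

With these inputs the solver returns $M_1\approx0.63\,M_\odot$ and $M_1\approx0.03\,M_\odot$, consistent with the two cases discussed in the text.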
The case $M_2=M_1= 0.03 M_\odot$ seems unlikely both because of standard stellar evolution time scales (a star able to form a low mass WD of just 0.03 $M_\odot$ would not yet have evolved off the main sequence within the age of our Galaxy), and because of the standard scenario for interacting binaries. Thus, we discard it and conclude that H$\alpha$0242 has not yet reached the orbital period minimum and is likely to have a white dwarf of mass 0.64 $M_\odot$. In the case of $M_1=0.64 M_\odot$, the mass ratio is $q=0.27$, the binary separation is $\sim 4.8\times 10^{10}$ cm, and the radius of the primary star Roche lobe is $2.4 \times 10^{10}$ cm. Thus, the accretion disk radius (see Sec.~3.3) is $\sim 0.76$ times the size of the Roche lobe, and 0.38 times the star separation $a$. \subsection{The iron lines and the Balmer decrement} As already pointed out, FeII emission lines are probably quite common in CVs, though they have never been studied carefully. In particular, they have often likely been identified either as HeI (due to limited spectral range) or as a combination of FeII and HeI. In the case of H$\alpha$0242, we see a multitude of FeII emission lines and just weak HeI lines. Thus, it is reasonable to conclude that (at least for this system) the emission lines at 4924\AA \ and 5018\AA \ consist mostly of FeII transitions. Optical FeII emission has been observed since the 1970s in Seyfert 1 galaxies (see Osterbrock 1975 and Phillips 1977). The two mechanisms which are believed to produce optical FeII emission in Seyfert 1 galaxies are: {\it 1)} resonance fluorescence (e.g. Wampler \& Oke 1967), and {\it 2)} collisional excitation (e.g. Boksenberg et al. 1975). In the first case the UV photons emitted by a hot source ($T_{eff}\leq25000$ K) are absorbed by the iron peak elements, mostly in the wavelength range 2300$\div$2800\AA. These UV absorptions are then followed by downward transitions in the optical.
In the second case the UV spectrum should be characterized by {\it emission} resonance lines in the UV region. Now, the optical spectrum of H$\alpha$0242 shows emission lines from FeII 42, 49, 37 and 38, similar to what is seen in many Seyfert 1 galaxies. We do not have UV observations of H$\alpha$0242; still, several DN systems have been observed by HST, and for at least some of them (OY Car, Horne et al. 1994; Z~Cha, WZ~Sge and V2051 Oph, Catalan et al. 1998) the observation of an iron curtain (the UV absorption of the iron peak elements) has been reported. It is thus reasonable to infer that the UV spectrum of H$\alpha$0242 is characterized by iron peak absorptions, and that the mechanism responsible for the observed optical FeII emission is indeed resonance fluorescence, possibly from a disk wind. It would now be interesting to obtain high S/N spectra of DNe and search them for FeII emission, in order both to derive the fraction of DNe which show optical FeII emission lines and to verify the formation of these FeII lines through the fluorescence mechanism. We are currently analyzing our database with such a purpose. Here, we can comment that the optical spectrum of WZ~Sge in 2002 showed optical FeII emission (see Fig.~4 of Howell et al. 2002), while the same system showed no optical FeII emission lines in 1996. This is perfectly consistent with the fact that WZ~Sge underwent an outburst in 2001 and with the idea that the WD became hotter as a consequence of the accretion, exciting the FeII in an increased disk wind. It also matches the observation by Catalan et al. (1998), who report the weakest iron curtain signature in WZ~Sge (before the 2001 outburst). \begin{table} \begin{center} \scriptsize \caption{The Balmer decrement as derived from the intensities of the emission lines during times when they are not affected by the hot spot emission. The value for the WZ~Sge 1996 spectra is from Mason et al. (2000). The 2002 NTT WZ Sge values (see Howell et al.
2002) are from a single spectrum covering 0.07 of the orbital period, while the values for H$\alpha0242$ in 2002 (Howell et al., 2002) span 0.2 of an orbital period. The intensity ratio of the blue and red peak for the 2003 spectra of H$\alpha$0242 (this work) has been derived from the spectra within the phase ranges 0.02-0.11 and 0.43-0.56.} \begin{tabular}{ccccc} object & date & H$\alpha$/H$\beta$ & H$\gamma$/H$\beta$ & H$\delta$/H$\beta$ \\ & & & & \\ WZ~Sge & 1996 & 3.82 & - & - \\ WZ~Sge & 08/2002 & 1.33 & 0.75 & 0.61 \\ H$\alpha$0242 & 08/2002 & 1.10 & 0.77 & 0.74 \\ H$\alpha$0242 & 11/2003 & - & 0.59 & - \\ \end{tabular} \end{center} \end{table} In order to complete our comparison between H$\alpha$0242 and WZ~Sge, we also compare the Balmer decrement in the two systems. A direct comparison is not straightforward because of the different wavelength and phase coverages\footnote{Some of the data are in time resolved mode, while others consist of just a single spectrum.} of the spectra at hand. Still, we report in Table~3 the Balmer decrements measured in WZ~Sge before the outburst (spectra from Mason et al. 2000) and after the 2001 outburst (spectrum from Howell et al. 2002), together with the Balmer decrement of H$\alpha$0242-28 measured in 2002 (Howell et al. 2002) and in this work. From Table~3, it is clear that: {\it i)} the Balmer decrement in WZ~Sge before the 2001 outburst was larger (and probably steeper) than after its outburst; {\it ii)} the Balmer decrement in H$\alpha$0242 is flatter than in WZ~Sge (at least in 2002). We also note that in the 2003 spectra of H$\alpha$0242 there is little or no difference between the Balmer decrement resulting from the average spectrum and that from the time-resolved spectra (which exclude the hot spot). This implies that the opacity, and probably the gas physics, within the disk and the hot spot are similar.
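For reference, the optically thin Case~B recombination decrement (e.g. Osterbrock 1989) at $T\sim10^4$ K is much steeper,
\[
\left(\frac{{\rm H}\alpha}{{\rm H}\beta}\right)_{\rm Case\,B} \simeq 2.86 \,, \qquad \left(\frac{{\rm H}\gamma}{{\rm H}\beta}\right)_{\rm Case\,B} \simeq 0.47 \,,
\]
so the ratios close to unity on both sides of H$\beta$ measured for H$\alpha$0242 (Table~3) directly indicate a large optical depth in the Balmer lines.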
The conclusion is that the line forming region in the accretion disk of H$\alpha$0242 consists of gas which is optically thicker, and thus probably denser and/or warmer, than that in WZ~Sge. \section{Summary and conclusions} We have presented time-series spectra of the CV H$\alpha$0242. The object was identified as a CV candidate in an H$\alpha$-R band survey (Davenhall et al. 2001) and further suspected to be a candidate TOAD by Howell et al. (2002). We determined the system orbital period of 107 min and showed evidence for a very deep eclipse in the light curve (Sec.~3.1), both also reported by Woudt et al. (2004). We infer an orbital inclination of 82$^o$ from the observed eclipse depth. We measured radial velocities of the Balmer lines and derived an approximate value for the white dwarf Keplerian velocity (Sec.~3.3). We constrained the secondary star on the basis of current evolution theory and derived the mass of the white dwarf through geometrical considerations (Sec.~3.5). We found $M_2=0.17 M_\odot$ and $M_1=0.64 M_\odot$ for the mass of the secondary star and the white dwarf, respectively. We examined the line forming region within the accretion disk (Sec.~3.4 and 3.6), observing a multitude of FeII emission lines and a flat Balmer decrement. We note strong FeII emission in the optical spectrum and postulate that many other cataclysmic variables show it as well, but that previous observations often lacked either the proper wavelength coverage or sufficient S/N. We believe that the FeII lines are produced by the resonance fluorescence mechanism and, therefore, that the white dwarf has a relatively high effective temperature. From the Balmer decrement we also conclude that the gas both in the accretion disk and in the hot spot is optically thick. The hot spot was found not to contribute significantly to either the Balmer or the FeII lines. In contrast, we show evidence for pure hot spot emission in the HeII line $\lambda$4686.
We interpret the HeII emission as a signature of high temperature in the impact region, and thus of high density gas within the accretion disk. The results listed above clearly indicate that H$\alpha$0242 is a short orbital period system which has not yet evolved past the orbital period minimum. The accretion disk appearance (see the trailed spectrograms, the Doppler maps, and Table~3) resembles that of a normal SU~UMa star. \begin{acknowledgements} The authors wish to thank the ESO Director for the generous allocation of time allowing these observations to be made. \end{acknowledgements}
\section{Introduction} \subsection{The interface splash singularity} The fluid interface {\it splash singularity} was introduced by Castro, C\'{o}rdoba, Fefferman, Gancedo, \& G\'{o}mez-Serrano in \cite{CaCoFeGaGo2013} in the context of the one-phase water waves problem. As shown in Figure \ref{fig1}, a {\it splash singularity} occurs when a fluid interface remains locally smooth but self-intersects in finite time. Using methods from complex analysis together with a clever transformation of the equations, Castro, C\'{o}rdoba, Fefferman, Gancedo, \& G\'{o}mez-Serrano \cite{CaCoFeGaGo2013} showed that a splash singularity occurs in finite time for the water waves equations. In Coutand \& Shkoller \cite{CoSh2014a}, we showed the existence of a finite-time splash singularity for the one-phase incompressible Euler equations with free-boundary using a very different approach, founded upon an approximation of the self-intersecting fluid domain by a sequence of smooth fluid domains, each with non self-intersecting boundary. For one-phase flow, it is the vacuum state on one side of the interface which permits this finite-time interface self-intersection, and neither surface tension nor magnetic fields nor other inviscid regularizations of the interface change this fact \cite{CaCoFeGaGo2012,CoSh2014a}; even stationary solutions having a splash singularity have been shown to exist (see C\'{o}rdoba, Enciso, \& Grubic \cite{CoEnGr}).
\begin{figure}[htbp] \begin{center} \includegraphics[scale = 0.4]{fig_splash1a.eps} \caption{The splash singularity at a point $x_0$ occurs when a locally smooth interface self-intersects in finite time $t=T$.} \end{center} \label{fig1} \end{figure} On the other hand, for the two-phase incompressible Euler equations, wherein the moving interface is a vortex sheet\footnote{For the vortex sheet problem, it is necessary to have surface tension in order to ensure well-posedness in Sobolev spaces.}, it was proven by Fefferman, Ionescu, \& Lie \cite{FeIoLi2013} and Coutand \& Shkoller \cite{CoSh2014b} that a splash singularity cannot occur in finite-time while the interface remains locally smooth. In particular, there is a fundamental difference in the behavior of the fluid interface when vacuum is replaced with fluid in the mathematical model. Since these results have been established for inviscid flows, it is natural to ask if splash singularities can occur for viscous flows modeled by the incompressible Navier-Stokes equations with a moving free-surface. Because the methods of constructing splash singularities for inviscid flows have relied on the ability to flow backward-in-time, a new strategy must be devised to study the parabolic Navier-Stokes equations. By using the change-of-variables employed in \cite{CaCoFeGaGo2013} together with stability estimates, Castro, C\'{o}rdoba, Fefferman, Gancedo, \& G\'{o}mez-Serrano in \cite{CaCoFeGaGo2015} have shown the existence of finite-time splash singularities for the Navier-Stokes equations. Herein, we give a different proof which is amenable to any space dimension $d\ge 2$. 
\subsection{The Eulerian description of the Navier-Stokes free-boundary problem} For $0 \le t \le T$, the evolution of a $d$-di\-men\-si\-o\-nal ($d=2$ or $3$) one-phase, incompressible, viscous fluid with a moving free boundary is modeled by the in\-com\-pres\-sib\-le Navier-Stokes equations: \begin{subequations} \label{NSe} \begin{alignat}{2} u_t+ u\cdot D u + D p&= \nu \Delta u \ \ \ &&\text{in} \ \ \Omega(t) \,,\\ {\operatorname{div}} u &=0 &&\text{in} \ \ \Omega(t) \,, \\ \nu \operatorname{Def} u \cdot n - p\, n &= 0 \ \ &&\text{on} \ \ \Gamma(t) \,, \\ \mathcal{V} (\Gamma(t))& = u \cdot n \ \ &&\text{on} \ \ \Gamma(t) \,, \\ u &= u_0 \ \ &&\text{in} \ \ \Omega(0) \,,\\ \Omega(0) &= \Omega_0\,. && \end{alignat} \end{subequations} The open subset $\Omega(t) \subset \mathbb{R}^d $, $d=2$ or $3$, denotes the time-dependent volume occupied by the fluid, $\Gamma(t):= \partial\Omega(t)$ denotes the moving free-surface, $ \mathcal{V} (\Gamma(t))$ denotes the normal velocity of $\Gamma(t)$, and $n(t)$ denotes the exterior unit normal vector to the free-surface $\Gamma(t)$. The vector-field $u = (u_1,\dots, u_d)$ denotes the Eulerian velocity field, and $p$ denotes the pressure function. We use the notation $D =(\partial_1, ..., \partial_d)$ to denote the gradient operator, and set $\operatorname{Def} u = D u + D u^T$, twice the symmetric part of the gradient of velocity. We have normalized the equations to have all physical constants equal to 1. The pressure $p$ is a solution to the following Dirichlet problem: \begin{subequations} \label{p} \begin{alignat}{2} - \Delta p &= u^i,_j u^j,_i \ \ \ &&\text{in} \ \ \Omega(t) \,,\\ p &= n \cdot \left[ \nu \operatorname{Def} u \cdot n \right] \ \ &&\text{on} \ \ \Gamma(t) \,, \end{alignat} \end{subequations} so that given an initial domain $\Omega$ and an initial velocity field $u_0$, the initial pressure is obtained as the solution of (\ref{p}) at $t=0$.
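For the reader's convenience, we recall why the pressure satisfies (\ref{p}): taking the divergence of the momentum equation in (\ref{NSe}) and using ${\operatorname{div}} u = 0$, the terms $\operatorname{div} u_t = \partial_t \operatorname{div} u$ and $\operatorname{div} (\nu \Delta u) = \nu \Delta \operatorname{div} u$ vanish, while
\[
\operatorname{div} ( u \cdot D u) \;=\; \partial_i \left( u^j\, u^i,_j \right) \;=\; u^j,_i\, u^i,_j + u^j \left( \operatorname{div} u \right),_j \;=\; u^i,_j\, u^j,_i \,,
\]
which yields $-\Delta p = u^i,_j u^j,_i$. The Dirichlet boundary condition in (\ref{p}) follows from contracting the stress-free condition $\nu \operatorname{Def} u \cdot n - p\, n = 0$ with the unit normal $n$, since $|n|=1$.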
\begin{definition}Given a locally smooth, time-dependent fluid interface or free-boundary, if there exists a time $T< \infty $ such that the interface $\Gamma(T)$ self-intersects at a point while remaining locally smooth, we call this point of self-intersection at time $T$ a ``splash'' singularity.\end{definition} We prove that there exist smooth initial data for the Navier-Stokes equations (\ref{NSe}) for which such a splash singularity occurs in finite time. \subsection{Statement of the Main Theorem}\label{sec:maintheorem} \begin{theorem}[Finite-time splash singularity]\label{theorem_main} There exist \begin{enumerate} \item open bounded $C^ \infty $-class initial domains $\Omega\subset \mathbb{R} ^d$, $d=2$ or $3$, with $N$ denoting the unit normal vector field on $\partial \Omega $, and \item smooth divergence-free velocity fields $u_0$ satisfying the compatibility condition $$\left[ \operatorname{Def} u_0 \cdot N\right] \times N =0 \text{ on } \partial \Omega \,,$$ \end{enumerate} such that after a finite time $T^*>0$, the solution to the Navier-Stokes equations (\ref{NSe}) has a splash singularity; that is, the interface $\Gamma(T^*)$ self-intersects. \end{theorem} In Theorem \ref{thm_general}, we show that the geometry of such a splash singularity can be prescribed arbitrarily close (in the $H^3$ norm) to any sufficiently smooth and prescribed self-intersecting domain. \subsection{Prior results for the incompressible Navier-Stokes equations with moving free-surface} Local-in-time well-posedness of solutions to (\ref{NSe}) has been known since the pioneering work of Solonnikov \cite{Sol1977, Sol1991, Sol1992}; his proof did not rely on energy estimates, but rather on Fourier-Laplace transform techniques, which required the use of exponentially weighted anisotropic Sobolev-Slobodeskii spaces with only fractional-order spatial derivatives for the analysis.
Beale \cite{Beale1981} proved local well-posedness in a similar functional framework, and Abels \cite{Abels2005} established the existence theory in the $L^p$ Sobolev space framework. Well-posedness in energy spaces was established by Coutand \& Shkoller in \cite{CoSh2002} for the case of surface tension on the free-boundary, and for Navier-Stokes fluid-structure interaction problems wherein a viscous fluid is coupled to an elastic solid, in \cite{CoSh2005,CoSh2006}. Guo \& Tice \cite{GuTi2013a} also used energy spaces to establish local well-posedness for the case of zero surface tension. Beale \cite{Beale1983} established global existence of solutions to (\ref{NSe}) for small perturbations of equilibrium. More recent small-data global existence and decay results (both with and without surface tension) can be found in \cite{TaTa1995}, \cite{PaSo2000}, \cite{NiTeYo2004}, \cite{Hataya2009}, \cite{Bae2011}, and \cite{GuTi2013b,GuTi2013c}. Recent results on the limit of zero viscosity and the limit of zero surface tension can be found in \cite{MaRo2012}, \cite{ElLe2014}, and \cite{WaXi2015}. For the history of the well-posedness and singularity theory for the inviscid problem, we refer the reader to the introduction in \cite{CoSh2007} and \cite{CoSh2014b}. \textcolor{black}{ \subsection{Outline of the paper} In Section \ref{sec:notation}, we define our notation. In Section \ref{sec::dino_wave}, we define a sequence of domains $ \Omega^\epsilon $ that we use as the initial data for the splash singularity, wherein the boundary $\Gamma^ \epsilon $ of these domains is close to self-intersection with a distance $ \epsilon $ between two approaching portions of $\Gamma^ \epsilon $. We convert the Navier-Stokes equations to Lagrangian coordinates in Section \ref{sec::lagrangian}, thus fixing the domain. In Section \ref{sec5}, we present some preliminary lemmas which show that the constant appearing in elliptic estimates and the Sobolev embedding theorem is independent of $ \epsilon $.
In Section \ref{sec6}, we define the sequence of initial divergence-free velocity fields that are guaranteed to satisfy the single compatibility condition that we require, and whose norm is independent of $ \epsilon $. Section \ref{section7} is devoted to the basic a priori estimates for the Navier-Stokes equations in Lagrangian coordinates; following our approach in \cite{CoSh2002}, we establish estimates for velocity $v \in L^2(0,T; H^3(\Omega^ \epsilon ) ) \cap C^0([0,T]; H^2(\Omega^ \epsilon ) ) $ which are independent of $ \epsilon $. We then prove that the vertical component of velocity $v( \cdot , t)$ at time $t$ remains in an $O( t^ {\frac{1}{4}} )$ neighborhood of the vertical component of the initial velocity field. Using this fact, we prove the main theorem in Section \ref{section8}; we show that by choosing $ \epsilon $ appropriately, a finite-time splash singularity must occur at some time $T^* \in (0, 10 \epsilon )$. We consider a completely arbitrary geometry for a splash singularity in Section \ref{section9}, by following our definition of a generalized splash domain from our previous work in \cite{CoSh2014a}. This, then, allows us to show in Section \ref{section10}, that we can construct a splash singularity for a geometry which is arbitrarily close in $H^{3}$ to {\it any} prescribed $H^{3}$ splash domain. } \section{Notation, local coordinates, and some preliminary results} \label{sec:notation} \subsection{Notation for the gradient vector} \label{sec:grad-horiz-deriv} Throughout the paper the symbol $D $ will be used to denote the $d$-dimensional gradient vector $ D =\left( \frac{\partial}{\partial x_1}\,, \frac{\partial}{\partial x_2}\,, ...,\, \frac{\partial}{\partial x_d} \right) $. \subsection{Notation for partial differentiation and the Einstein summation convention} \label{sec:notat-part-diff} The $k$th partial derivative of $F$ will be denoted by $F\cp{k}=\frac{ \partial F}{ \partial x_k}$.
Repeated Latin indices $i,j,k$, etc., are summed from $1$ to $d$, and repeated Greek indices $\alpha, \beta, \gamma$, etc., are summed from $1$ to $d$$-$$1$. For example, $F\cp{ii}=\sum_{i=1}^d\frac{\partial^2F}{\partial x_i\partial x_i}$, and $F^i\cp{\alpha} I^{\alpha\beta} G^i\cp{\beta}=\sum_{i=1}^d\sum_{\alpha=1}^{d-1}\sum_{\beta=1}^{d-1}\frac{\partial F^i}{\partial x_\alpha} I^{\alpha\beta} \frac{\partial G^i}{\partial x_\beta}$. \def \mathbb{R} { \mathbb{R} } \subsection{Tangential (or horizontal) derivatives}\label{sec: tangential derivative} On each boundary chart $\UL\cap\Omega$, for $1\le\local\le K$, we let $\bar \partial$ denote the \textit{tangential derivative} whose $\alpha$th-component is given by \begin{align*} \bar \partial_ \alpha f=\Pset{\frac{\partial}{\partial x_\alpha}\bset{f\circ\thetal}}\circ\thetal^{-1}=\Pset{\pset{ D f\circ\thetal}\frac{\partial\thetal}{\partial x_\alpha}}\circ\thetal^{-1} \,. \end{align*} For functions defined directly on $B^+$, $\hd$ is simply the horizontal derivative $\hd = (\partial_{x_1},..., \partial_{x_{d-1}})$. \subsection{Sobolev spaces} \label{sec:diff-norms-open} For integers $k\ge0$ and a bounded domain $U$ of $ \mathbb{R} ^d$, we define the Sobolev space $H^k(U)$ $\pset{H^k(U; \mathbb{R} ^d)}$ to be the completion of $C^\infty(\bar{U})$ $\pset{C^\infty(\bar{U}; \mathbb{R} ^d)}$ in the norm \begin{align*} \norm{u}_{k,U}^2=\sum_{\abs{a}\le k}\int_U \Abs{ D ^a u(x) }^2 , \end{align*} for a multi-index $a\in \mathbb{Z} ^d_+$, with the convention that $\abs{a}=a_1+\cdots+a_d$. When there is no possibility for confusion, we write $\| \cdot \|_k$ for $\norm{\cdot }_{k,U}$. For real numbers $s\ge0$, the Sobolev spaces $H^s(U)$ and the norms $\label{n:interior norm}\norm{\cdot}_{s,U}$ are defined by interpolation. We will write $H^s(U)$ instead of $H^s(U; \mathbb{R} ^d)$ for vector-valued functions.
\subsection{Sobolev spaces on a surface $\Gamma$} \label{sec:sobolev-spaces-gamma} For functions $u\in H^k(\Gamma)$, $k\ge0$, we set \begin{align*} \norm{u}_{k,\Gamma}^2=\sum_{\abs{a}\le k } \int_\Gamma \Abs{ \hd^a u(x)}^2, \end{align*} for a multi-index $a\in \mathbb{Z} ^{d-1}_+$. For real $s\ge0$, the Hilbert space $H^s(\Gamma)$ and the boundary norm $\label{n:boundary-norm}\abs{\cdot}_s$ are defined by interpolation. The negative-order Sobolev spaces $H^{-s}(\Gamma)$ are defined via duality. That is, for real $s\ge0$, $H^{-s}(\Gamma)=H^s(\Gamma)' $. \subsection{The unit normal and tangent vectors} We let $n( \cdot ,t)$ denote the outward unit normal vector to the moving boundary $ \Gamma (t)$. When $t=0$, we let $N_ \epsilon $ denote the outward unit normal to $\Gamma^ \epsilon $. For each $ \alpha =1,...,d-1$ and $x \in \Gamma^ \epsilon $, $\tau_ \alpha(x)$ denotes an orthonormal basis of the ($d$$-$$1$)-dimensional tangent space to $ \Gamma^\epsilon$ at the point $x$. \section{The sequence of initial domains $\Omega^ \epsilon $}\label{sec::dino_wave} We shall use, as initial data, a sequence of domains, whose two-dimensional cross-section resembles a dinosaur neck arching over its body. \subsection{The ``dinosaur wave'' domains} \begin{definition}[The domain $\Omega$]\label{def-dino} Let $\Omega\subset \mathbb{R} ^d$, $d=2,3$, be a smooth bounded domain (as shown on the left of Figure \ref{fig_dino}) with boundary $\Gamma$. We assume that there are three particular open subsets of $\Omega$ as follows: \begin{enumerate} \item There exists an open subset $\omega\subset \Omega$ such that its boundary portion $\partial\omega\cap\Gamma$ is a vertical circular cylinder of radius $r$ and of length $h>0$. \item There exists an open subset $\omega_+\subset\Omega$ which is the lower-half of an open ball of radius $1$, located directly below the cylindrical region $\omega$, and in contact with the cylindrical region $\overline{\omega}$.
The ``south pole'' of $\omega_+$ is the point $X_+$ (see Figure \ref{fig_initialconditions}). \item There exists an open subset $\omega_-\subset\Omega$ directly below, at a distance $1$, from the ``south pole'' $X_+$ of $\omega_+$, such that the points with maximal vertical coordinate in $\partial\omega_-\cap \Gamma$ form a subset of the horizontal plane $x_d=0$. \item Coordinates are assigned to subsets of $\Omega$ as follows: \begin{enumerate} \item The origin of $ \mathbb{R} ^d$ is contained in $\partial\omega_-\subset\Gamma \cap \{ x_d=0\}$. \item The point $X_+$, the ``south pole'' of $\omega_+$, has the coordinates $X^ \alpha _+ =0$ for $ \alpha =1,..., d-1$ and $X_+^d =1$. \item The top boundary of the hemisphere $\omega_+$ is the set $\{ (x_h, x_d) \in \mathbb{R} ^d \ : \ x_d = {\frac{3}{2}} , \ | x_h | < 1 \}$. \item The cylindrical region $ \omega $ is given by $\{ (x_h, x_d) \in \mathbb{R} ^d \ : \ {\frac{3}{2}} < x_d <{\frac{3}{2}} + h , \ | x_h | < 1 \}$. \end{enumerate} \end{enumerate} \end{definition} \begin{figure}[h] \begin{tikzpicture}[scale=.5] \draw[color=red,ultra thick] (6,0.5) arc (-90:0:1cm); \draw[color=red,ultra thick] (6,0.5) arc (270:180:1cm); \draw[color=green,ultra thick] plot[smooth,tension=.6] coordinates{ (5,1.5) (7,1.5) }; \draw[color=green,ultra thick] plot[smooth,tension=.6] coordinates{ (5,3) (7,3) }; \draw (6,2) node { $ \omega $}; \draw (6,1) node { $ \omega_+ $}; \draw (6,-2) node { $ \omega_- $}; \draw[color=blue,ultra thick] (5,3) arc (00:180:2cm); \draw[color=blue,ultra thick] (7,3) arc (00:180:4cm); \draw[color=blue,ultra thick] plot[smooth,tension=.6] coordinates{ (1,0) (1,2) (1,3) }; \draw[color=blue,ultra thick] plot[smooth,tension=.6] coordinates{ (-1,-3) (-1,2) (-1,3) }; \draw[color=blue,ultra thick] plot[smooth,tension=.6] coordinates{ (5,1.5) (5,2) (5,3) }; \draw[color=blue,ultra thick] plot[smooth,tension=.6] coordinates{ (7,3) (7,2) (7,1.5) }; \draw[color=blue,ultra thick] (1,0) arc (180:270:1cm); 
\draw[color=blue,ultra thick] (-1,-3) arc (180:270:1cm); \draw[color=blue,ultra thick] plot[smooth,tension=.6] coordinates{ (2,-1) (10,-1) }; \draw[color=blue,ultra thick] plot[smooth,tension=.6] coordinates{ (0,-4) (10,-4) }; \draw[color=blue,ultra thick] (10,-1) arc (90:0:1cm); \draw[color=blue,ultra thick] (10,-4) arc (-90:0:1cm); \draw[color=blue,ultra thick] plot[smooth,tension=.6] coordinates{ (11,-2) (11,-3) }; \draw (10,-2.5) node { $\Omega $}; \draw (7,5) node { $\Gamma $}; \draw[color=red,ultra thick] (20,-0.5) arc (-90:0:1cm); \draw[color=red,ultra thick] (20,-0.5) arc (270:180:1cm); \draw[color=green,ultra thick] plot[smooth,tension=.6] coordinates{ (19,0.5) (21,0.5) }; \draw[color=green,ultra thick] plot[smooth,tension=.6] coordinates{ (19,3) (21,3) }; \draw (20,1.5) node { $ \omega^ \epsilon $}; \draw (20,0.) node { $ \omega_+^ \epsilon $}; \draw (20,-2.) node { $ \omega_- $}; \draw[color=blue,ultra thick] (19,3) arc (00:180:2cm); \draw[color=blue,ultra thick] (21,3) arc (00:180:4cm); \draw[color=blue,ultra thick] plot[smooth,tension=.6] coordinates{ (15,0) (15,2) (15,3) }; \draw[color=blue,ultra thick] plot[smooth,tension=.6] coordinates{ (13,-3) (13,2) (13,3) }; \draw[color=blue,ultra thick] plot[smooth,tension=.6] coordinates{ (19,0.5) (19,2) (19,3) }; \draw[color=blue,ultra thick] plot[smooth,tension=.6] coordinates{ (21,3) (21,2) (21,0.5) }; \draw[color=blue,ultra thick] (15,0) arc (180:270:1cm); \draw[color=blue,ultra thick] (13,-3) arc (180:270:1cm); \draw[color=blue,ultra thick] plot[smooth,tension=.6] coordinates{ (16,-1) (24,-1) }; \draw[color=blue,ultra thick] plot[smooth,tension=.6] coordinates{ (14,-4) (24,-4) }; \draw[color=blue,ultra thick] (24,-1) arc (90:0:1cm); \draw[color=blue,ultra thick] (24,-4) arc (-90:0:1cm); \draw[color=blue,ultra thick] plot[smooth,tension=.6] coordinates{ (25,-2) (25,-3) }; \draw (24,-2.5) node { $\Omega^ \epsilon $}; \draw (21,5) node { $\Gamma^ \epsilon $}; \draw(2.5,0.5) -- (4.5,0.5); \draw[->] (3.5,2.5) 
-- (3.5,0.6); \draw[->] (3.5,-3.) -- (3.5,-1.1); \draw (3.5,-.25) node { $1$}; \draw(16.5,-0.5) -- (18.5,-0.5); \draw[->] (17.5,1.5) -- (17.5,-0.4); \draw[->] (17.5,-3.) -- (17.5,-1.1); \draw (17.5,-.75) node { $_{ \epsilon }$}; \draw[color=green,ultra thick] plot[smooth,tension=.6] coordinates{ (2,-1.05) (2,-3.95) }; \draw[color=green,ultra thick] plot[smooth,tension=.6] coordinates{ (8,-1.05) (8,-3.95) }; \draw[color=green,ultra thick] plot[smooth,tension=.6] coordinates{ (16,-1.05) (16,-3.95) }; \draw[color=green,ultra thick] plot[smooth,tension=.6] coordinates{ (22,-1.05) (22,-3.95) }; \end{tikzpicture} \caption{{\footnotesize {\bf Left.} The ``dinosaur wave'' domain $\Omega $ with boundary $\Gamma $. {\bf Right.} The sequence of ``dinosaur waves'' $\Omega^ \epsilon $ with boundary $\Gamma^ \epsilon $, $ \epsilon >0$, used as initial data for the Navier-Stokes splash singularity. In order to ensure that a splash occurs, the ``dinosaur neck'' $ \omega ^ \epsilon $ stretches downward so that there is a distance $ \epsilon $ between the two portions. The domains $\Omega^\epsilon $ simply stretch the neck of the dinosaur, and are identical to $\Omega$ away from the neck. }}\label{fig_dino} \end{figure} \begin{definition}[The initial domains $\Omega^ \epsilon $]\label{def-dino-e} For $ 0 < \epsilon \ll 1$, let $\Omega^ \epsilon \subset \mathbb{R} ^d$, $d=2,3$, be a smooth bounded domain (as shown on the right of Figure \ref{fig_dino}) with boundary $\Gamma^ \epsilon $. We define the domain $ \Omega^\epsilon $ to be the following modification of the domain $\Omega$: \begin{enumerate} \item There exists an open subset $\omega^ \epsilon \subset \Omega^ \epsilon $, which is a vertical dilation of the domain $\omega$, such that its boundary $\partial\omega^ \epsilon \cap \Gamma^ \epsilon $ is a vertical circular cylinder of radius $r$ and of length $h+1- \epsilon $.
\item There exists an open subset $\omega_+^ \epsilon \subset\Omega^ \epsilon $ which is the set $\omega_+$ translated vertically downward a distance $1- \epsilon $; hence, $\omega_+^ \epsilon $ is the lower-half of an open ball of radius $1$, located directly below the cylindrical region $\omega^ \epsilon $, and in contact with the cylindrical region $\overline{\omega^ \epsilon }$. The ``south pole'' of $\omega_+^ \epsilon $ is the point $X_+^ \epsilon $. \item There exists an open subset $\omega_-\subset\Omega^ \epsilon $ directly below, at a distance $ \epsilon $, from the ``south pole'' $X_+^ \epsilon $ of $\omega_+^ \epsilon $, such that the points with maximal vertical coordinate in $\partial\omega_-\cap \Gamma^ \epsilon $ form a subset of the horizontal plane $x_d=0$. We assume that $\partial\omega_-\cap \Gamma^ \epsilon $ contains a ($d$$-$$1$)-dimensional ball of radius $\sqrt{ \epsilon }$. \item Coordinates are assigned to subsets of $\Omega^ \epsilon $ as follows: \textcolor{black}{ \begin{enumerate} \item The origin of $ \mathbb{R} ^d$ is contained in $\partial\omega_-\subset\Gamma^ \epsilon \cap \{ x_d=0\}$. \item The point $X_+^ \epsilon $, the ``south pole'' of $\omega_+^ \epsilon $, has the coordinates $X^ \alpha _+ =0$ for $ \alpha =1,..., d-1$ and $X_+^d = \epsilon $. \item The top boundary of the hemisphere $\omega_+^ \epsilon $ is the set $\{ (x_h, x_d) \in \mathbb{R} ^d \ : \ x_d = \epsilon + {\frac{1}{2}} , \ | x_h | < 1 \}$. \item The cylindrical region $ \omega^ \epsilon $ is given by $\{ (x_h, x_d) \in \mathbb{R} ^d \ : \ \epsilon + {\frac{1}{2}} < x_d < \epsilon + {\frac{1}{2}} + h , \ | x_h | < 1 \}$. \end{enumerate} } \end{enumerate} \end{definition} \subsection{Local coordinate charts for $\Omega$ and $\Omega^ \epsilon $ } \label{sec::charts} \subsubsection{Local charts for $\Omega$} We let $s \ge 3$ and $0 < \epsilon \ll 1$.
Let $\Omega\subset \mathbb{R}^d $ denote a smooth open set, and let $\{U_l\}_{l=1}^K$ denote an open covering of $\Gamma=\partial\Omega$, such that for each $l\in \{1,2,\dots,K\}$, with \begin{align*} B&=B(0,1),\text{ denoting the open ball of radius $1$ centered at the origin and}, \\ B^+&=B\cap\set{x_d>0}, \\ B^0&=\overline B\cap\set{x_d=0}, \end{align*} there exist $C^\infty $ charts $\thetal$ which satisfy \begin{subequations} \label{normalchart} \begin{align} \thetal\colon B\to\UL\ &\text{ is a $C^ \infty $ diffeomorphism}, \\ \thetal(B^+)&=\UL\cap\Omega, \ \ \ \thetal(B^0)=\UL\cap\Gamma\,, \end{align} \end{subequations} and $\det D \thetal=C_l$ for a constant $C_l >0$. We assume these boundary charts can be split into three categories (each being non-empty): \begin{itemize} \item For $1\le l\le K_1$, $\theta_l (B^+) \subset \omega $. \item For $K_1+1\le l\le K_2$, $\theta_l (B^+) \not \subset \omega $ and $\theta_l (B^+)\cap \omega_+ =\emptyset$. \item For $K_2+1\le l\le K$, $\theta_l (B^+) \not \subset \omega $ and $\theta_l (B^+)\cap \omega_+ \neq \emptyset$. \end{itemize} We also assume that the images of any charts $\theta_l$ for $K_1+1\le l\le K_2$ do not intersect any of the images of the charts for $K_2+1\le l\le K$. Next, for $L>K$, we let $\set{\UL}_{\local=K+1}^L$ denote a family of open sets contained in $\Omega$ such that $\set{\UL}_{\local=1}^L$ is an open cover of $\Omega$ and there exist smooth diffeomorphisms $\theta_l:B \to U_l$ with $\det D \theta_l$ equal to a constant $C_l>0$. Just as for the case of the boundary charts, we assume that these interior charts are split into three categories (each being non-empty): \begin{itemize} \item For $K+1\le l\le L_1$, $\theta_l (B) \subset \omega $. \item For $L_1+1\le l\le L_2$, $\theta_l (B) \not \subset \omega $ and $\theta_l (B)\cap \omega_+ =\emptyset$. \item For $L_2+1\le l\le L$, $\theta_l (B) \not \subset \omega $ and $\theta_l (B)\cap \omega_+ \neq \emptyset$.
\end{itemize} \textcolor{black}{ We assume that the union of the images of the charts $\theta_l$, for $1\le l\le K_1$ and $K+1\le l\le L_1$, contains the shortened cylindrical region $$\stackrel{ \circ }{\omega} = \{ (x_h, x_d) \in \mathbb{R} ^d \ : \ {\frac{3}{2}} +\frac{h}{3+h} \frac{h}{2} < x_d <{\frac{3}{2}} + h - \frac{h}{3+h}\frac{h}{2} , \ | x_h | < 1 \}$$ of length $\frac{3}{3+h} h$.} \textcolor{black}{ We also assume that the union of the images of the charts $\theta_l$, for $K_2+1\le l\le K$ and $L_2+1\le l\le L$, contains the complement in $\omega$ of the shortened cylindrical region $$\tilde \omega = \{ (x_h, x_d) \in \mathbb{R} ^d \ : \ {\frac{3}{2}} + \frac{h}{2}\frac{\frac{3}{2}+h} {3+h}< x_d <{\frac{3}{2}} + h - \frac{h}{2}\frac{\frac{3}{2}+h} {3+h} , \ | x_h | < 1 \}$$ of length $\frac{3}{3+h}\frac{h}{2}$, so that the complement is of length $\frac{\frac{3}{2}+h}{3+h} h$. } \textcolor{black}{ Finally, we assume that the image of any of the charts $\theta_l$ for $L_1+1\le l\le L_2$ does not intersect any of the images of the charts $\theta_l$ for $L_2+1\le l\le L$.
} \subsubsection{Local charts for $\Omega^ \epsilon $} We next explain how this system of charts can be simply modified to describe $\Omega^\epsilon$ using the following three steps: \begin{enumerate} \item For either $1\le l\le K_1$ or $K+1\le l\le L_1$, we define the vertically dilated chart (corresponding to a cylinder with length dilated from $h$ to $h+1-\epsilon$) $$\theta_l^\epsilon=\left(\theta_l^1,\dots,\theta_l^{d-1}, \frac{h+1-\epsilon}{h} (\theta_l^d-{\frac{3}{2}} )+ {\frac{1}{2}} +\epsilon\right)\,.$$ Note that $\theta_l^\epsilon$ sends any point whose image by $\theta_l$ was at the altitude $ {\frac{3}{2}} $ in $\overline{\omega}$ (respectively ${\frac{3}{2}} +h$) into a point of altitude ${\frac{1}{2}} +\epsilon$ (respectively ${\frac{3}{2}} +h$) in $\Omega^\epsilon$. \item For either $K_1+1\le l\le K_2$ or $L_1+1\le l\le L_2$, we set $\theta_l^\epsilon=\theta_l$. \item For either $K_2+1\le l\le K$ or $L_2+1\le l\le L$, we define the vertically translated chart $\theta_l^\epsilon=\theta_l-(1-\epsilon) e_d$. \end{enumerate} These charts describe $\Omega^\epsilon$, and again $\det D \theta_l^\epsilon$ is a strictly positive constant given by either $C_l$ or $\frac{h+1-\epsilon}{h} C_l$. \subsubsection{Cut-off functions on charts covering $\Omega$} \textcolor{black}{ Let $\{\xi_l\}_{l=1}^L$ denote a smooth partition of unity, subordinate to the covering $\{U_l\}_{l=1}^L$; i.e., $ \xi _l \in C^ \infty _c(U_l)$, $0 \le \xi _l \le 1$, and $\sum_{l=1}^L \xi_l =1$.} \textcolor{black}{ We set $ \mathcal{B}_l = B^+ $ for $l=1,...,K$, and $ \mathcal{B}_l = B$ for $l=K+1,...,L$.
For each $l=1,...,L$, we set $\zeta_l = \xi _l \circ \theta_l$, so that $ \zeta_l \in C^ \infty _c ( \mathcal{B} _l)$ whenever the charts $\theta_l$ are smooth.} \subsubsection{Cut-off functions on charts covering $\Omega^ \epsilon $}\label{sec::partition} \textcolor{black}{ We define the cut-off functions $\xi _l^ \epsilon $ as follows: $$ \xi ^ \epsilon _l \circ \theta_l^ \epsilon = \xi _l \circ \theta_l \,. $$ Setting $ \zeta _l = \xi ^ \epsilon _l \circ \theta_l^ \epsilon$, we see that $\| \zeta_l \|_{k, \mathcal{B} _l} $ is bounded by a constant which is independent of $ \epsilon $. } \section{The Lagrangian description of the Navier-Stokes free-boundary problem} \label{sec::lagrangian} For $ \epsilon >0$, we let $\Omega^ \epsilon $ with boundary $\Gamma^ \epsilon $ be given by Definition \ref{def-dino-e}, and we transform the system (\ref{NSe}) into a system of equations set on this reference domain. To do so, we shall employ Lagrangian coordinates. The Lagrangian flow map $\eta ( \cdot ,t)$ is the solution of the flow equation $\eta_t (x,t) = u(\eta(x,t),t)$ for $t>0$ with initial condition $\eta(x,0) =x$. Since $ \operatorname{div} u=0$, it follows that $\det D \eta =1$. For each instant of time $t$ for which the flow is well-defined, we have $$ \eta( \cdot ,t): \Omega^ \epsilon \to \Omega (t) \text{ is a diffeomorphism}\,; $$ furthermore, thanks to (\ref{NSe}d), $$ \Gamma(t) = \eta( \Gamma^ \epsilon , t) \,. $$ Notationally, we keep the dependence on $ \epsilon >0$ implicit, except for the initial domain and boundary. Next, we define \begin{align*} v &= u \circ \eta \text{ (Lagrangian velocity)}, \\ q&=p \circ \eta \text{ (Lagrangian pressure)}, \\ A &= [ D \eta]^{-1} \text{ (inverse of the deformation tensor)}\,, \\ g_{ \alpha \beta } &= \eta, _ \alpha \cdot \eta,_\beta \ \ \alpha ,\beta =1,.., d-1 \text{ (induced metric on $\Gamma$)}\,, \\ \mathfrak{g} & = \det( g_{ \alpha \beta }) \,.
\end{align*} We also define the Lagrangian analogue of some of the fundamental differential operators present in this equation: \begin{align*} \operatorname{div} _\eta v &= (\operatorname{div} u) \circ \eta = v^i,_j A^j_i \,, \\ \operatorname{curl} _\eta v &= (\operatorname{curl} u) \circ \eta \text{ or } [ \operatorname{curl} _ \eta v]_i = \varepsilon_{ i jk} v^k,_r A^r_j \,, \\ \operatorname{Def} _\eta v &= (\operatorname{Def} u) \circ \eta \text{ or } [\operatorname{Def} _ \eta v]^i_j = v^i,_rA^r_j + v^j,_rA^r_i \,, \\ \Delta _\eta v &= (\Delta u) \circ \eta = ( A^j_r A^k_r v,_k),_j \,. \end{align*} The Lagrangian version of equations (\ref{NSe}) is given on the fixed reference domain $\Omega^ \epsilon $ by \begin{subequations} \label{NSlag} \begin{alignat}{2} \eta( \cdot ,t) & = e + \int_0^t v(\cdot ,s) ds \ && \text{ in } \Omega^ \epsilon \times [0,T] \,, \\ v_t + A^T D q &= \nu \Delta _ \eta v \ \ && \text{ in } \Omega^ \epsilon \times (0,T] \,, \\ \operatorname{div} _\eta v &=0 \ \ && \text{ in } \Omega^\epsilon \times [0,T] \,, \\ \nu \operatorname{Def} _ \eta v \cdot n - qn &=0\ \ && \text{ on } \Gamma^ \epsilon \times [0,T] \,, \\ (\eta,v) &=(e,u_0) \ \ \ \ && \text{ in } \Omega^\epsilon \times \{t=0\} \,, \end{alignat} \end{subequations} where $e(x)=x$ denotes the identity map on $\Omega^ \epsilon $, and where we write $n$ for $n(\eta)$ in the Lagrangian description; in particular, the unit normal vector $n$ at the point $\eta( x, t)$ can be expressed in terms of the cofactor matrix $A$ and the time $t=0$ normal vector $N^ \epsilon $ as $$ n = A^T N^ \epsilon / | A^T N^ \epsilon | \,. $$ Due to (\ref{NSlag}c), $$ \Delta _ \eta v = \operatorname{div} _ \eta \operatorname{Def} _\eta v \,, $$ so that (\ref{NSlag}d) can be viewed as the natural boundary condition. The variables $ \eta, v$, and $q$ have an a priori dependence on $ \epsilon > 0$, but we do not explicitly write this.
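As a consistency check on these definitions (a short computation using only the chain rule and $A = [D\eta]^{-1}$), the formula for the Lagrangian divergence can be verified directly: since $v = u \circ \eta$,
$$
v^i,_j = (u^i,_k \circ \eta)\, \eta^k,_j \quad \Longrightarrow \quad v^i,_j A^j_i = (u^i,_k \circ \eta)\, \eta^k,_j A^j_i = (u^i,_k \circ \eta)\, \delta^k_i = (\operatorname{div} u) \circ \eta \,.
$$
The formulas for $\operatorname{curl}_\eta$ and $\operatorname{Def}_\eta$ follow in the same way, while the formula for $\Delta_\eta$ additionally uses the Piola identity $A^j_r,_j = 0$, which holds since $\det D\eta = 1$.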
Local-in-time existence and uniqueness of solutions to (\ref{NSlag}) have been known since the pioneering work of Solonnikov \cite{Sol1977}. We shall establish a priori estimates for (\ref{NSlag}) with the initial domain $\Omega^ \epsilon $ and with divergence-free initial velocity fields satisfying the single compatibility condition \begin{equation}\label{comp} [ \operatorname{Def} u^ \epsilon _0 \cdot N^ \epsilon ]\cdot \tau^ \epsilon _ \alpha =0 \text{ on } \Gamma^ \epsilon \,, \end{equation} where $N^ \epsilon $ denotes the outward unit normal to $\Gamma^ \epsilon $ and $\tau^ \epsilon _ \alpha $, $ \alpha =1,.., d-1$, denotes the $(d-1)$ tangent vectors to $\Gamma^ \epsilon $. We will show that both the a priori estimates and the time of existence for solutions are independent of the distance $ \epsilon >0$ between the falling dinosaur head $X_+^ \epsilon $ and the flat trough $ \partial \omega_- \cap \{ x_d=0\}$ (see Figure \ref{fig_dino}). To do so, we shall rely on some basic lemmas that provide us with constants which are independent of $ \epsilon$. \section{Elliptic and Sobolev constants are independent of $\epsilon $}\label{sec5} We consider the following linear Stokes problem: \begin{subequations} \label{Stokes} \begin{alignat}{2} - \Delta u + D p& = f \ && \text{ in } \Omega^ \epsilon\,, \\ \operatorname{div} u &= \phi \ \ && \text{ in } \Omega^ \epsilon \,, \\ u&=g\ \ && \text{ on } \Gamma^ \epsilon \,. \end{alignat} \end{subequations} \begin{lemma}[Estimates for the Stokes problem on $\Omega^ \epsilon $] \label{lemma1} Suppose that for an integer $k \ge 2$, $f \in H^{k-2}(\Omega^ \epsilon )$, $\phi \in H^{k-1}(\Omega^ \epsilon )$, and $g \in H^{k-1/2}(\Gamma^\epsilon )$, and $\int_{ \Omega ^ \epsilon } \phi(x) dx = \int_{\Gamma^\epsilon } g \cdot N^ \epsilon \, dS$. Then, there exists a unique solution $u \in H^k (\Omega^ \epsilon )$ and $p \in H^{k-1} (\Omega^ \epsilon )/ \mathbb{R} $ to the Stokes problem (\ref{Stokes}).
Moreover, there is a constant $C$ depending only on $\Omega$, but independent of $ \epsilon >0$, such that \begin{equation}\label{stokes-reg} \|u \|_k + \|p\|_{k-1} \le C \left( \|f\|_{k-2} + \|\phi\|_{k-1} + | g|_{k-1/2} \right) \,. \end{equation} \end{lemma} \begin{proof} The estimate (\ref{stokes-reg}) is well-known on the domain $\Omega$; see, for example, \cite{AmGi1991}. This estimate on the sequence of domains $\Omega^ \epsilon $ follows by localization using the charts $\theta^ \epsilon _l$ given in Section \ref{sec::charts}. Since the charts $\theta^ \epsilon _l$ are modified from the charts $\theta_l$ by a vertical dilation whose factor is bounded from below and above uniformly in $ \epsilon $, the constant for the elliptic estimate in each chart is independent of $ \epsilon>0$. \end{proof} \begin{lemma}[Sobolev constant on $\Omega^ \epsilon $] \label{lemma2} Independent of $ \epsilon$, there exists a constant $C>0$ which depends only on the domain $\Omega$, such that $$ \max_{x\in\Omega^ \epsilon } | u(x)| \le C \|u\|_{s, \Omega^ \epsilon } \ \ \forall u \in H^s(\Omega^ \epsilon ) \,, \ \ s> d/2 \,. $$ \end{lemma} \begin{proof} The constant is determined by a uniform interior ball condition: there exists a radius $r>0$ such that every $x \in \Omega^ \epsilon $ is contained in a ball of radius $r$ which is contained in $\overline{ \Omega ^ \epsilon }$. By Definition \ref{def-dino-e} of the domains $\Omega^ \epsilon $, $r$ does not depend on $ \epsilon $, and hence the Sobolev constant $C$ only depends on $\Omega$. \end{proof} \begin{lemma}[Trace theorem on $\Omega^ \epsilon $]\label{lemma3} Independent of $ \epsilon$, there exists a constant $C>0$ which depends only on the domain $\Omega$, such that for $ s\in( {\frac{1}{2}},3] $ $$ \|u\|_{s-{\frac{1}{2}} , \Gamma^ \epsilon } \le C \|u\|_{s , \Omega^ \epsilon } \ \ \forall u \in H^s( \Omega^ \epsilon ) \,.
$$ \end{lemma} \begin{proof} From the standard trace theorem in $B^+$, we have the existence of a constant $C>0$ such that for any boundary chart, $$ \|u\circ\theta_l^\epsilon\|_{s-{\frac{1}{2}} , B^0 } \le C \|u\circ\theta_l^\epsilon\|_{s , B^+ } \ \ \forall u \in H^s( \Omega^ \epsilon ) \,. $$ Now, since $\theta_l^\epsilon$ is either a chart $\theta_l$ for the domain $\Omega$ or a vertical dilation of such a chart with dilation factor uniformly bounded from below and above, as made precise in Section \ref{sec::charts}, it follows by the chain rule that $$ \|u\|_{s-{\frac{1}{2}} , \theta_l^\epsilon(B^0) } \le C \|u\|_{s , \theta_l^\epsilon(B^+) } \ \ \forall u \in H^s( \Omega^ \epsilon ) \,. $$ Since $\Gamma^\epsilon$ is the union of all $\theta_l^\epsilon(B^0)$, $1\le l\le K$, the above inequality implies the result. \end{proof} \section{The sequence of initial velocity fields $u_0^ \epsilon $}\label{sec6} \subsection{Constructing the sequence of initial velocity fields $u_0^\epsilon$}\label{sec::u0} As described in Definition \ref{def-dino-e}, near the intended splash (or self-intersection) point, the open set $\Omega^ \epsilon $ consists of two sets: the upper set $\omega^ \epsilon _ +$ and the lower set $\omega_-$ whose boundary contains the flat ``dinosaur belly'' at $ x_d=0$, as shown in Figure \ref{fig_initialconditions}. We let $X_+^ \epsilon $ denote the point which has the smallest vertical coordinate in $\partial \omega^ \epsilon _+$. Directly below, we let $X_-$ be the point in $\partial \omega_- \cap \{ x_d=0\}$ with the same horizontal coordinate as $X_+^ \epsilon $. Without loss of generality, we set $X_-$ to be the origin of $ \mathbb{R} ^d$.
\begin{figure}[h] \begin{tikzpicture}[scale=.5] \draw[color=blue,ultra thick] (6,0.5) arc (-90:0:1cm); \draw[color=blue,ultra thick] (6,0.5) arc (270:180:1cm); \draw[color=blue,ultra thick] plot[smooth,tension=.6] coordinates{ (5,1.5) (5,1.6) }; \draw[color=blue,ultra thick] plot[smooth,tension=.6] coordinates{ (7,1.6) (7,1.5) }; \draw[color=green,ultra thick] plot[smooth,tension=.6] coordinates{ (7,1.6) (5,1.6) }; \draw[color=blue,ultra thick] (1,0) arc (180:270:1cm); \draw[color=blue,ultra thick] (-1,-3) arc (180:270:1cm); \draw (6,1.2) node { $\omega_+^ \epsilon $}; \draw (6.,.5) node { $\newmoon$}; \draw (6.6,0) node { $X_+^ \epsilon $}; \draw (6.,-1) node { $\newmoon$}; \draw (6,-1.7) node { $X_- $}; \draw[color=blue,ultra thick] plot[smooth,tension=.6] coordinates{ (2,-1) (10,-1) }; \draw[color=blue,ultra thick] plot[smooth,tension=.6] coordinates{ (0,-4) (10,-4) }; \draw[color=blue,ultra thick] (10,-1) arc (90:0:1cm); \draw[color=blue,ultra thick] (10,-4) arc (-90:0:1cm); \draw[color=blue,ultra thick] plot[smooth,tension=.6] coordinates{ (11,-2) (11,-3) }; \draw (6,-3) node { $\omega_- $}; \draw[color=green,ultra thick] plot[smooth,tension=.6] coordinates{ (2,-1.05) (2,-3.95) }; \draw[color=green,ultra thick] plot[smooth,tension=.6] coordinates{ (9,-1.05) (9,-3.95) }; \end{tikzpicture} \caption{{\footnotesize In a neighborhood of the intended splash point, we suppose that $\Omega^ \epsilon $ consists of two sets: the upper set $\omega_+^ \epsilon $ and the lower set $\omega_-$ containing the horizontally flat ``dinosaur belly.'' The point $X_+^ \epsilon $ is at a distance $ \epsilon $ from the set $\omega_-$ and the point $X_-$ is assumed to be the origin in $ \mathbb{R} ^d$. 
}}\label{fig_initialconditions} \end{figure} We choose a smooth function $b_0^\epsilon \in C^ \infty ( \Gamma ^ \epsilon ) $ such that $b_0^ \epsilon = -1$ in a small neighborhood of $X_+^ \epsilon $ on $\partial \omega_+^ \epsilon $, $b_0^ \epsilon =0$ on $\partial \omega_- $, $b_0^ \epsilon =0$ on $\partial \omega^ \epsilon\cap \Gamma^\epsilon $, $\int_{ \Gamma ^ \epsilon } b_0^ \epsilon \, dS=0$, and satisfying the estimate \begin{equation}\label{b-est} \|b_0^ \epsilon \|_{2.5,\Gamma^ \epsilon } \le m_0 < \infty \,, \end{equation} where $m_0$ does not depend on $ \epsilon $. We define the initial velocity field $u_0^ \epsilon $ at $t=0$ as the solution to the following Stokes problem: \begin{subequations} \label{Stokes2} \begin{alignat}{2} - \Delta u_0^ \epsilon + D r_0^ \epsilon & = 0 \ && \text{ in } \Omega^ \epsilon\,, \\ \operatorname{div} u_0^ \epsilon &= 0 \ \ && \text{ in } \Omega^ \epsilon \,, \\ [\operatorname{Def} u_0^ \epsilon \cdot N^ \epsilon ] \cdot \tau _ \alpha ^ \epsilon &=0 \ \ && \text{ on } \Gamma^ \epsilon \,, \\ u_0^ \epsilon \cdot N^ \epsilon &=b_0^ \epsilon \ \ && \text{ on } \Gamma^ \epsilon \,, \end{alignat} \end{subequations} with $N^ \epsilon $ denoting the outward unit normal to $ \Gamma ^ \epsilon $ and $\tau_ \alpha ^ \epsilon $, $\alpha =1,2$, denoting an orthonormal basis of the tangent space to $\Gamma ^ \epsilon $ (if the dimension $d=2$, then there is only one tangent vector). Using the regularity theory of this elliptic system (see, for example, \cite{SoSc1973} or \cite{AmSe2011} and references therein), together with the proof of Lemma \ref{lemma1}, for a constant independent of $ \epsilon >0$, \begin{equation}\label{u0-est} \|u_0^ \epsilon \|_{3, \Omega^ \epsilon } \le C \|b_0^ \epsilon \|_{2.5,\Gamma^ \epsilon } \le C \, m_0\,. \end{equation} The boundary condition (\ref{Stokes2}c) ensures that $u_0^ \epsilon $ satisfies (\ref{comp}).
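The compatibility condition (\ref{comp}) is forced by the dynamic boundary condition (\ref{NSlag}d) at time $t=0$ (a brief computation): since $\eta( \cdot ,0)=e$, we have $A(0)= \operatorname{Id} $ and $n=N^ \epsilon $ at $t=0$, so that (\ref{NSlag}d) reads $\nu \operatorname{Def} u_0^ \epsilon \cdot N^ \epsilon = q( \cdot ,0)\, N^ \epsilon $ on $\Gamma^ \epsilon $. Taking the inner product with the tangent vectors $\tau^ \epsilon _ \alpha $ eliminates the pressure:
$$
\nu\, [ \operatorname{Def} u_0^ \epsilon \cdot N^ \epsilon ]\cdot \tau^ \epsilon _ \alpha = q( \cdot ,0)\, N^ \epsilon \cdot \tau^ \epsilon _ \alpha = 0 \,,
$$
which is (\ref{comp}), the factor $\nu>0$ being harmless.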
\subsection{The initial pressure function $p_0^ \epsilon $} The initial pressure function $p_0^ \epsilon $ at $t=0$ then satisfies \begin{subequations} \label{p0} \begin{alignat}{2} - \Delta p_0^ \epsilon &= (u_0^ \epsilon )^i,_j (u_0^ \epsilon )^j,_i \ \ \ &&\text{in} \ \ \Omega^ \epsilon \,,\\ p_0^ \epsilon &= N^ \epsilon \cdot \left[ \nu \operatorname{Def} u_0^ \epsilon \cdot N^ \epsilon \right] \ \ &&\text{on} \ \ \Gamma^ \epsilon \,, \end{alignat} \end{subequations} so that using the same proof as that of Lemma \ref{lemma1}, we have the following $ \epsilon $-independent elliptic estimate: \begin{equation}\label{p0-est} \|p_0^ \epsilon \|_{2, \Omega^ \epsilon } \le C \left[ \|u_0^ \epsilon \|_{3,\Omega^ \epsilon}+ \|u_0^ \epsilon \|^2_{3,\Omega^ \epsilon } \right] \le C \, \mathcal{P} ( m_0) \,, \end{equation} where we use $\mathcal{P} $ to denote a generic polynomial function that depends only on $\Omega$. \section{A priori estimates}\label{section7} Let $\Omega^ \epsilon $ denote the dinosaur domain shown in Figure \ref{fig_dino}, and let $\theta_l$ denote the system of local charts for $\Omega^ \epsilon $ as defined in (\ref{normalchart}). By denoting $\eta_l = \eta \circ \theta_l$ we see that $$ \eta_l(t): B^+ \to \Omega(t) \ \text{ for } \ \ l=1,...,K \,. $$ We set $v_l = u \circ \eta_l$, $q_l = p \circ \eta_l$, $A_l= [ D \eta_l ] ^{-1} $, and $J_l = C_l$ (where $C_l>0$ is a constant), and $a_l = J_l A_l$. The unit normal $n_l$ is defined as $ \mathfrak{g} ^ {-\frac{1}{2}} \frac{\partial \eta_l}{\partial x_1} \times \frac{\partial \eta_l}{\partial x_2} $ if $d=3$ and by $\mathfrak{g} ^ {-\frac{1}{2}}{ \frac{\partial \eta_l}{\partial x_1}}^ \perp$ if $d=2$.
It follows that for $l=1,...,K$, \begin{subequations} \label{localNS} \begin{alignat}{2} \eta_l(t) &= \theta_l + \int_0^t v_l\ \ && \text{ in } B^+ \times [0,T] \,, \label{localNS.a0} \\ \partial_t v_l + A_l^T D q_l &= \Delta _ {\eta_l} v_l \ \ && \text{ in } B^+ \times (0,T] \,, \label{localNS.a} \\ \operatorname{div} _{\eta_l} v_l &=0 \ \ && \text{ in } B^+ \times [0,T] \,,\label{localNS.b} \\ \nu \operatorname{Def} _ {\eta_l} v_l \cdot n_l - q_l \, n_l &=0 \ \ && \text{ on } B^0 \times [0,T] \,,\label{localNS.c} \\ (\eta_l,v_l) &=(\theta_l,u_0 \circ \theta_l ) \ \ \ \ && \text{ in } B^+ \times \{t=0\} \,, \label{localNS.e} \end{alignat} \end{subequations} where we have set $\nu=1$. \begin{definition}[Higher-order energy function] For each $t\in[0,T]$, we define the higher-order energy function \begin{align*} E^\epsilon (t) & = 1+ \| \eta( \cdot ,t)\|_{3, \Omega^ \epsilon }^2 + \| v ( \cdot ,t)\|_{2, \Omega^ \epsilon }^2 + \int_0^t \| v ( \cdot ,s)\|_{3, \Omega^ \epsilon }^2 ds + \int_0^t \| q( \cdot ,s)\|_{2, \Omega^ \epsilon }^2 ds \\ & \qquad + \| v_t( \cdot ,t)\|_{0, \Omega^ \epsilon }^2 + \int_0^t \| v_t ( \cdot ,s)\|_{1, \Omega^ \epsilon }^2 ds \end{align*} We then set $M_0 = \mathcal{P} ( E^ \epsilon (0))$ where $\mathcal{P} $ denotes a generic polynomial whose coefficients depend only on $\Omega$. The constant $M_0$ is then equal to $\mathcal{P} (m_0)$, a polynomial function of the constant $m_0$ introduced in (\ref{u0-est}). \end{definition} \begin{theorem}\label{prop1} Assuming that $\Gamma(t)$ does not self-intersect, independent of $ \epsilon >0$, there exists a time $T>0$ and a constant $C>0$ such that the solution \begin{align*} v \in C([0,T], H^2( \Omega^ \epsilon )) \cap L^2(0,T; H^3(\Omega^ \epsilon )) \,, \ \ q \in L^2(0,T; H^2(\Omega^ \epsilon )) \end{align*} to (\ref{NSlag}) satisfies the a priori estimate: \begin{align} \max_{t\in [0,T]} E^ \epsilon (t) \le C\, M_0 \,. 
\label{main-est} \end{align} \end{theorem} \begin{proof} The proof will proceed in five steps. \vspace{.1 in} \noindent {\bf Step 1. Estimates for $ D \eta$ and $A$.} Using (\ref{localNS}a), we see that \begin{equation}\label{t-est} \| D \eta (\cdot , t)- \operatorname{Id} \|_{ 2, \Omega^ \epsilon } \le \left\| \int_0^t D v( \cdot ,s)ds \right \|_{ 2, \Omega^ \epsilon } \le \sqrt{t} \sup_{s \in [0,t]} \sqrt{E^ \epsilon (s)}\,. \end{equation} Thanks to Lemma \ref{lemma2}, there exists a constant $C>0$, independent of $ \epsilon $, such that \begin{equation}\label{est-eta} \| D \eta (\cdot , t)- \operatorname{Id} \|_{ L^\infty(\Omega^ \epsilon )} \le C \sqrt{t} \sup_{s \in [0,t]} \sqrt{E^ \epsilon (s)}\,. \end{equation} Since $\det D \eta=1$, the matrix $A$ is simply the cofactor matrix of $ D \eta$: \begin{equation}\label{def-A} A= \left[ \begin{matrix} - {\bf \eta},_2^\perp \\ {\bf \eta},_1^\perp \end{matrix} \right] \text{ for } d=2, \text{ and } A= \left[ \begin{matrix} {\bf \eta,_2 \times \eta,_3}\\ {\bf \eta,_3 \times \eta,_1}\\ {\bf \eta,_1 \times \eta,_2} \end{matrix} \right] \text{ for } d=3\,, \end{equation} where each row is a vector, and for a $2$-vector $x=(x_1,x_2)$, $x^\perp=(-x_2, x_1)$. We make the following basic assumption, that we shall verify below in Step 5: for a constant $ 0 < \vartheta \ll 1$, we suppose that $t \in[0,T]$ and that $T$ is chosen sufficiently small so that \begin{equation}\label{basic} \sup_{t \in [0,T]} \| D \eta (\cdot , t)- \operatorname{Id} \|_{ L^\infty(\Omega^ \epsilon )} \le \vartheta^{10} \,. \end{equation} It follows from (\ref{def-A}) that, since $\|A ( \cdot , t) - \operatorname{Id} \|_{L^\infty (\Omega^ \epsilon )} \le \int_0^t \|A_t ( \cdot ,s) \|_{L^\infty (\Omega^ \epsilon )} ds$, \begin{equation}\label{est-A1} \sup_{t \in [0,T]} \|A ( \cdot , t) - \operatorname{Id} \|_{L^\infty (\Omega^ \epsilon )} + \|A A^T( \cdot , t) - \operatorname{Id} \|_{L^\infty (\Omega^ \epsilon )} \le \vartheta\,.
\end{equation} \vspace{.1 in} \noindent {\bf Step 2. Boundary regularity.} We begin by considering a single boundary chart, with $\eta_l( \cdot ,t) : B^+ \to \Omega(t)$. Let $\zeta_l$ denote the smooth cut-off function defined in Section \ref{sec::partition}. Using equation (\ref{localNS}b), we compute the following $L^2(B^+)$ inner-product: \begin{equation}\label{cs0} \left( \zeta_l \bar \partial^2 [ \partial_t v_l - \Delta _ {\eta_l} v_l + A_l^T \, D q_l ] \ , \ \zeta_l \bar \partial^2 v_l \right)_{L^2(B^+)} =0 \,. \end{equation} To simplify the notation, we fix $l \in \{1,...,K\}$ and drop the subscript. The chart $\theta_l$ was defined so that $\det D \theta_l=C_l$ for a constant $C_l>0$. Then (\ref{cs0}) can be written as \begin{equation}\label{cs8} \int_{B^+} \zeta ^2 \bar \partial^2 v_t^i \, \bar\partial^2 v^i \, dx - \int_{B^+} \zeta ^2 \bar\partial^2 [ A^k_s A^j_s v ^i,_j],_k \, \bar\partial^2 v^i \, dx + \int_{B^+} \zeta ^2 \bar\partial^2 [A^k_i q],_k \, \bar \partial^2 v^i \, dx =0 \,. \end{equation} Integration by parts with respect to $x_k$ shows that \begin{align} \label{cs1} 0 = {\frac{1}{2}} \frac{d}{dt} \| \zeta \bar \partial^2 v(t)\|^2_{0,B^+} + \int_{B^+} \bar \partial^2 [ A^k_s A^j_s v^i,_j]\, \bar\partial^2[ \zeta ^2 v^i],_k dx + \int_{B^+} \bar \partial^2 [A^k_i q]\, \bar\partial^2[ \zeta ^2 v^i],_k dx \,, \end{align} where we have used the boundary condition (\ref{localNS}d) to show that the boundary integral vanishes.
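The error terms $ \mathcal{I} _1$, $ \mathcal{I} _2$, and $ \mathcal{I} _3$ introduced below are all handled by the same absorption scheme, which we record once: by the Cauchy--Young inequality $ab \le \delta a^2 + C_\delta b^2$, for $\delta>0$,
$$
\int_0^T \|f(t)\|\, \|g(t)\|\, dt \le \delta \int_0^T \|f(t)\|^2 \, dt + C_ \delta \int_0^T \|g(t)\|^2 \, dt \,,
$$
where the first term on the right-hand side is absorbed into $\delta \sup_{t \in [0,T]} E^ \epsilon (t)$ whenever $\int_0^T \|f\|^2 dt$ is controlled by the energy, and the second term contributes $C_ \delta \, T\, P( \sup_{t \in [0,T]} E^ \epsilon (t))$ whenever $\|g(t)\|^2 \le P( E^ \epsilon (t))$ pointwise in time.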
Using $ \delta ^{jk}$ to denote the Kronecker delta function, we write (\ref{cs1}) as \begin{align} &{\frac{1}{2}} \frac{d}{dt} \| \zeta \bar \partial^2 v(\cdot , t)\|^2_{0,B^+} + \| \zeta \bar\partial^2 D v(t) \|^2_{0,B^+} = - \int_{B^+} \bar \partial^2 [A^k_i q]\, \bar\partial^2[ \zeta ^2 v^i],_k dx \nonumber \\ & \qquad - \int_{B^+} \bar \partial^2 [ (A^k_s A^j_s - \delta^{kj}) v^i,_j]\, \bar\partial^2[ \zeta ^2 v^i],_k dx - \int_{B^+} \left[ \bar \partial^2 v^i,_k\, (\bar\partial^2 \zeta ^2 v^i + 2 \bar \partial \zeta^2 \bar \partial v^i),_k + (\zeta^2),_k \bar \partial^2 v^i \right] dx \,. \label{cs2} \end{align} We integrate (\ref{cs2}) over the time interval $[0,T]$: \begin{align} {\frac{1}{2}} \| \zeta \bar \partial^2 v(\cdot , t)\|^2_{0,B^+} + \int_0^T \| \zeta \bar\partial^2 v(t) \|^2_{1,B^+} dt \le M_0 +\mathcal{I} _1 + \mathcal{I} _2 +\mathcal{I} _3 \label{cs7} \end{align} where \begin{align*} \mathcal{I} _1 & = \int_0^T \int_{B^+} \left| \bar \partial^2 [A^k_i q]\, \bar\partial^2[ \zeta ^2 v^i],_k\right| dx dt \,, \\ \mathcal{I} _2 & =\int_0^T \int_{B^+} \left| \bar \partial^2 [ (A^k_s A^j_s - \delta^{kj}) v^i,_j]\, \bar\partial^2[ \zeta ^2 v^i],_k \right| dxdt\,, \\ \mathcal{I} _3 & =\int_0^T \int_{B^+} \left| \bar \partial^2 v^i,_k\, [ \bar\partial^2 \zeta ^2 v^i + 2 \bar \partial \zeta^2 \bar \partial v^i],_k + (\zeta^2),_k \bar \partial^2 v^i \right| dxdt \,. \end{align*} Using the Sobolev embedding theorem and Lemma \ref{lemma2}, we estimate $ \mathcal{I} _1$ as follows: \begin{align*} \mathcal{I} _1 & \le \underbrace{\int_0^T \int_{B^+} | \bar \partial^2 q| \, | A^k_i \bar \partial^2 v^i,_k| \, dxdt}_{\mathcal{I} _1^a} + \underbrace{ \int_0^T \| q\|_{2, \Omega^ \epsilon } \| A\|_{2, \Omega^ \epsilon} \|v\|_{2, \Omega ^ \epsilon } dt}_{ \mathcal{I} _1^b} \\ & \qquad \qquad + \underbrace{\int_0^T \| q\|_{1.5, \Omega^ \epsilon } \| A\|_{2, \Omega^ \epsilon} \|v\|_{3, \Omega ^ \epsilon } dt}_{ \mathcal{I} _1^c} \,.
\end{align*} To estimate the integral $ \mathcal{I} _1^a$, we use (\ref{localNS}c) to write $$ v^i,_{ k \alpha \beta } A^k_i = - A^k_i,_{ \alpha \beta } v^i,_k - A^k_i,_ \beta v^i,_{k \alpha } - A^k_i,_ \alpha v^i,_{k \beta } \,, $$ so that the term with three derivatives on $v$ is converted to a term with three derivatives on $\eta$ plus lower-order terms. It follows that for $ \delta >0$, and a constant $C_ \delta $ (which blows up as $ \delta \to 0$), $$ \mathcal{I} _1^a \le \delta \int_0^T \|q\|^2_{2, \Omega ^ \epsilon } dt + C_ \delta T P( \sup_{t \in [0,T]} E^ \epsilon (t)) \,. $$ The integral $ \mathcal{I} _1^b$ is estimated in the same way. For the integral $ \mathcal{I} _1^c$, we use linear interpolation to estimate $\int_0^T \| q\|_{1.5, \Omega^ \epsilon } \, dt$: \begin{align*} \mathcal{I} _1^c & \le \delta \int_0^T \|v\|^2_{3, \Omega ^ \epsilon } dt + \delta \int_0^T \|q\|^2_{2, \Omega ^ \epsilon } dt + C_ \delta T P( \sup_{t \in [0,T]} E^ \epsilon (t)) \,. \end{align*} It follows that \begin{equation}\label{cs3} \mathcal{I} _1 \le M_0 + C_ \delta T P( \sup_{t \in [0,T]} E^ \epsilon (t)) + \delta \sup_{t \in [0,T]} E^ \epsilon (t) \,. \end{equation} Next, for the integral $ \mathcal{I} _2$, \begin{align*} \mathcal{I} _2 & \le \underbrace{\int_0^T \int_{B^+} \left| (A^k_s A^j_s - \delta^{kj}) \bar \partial^2 v^i,_j\, \bar\partial^2[ \zeta ^2 v^i],_k \right| dxdt}_{ \mathcal{I} _2^a} + \underbrace{2 \int_0^T \int_{B^+} \left| \bar \partial (A^k_s A^j_s - \delta^{kj}) \bar \partial v^i,_j\, \bar\partial^2[ \zeta ^2 v^i],_k \right| dxdt}_{ \mathcal{I} _2^b} \\ & \qquad + \underbrace{\int_0^T \int_{B^+} \left| \bar \partial^2 (A^k_s A^j_s - \delta^{kj}) v^i,_j\, \bar\partial^2[ \zeta ^2 v^i],_k \right| dxdt }_{ \mathcal{I} _2^c} \,. \end{align*} Using (\ref{est-A1}) and choosing $\vartheta < \delta $, $$ \mathcal{I} _2^a \le C_ \delta T P( \sup_{t \in [0,T]} E^ \epsilon (t)) + \delta \sup_{t \in [0,T]} E^ \epsilon (t) \,.
$$ In the same way as above, we again use Lemma \ref{lemma2}, together with linear interpolation for the term $ \mathcal{I} _2^b$, to see that \begin{equation}\label{cs4} \mathcal{I} _2 \le M_0 + C_ \delta T P( \sup_{t \in [0,T]} E^ \epsilon (t)) + \delta \sup_{t \in [0,T]} E^ \epsilon (t) \,. \end{equation} The integral $ \mathcal{I} _3$ is straightforward and also satisfies \begin{equation}\label{cs5} \mathcal{I} _3 \le M_0 + C_ \delta T P( \sup_{t \in [0,T]} E^ \epsilon (t)) + C\delta \sup_{t \in [0,T]} E^ \epsilon (t) \,. \end{equation} Summing over all of the boundary charts $l=1,...,K$ in (\ref{cs7}), the inequalities (\ref{cs3})--(\ref{cs5}) together with the trace theorem, Lemma \ref{lemma3}, show that \begin{equation}\label{cs6} \int_0^T \| v(\cdot , t) \|^2_{2.5, \Gamma^ \epsilon } dt \le M_0 + C_ \delta T P( \sup_{t \in [0,T]} E^ \epsilon (t)) + \delta \sup_{t \in [0,T]} E^ \epsilon (t) \,. \end{equation} \noindent {\bf Step 3. Estimates for the time-differentiated problem.} We consider the time-differentiated version of (\ref{NSlag}), which we write as the following system: \begin{subequations} \label{NSlagt} \begin{alignat}{2} \eta_t & = v \ && \text{ in } \Omega^ \epsilon \times [0,T] \,, \\ v_{tt} - \Delta _ \eta v_t + A^T D q_t &= - A^T_t D q + [ \partial_t( A^j_s A^k_s) v,_k],_j \ \ && \text{ in } \Omega^ \epsilon \times (0,T] \,, \\ \operatorname{div} _\eta v_t &=-v^i,_j \partial_t A^j_i \ \ && \text{ in } \Omega^\epsilon \times [0,T] \,, \\ \partial_t\left[ \operatorname{Def} _ \eta v \cdot n - qn\right] &=0\ \ && \text{ on } \Gamma^ \epsilon \times [0,T] \,, \\ (\eta,v,v_t) &=(e,u_0^ \epsilon ,u_1^ \epsilon ) \ \ \ \ && \text{ in } \Omega^\epsilon \times \{t=0\} \,, \end{alignat} \end{subequations} where $u_1^ \epsilon = \Delta u_0^ \epsilon - D p_0^ \epsilon $, with $u_0^ \epsilon $ defined in (\ref{Stokes2}) and $p_0^ \epsilon $ defined in (\ref{p0}); therefore, independently of $ \epsilon >0$, \begin{equation}\label{u1} \| u_1^ \epsilon \|_{0,
\Omega^\epsilon } \le \mathcal{P} (m_0) \,. \end{equation} We define the space of $ \operatorname{div} _\eta$-free vector fields on $\Omega^\epsilon $ as $$ \mathcal{V} (t) = \{ \phi \in H^1(\Omega^\epsilon ; \mathbb{R} ^d) \ : \ \operatorname{div} _{\eta( \cdot , t)} \phi =0 \}\,. $$ Taking the $ L^2(\Omega^ \epsilon ) $ inner-product of equation (\ref{NSlagt}b) with a test function $\phi \in \mathcal{V} (t)$, we have that \begin{equation}\label{weak} \int_{ \Omega^\epsilon } v_{tt} \cdot \phi dx + \int_{\Omega ^ \epsilon } \partial_t [A^k_s A^j_s v^i,_j] \, \phi^i,_k dx =\textcolor{black}{ \int_{ \Omega^ \epsilon } q\, \partial_t A^k_i \phi^i,_k \, dx } \ \ \forall \phi \in \mathcal{V} (t) \,. \end{equation} Next, we define a vector field $w$ satisfying \begin{subequations} \label{w} \begin{alignat}{2} \operatorname{div} _ \eta w &= -v^i,_j \partial_t A^j_i \ \ \ &&\text{in} \ \ \Omega^ \epsilon \,,\\ w &= \phi(t) n \ \ &&\text{on} \ \ \Gamma^ \epsilon \,, \end{alignat} \end{subequations} where $\phi(t) = \int_{ \Omega^\epsilon } v^i,_j \, \partial_t A^j_i \, dx / | \Gamma^ \epsilon |$. A solution $w$ can be found by solving a Stokes-type problem, and according to the proof of Lemma 3.2 in \cite{ChSh2010}, for integers $k\ge 1$, \begin{equation}\label{est-w} \| w( \cdot , t) \|_{k, \Omega^\epsilon } \le C\left( \| v^i,_j (\cdot , t) \, \partial_t A^j_i(\cdot , t) \|_{k-1, \Omega^\epsilon } + \| \phi(t) n \|_{k-1/2, \Gamma^\epsilon }\right) \,, \end{equation} where the constant $C$ is independent of $ \epsilon $ by Lemma \ref{lemma1}. It follows from (\ref{est-w}) and (\ref{def-A}) that \begin{equation}\label{est-w2} \sup_{t \in [0,T]} \| w( \cdot , t) \|_{2, \Omega^\epsilon } + \int_0^T \| w( \cdot , t) \|_{3, \Omega^\epsilon } ^2 \, dt \le T P( \sup_{t \in [0,T]} E^ \epsilon (t)) \,.
\end{equation} Similarly, \begin{subequations} \label{wt} \begin{alignat}{2} \operatorname{div} _ \eta w_t &= -\left(w^i,_j \partial_t A^j_i + \partial_t(v^i,_j \partial_t A^j_i )\right) \ \ \ &&\text{in} \ \ \Omega^ \epsilon \,,\\ w_t &= \left( \phi\, n \right) _t \ \ &&\text{on} \ \ \Gamma^ \epsilon \,, \end{alignat} \end{subequations} and hence \begin{equation}\nonumber \| w_t\|_{1, \Omega^\epsilon } \le C\left( \| w^i,_j \partial_t A^j_i + \partial_t(v^i,_j \partial_t A^j_i)\|_{0, \Omega^\epsilon } + \| (\phi\, n)_t\|_{1/2, \Gamma^\epsilon } \right) \,, \end{equation} so that \begin{equation}\label{est-wt} \int_0^T \| w_t\|_{1, \Omega^\epsilon }^2 \, dt \le T P( \sup_{t \in [0,T]} E^ \epsilon (t)) \,. \end{equation} Now, because of (\ref{NSlagt}c) and (\ref{w}a), $v_t -w \in \mathcal{V} (t)$, and we are allowed to set $\phi = v_t -w$ in (\ref{weak}). We find that \begin{align*} {\frac{1}{2}} \frac{d}{dt} \| v_t ( \cdot ,t) \|^2_{0, \Omega^\epsilon }+ \int_{\Omega ^ \epsilon } \partial_t [A^k_s A^j_s v^i,_j]\, v_t^i,_k dx &= \int_{ \Omega^\epsilon } v_{tt} \cdot w dx + \int_{\Omega^\epsilon } \partial_t (A^k_s A^j_s v^i,_j) \, w^i,_k dx \\ & \qquad \qquad + \int_{ \Omega^ \epsilon } q\, \partial_t A^k_i \left[v_t^i,_k + w^i,_k \right] \, dx \,.
\end{align*} Hence, for $t \in (0,T)$, \begin{align*} & {\frac{1}{2}} \| v_t ( \cdot ,t) \|^2_{0, \Omega^\epsilon }+ \int_0^t\| D v_t\|^2_{0, \Omega^\epsilon } ds = {\frac{1}{2}} \|u_1^ \epsilon \|^2_{0, \Omega^\epsilon } \overbrace{- \int_0^t \int_{\Omega ^ \epsilon } [A^k_s A^j_s - \delta ^{kj}] v_t^i,_j\, v_t^i,_k dx ds}^{ \mathcal{J} _1}\\ & \qquad \underbrace{- \int_0^t \int_{\Omega ^ \epsilon } \partial_t [A^k_s A^j_s ] v^i,_j v_t^i,_k dxds}_{ \mathcal{J} _2} + \underbrace{\int_0^t \int_{ \Omega^\epsilon } v_{tt} \cdot w dxds}_{ \mathcal{J} _3} + \underbrace{ \int_0^t \int_{\Omega^\epsilon } \partial_t [A^k_s A^j_s v^i,_j]\, w^i,_k dxds}_{ \mathcal{J} _4}\\ & \qquad + \underbrace{ \int_0^t \int_{ \Omega^ \epsilon } q\, \partial_t A^k_i \left[v_t^i,_k + w^i,_k \right] \, dxds}_{ \mathcal{J} _5} \,. \end{align*} For $ \delta >0$ and using (\ref{est-A1}) with $\vartheta< \delta $, we see that \begin{equation}\label{csj1} | \mathcal{J} _1| \le \delta \sup_{t \in [0,T]} E^ \epsilon (t) \,. \end{equation} Next, according to (\ref{def-A}), the components of $A$ are either linear ($d=2$) or quadratic ($d=3$) with respect to the components of $ D \eta$; hence, $\partial_t A $ behaves like $ D v$ for $d=2$ and like $ D \eta \, D v$ for $d=3$. We consider the more difficult case $d=3$, in which case $\partial_t (A A^T)$ behaves like $ D \eta \, D \eta \, D \eta \, D v$. It follows by the Cauchy-Young inequality that for $ \delta >0$, we have that \begin{equation}\label{csj2} | \mathcal{J}_2| \le M_0 + T \mathcal{P} (\sup_{t \in [0,T]} E^ \epsilon (t)) + \delta \sup_{t \in [0,T]} E^ \epsilon (t) \,. \end{equation} To estimate $\mathcal{J} _3$, we integrate by parts in time: \begin{align} |\mathcal{J} _3| & \le \int_0^t \int_{ \Omega^\epsilon } |v_{t} \cdot w_t | dxds + \left| \left.
\int_{ \Omega^\epsilon } v_{t} \cdot w dx\right|^t_0 \right| \nonumber \\ & \le \int_0^t \int_{ \Omega^\epsilon } |v_{t} \cdot w_t | dxds + M_0 + \int_{ \Omega^\epsilon } | v_t ( \cdot ,t) w( \cdot ,0)| dx + \int_{ \Omega^\epsilon } \left| v_t ( \cdot ,t) \int_0^t w_t(\cdot ,s)ds\right| dx \nonumber \\ & \le M_0 + T \mathcal{P} (\sup_{t \in [0,T]} E^ \epsilon (t)) + \delta \sup_{t \in [0,T]} E^ \epsilon (t) \,, \label{csj3} \end{align} the last inequality following from the Cauchy-Young inequality and the estimates (\ref{est-w}) and (\ref{est-wt}). The integrals $ \mathcal{J} _4$ and $ \mathcal{J} _5$ (using (\ref{est-w2}) and (\ref{est-wt})) are estimated in the same way as $ \mathcal{J} _2$ so that \begin{equation}\label{csj4} | \mathcal{J}_4| + | \mathcal{J}_5| \le M_0 + T \mathcal{P} (\sup_{t \in [0,T]} E^ \epsilon (t)) + \delta \sup_{t \in [0,T]} E^ \epsilon (t) \,. \end{equation} Combining the estimates (\ref{csj1})--(\ref{csj4}), we find that \begin{equation}\label{cs10} \sup_{t \in [0,T]} \| v_t ( \cdot ,t) \|^2_{0, \Omega^\epsilon }+ \int_0^T\| v_t\|^2_{1, \Omega^\epsilon } dt \le M_0 + T \mathcal{P} (\sup_{t \in [0,T]} E^ \epsilon (t)) + C\delta \sup_{t \in [0,T]} E^ \epsilon (t) \,. \end{equation} \vspace{.1 in} \noindent {\bf Step 4. 
Regularity for the velocity and pressure.} Next, we write equation (\ref{NSlag}b) as \begin{subequations} \label{stokes_for_v} \begin{alignat}{2} -\Delta v + D q & = \operatorname{div} [ (AA^T - \operatorname{Id} ) D v] - (A^T -\operatorname{Id} ) D q-v_t \ \ && \text{ in } \Omega^ \epsilon \times (0,T] \,, \label{stokes_for_v.a} \\ \operatorname{div} v &= - (A^j_i - \delta ^j_i) v^i,_j \ \ && \text{ in } \Omega^\epsilon \times [0,T] \,, \\ v & \in L^2(0,T; H^{2.5}(\Gamma^ \epsilon )) \,. && \end{alignat} \end{subequations} The two inequalities (\ref{cs6}) and (\ref{cs10}), together with the Stokes regularity given in Lemma \ref{lemma1}, show that $v \in L^\infty([0,T]; H^2(\Omega^ \epsilon ) ) \cap L^2(0,T; H^3( \Omega^\epsilon ))$ and that it satisfies \begin{equation}\label{cs11} \sup_{t \in [0,T]} \| v ( \cdot ,t) \|^2_{2, \Omega^\epsilon }+ \int_0^T\| v\|^2_{3, \Omega^\epsilon } dt + \int_0^T\| q\|^2_{2, \Omega^\epsilon } dt \le M_0 + T \mathcal{P} (\sup_{t \in [0,T]} E^ \epsilon (t)) + C\delta \sup_{t \in [0,T]} E^ \epsilon (t) \,. \end{equation} By choosing $ \delta >0$ sufficiently small, we obtain that \begin{equation} \label{cs12} \sup_{t \in [0,T]} E^ \epsilon (t) \le M_0 + T \mathcal{P} (\sup_{t \in [0,T]} E^ \epsilon (t))\,, \end{equation} for a constant $M_0$ and a polynomial function $ \mathcal{P} $ which are both independent of $ \epsilon $. From the estimate (\ref{cs11}), $v\in L^2(0,T; H^3(\Omega^ \epsilon ) )$, and from the estimate (\ref{cs10}), $v_t\in L^2(0,T; H^1(\Omega^ \epsilon ) )$. Using the partition of unity functions $\zeta_l$ defined in Step 2 above, we then see that for each chart, $\zeta_l v\in L^2(0,T; H^3(\mathcal{B}_l ) )$, where $ \mathcal{B}_l = B^+ $ for $l=1,...,K$, and $ \mathcal{B}_l = B$ for $l=K+1,...,L$. Similarly, $\zeta_l v_t\in L^2(0,T; H^1(\mathcal{B}_l ) )$. It is then standard that $\zeta_l v\in C^0([0,T]; H^2( \mathcal{B} _l ) )$, and hence by summing over $l=1,...,L$, $ v\in C^0([0,T]; H^2( \Omega^\epsilon ) )$.
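For completeness, the standard fact being invoked is the following interpolation result (stated schematically, on a smooth bounded domain; see, for example, Lions--Magenes): if $u \in L^2(0,T; H^{s+1}( \mathcal{B} ))$ and $u_t \in L^2(0,T; H^{s-1}( \mathcal{B} ))$, then $u \in C^0([0,T]; H^{s}( \mathcal{B} ))$, with \begin{equation*} \max_{t \in [0,T]} \| u( \cdot ,t)\|_{s, \mathcal{B} } \le C \left( \| u\|_{L^2(0,T; H^{s+1}( \mathcal{B} ))} + \| u_t\|_{L^2(0,T; H^{s-1}( \mathcal{B} ))} \right) \,; \end{equation*} here it is applied with $s=2$ and $u = \zeta_l v$ on each $\mathcal{B}_l$.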
Since the pressure satisfies the elliptic system: \begin{alignat*}{2} -\Delta _\eta q & = v^i,_rA^r_j v^j,_sA^s_i\ \ && \text{ in } \Omega^ \epsilon \times (0,T] \,, \\ q &= n\cdot \left[ \text{Def}_\eta v\cdot n\right] \ \ && \text{ on } \Gamma^\epsilon \times [0,T] \,, \end{alignat*} we then infer that $q\in C^0([0,T]; H^1(\Omega^ \epsilon ) )$. Then, using the momentum equation (\ref{stokes_for_v.a}), it follows that $v_t\in C^0([0,T]; L^2(\Omega^ \epsilon ) )$. This then shows that $E^ \epsilon(t)$ is a continuous function of time. Following Section 9 in \cite{CoSh2006}, from (\ref{cs12}), we may now choose $T>0$ sufficiently small and independent of $ \epsilon $, such that \begin{equation}\label{apriori-est} \sup_{t \in [0,T]} E^ \epsilon (t) \le 2M_0 \,. \end{equation} \vspace{.1 in} \noindent {\bf Step 5. Verifying the basic assumption (\ref{basic}).} Having established (\ref{apriori-est}) on $[0,T]$ with $T$ independent of $ \epsilon $, for any $\varepsilon>0$, we may now use the formula (\ref{t-est}) to choose $T$ even smaller if necessary to ensure that (\ref{basic}) holds. This concludes the proof. \end{proof} We now establish a more quantitative estimate in order to assess the continuity of $\bar \partial^2 v(t , \cdot )$ in $ L^2( \Omega^\epsilon ) $. \begin{proposition} \label{prop2} For all $t\in [0,T]$, \begin{equation}\label{est-main2} \| \bar \partial^2 ( v^ \epsilon ( \cdot ,t) - u_0^ \epsilon )\|_{0, \Omega ^ \epsilon }^2 + \int_0^t \| \bar \partial^2 ( v^ \epsilon(\cdot ,s) - u_0^ \epsilon )\|_{1, \Omega ^ \epsilon }^2 ds \lesssim t^{1/2} \mathcal{P} (M_0) \,. \end{equation} \end{proposition} \begin{proof} We write $v(t) = v( \cdot ,t)$ and again set the viscosity $\nu=1$. The difference $v(t) - u_0^\epsilon$ satisfies the equation \begin{equation}\nonumber (v-u_0^\epsilon)_t - \Delta _ \eta( v- u_0^\epsilon) + A^T D q = \Delta _ \eta u_0^\epsilon \,.
\end{equation} Following Step 2 in the proof of Theorem \ref{prop1}, and once again localizing to a boundary chart $\theta_l$, $l=1,...,K$, with $ \det D \theta_l=C_l$ and with cut-off functions $\zeta_l$, we obtain that \begin{align} 0 &= {\frac{1}{2}} \frac{d}{dt} \| \zeta \bar \partial^2 [v(t)-u_0^\epsilon]\|^2_{0,B^+} + \int_{B^+} \bar \partial^2 [ A^k_s A^j_s (v-u_0^\epsilon),_j] \cdot \bar\partial^2[ \zeta ^2 (v-u_0^\epsilon)],_k dx \nonumber \\ &\qquad \qquad\qquad + \int_{B^+} \bar \partial^2 [A^k_i q]\, \bar\partial^2[ \zeta ^2 v^i],_k dx + \int_{B^+} \bar \partial^2 [ A^k_s A^j_s {u_0^\epsilon},_j] \cdot \bar\partial^2[ \zeta ^2 (v-u_0^\epsilon)],_k dx\,, \label{cs101} \end{align} where we have dropped the explicit chart dependence on $l$ and where again, the boundary integral terms have vanished due to (\ref{NSlag}d). We integrate (\ref{cs101}) over the time interval $[0,T]$: \begin{align*} \| \zeta \bar \partial^2 [v(t)- u_0^\epsilon]\|^2_{0,B^+} + \int_0^T \| \zeta \bar\partial^2 [v(t)-u_0^\epsilon] \|^2_{1,B^+} \le |\mathcal{K} _1| + |\mathcal{K} _2| + |\mathcal{K} _3| + |\mathcal{K} _4| \,, \end{align*} where we are writing $u_0^\epsilon$ for $u_0^\epsilon \circ \theta_l$, and where \begin{align*} \mathcal{K} _1 & = \int_0^T \int_{B^+} \bar \partial^2 [A^k_i q]\, \bar\partial^2[ \zeta ^2 (v-u_0^\epsilon)^i],_k dx dt \,, \\ \mathcal{K} _2 & =\int_0^T \int_{B^+} \bar \partial^2 [ (A^k_s A^j_s - \delta^{kj}) (v-u_0^\epsilon),_j] \cdot \bar\partial^2[ \zeta ^2 (v-u_0^\epsilon)],_k dxdt\,, \\ \mathcal{K} _3 & =\int_0^T \int_{B^+} \bar \partial^2 (v-u_0^\epsilon)^i,_k\, [\ [ \bar\partial^2 \zeta ^2 (v-u_0^\epsilon)^i + 2 \bar \partial \zeta^2 \bar \partial (v-u_0^\epsilon)^i],_k +\zeta^2,_k \bar\partial^2 v^i] dxdt \,, \\ \mathcal{K} _4 & =\int_0^T \int_{B^+} \bar \partial^2 [ A^k_s A^j_s {u_0^\epsilon},_j] \cdot \bar\partial^2[ \zeta ^2 (v-u_0^\epsilon)],_k dxdt \,.
\end{align*} We write \begin{align*} \mathcal{K} _1\le \underbrace{ \int_0^T \int_{B^+} \bar \partial^2 [A^k_i q]\, \bar\partial^2[ \zeta ^2 v^i],_k dx dt }_{\mathcal{K} _1^a} + \underbrace{\int_0^T \int_{B^+} \left| \bar \partial^2 [A^k_i q]\, \bar\partial^2[ \zeta ^2 {u_0^\epsilon}^i],_k\right| dx dt }_{ \mathcal{K} _1^b} \,. \end{align*} By (\ref{u0-est}) and (\ref{main-est}), we see that $$ | \mathcal{K} _1^b | \le \sqrt{T} \mathcal{P} (M_0) \,. $$ For the integral $ \mathcal{K} _1^a$, we focus on the integrand that arises when $\bar \partial^2$ acts on {\it both} $q$ and $v^i,_k$, since all other derivative combinations immediately give an integral bound of $ \sqrt{T} \mathcal{P} (M_0)$. Using the Lagrangian divergence-free condition (\ref{NSlag}c), \begin{align*} \left| \int_0^T \int_{B^+} \zeta^2 \bar \partial^2q \, A^k_i \bar \partial^2 v^i,_k dx dt \right| & \le \left| \int_0^T \int_{B^+} \zeta^2 \bar \partial^2q \, \bar \partial^2 A^k_i v^i,_k dx dt\right| \\ & \qquad \qquad \qquad + 2\left| \int_0^T \int_{B^+} \zeta^2 \bar \partial^2q \, \bar \partial A^k_i \bar \partial v^i,_k dx dt\right| \,. \end{align*} An application of the Cauchy-Young inequality together with the Sobolev embedding theorem shows that $$ | \mathcal{K} _1^a | \le \sqrt{T} \mathcal{P} (M_0) \,. $$ For the integral $ \mathcal{K} _2$, we consider the case that $ \bar \partial^2$ acts on $(A^k_s A^j_s - \delta^{kj}) $, all other terms immediately giving the desired bound. Using (\ref{t-est}) and (\ref{def-A}), $ \| A A^T - \operatorname{Id} \|_{ L^\infty(B^+)} \le \sqrt{T} \mathcal{P} (M)$, so that with (\ref{main-est}), $$ | \mathcal{K} _2 | \le \sqrt{T} \mathcal{P} (M_0) \,. $$ The integrals $ \mathcal{K} _3$ and $\mathcal{K} _4$ are easily estimated using the Cauchy-Young inequality, the Sobolev embedding theorem, and (\ref{main-est}).
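Each of these bounds follows the same schematic use of the Cauchy-Young inequality (stated here generically, with constants and initial-data contributions, absorbed in $M_0$, suppressed): writing a generic integrand as a product $a\, b$, where $a$ collects the lower-order factors, controlled pointwise in time by $\mathcal{P}(\sup_{[0,T]} E^\epsilon)$, and $b$ is the highest-order factor, controlled only in $L^2$ in time by the energy, we have, for $\delta>0$, \begin{equation*} \int_0^T a\, b\, dt \le \frac{1}{2 \delta } \int_0^T a^2\, dt + \frac{ \delta }{2} \int_0^T b^2\, dt \le \frac{T}{2 \delta }\, \mathcal{P} (\sup_{t \in [0,T]} E^ \epsilon (t)) + \delta \sup_{t \in [0,T]} E^ \epsilon (t) \,. \end{equation*}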
We have thus established that $$ \| \zeta \bar \partial^2 [v_l(t)- u_0^\epsilon]\|^2_{0,B^+} + \int_0^T \| \zeta \bar\partial^2 [v_l(t)- u_0^\epsilon] \|^2_{1,B^+} \le \sqrt{T} \mathcal{P} ( M_0) \,. $$ Summing over $l =1,...,K$ then concludes the proof. \end{proof} \section{Proof of the Main Theorem}\label{section8} Using the Lagrangian divergence condition (\ref{NSlag}c), we have that $ \operatorname{div} v = -(A^j_i -\delta ^j_i) v^i,_j$, which we write as $ \operatorname{div} v = -(A- \operatorname{Id} ) : D v$. Then, since $ \operatorname{div} u_0^\epsilon =0$, for all $t \in [0,T]$, \begin{equation}\label{div-est} \| \bar \partial \operatorname{div} (v -u_0^\epsilon) \|^2_{0, \Omega^\epsilon } \le \| \bar \partial (A- \operatorname{Id} ) \, D v \|^2_{0, \Omega^\epsilon } + \| (A- \operatorname{Id} ) \, \bar \partial D v \|^2_{0, \Omega^\epsilon } \le \sqrt{T} \mathcal{P} (M_0) \,. \end{equation} Using (\ref{div-est}) together with (\ref{est-main2}), the normal trace theorem (see, for example, (A.6) in \cite{CoSh2014a}) shows that $\bar \partial^2 (v - u_0^\epsilon) \cdot N_ \epsilon \in C([0,T]; H^{ - {\frac{1}{2}} }(\Gamma^ \epsilon ))$ and $$ \| \bar \partial^2 (v - u_0^\epsilon) \cdot N_ \epsilon\|^2_{ -1/2, \Gamma^ \epsilon } \le \sqrt{T} \mathcal{P} (M_0) \,, $$ so that \begin{equation}\nonumber \| (v - u_0^\epsilon) \cdot N_ \epsilon\|^2_{ 1.5, \Gamma^ \epsilon } \le \sqrt{T} \mathcal{P} (M_0) \,, \end{equation} and hence by Lemma \ref{lemma2}, \begin{equation}\label{cs200} \max_{x \in \Gamma^ \epsilon } | (v(x,t) - u_0^\epsilon(x)) \cdot N_ \epsilon (x)| \le T ^{\frac{1}{4}} \mathcal{P} (M_0) \ \ \forall t \in[0,T] \,. \end{equation} Next, we consider the motion of the points $X_+^ \epsilon $ and $X_-$ given in Section \ref{sec::u0} (see Figure \ref{fig_initialconditions}).
Recall that the unit normal $N_ \epsilon $ at both the points $X_+^ \epsilon = (0,0, \epsilon ) $ and $X_-=(0,0,0)$ is vertical, so by definition of $u_0^\epsilon$, we have that $$ u_0^\epsilon ( X_+^ \epsilon ) \cdot N_ \epsilon = -1 \,, \ \ u_0^\epsilon ( X_-)\cdot N_ \epsilon =0 \,, \ \ \text{ and } | X_+^ \epsilon - X_-|= \epsilon \,. $$ Using Theorem \ref{prop1}, we choose $\epsilon $ so small that $ 10 \epsilon < T$, where $[0,T]$ is the time interval of existence which is independent of $ \epsilon $, and we consider the vertical displacement of the falling particle $X_+^ \epsilon $. Since $X_+^ \epsilon \cdot e_d = \epsilon $, and $$\eta(X_+^ \epsilon ,t) \cdot e_d = \epsilon + \int_0^t v^d(X_+^ \epsilon , s)ds\,,$$ for $t= 10 \epsilon $, we have from (\ref{cs200}) that $v^d(X_+^ \epsilon , s) \le -1 + (10\epsilon)^{\frac{1}{4}} \mathcal{P} (M_0)$ for $s \in [0, 10\epsilon]$, and hence, for $\epsilon$ taken sufficiently small, $$ \eta^d(X_+^ \epsilon , 10 \epsilon ) \le -9 \epsilon + 10\epsilon\, (10\epsilon)^{\frac{1}{4}} \mathcal{P} (M_0) < -8 \epsilon \,. $$ Next, let $Z$ denote any point on $\partial \omega_- \cap \{x_d=0\}$. Since $u_0^\epsilon (Z) \cdot N_ \epsilon =0$ and $ \eta(Z, 10 \epsilon ) = Z + \int_0^{10\epsilon} v(Z,s)ds$ with $Z \cdot e_d = 0$, according to (\ref{cs200}), $$ \eta(Z, 10 \epsilon ) \cdot e_d \ge -c \epsilon^ {\frac{5}{4}} \,, \ \ c= 10^ {\frac{5}{4}} \mathcal{P} (M_0) \,. $$ We then choose $ \epsilon >0$ sufficiently small so that $c \epsilon^ {\frac{5}{4}} < 8\epsilon $. It follows that \begin{equation} \label{cs201} \eta(X_+^ \epsilon , 10 \epsilon ) \cdot e_d < \eta(Z, 10 \epsilon ) \cdot e_d \,. \end{equation} We next consider the horizontal displacement of the particle $X_+^\epsilon$ and of any particle $Z$ on $\partial \omega_- \cap \{x_d=0\}$ over the time interval $[0,10\epsilon]$. From the estimate (\ref{apriori-est}), for all time $t\in [0,10\epsilon]$, $\|v(\cdot,t)\|_{L^\infty(\Omega^\epsilon)}\le \mathcal{P}(M_0)$.
Therefore, for any $t\in [0,10\epsilon]$ and for $ \alpha =1,...,d-1$, $$|\eta^ \alpha (X_+^ \epsilon , t )| \le 10 \epsilon \mathcal{P}(M_0) \text{ and } | \eta^ \alpha (Z , t ) -Z^ \alpha | \le 10 \epsilon \mathcal{P}(M_0) \,, $$ showing that the distance between the projection of the surface $\eta(\partial \omega_- \cap \{x_d=0\}, t)$ onto the plane $x_d=0$ and the set $\partial \omega_- \cap \{x_d=0\}$ is $O( \epsilon )$. Since, by Definition \ref{def-dino-e}, the set $\partial \omega_- \cap \{x_d=0\}$ contains a $(d-1)$-dimensional ball of radius $ \sqrt{ \epsilon }$ centered at the origin, we see that by choosing $ \epsilon $ sufficiently small, the vertical line passing through $\eta(X_+^ \epsilon , t )$ must intersect the surface $\eta(\partial \omega_- \cap \{x_d=0\}, t)$ for any $t\in [0,10\epsilon]$. Now, since at $t=0$, $X_+^\epsilon$ is directly (vertically) above $\partial \omega_- \cap \{x_d=0\}$, and at $t= 10 \epsilon$, from (\ref{cs201}), $\eta(X_+^ \epsilon , 10 \epsilon )$ is (vertically) below $\eta(\partial \omega_- \cap \{x_d=0\},10\epsilon)$, by continuity there necessarily exists a time $0< T^* < 10 \epsilon $ at which $\eta(X_+^ \epsilon , T^* ) = \eta(Z , T^* )$ for some $Z \in \partial \omega_-\cap\{x_d=0\}$. This concludes the proof of the main theorem. \section{The case of a general self-intersection splash geometry}\label{section9} We now show how the analysis presented in the previous sections for the case of the ``dinosaur wave'' initial domain can be used to establish the existence of a splash singularity in a finite time $T^*$ for any domain whose boundary is arbitrarily close (in the $H^3$-norm) to any given self-intersecting surface of class $H^3$. This generalization requires the geometric constructions that we introduced in our previous work \cite{CoSh2014a}, coupled with a very minor adaptation of the analysis of the previous sections.
We begin with the definition of the splash domain that we gave in \cite{CoSh2014a}. \subsection{The definition of the splash domain}\label{subsec:splashdomain} \begin{enumerate} \item We suppose that $x_0 \in \Gamma_s:= \partial \Omega_s$ is the unique boundary self-intersection point, i.e., $\Omega_s$ is locally on each side of the tangent plane to $\Gamma_s$ at $x_0$. For all other boundary points, the domain is locally on one side of its boundary. Without loss of generality, we suppose that the tangent plane at $x_0$ is the horizontal plane $x_3-(x_0)_3=0$. \item We let $U_0$ denote an open neighborhood of $x_0$ in $ \mathbb{R} ^3$, and then choose an additional $L$ open sets $\{U_l\}_{l=1}^L$ such that the collection $\{U_l\}_{l=0}^K$ is an open cover of $\Gamma_s$, $\{U_l\}_{l=0}^L$ is an open cover of $\Omega_s$, and there exists a sufficiently small open subset $ \omega \subset U_0$ containing $x_0$ with the property that $$\overline\omega \cap \overline{U_l} = \emptyset \ \text{ for all } \ l=1,...,L \,.$$ We set \begin{align*} U_0^+ = U_0 \cap \Omega_s \cap \{ x_3 > (x_0)_3 \} \ \text{ and } U_0^- = U_0 \cap \Omega_s \cap \{ x_3 < (x_0)_3 \} \,. \end{align*} Additionally, we assume that $\overline{U_0}\cap\overline{\Omega_s}\cap\{x_3=(x_0)_3\}=\{x_0\}$, which implies in particular that $U_0^+$ and $U_0^-$ are connected. See Figure \ref{fig3}.
\begin{figure}[htbp] \begin{center} \includegraphics[scale = 0.4]{fig_u0.eps} \caption{Splash domain $\Omega_s$, and the collection of open sets $\{U_0,U_1,U_2,...,U_K\}$ covering $\Gamma_s$.} \label{fig3} \end{center} \end{figure} \item For each $l\in \{1,...,K\}$, there exists an $H^{3}$-class diffeomorphism $\theta_l$ satisfying \begin{gather} \theta_l : B:=B(0,1) \rightarrow U_l \nonumber \\ U_l \cap \Omega_s = \theta_l ( B^+ ) \ \text{ and } \ \overline{U_l} \cap \Gamma_s = \theta_l ( B^0 ) \,, \nonumber \end{gather} where \begin{align*} B^+ &=\{(x_1,x_2,x_3)\in B: x_3>0\} \,, \\ B^0 &=\{(x_1,x_2,x_3)\in \overline B: x_3=0\}\,. \end{align*} \item For $L > K$, let $\{U_l\}_{l=K+1}^{L}$ denote a family of open sets contained in $\Omega_s$ such that $\{U_l\}_{l=0}^{L}$ is an open cover of $\Omega_s$, and for $l\in \{K+1,...,L\}$, $\theta_l : B \to U_l$ is an $H^{3}$ diffeomorphism. \item To the open set $U_0$ we associate two $H^{3}$-class diffeomorphisms $\theta_+$ and $\theta_-$ of $B$ onto $U_0$ with the following properties: \begin{alignat*}{2} \theta_+(B^+) &= U_0^+ \,, \qquad \qquad && \theta_-(B^+)= U_0^- \,, \\ \theta_+(B^0) & = \overline{U_0^+}\cap \Gamma_s\,, && \theta_-(B^0) = \overline{U_0^-}\cap \Gamma_s\,, \end{alignat*} such that \begin{equation}\nonumber \{x_0\}=\theta_+(B^0)\cap\theta_-(B^0)\,, \end{equation} and \begin{equation}\nonumber \theta_+(0)=\theta_-(0)=x_0\,. \end{equation} We further assume that $$ \overline{\theta_\pm(B^+\cap B(0,1/2))} \cap \overline{\theta_l(B^+)} = \emptyset \text{ for } l=1,...,K \,,$$ and $$ \overline{\theta_\pm(B^+\cap B(0,1/2))} \cap \overline{\theta_l(B)} = \emptyset \text{ for } l=K+1,...,L \,.$$ \end{enumerate} \begin{definition}[Splash domain $\Omega_s$]\label{def:splashdomain} We say that $\Omega_s$ is a splash domain, if it is defined by a collection of open covers $\{U_l\}_{l=0}^L$ and associated maps $\{\theta_\pm, \theta_1, \theta_2,...,\theta_L\}$ satisfying the properties (1)--(5) above.
Because each of the maps is an $H^{3}$ diffeomorphism, we say that the splash domain $\Omega_s$ defines a self-intersecting {\it generalized} $H^{3}$-domain. \end{definition} \subsection{An approximating sequence of non self-intersecting domains converging to the splash domain} Following \cite{CoSh2014a}, we can then define standard (non self-intersecting) domains $\Omega^\epsilon$ (for $\epsilon>0$ small enough) by just modifying $\theta_\pm$, and leaving the other charts unchanged. As shown in Figure \ref{fig4}, our non self-intersecting domain $\Omega^\epsilon$ will be defined by associated maps $\{\theta^\epsilon_\pm, \theta_1, \theta_2,...,\theta_L\}$ such that \begin{equation} \label{thetaeps} \|\theta^\epsilon_\pm-\theta_\pm\|_{H^3(B^+)}\le C\epsilon\,, \end{equation} and such that \begin{equation} \label{thetaepsbis} 0<d(\theta^\epsilon_+(B^+),\theta^\epsilon_-(B^+))\le \epsilon\,. \end{equation} \begin{figure}[htbp] \begin{center} \includegraphics[scale = 0.55]{chart_eps.eps} \caption{The black dot denotes the point $x_0$ where the boundary self-intersects (middle). For $ \epsilon >0$, the approximate domain $\Omega^ \epsilon $ does not intersect itself (right).} \label{fig4} \end{center} \end{figure} In summary, we have approximated the self-intersecting splash domain $\Omega_s$ with a sequence of $H^{3}$-class domains $\Omega^ \epsilon $ converging toward $\Omega_s$, such that for each $ \epsilon >0$, $\partial \Omega ^ \epsilon $ does not self-intersect. Each of these domains $\Omega^ \epsilon $, $ \epsilon >0$, is thus amenable to our local-in-time well-posedness theory for free-boundary incompressible Navier-Stokes equations. \section{Existence of a splash in finite time in a domain arbitrarily close to a given splash domain} \label{section10} We next define an initial velocity field of the same type as in Section \ref{sec::u0}. Due to (\ref{thetaeps}), the estimates of Section \ref{section7} remain unchanged.
The main proof of Section \ref{section8} likewise carries over, due to (\ref{thetaepsbis}), leading to the necessity of self-intersection at a time $T^\epsilon\in (0,10 \epsilon)$. Note that since the tangent plane at the intended splash singularity $x_0$ is the horizontal plane $\{ x_3=0\}$, $\partial [\theta_-( B^+)]$ is very close to $\{x_3=0\}$ in a small ball $B(x_0, \sqrt{ \epsilon })$ for $\epsilon $ taken sufficiently small; thus, we are using the fact that the almost flat portion of $\theta_-( B^+)$ is very close to $\{ x_3=0\}$ and contains a region of diameter at least $ \sqrt{ \epsilon }$. Furthermore, \begin{align} \|\eta^\epsilon(\theta^\epsilon_\pm, T^\epsilon)-\theta_\pm\|_3& \le \|\eta^\epsilon(\theta^\epsilon_\pm, T^\epsilon)-\theta^\epsilon_\pm\|_3+\|\theta^\epsilon_\pm-\theta_\pm\|_3\nonumber\\ & \le \|\int_0^{T^\epsilon} v^\epsilon(\theta^\epsilon_\pm,t)\ dt\|_3+C\epsilon\,, \label{9.1} \end{align} where we used the estimate (\ref{thetaeps}) in the above inequality (\ref{9.1}); hence, from our estimates in Section \ref{section7}, \begin{equation} \|\eta^\epsilon(\theta^\epsilon_\pm,T^\epsilon)-\theta_\pm\|_3 \le C\mathcal{P}(M_0) \sqrt{T^\epsilon}+C\epsilon\le C\mathcal{P}(M_0) \sqrt{\epsilon}\,. \label{9.2} \end{equation} This, therefore, shows that the domain $\eta^{\epsilon}(\Omega^\epsilon,T^\epsilon)$ at the self-intersection time is at a distance less than $C\mathcal{P}(M_0) \sqrt{\epsilon}$ from $\Omega_s$ in $H^3$.
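We remark that the first term on the right-hand side of (\ref{9.1}) is controlled by Minkowski's integral inequality followed by the Cauchy-Schwarz inequality in time, together with the energy bound (\ref{apriori-est}) (the constant $C$ below accounts for composition with the fixed $H^3$-class charts $\theta^\epsilon_\pm$): \begin{equation*} \Big\| \int_0^{T^\epsilon} v^\epsilon(\theta^\epsilon_\pm,t)\ dt \Big\|_3 \le \int_0^{T^\epsilon} \| v^\epsilon(\theta^\epsilon_\pm,t)\|_3\ dt \le \sqrt{T^\epsilon} \left( \int_0^{T^\epsilon} \| v^\epsilon(\theta^\epsilon_\pm,t)\|^2_3\ dt \right)^{\frac{1}{2}} \le C \mathcal{P} (M_0) \sqrt{T^\epsilon} \,, \end{equation*} which is the chain of inequalities behind (\ref{9.2}).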
We have then established the following: \begin{theorem}\label{thm_general} For {\it any} given splash domain $\Omega_s$ of class $H^3$, there exists a splash domain $\tilde \Omega_s$ arbitrarily close in $H^3$ to $\Omega_s$, and smooth initial data consisting of a non self-intersecting domain $\Omega^\epsilon$ of class $H^{3}$ and a divergence-free velocity field $u_0^ \epsilon \in H^3( \Omega^\epsilon )$ satisfying $ [ \operatorname{Def} u_0^ \epsilon \cdot N_\epsilon ] \times N_ \epsilon =0$ on $\partial\Omega^\epsilon$, such that the flow map $\eta(x, t)$ solving the Navier-Stokes equations (\ref{NSlag}) satisfies $\eta( \Omega^\epsilon , T^*) = \tilde \Omega_s$. That is, in finite time $T^*>0$, a splash singularity occurs which is very close to a prescribed self-intersecting geometry. \end{theorem} \section*{Acknowledgments} DC was supported by the Centre for Analysis and Nonlinear PDEs funded by the UK EPSRC grant EP/E03635X and the Scottish Funding Council. SS was supported by the National Science Foundation under grant DMS-1301380 and by the Royal Society Wolfson Merit Award.
\section{Introduction} \label{sect:intro} Before the ascendency of quantum field theory, Stueckelberg proposed an approach to relativistic quantum field theory based on the conception of particle paths in spacetime, parameterized by an invariant fifth parameter \cite{stueckelberg41, stueckelberg42}. Feynman later considered this idea as the basis for relativistic path integrals (see the appendices to \refcite{feynman50, feynman51}), a conception which seems to have informed his early work on quantum field theory (though it is not much apparent in his later work). Since then, a number of authors have further developed the theory of parameterized relativistic quantum physics (see \refcite{fanchi93} and references therein), though not necessarily using a path integral approach. However, relativistic path integrals in particular have a natural interpretation in terms of consistent or decoherent histories \cite{griffiths84, omnes88, gellmann90}. In this interpretation, the path of a particle in spacetime is considered a \emph{fine-grained} history. A path integral then represents a \emph{coarse-grained} history as a superposition of paths meeting some criteria. When the criteria are properly chosen, the states for these coarse grained histories do not interfere---that is, they are \emph{decoherent} \cite{hartle95}. Since decoherent histories do not interfere, they can be assigned classical probabilities. Further adopting a ``many worlds'' interpretation \cite{everett57}, these histories can be considered to be alternate ``branches'' in the history of the universe, with associated probabilities for each of the branches to ``occur''. (For an informal introduction to the ideas of decoherence and emergent classicality, see \refcite{halliwell04}. For a more extensive survey see \refcite{halliwell03}.) 
Relativistic path integrals have also proved useful in the study of quantum gravity and quantum cosmology, because the time coordinate is treated similarly to the space coordinates, rather than as an evolution parameter (see, for example, \refcite{teitelboim82, hartle95}). In quantum cosmological models, the total Hamiltonian annihilates the ``wave function of the universe'', rather than determining the time evolution of the system. The question is how to extract physical predictions from such a wave function. Inspired by this, Halliwell and Thorwart recently published a paper with the engaging title ``Life in an energy eigenstate'' \cite{halliwell02} in which they consider the internal dynamics of a simple particle system in an energy eigenstate. In the present paper, I would like to take this idea a bit farther, and describe how the \emph{entire universe} might be considered to be in an eigenstate determined by classical limiting conditions within it. In effect, such an eigenstate is a selection of a specific coarse-grained branch as ``the'' history of the universe. Pursuing this idea requires a formalism that allows coarse-grained histories to be expressed as quantum states. I will use the spacetime path formalism proposed in \refcite{seidewitz06a}. For completeness, \sect{sect:formalism} summarizes the development of this formalism. A particularly important result from this work is that the coarse-grained histories of free particles with fixed 3-momentum become on-shell and decoherent in the infinite time limit. \Sect{sect:decoherence} then discusses decoherence in the context of the spacetime path formalism. \Sect{sect:decoherence:2slit} applies the formalism to the analysis of the familiar scenario of the two slit experiment. \Sect{sect:decoherence:scattering} extends the approach to consideration of a scattering process that takes place in a finite region of spacetime. 
Finally, taking this analysis of scattering as a paradigm, \sect{sect:decoherence:probabilities} considers the relation of probabilities to measured relative frequencies and \sect{sect:decoherence:cosmo} presents a heuristic discussion of the decoherence of \emph{cosmological states} of the entire universe. Throughout, I will use a spacetime metric signature of $(-+++)$ and take $\hbar = c = 1$. \section{Spacetime Paths} \label{sect:formalism} This section summarizes the spacetime path formalism I will use in the following sections. For further details on the development of this formalism, see \refcite{seidewitz06a}. \subsection{Position States} \label{sect:formalism:position} A \emph{spacetime path} is specified by four functions $\qmul$, for $\mu = 0, 1, 2, 3$, of a \emph{path parameter} $\lambda$. Note that such a path is not constrained to be timelike or even to maintain any particular direction in time. The only requirement is that it must be continuous. And, while there is no \emph{a priori} requirement for the paths to be differentiable, we can, as usual, treat them as differentiable within the context of a path integral (see the discussion in \refcite{seidewitz06a}). It is well known that a spacetime path integral of the form \begin{equation} \label{eqn:A1} \prop = \eta \int_{\lambdaz}^{\infty} \dif \lambda_{1}\, \intDfour q\, \delta^{4}(q(\lambda_{1}) - x) \delta^{4}(q(\lambdaz) - \xz) \exp\left( \mi \int^{\lambda_{1}}_{\lambdaz} \dl L(\qdotsq(\lambda)) \right) \,, \end{equation} for an appropriate normalization constant $\eta$ and the Lagrangian function \begin{equation*} L(\qdotsq) = \frac{1}{4}\qdotsq - m^{2} \,, \end{equation*} gives the free-particle Feynman propagator \cite{feynman50, teitelboim82, halliwell01b, seidewitz06a}. In the path integral above, the notation $\Dfour q$ indicates that the integral is over the four functions $\qmul$ and the delta functions constrain the starting and ending points of the paths integrated over.
Consider, however, that \eqn{eqn:A1} can be written \begin{equation*} \prop = \int_{\lambdaz}^{\infty} \dif \lambda_{1}\, \kersym(x-\xz; \lambda_{1}-\lambdaz) \,, \end{equation*} where \begin{equation} \label{eqn:A2} \kersym(x-\xz; \lambda_{1}-\lambdaz) \equiv \eta \intDfour q\, \delta^{4}(q(\lambda_{1}) - x) \delta^{4}(q(\lambdaz) - \xz) \exp\left( \mi \int^{\lambda_{1}}_{\lambdaz} \dl L(\qdotsq(\lambda)) \right) \end{equation} now has a path integral form similar to that of the usual non-relativistic \emph{propagation kernel} \cite{feynman48, feynman65}, except with paths parametrized by $\lambda$ rather than time. We can, therefore, use the relativistic kernel of \eqn{eqn:A2} to define a parametrized probability amplitude function in a similar fashion to the non-relativistic case: \begin{equation} \label{eqn:A3} \psixl = \intfour \xz\, \kerneld \psixlz \,. \end{equation} These wave functions are just the parametrized probability amplitude functions defined by Stueckelberg \cite{stueckelberg41}. In this sense, the $\psixl$ represent the probability amplitude for a particle to reach position $x$ at the point along its path with parameter value $\lambda$. The path integral in \eqn{eqn:A2} can be evaluated to give \cite{teitelboim82, seidewitz06a} \begin{equation*} \kerneld = (2\pi)^{-4} \intfour p \, \me^{\mi p\cdot(x - \xz)} \me^{-\mi (\lambda-\lambdaz)(p^{2} + m^{2})} \,. \end{equation*} Inserting this into \eqn{eqn:A3}, we see that $\psixl$ satisfies the \emph{Stueckelberg-Schr\"odinger equation} \begin{equation*} -\mi \pderiv{}{\lambda} \psixl = \left( \frac{\partial^{2}}{\partial x^{2}} - m^{2} \right) \psixl \,. \end{equation*} Note that this equation is based on the relativistic Hamiltonian $p^{2}+m^{2}$, and therefore includes the mass term $m^{2}$. This is in contrast to most previous authors \cite{feynman50, horwitz73, fanchi78}, who used a Hamiltonian of the form $p^{2}/(2m)$, by analogy with non-relativistic mechanics.
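As a quick consistency check of this equation (a mode-by-mode verification using the momentum-space form of the kernel given above), note that each Fourier mode satisfies \begin{equation*} -\mi \frac{\partial}{\partial \lambda} \left[ \me^{\mi p\cdot(x - \xz)} \me^{-\mi (\lambda-\lambdaz)(p^{2} + m^{2})} \right] = -(p^{2} + m^{2})\, \me^{\mi p\cdot(x - \xz)} \me^{-\mi (\lambda-\lambdaz)(p^{2} + m^{2})} = \left( \frac{\partial^{2}}{\partial x^{2}} - m^{2} \right) \me^{\mi p\cdot(x - \xz)} \me^{-\mi (\lambda-\lambdaz)(p^{2} + m^{2})} \,, \end{equation*} since, with the $(-+++)$ signature, $\partial^{2}/\partial x^{2}$ acting on $\me^{\mi p \cdot x}$ produces a factor of $-p^{2}$; by linearity, the equation then holds for $\psixl$ itself.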
The relativistic propagation kernel can also be given a conjugate form as a superposition of particle mass states. For $T > 0$, \begin{equation} \label{eqn:A4} \begin{split} \theta(T)\kersym(x-\xz;T) &= \me^{-\mi T m^{2}} \intfour p\, \me^{\mi p\cdot(x-\xz)} \int_{0}^{\infty} \dif T'\, \me^{-\mi T' p^{2}} \delta(T'-T) \\ &= (2\pi)^{-1}\me^{-\mi T m^{2}} \intfour p\, \me^{\mi p\cdot(x-\xz)} \int_{0}^{\infty} \dif T'\, \me^{-\mi T' p^{2}} \int \dif m'^{2}\, \me^{-\mi(T'- T)m'^{2}} \\ &= (2\pi)^{-1}\me^{-\mi T m^{2}} \int \dif m'^{2}\, \me^{\mi T m'^{2}} \propsym(x-\xz;m'^{2}) \,, \end{split} \end{equation} where \begin{equation*} \propsym(x-\xz;m'^{2}) \equiv \int_{0}^{\infty} \dif T'\, \intfour p\, \me^{\mi p\cdot(x-\xz)} \me^{-\mi T'(p^{2}+m'^{2})} = -\mi(2\pi)^{-4}\intfour p\, \frac{\me^{\mi p\cdot(x-\xz)}} {p^{2}+m'^{2}-\mi\varepsilon} \,. \end{equation*} Except for the extra phase factor $\exp(-\mi T m^{2})$, this form for $\kersym(x-\xz;T)$ is essentially that of the retarded Green's function derived by Land and Horwitz for parametrized quantum field theory \cite{land91,frastai95} as a superposition of propagators for different mass states (see also \refcite{enatsu63, enatsu86}). The value $T$ in $\kersym(x-\xz;T)$ can be thought of as fixing a specific \emph{intrinsic length} for the paths being integrated over in \eqn{eqn:A2}. The full propagator then results from a regular integration over all possible intrinsic path lengths: \begin{equation*} \prop = \int_{0}^{\infty} \dif T\, \kersym(x-\xz; T) \,. \end{equation*} As a result of the phase factor $\exp(-\mi T m^{2})$ in \eqn{eqn:A4}, the integration over $T$ effectively acts as a Fourier transform, resulting in the Feynman propagator with mass sharply defined at $m$, $\prop = \propsym(x-\xz;m)$. 
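The sense in which the $T$ integration singles out the mass $m$ can be made explicit (heuristically, suppressing convergence factors) using the distributional identity \begin{equation*} \int_{0}^{\infty} \dif T\, \me^{\mi T (m'^{2} - m^{2})} = \pi \delta(m'^{2} - m^{2}) + \mi\, \mathrm{P}\, \frac{1}{m'^{2} - m^{2}} \,, \end{equation*} in which the delta function term selects the value $m'^{2} = m^{2}$ from the superposition of mass states in \eqn{eqn:A4}.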
The functions defined in \eqn{eqn:A3} form a Hilbert space over four dimensional spacetime, parameterized by $\lambda$, in the same way that traditional non-relativistic wave functions form a Hilbert space over three dimensional space, parameterized by time. We can therefore define a consistent family of \emph{position state} bases $\ketxl$, such that \begin{equation} \label{eqn:A5} \psixl = \innerxlpsi \,, \end{equation} given a single Hilbert space state vector $\ketpsi$. These position states are normalized such that \begin{equation*} \inner{x'; \lambda}{x; \lambda} = \delta^{4}(x' - x) \,, \end{equation*} for each value of $\lambda$. Further, it follows from \eqns{eqn:A3} and \eqref{eqn:A5} that \begin{equation} \label{eqn:A6} \kerneld = \innerxlxlz \,. \end{equation} Thus, $\kerneld$ effectively defines a unitary transformation between the various Hilbert space bases $\ketxl$, indexed by the parameter $\lambda$. The overall state for propagation from $\xz$ to $x$ is given by the superposition of the states for paths of all intrinsic lengths. If we fix $\qmulz = \xmu_{0}$, then $\ketxl$ already includes all paths of length $\lambda - \lambdaz$. Therefore, the overall state $\ketx$ for the particle to arrive at $x$ should be given by the superposition of the states $\ketxl$ for all $\lambda > \lambdaz$: \begin{equation*} \ketx \equiv \int_{\lambdaz}^{\infty} \dl\, \ketxl \,. \end{equation*} Then, using \eqn{eqn:A6}, \begin{equation*} \innerxxlz = \int_{\lambdaz}^{\infty} \dl\, \kerneld = \int_{0}^{\infty} \dl\, \kersym(x-\xz; \lambda) = \prop \,. \end{equation*} \subsection{On-Shell States} \label{sect:formalism:on-shell} The position states defined in \sect{sect:formalism:position} make no distinction based on the time-direction of propagation of particles. Normally, particles are considered to propagate \emph{from} the past \emph{to} the future.
Therefore, we can define normal particle states $\ketax$ such that \begin{equation} \label{eqn:B0} \innerxaxlz = \thetaax \prop \,. \end{equation} On the other hand, \emph{antiparticles} may be considered to propagate from the \emph{future} into the \emph{past} \cite{stueckelberg41, stueckelberg42, feynman49}. Therefore, antiparticle states $\ketrx$ are such that \begin{equation} \label{eqn:B0a} \innerxrxlz = \thetarx \prop \,. \end{equation} Note that the particle/antiparticle distinction proposed here is subtly different from that originally proposed by Stueckelberg \cite{stueckelberg41, stueckelberg42}. Stueckelberg considered the possibility that a single particle path might undergo a dynamical interaction that could change the time direction of its propagation, corresponding to what seemed to be a particle creation or annihilation event when viewed in a time-advancing direction. In contrast, the definitions of particle and antiparticle states given here depend only on whether the \emph{end point} $x$ of the particle path is in the future or past of its starting point $\xz$. Between these two points, the path may move arbitrarily forward or backward in time. This division into particle and antiparticle paths depends, of course, on the choice of a specific coordinate system in which to define the time coordinate. However, if we take the time limit of the end point of the path to infinity for particles and negative infinity for antiparticles, then the particle/antiparticle distinction will be coordinate system independent. In taking this time limit, one cannot expect to hold the 3-position of the path end point constant. However, for a free particle, it is reasonable to take the particle \emph{3-momentum} as being fixed.
Therefore, consider the state of a particle or antiparticle with a 3-momentum $\threep$ at a certain time $t$: \begin{equation*} \ketar{t,\threep} \equiv (2\pi)^{-3/2} \intthree x\, \me^{\mi(\mp\Ep t + \threep\cdot\threex)} \ketar{t,\threex} \,, \end{equation*} where $\Ep \equiv \sqrt{\threep^{2} + m^{2}}$. Now, as shown in \refcite{seidewitz06a}, \begin{equation} \label{eqn:B1} \begin{split} \keta{t,\threep} &= (2\Ep)^{-1} \int_{-\infty}^{t} \dt_{0}\, \ketalz{t_{0}, \threep} \quad \text{and} \\ \ketr{t,\threep} &= (2\Ep)^{-1} \int_{t}^{+\infty} \dt_{0}\, \ketrlz{t_{0}, \threep} \,, \end{split} \end{equation} where \begin{equation*} \ketarlz{t, \threep} \equiv (2\pi)^{-3/2} \intthree x\, \me^{\mi(\mp\Ep t + \threep\cdot\threex)} \ketlz{t, \threex} \,. \end{equation*} Since \begin{equation*} \inner{\advret{t', \threepp}; \lambdaz} {\advret{t, \threep}; \lambdaz} = \delta(t'-t) \delta^{3}(\threepp - \threep) \,, \end{equation*} we have, from \eqn{eqn:B1}, \begin{equation*} \inner{t, \advret{\threep}}{\advret{t_{0}, \threep_{0}{}}; \lambdaz} = (2\Ep)^{-1} \theta(\pm(t-t_{0})) \delta^{3}(\threep - \threep_{0}) \,. \end{equation*} If we now define the time limit particle and antiparticle states \begin{equation} \label{eqn:B2} \ketarthreep \equiv \lim_{t \to \pm\infty} \ketartp \,, \end{equation} then \begin{equation} \label{eqn:B3} \inner{\advret{\threep}}{\advret{t_{0}, \threep_{0}{}}; \lambdaz} = (2\Ep)^{-1} \delta^{3}(\threep - \threep_{0}) \,, \end{equation} for \emph{any} value of $t_{0}$. \Eqn{eqn:B3} is a natural introduction of an ``induced'' inner product, in the sense of \cite{halliwell01b, hartle97}. To see how this induced inner product may be used, consider the two Hilbert-space subspaces spanned by the normal particle states $\ketalz{t, \threep}$ and the antiparticle states $\ketrlz{t, \threep}$, for each time $t$.
States in these subspaces have the form \begin{equation*} \ketarlz{t,\psi} = \intthree p\, \psi(\threep) \ketarlz{t,\threep} \,, \end{equation*} for any square-integrable function $\psi(\threep)$, with \begin{equation*} \psi(\threep) = (2\Ep)\inner{\advret{\threep}}{t,\advret{\psi}} \,. \end{equation*} Similarly, consider the dual subspaces spanned by the bra states $\braa{\threep}$ and $\brar{\threep}$, such that \begin{equation*} \braar{\psi} \equiv \intthree p\, \psi(\threep)^{*} \braar{\threep} \end{equation*} and \begin{equation} \label{eqn:B3a} \psi(\threep)^{*} = \inner{\advret{\psi}}{t,\advret{\threep}}(2\Ep) \,. \end{equation} As a result of \eqn{eqn:B3}, we get the traditional inner product \begin{equation} \label{eqn:B4} (\psi', \psi) \equiv \inner{\advret{\psi'}}{t,\advret{\psi}} = \int \frac{\dif^{3} p}{2\Ep} \psi'(\threep)^{*} \psi(\threep) \,. \end{equation} With the inner product given by \eqn{eqn:B4}, the spaces of the $\ketar{t,\psi}$ can be considered ``reduced'' Hilbert spaces in their own right, with the dual Hilbert space being the spaces of the $\braar{\psi}$. \Eqn{eqn:B3} can then be seen as a \emph{bi-orthonormality} relation (see \refcite{akhiezer81} and App. A.8.1 of \refcite{muynk02}) expressing the orthonormality of the $\ketl{t,\threep}$ basis with respect to this inner product and allowing for the resolution of the identity \begin{equation} \label{eqn:B5} \intthree p\, (2\Ep)\ketarlz{t,\threep}\bra{\threep} = 1 \,. \end{equation} This can be used to reproduce the usual probabilistic interpretation of quantum mechanics over 3-space for each time $t$ (for further details, see \refcite{seidewitz06a}). 
Further, writing \begin{equation*} \ketarlz{t_{0}, \threep} = (2\pi)^{-1/2} \me^{\mp\mi\Ep t_{0}} \int \dif p^{0}\, \me^{\mi p^{0}t_{0}} \ketplz \,, \end{equation*} where \begin{equation*} \ketplz \equiv (2\pi)^{-2} \intfour x\, \me^{\mi p \cdot x} \ketlz{x} \end{equation*} is the corresponding 4-momentum state, it is straightforward to see from \eqn{eqn:B1} that the time limit of \eqn{eqn:B2} is \begin{equation*} \ketarthreep \equiv \lim_{t \to \pm\infty} \ketartp = (2\pi)^{1/2} (2\Ep)^{-1} \ketarEplz \,. \end{equation*} Thus, a normal particle ($+$) or antiparticle ($-$) that has 3-momentum $\threep$ as $t \to \pm\infty$ is \emph{on-shell}, with energy $\pm\Ep$. Such on-shell particles are unambiguously normal particles or antiparticles, independent of choice of coordinate system, and, because of the bi-orthonormality relation of \eqn{eqn:B3}, we can assign classical probabilities for them to have specific 3-momenta. \subsection{Fields and Interactions} \label{sect:formalism:fields} Multiple particle states can be straightforwardly introduced as members of a Fock space over the Hilbert space of position states $\ketxl$. First, in order to allow for multiparticle states with different types of particles, extend the position state of each individual particle with a \emph{particle type index} $n$, such that \begin{equation*} \inner{x',n';\lambda}{x,n;\lambda} = \delta^{n'}_{n}\delta^{4}(x'-x) \,. \end{equation*} Then, construct a basis for the Fock space of multiparticle states as sym\-me\-trized products of $N$ single particle states: \begin{equation*} \ket{\xnliN} \equiv (N!)^{-1/2} \sum_{\text{perms }\Perm} \ket{\xni{\Perm 1};\lambda_{\Perm 1}} \cdots \ket{\xni{\Perm N};\lambda_{\Perm N}} \,, \end{equation*} where the sum is over all permutations $\Perm$ of $1, 2, \ldots, N$. (Since, for simplicity, I am only considering scalar particles in the present work, only Bose-Einstein statistics need be accounted for.) 
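The symmetrized product construction above can be illustrated with a small numerical sketch. This is a toy model only: single-particle states are represented as vectors in a finite-dimensional space (the dimension and basis choice here are hypothetical, introduced purely for illustration), and the $(N!)^{-1/2}$ normalization follows the definition in the text.

```python
import itertools
import math

import numpy as np

def symmetrized(states):
    """Symmetrized product of single-particle state vectors,
    with the (N!)^(-1/2) normalization used in the text."""
    N = len(states)
    out = np.zeros(states[0].size ** N, dtype=complex)
    # Sum the tensor product over all permutations of the particles
    for perm in itertools.permutations(range(N)):
        term = np.ones(1, dtype=complex)
        for i in perm:
            term = np.kron(term, states[i])
        out += term
    return out / math.sqrt(math.factorial(N))

# Two distinguishable single-particle basis states in a 3-dimensional toy space
e = np.eye(3, dtype=complex)
s = symmetrized([e[0], e[1]])

# Bose-Einstein statistics: the state is unchanged under particle exchange
assert np.allclose(s, symmetrized([e[1], e[0]]))
# For distinct orthonormal constituents, the symmetrized state is normalized
assert np.isclose(np.linalg.norm(s), 1.0)
```

For two distinct orthonormal constituents this reproduces the familiar $(\ket{01} + \ket{10})/\sqrt{2}$ pattern; for identical constituents the same formula yields the extra combinatorial factors characteristic of bosonic states.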
It is then convenient to introduce a \emph{creation field} operator $\oppsit(x,n;\lambda)$ such that \begin{equation*} \oppsit(x,n;\lambda)\ket{\xnliN} = \ket{x,n,\lambda;\xnliN} \,, \end{equation*} with the corresponding annihilation field $\oppsi(x,n;\lambda)$ having the commutation relation \begin{equation*} [\oppsi(x',n';\lambda), \oppsit(x,n;\lambdaz)] = \delta^{n'}_{n}\propsym(x'-x;\lambda-\lambdaz) \,. \end{equation*} Further, define \begin{equation*} \oppsi(x,n) \equiv \int_{\lambdaz}^{\infty} \dl\, \oppsi(x,n;\lambda) \,, \end{equation*} so that \begin{equation*} [\oppsi(x',n'), \oppsit(x,n;\lambdaz)] = \delta^{n'}_{n}\propsym(x'-x) \,. \end{equation*} Now, an individual interaction vertex can be considered an event at which some number of incoming particles are destroyed and some number of outgoing particles are created. (Note that I am using the qualifiers ``incoming'' and ``outgoing'' here in the sense of the path evolution parameter $\lambda$, not time---which means that we are \emph{not} separately considering particles and antiparticles at this point.) Such an interaction can be modeled using a \emph{vertex operator} constructed from the appropriate number of annihilation and creation operators. For example, consider the case of an interaction with two incoming particles, one of type $n_{A}$ and one of type $n_{B}$, and two outgoing particles of the same types. The vertex operator for this interaction is \begin{equation} \label{eqn:C1} \opV \equiv g \intfour x\, \oppsit(x,n_{A};\lambdaz)\oppsit(x,n_{B};\lambdaz) \oppsi(x,n_{A})\oppsi(x,n_{B}) \,, \end{equation} where the coefficient $g$ represents the relative probability amplitude of the interaction. In the following, it will be convenient to use the special adjoint $\oppsi\dadj$ defined by \begin{equation*} \oppsi\dadj(x,n) = \oppsit(x,n;\lambdaz) \text{ and } \oppsi\dadj(x,n;\lambdaz) = \oppsit(x,n) \,.
\end{equation*} With this notation, the expression for $\opV$ becomes \begin{equation*} \opV = g \intfour x\, \oppsi\dadj(x,n_{A})\oppsi\dadj(x,n_{B}) \oppsi(x,n_{A})\oppsi(x,n_{B}) \,. \end{equation*} To account for the possibility of any number of interactions, we just need to sum up powers of $\opV$ to obtain the \emph{interaction operator} \begin{equation} \label{eqn:C2} \opG \equiv \sum_{m=0}^{\infty} \frac{(-\mi)^{m}}{m!}\opV^{m} = \me^{-\mi\opV} \,, \end{equation} where the $1/m!$ factor accounts for all possible permutations of the $m$ identical factors of $\opV$. Note that, unlike the usual scattering operator, there is no time ordering in the summation here. (More on this in \sect{sect:decoherence:scattering}.) The $-\mi$ factors are introduced in \eqn{eqn:C2} so that $\opG$ is unitary relative to the special adjoint (that is, $\opG\dadj\opG = \opG\opG\dadj = 1$), so long as $\opV$ is self-adjoint relative to it (that is, $\opV\dadj = \opV$). The self-adjointness of $\opV$ implies that an interaction must have the same number of incoming and outgoing particles, of the same types, at least when only one possible type of interaction is involved (as is the case with the example of \eqn{eqn:C1}). The formalism can be easily extended to allow for multiple types of interactions by adding additional terms to the definition of $\opV$. In this case, only the overall operator $\opV$ needs to be self-adjoint, not the individual interaction terms. Now, clearly we can also construct a Fock space from the 3-momentum representation states $\ketlz{t,\threep}$ and $\ket{t,\threep}$. We can then define the multiparticle time-limit states \begin{equation*} \begin{split} \bra{\threep'_{1\pm},n'_{1};\ldots} &\equiv \lim_{t'_{i} \to \pm\infty} \bra{t'_{1},\threep'_{1\pm},n'_{1};\ldots} \,, \\ \ketlz{\threep_{1\pm},n_{1};\ldots} &\equiv \lim_{t_{i} \to \mp\infty} \ket{t_{1},\threep_{1\pm},n_{1},\lambdaz;\ldots} \,.
\end{split} \end{equation*} In these states, each particle is \emph{either} a normal particle ($+$) \emph{or} an antiparticle ($-$). Note that the limit is taken to $+\infty$ for outgoing particles, but to $-\infty$ for outgoing antiparticles (and vice versa for incoming particles). These multiparticle 3-momentum states can be used with the interaction operator $\opG$ to compute multipoint interaction amplitudes. For example, the four-point amplitude for one incoming particle, one incoming antiparticle, one outgoing particle and one outgoing antiparticle is given by \begin{multline} \label{eqn:C4} G(\adv{\threepp_{1}{}}, n'_{1}; \ret{\threepp_{2}{}}, n'_{2} | \adv{\threep_{1}{}}, n_{1}; \ret{\threep_{2}{}}, n_{2}; \lambdaz) \\ = (2\E{\threepp_{1}}2\E{\threepp_{2}} 2\E{\threep_{1}}2\E{\threep_{2}})^{1/2} \bra{\adv{\threepp_{1}{}}, n'_{1}; \ret{\threepp_{2}{}}, n'_{2}} \opG \ketlz{\adv{\threep_{1}{}}, n_{1}; \ret{\threep_{2}{}}, n_{2}} \,. \end{multline} (The $2\Ep$ factors are required by the resolution of the identity for the multiparticle 3-momentum states, generalizing the single particle case of \eqn{eqn:B3}.) Expanding $\opG$ as in \eqn{eqn:C2} gives a sum of Feynman diagrams for each possible number of interactions. The time-limited 3-momentum states give the correct truncated amplitudes for the external legs of the diagrams \cite{seidewitz06a}. \section{Decoherence} \label{sect:decoherence} The bi-orthonormality condition of \eqn{eqn:B3} already provides an example of decoherence. The operator $(2\Ep) \ketarlz{t,\threep} \bra{\threep}$ represents the quantum proposition that a particle or antiparticle has a coarse-grained history in which it is free with 3-momentum $\threep$. The fact that these operators are orthogonal by \eqn{eqn:B3} and resolve the identity by \eqn{eqn:B5} indicates that these histories are decoherent and classical probabilities can be assigned as to whether a particle is in one such history or another \cite{griffiths02}.
In this section I will explore further this concept of decohering histories of particle paths. I will start with the familiar case of the two slit experiment, to provide a heuristic example of the analysis of measurement-induced decoherence using the spacetime path formalism. This is followed by consideration of scattering experiments and then, finally, extension of these ideas to the universe as a whole. \subsection{Two Slit Experiment} \label{sect:decoherence:2slit} The canonical two-slit experiment has, of course, been analyzed several times previously, both in terms of path integrals and decoherence (see, for example, \refcite{feynman65, hartle91a, hartle95, halliwell04}). Nevertheless, it is still instructive to use this familiar case as a means for introducing the application of the formalism defined in \sect{sect:formalism}. Presume that incoming particles are prepared to have a fixed 3-momentum $\threep$. Then, we can take a particle emitted at time $\tz$ to be in the 3-momentum state $\ketalz{\tz,\threep}$. Further, assume that the flight time is long enough that, when the particles reach the slits, they can be considered to be in the on-shell state $\ketathreep$. For the purposes of the discussion here, it is sufficient to further idealize the experiment by considering the slits to be single points at positions $\threex_{i}$, for $i = 1,2$. The state for the particle to reach one or the other of the slits is then \begin{equation*} \keta{\threex_{i}{}} = (2\pi)^{-3/2} \intthree p\, \me^{-\mi\threep\cdot\threex_{i}} \ketathreep \,. \end{equation*} From \eqn{eqn:B3a}, the corresponding probability amplitudes are \begin{equation*} \phi_{i} = \inner{\adv{\threex_{i}{}}}{\tz,\adv{\threep};\lambdaz}(2\Ep) = \me^{\mi\threep\cdot\threex_{i}} \,, \end{equation*} corresponding to an incoming plane wave.
Taking the plane of the slits to be perpendicular to the direction of $\threep$ results in $\phi_{1} = \phi_{2} = 1$, corresponding to the equal probability of the particle reaching any point on that plane. Since the particle is blocked from passing except through the slits, we can clearly renormalize the $\phi_{i}$ so that \begin{equation*} \phi_{1} = \phi_{2} = \frac{1}{\sqrt{2}} \,. \end{equation*} Suppose the particle passes through the slit at $\threex_{i}$ at some time $t_{i}$. One can now consider its remaining path separately, starting at $(t_{i},\threex_{i})$ and ending at some position $\threex$ on the final screen of the experiment. Qualitatively, the amplitude for this can be given by \begin{equation*} \psi_{i}(\threex) = \inner{\adv{\threex}}{t_{i},\adv{\threex_{i}{}};\lambdaz} \,. \end{equation*} The amplitude for passing through either slit and reaching $\threex$ is then \begin{equation} \label{eqn:D1} \psi(\threex) = \phi_{1}\psi_{1}(\threex) + \phi_{2}\psi_{2}(\threex) = \frac{1}{\sqrt{2}} ( \inner{\adv{\threex}}{t_{1},\adv{\threex_{1}{}};\lambdaz} + \inner{\adv{\threex}}{t_{2},\adv{\threex_{2}{}};\lambdaz} ) \,. \end{equation} The result of the experiment is a measurement made of the final position $\threex$. This measurement is represented by a measuring instrument eigenstate $\ket{m(\threex)}$ such that \begin{equation} \label{eqn:D2} \inner{m(\threex')}{m(\threex)} = \delta^{3}(\threex' - \threex) \,. \end{equation} The measurement eigenstate $\ket{m(\threex)}$ must be weighted by the amplitude $\psi(\threex)$ for the particle to reach position $\threex$. From the point of view of particle paths, each state $\psi(\threex)\ket{m(\threex)}$ can be viewed as representing the entire coarse-grained history of a particle being emitted, passing through one or the other of the slits and being measured as arriving at position $\threex$. 
Due to the orthogonality condition of \eqn{eqn:D2}, these coarse-grained history states do not interfere with each other---that is, the histories \emph{decohere}, so a classical probability of $\sqr{\psi(\threex)}$ can be assigned to them. From \eqn{eqn:D1}, it is clear that this probability will, however, include interference effects between the slit-specific amplitudes $\psi_{1}$ and $\psi_{2}$. We can, of course, also represent the less-coarse-grained histories for the particle passing through \emph{just} one slit as $\psi_{i}(\threex) \ket{m(\threex)}$, for $i = 1,2$. But these histories do \emph{not} decohere, since \begin{equation*} \psi_{1}^{*}(\threex)\psi_{2}(\threex) \inner{m(\threex)}{m(\threex)} \end{equation*} is not zero. (Actually, with the delta function normalization of \eqn{eqn:D2} this value is infinite, but that would not be so for a more realistic instrument with finite resolution.) Suppose, however, that we add a measuring device that measures whether the particle passes through slit 1 or slit 2. This device has two eigenstates denoted $\ket{s(i)}$, for $i = 1,2$, such that \begin{equation*} \inner{s(i)}{s(j)} = \delta_{ij} \,. \end{equation*} The coarse-grained history for a particle being measured as passing through slit $i$ then being measured as reaching position $\threex$ is now $\psi_{i}(\threex)\ket{s(i)}\ket{m(\threex)}$. These histories now \emph{do} decohere, since \begin{equation*} \psi_{i}^{*}(\threex')\psi_{j}(\threex) \inner{s(i)}{s(j)}\inner{m(\threex')}{m(\threex)} = \sqr{\psi_{i}(\threex)} \delta_{ij} \delta^{3}(\threex' - \threex) \,, \end{equation*} and they can be given the individual probabilities $\sqr{\psi_{i}(\threex)}$. The results of this analysis are, of course, as would be expected. Notice, however, that, rather than the usual approach of time evolving states, the approach here constructs states representing entire coarse-grained particle histories. 
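The contrast just computed between interfering and decohered histories can be illustrated with a simple numerical sketch. The wave number, slit separation, and screen geometry below are arbitrary illustrative values (not taken from the text), and the far-field plane-wave amplitudes stand in for the $\psi_{i}$:

```python
import numpy as np

# Toy far-field two-slit model: point slits at x = +d/2 and x = -d/2,
# observed on a screen at distance L, parameterized by position x.
k = 2 * np.pi   # wave number (illustrative)
d = 3.0         # slit separation (illustrative)
L = 50.0        # slit-to-screen distance (illustrative)
x = np.linspace(-10, 10, 2001)

# Path lengths from each slit to the screen point x
r1 = np.sqrt(L**2 + (x - d / 2) ** 2)
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)
psi1 = np.exp(1j * k * r1) / np.sqrt(2)
psi2 = np.exp(1j * k * r2) / np.sqrt(2)

# No which-slit measurement: amplitudes add, so the histories interfere
p_coherent = np.abs(psi1 + psi2) ** 2
# With a which-slit measurement: the orthogonal instrument states |s(i)>
# kill the cross terms, so classical probabilities add instead
p_decohered = np.abs(psi1) ** 2 + np.abs(psi2) ** 2

print(p_coherent.min(), p_coherent.max())    # fringes: varies between ~0 and ~2
print(p_decohered.min(), p_decohered.max())  # uniform: exactly 1 everywhere
```

The decohered probability is flat, while the coherent probability shows the familiar fringe pattern, exactly the $\sqr{\psi_{1}(\threex)} + \sqr{\psi_{2}(\threex)}$ versus $\sqr{\psi(\threex)}$ distinction made above.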
Measurements are modeled as being coupled to specific points in these histories. Thus, rather than modeling some initial state of a measuring instrument evolving into a state with a specific measurement, the states $\ket{s(i)}$ and $\ket{m(\threex)}$ represent the occurrence of specific measurement values \emph{as part of} the overall history of the experiment. The occurrence of a specific measurement value places a constraint on the possible particle paths that can be included in any coarse-grained history consistent with that measurement. Thus, $\ket{s(i)}$ places the constraint that paths must pass through slit $i$, while $\ket{m(\threex)}$ places the constraint that the paths end at position $\threex$. If a coarse-grained history includes \emph{all} possible paths consistent with the constraints for specific measurement values, and no others, then the orthogonality of the measurement states causes such a history to decohere from other similar histories for different measurement values. In this sense, the tensor product of the measurement eigenstates provides a complete, orthogonal basis for decoherent coarse-grained histories of the experiment. Given observations of certain measurement values, the experiment, as a whole, can be said with certainty to be ``in'' the specific history eigenstate selected by those measurement values. Nothing definitive, however, can be said about finer-grained histories, since these histories do not decohere. The important point here is that the experiment is not modeled as ``evolving'' into a decoherent state. Rather it is \emph{entire coarse-grained histories} of the experiment that decohere, with observed measurements simply identifying which actual history was observed. \subsection{Scattering} \label{sect:decoherence:scattering} We now turn to the more general problem of multiparticle scattering, with the goal of providing an analysis similar to that provided for the two slit experiment in \sect{sect:decoherence:2slit}. 
Clearly, we can base this on the multiple particle interaction formalism discussed in \sect{sect:formalism:fields}. However, the formulation of \eqn{eqn:C4} is still not that of the usual scattering matrix, since the incoming state involves particles at $t \to -\infty$ but antiparticles at $t \to +\infty$, and vice versa for the outgoing state. To construct the usual scattering matrix, it is necessary to have incoming multiparticle states that are composed of individual asymptotic particle states that are all consistently for $t \to -\infty$ and outgoing states with individual asymptotic states all for $t \to +\infty$. That is, we need to shift to considering ``incoming'' and ``outgoing'' in the sense of \emph{time}. To do this, we can take the viewpoint of considering antiparticles to be positive energy particles traveling forwards in time, rather than negative energy particles traveling backwards in time. Since both particles and their antiparticles will then have positive energy, it becomes necessary to explicitly label antiparticles with separate (though related) types from their corresponding particles. Let $\na$ denote the type label for a normal particle type and $\nr$ denote the corresponding antiparticle type. For normal particles of type $\na$, position states are defined as in \eqn{eqn:B0}: \begin{equation*} \inner{x,\na}{\xz,\na;\lambdaz} = \thetaax\prop \,. \end{equation*} For antiparticles of type $\nr$, however, position states are now defined such that \begin{equation*} \inner{x,\nr}{\xz,\nr;\lambdaz} = \thetaax\propsym(\xz - x) \,. \end{equation*} Note the reversal with respect to \eqn{eqn:B0a} of $\xz$ and $x$ on the right side of this equation.
Carrying through the derivation for antiparticle 3-momentum states based on the new antiparticle states $\ket{x,\nr}$ does, indeed, give positive energy states, but with reversed 3-momentum \cite{seidewitz06a}: \begin{equation*} \ket{t,\threep,\nr} = (2\Ep)^{-1}\int_{-\infty}^{t} \dt_{0}\, \ketlz{\tz,\threep,\nr} \,, \end{equation*} where \begin{equation*} \ketlz{t_{0},\threep,\nr} = \ketlz{\adv{t_{0},-\threep},n} \,. \end{equation*} Further, taking the limit $t \to +\infty$ gives the on-shell states \begin{equation*} \ket{\threep,\nr} \equiv \lim_{t \to +\infty} \ket{t,\threep,\nr} = (2\pi)^{1/2}(2\Ep)^{-1}\ketlz{+\Ep,-\threep} \,. \end{equation*} We can now reasonably construct Fock spaces with single-time multiparticle basis states \begin{equation*} \ketlz{t;\pnariN} \equiv \ketlz{t,\threep_{1},n_{1\pm};\ldots; t,\threep_{N},n_{N\pm}} \,, \end{equation*} over all combinations of particle and antiparticle types and, similarly, \begin{equation*} \ket{t;\pnariN} \equiv \ket{t,\threep_{1},n_{1\pm};\ldots; t,\threep_{N},n_{N\pm}} \,. \end{equation*} We can then take consistent time limits for particles and antiparticles alike to get the incoming and outgoing states \begin{equation*} \begin{split} \ketlz{\pnariN} &= \lim_{t \to -\infty}\ketlz{t;\pnariN} \,, \\ \ket{\pnariN} &= \lim_{t \to +\infty}\ket{t;\pnariN} \,. \end{split} \end{equation*} Reorganizing the interaction amplitude of \eqn{eqn:C4} in terms of these new asymptotic states gives the more usual form using the scattering operator $\opS$.
Showing explicitly the asymptotic time limit used for each particle: \begin{equation} \label{eqn:E1} \begin{split} \bra{+\infty, \adv{\threepp_{1}{}}, n'_{1}; &-\infty, \ret{\threepp_{2}{}}, n'_{2}} \opG \ketlz{-\infty, \adv{\threep_{1}{}}, n_{1}; +\infty, \ret{\threep_{2}{}}, n_{2}} \\ &= \bra{+\infty, \threepp_{1}, \adv{n'_{1}{}}; +\infty, \threep_{2}, \ret{n_{2}{}}} \opS \ketlz{-\infty, \threep_{1}, \adv{n_{1}{}}; -\infty, \threepp_{2}, \ret{n'_{2}{}}} \\ &= \bra{\threepp_{1}, \adv{n'_{1}{}}; \threep_{2}, \ret{n_{2}{}}} \opS \ketlz{\threep_{1}, \adv{n_{1}{}}; \threepp_{2}, \ret{n'_{2}{}}} \,. \end{split} \end{equation} More generally, consider applying $\opS$ to an incoming state of $N$ particles, giving $\opS\ket{\pnarlziN}$. Using the resolution of the identity \begin{multline} \label{eqn:E2} \sum_{N = 0}^{\infty}\, \sum_{\advret{n_{i}{}}} \int \dthree p_{1} \cdots \dthree p_{N}\, \left[ \prod_{i=1}^{N} 2\E{\threep_{i}} \right] \\ \times \ket{\pnarlziN}\bra{\pnariN} = 1 \,, \end{multline} expand the state $\opS\ket{\pnarlziN}$ as \begin{multline*} \opS\ket{\pnarlziN} \\ = \sum_{N' = 0}^{\infty}\, \sum_{\advret{n_{i}{}}} \int \dthree p'_{1} \cdots \dthree p'_{N'}\, \left[ \prod_{i=1}^{N'} 2\E{\threepp_{i}} \right] \ket{\pnparlziN} \\ \times \bra{\pnpariN}\opS\ket{\pnarlziN} \,. \end{multline*} This shows how $\opS\ket{\pnarlziN}$ is a superposition of possible out states, with the square of the scattering amplitude giving the probability of a particular out state for a particular in state. Note that each operator \begin{equation*} \ket{\pnarlziN}\bra{\pnariN} \end{equation*} represents not the proposition that the particles have the 3-momenta $\threep_{i}$ at any one point in time, but, rather, that they have these momenta \emph{for their entire history}. Since, by \eqn{eqn:E2}, these operators orthogonally resolve the identity, these histories do not interfere with each other and are thus trivially decoherent. 
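The role of unitarity here can be made concrete with a finite-dimensional sketch. The Hermitian matrix below is a randomly chosen toy stand-in for the self-adjoint interaction operator $\opV$ (the dimension and values are hypothetical, purely for illustration); exponentiating it gives a unitary toy $\opS$, and the squared amplitudes over a complete orthogonal set of out states then sum to one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6  # dimension of the toy "out state" basis (illustrative)

# Self-adjoint toy interaction V  ->  unitary S = exp(-iV)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
V = (A + A.conj().T) / 2                       # Hermitian by construction
w, U = np.linalg.eigh(V)                       # V = U diag(w) U^dagger
S = U @ np.diag(np.exp(-1j * w)) @ U.conj().T  # matrix exponential of -iV

# Unitarity: S^dagger S = 1
assert np.allclose(S.conj().T @ S, np.eye(n))

# For any normalized "in" state, the squared amplitudes over a complete
# orthogonal set of "out" states sum to one -> classical probabilities
psi_in = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi_in /= np.linalg.norm(psi_in)
probs = np.abs(S @ psi_in) ** 2
print(probs.sum())  # sums to one, up to floating-point rounding
```

The orthogonality of the out-state basis plays the role of the resolution of the identity in \eqn{eqn:E2}: because the histories do not interfere, the squared amplitudes partition a total probability of one.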
This is why the square of the scattering amplitude gives a classical probability. It should also be noted that both $\ket{\pnarlziN}$ and $\opS\ket{\pnarlziN}$ represent states of the entire ``universe'' under consideration. The state $\ket{\pnarlziN}$ represents a universe in which all particles remain free and there are no interactions. This free particle state does not evolve into $\opS\ket{\pnarlziN}$. Rather, $\opS\ket{\pnarlziN}$ is the state of a \emph{different} universe, in which interactions \emph{do} occur. The operator $\opS$ simply provides a convenient method for constructing the states of the interacting particle universe from the states of the free particle universe. \subsection{Probabilities} \label{sect:decoherence:probabilities} The decoherence of coarse-grained histories allows for a mathematically consistent assignment of probabilities. Physically, the concept of ``probability'' here is to be interpreted as meaning the likelihood that an arbitrary selection from the population of all possible coarse-grained histories will yield a specific history. In other words, the greater the probability assigned to a history, the more likely it is that it is actually the history of the ``universe'' under consideration. Of course, it is not immediately clear how the assignment of probabilities to entire histories relates to the statistics of physical results of measurement processes occurring within those histories. Before continuing, I would like to briefly consider this point. To simplify further discussion, let a single Greek letter, say $\alpha$, represent an entire configuration $\threep_{1}, \threep_{2}, \ldots$ of on-shell particle 3-momenta. In this notation, incoming states $\ket{\pnarlziN}$ are denoted as simply $\ketlz{\alpha}$ and outgoing states $\ket{\pnpariN}$ become $\ket{\alpha'}$.
The resolution of the identity from \eqn{eqn:E2} is then \begin{equation*} \int \dif\alpha\, \ketlz{\alpha}\bra{\alpha} = 1 \,, \end{equation*} where $\int \dif\alpha$ denotes the entire set of integrals and summations. Suppose the same scattering experiment is repeated, independently, $n$ times. Let $\ketlz{\psi_{i}}$ be the asymptotic free incoming state for the $i$-th repetition. Considered all together, the overall free particle state of this ``universe'' of experiments is \begin{equation*} \ketlz{\psi} = \ketlz{\psi_{1}}\cdots\ketlz{\psi_{n}} \,. \end{equation*} The state $\opS\ketlz{\psi}$ is then the superposition of all possible histories of interactions among the incoming particles. At a large enough time after all the experiments take place, the outgoing particles should be on-shell in a state $\bra{\alpha} = \bra{\alpha_{1},\ldots,\alpha_{n}}$, where each $\bra{\alpha_{i}}$ is the outgoing state for the $i$-th repetition, and the probability for this overall result is $\sqr{\bra{\alpha}\opS\ketlz{\psi}}$. If we can neglect interactions between each experiment repetition, then the scattering amplitude should approximately factor: \begin{equation*} \bra{\alpha}\opS\ketlz{\psi} \approx \bra{\alpha_{1}}\opS\ketlz{\psi_{1}} \cdots \bra{\alpha_{n}}\opS\ketlz{\psi_{n}} \,. \end{equation*} (If the repetitions are widely spacelike separated, then this follows from the cluster decomposition of $\opS$ \cite{weinberg95, horwitz81}.) Thus, the overall probability for scattering into $\alpha$ is approximately the product of the scattering probabilities for each cluster. Now, consider a measurement $m(\alpha_{i})$ taken of each experimental result. Suppose the measurement determines in which member of a disjoint partition of values $\alpha_{i}$ lies. 
The probability amplitude for a measurement of $\alpha_{i}$ to have the specific (discrete) value $m_{i}$ is \begin{equation*} \psi_{i}(m_{i}) \equiv \int_{m_{i}} \dif \alpha_{i}\, \bra{\alpha_{i}}\opS\ketlz{\psi_{i}} \,, \end{equation*} where the integration is over the subset of values corresponding to the measurement result $m_{i}$. Assuming identical preparation for the experiments, the $\psi_{i}$ should all be the same function $\psi(m)$. The overall weighted measurement state is then \begin{equation} \label{eqn:F1} \psi(m_{1}) \cdots \psi(m_{n}) \ket{m_{1}} \cdots \ket{m_{n}} \,, \end{equation} where $\ket{m_{i}}$ is the measuring instrument eigenstate for the measurement of the $i$-th experimental result. Once again, this overall state represents a specific coarse-grained history in which the specific measurement results $m_{1}, \ldots, m_{n}$ are obtained for the $n$ repetitions of the scattering experiment. The question to be asked is how the relative frequency of any given result in this set compares to the quantum mechanically predicted probabilities $\sqr{\psi(m_{i})}$ (see also \refcites{hartle68, graham73} for discussions of this question in the context of traditional and many-worlds interpretations of quantum mechanics). Define the \emph{relative frequency} of some specific measurement result $\ell$ within the set $m_{1}, \ldots, m_{n}$ to be \begin{equation} \label{eqn:F1a} f_{\ell}(m_{1}, \ldots, m_{n}) \equiv \frac{1}{n} \sum_{i = 1}^{n} \delta_{m_{i}\ell} \,. \end{equation} Since this relative frequency is itself an observable, a relative frequency operator $\op{F}_{\ell}$ can be defined which has relative frequencies as its eigenvalues: \begin{equation*} \op{F}_{\ell}\ket{m_{1}} \cdots \ket{m_{n}} = f_{\ell}(m_{1}, \ldots, m_{n}) \ket{m_{1}} \cdots \ket{m_{n}} \,. \end{equation*} Define the average \begin{equation*} \avg{\op{F}_{\ell}} \equiv \sum_{m_{1} \ldots m_{n}} f_{\ell}(m_{1}, \ldots, m_{n}) \sqr{\psi(m_{1})} \cdots \sqr{\psi(m_{n})} \,. 
\end{equation*} Substituting \eqn{eqn:F1a} and using the normalization $\sum\sqr{\psi(m_{i})} = 1$ then gives \cite{hartle68, graham73} \begin{equation} \label{eqn:F2} \avg{\op{F}_{\ell}} = \sqr{\psi(\ell)} \,. \end{equation} \Eqn{eqn:F2} is mathematically consistent with the probability interpretation of quantum mechanics. However, this mathematical average still needs to be connected to physical results. To do this, consider a further measurement, this time of the relative frequency $\op{F}_{\ell}$. Note that this is a measurement \emph{of the previous measurements} $m_{i}$, perhaps simply by counting the records of the results of those measurements. The new measurement results are thus the functions $f_{\ell}(m_{1}, \ldots, m_{n})$, with corresponding eigenstates $\ket{f_{\ell}(m_{1}, \ldots, m_{n})}$. The overall state \begin{equation} \label{eqn:F3} \psi(m_{1}) \cdots \psi(m_{n}) \ket{m_{1}} \cdots \ket{m_{n}} \ket{f_{\ell}(m_{1}, \ldots, m_{n})} \end{equation} then represents the history in which a specific relative frequency is measured for a specific set of scattering results. Since these history states are still decoherent due to the original set of measurement states, the total probability for observing a certain relative frequency $f_{\ell}$ is given by the sum of the probabilities for each of the states for which $nf_{\ell}$ of the $m_{i}$ have the value $\ell$. This probability is \begin{equation*} p(f_{\ell}) = \binom{n}{nf_{\ell}} p_{\ell}^{nf_{\ell}} (1-p_{\ell})^{n(1-f_{\ell})} \,, \end{equation*} where $p_{\ell} = \sqr{\psi(\ell)}$. The probability $p(f_{\ell})$ is just a binomial distribution. By the de Moivre--Laplace theorem, for large $n$, this distribution is sharply peaked about the mean $f_{\ell} = p_{\ell} = \avg{\op{F}_{\ell}}$.
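This sharp peaking is easy to check numerically. The sketch below (with a hypothetical single-outcome probability $p_{\ell} = 0.3$, chosen only for illustration) evaluates the binomial probability mass, in log space to avoid underflow at large $n$, that the observed relative frequency lies within $0.05$ of $p_{\ell}$:

```python
import math

def binom_pmf(n, k, p):
    # Log-space evaluation avoids underflow of p**k for large n
    log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
               + k * math.log(p) + (n - k) * math.log(1 - p))
    return math.exp(log_pmf)

p_ell = 0.3  # hypothetical value of |psi(ell)|^2
for n in (10, 100, 10000):
    # Total probability that the relative frequency k/n is within 0.05 of p_ell
    mass = sum(binom_pmf(n, k, p_ell)
               for k in range(n + 1)
               if abs(k / n - p_ell) <= 0.05)
    print(n, mass)
```

As $n$ grows, the probability mass within the window approaches one, matching the de Moivre--Laplace concentration of the distribution about its mean.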
Thus, the probability becomes almost certain that a choice of one of the histories \eqref{eqn:F3} will be a history in which the observed relative frequency will be near the prediction given by the usual Born probability interpretation. Of course, for finite $n$, there is still the possibility of a ``maverick'' universe in which $f_{\ell}$ is arbitrarily far from the expected value---but it would seem that (in most cases, at least) our universe is simply not one of these. There have been a number of criticisms in the literature of using relative frequency as above as the basis for the quantum probability interpretation (see, for example, \refcites{kent90, squires90}). However, these criticisms relate to attempts to actually justify the Born probability interpretation itself. My goal here is more modest: simply to show that, if the Born probability rule applies to history states, then the statistics of repeated measurement results within such a history would be expected to follow a similar rule. In this regard, criticisms concerning, e.g., circularity and the need for additional assumptions do not apply here. (For justification of the Born rule itself for quantum states, the arguments of Zurek based on ``environment-assisted invariance'' \cite{zurek03, zurek05} would seem to be relevant, but I will not pursue this further here.) \subsection{Cosmological States} \label{sect:decoherence:cosmo} Extending the ideas from \sect{sect:decoherence:probabilities}, let $\ketlz{\Psi}$ be the \emph{cosmological state} representing the free-particle evolution of the universe from the initial condition of the big bang. Then $\opS\ketlz{\Psi}$ is a superposition of all possible interacting particle histories of the universe. Obviously, this really should also include interactions leading to bound states, not just scattering.
For the purposes of the present discussion, however, it is sufficient to simply allow that some of the products of the scattering interactions may be composite particles rather than fundamental. A specific coarse-grained history in this superposition can be identified by a specific configuration $\alpha$ of all classically observable particles throughout the life of the universe. (For the present discussion, assume that this is a large but finite number of particles.) In this case, $\Psi(\alpha) = \bra{\alpha}\opS\ketlz{\Psi}$ might reasonably be called the ``wave function of the universe'', since $\sqr{\Psi(\alpha)}$ is the probability of the universe having the configuration $\alpha$ given its cosmological state $\opS\ketlz{\Psi}$. (Clearly, for this to be the true wave function of the universe, $\opS$ would need to include the effects of all the actual types of interactions, including gravity \cite{hartle83}.) Further, given that the universe can be decomposed into approximately isolated subsystems, the overall probability $\sqr{\Psi(\alpha)}$ will approximately factor into a product of probabilities for the histories of each of the subsystems. Now, consider that any classically measurable quantity should be a function of some subset of the classical configuration $\alpha$. Divide $\alpha$ into $\alpha_{1}, \alpha_{2}, \ldots$ (this division need not be complete or disjoint), and let $m_{i}(\alpha_{i})$ represent the result of a measurement made on the subset $\alpha_{i}$. We can then represent a measuring instrument for $m_{i}$ as having a set of orthogonal states $\ket{m_{i}(\alpha_{i})}$ representing the various possible measurement outcomes. Of course, a measuring instrument is, itself, a part of the universe being measured. And a complete theory of measurement would have to account for how such an instrument, as a subsystem of the universe, becomes correlated with some other part of the universe and itself decoheres into non-interfering states. 
However, it is not the intent of this paper to present such a complete theory. (For a discussion of related issues in a non-relativistic context, see \refcites{zurek98, zurek03} and the references given there.) For our purposes here, it is sufficient to consider a ``measurement process'' to be a process that produces a persistent record of distinguishable results correlated with the measured subsystem, based on classical variables. By definition, such a process can be abstracted into a representation by orthogonal result states. We can then extend the kind of analysis used in \sect{sect:decoherence:2slit} for the two slit experiment, and consider the complete measurement state of the universe to be \begin{equation} \label{eqn:G1} \Psi(\alpha_{1}, \alpha_{2}, \ldots) \ket{m_{1}(\alpha_{1})}\ket{m_{2}(\alpha_{2})} \ldots \,, \end{equation} in which the measurement results are correlated with the corresponding configuration of the universe with probability amplitude given by the wave function $\Psi(\alpha_{1}, \alpha_{2}, \ldots)$. Further, suppose some of the measurements are of relative frequencies of results of repeated experiments. Then, by extension of the argument in \sect{sect:decoherence:probabilities}, for a large enough number of repetitions within a ``typical'' history, the observed relative frequency will accurately reflect the probabilities as predicted by quantum theory. It is worth emphasizing again that the universe does not ``evolve into'' the state \eqref{eqn:G1}. Rather, this state represents a \emph{complete} coarse-grained history of the universe, in which the measurement values $m_{1}(\alpha_{1}), m_{2}(\alpha_{2}), \ldots$ are observed, implying the corresponding classical configuration $\alpha_{1}, \alpha_{2}, \ldots$ for the universe. 
The correlation of the measurements with the configuration of the universe means that the measurement results effectively provide information on which coarse-grained history the universe is ``really in.'' It is in exactly this sense that the universe can be represented as the eigenstate \eqref{eqn:G1} of the measurements made within it. \section{Conclusion} \label{sect:conclusion} I would like to conclude with some remarks on the interpretational implications of the concept of cosmological states defined in \sect{sect:decoherence:cosmo}. Each cosmological state $\ket{\alpha_{1}, \alpha_{2}, \ldots}$, with corresponding measurement state \eqref{eqn:G1}, represents a possible, complete, coarse-grained history of the universe. Of course, each such coarse-grained history is still a quantum superposition of many fine-grained histories. However, if we include in the $m_{i}$ all the measurements made in the entire history of the universe, then the corresponding measurement states are the finest-grained possible that can be determined by inhabitants of the universe. The measurement states themselves are decoherent and orthogonal, but the distribution of measurement results in any specific coarse-grained history will still show the effects of interference of the superposed fine-grained histories (as we saw in the simple case of the two slit experiment in \sect{sect:decoherence:2slit}). This reflects the fact that such interference effects really are observed in our universe. Now, all measurements made so far determine only some very small portion of a configuration $\alpha$ of the universe. Nevertheless, in principle, it is consistent to consider all such measurements to be, indeed, made on a portion of some overall $\alpha$, selecting a specific classical history from the family given by $\opS\ketlz{\Psi}$, and that this is the ``real'' history of the universe.
The formalism here allows for no further judgement on the ``real'' history of the universe beyond the coarse-grained superpositions determined by the measurement results. This conception is very much in the spirit of the original work by Everett \cite{everett57} on what has become known as the ``many worlds'' interpretation. The key point is that there is no need to consider any sort of observation by observation ``collapse of the wave function.'' Rather, consistent measurement results are determined by appropriately decohering histories \cite{hartle95, halliwell03, halliwell04}, and known measurement results constrain the possible histories. However, Everett and his successors \cite{dewitt73} generally considered the dynamic evolution of states in time. In this formulation, a measurement process at a certain time causes a state to ``branch'' into orthogonal components, one for each possible measurement result. This leads almost inevitably to the conception of the continual dynamic creation of ``many worlds,'' only one of which is ever really apparent to any observer. In contrast, in the approach presented here, entire coarse-grained histories of the universe decohere for all time. It is only necessary to consider one of these to be the ``real'' history of the actual universe, though we have only very partial information on which history this actually is. There is no need to consider the other histories to have any ``real'' existence at all. Nevertheless, within the ``real'' history of our universe, all observations made at the classical level will be distributed according to the probabilistic rules of quantum theory. Instead of a ``no collapse'' interpretation, this is, in a sense, a ``one collapse'' interpretation---the single collapse of the wave function of the universe into the cosmological state of the entire coarse-grained history of the universe. 
It is as if God did indeed play dice with the universe, but that He threw very many dice just once, determining the fate of the universe for all space and time. \endinput
\section{Introduction} \label{intro} With the Atacama Large Millimeter/submillimeter Array (ALMA) now in full operation, our understanding of dust-enshrouded star formation at high redshifts is advancing more rapidly than ever before. The most intense star formation in the universe takes place in dusty, star-forming galaxies (DSFGs), at high redshifts ($z>1$), creating new stars at rates of $>100-1000$\,\ensuremath{\rm{M}_\odot}\xspace\,yr$^{-1}$ (see a recent review by \citealt{casey14}). The otherwise high UV luminosity from massive young stars in these galaxies is almost entirely reprocessed by interstellar dust, which absorbs the short-wavelength radiation and re-radiates it at far-infrared (FIR) and (sub)millimeter wavelengths. Although DSFGs represent a significant contribution to the comoving star formation rate density out to at least $z=4$ \citep[e.g.,][]{chapman05,casey13}, producing a realistic population of DSFGs has long been a challenge for theoretical models of galaxy evolution \citep[e.g.,][]{baugh05,dave10,hayward13,narayanan15}. Observations of these galaxies benefit from a strongly negative ``K-correction'' at submillimeter wavelengths \citep[e.g.,][]{blain93}, in which the dimming due to increased cosmological distance is countered by the rapidly rising dust spectral energy distribution (SED) at fixed observing wavelength. DSFGs were initially discovered in low-resolution ($>10$\ensuremath{''}\xspace) 850\,\ensuremath{\mu\rm{m}}\xspace deep images \citep{smail97,barger98,hughes98}, and high-resolution follow-up studies at submillimeter wavelengths remain challenging, as are observations at other wavelengths that do not benefit from the negative K-correction. One fairly straightforward method of gaining resolution is to target a sample of gravitationally lensed galaxies, such as those discovered by the South Pole Telescope \citep{carlstrom11,vieira10,mocanu13} or the \textit{Herschel} Space Observatory \citep{negrello10,wardlow13}. 
Follow-up observations of these galaxies at FIR/submillimeter wavelengths, where they are brightest, with interferometers such as ALMA and the Submillimeter Array have shown that the bulk of the brightest objects are consistent with strong gravitational lensing \citep[e.g.,][]{hezaveh13,vieira13,bussmann13}. Lensed samples offer the opportunity to study DSFGs at higher resolution and using fainter observational diagnostics than otherwise possible \citep[e.g.,][]{swinbank10,fu12,bothwell13b,spilker14}. Taking advantage of gravitational lensing requires careful modeling to understand its effects. In this paper, we present lens models of a sample of 47 DSFGs discovered in South Pole Telescope data and observed by ALMA at $\sim0.5$'' resolution. \citet{hezaveh13} presented models of four sources which were spatially resolved at the $\sim1.5$\ensuremath{''}\xspace resolution of the first data acquired for this project; here we expand this work to include the completed dataset, including all sources and array configurations. As in \citet{hezaveh13}, our models are performed in the Fourier plane native to the interferometer, and marginalized over several common calibration uncertainties. The resulting intrinsic source properties span a large range in luminosity, and we use these derived properties to explore the intrinsic size distribution of DSFGs, their dust SEDs, and the relation between the [C{\scriptsize II}]\xspace fine structure line and the FIR luminosity. In Section~\ref{obs}, we describe the selection criteria and ALMA observations. Section~\ref{lensmodels} describes our gravitational lens modeling technique, with the results of these models detailed in Section~\ref{results}. In Section~\ref{discussion} we use these models to address selected topics of interest, including the intrinsic size distribution of DSFGs and the relationship between the [C{\scriptsize II}]\xspace fine structure line and the FIR luminosity. We conclude in Section~\ref{conclusions}. 
Throughout this work, we assume a flat WMAP9 $\Lambda$CDM cosmology, $h=0.693$, $\Omega_m = 0.286$, and $\Omega_\Lambda = 0.713$ \citep{hinshaw13}. We define the far-infrared luminosity \ensuremath{L_{\rm{FIR}}}\xspace to be integrated over rest-frame $42.5-122.5$\,\ensuremath{\mu\rm{m}}\xspace \citep{helou88}. \section{Sample Selection and Observations} \label{obs} The selection criteria used to generate the SPT DSFG sample are described in detail by \citet{weiss13}. Briefly, sources were selected to have dust-like spectral indices between 1.4 and 2\,mm (i.e., \ensuremath{S_\mathrm{1.4mm}}\xspace/\ensuremath{S_\mathrm{2mm}}\xspace$>1.7$; \citealt{mocanu13}). Further selection criteria remove synchrotron-dominated and low-redshift ($z<0.1$) contaminant sources. Redshifts for some of the SPT DSFGs are presented in \citet{strandet16}. Optical and near-infrared spectroscopic redshifts of the foreground lenses, where available, will be presented in Rotermund et~al.\xspace, in prep. Finally, we make use of optical and infrared imaging data obtained from a variety of facilities, including the \textit{Hubble} Space Telescope, Very Large Telescope, Magellan-Baade telescope, and \textit{Spitzer}/IRAC. To refine the coarse SPT positions, each source was observed at higher spatial resolution to improve the positional accuracy, typically at 870\,\ensuremath{\mu\rm{m}}\xspace using the Large Apex BOlometer CAmera (LABOCA) or at 1.3\,mm using the Submillimeter Array (SMA). From this catalog, we selected 47 bright sources which could be placed into four groups of targets that lie within 15$^\circ$ of each other on the sky in order to share calibrator sources. The targets are listed in Table~\ref{tab:targets}. In Figure~\ref{fig:selection} we compare the objects in the subsample observed by ALMA with all SPT sources and with the \textit{Herschel}-selected objects observed by \citet{bussmann13,bussmann15}. 
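The dust-like selection quoted above (\ensuremath{S_\mathrm{1.4mm}}\xspace/\ensuremath{S_\mathrm{2mm}}\xspace$>1.7$) corresponds to a cut on the millimeter spectral index. A quick sketch of the conversion, assuming $S_\nu \propto \nu^{\alpha}$ and band centers at exactly 1.4 and 2.0\,mm (an illustrative simplification, not the survey's exact bandpasses):

```python
import math

def spectral_index(s_ratio, lam1_mm=1.4, lam2_mm=2.0):
    """Spectral index alpha for S_nu ~ nu**alpha, given S(lam1)/S(lam2).

    Frequency scales as 1/wavelength, so nu1/nu2 = lam2/lam1.
    """
    return math.log(s_ratio) / math.log(lam2_mm / lam1_mm)

# The cut S_1.4mm/S_2mm > 1.7 corresponds to alpha > ~1.49, which
# excludes flat- or falling-spectrum (synchrotron-dominated) sources
# while passing the steeply rising thermal dust spectrum.
alpha_cut = spectral_index(1.7)
print(round(alpha_cut, 2))  # -> 1.49
```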
These 47 SPT sources were observed by ALMA at 870\,\ensuremath{\mu\rm{m}}\xspace as part of Cycle 0 program 2011.0.00958.S (PI D. Marrone). The ALMA observations were carried out in eight sessions from November 2011 to August 2012 and are summarized in Table~\ref{tab:obstable}. Given the limited number of antennas available at the beginning of Cycle 0 (minimum 14), each group of sources was observed with two different array configurations, corresponding to approximately 0.5 and 1.5\ensuremath{''}\xspace resolution, to provide better sampling of the $uv$ plane. Over the series of observations the number of antennas increased (up to 25), providing greater sensitivity in later observations. Additional sources with precisely known positions from the International Celestial Reference Frame (ICRF; \citealt{ma98}) were observed to verify the astrometric and antenna baseline solutions. Each source was observed for 60--90\,s per array configuration. The total observing time for all calibrators and science targets was 9.4 hours. Four basebands, each processing 2\,GHz of telescope bandwidth, were centered near 336.8, 338.8, 348.8, and 350.8\,GHz. The correlator was configured to provide 128 channels of 15.6\,MHz width for each baseband. Bandpass calibration was performed by observing a bright quasar at the beginning of each track. Time-dependent amplitude and phase variations were calibrated using several quasars near (typically within $<5^\circ$ of) the science targets. The flux scale was determined at the beginning of each track using an available solar system object or quasar with a recently determined flux density, as detailed in Table~\ref{tab:obstable}. This flux scale is estimated to be correct to within 10\%, although we allow an amplitude re-scaling between the two observations of each group of sources in our modeling (see Section~\ref{lensmodels}). 
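Per-visibility uncertainties in data like these are commonly estimated by differencing successive visibility samples, which cancels the slowly varying sky signal and leaves pure noise; the scatter of the differences is then $\sqrt{2}$ times the per-sample noise. A toy numerical sketch of the idea (synthetic data, not the survey pipeline):

```python
import random
import statistics

random.seed(1)

def noise_from_differences(vis):
    """Estimate per-sample noise from successive differences.

    Differencing adjacent samples removes a slowly varying signal;
    the differences have variance 2*sigma**2, hence the sqrt(2).
    """
    diffs = [b - a for a, b in zip(vis, vis[1:])]
    return statistics.stdev(diffs) / 2 ** 0.5

# Synthetic real-part visibilities: constant 10 mJy "signal"
# plus Gaussian noise of 1.5 mJy per sample.
true_sigma = 1.5
vis = [10.0 + random.gauss(0.0, true_sigma) for _ in range(20000)]
print(noise_from_differences(vis))  # close to 1.5
```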
We estimate the noise on each visibility measurement by calculating the scatter after differencing successive visibilities on the same baseline, baseband, and polarization. After calibration, the data from each track were combined and imaged using Briggs weighting (robust parameter $= -0.5$). This weighting represents a compromise which somewhat favors higher resolution at the expense of sensitivity. In four objects (SPT0125-47, SPT0125-50, SPT2103-60, SPT2354-58), we serendipitously detected a spectral feature in the ALMA data. As we consider only models of the continuum emission in this work, for these sources, we exclude the spectral window containing the spectral line. Another four objects (SPT0550-53, SPT0551-50, SPT2351-57, SPT2353-50) appear to be lensed by galaxy groups or clusters. \textit{HST} imaging shows numerous galaxies in the vicinity of the 870\,\ensuremath{\mu\rm{m}}\xspace emission. Images of these sources are shown in Appendix~\ref{app:clusters}. The ALMA measurements show only single images, and the ALMA field of view does not encompass the expected locations of counterimages. For these sources, the lensing geometry cannot be constrained by the ALMA data alone. Beyond counting them among the sources identified as lensed, we ignore them for the remainder of this paper. Images of the sources we model in this paper, overlaid on the best-available near-IR or optical imaging, are shown in Fig.~\ref{fig:images}. \begin{figure}[htb]% \includegraphics[width=\columnwidth]{selection.pdf}% \caption{ Comparison of the subsample of SPT sources observed by ALMA to all SPT sources and the \textit{Herschel}-selected samples of \citet{bussmann13,bussmann15}. Note that \ensuremath{S_\mathrm{870\um}}\xspace shown in this figure is derived from single-dish LABOCA measurements for the SPT sources. 
Single-dish photometry is not available for the \textit{Herschel} sources, so these points are derived from interferometric (SMA or ALMA) observations only and may underestimate the true total flux density; see Section~\ref{fluxcomp}. \textit{Top:} The subsample of SPT sources observed by ALMA was selected to have high \ensuremath{S_\mathrm{1.4mm}}\xspace, and spans most of the range of \ensuremath{S_\mathrm{870\um}}\xspace seen in the full sample. \textit{Bottom:} Flux density -- FIR color diagram for SPT- and \textit{Herschel}- selected DSFGs \citep{bussmann13,bussmann15}. The SPT sources are redder on average, and at higher redshift \citep[e.g.,][]{weiss13,bethermin15b}, largely due to their longer selection wavelength. }% \label{fig:selection}% \end{figure} \input{Targetlist_table.tex} \input{Observation_table.tex} \section{Visibility-Based Lens Modeling} \label{lensmodels} When modeling the effects of gravitational lensing, many methods perform the fitting procedure directly on observed images of the lensed emission. However, ALMA does not directly image the sky emission; rather, it measures the Fourier components of the sky emission at a range of two-dimensional spatial frequencies. Inverting these visibilities leads to correlated noise in the resulting images which can introduce bias into later measurements. Instead, a better option is to model the visibilities directly, where the noise and measurement are well understood. Modeling in the $uv$ plane also allows us to model and account for residual calibration errors, including improper antenna delay calibrations and mismatched absolute flux scales from observations taken on different days. Our lens modeling procedure is based on the work of \citet{hezaveh13} (see also \citealt{bussmann12,bussmann13} for a similar technique). The lens mass profile is represented by one or more Singular Isothermal Ellipsoids (SIEs), with lensing deflections derived by \citet{kormann94}. 
The SIE is parameterized by its two-dimensional location relative to the phase center ($x_L$, $y_L$), the lens strength in the form of the angular Einstein radius $\theta_{E,L}$, ellipticity $e_L$, and position angle of the major axis $\phi_L$ in degrees east of north. In some cases, the data also favor the existence of an external tidal shear ($\gamma$), with deflections calculated as in \citet{keeton00} (we have redefined the shear position angle, $\phi_\gamma$, to match the convention used here for $\phi_L$). Background source emission and any unlensed sources are represented as one or more unresolved point sources (with position $x_S$ and $y_S$, and flux density \ensuremath{S_\mathrm{870\um}}\xspace as free parameters) or S\'{e}rsic profiles (\citealt{sersic68}; with position $x_S$ and $y_S$, flux density \ensuremath{S_\mathrm{870\um}}\xspace, S\'{e}rsic index $n_S$, half-light radius \ensuremath{r_\mathrm{eff}}\xspace, axis ratio $b_S$/$a_S$, and position angle $\phi_S$ as free parameters). Note that a S\'{e}rsic index $n=4$ corresponds to a \citet{devaucouleurs53} profile, $n=1$ an exponential disk, and $n=0.5$ a Gaussian light profile (in \citealt{hezaveh13}, all sources were modeled as circularly symmetric Gaussian profiles). For lensed sources, we define the location of the source to be relative to the primary lens in the model, while for unlensed sources it is defined relative to the ALMA phase center. Within the framework we have developed, any of these lens and source parameters may be held fixed during fitting, and loose flat priors may be used. We use available optical/NIR imaging to guide the models (e.g., a single lens vs. multiple lenses), but the positions of galaxies identified in these images are not otherwise used, except for singly-imaged sources for which the ALMA data alone are not sufficiently constraining. 
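The \citet{kormann94} SIE deflections have a closed form. The sketch below uses one common normalization convention (intermediate-axis Einstein radius, coordinates aligned with the lens major axis); conventions differ between lensing codes, so treat this as illustrative rather than as the exact parameterization used in this work:

```python
import math

def sie_deflection(x, y, theta_e, q):
    """Deflection (alpha_x, alpha_y) for a singular isothermal ellipsoid.

    x, y    : image-plane coordinates aligned with the lens major axis
    theta_e : Einstein radius (intermediate-axis convention, this sketch)
    q       : minor/major axis ratio, 0 < q <= 1
    """
    if q >= 1.0:  # circular limit: singular isothermal sphere (SIS)
        r = math.hypot(x, y)
        return theta_e * x / r, theta_e * y / r
    e = math.sqrt(1.0 - q * q)
    psi = math.sqrt(q * q * x * x + y * y)
    pref = theta_e * math.sqrt(q) / e
    # Note |e*y/psi| < 1 always holds, so atanh is well defined.
    return (pref * math.atan(e * x / psi),
            pref * math.atanh(e * y / psi))
```

As $q \to 1$ the deflection reduces to the SIS value $\theta_E\,(x, y)/r$, a useful sanity check on any implementation.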
To reproduce the information present in our high signal-to-noise ratio measurements, and to represent realistic calibration uncertainties, our modeling must be more flexible than that used in previous work \citep[e.g.,][]{bussmann13,hezaveh13,bussmann15}. For example, because we are jointly modeling multiple datasets taken several months apart (see Table~\ref{tab:obstable}), small differences in absolute calibration or atmospheric conditions between epochs could be translated into false shifts in parameters. To address this possibility, we allow for a multiplicative amplitude re-scaling factor and an astrometric offset between the two tracks. We also calibrate uncorrected antenna-based phase errors using the procedure described in \citet{hezaveh13}. These phase errors may be attributed to uncompensated atmospheric delays or imprecisely known antenna positions. These phase errors are generally small except in the two Nov. 2011 tracks, which were observed prior to antenna baseline solutions being incorporated into the reduction pipeline. The phase errors and astrometric shifts derived from this procedure are consistent with those found for the ICRF sources that we added to our observations to test the calibration and astrometry of the data. We employ a Markov Chain Monte Carlo (MCMC) fitting procedure, using the \texttt{emcee} \citep{foremanmackey13} code to sample the posterior probability function. At each point in parameter space, we generate a model image from a given set of lens and source parameters, including the flux scaling and astrometric offsets mentioned above. We then invert this image to the Fourier plane and measure the modeled visibilities at the $uv$ coordinates of each dataset. The quality of fit is calculated using the $\chi^2$ metric. When comparing models of the same source with different numbers of free parameters, we use the Deviance Information Criterion (DIC; \citealt{spiegelhalter02}) for model selection. 
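For concreteness, the DIC can be evaluated directly from an MCMC chain: with deviance $D(\theta) = -2\ln\mathcal{L}(\theta)$, $\mathrm{DIC} = \bar{D} + p_D$, where $p_D = \bar{D} - D(\bar{\theta})$ is the effective number of parameters. A generic one-parameter sketch (not the authors' implementation):

```python
import math
import random

random.seed(0)

def dic(log_likelihood, samples):
    """Deviance Information Criterion from posterior samples.

    DIC = D_bar + p_D, with D(theta) = -2*logL(theta) and
    p_D = D_bar - D(theta_bar) (Spiegelhalter et al. 2002).
    Lower DIC indicates the preferred model; `samples` here is a
    list of scalar parameter draws for simplicity.
    """
    devs = [-2.0 * log_likelihood(s) for s in samples]
    d_bar = sum(devs) / len(devs)
    theta_bar = sum(samples) / len(samples)
    p_d = d_bar - (-2.0 * log_likelihood(theta_bar))
    return d_bar + p_d

# Toy example: unit Gaussian log-likelihood peaked at 0, with posterior
# draws of matching width; p_D comes out near 1 (one parameter).
logL = lambda th: -0.5 * th * th
samples = [random.gauss(0.0, 1.0) for _ in range(50000)]
print(dic(logL, samples))  # close to 2
```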
The DIC determines, for example, whether including an additional source-plane component is justified. The code used to generate all the models in this work, along with example usage scripts, is available at \url{https://github.com/jspilker/visilens}. \section{Results} \label{results} Images of each system along with the best-fit image- and source-plane models are shown in Fig.~\ref{fig:images}. These models are briefly described in Appendix~\ref{app:notes}. Summaries of the properties of the lenses and sources are provided in Tables~\ref{tab:lenses} and \ref{tab:sources}, respectively. \begin{figure*}[!tbp]% \begin{centering} \includegraphics[width=0.495\textwidth]{SPT0020-51_panels.png} \includegraphics[width=0.495\textwidth]{SPT0027-50_panels.png} \includegraphics[width=0.495\textwidth]{SPT0103-45_panels.png} \includegraphics[width=0.495\textwidth]{SPT0109-47_panels.png} \includegraphics[width=0.495\textwidth]{SPT0113-46_panels.png} \includegraphics[width=0.495\textwidth]{SPT0125-47_panels.png} \includegraphics[width=0.495\textwidth]{SPT0125-50_panels.png} \includegraphics[width=0.495\textwidth]{SPT0128-51_panels.png} \includegraphics[width=0.495\textwidth]{SPT0202-61_panels.png} \includegraphics[width=0.495\textwidth]{SPT0243-49_panels.png} \includegraphics[width=0.495\textwidth]{SPT0245-63_panels.png} \includegraphics[width=0.495\textwidth]{SPT0300-46_panels.png} \end{centering} \caption{ Images and lens models for all sources modeled in this work. \textit{Left:} ALMA 870\,\ensuremath{\mu\rm{m}}\xspace emission (blue contours) overlaid on the best-available optical/NIR image (greyscale) for each source. Contours are drawn at 10, 30, ... percent of the peak value. The synthesized beam is indicated in the lower left corner. For some objects, we also show images of the 870\,\ensuremath{\mu\rm{m}}\xspace emission which highlight the resolved structure present in the data (green contours; see Appendix~\ref{app:notes} for details). 
Greyscale images are logarithmically scaled to emphasize the objects detected. Fitted lens positions are shown with navy diamonds; sources with multiple lenses are labeled as in Table~\ref{tab:lenses}. In panels with a large field-of-view, the ALMA primary beam half-power radius is indicated with a dotted line; for the other objects, the primary beam correction at the center of the image is given in the middle panel as the scale factor before the noise level in mJy. \textit{Middle:} Model dirty images (greyscale), with residual contours (blue) in steps of $\pm$2, 4, ...$\sigma$. \textit{Right:} Fully resolved best-fit model images (blue), with caustics shown in green. The inset of each panel shows a zoomed-in view of the source-plane emission, where the size of the inset is given in the lower-center of each panel. Multiple sources are labeled as in Table~\ref{tab:sources}. \label{fig:images}} \addtocounter{figure}{-1} \end{figure*} \begin{figure*}[!tbp]% \begin{centering} \includegraphics[width=0.495\textwidth]{SPT0319-47_panels.png} \includegraphics[width=0.495\textwidth]{SPT0345-47_panels.png} \includegraphics[width=0.495\textwidth]{SPT0346-52_panels.png} \includegraphics[width=0.495\textwidth]{SPT0348-62_panels.png} \includegraphics[width=0.495\textwidth]{SPT0403-58_panels.png} \includegraphics[width=0.495\textwidth]{SPT0404-59_panels.png} \includegraphics[width=0.495\textwidth]{SPT0418-47_panels.png} \includegraphics[width=0.495\textwidth]{SPT0441-46_panels.png} \includegraphics[width=0.495\textwidth]{SPT0452-50_panels.png} \includegraphics[width=0.495\textwidth]{SPT0459-58_panels.png} \includegraphics[width=0.495\textwidth]{SPT0459-59_panels.png} \includegraphics[width=0.495\textwidth]{SPT0529-54_panels.png} \end{centering} \caption{ Continued. 
\label{fig:images1}} \addtocounter{figure}{-1} \end{figure*} \begin{figure*}[!tbp]% \begin{centering} \includegraphics[width=0.495\textwidth]{SPT0532-50_panels.png} \includegraphics[width=0.495\textwidth]{SPT0538-50_panels.png} \includegraphics[width=0.495\textwidth]{SPT2031-51_panels.png} \includegraphics[width=0.495\textwidth]{SPT2048-55_panels.png} \includegraphics[width=0.495\textwidth]{SPT2052-56_panels.png} \includegraphics[width=0.495\textwidth]{SPT2103-60_panels.png} \includegraphics[width=0.495\textwidth]{SPT2132-58_panels.png} \includegraphics[width=0.495\textwidth]{SPT2134-50_panels.png} \includegraphics[width=0.495\textwidth]{SPT2146-55_panels.png} \includegraphics[width=0.495\textwidth]{SPT2146-56_panels.png} \includegraphics[width=0.495\textwidth]{SPT2147-50_panels.png} \includegraphics[width=0.495\textwidth]{SPT2311-54_panels.png} \end{centering} \caption{ Continued. \label{fig:images2}} \addtocounter{figure}{-1} \end{figure*} \begin{figure*}[htb]% \begin{centering} \includegraphics[width=0.495\textwidth]{SPT2319-55_panels.png} \includegraphics[width=0.495\textwidth]{SPT2340-59_panels.png} \includegraphics[width=0.495\textwidth]{SPT2349-50_panels.png} \includegraphics[width=0.495\textwidth]{SPT2349-56_panels.png} \includegraphics[width=0.495\textwidth]{SPT2354-58_panels.png} \includegraphics[width=0.495\textwidth]{SPT2357-51_panels.png} \end{centering} \caption{ Continued. \label{fig:images3}} \end{figure*} \input{Lens_table.tex} \input{Source_table.tex} \subsection{Basic Lens Model Properties} \label{lensstats} As expected, a large fraction of the 47 fields observed by ALMA are consistent with strongly lensed systems -- for 38 sources (81\%), strong gravitational lensing is the most plausible explanation for the ALMA emission. Of these, 4 sources (11\% of the strongly lensed sources) appear to be lensed by large groups or clusters of galaxies. An additional 8 sources (17\%) appear to be unlensed or weakly lensed. 
Of these sources, two are co-located ($<0.5$\ensuremath{''}\xspace) with objects also detected in the optical or near-infrared but do not appear to be lensed, two more are within 3\ensuremath{''}\xspace of optical/NIR counterparts and are likely either weakly lensed background sources or unlensed sources with undetected optical counterparts, while the remaining four sources do not appear to be closely associated with any objects detected in the best-available optical/NIR imaging. The final source, SPT2300-51, was undetected by ALMA at $>5\sigma$ significance within the ALMA primary beam half-power radius and was determined to be a spurious detection in the LABOCA follow-up of SPT sources; this source is shown in Appendix~\ref{app:notes}. Figure~\ref{fig:lensstats} summarizes some key parameters of the lens models. The left panel shows the distribution of Einstein radii for the strongly lensed sources, where we have added the Einstein radii of systems with multiple lenses in quadrature. We find a median Einstein radius of 0.64\ensuremath{''}\xspace, with the distribution rising until approximately the half-resolution radius of our data. This may indicate that higher-resolution observations will reveal some of the sources unresolved in the current data to be gravitationally lensed as well, because the multiple images of strongly lensed sources are generally separated by $\sim$2 Einstein radii. A similar median Einstein radius of $\sim0.6$\ensuremath{''}\xspace was found for the \textit{Herschel}-selected sample of \citet{bussmann13}. This similarity suggests that the two surveys probe a similar population of lens galaxies, in spite of the difference in background source properties (e.g., Fig.~\ref{fig:selection}). We defer a more thorough discussion of the lens galaxies to a future work.
The center panel of Fig.~\ref{fig:lensstats} shows the distribution of \ensuremath{\mu_\mathrm{870\um}}\xspace for the SPT sources, with a median magnification of 5.5 for all sources, or 6.3 for the strongly lensed subset alone. This is somewhat higher than the median magnification of 4.6 found by \citet{bussmann13} in a study of \textit{Herschel}-selected objects. The magnification distribution for the SPT sources also appears to contain a tail to higher magnifications compared to the \textit{Herschel} sample; for approximately 30\% of the strongly lensed sources, the best-fit magnifications are $\ensuremath{\mu_\mathrm{870\um}}\xspace>10$. The fraction of strongly lensed sources is expected to vary with the flux density threshold used to create the source catalogs. Lower flux density limits will include a higher proportion of unlensed sources. Equivalently, the median magnification of an observed sample of objects is a function of the flux density threshold. The right panel of Fig.~\ref{fig:lensstats} illustrates this effect: on average, apparently brighter sources are magnified more highly. This effect is also apparent in the brighter \textit{Herschel} sample studied by \citet{bussmann13}, in which at least 21 of 30 sources are strongly lensed, compared to a fainter sample described in \citet{bussmann15}, in which only 6 of 29 sources are strongly lensed. This difference is likely due to the shape of the submillimeter number counts, which drop steeply for sources with intrinsic $\ensuremath{S_\mathrm{870\um}}\xspace\gtrsim8.5$\,mJy \citep{karim13,simpson15b}. \begin{figure*}[thb]% \centering \includegraphics[width=\textwidth]{lensmodel_stats.pdf}% \caption{ \textit{Left:} Distribution of Einstein radii for the strongly lensed SPT sources. For objects with multiple lenses, the Einstein radii of the individual lens galaxies have been added in quadrature. \textit{Middle:} Distribution of \ensuremath{\mu_\mathrm{870\um}}\xspace for all modeled sources. 
For sources with multiple components, the flux density-weighted mean magnification is shown. \textit{Right:} Source magnification as a function of apparent LABOCA flux density. } \label{fig:lensstats} \end{figure*} \subsection{Flux Recovery} \label{fluxcomp} Every source targeted was detected, with the exception of SPT2300-51 (this source was determined to be spurious after it had been included in the ALMA sample). Each source had previously been observed at the same frequency using LABOCA on APEX, a single-dish bolometer camera with the same primary beam size as the ALMA data. By comparing the 870\,\ensuremath{\mu\rm{m}}\xspace flux density measured by LABOCA to that recovered in the ALMA data, we can test whether significant flux has been resolved out by ALMA due to limited coverage of the $uv$-plane. This could occur if the sources have structure extended on scales greater than the largest scale recoverable by the data, or if additional sources are present in the maps which are too faint to have been detected individually or are outside the primary beam. Almost all of the sources in our sample are significantly resolved in the ALMA data. To estimate the total flux density present in the ALMA maps, we first image the data using a taper in the $uv$-plane at 50\,k$\lambda$, corresponding to a resolution of $\gtrsim$4\ensuremath{''}\xspace. This ensures that we measure a value as close as possible to the true single-dish ``zero-spacing'' flux density. We then CLEAN the images to a 3$\sigma$ threshold and correct for the response of the primary beam. The total ALMA flux density is then defined as the sum of the CLEAN components, avoiding the need to define an aperture over which to measure it. If our sources were unresolved on scales $<50$\,k$\lambda$, this would be equivalent to reporting the maximum pixel value in the images; in practice, many of our sources still show some structure on these scales.
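The bookkeeping in the last step of this procedure reduces to a sum. The values below are hypothetical CLEAN-component fluxes for a single source, not measurements of any actual object:

```python
# Hypothetical CLEAN-component flux densities (mJy) from the tapered,
# primary-beam-corrected image of one source, plus its LABOCA measurement.
clean_components_mjy = [18.2, 6.5, 3.1, 1.4, 0.8]
laboca_flux_mjy = 33.0

# Total ALMA flux density = sum of CLEAN components (no aperture needed).
total_alma_mjy = sum(clean_components_mjy)
recovered_fraction = total_alma_mjy / laboca_flux_mjy
print(f"{recovered_fraction:.0%}")  # fraction of single-dish flux recovered
```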
In the left panel of Fig.~\ref{fig:fluxcomp}, we compare the total flux densities of the ALMA sources determined in this way to the LABOCA measurements \citep{weiss13}. Note that we have made no effort to correct for the different bandwidths of the two instruments (8 vs. $\sim$60\,GHz). We recover a median of ($91\pm24$)\% of the LABOCA flux density, consistent within the mutual absolute flux scaling uncertainties ($\sim$10\% for both instruments). Meanwhile, the middle panel of Fig.~\ref{fig:fluxcomp} shows no clear trend in the fraction of flux recovered as a function of LABOCA flux density. These plots suggest that, in general, the ALMA data do not resolve out significant extended emission or hide a large population of sources too faint to detect individually. \citet{hodge13} reached a similar conclusion using ALMA to image a large sample of unlensed 870\,\ensuremath{\mu\rm{m}}\xspace--selected sources discovered by LABOCA in the Extended \textit{Chandra} Deep Field-South. The sample of SPT DSFGs observed in this work shows a better degree of consistency between the ALMA and LABOCA flux densities, which may be due to the fact that the SPT-selected sources are apparently brighter. Indeed, the brightest sources studied by Hodge et~al.\xspace correspond to the faintest sources in the present sample. We also test the extent to which the total ALMA flux densities agree with the total flux densities inferred from the lens models. In this case, we define the total model flux density as the sum over all components of $\ensuremath{S_\mathrm{870\um}}\xspace \times \ensuremath{\mu_\mathrm{870\um}}\xspace$. This is shown in the right panel of Fig.~\ref{fig:fluxcomp}. The models contain a median of 102\% of the total ALMA flux densities, indicating that no significant sources of emission remain unaccounted for by the models. As the residual maps generated by the best-fit models shown in Fig.~\ref{fig:images} show no significant remaining peaks, this is unsurprising. 
\begin{figure*}[thb]% \centering \includegraphics[width=\textwidth]{fluxcomparison.pdf}% \caption{ Extent to which flux densities derived from ALMA, LABOCA, and the lens models agree; see Sec.~\ref{fluxcomp} for details. \textit{Left:} The ALMA data recover a median of 91\% of the single-dish flux density measured by LABOCA, indicated by the dashed line. \textit{Middle:} No clear trend is seen in the fraction of flux detected by ALMA as a function of LABOCA flux density. \textit{Right:} The lens models of all sources contain a median of 102\% of the total ALMA flux density.} \label{fig:fluxcomp} \end{figure*} \subsection{Multiplicity in the SPT Sample} \label{multiplicity} Several high-resolution ALMA follow-up studies of submillimeter sources originally detected in low-resolution single-dish surveys have concluded that a significant fraction of the sources break up into multiple components when observed at higher resolution. In the ALESS program, \citet{hodge13} find that at least 35\% of their sources contain multiple components, but that these components are consistent with being distributed randomly on the sky. In contrast, \citet{bussmann15} report a multiplicity fraction of 69\%, with the multiple sources strongly concentrated at separations $\lesssim3$\ensuremath{''}\xspace. Similarly, \citet{simpson15b} report that 61\% of SCUBA-2 sources contain multiple components. Our ability to determine multiplicity fractions from the follow-up of SPT-selected sources is hampered by two potential issues. First, the large majority of the sources considered here are strongly lensed. This makes finding close-in multiple components difficult, as any faint nearby companions will be overwhelmed by the much brighter lensed emission. Second, the SPT sources have a much higher apparent brightness compared to the unlensed single-dish sources observed in other follow-up campaigns. 
This reduces our ability to detect faint sources, as low-level phase errors can create spurious ``companions.'' For this reason, we use a higher (5$\sigma$, with $\ensuremath{\sigma_\mathrm{870\um}}\xspace \sim 0.18-0.5$\,mJy) threshold for source detection than is used in other source catalogs. We also refrain from counting as multiples those sources which require multiple source-plane components to reproduce the lensed emission, because these components are generally separated by $<0.5$\ensuremath{''}\xspace, and the source-plane components are likely an approximation of complex underlying structure within a single galaxy. In the ALMA data presented here, only 13\% (6/47) of sources contain multiple components at $>5\sigma$ significance. This fraction is significantly lower than the high multiplicity rates reported by other ALMA follow-up programs. While obviously dependent on the depth of the follow-up observations, the high reported multiplicity fractions in other programs come from data with roughly comparable depth and resolution to the ALMA data presented here (ALESS detection threshold $\sim1.1-2.1$\,mJy, compared to $\sim0.9-2.5$\,mJy here). After accounting for our lack of sensitivity to close-in sources and our higher source detection threshold, the ALESS sample is the most natural comparison sample -- the higher detection threshold in our data is balanced by the increased depth of our observations, and both samples are insensitive to multiples at separations of $\lesssim1.5$\ensuremath{''}\xspace. While the overall multiplicity fraction does appear to be lower in the SPT sample, the few multiples in our data are consistent with being uniformly distributed within the fields, as in the ALESS data. \section{Discussion} \label{discussion} We are now in a position to take advantage of the comprehensive follow-up programs we have been conducting to revisit a number of topics of interest which may be investigated further using our new knowledge of source sizes.
\subsection{Reliability of Lens Models} \label{reliability} For the four sources studied by \citet{hezaveh13} using low-resolution ($\sim$1.5\ensuremath{''}\xspace) data, we find generally good agreement with the updated models. The differences between the previous and updated models can be entirely explained by the difference in background source parameterization -- that is, fitting only the data used by Hezaveh et~al.\xspace with elliptical source-plane components recovers the models presented here, while fitting all of the data used in this work with the circularly symmetric Gaussian components assumed by Hezaveh et~al.\xspace recovers the models shown there. This indicates that the model uncertainties on the properties of the background sources are dominated by systematic, rather than statistical, uncertainty. We have attempted to counter this issue by use of the DIC for model selection, which effectively penalizes models with more degrees of freedom unless they reproduce the data significantly better. ALMA is now capable of resolutions as fine as a few tens of milliarcseconds. To what extent can we expect that the model properties (e.g., \ensuremath{\mu_\mathrm{870\um}}\xspace) derived here would agree with the properties derived from observations with the $\sim$20$\times$ better resolution now possible? Given that the true source structure of DSFGs is expected to be clumpy and irregular \citep[e.g.,][]{swinbank11,dye15}, in contrast to the smooth source parameterization assumed here, this question is difficult to answer. This irregular structure means that different regions of a given source will be magnified by different amounts, as opposed to the magnifications derived here, which are averaged over the assumed-elliptical source profile. One instructive comparison comes via the ALMA Long Baseline Campaign observations of the lensed DSFG SDP.81 at $\sim$0.023\ensuremath{''}\xspace resolution \citep{almapartner15}. 
This source was also included in the sample studied by \citet{bussmann13}, who used $\sim$0.5--1\ensuremath{''}\xspace SMA observations to construct the lens models, comparable to the resolution of the ALMA data used in this work. Note, however, that the SMA observations reached only a peak signal-to-noise of 12 for SDP.81, far less than the typical significance of our detections (median peak signal-to-noise of 62). The lens model, which, as in this work, represented the background source as an elliptical S\'{e}rsic profile, yielded a magnification of $\ensuremath{\mu_\mathrm{870\um}}\xspace \sim 11$. Several authors have constructed lens models of the continuum emission using the high-resolution ALMA observations of SDP.81, finding magnifications $\ensuremath{\mu_\mathrm{870\um}}\xspace \sim 16-22$ using pixelated source-plane reconstructions \citep{rybak15,dye15,tamura15}. It is difficult to know whether these $\sim$50\% variations are to be generally expected, or whether the differences arise chiefly from data resolution, data signal-to-noise, or modeling approach. In at least this single case, however, shrinking the beam area by $\gtrsim$100$\times$ leads to less than a factor of two change in source-averaged magnification. \subsection{Size Distribution of Background Sources} \label{sourcesizes} Gravitational lensing allows us to study the background sources at effective resolutions higher than the instrumental resolution of our observations. It is worth considering, however, the biases which may be present when comparing lensed and unlensed samples. For example, numerous authors have explored a potential size bias of lensed samples \citep[e.g.,][]{serjeant12,hezaveh12a,wardlow13}, in which sources with high magnification are preferentially smaller than sources with lower magnification factors.
This effect is due to the angular extent of the background source in comparison to the relatively small region near caustics over which high magnification is possible -- small sources near caustics can experience a higher net magnification compared to more extended sources. Different regions of a given background source experience different magnifications, depending on the lensing geometry, an effect known as differential magnification. We explore this effect in the left panel of Fig.~\ref{fig:sourcesizes}. Here, we show the source magnification as a function of its size for both the SPT sample and the \textit{Herschel}-selected samples of \citet{bussmann13,bussmann15}. Many of the lens models reproduce the complex background source morphology by invoking multiple S\'{e}rsic components. These components are likely to be physically associated, so we show the total flux-weighted magnification and the total source area of all related components (so the two components of, e.g., SPT0103-45 are shown as a single point, while the two components of, e.g., SPT0128-51 are shown separately). We find a median intrinsic FWHM of 0.28\ensuremath{''}\xspace. This figure shows no clear correlation between the two parameters. However, in agreement with the size bias mentioned above, it does appear that the sources with the highest magnifications are preferentially smaller than sources with lower magnifications. In other words, small size appears to be a necessary but not sufficient criterion for the highest magnifications. A separate but related question is whether selecting strongly lensed sources results in a biased measurement of the true size distribution of DSFGs \citep[e.g.,][]{hezaveh12a}. Even though high-magnification sources are preferentially compact, the size distribution of lensed samples is not necessarily biased, depending on the true underlying brightness and size distributions (for example, if the true size distribution were a delta function, no bias would exist).
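The flux-weighted combination used above for multi-component sources can be written compactly. The component values below are hypothetical, chosen only to illustrate the bookkeeping:

```python
import math

# Hypothetical two-component source: (intrinsic S_870 in mJy,
# magnification, effective radius in arcsec) per Sersic component.
components = [(4.0, 8.0, 0.10), (2.0, 5.0, 0.15)]

total_flux = sum(s for s, mu, r in components)
# Flux density-weighted mean magnification of the related components.
mu_weighted = sum(s * mu for s, mu, r in components) / total_flux
# Total source area summed over components.
total_area = sum(math.pi * r**2 for s, mu, r in components)
```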
The presence of such a bias can be investigated by comparing size distributions measured from lensed and unlensed samples. In the right panel of Fig.~\ref{fig:sourcesizes}, we compare the size distribution measured from the strongly lensed ($\ensuremath{\mu_\mathrm{870\um}}\xspace>2$) sources in the SPT and \textit{Herschel} samples with two unlensed DSFG samples. \citet{simpson15b} measure sizes of 22 sources based on 870\,\ensuremath{\mu\rm{m}}\xspace ALMA imaging of objects selected from the 850\,\ensuremath{\mu\rm{m}}\xspace SCUBA-2 Cosmology Legacy Survey \citep{geach13}. Only one source was unresolved by these data, with a FWHM$\lesssim0.18$'', although the sample is restricted to sources with $\ensuremath{S_\mathrm{870\um}}\xspace \sim 5-12$\,mJy to ensure sufficient signal-to-noise to measure an accurate source size. \citet{ikarashi15} report 1.1\,mm sizes from ALMA observations of 13 AzTEC 1.1\,mm-selected objects spanning $\ensuremath{S_\mathrm{1.1mm}}\xspace \sim 1.2-3.5$\,mJy. Assuming a dust emissivity index $\beta = 2$, this corresponds to $\ensuremath{S_\mathrm{870\um}}\xspace \sim 3-9$\,mJy. Even after accounting for the gravitational magnification, the SPT sources are typically brighter than many of the unlensed comparison sources, although no significant correlations between source flux density and size are seen in either unlensed sample or our own. Both unlensed samples have sizes measured from the dust continuum emission, eliminating possible confusion in comparing to sizes measured with alternative methods (e.g., from the radio continuum; \citealt{biggs11}). Given the present sample sizes, both samples of strongly lensed sources have size distributions consistent with the distribution of unlensed sources. The two-sample K--S test confirms that we cannot reject the hypothesis that both distributions are drawn from the same parent distribution ($p=0.84$). 
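The two-sample K--S comparison can be reproduced with a short pure-Python routine (the asymptotic $p$-value approximation of Numerical Recipes, assuming untied samples). The FWHM values below are hypothetical; the $p=0.84$ quoted above comes from the real size measurements, and for samples this small the asymptotic $p$-value is only indicative:

```python
import math

def ks_2samp(a, b):
    """Two-sample K-S statistic D and asymptotic p-value
    (Numerical Recipes approximation; assumes untied samples)."""
    a, b = sorted(a), sorted(b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    # Walk the two empirical CDFs together, tracking the max difference.
    while i < na and j < nb:
        if a[i] <= b[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / na - j / nb))
    ne = na * nb / (na + nb)
    lam = (math.sqrt(ne) + 0.12 + 0.11 / math.sqrt(ne)) * d
    p = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * lam * lam)
                  for k in range(1, 101))
    return d, min(max(p, 0.0), 1.0)

# Hypothetical circularized FWHM values (arcsec), lensed vs. unlensed.
lensed = [0.18, 0.22, 0.28, 0.31, 0.35, 0.42, 0.50]
unlensed = [0.20, 0.25, 0.30, 0.33, 0.40, 0.45]
d, p = ks_2samp(lensed, unlensed)
```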
Few of the unlensed sources have robust spectroscopic redshifts, which hinders our ability to infer whether the consistent angular size distributions correspond to differing physical size distributions. As detailed in \citet{bethermin15b} and \citet{strandet16}, we expect more sources at higher redshifts in the SPT sample due to its long selection wavelength and preferential selection of lensed sources. We note, however, that the angular size scale evolves slowly for $z>2$; the difference in the size scale between the SPT median redshift and the median redshift of the unlensed DSFGs of \citet{chapman05} is $<$15\%. The lensed samples appear to recover the ``true'' unlensed size distribution in spite of the bias discussed above. This seems to indicate one of two possibilities. First, it may be that neither the lensed nor unlensed samples are sufficiently complete for differences to be noticeable. The lensed samples effectively select sources based on the product of intrinsic flux density and magnification, while the unlensed samples would not measure the true size distribution if faint sources are preferentially more extended, precluding size measurements from the current ALMA data. Alternatively, the underlying DSFG size distribution may lack sufficient dynamic range for the size bias to become noticeable without a very large number of sources. The true size distribution may have few objects at both very small and very large sizes, making the magnification bias unimportant. Both scenarios are testable from deeper observations of a larger sample of unlensed sources. \begin{figure*}[htb]% \centering \includegraphics[width=\textwidth]{sizes_mu_hist.pdf}% \caption{ \textit{Left:} Intrinsic source size plotted as a function of 870\,\ensuremath{\mu\rm{m}}\xspace magnification, for all sources in the SPT and \textit{Herschel} DSFG samples. Sources with the highest magnifications are preferentially more compact than the full sample.
\textit{Right:} Size distributions of strongly lensed ($\ensuremath{\mu_\mathrm{870\um}}\xspace>2$) SPT and \textit{Herschel} sources \citep{bussmann13,bussmann15}, compared to samples of unlensed DSFGs observed by ALMA from the 850\,\ensuremath{\mu\rm{m}}\xspace-selected SCUBA-2 Cosmology Legacy Survey \citep{simpson15b} and AzTEC 1.1\,mm-selected sources \citep{ikarashi15}. For all samples, we plot the circularized FWHM; Simpson et~al.\xspace report only source major axes (priv. comm.), so we have circularized their measurements assuming an average axis ratio of 0.8. This figure indicates that the lensed samples recover the same size distribution as the unlensed samples, despite the potential size bias shown in the left panel. } \label{fig:sourcesizes} \end{figure*} \subsection{Constraining the Dust Opacity} \label{dusttau} The size information we have determined affords us additional constraints on other fitted parameters which would be difficult to determine from unresolved observations. One of the most common fitting functions used to describe the dust emission of galaxies is the ``modified blackbody'' function, \begin{equation} \label{mbb} S_{\nu_r} = \frac{\Omega_{\mathrm{source}}}{(1+z_S)^3} (B_{\nu_r}(\ensuremath{T_{\rm{dust}}}\xspace) - B_{\nu_r}(T_{\mathrm{CMB}})) (1 - e^{-\tau_{\nu_r}}) \end{equation} where $B_{\nu_r}(T)$ is the Planck function evaluated at rest-frame frequency $\nu_r$ and temperature $T$. This blackbody is ``modified'' by the dust optical depth term, and the overall normalization of the SED is related to the intrinsic source solid angle $\Omega_{\mathrm{source}} = \pi \ensuremath{r_\mathrm{eff}}\xspace^2 / D_A^2$. At long wavelengths, the dust optical depth can be parameterized as a power-law in frequency \citep[e.g.,][]{draine06}, with $\tau_\nu = (\nu/\nu_0)^\beta = (\lambda_0/\lambda)^\beta$, and the optical depth reaching unity at wavelength \ensuremath{\lambda_0}\xspace. 
The value of $\beta$ governs the slope of the Rayleigh-Jeans tail of the dust emission, while the combination of \ensuremath{T_{\rm{dust}}}\xspace and \ensuremath{\lambda_0}\xspace governs the peak wavelength and width of the peak of the dust emission. The value of $\beta$ is generally in the range 1.5--2, while the value of \ensuremath{\lambda_0}\xspace is commonly assumed to be 100--200\,\ensuremath{\mu\rm{m}}\xspace (3--1.5\,THz) \citep[e.g.,][]{blain03,casey14}. For sources without size measurements, the source solid angle is unknown, in addition to the other parameters which control the shape of the dust SED. Even with sizes derived from the lens models, we are forced to assume a single dust temperature and value of \ensuremath{\lambda_0}\xspace averaged over the source for each object. Improvements on this scenario require spatially resolved continuum measurements at several widely spaced frequencies, especially those which straddle the SED peak. While this may one day be possible, at present we assume that the source emission is uniform, mirroring the assumptions which must be made with unresolved photometry. The spatially unresolved long-wavelength SED alone is usually insufficient to constrain the value of \ensuremath{\lambda_0}\xspace, as degeneracies with the other parameters (particularly the dust temperature \ensuremath{T_{\rm{dust}}}\xspace) allow for good matches to the data for a wide range of \ensuremath{\lambda_0}\xspace. The inferred \ensuremath{T_{\rm{dust}}}\xspace, in turn, has a large effect on other inferred quantities, such as the total dust mass \citep[e.g.,][]{casey12}. Our new knowledge of the intrinsic size of the SPT DSFGs offers an alternative avenue for constraining an effective \ensuremath{\lambda_0}\xspace. 
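Eq.~\ref{mbb} translates directly into code. The sketch below (SI units) evaluates the modified blackbody, including the CMB correction term; the default frequency, temperature, redshift, and solid-angle values are placeholders, not fitted quantities:

```python
import math

H = 6.62607015e-34    # Planck constant [J s]
KB = 1.380649e-23     # Boltzmann constant [J/K]
C = 2.99792458e8      # speed of light [m/s]

def planck(nu, t):
    """Planck function B_nu(T) [W m^-2 Hz^-1 sr^-1]."""
    return 2 * H * nu**3 / C**2 / math.expm1(H * nu / (KB * t))

def modified_blackbody(nu_rest, t_dust, lambda0_um,
                       beta=2.0, z=3.0, omega_source=1e-12):
    """Observed flux density (SI) of the modified-blackbody form,
    including the CMB term; all default values are placeholders."""
    lam_um = 1e6 * C / nu_rest                 # rest wavelength [um]
    tau = (lambda0_um / lam_um) ** beta        # power-law optical depth
    t_cmb = 2.725 * (1 + z)                    # CMB temperature at z
    return (omega_source / (1 + z) ** 3
            * (planck(nu_rest, t_dust) - planck(nu_rest, t_cmb))
            * -math.expm1(-tau))               # = (1 - exp(-tau))
```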
For those sources with spectroscopic redshifts, we fit the photometry at rest wavelengths $> 50$\,\ensuremath{\mu\rm{m}}\xspace with the modified blackbody function given above, assuming $\beta = 2$ and allowing \ensuremath{\lambda_0}\xspace to be a free parameter, although allowing $\beta$ as a free parameter does not alter our results. The cutoff at short wavelengths is used because neither the modified blackbody function nor our lens models are expected to capture the emission from hot dust which dominates the short-wavelength side of the SED. This assumption ignores any possible contribution of a hot dust component to the long-wavelength photometry, but \textit{Herschel}/PACS photometry indicates that this component is negligible at the relevant wavelengths (Strandet et~al.\xspace, in prep.). We have verified that neither a hot dust component nor a short-wavelength power-law significantly affects our conclusions. We perform the fitting described above using the source photometry of \citet{weiss13,strandet16} and an MCMC fitting routine. The free parameters are the SED normalization (which stands in for the source solid angle at wavelengths without size measurements; see below), \ensuremath{T_{\rm{dust}}}\xspace, and \ensuremath{\lambda_0}\xspace. At each MCMC step, we calculate the log-likelihood of producing the spatially unresolved continuum measurements given the proposed combination of parameters, and add to this the log-likelihood of the proposed \ensuremath{T_{\rm{dust}}}\xspace and \ensuremath{\lambda_0}\xspace reproducing the intrinsic source flux density determined from the lens models, after marginalizing over the uncertainty in source size.
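The structure of this joint likelihood can be sketched as follows. The Gaussian forms, the toy SED model, and all numerical values here are illustrative assumptions rather than the actual fitting code, and the marginalization over source-size uncertainty is omitted for brevity:

```python
import math

H, KB, C = 6.62607015e-34, 1.380649e-23, 2.99792458e8  # SI constants

def planck(nu, t):
    """Planck function B_nu(T) in SI units."""
    return 2 * H * nu**3 / C**2 / math.expm1(H * nu / (KB * t))

def mbb(lam_um, t_dust, lambda0_um, norm, beta=2.0):
    """Toy modified blackbody; `norm` absorbs solid angle and distances."""
    nu = C / (lam_um * 1e-6)
    tau = (lambda0_um / lam_um) ** beta
    return norm * -math.expm1(-tau) * planck(nu, t_dust)

def log_likelihood(params, phot, s870_lens, s870_err):
    """Joint Gaussian log-likelihood: unresolved photometry plus the
    lens-model constraint on the intrinsic 870um flux density."""
    norm, t_dust, lambda0 = params
    ll = sum(-0.5 * ((f - mbb(lam, t_dust, lambda0, norm)) / e) ** 2
             for lam, f, e in phot)
    ll += -0.5 * ((s870_lens - mbb(870.0, t_dust, lambda0, norm)) / s870_err) ** 2
    return ll

# Hypothetical photometry generated from known parameters; the truth
# must then maximize the likelihood.
truth = (1e15, 45.0, 120.0)  # (norm, T_dust [K], lambda0 [um])
phot = [(lam, mbb(lam, truth[1], truth[2], truth[0]), 0.05)
        for lam in (250.0, 350.0, 500.0, 870.0)]
```

A sampler (Metropolis-Hastings or an ensemble method) would then explore this function over the three free parameters.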
The reason the contributions from the spatially unresolved and resolved measurements must be calculated separately is that, as shown in Fig.~\ref{fig:fluxcomp}, there is a median 10\% offset and large scatter between the total flux density measured in our (resolved) ALMA images compared to the (unresolved) LABOCA images at the same wavelength; presumably this scatter would also be present if we resolved the sources at all other wavelengths. In order to avoid biases introduced by this scatter, we use the exact form in Eq.~\ref{mbb} at observed-frame 870\,\ensuremath{\mu\rm{m}}\xspace only, and allow a normalization at other wavelengths. This normalization effectively allows us to match flux captured by the large single-dish beams (primarily from galaxies associated with the foreground lensing haloes; \citealt{welikala16}) not included in the lens modeling, as well as general measurement and calibration errors. We have verified that this method does not give unphysical results, and that the inclusion of the lens model sizes merely shrinks the allowable parameter space without driving the solutions to otherwise unfavored values. The results of this fitting are shown in the left panel of Fig.~\ref{fig:dustsed}. We find a median value of \ensuremath{\lambda_0}\xspace = 140 $\pm$ 40\,\ensuremath{\mu\rm{m}}\xspace, somewhat larger than the canonically assumed value of 100\,\ensuremath{\mu\rm{m}}\xspace \citep[e.g.,][]{greve12}. Moreover, as previously mentioned, this wavelength is correlated with the inferred dust temperature. Fitting a line to the points shown in Fig.~\ref{fig:dustsed} using orthogonal distance regression (marginalizing over the probability of points being outliers; e.g., \citealt{hogg10}) yields \begin{equation} \label{Lzeq} \ensuremath{\lambda_0}\xspace = (3.0 \pm 0.7) \times (\ensuremath{T_{\rm{dust}}}\xspace - 40) + (118 \pm 12) \ensuremath{\mu\rm{m}}\xspace. 
\end{equation} Using this relation provides a better alternative to assuming a single value for \ensuremath{\lambda_0}\xspace when the available photometry cannot constrain both \ensuremath{\lambda_0}\xspace and \ensuremath{T_{\rm{dust}}}\xspace -- this relation can be easily inserted into likelihood functions when fitting the dust SED. This correlation may manifest in part from the relationship between star formation and molecular gas -- at a simplistic level, the star formation rate of dusty galaxies is related to \ensuremath{L_{\rm{FIR}}}\xspace, which in turn is related to \ensuremath{T_{\rm{dust}}}\xspace; meanwhile, the gas mass is related to the dust mass, which, as we discuss further below, is related to the dust emissivity encapsulated in \ensuremath{\lambda_0}\xspace. This correlation has little effect on the integrated \ensuremath{L_{\rm{FIR}}}\xspace. This is as expected, since our photometric coverage fully samples the SED peak. The dust mass \ensuremath{M_{\rm{dust}}}\xspace, on the other hand, is strongly influenced. In the optically thin limit, \ensuremath{M_{\rm{dust}}}\xspace is related to the source flux density and \ensuremath{T_{\rm{dust}}}\xspace via \begin{equation} \label{mdust} \ensuremath{M_{\rm{dust}}}\xspace = \frac{S_{\nu_\mathrm{obs}} D_L^2}{\kappa_{\nu_r} (1+z_S) (B_{\nu_r}(\ensuremath{T_{\rm{dust}}}\xspace) - B_{\nu_r}(T_{\mathrm{CMB}}))} \end{equation} \citep{greve12}, where $\kappa_\nu$ is the dust mass absorption coefficient. At present, we are concerned only with the relative difference in the dust mass determined under various assumptions, so the form and normalization of $\kappa_\nu$ are irrelevant, as the ratio depends only on the source flux density at a single frequency and \ensuremath{T_{\rm{dust}}}\xspace.
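The \ensuremath{\lambda_0}\xspace--\ensuremath{T_{\rm{dust}}}\xspace relation above was obtained with orthogonal distance regression including outlier marginalization. A stripped-down version of that fit (total least squares with equal error variances and no outlier model, applied to hypothetical points scattered about the published relation) can be sketched as:

```python
import math

def tls_line(x, y):
    """Total least squares (orthogonal) fit of y = m*x + c,
    assuming equal error variances in x and y (no outlier model)."""
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    sxx = sum((xi - xb) ** 2 for xi in x)
    syy = sum((yi - yb) ** 2 for yi in y)
    sxy = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
    # Closed-form TLS slope from the scatter-matrix eigenproblem.
    m = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy**2)) / (2 * sxy)
    return m, yb - m * xb

# Hypothetical (T_dust - 40 K, lambda0 / um) pairs scattered around
# a slope-3, intercept-118 line.
t_offset = [-8.0, -4.0, 0.0, 3.0, 7.0, 12.0]
lam0 = [95.0, 107.0, 118.0, 126.0, 140.0, 154.0]
m, c = tls_line(t_offset, lam0)
```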
The right-hand panel of Fig.~\ref{fig:dustsed} shows the ratio of the dust mass determined through our SED fitting when leaving \ensuremath{\lambda_0}\xspace as a free parameter compared to the dust mass inferred by assuming $\ensuremath{\lambda_0}\xspace = 100$\,\ensuremath{\mu\rm{m}}\xspace, effected through the changes in the fitted \ensuremath{T_{\rm{dust}}}\xspace. A similar range of inferred dust masses is seen for other assumed values, although the range of temperatures with reasonable agreement shifts higher for higher \ensuremath{\lambda_0}\xspace. For dust temperatures $\lesssim 45$\,K, the difference is relatively small. However, as dust temperature increases, the dust mass is increasingly over-predicted under the assumption that $\ensuremath{\lambda_0}\xspace = 100$\,\ensuremath{\mu\rm{m}}\xspace, reaching more than a factor of 2 for the hottest sources. A similar result, ignoring the dust optical depth and instead framed in terms of \ensuremath{T_{\rm{dust}}}\xspace, was obtained by \citet{magdis12}, who showed that single-temperature fits underestimated \ensuremath{M_{\rm{dust}}}\xspace compared to more complex models. This demonstrates that the assumption of a single, constant value of \ensuremath{\lambda_0}\xspace can cause a severe distortion in other derived quantities, especially those which rely on \ensuremath{T_{\rm{dust}}}\xspace. \begin{figure*}[htb]% \centering \includegraphics[width=0.95\textwidth]{dustsed.pdf}% \caption{ \textit{Left: } Correlation between the inferred dust temperature and \ensuremath{\lambda_0}\xspace, the wavelength where the dust optical depth is unity, derived from a joint fit to the FIR photometry and the source properties inferred from the lens models for sources with spectroscopic source redshifts. The solid line and grey region indicate the relation in Eq.~\ref{Lzeq} and its associated 68\% credibility interval, respectively. The histogram of the inferred values of \ensuremath{\lambda_0}\xspace is also shown.
The median and standard deviation for the SPT DSFGs is 140 $\pm$ 40\,\ensuremath{\mu\rm{m}}\xspace. \textit{Right: } Ratio of the dust mass inferred by allowing \ensuremath{\lambda_0}\xspace to be a free parameter in the joint fit of the SED and derived source properties over the dust mass inferred by fixing \ensuremath{\lambda_0}\xspace to 100\,\ensuremath{\mu\rm{m}}\xspace. Fixing $\ensuremath{\lambda_0}\xspace = 100$\,\ensuremath{\mu\rm{m}}\xspace over-predicts the dust mass by more than a factor of 2 for the sources with the highest dust temperatures. } \label{fig:dustsed} \end{figure*} \subsection{Revisiting the [C{\scriptsize II}]\xspace/FIR Ratio} \label{cii} The 158\,\ensuremath{\mu\rm{m}}\xspace [C{\scriptsize II}]\xspace line has long been known as a powerful coolant of the ISM \citep[e.g.,][]{crawford86}, radiating about 0.1--1\% of the total IR luminosity \citep[e.g.,][]{stacey91,stacey10}. Unfortunately, [C{\scriptsize II}]\xspace can be emitted by gas under a wide variety of conditions, which makes its physical interpretation challenging. One challenge in interpreting [C{\scriptsize II}]\xspace manifests as the ``[C{\scriptsize II}]\xspace deficit,'' in which the [C{\scriptsize II}]\xspace/\ensuremath{L_{\rm{FIR}}}\xspace ratio can fall rapidly for $\ensuremath{L_{\rm{FIR}}}\xspace \gtrsim 10^{11}$\,\ensuremath{\rm{L}_\odot}\xspace \citep[e.g.,][]{malhotra97,luhman98,graciacarpio11}. A variety of physical mechanisms for this deficit have been proposed, including AGN contributions to \ensuremath{L_{\rm{FIR}}}\xspace \citep[e.g.,][]{sargsyan12}, increased ionization parameter \citep[e.g.,][]{malhotra01,graciacarpio11}, collisional de-excitation of [C{\scriptsize II}]\xspace \citep{appleton13}, and differences in emitting column \citep{goicoechea15}. 
The [C{\scriptsize II}]\xspace emission of a sample of 20 SPT DSFGs was studied in detail by \citet{gullberg15}, who noted that nearly saturated [C{\scriptsize II}]\xspace emission (via, e.g., excitation or optical depth effects) could cause much of the [C{\scriptsize II}]\xspace/\ensuremath{L_{\rm{FIR}}}\xspace variation to be controlled by variations in \ensuremath{L_{\rm{FIR}}}\xspace alone. This is tentatively supported by photodissociation region models that attempt to explain both the [C{\scriptsize II}]\xspace and CO(1--0) emission simultaneously. In their study of a large sample of local IR-luminous galaxies from the Great Observatories All-Sky LIRG Survey (GOALS), \citet{diazsantos13} find that the [C{\scriptsize II}]\xspace/\ensuremath{L_{\rm{FIR}}}\xspace ratio is also correlated with the FIR luminosity surface density \ensuremath{\Sigma_{\rm{FIR}}}\xspace. This correlation held for purely star-forming galaxies as well as for objects with significant AGN activity (although many of the AGN-dominated sources were spatially unresolved, resulting in lower limits on \ensuremath{\Sigma_{\rm{FIR}}}\xspace). A similar result was obtained for galaxies at $z<0.2$ by \citet{ibar15}, who additionally noted that the spiral galaxies in their sample had higher [C{\scriptsize II}]\xspace/\ensuremath{L_{\rm{FIR}}}\xspace ratios than irregular and elliptical galaxies. Using our new measurements of the size of the dust continuum emitting regions of the SPT DSFGs, and drawing on a compilation of high-redshift objects from the literature, we can extend this work two orders of magnitude higher in \ensuremath{\Sigma_{\rm{FIR}}}\xspace. The result is shown in Fig.~\ref{fig:ciifir}. We have re-fit the photometry of all sources to ensure a uniform determination of \ensuremath{L_{\rm{FIR}}}\xspace. The dashed line in Fig.~\ref{fig:ciifir} represents the best-fit relation determined by \citet{diazsantos13}.
We have shifted their relation vertically to match our re-determination of \ensuremath{L_{\rm{FIR}}}\xspace, but the slope is exactly as determined by \citet{diazsantos13}, i.e., [C{\scriptsize II}]\xspace/$\ensuremath{L_{\rm{FIR}}}\xspace \propto \ensuremath{\Sigma_{\rm{FIR}}}\xspace^{-0.35}$. The decline continues unabated another two orders of magnitude beyond the limits of the GOALS survey, to at least $\ensuremath{\Sigma_{\rm{FIR}}}\xspace \sim 10^{13}$\,\ensuremath{\rm{L}_\odot}\xspace/kpc$^2$. This lends further support to the claim that the compactness of the IR-emitting region drives the relationship between [C{\scriptsize II}]\xspace and \ensuremath{L_{\rm{FIR}}}\xspace. A similar correlation can be seen by comparing the [C{\scriptsize II}]\xspace/\ensuremath{L_{\rm{FIR}}}\xspace ratio with the dust temperature \ensuremath{T_{\rm{dust}}}\xspace, since, to first order, $\ensuremath{\Sigma_{\rm{FIR}}}\xspace \propto \ensuremath{L_{\rm{FIR}}}\xspace/\ensuremath{r_\mathrm{eff}}\xspace^2 \propto \ensuremath{T_{\rm{dust}}}\xspace^4$. This correlation was first shown by \citet{malhotra97} and further explored by \citet{gullberg15}, who determined that most of the variation could indeed be explained by the Stefan-Boltzmann law, with a small residual dependence on \ensuremath{T_{\rm{dust}}}\xspace. Formulating the correlation in terms of \ensuremath{\Sigma_{\rm{FIR}}}\xspace itself, however, leads to a dispersion approximately a factor of 2 smaller than formulating it in terms of \ensuremath{T_{\rm{dust}}}\xspace \citep{diazsantos13}. While the nature of the [C{\scriptsize II}]\xspace emission is still uncertain, it is clear that the compactness of the IR-emitting region plays a vital role in determining the coupling of the [C{\scriptsize II}]\xspace-emitting gas with the warm dust.
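The scalings above are simple enough to sketch numerically. The short script below is illustrative only: the $\ensuremath{\Sigma_{\rm{FIR}}}\xspace = \ensuremath{L_{\rm{FIR}}}\xspace/2\pi\ensuremath{r_\mathrm{eff}}\xspace^2$ convention, the function names, and the numerical values are assumptions for this sketch, not the analysis code or measurements of this work. It evaluates the surface density of a representative compact DSFG and the further decline in [C{\scriptsize II}]\xspace/\ensuremath{L_{\rm{FIR}}}\xspace implied by extrapolating the $-0.35$ slope over two additional decades in \ensuremath{\Sigma_{\rm{FIR}}}\xspace:

```python
import math

def sigma_fir(L_fir_Lsun, r_eff_kpc):
    """FIR luminosity surface density in L_sun / kpc^2.

    Uses the common Sigma = L / (2 * pi * r_eff**2) convention (half the
    luminosity within the effective radius); conventions differ between
    papers, so the normalization is illustrative.
    """
    return L_fir_Lsun / (2.0 * math.pi * r_eff_kpc ** 2)

# A compact, luminous DSFG: L_FIR ~ 5e12 L_sun within r_eff ~ 1 kpc.
sigma = sigma_fir(5e12, 1.0)  # ~8e11 L_sun / kpc^2

# The Diaz-Santos et al. (2013) slope, [CII]/FIR ~ Sigma_FIR**(-0.35),
# extrapolated over two extra decades in Sigma_FIR, predicts the ratio
# drops by a further factor of ~5:
decline = (10.0 ** 2) ** -0.35
print(f"{sigma:.2e}  {decline:.2f}")
```

The factor-of-$\sim$5 decline per two decades is consistent with the continued fall of the relation seen in Fig.~\ref{fig:ciifir}.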
\begin{figure}[htb]% \centering \includegraphics[width=\columnwidth]{lciifir_sigfir.pdf}% \caption{ The [C{\scriptsize II}]\xspace/FIR luminosity ratio as a function of \ensuremath{\Sigma_{\rm{FIR}}}\xspace for low-redshift star-forming sources with a resolved mid-IR size from the GOALS survey \citep{diazsantos13}, a collection of high-redshift sources from the literature, and the SPT DSFGs \citep{gullberg15}. The remarkably tight relation (dashed line) noted by \citet{diazsantos13} continues for at least another two orders of magnitude. A typical uncertainty for the GOALS objects is shown as a black cross. The high-redshift objects are drawn from \citet{walter09,carniani13,riechers13,wang13,debreuck14,neri14,riechers14,yun15, diazsantos16}. Note that we have re-fit the photometry of all objects in a consistent manner, as described in \citet{gullberg15}. } \label{fig:ciifir} \end{figure} \section{Conclusions} \label{conclusions} We have used ALMA 870\,\ensuremath{\mu\rm{m}}\xspace observations of 47 gravitationally lensed dusty, star-forming galaxies to model the effects of gravitational lensing. Using a visibility-based modeling routine which accounts for several calibration uncertainties, we can recover the intrinsic properties of the background sources. At least 33 of the sources are confirmed to undergo galaxy-scale strong lensing ($\ensuremath{\mu_\mathrm{870\um}}\xspace > 2$), while the remaining sources are lensed by galaxy clusters, or are weakly- or un-lensed ($\ensuremath{\mu_\mathrm{870\um}}\xspace < 2$). The background sources are magnified by a median factor of 5.5 for all sources, or 6.3 for the strongly lensed subset alone, with a tail that extends to $\ensuremath{\mu_\mathrm{870\um}}\xspace > 30$. The sources have a median intrinsic angular FWHM of 0.28''. 
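As a side note on the quoted angular size, the conversion to a physical scale is nearly independent of redshift over the range occupied by DSFGs, because the angular diameter distance is almost flat there. A self-contained sketch in flat $\Lambda$CDM (illustrative parameters $H_0 = 70$\,km\,s$^{-1}$\,Mpc$^{-1}$ and $\Omega_m = 0.3$; not necessarily the cosmology adopted elsewhere in this paper):

```python
import math

C_KMS = 299792.458  # speed of light, km/s

def ang_diam_dist_mpc(z, H0=70.0, Om=0.3, n=2000):
    """Angular diameter distance (Mpc) in flat LCDM, via trapezoidal
    integration of the comoving-distance integral of c / H(z')."""
    dz = z / n
    total = 0.0
    for i in range(n + 1):
        zp = i * dz
        E = math.sqrt(Om * (1.0 + zp) ** 3 + (1.0 - Om))
        total += (0.5 if i in (0, n) else 1.0) / E
    D_C = (C_KMS / H0) * total * dz  # comoving distance
    return D_C / (1.0 + z)

def fwhm_kpc(theta_arcsec, z):
    """Physical size (kpc) subtended by theta_arcsec at redshift z."""
    theta_rad = theta_arcsec * math.pi / (180.0 * 3600.0)
    return theta_rad * ang_diam_dist_mpc(z) * 1000.0

# A fixed 0.28'' FWHM maps to a nearly constant ~2 kpc over z ~ 2-5:
for z in (2.0, 3.5, 5.0):
    print(z, round(fwhm_kpc(0.28, z), 2))
```

Under these assumptions a $0.28''$ FWHM corresponds to roughly 2\,kpc, varying by only $\sim$30\% between $z = 2$ and $z = 5$, which is why comparisons of angular size distributions translate, to first order, into comparisons of physical sizes.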
In spite of a potential size bias of lensed systems, in which compact background sources can be magnified more highly than extended sources, we find no significant differences between the size distributions of existing strongly lensed and unlensed samples of DSFGs. Increasing the number of unlensed sources with spectroscopic redshifts will indicate whether this corresponds to a difference in physical size scale, though this effect is small over the plausible range of redshifts. If the similarity in size distributions is not a chance effect owing to the limited number of sources with size measurements, we argue that this may indicate that the intrinsic size distribution of DSFGs is sufficiently narrow that the effect of the size bias is not detectable. We use the sizes derived from the lens models together with the extensive FIR/submillimeter photometric coverage to constrain \ensuremath{\lambda_0}\xspace, the wavelength where the dust optical depth is unity. The size information from the lens models allows us to overcome parameter degeneracies which limit our ability to constrain this wavelength from the SED alone. We find a median transition wavelength of $\ensuremath{\lambda_0}\xspace = 140 \pm 40$\,\ensuremath{\mu\rm{m}}\xspace, somewhat longer than the generally assumed 100\,\ensuremath{\mu\rm{m}}\xspace. We provide a fitting formula between \ensuremath{\lambda_0}\xspace and the dust temperature \ensuremath{T_{\rm{dust}}}\xspace which can be used for sources without size measurements. We show that assuming a single, fixed value for \ensuremath{\lambda_0}\xspace leads to variations of a factor of 2 in the inferred dust mass, which can be propagated forward to, e.g., the gas mass under overly simplified assumptions. Finally, we make use of our extensive follow-up program targeting the 158\,\ensuremath{\mu\rm{m}}\xspace FIR fine structure line of [C{\scriptsize II}]\xspace.
We show that high-redshift galaxies (over half of them from the SPT DSFG sample) follow the same relationship between [C{\scriptsize II}]\xspace/\ensuremath{L_{\rm{FIR}}}\xspace and \ensuremath{\Sigma_{\rm{FIR}}}\xspace as the $z \sim 0$ IR-luminous galaxies in the \textit{Herschel} GOALS sample, extending this correlation another two orders of magnitude higher in \ensuremath{\Sigma_{\rm{FIR}}}\xspace. This agrees with the claim that the controlling parameter in the ``[C{\scriptsize II}]\xspace deficit'' is the compactness of the IR-emitting region, regardless of the dust heating source. Future spatially resolved observations of the [C{\scriptsize II}]\xspace line at high redshifts will indicate whether this global correlation is also present on sub-galactic scales. The high-resolution images and lens models in this work, along with the high spectroscopic completeness of our sample, provide a wealth of information useful for future follow-up programs. The sensitivity and resolution afforded by ALMA in full operation indicate that the future of DSFG studies is bright, and the models we have presented should help prioritize the best sources to help answer questions of interest. \nocite{bethermin16} \acknowledgements { J.S.S., D.P.M., K.C.L. and J.D.V. acknowledge support from the U.S. National Science Foundation under grant No. AST-1312950 and through award SOSPA2-012 from the NRAO. M.A. acknowledges partial support from FONDECYT through Grant 1140099. This material has made use of the El Gato high performance computer, supported by the U.S. National Science Foundation under grant No. 1228509. This paper makes use of the following ALMA data: ADS/JAO.ALMA \#2011.0.00957.S, \#2011.0.00958.S, \#2012.1.00844.S, and \#2012.1.00994.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile. 
The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The SPT is supported by the National Science Foundation through grant PLR-1248097, with partial support through PHY-1125897, the Kavli Foundation and the Gordon and Betty Moore Foundation grant GBMF 947. This work makes use of an extensive optical/NIR follow-up campaign, including \textit{Hubble Space Telescope} programs \#12659 and \#13614, Very Large Telescope programs 085.A-0608, 086.A-0797, 088.A-0902, 090.A-0503, 092.A-0480, 284.A-5029, and 285.A-5034, Gemini Observatory programs GS-2013B-Q-5 and GS-2013A-Q-33, and data gathered with the Magellan Telescopes and the \textit{Spitzer Space Telescope}. This research has made use of NASA's Astrophysics Data System. } \bibliographystyle{apj} \input{lensmodel.bbl} \clearpage
\section{Introduction} \begin{Assumption} Throughout this paper, $A$ is a non-zero commutative ring with identity, and $\mathbb{M}^1, \ldots, \mathbb{M}^d$ are non-zero additive subgroups of $\mathbb{R}$. For $i = 1,\ldots,d$ set $\mathbb{M}^i_{\geq 0} = \{m \in \mathbb{M}^i \; | \; m \geq 0\}$. \end{Assumption} We are interested in properties of monomial ideals in the polynomial ring $S=A[X_1,\ldots,X_d]$, that is, ideals generated by monomials. These ideals have deep applications to combinatorics; for instance, building from work of Hochster~\cite{hochster:cmrcsc} and Reisner~\cite{reisner:cmqpr}, Stanley uses monomial ideals to prove his upper bound theorem for simplicial spheres~\cite{stanley:ubccmr}. On the other hand, one can use the combinatorial aspects of these ideals to construct interesting examples and verify their properties. See, e.g., \cite{bruns:cmr, miller:cca, rogers:mid, stanley:cca, villarreal:ma} for some aspects of this. For small values of $d$, one can study a given monomial ideal $I\subseteq S$ visually. For instance, when $d=2$, one considers the set of points $(a_1,a_2)\in\mathbb Z^2$ such that $X_1^{a_1}X_2^{a_2}\in I$. This ``graph of $I$'' contains non-trivial information about $I$; for example, one can read certain decompositions of $I$ from the graph. Given the fact that these graphs are (in general) subsets of $\mathbb R^d$, one should be able to study these ideals using geometric techniques, as follows. To prove a result about a given monomial ideal $I$, prove it for monomial ideals $J$ that are ``close'' to $I$ in some suitable sense, and then prove that the closeness of $I$ to $J$ forces the conclusion to transfer from $J$ to $I$. The problem with this idea is the following: the ideal $I$ is defined by discrete data (e.g., the lattice points $(a_1,a_2)$ described above). Thus, a reasonable notion of ``closeness'' for classical monomial ideals based on the Euclidean metric in $\mathbb R^d$ will likely be trivial.
As a possible remedy for this defect, we switch perspectives from the discrete setting to a continuous one. We consider the semigroup ring $R = A[\mathbb{M}^1_{\geq 0}\times\cdots\times\mathbb{M}^d_{\geq 0}]$ which is the set of all (finite) $A$-linear combinations of \textit{monomials} $\mono{X}{m} = X_1^{m_1}X_2^{m_2}\cdots X_d^{m_d}$ where $\ul{m} = (m_1,m_2,\ldots,m_d) \in \mathbb{M}^1_{\geq 0}\times\cdots\times\mathbb{M}^d_{\geq 0}$. A \textit{monomial ideal} of $R$ is an ideal of $R$ that is generated by a set of monomials. (This includes the ideal $0=(\emptyset)R$.) For instance, in the case $\mathbb M^i=\mathbb R$, the monomials of $R$ correspond to the points of the non-negative orthant $\mathbb R^d_{\geq 0}$, and any monomial ideal $I\subseteq S$ induces a monomial ideal $IR\subseteq R$ since $S\subseteq R$.\footnote{It is worth noting that the ring $A[\mathbb{M}^1_{\geq 0}\times\cdots\times\mathbb{M}^d_{\geq 0}]$ has been studied previously, for instance, to construct interesting counterexamples to questions about non-noetherian rings; see, e.g., \cite{anderson:mdcad}. We are grateful to Jim Coykendall for teaching us about these constructions.} On the other hand, the case $\mathbb M^i=\mathbb Z$ recovers the monomial ideals of $S$. The main results of this paper are in \sref{mdim} where we study a version of the Krull dimension for this setting. We prove that the set of non-zero finitely generated monomial ideals in $R$ has the structure of a metric space in \tref{metric}. (For example, in the case $\mathbb M^i=\mathbb R$, this applies to the ideals of the form $IR$ where $I$ a non-zero monomial ideal of $S$.) Then in \cref{or130521a} we prove that our Krull dimension is lower semicontinuous with respect to this metric space structure. This suggests that one may be able to apply the geometric techniques described above in this setting, even to monomial ideals in $S$. 
In \sref{ec130510a}, we run this idea in reverse, in some sense, by showing how to use discrete techniques from $S$ to study monomial ideals in $R$ for the case $\mathbb M^i=\mathbb R$. Specifically, we apply techniques from \cite{paulsen:eiwg} to a special class of monomial ideals of $R$ that behave like edge ideals of weighted graphs. The main result of this section is \tref{hm130513a} which provides non-trivial decompositions of these ideals determined by objects that we call ``interval vertex covers''. In a sense, \sref{finite} consists of background material and examples. On the other hand, many of the results in this section are technically new, being versions of results from~\cite{ingebretson:dmirsr} for our more general context. \section{Monomial Ideals and their Decompositions} \label{sirred}\label{sfinite} This section contains foundational material for use in the rest of the paper. Most of the ideas are standard for the case $\mathbb{M}^i=\mathbb{Z}$, and the case $\mathbb M^i=\mathbb R$ is developed in \cite{ingebretson:dmirsr}. \begin{Assumption} Throughout this section, set $R = A[\mathbb{M}^1_{\geq 0}\times\cdots\times\mathbb{M}^d_{\geq 0}]$. \end{Assumption} \subsection*{Monomial Basics} \begin{defin}\label{dlcm} For $i = 1,\ldots,d$ set $\mathbb{M}^i_{> 0} = \{m \in \mathbb{M}^i \; | \; m > 0\}$ and $\mathbb{M}^i_{\infty \geq 0} = \mathbb{M}^i_{\geq 0} \cup \{ \infty \}$. A \textit{pure power} in $R$ is a monomial of the form $X_i^r$. For any subset $G \subseteq R,$ we let $\monset{G}$ denote the set of all monomials in $G$, so $\monset{G} = G \cap \monset{R}$. For $i=1,\ldots,d,$ we define\footnote{Despite this notation, note that 0 is not a monomial according to our definition.} $X_i^\infty = 0$. \end{defin} The following is a straightforward consequence of our definitions. \begin{fact}\label{f2.2} \label{f2.3} Fix a set $\{I_\lambda\}_{\lambda \in \Lambda}$ of monomial ideals of $R$. 
\begin{enumerate}[(a)] \item \label{item130521a} The monomial ideal $I_\lambda$ is generated by $\monset{I_\lambda}$, so we have $I_\lambda \subseteq I_\mu$ if and only if $\monset{I_\lambda} \subseteq \monset{I_\mu},$ and hence $I_\lambda=I_\mu$ if and only if $\monset{I_\lambda} = \monset{I_\mu}$. \item \label{item130521b} Given monomials $f = \mono{X}{r}$ and $g = \mono{X}{s}$ in $R,$ we have $g \in (f)R$ if and only if for all $i$ there exists $t_i \in \mathbb{M}^i_{\geq 0}$ such that $r_i + t_i = s_i$. When these conditions are satisfied, we have $g=fh$ where $h=\ul{X}^{\ul{t}}$. \item \label{item130521c} Given a monomial $f \in \monset{R}$ and a subset $S \subseteq \monset{R},$ we have $f \in (S)R$ if and only if $f \in (s)R$ for some $s \in S$. \item \label{item130521d} The sum $\sum_{\lambda \in \Lambda} I_\lambda$ and intersection $\bigcap_{\lambda \in \Lambda} I_\lambda$ are monomial ideals such that $\monset{\sum_{\lambda \in \Lambda}I_\lambda} = \bigcup_{\lambda \in \Lambda} \monset{I_\lambda}$ and $\monset{\bigcap_{\lambda \in \Lambda} I_\lambda} = \bigcap_{\lambda \in \Lambda} \monset{I_\lambda}$. \end{enumerate} \end{fact} \begin{ex}\label{ex130508a} As with monomial ideals in the polynomial ring $A[X_1,X_2]$, we can visualize a monomial ideal $I$ in $R=A[\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}]$ via $\monset{I}$. For instance, here is $\monset I$ where $I=(X_1X_2^a\mid a>1)R$. 
\ \begin{center} \begin{tikzpicture} \draw[->] (-0.2,0) -- (3.4,0); \draw[->] (0,-0.2) -- (0,3.4); \draw (1.5,-0.2) node[below,scale=.75]{$1$} -- (1.5,0.2); \draw (3,-0.2) node[below,scale=.75]{$2$} -- (3,0.2); \draw (-0.2,1.5) node[left,scale=.75]{$1$} -- (0.2,1.5); \draw (-0.2,3) node[left,scale=.75]{$2$} -- (0.2,3); \draw[fill,color=black!50] (1.5,1.5) -- (3.2,1.5) -- (3.2,3.2) -- (1.5,3.2) -- cycle; \draw (1.5,1.5) -- (1.5,3.2); \draw[color=white] (1.5,1.5) -- (3.2,1.5); \draw[dashed] (1.5,1.5) -- (3.2,1.5); \draw (1.5,1.5)[fill,color=white] circle (2pt); \draw (1.5,1.5) circle (2pt); \end{tikzpicture} \end{center} \end{ex} \begin{defin} Let $I$ be a monomial ideal of $R$ and suppose that $I = (\ul{X}^{\ul{\alpha}_1},\ldots,\ul{X}^{\ul{\alpha}_n})R$. We say that the list $\ul{X}^{\ul{\alpha}_1},\ldots,\ul{X}^{\ul{\alpha}_n}$ is an \textit{irredundant generating sequence} for $I$ if for all $i\neq j$ we have $\ul{X}^{\ul{\alpha}_i} \notin (\ul{X}^{\ul{\alpha}_j})R$. \end{defin} \begin{fact} As a consequence of \fref{2.2}, one checks readily that every finitely generated monomial ideal in $R$ has a unique irredundant monomial generating sequence. (Note, however, that $R$ may have monomial ideals that are not finitely generated.) \end{fact} The next result is proved as in \cite[Lemma~2.7]{ingebretson:dmirsr}, using \fref{2.3}. \begin{lem}\label{l2.7} For $t = 1,\ldots,l,$ let $\{K_{t,i_t}\}_{i_t=1}^{m_t}$ be a collection of monomial ideals of $R$. Then the following equalities hold: \[ \bigcap_{t=1}^l \sum_{i_t = 1}^{m_t} K_{t,i_t} = \sum_{i_1=1}^{m_1} \sum_{i_2 = 1}^{m_2} \cdots \sum_{i_l = 1}^{m_l} \bigcap_{t=1}^l K_{t,i_t} \] \[ \sum_{t=1}^l \bigcap_{i_t = 1}^{m_t} K_{t,i_t} = \bigcap_{i_1=1}^{m_1}\bigcap_{i_2=1}^{m_2}\cdots \bigcap_{i_l = 1}^{m_l} \sum_{t=1}^l K_{t,i_t} \] \end{lem} \subsection*{Generators of Intersections of Monomial Ideals} \ \noindent \fref{2.2}\eqref{item130521d} shows that an intersection of monomial ideals is again a monomial ideal. 
In this subsection, we explicitly describe a monomial generating sequence for any finite intersection of monomial ideals; see \pref{2.5}. This is key for many of our results, and it strongly relies on the assumption that each $\mathbb{M}^i$ is closed under subtraction. \begin{defin}\label{dlcmA} Let $\mon{X}{r}{1}, \ldots, \mon{X}{r}{k} \in \monset{R}$ with $\ul{r}_i = (r_{i,1},\ldots,r_{i,d})$. The \textit{least common multiple} of the $\mon Xri$ is $\lcm_{\begin{subarray}{l}1 \leq i \leq k\end{subarray}}(\mon{X}{r}{i}) = \mono{X}{p}$ where $\ul{p}$ is defined as \[p_j = \inf\{ m \in \mathbb{M}^j_{\geq 0} \; | \; r_{i,j} + t_i = m\ \text{ for some } t_i \in \mathbb{M}^j_{\geq 0}\text{, for } i=1,\ldots,k\}.\] \end{defin} \begin{lem}\label{llcmsub} Given $\mon{X}{r}{1},\ldots,\mon{X}{r}{k} \in \monset R$, we have that $\lcm_{1 \leq i \leq k}(\mon{X}{r}{i}) = \mono{X}{p}$ where $p_i = \max_{1 \leq j \leq k} \{r_{j,i}\}$. \begin{proof} We prove the case $k=2$; the general case is handled similarly. Fix $\mono{X}{r}$ and $\mono{X}{q} \in \monset R$, and for $i=1,\ldots,d,$ set \[L_i = \{m \in \mathbb{M}^i_{\geq 0} \; | \; r_i + \alpha = m =q_i + \beta \text{ for some } \alpha,\beta \in \mathbb{M}^i_{\geq 0}\}.\] We can rewrite each $L_i$ as \begin{align*} L_i &= \{m \in \mathbb{M}^i_{\geq 0} \; | \; m-r_i \geq 0 \text{ and } m-q_i \geq 0 \} \\ &= \{m \in \mathbb{M}^i_{\geq 0} \; | \; m \geq r_i \text{ and } m \geq q_i \} \\ &= \{m \in \mathbb{M}^i_{\geq 0} \; | \; m \geq \max\{r_i,q_i\} \}. \end{align*} The first equality follows from the fact that $\mathbb{M}^i$ is closed under subtraction, and the other equalities are straightforward. Thus, by definition, we have $\lcm(\mono{X}{r},\mono{X}{q}) = \mono{X}{s},$ where $s_i = \inf(L_i) = \max\{r_i,q_i\}$. \end{proof} \end{lem} The next result is proved like~\cite[Theorem 2.5]{ingebretson:dmirsr}, using \lref{lcmsub}.
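Since \lref{lcmsub} reduces the lcm to a componentwise maximum of exponent vectors, these operations are easy to experiment with on a computer. The following sketch (our own illustrative helper names, not part of the theory above; monomials are encoded as tuples of non-negative real exponents, covering the case $\mathbb M^i=\mathbb R$) records the divisibility criterion of \fref{2.2} and the lcm:

```python
# Monomials of R = A[M^1_{>=0} x ... x M^d_{>=0}] encoded as tuples of
# non-negative real exponents; illustrative helpers only.

def divides(r, s):
    """X^r divides X^s iff s_i - r_i >= 0 for all i; closure of each
    M^i under subtraction makes this the whole condition."""
    return all(si >= ri for ri, si in zip(r, s))

def lcm(*mons):
    """Least common multiple: the componentwise maximum."""
    return tuple(max(m[i] for m in mons) for i in range(len(mons[0])))

print(lcm((1.0, 0.5), (0.25, 2.0)))      # (1.0, 2.0)
print(divides((1.0, 0.5), (1.5, 0.75)))  # True: exponents only grow
print(lcm((1.0, 0.0), (0.0, 0.5)))       # (1.0, 0.5)
```

The last line computes the generator of the intersection of two principal ideals, e.g., $(X_1)R\cap(X_2^{1/2})R=(X_1X_2^{1/2})R$.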
\begin{prop}\label{p2.5} Given subsets $S_1,\ldots, S_k \subseteq \monset{R},$ we have \[ \bigcap_{i=1}^k(S_i)R = \left(\left\{\lcm_{1\leq i \leq k} (f_i) \; | \; f_i \in S_i \text{ for } i = 1,\ldots, k \right\}\right)R.\] \end{prop} \subsection*{M-Irreducible Monomial Ideals} \ \noindent Here we characterize the monomial ideals that cannot be decomposed as a non-trivial, finite intersection of monomial ideals; see \pref{classif1}. \begin{notation} Let $\varepsilon \in \{0,1\}$. Given $r \in \mathbb{M}^i$ and $\alpha \in \mathbb{R},$ we define \[ r \geq_\varepsilon \alpha \textnormal{ provided that } \begin{cases} r \geq \alpha & \text{if $\varepsilon = 0$} \\ r > \alpha & \text{if $\varepsilon = 1$}. \end{cases} \] Given $s \in \mathbb{M}^i_{\infty \geq 0},$ we define \[s \geq_{\varepsilon} \infty \text{ provided that } s = \infty.\] Given $\ul{\alpha} = (\alpha_1,\ldots,\alpha_d) \in \mathbb{R}_{\infty \geq 0}^d$ and $\ul{\varepsilon} = (\varepsilon_1,\ldots,\varepsilon_d) \in \{0,1\}^d,$ we set \[ J_{\ul{\alpha},\ul{\varepsilon}} = (\{ X_i^{r_i} \; | \; r_i \in \mathbb{M}^i_{\infty \geq 0} \text{ and } r_i \geq_{\varepsilon_i} \alpha_i \text{ for } i = 1,\ldots,d\})R. \] Note that $J_{\ul{\alpha},\ul{\varepsilon}}$ is generated by pure powers in $R$. \end{notation} \begin{ex}\label{ex130508g} In $R=A[\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}]$ we illustrate the ideals $J_{(1,1),(0,1)}$ and $J_{(1,1),(0,0)}=(X_1,X_2)R$. 
\ \begin{center} \begin{tikzpicture} \draw (1.5,-0.2) node[below,scale=.75]{$1$} -- (1.5,0.2); \draw (3,-0.2) node[below,scale=.75]{$2$} -- (3,0.2); \draw (-0.2,1.5) node[left,scale=.75]{$1$} -- (0.2,1.5); \draw (-0.2,3) node[left,scale=.75]{$2$} -- (0.2,3); \draw[fill,color=black!50] (0,1.5) -- (1.5,1.5) -- (1.5,0) -- (3.2,0) -- (3.2,3.2) -- (0,3.2) -- cycle; \draw[->] (-0.2,0) -- (3.4,0); \draw[->] (0,-0.2) -- (0,3.4); \draw[color=white] (0,1.5) -- (1.5,1.5); \draw[dashed] (0,1.5) -- (1.5,1.5); \draw (1.5,0) -- (1.5,1.5); \end{tikzpicture} \qquad \qquad \qquad \begin{tikzpicture} \draw (1.5,-0.2) node[below,scale=.75]{$1$} -- (1.5,0.2); \draw (3,-0.2) node[below,scale=.75]{$2$} -- (3,0.2); \draw (-0.2,1.5) node[left,scale=.75]{$1$} -- (0.2,1.5); \draw (-0.2,3) node[left,scale=.75]{$2$} -- (0.2,3); \draw[fill,color=black!50] (0,1.5) -- (1.5,1.5) -- (1.5,0) -- (3.2,0) -- (3.2,3.2) -- (0,3.2) -- cycle; \draw[->] (-0.2,0) -- (3.4,0); \draw[->] (0,-0.2) -- (0,3.4); \draw (0,1.5) -- (1.5,1.5); \draw (1.5,0) -- (1.5,1.5); \end{tikzpicture} \end{center} \end{ex} \begin{defin}\label{d3.7} A monomial ideal $I \subseteq R$ is \textit{m-irreducible} (short for \textit{monomial-irreducible}) provided that for all monomial ideals $J$ and $K$ of $R$ such that $I = J \cap K,$ either $I = J$ or $I = K$. \end{defin} The following characterization of m-irreducible monomial ideals is proved as in \cite[Theorem 3.9]{ingebretson:dmirsr} using \fref{2.2} and \pref{2.5}. \begin{prop}\label{pclassif1} For a monomial ideal $I \subseteq R$, the following conditions are equivalent. \begin{enumerate}[\rm(i)] \item $I$ is generated by pure powers of a subset of the variables $X_1,\ldots,X_d$. \item There exist $\ul{\alpha} \in \mathbb{R}_{\infty \geq 0}^d$ and $\ul{\varepsilon} \in \{0,1\}^d$ such that $I = J_{\ul{\alpha},\ul{\varepsilon}}$. \item $I$ is m-irreducible.
\end{enumerate} \end{prop} \begin{ex}\label{ex130508i} \pref{classif1} implies that the ideals in Example~\ref{ex130508g} are m-irreducible. It is worth noting that, even though the following graph has roughly the same shape as those in Example~\ref{ex130508g} \ \begin{center} \begin{tikzpicture} \draw (1.5,-0.2) node[below,scale=.75]{$1$} -- (1.5,0.2); \draw (3,-0.2) node[below,scale=.75]{$2$} -- (3,0.2); \draw (-0.2,1.5) node[left,scale=.75]{$1$} -- (0.2,1.5); \draw (-0.2,3) node[left,scale=.75]{$2$} -- (0.2,3); \draw[fill,color=black!50] (0,1.5) -- (1.5,1.5) -- (1.5,0) -- (3.2,0) -- (3.2,3.2) -- (0,3.2) -- cycle; \draw[->] (-0.2,0) -- (3.4,0); \draw[->] (0,-0.2) -- (0,3.4); \draw[color=white] (1.5,0) -- (1.5,0.75); \draw[dashed] (1.5,0) -- (1.5,0.75); \draw (1.5,0.75) -- (1.5,1.5); \draw (0,1.5) -- (1.5,1.5); \draw (1.5,0.75)[fill,color=black] circle (2pt); \end{tikzpicture} \end{center} the ideal $I=(X_1^a,X_2,X_1X_2^{1/2}\mid a>1)R$ it represents in $R=A[\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}]$ is not m-irreducible. This follows from \pref{classif1} because $I$ cannot be generated by pure powers of the variables. One also deduces this by definition using the decomposition $$I=(X_1,X_2)R\cap(X_1^a,X_2^{1/2}\mid a>1)R=J_{(1,1),(0,0)}\cap J_{(1,1/2),(1,0)}.$$ This decomposition is non-trivial, since \fref{2.3} implies that we have $X_1\in J_{(1,1),(0,0)}\smallsetminus J_{(1,1/2),(1,0)}$ and $X_2^{1/2}\in J_{(1,1/2),(1,0)}\smallsetminus J_{(1,1),(0,0)}$. \end{ex} \subsection*{M-Prime Monomial Ideals} \ \noindent Here we characterize the ideals of $R$ that are prime with respect to monomials, for use in \sref{mdim}. \begin{defin}\label{dmprime} A monomial ideal $P \subsetneq R$ is \textit{m-prime} (short for \textit{monomial-prime}) provided that for all $f,g \in \monset{R},$ if $f\cdot g \in P,$ then $f \in P$ or $g \in P$. Given a subset $T \subseteq \{1,\ldots,d\}$, set \[Q_T = (X_i^n \; | \; i \in T \text{ and } n \in \mathbb{M}^i_{> 0})R. 
\] \end{defin} \begin{prop}\label{ppure} For a monomial ideal $I \subseteq R$, the following conditions are equivalent. \begin{enumerate}[\rm(i)] \item $I$ is m-prime. \item There exists $T \subseteq \{1,\ldots,d\}$ such that $I = Q_T$. \item $I = J_{\ul\alpha,\ul 1}$ where $\alpha_i \in \{0,\infty\}$ for all $i$ and $\ul{1}=(1,\ldots,1)$. \end{enumerate} \end{prop} \begin{proof} (i) $\Rightarrow$ (ii): Assume that $I$ is m-prime, and set $$T=\{i\mid\text{$X_i^n\in I$ for some $n\in\mathbb M^i_{>0}$}\}.$$ First, observe that if $i\in T$, then $X_i^a\in I$ for all $a\in\mathbb M^i_{>0}$. Indeed, by definition of $T$, there is an element $n\in\mathbb M^i_{>0}$ such that $X_i^n\in I$. Fix a positive integer $k$ such that $ak > n$. Since $\mathbb{M}^i$ is closed under subtraction, we have $ak -n \in \mathbb{M}^i_{> 0}$. It follows that $X_i^{ak}=X_i^{ak - n}X_i^n \in I$, so the fact that $I$ is m-prime implies that $X_i^a\in I$, as claimed. Now we show that $I=Q_T$. For the containment $I\supseteq Q_T$, it suffices to show that each generator $X_i^a$ of $Q_T$ is in $I$; here we have $i\in T$ and $a\in\mathbb M^i_{>0}$. This follows from the above observation. For the reverse containment $I\subseteq Q_T$, let $X_1^{\alpha_1}\cdots X_d^{\alpha_d}\in\monset I$. Since $I$ is m-prime, there is an index $i$ such that $\alpha_i>0$ and $X_i^{\alpha_i}\in I$. It follows that $i\in T$, so $X_1^{\alpha_1}\cdots X_d^{\alpha_d}\in(X_i^{\alpha_i})R\subseteq Q_T$. (ii) $\Rightarrow$ (iii): Let $T \subseteq \{1,\ldots,d\}$ and let $I = Q_T$. For $i=1,\ldots,d$ set $$\alpha_i=\begin{cases} 0 & \text{if } i \in T \\ \infty &\text{if } i \not \in T.\end{cases} $$ It is straightforward to show that $I = Q_T=J_{\ul\alpha,\ul 1}$. (iii) $\Rightarrow$ (i): Let $I = J_{\ul\alpha,\ul 1}$ such that $\alpha_i \in \{0,\infty\}$ for $i=1,\ldots,d$.
To show that $I$ is m-prime, let $\ul\gamma,\ul\beta\in\mathbb{M}^1_{\geq 0}\times\cdots\times\mathbb{M}^d_{\geq 0}$ such that $\mono{X}{\gamma}\mono{X}{\beta} \in I$. Since $\mono{X}{\gamma}\mono{X}{\beta}$ must be a multiple of one of the generators of $J_{\ul\alpha,\ul 1}$, there exists an $i$ such that $\alpha_i = 0$ and $\gamma_i + \beta_i > 0.$ Hence, either $\gamma_i > 0$ or $\beta_i >0.$ Suppose without loss of generality that $\gamma_i > 0$. Then $X_i^{\gamma_i} \in I,$ which implies that $\mono{X}{\gamma} \in I.$ Hence, $I$ is m-prime. \end{proof} \subsection*{M-Irreducible Decompositions} \ \noindent Here we characterize the monomial ideals that can be expressed as finite intersections of monomial irreducible ideals; see \pref{classif2}. \begin{defin} \label{d4.1} Let $I \subseteq R$ be a monomial ideal. An \textit{m-irreducible decomposition} of $I$ is a decomposition $I=\bigcap_{\lambda \in \Lambda} I_\lambda$ where each $I_\lambda$ is an m-irreducible monomial ideal of $R$. If the index set $\Lambda$ is finite, we say that $I = \bigcap_{\lambda \in \Lambda} I_\lambda$ is a \textit{finite m-irreducible decomposition}. An m-irreducible decomposition is \textit{irredundant} if for all distinct $\lambda,\mu\in\Lambda$ we have $I_\lambda\not\subseteq I_\mu$. \end{defin} \begin{notation} Given $\ul{\alpha} \in \mathbb{R}_{\infty \geq 0}^d$ and $\ul{\varepsilon} \in \{0,1\}^d$, we set \[ I_{\ul{\alpha},\ul{\varepsilon}} = (\{ \mono{X}{r} \; | \; i = 1,\ldots,d \text{, } r_i \geq_{\varepsilon_i} \alpha_i \text{ and } r_i \in \mathbb{M}^i_{\infty \geq 0} \})R. \] Note that $I_{\ul{\alpha},\ul{\varepsilon}}$ is not in general generated by pure powers of the variables, so it is different from $J_{\ul{\alpha},\ul{\varepsilon}}$. \end{notation} \begin{ex}\label{e4.3} With the zero-vector $\ul{0} = (0,\ldots,0)$, we have $I_{\ul{\alpha},\ul{0}} = (\mono{X}{\alpha})R$. 
From this it follows that if $I$ is a finitely generated monomial ideal in $R$, then $I$ is a finite sum of ideals of the form $I_{\ul{\alpha},\ul{0}}$. Indeed, let $\underline{X}^{\underline\alpha_1},\ldots,\underline{X}^{\underline\alpha_n}$ be a monomial generating sequence for $I$. Then we have $$I=(\underline{X}^{\underline\alpha_1},\ldots,\underline{X}^{\underline\alpha_n})R=\sum_{i=1}^n(\underline{X}^{\underline\alpha_i})R=\sum_{i=1}^nI_{\ul{\alpha}_i,\ul{0}}.$$ On the other hand, if $\alpha_i = \infty$ for any $i,$ then $I_{\ul{\alpha},\ul{\varepsilon}} = 0$. In $R=A[\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}]$ the ideal $I_{(1,1),(0,1)}$ is graphed in Example~\ref{ex130508a}. \end{ex} We think of the ideal $I_{\ul{\alpha},\ul{\varepsilon}}$ as ``almost principal'' since it is very close to the principal ideal $(\ul X^{\ul\alpha})R$. Hence, a finite sum of ideals of the form $I_{\ul{\alpha},\ul{\varepsilon}}$ is ``almost finitely generated''. \begin{prop}\label{pclassif2} A monomial ideal $I \subseteq R$ has a finite m-irreducible decomposition if and only if it can be expressed as a finite sum of ideals of the form $\ideal{I}{\alpha}{\varepsilon}$. If $I$ has a finite m-irreducible decomposition, then $I$ has a unique irredundant finite m-irreducible decomposition. \begin{proof} The first statement is proved as in \cite[Theorem 4.12]{ingebretson:dmirsr} using \pref{classif1}. For existence in the second statement, let $I=\cap_{i=1}^mP_i$ be a finite m-irreducible decomposition. If this decomposition is irredundant, then there is nothing to show. Assume that the decomposition is redundant, so we have $P_k\subseteq P_j$ for some $k\neq j$. It follows that $I=\cap_{i\neq j}P_i$ is another finite m-irreducible decomposition. Continue removing redundancies in this way. The process terminates in finitely many steps since the original decomposition is finite.
To show uniqueness, let $I=\cap_{i=1}^mP_i=\cap_{j=1}^nQ_j$ be two irredundant finite m-irreducible decompositions. For $s=1,\ldots,m$ it follows that $P_s\supseteq I=\cap_{j=1}^nQ_j$, so \lref{2.7} implies that $$P_s=P_s+\bigcap_{j=1}^nQ_j=\bigcap_{j=1}^n(P_s+Q_j).$$ Since $P_s$ is m-irreducible, it follows that $P_s=P_s+Q_t\supseteq Q_t$ for some $t$. Similarly, there is an index $u$ such that $Q_t\supseteq P_u$. Hence, the irredundancy of the intersection $\bigcap_{i=1}^mP_i$ implies that $P_s=P_u$, and thus $P_s=Q_t$. We conclude that $\{P_1,\ldots,P_m\}\subseteq\{Q_1,\ldots,Q_n\}$. Symmetrically, we have $\{P_1,\ldots,P_m\}\supseteq\{Q_1,\ldots,Q_n\}$, so the decompositions are equal. \end{proof} \end{prop} \begin{cor}\label{cor130513a} Every finitely generated monomial ideal $I \subseteq R$ has a finite m-irreducible decomposition. \end{cor} \begin{proof} This follows from \eref{4.3} and \pref{classif2}. \end{proof} \begin{ex}\label{ex130508c} Here we illustrate an m-irreducible decomposition of the ideal $I=I_{(1,1),(0,1)}$ in $R=A[\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}]$ from Example~\ref{ex130508a}. As with such decompositions in the standard polynomial ring $A[X_1,X_2]$, the key is to use the graph of the monomial set $\monset I$ to find the decomposition. The first diagram in the following display is the graph of $\monset{I}$. The second one shows how we use the boundary lines from $\monset I$ to write $\monset I$ as the intersection $\monset J\cap\monset K$ where $J$ and $K$ are generated by pure powers of $X_2$ and $X_1$, respectively. The third and fourth diagrams show $J=J_{(\infty,1),(0,1)}$ and $K=J_{(1,\infty),(0,1)}$ separately.
\ \begin{center} \begin{tikzpicture} \draw[->] (-0.2,0) -- (3.4,0); \draw[->] (0,-0.2) -- (0,3.4); \draw (1.5,-0.2) node[below,scale=.75]{$1$} -- (1.5,0.2); \draw (3,-0.2) node[below,scale=.75]{$2$} -- (3,0.2); \draw (-0.2,1.5) node[left,scale=.75]{$1$} -- (0.2,1.5); \draw (-0.2,3) node[left,scale=.75]{$2$} -- (0.2,3); \draw[fill,color=black!50] (1.5,1.5) -- (3.2,1.5) -- (3.2,3.2) -- (1.5,3.2) -- cycle; \draw (1.5,1.5) -- (1.5,3.2); \draw[color=white] (1.5,1.5) -- (3.2,1.5); \draw[dashed] (1.5,1.5) -- (3.2,1.5); \draw (1.5,1.5)[fill,color=white] circle (2pt); \draw (1.5,1.5) circle (2pt); \end{tikzpicture} \qquad \qquad \qquad \begin{tikzpicture} \draw (1.5,-0.2) node[below,scale=.75]{$1$} -- (1.5,0.2); \draw (3,-0.2) node[below,scale=.75]{$2$} -- (3,0.2); \draw (-0.2,1.5) node[left,scale=.75]{$1$} -- (0.2,1.5); \draw (-0.2,3) node[left,scale=.75]{$2$} -- (0.2,3); \draw[fill,color=black!15] (0,1.5) -- (3.2,1.5) -- (3.2,3.2) -- (0,3.2) -- cycle; \draw[fill,color=black!35] (1.5,0) -- (3.2,0) -- (3.2,3.2) -- (1.5,3.2) -- cycle; \draw[fill,color=black!50] (1.5,1.5) -- (3.2,1.5) -- (3.2,3.2) -- (1.5,3.2) -- cycle; \draw[->] (-0.2,0) -- (3.4,0); \draw[->] (0,-0.2) -- (0,3.4); \draw (1.5,0) -- (1.5,3.2); \draw[color=white] (0,1.5) -- (3.2,1.5); \draw[dashed] (0,1.5) -- (3.2,1.5); \draw (1.5,1.5)[fill,color=white] circle (2pt); \draw (1.5,1.5) circle (2pt); \end{tikzpicture} \end{center} \ \begin{center} \begin{tikzpicture} \draw (1.5,-0.2) node[below,scale=.75]{$1$} -- (1.5,0.2); \draw (3,-0.2) node[below,scale=.75]{$2$} -- (3,0.2); \draw (-0.2,1.5) node[left,scale=.75]{$1$} -- (0.2,1.5); \draw (-0.2,3) node[left,scale=.75]{$2$} -- (0.2,3); \draw[fill,color=black!15] (0,1.5) -- (3.2,1.5) -- (3.2,3.2) -- (0,3.2) -- cycle; \draw[->] (-0.2,0) -- (3.4,0); \draw[->] (0,-0.2) -- (0,3.4); \draw[color=white] (0,1.5) -- (3.2,1.5); \draw[dashed] (0,1.5) -- (3.2,1.5); \end{tikzpicture} \qquad \qquad \qquad \begin{tikzpicture} \draw (1.5,-0.2) node[below,scale=.75]{$1$} -- (1.5,0.2); 
\draw (3,-0.2) node[below,scale=.75]{$2$} -- (3,0.2); \draw (-0.2,1.5) node[left,scale=.75]{$1$} -- (0.2,1.5); \draw (-0.2,3) node[left,scale=.75]{$2$} -- (0.2,3); \draw[fill,color=black!35] (1.5,0) -- (3.2,0) -- (3.2,3.2) -- (1.5,3.2) -- cycle; \draw[->] (-0.2,0) -- (3.4,0); \draw[->] (0,-0.2) -- (0,3.4); \draw (1.5,0) -- (1.5,3.2); \end{tikzpicture} \end{center} One checks readily that $I=J\cap K$, say, using \pref{2.5}. Note that Example~\ref{ex130508i} provides another m-irreducible decomposition. Moreover, it shows that one needs to be careful when using these diagrams to generate decompositions, as the rough shape of the diagram (ignoring the distinction between dashed and solid lines, etc.) does not contain enough information. \end{ex} We conclude this section with a discussion of (possibly infinite) irredundant m-irreducible decompositions, beginning with existence. \newcommand{\lola}[1]{\mathfrak{C}_{#1}} \begin{prop}\label{p4.14} Let $I$ be a monomial ideal in $R$, and let $\lola{I}$ denote the set of m-irreducible monomial ideals of $R$ that contain $I$. Let $\lola{I}'$ denote the set of minimal elements of $\lola{I}$ with respect to containment. \begin{enumerate}[\rm(a)] \item \label{p4.14a} For every $J\in \lola{I}$, there is an ideal $J'\in\lola{I}'$ such that $J'\subseteq J$. \item \label{p4.14b} With $\ul{1} = (1,\ldots,1)$, we have the following m-irreducible decompositions $$I = \bigcap_{\mono{X}{r} \not \in I} J_{\ul{r},\ul{1}} =\bigcap_{J\in\lola{I}} J=\bigcap_{J\in\lola{I}'} J.$$ The third decomposition is irredundant. \end{enumerate} \end{prop} \begin{proof} \eqref{p4.14a} Let $J\in\lola I$, and let $\lola{J,I}$ denote the set of ideals $K\in\lola{I}$ contained in $J$. By Zorn's Lemma, it suffices to show that every chain $\mathfrak T$ in $\lola{J,I}$ has a lower bound in $\lola{J,I}$. To this end, it suffices to show that the ideal $L:=\cap_{K\in\mathfrak T}K$ is m-irreducible. 
If the chain $\mathfrak T$ has a minimal element, then $L$ is the minimal element, hence it is m-irreducible. Thus, we assume that $\mathfrak T$ does not have a minimal element. Fact~\ref{f2.2}\eqref{item130521d} shows that $L$ is a monomial ideal of $R$, and the containments $L\subseteq J\subsetneq R$ imply that $L\neq R$. Thus, to show that $L$ is m-irreducible, let $M$ and $N$ be monomial ideals of $R$ such that $L=M\cap N$; we need to show that $L=M$ or $L=N$. Since we have $L=M\cap N\subseteq M$, and similarly $L\subseteq N$, it suffices to show that $L\supseteq M$ or $L\supseteq N$. For each $K\in\mathfrak T$, we have $K\supset L=M\cap N$, so Lemma~\ref{l2.7} implies that $$K=K+L=K+(M\cap N)=(K+M)\cap(K+N).$$ The fact that $K$ is m-irreducible implies that either $K=K+M\supseteq M$ or $K=K+N\supseteq N$. Case 1. For every $K\in\mathfrak T$, there is a $K'\in\mathfrak T$ such that $K\supseteq K'\supseteq M$. In this case, it follows that $L=\cap_{K\in\mathfrak T}K\supseteq M$, as desired. Case 2. There is an ideal $K\in\mathfrak T$ such that for every $K'\in\mathfrak T$ with $K\supseteq K'$, one has $K'\not\supseteq M$. (Note that the fact that $\mathfrak T$ does not have a minimal element implies that at least one such $K'$ exists.) From the paragraph before Case 1, we conclude that for every $K'\in\mathfrak T$ with $K\supseteq K'$, one has $K'\supseteq N$. It now follows that $L\supseteq N$, as desired. \eqref{p4.14b} If $I=R$, then the desired conclusions are trivial since the empty intersection of ideals of $R$ is itself $R$. Thus, we assume that $I\neq R$. The equality $I = \bigcap_{\mono{X}{r} \not \in I} J_{\ul{r},\ul{1}}$ is proved like \cite[Proposition~4.14]{ingebretson:dmirsr}. For each monomial $\mono{X}{r} \not \in I$, it follows that $J_{\ul{r},\ul{1}}\in\lola{I}$, so we have the first containment in the following display.
\begin{align*} I &= \bigcap_{\mono{X}{r} \not \in I} J_{\ul{r},\ul{1}} \supseteq \bigcap_{J\in\lola{I}} J \supseteq \bigcap_{J\in\lola{I}'} J \supseteq I \end{align*} The second containment follows from part~\eqref{p4.14a}, and the third containment follows from the definition of $\lola{I}'$. This establishes the desired decompositions. Finally, the decomposition $\bigcap_{J\in\lola{I}'} J$ is irredundant because there are no proper containments between minimal elements of $\lola I$, by definition. \end{proof} The following example shows that infinite irredundant m-irreducible decompositions need not be unique. \begin{ex}\label{ex131224a} Set $d=2$ and $ I= \left( \{ X^r Y^{1-r} \mid 0 \leq r \leq 1 \} \right)R$ with $\mathbb M^i=\mathbb R$ for $i=1,2$. In~\cite[Example 4.13]{ingebretson:dmirsr}, it is shown that $I$ does not admit a finite m-irreducible decomposition. However, it is straightforward to show that the following m-irreducible decompositions are irredundant and distinct: $$I=\bigcap_{0\leq r\leq 1}J_{(r,1-r),(1,1)}=\bigcap_{0\leq r\leq 1}J_{(r,1-r),(1,0)}.$$ Moreover, one can use this idea to construct infinitely many distinct irredundant m-irreducible decompositions of $I$. Indeed, for every subset $S$ of the closed interval $[0,1]$, we have $$I=\left(\bigcap_{r\in S}J_{(r,1-r),(1,1)}\right)\bigcap\left(\bigcap_{r\in[0,1]\smallsetminus S}J_{(r,1-r),(1,0)}\right).$$ \end{ex} \section{An Extended Example} \label{sec130510a} Here we show how to use discrete techniques from~\cite{paulsen:eiwg} to compute some decompositions in our setting. This section's main result is \tref{hm130513a}. \begin{Assumption}\label{a130510a} Throughout this section, $I$ is an ideal in the ring $R = A[\mathbb{R}_{\geq 0}\times\cdots\times\mathbb{R}_{\geq 0}]$ generated by a non-empty set of monomials of the form $X_i^aX_j^a$ with $i\neq j$ and $a\in \mathbb{R}_{> 0}$.
Also, we consider the standard polynomial ring $S=A[X_1,\ldots,X_d]$. Let $\Omega$ denote the following set of intervals: $$\Omega=\{(a,\infty)\mid a\in\mathbb R_{\geq 0}\}\cup \{[b,\infty)\mid b\in\mathbb R_{> 0}\}.$$ \end{Assumption} \begin{ex}\label{ex130508d} In the case $d=3$, we may consider the ideal $$I=(X_1^aX_2^a,X_2^bX_3^b\mid a\geq 1,b>2)R=(X_1X_2,X_2^bX_3^b\mid b>2)R.$$ By \pref{2.5} it is routine to verify the irredundant decomposition \begin{align*} I&=(X_1,X_2^b\mid b>2)R\cap(X_1,X_3^b\mid b>2)R\cap(X_2)R\\ &=J_{(1,2,\infty),(0,1,0)}\cap J_{(1,\infty,2),(0,0,1)}\cap J_{(\infty,1,\infty),(0,0,0)}. \end{align*} \end{ex} \begin{notation}\label{notn130512a} Define $\Gamma$ to be the finite simple graph with vertex set $V=\{1,\ldots,d\}$ and edge set $$E=\{ij\mid \text{$i\neq j$ and $X_i^aX_j^a\in I$ for some $a>0$}\}$$ where $ij=\{i,j\}$. For each $ij\in E$, set \begin{align*} S(ij)&=\{a>0\mid X_i^aX_j^a\in I\}, \\ s(ij)&=\inf S(ij),\\ \varepsilon(ij)&=\begin{cases} 0 & \text{if $s(ij)\in S(ij)$} \\ 1 & \text{if $s(ij)\notin S(ij)$.}\end{cases} \end{align*} This defines functions $s\colon E\to\mathbb R_{\geq 0}$ and $\varepsilon\colon E\to\{0,1\}$. \end{notation} \begin{ex}\label{ex130510a} Continue with the ideal $I$ from Example~\ref{ex130508d}. The graph $\Gamma$ in this case is $$\xymatrix{1\ar@{-}[r] &2\ar@{-}[r]&3.}$$ The values of $S$, $s$, and $\varepsilon$ are \begin{align*} S(12)&=[1,\infty)& S(23)&=(2,\infty) \\ s(12)&=1 & s(23)&=2 \\ \varepsilon(12)&=0&\varepsilon(23)&=1. \end{align*} \end{ex} \begin{fact}\label{fact130510a} For each $ij\in E$, the set $S(ij)$ is an interval. Indeed, if $a\in S(ij)$, then $X_i^aX_j^a\in I$, so for all $r>0$, we have $X_i^{r+a}X_j^{r+a}=X_i^{r}X_j^{r}X_i^{a}X_j^{a}\in I$, implying that $r+a\in S(ij)$.
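The data $s(ij)$ and $\varepsilon(ij)$ for the running example can be checked mechanically. The following Python sketch is illustrative only: the $(\text{infimum},\text{attained})$ encoding of each interval $S(ij)$ and all function names are ad hoc choices, not part of the formal development.

```python
# Running example with d = 3: I = (X1*X2, X2^b*X3^b for b > 2)R.
# Each S(ij) is recorded as (infimum, attained), an ad hoc encoding.
generators = {
    frozenset({1, 2}): (1.0, True),   # S(12) = [1, oo): infimum attained
    frozenset({2, 3}): (2.0, False),  # S(23) = (2, oo): infimum not attained
}

def s(i, j):
    """s(ij) = inf S(ij)."""
    return generators[frozenset({i, j})][0]

def eps(i, j):
    """epsilon(ij) = 0 if s(ij) lies in S(ij), and 1 otherwise."""
    return 0 if generators[frozenset({i, j})][1] else 1

def in_S(i, j, a):
    """Membership a in S(ij), using that S(ij) is [s, oo) or (s, oo)."""
    inf, attained = generators[frozenset({i, j})]
    return a > inf or (attained and a == inf)
```

Running this reproduces the table of values in the example: $s(12)=1$, $\varepsilon(12)=0$, $s(23)=2$, $\varepsilon(23)=1$.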
Moreover, it is straightforward to show that $$S(ij)=\begin{cases} [s(ij),\infty) & \text{if $\varepsilon(ij)=0$} \\ (s(ij),\infty) & \text{if $\varepsilon(ij)=1$.}\end{cases} $$ In particular, we have a function $S\colon E\to\Omega$. The ideal $I$ is a finite sum $\sum_{ij\in E}I_{\underline\alpha(ij),\underline\epsilon(ij)}$. This essentially follows from the previous paragraph, with the following definitions of $\underline\alpha(ij)$ and $\underline\epsilon(ij)$: \begin{align*} \alpha(ij)_k&=\begin{cases} \infty & \text{if $k\notin\{i,j\}$} \\ s(ij) & \text{if $k\in\{i,j\}$}\end{cases} & \epsilon(ij)_k&=\begin{cases} 1 & \text{if $k\notin\{i,j\}$} \\ \varepsilon(ij) & \text{if $k\in\{i,j\}$.}\end{cases} \end{align*} \pref{classif2} implies that $I$ has a finite m-irreducible decomposition. \end{fact} \begin{ex}\label{ex130510e} With the ideal $I$ from Example~\ref{ex130508d}, we have $$I=I_{(1,1,\infty),(0,0,1)}+I_{(\infty,2,2),(1,1,1)}.$$ \end{ex} \begin{defin}\label{defn130510a} A \textit{vertex cover} of the graph $\Gamma$ is a subset $W\subseteq V$ such that for all $ij\in E$ we have either $i\in W$ or $j\in W$. A vertex cover is \textit{minimal} if it is minimal in the set of all vertex covers with respect to inclusion. \end{defin} \begin{ex}\label{ex130510b} Continue with the ideal $I$ from Example~\ref{ex130508d}. The graph $\Gamma$ in this case has two minimal vertex covers, namely $\{1,3\}$ and $\{2\}$. It has the following non-minimal vertex covers: $\{1,2\}$, $\{2,3\}$, and $\{1,2,3\}$. We will see below that the irredundant m-irreducible decomposition of $I$ is given by the vertex covers $\{1,2\}$, $\{1,3\}$, and $\{2\}$ with some additional data. \end{ex} The work from~\cite{paulsen:eiwg} takes its cue from the following decomposition result that we know from~\cite[Proposition 6.1.16]{villarreal:ma}. \begin{fact}\label{fact130610a} The \textit{edge ideal} of the finite simple graph $\Gamma$ is the ideal $I(\Gamma)=(X_iX_j\mid ij\in E)S$. 
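Minimal vertex covers such as those in the example above can be enumerated by brute force on small graphs. The following Python sketch is illustrative (the helper name is ours, not from the cited sources); it relies on the fact that any non-minimal cover strictly contains a smaller cover found earlier in the size-ordered search.

```python
from itertools import combinations

def minimal_vertex_covers(vertices, edges):
    """All inclusion-minimal vertex covers of a finite simple graph.

    edges: list of 2-element frozensets.  Iterating covers by size means
    any non-minimal cover strictly contains a smaller cover found earlier,
    so the subset test filters it out.
    """
    covers = []
    for k in range(len(vertices) + 1):
        for W in combinations(sorted(vertices), k):
            Wset = set(W)
            if all(Wset & e for e in edges) and not any(c <= Wset for c in covers):
                covers.append(Wset)
    return covers
```

For the path $1-2-3$ this returns the two minimal covers $\{2\}$ and $\{1,3\}$ from the example.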
Then we have the following m-irreducible decompositions $$I(\Gamma)=\bigcap_{W}Q_W=\bigcap_{\text{$W$ min}}Q_W$$ where the first intersection is taken over all vertex covers of $\Gamma$, and the second intersection is taken over all minimal vertex covers of $\Gamma$. The second intersection is irredundant. \end{fact} \begin{ex}\label{ex130510d} Continue with the graph $\Gamma$ from Example~\ref{ex130510a}. Using the minimal vertex covers from Example~\ref{ex130510b}, we have $$I(\Gamma)=Q_{\{1,3\}}\cap Q_{\{2\}}=(X_1,X_3)S\cap(X_2)S.$$ One can verify these equalities using \pref{2.5}, and the irredundancy is straightforward. \end{ex} To prepare for the decomposition result for the ideal $I$, we review the decomposition result from~\cite{paulsen:eiwg} for weighted edge ideals. \begin{defin}\label{defn130510b} A \textit{weight function} for the graph $\Gamma$ is a function $\omega\colon E\to\mathbb Z_{>0}$. For each $ij\in E$, the value $\omega(ij)$ is the \textit{weight} of the edge $ij$. Write $\Gamma_{\omega}$ for the ordered pair $(\Gamma,\omega)$. Fix a weight function $\omega$ of $\Gamma$. A \textit{weighted vertex cover} of $\Gamma_{\omega}$ is a pair $(W,\delta)$ such that \begin{enumerate}[(1)] \item $W$ is a vertex cover of $\Gamma$, and \item $\delta\colon W\to \mathbb Z_{>0}$ is a function such that for each edge $ij\in E$, either \begin{enumerate}[(a)] \item $i\in W$ and $\delta(i)\leq\omega(ij)$, or \item $j\in W$ and $\delta(j)\leq\omega(ij)$. \end{enumerate} \end{enumerate} The value $\delta(i)$ is the \textit{weight} of the vertex $i$. Given two weighted vertex covers $(W,\delta)$ and $(W',\delta')$ of $\Gamma_{\omega}$, we write $(W,\delta)\leq(W',\delta')$ provided that \begin{enumerate}[(1)] \item $W\subseteq W'$, and \item for all $i\in W$, we have $\delta(i)\geq\delta'(i)$. \end{enumerate} A weighted vertex cover is \textit{minimal} if it is minimal in the set of all weighted vertex covers with respect to this ordering. 
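The defining condition on a weighted vertex cover $(W,\delta)$ is directly testable. A Python sketch, with ad hoc encodings (edges as 2-element frozensets, $\omega$ and $\delta$ as dictionaries; the helper name is illustrative):

```python
def is_weighted_cover(edges, omega, W, delta):
    """Decide whether (W, delta) is a weighted vertex cover of (Gamma, omega).

    Each edge ij must have an endpoint in W whose vertex weight is at most
    the edge weight omega(ij).
    """
    for e in edges:
        i, j = sorted(e)
        if not ((i in W and delta[i] <= omega[e]) or
                (j in W and delta[j] <= omega[e])):
            return False
    return True
```

For the weighted path with $\omega(12)=1$ and $\omega(23)=2$, the pairs $(\{1,3\},\{1\mapsto1,3\mapsto2\})$, $(\{2\},\{2\mapsto1\})$, and $(\{1,2\},\{1\mapsto1,2\mapsto2\})$ all pass this test, while $(\{2\},\{2\mapsto2\})$ fails because the weight on vertex $2$ exceeds $\omega(12)$.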
Given a weighted vertex cover $(W,\delta)$ of $\Gamma_{\omega}$, set $$P_{W,\delta}=(X_i^{\delta(i)}\mid i\in W)S.$$ The \textit{weighted edge ideal} of $\Gamma_{\omega}$ is the ideal $$I(\Gamma_{\omega})=(X_i^{\omega(ij)}X_j^{\omega(ij)}\mid ij\in E)S.$$ \end{defin} \begin{fact}\label{fact130510b} Given a weight function $\omega$ of $\Gamma$, we have the following m-irreducible decompositions $$I(\Gamma_{\omega})=\bigcap_{(W,\delta)}P_{W,\delta}=\bigcap_{\text{$(W,\delta)$ min}}P_{W,\delta}$$ where the first intersection is taken over all weighted vertex covers of $\Gamma_{\omega}$, and the second intersection is taken over all minimal weighted vertex covers of $\Gamma_{\omega}$. The second intersection is irredundant. See~\cite[Theorem 3.5]{paulsen:eiwg}. This technique allows us to find an irredundant m-irreducible decomposition for any ideal $J$ in $S$ generated by monomials of the form $X_i^aX_j^a$ with $i\neq j$ and $a\in\mathbb Z_{>0}$. Indeed, such an ideal is of the form $I(\Gamma_\omega)$ where $ij$ is an edge of $\Gamma$ if and only if the monomial $X_i^aX_j^a$ is in $J$ for some $a\in\mathbb Z_{>0}$, and $\omega(ij)$ is the least $a$ such that $X_i^aX_j^a\in J$. \end{fact} \begin{ex}\label{ex130510c} Continue with the graph $\Gamma$ from Example~\ref{ex130510a}. Consider the weight function $\omega$ with $\omega(12)=1$ and $\omega(23)=2$.
We represent this graphically by labeling each edge $ij$ with the value $\omega(ij)$: $$\xymatrix{1\ar@{-}[r]^-1 &2\ar@{-}[r]^-2&3.}$$ We also represent weighted vertex covers graphically with a box around each vertex in the vertex cover and using a superscript for the weight, as follows: $$\xymatrix{*+[F]{1^1}\ar@{-}[r]^-1 &2\ar@{-}[r]^-2&*+[F]{3^2}.}$$ This weighted graph has three minimal weighted vertex covers, the one represented above, and the next two: $$\xymatrix{1\ar@{-}[r]^-1 &*+[F]{2^1}\ar@{-}[r]^-2&3\\ *+[F]{1^1}\ar@{-}[r]^-1 &*+[F]{2^2}\ar@{-}[r]^-2&3.}$$ Note that the first two correspond to minimal vertex covers of the unweighted graph $\Gamma$, but the third one does not. The irredundant decomposition of $I(\Gamma_{\omega})$ coming from \fref{act130510b} is $$I(\Gamma_{\omega})=(X_1,X_2^2)S\cap(X_1,X_3^2)S\cap(X_2)S.$$ One can check this equality using \pref{2.5}, and the irredundancy is straightforward. \end{ex} Now we develop a version of this construction for the ideal $I$ from Assumption~\ref{a130510a}. \begin{defin}\label{defn130510c} Let $\Gamma_{S}$ denote the ordered pair $(\Gamma,S)$ where $S$ is from Notation~\ref{notn130512a}. An \textit{interval vertex cover} of $\Gamma_{S}$ is a pair $(W,\sigma)$ such that \begin{enumerate}[(1)] \item $W$ is a vertex cover of $\Gamma$, and \item $\sigma\colon W\to \Omega$ is a function such that for each edge $ij\in E$, either \begin{enumerate}[(a)] \item $i\in W$ and $S(ij)\subseteq\sigma(i)$, or \item $j\in W$ and $S(ij)\subseteq\sigma(j)$. \end{enumerate} \end{enumerate} The value $\sigma(i)$ is the ``interval weight'' of the vertex $i$. Given ordered pairs $(W,\sigma)$ and $(W',\sigma')$ where $W,W'\subseteq V$ are subsets and $\sigma\colon W\to\Omega$ and $\sigma'\colon W'\to\Omega$ are functions, write $(W,\sigma)\leq(W',\sigma')$ whenever \begin{enumerate}[(1)] \item $W\subseteq W'$, and \item for all $i\in W$, we have $\sigma(i)\subseteq\sigma'(i)$. 
\end{enumerate} An interval vertex cover of $\Gamma_{S}$ is \textit{minimal} if it is minimal in the set of all interval vertex covers with respect to this ordering. Given an ordered pair $(W,\sigma)$ where $W\subseteq V$ and $\sigma\colon W\to\Omega$, set $$Q_{W,\sigma}=(X_i^{a}\mid \text{$i\in W$ and $a\in\sigma(i)$})R.$$ \end{defin} \begin{ex}\label{ex130511a} Continue with the ideal $I$ from Example~\ref{ex130508d}. We visualize the associated data from Example~\ref{ex130510a} similarly to the labeled graph from Example~\ref{ex130510c}, keeping track of the entire interval $S(ij)$ for each edge $ij$: $$\xymatrix{1\ar@{-}[r]^-{\geq 1} &2\ar@{-}[r]^-{>2}&3.}$$ We also represent interval vertex covers graphically with a box around each vertex in the vertex cover and a superscript for the interval weight, as follows: $$\xymatrix{(W_1,\sigma_1):&*+[F]{1^{\geq 1}}\ar@{-}[r]^-{\geq 1} &2\ar@{-}[r]^-{>2}&*+[F]{3^{>2}}.}$$ This weighted graph has three minimal interval vertex covers, the one represented above and the next two: $$\xymatrix{(W_2,\sigma_2):&1\ar@{-}[r]^-{\geq 1} &*+[F]{2^{\geq 1}}\ar@{-}[r]^-{>2}&3\\ (W_3,\sigma_3):&*+[F]{1^{\geq 1}}\ar@{-}[r]^-{\geq 1} &*+[F]{2^{>2}}\ar@{-}[r]^-{>2}&3.}$$ Again, the first two correspond to minimal vertex covers of the unweighted graph $\Gamma$, but the third one does not. \end{ex} \begin{fact}\label{fact130510d} The ideals in $R$ of the form $Q_{W,\sigma}$ are exactly the ideals of the form $J_{\underline\alpha,\underline\epsilon}$, since they are exactly the ones generated by (intervals of) pure powers of the variables. Thus, the ideals of the form $Q_{W,\sigma}$ are exactly the m-irreducible ideals of $R$ by \pref{classif1}. \end{fact} The next lemma is the key to the main result of this section. \begin{lem}\label{lem130511a} Consider ordered pairs $(W,\sigma)$ and $(W',\sigma')$ where $W,W'\subseteq V$ are subsets and $\sigma\colon W\to\Omega$ and $\sigma'\colon W'\to\Omega$ are functions. 
\begin{enumerate}[\rm(a)] \item\label{lem130511a1} One has $Q_{W,\sigma}\subseteq Q_{W',\sigma'}$ if and only if $(W,\sigma)\leq(W',\sigma')$. \item\label{lem130511a2} One has $I\subseteq Q_{W,\sigma}$ if and only if $(W,\sigma)$ is an interval vertex cover of~$\Gamma_{S}$. \end{enumerate} \end{lem} \begin{proof} \eqref{lem130511a1} We prove the forward implication; the reverse implication is similar and easier. Assume that $Q_{W,\sigma}\subseteq Q_{W',\sigma'}$. To show that $(W,\sigma)\leq(W',\sigma')$, let $i\in W$; we need to show that $i\in W'$ and that $\sigma(i)\subseteq\sigma'(i)$. Let $a\in\sigma(i)$. By definition, it follows that $X_i^a\in Q_{W,\sigma}\subseteq Q_{W',\sigma'}$. \fref{2.3} implies that $X_i^a$ is a multiple of a monomial generator of $Q_{W',\sigma'}$, so there exist $j\in W'$ and $b\in\sigma'(j)$ such that $X_i^a\in(X_j^b)R$. Note that $a,b>0$. It follows that $i=j\in W'$ and $b\leq a$. Since $\sigma'(i)$ is an interval of the form $[c,\infty)$ or $(c,\infty)$, the conditions $b\in\sigma'(i)$ and $b\leq a$ imply that $a\in\sigma'(i)$, as desired. \eqref{lem130511a2} Again, we prove the forward implication. Assume that $I\subseteq Q_{W,\sigma}$. To show that $W$ is a vertex cover of $\Gamma$, let $ij\in E$. By definition of $E$ and $S(ij)$, there is an element $a\in S(ij)$ such that $X_i^aX_j^a\in I\subseteq Q_{W,\sigma}$. By \fref{2.3}, the element $X_i^aX_j^a$ is a multiple of a monomial generator of $Q_{W,\sigma}$, so there exist $k\in W$ and $b\in\sigma(k)$ such that $X_i^aX_j^a\in (X_k^b)R$. Since $a,b>0$, we conclude that either $i=k\in W$ or $j=k\in W$, so $W$ is a vertex cover of $\Gamma$. To show that $(W,\sigma)$ is an interval vertex cover of $\Gamma$, let $ij\in E$. We proceed by cases. Case 1: $i\notin W$. Since $W$ is a vertex cover, we have $j\in W$. In this case, we need to show that $S(ij)\subseteq\sigma(j)$, so let $a\in S(ij)$. It follows that $X_i^aX_j^a\in I\subseteq Q_{W,\sigma}$.
Hence, there is a monomial generator $X_k^b\in Q_{W,\sigma}$ such that $X_i^aX_j^a\in(X_k^b)R$. Since $i\notin W$, the first paragraph of the proof of part~\eqref{lem130511a2} shows that $j=k$, and we conclude that $b\leq a$. As in the proof of part~\eqref{lem130511a1}, we conclude that $a\in\sigma(j)$, as desired. Case 2: $j\notin W$. This case is handled like Case 1. Case 3: $i,j\in W$ and $S(ij)\not\subseteq\sigma(i)$. Again, we need to show that $S(ij)\subseteq\sigma(j)$, so let $a\in S(ij)$. The condition $S(ij)\not\subseteq\sigma(i)$ implies that there is an element $a'\in S(ij)\smallsetminus\sigma(i)$. If $a'\leq a$, then it suffices to show that $a'\in\sigma(j)$, as above. If $a\leq a'$, then the assumption $a'\in S(ij)\smallsetminus\sigma(i)$ implies that $a\in S(ij)\smallsetminus\sigma(i)$ because of the shape of the interval $\sigma(i)$. Thus, we may replace $a$ by $a'$ if necessary to assume that $a\in S(ij)\smallsetminus\sigma(i)$. As above, there is a monomial generator $X_k^b\in Q_{W,\sigma}$ such that $X_i^aX_j^a\in(X_k^b)R$. It follows that either $i=k$ or $j=k$, and $b\leq a$. If $i=k$, then the condition $X_k^b\in Q_{W,\sigma}$ implies that $b\in\sigma(i)$; hence the inequality $b\leq a$ implies that $a\in\sigma(i)$ because of the shape of $\sigma(i)$; this is a contradiction. Thus, we have $j=k$ and, as in the previous sentence, $a\in\sigma(j)$. \end{proof} \begin{thm}\label{thm130513a} For the ideal $I$ from Assumption~\ref{a130510a}, we have the following m-irreducible decompositions $$I=\bigcap_{(W,\sigma)}Q_{W,\sigma}=\bigcap_{\text{$(W,\sigma)$ min}}Q_{W,\sigma}$$ where the first intersection is taken over all interval vertex covers of $\Gamma$, and the second intersection is taken over all minimal interval vertex covers of $\Gamma$. The second intersection is finite and irredundant, and the graph $\Gamma$ with data from Notation~\ref{notn130512a} has only finitely many minimal interval vertex covers.
\end{thm} \begin{proof} In the next display, the containment is from \lref{em130511a}\eqref{lem130511a2}. $$I\subseteq\bigcap_{(W,\sigma)}Q_{W,\sigma}=\bigcap_{\text{$(W,\sigma)$ min}}Q_{W,\sigma}$$ For the equality, we have $\bigcap_{(W,\sigma)}Q_{W,\sigma}\subseteq\bigcap_{\text{$(W,\sigma)$ min}}Q_{W,\sigma}$ by basic properties of intersections, and the reverse containment follows from \lref{em130511a}\eqref{lem130511a1}. Also, the second intersection is irredundant by \lref{em130511a}\eqref{lem130511a1}. By \pref{classif2} and \fref{act130510a}, the ideal $I$ has a finite m-irreducible decomposition which is of the form $I=\cap_{k=1}^mQ_{W_k,\sigma_k}$ by \fref{act130510d}. Note that \lref{em130511a}\eqref{lem130511a2} implies that each pair $(W_k,\sigma_k)$ is an interval vertex cover of $\Gamma$. Thus, we have $$I=\bigcap_{k=1}^mQ_{W_k,\sigma_k}\supseteq \bigcap_{(W,\sigma)}Q_{W,\sigma}.$$ With the previous display, this provides the equalities from the statement of the result. Thus, it remains to show that the intersection $\bigcap_{\text{$(W,\sigma)$ min}}Q_{W,\sigma}$ is finite. For this, let $(W,\sigma)$ be a minimal interval vertex cover of $\Gamma$. It suffices to show that $(W,\sigma)=(W_k,\sigma_k)$ for some $k$. From the equalities we have already established, we have $$Q_{W,\sigma}\supseteq I=\bigcap_{k=1}^mQ_{W_k,\sigma_k}.$$ The proof of \pref{classif2} shows that $Q_{W,\sigma}\supseteq Q_{W_k,\sigma_k}$ for some $k$. So, \lref{em130511a}\eqref{lem130511a1} implies that $(W,\sigma)\geq(W_k,\sigma_k)$. As these are both interval vertex covers of $\Gamma$, the minimality of $(W,\sigma)$ yields the desired equality $(W,\sigma)=(W_k,\sigma_k)$. \end{proof} \begin{ex}\label{ex130511b} Continue with the ideal $I$ from Example~\ref{ex130508d}. 
The minimal interval vertex covers from Example~\ref{ex130511a} provide the following irredundant m-irreducible decomposition $$I=Q_{W_1,\sigma_1}\cap Q_{W_2,\sigma_2}\cap Q_{W_3,\sigma_3}$$ which is exactly the decomposition computed in Example~\ref{ex130508d}. \end{ex} \section{Monomial Krull Dimension}\label{smdim} We now introduce and study a notion of Krull dimension for this setting. The main result of this section is \cref{or130521a}. \begin{Assumption} Throughout this section, set $R = A[\mathbb{M}^1_{\geq 0}\times\cdots\times\mathbb{M}^d_{\geq 0}]$. \end{Assumption} \begin{defin}\label{dmdim} For an m-prime ideal $P=Q_T$, we employ the notation $v(P)$ to denote the number of variables from which $P$ is generated, i.e., \[ v(P) =|T|= |\{ X_i \; | \; X_i^k \in P \text{ for some } k \in \mathbb{M}^i_{> 0} \text{ and } 1 \leq i \leq d \}|\] and set $\mdimen^*(R/P) = d - v(P)$. The \textit{monomial Krull dimension} of an arbitrary monomial ideal $I$ is \[\mdim{R/I} = \sup\{\mdimen^*(R/Q) \; | \; Q \text{ is m-prime and } Q \supseteq I\}.\] \end{defin} \begin{fact}\label{fsubdim} By definition, if $I$ and $J$ are monomial ideals in $R$ such that $I \supseteq J,$ then $\mdim{R/I} \leq \mdim{R/J}$, and furthermore we have the following. \[\mdim{R/I} = \begin{cases} \max\{\mdimen^*(R/Q) \; | \; Q \text{ is m-prime and } Q \supseteq I\} &\text{if $I\neq R$}\\ -\infty&\text{if $I= R$}\end{cases} \] Also, given an m-prime ideal $P$ of $R$, it is straightforward to show that $\mdim{R/P}=\mdimen^*(R/P)$. \end{fact} \begin{ex}\label{ex130509a} Let $I$ be a monomial ideal in the ring $R$. It is straightforward to show that $\mdim{R/I}=d$ if and only if $I=0$. Also, if $I\neq R$, then $\mdim{R/I}=0$ if and only if for $i=1,\ldots,d$ there is an element $a_i\in\mathbb M^i_{>0}$ such that $X_i^{a_i}\in I$.
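When $I$ is finitely generated, these characterizations can be verified by brute force. Assuming each generator $\ul X^{\ul\gamma}$ lies in $Q_T$ precisely when $\gamma_i>0$ for some $i\in T$ (as happens, e.g., for $\mathbb M^i=\mathbb Z$ or $\mathbb R$), the monomial Krull dimension is $d$ minus the size of a smallest index set $T$ meeting the support of every generator. The following Python sketch records generators by their supports; all names are illustrative.

```python
from itertools import combinations

def mdim(d, supports):
    """m-dim(R/I) for a finitely generated monomial ideal I.

    supports: list of sets, the support of each monomial generator.
    An m-prime Q_T contains I exactly when every support meets T, so we
    maximize d - |T| over all such T (and return -inf if none works,
    which does not occur for proper monomial ideals).
    """
    best = float("-inf")
    for k in range(d + 1):
        for T in combinations(range(1, d + 1), k):
            if all(set(T) & sup for sup in supports):
                best = max(best, d - k)
    return best
```

For instance, for the zero ideal ($d=2$, no generators) this returns $2=d$, and for an ideal containing a pure power of each of the two variables it returns $0$, matching the characterizations above.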
In the case $d=2$, this tells us that the ideals from Examples~\ref{ex130508a} and~\ref{ex130508c} have $\mdim{R/I}=1$, and the ideals from Examples~\ref{ex130508g} and~\ref{ex130508i} have $\mdim{R/I}=0$. \end{ex} Before proving our main results, we verify some desired properties of m-dim. \begin{prop}\label{pdimchain} For a monomial ideal $I$ in $R$, we have \[ \mdim{R/I} = \sup\{n \geq 0 \; | \; \exists \text{ m-prime } P_0 \subset P_1 \subset \cdots \subset P_n \text{ and } I \subseteq P_0 \}. \] \begin{proof} Assume without loss of generality that $I\neq R$. Let $m=\max\{n \geq 0 \; | \; \exists \text{ m-prime } P_0 \subset P_1 \subset \cdots \subset P_n \text{ and } I \subseteq P_0 \}$ with corresponding maximal chain $I \subseteq P_0 \subset P_1 \subset \cdots \subset P_m$. The maximality of this chain implies that $P_m=Q_{\{1,\ldots,d\}}$. Then for $i=0,\ldots,m-1$ we have $v(P_{i+1})=v(P_i)+1$; otherwise, we could find an m-prime $P$ such that $P_i \subset P \subset P_{i+1},$ which would contradict the maximality of $m$. Hence, we have $d=v(P_{m})=v(P_0)+m$, and it follows that $m = \mdim{R/P_0}$. It remains to show that $\mdim{R/P_0} = \mdim{R/I}$. We have that $\mdim{R/P_0} \leq \mdim{R/I}$ by \fref{subdim}. Now, suppose that $\mdim{R/P_0}<\mdim{R/I}$. That would mean there is an m-prime ideal $P \supseteq I$ such that $v(P)<v(P_0)$. We could then create a chain $P \subset P_0' \subset \cdots \subset P_m'$, contradicting the maximality of $m$. \end{proof} \end{prop} The next result applies whenever $I$ is finitely generated, by \cref{or130513a}. \begin{prop} Given a monomial ideal $I$ in $R$ with a finite m-irreducible decomposition $I = \bigcap_{i=1}^t J_i$, one has \[ \mdim{R/I} = \sup_{i}\{\mdim{R/J_i}\}.\] \begin{proof} Assume without loss of generality that $t\geq 1$, i.e., that $I\neq R$. Since $J_i \supseteq I$ for all $i$, \fref{subdim} implies that $\sup_{i}\{\mdim{R/J_i}\}\leq \mdim{R/I}$.
We now claim that for every m-prime ideal $P,$ if $P \supseteq I,$ then there is an index $k$ such that $P \supseteq J_k$. By way of contradiction, suppose that for all $k$ there is a monomial $f_k \in \monset{J_k}\smallsetminus\monset{P}$. Since $P$ is m-prime and $f_k \notin \monset{P}$ for all $k,$ we have that $\prod_{k=1}^t f_k \notin \monset{P}$. However, $f_k \in J_k$ implies that $\prod_{k=1}^tf_k \in \bigcap_{k=1}^tJ_k = I \subseteq P$, a contradiction. Now, let $P$ be m-prime such that $I\subseteq P$ and $\mdim{R/P}=\mdim{R/I}$. The claim implies that there is an index $k$ such that $P \supseteq J_k$, so we have $$\mdim{R/I}=\mdim{R/P}\leq\mdim{R/J_k}\leq\sup_{i}\{\mdim{R/J_i}\}$$ as desired. \end{proof} \end{prop} \begin{ex} Consider the case $\mathbb{M}^i=\mathbb R$ for all $i$ and the ideal $I$ from Assumption~\ref{a130510a}. Using ideas from~\cite{paulsen:eiwg}, one shows that $\mdim{R/I}=d-\tau(\Gamma)$ where $\tau(\Gamma)$ is the \textit{vertex cover number} of $\Gamma$: $$\tau(\Gamma)=\min\{|W|\mid\text{$W$ is a vertex cover of $\Gamma$}\}.$$ \end{ex} To consider the semicontinuity of monomial Krull dimension, we introduce the next definition. \begin{defin}\label{def130521a} Let $\varepsilon \in \mathbb{R}_{> 0}$. For monomial ideals $I,J$ in $R$, we say that $\dist{I}{J} < \varepsilon$ if \begin{enumerate}[(1)] \item\label{def130521a1} for all $\underline{X}^{\underline{\gamma}} \in \monset{I},$ there exists $\underline{X}^{\underline{\delta}} \in \monset{J}$ such that $\dist{\underline{\gamma}}{\underline{\delta}} < \varepsilon,$ and \item\label{def130521a2} for all $\underline{X}^{\underline{\delta}'} \in \monset{J},$ there exists $\underline{X}^{\underline{\gamma}'} \in \monset{I}$ with $\dist{\underline{\delta}'}{\underline{\gamma}'} < \varepsilon$. \end{enumerate} Here $\dist{\underline{\gamma}}{\underline{\delta}} = |\underline{\gamma}-\underline{\delta}| = \sqrt{(\gamma_1-\delta_1)^2+\cdots+(\gamma_d-\delta_d)^2}$.
\end{defin} \begin{defin}\label{def130521b} The distance between two monomial ideals $I$, $J$ in $R$ is \[\dist{I}{J} = \inf\{\varepsilon > 0 \; | \; \dist{I}{J} < \varepsilon \}.\] \end{defin} \begin{ex}\label{ex130508f} Let $I$ be a monomial ideal in $R$. Since $\monset 0=\emptyset$, we have $$\dist I0=\begin{cases} 0 & \text{if $I=0$} \\ \infty &\text{if $I\neq 0$}\end{cases} $$ \end{ex} \begin{ex} \label{ex130509c} The ideals $J_{(\infty,1),(0,0)}=(X_2)R$ and $J_{(1,\infty),(0,0)}=(X_1)R$ in $R=A[\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}]$ satisfy $\dist{(X_2)R}{(X_1)R}=1$. \ \begin{center} \begin{tikzpicture} \draw (1.5,-0.2) node[below,scale=.75]{$1$} -- (1.5,0.2); \draw (3,-0.2) node[below,scale=.75]{$2$} -- (3,0.2); \draw (-0.2,1.5) node[left,scale=.75]{$1$} -- (0.2,1.5); \draw (-0.2,3) node[left,scale=.75]{$2$} -- (0.2,3); \draw[fill,color=black!50] (0,1.5) -- (3.2,1.5) -- (3.2,3.2) -- (0,3.2) -- cycle; \draw[->] (-0.2,0) -- (3.4,0); \draw[->] (0,-0.2) -- (0,3.4); \draw (0,1.5) -- (3.2,1.5); \end{tikzpicture} \qquad \qquad \qquad \begin{tikzpicture} \draw (1.5,-0.2) node[below,scale=.75]{$1$} -- (1.5,0.2); \draw (3,-0.2) node[below,scale=.75]{$2$} -- (3,0.2); \draw (-0.2,1.5) node[left,scale=.75]{$1$} -- (0.2,1.5); \draw (-0.2,3) node[left,scale=.75]{$2$} -- (0.2,3); \draw[fill,color=black!50] (1.5,0) -- (3.2,0) -- (3.2,3.2) -- (1.5,3.2) -- cycle; \draw[->] (-0.2,0) -- (3.4,0); \draw[->] (0,-0.2) -- (0,3.4); \draw (1.5,0) -- (1.5,3.2); \end{tikzpicture} \end{center} This is intuitively clear from the above diagrams. To verify this rigorously, first note that for every monomial $X_1^aX_2^b\in(X_2)R$ one has $X_1^{a+1}X_2^b\in(X_1)R$ and $\dist{X_1^aX_2^b}{X_1^{a+1}X_2^b}=1$. Similarly, every monomial in $(X_1)R$ is distance 1 from a monomial in $(X_2)R$. This implies that $\dist{(X_2)R}{(X_1)R}<1+\varepsilon$ for all $\varepsilon>0$, so $\dist{(X_2)R}{(X_1)R}\leq1$.
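On monomial sets, $\operatorname{dist}$ is essentially the Hausdorff distance between the exponent regions, so the value can also be estimated numerically. The following Python sketch approximates it on a truncated grid for the two ideals above; the grid step and cutoff are arbitrary choices for illustration.

```python
import math
from itertools import product

# Exponent regions of the monomial sets of (X2)R and (X1)R in
# A[R>=0 x R>=0], truncated to the box [0, M]^2 and sampled on a grid.
step, M = 0.25, 3.0
pts = [i * step for i in range(int(M / step) + 1)]

region_X2R = [(a, b) for a, b in product(pts, pts) if b >= 1]  # b >= 1
region_X1R = [(a, b) for a, b in product(pts, pts) if a >= 1]  # a >= 1

def directed(P, Q):
    """Max over p in P of the distance from p to the nearest point of Q."""
    return max(min(math.dist(p, q) for q in Q) for p in P)

hausdorff = max(directed(region_X2R, region_X1R),
                directed(region_X1R, region_X2R))
```

On this grid the computed value is exactly $1$, matching the distance established in the example.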
Finally, note that the monomial in $(X_2)R$ that is closest to $X_1\in(X_1)R$ is $X_1X_2$, which is distance 1 from $X_1$, so $\dist{(X_2)R}{(X_1)R}\geq1$. \end{ex} \begin{ex}\label{ex130509b} The ideals $I_{(1,1),(0,0)}$ and $I_{(1,1),(1,1)}$ in $R=A[\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}]$ \ \begin{center} \begin{tikzpicture} \draw[->] (-0.2,0) -- (3.4,0); \draw[->] (0,-0.2) -- (0,3.4); \draw (1.5,-0.2) node[below,scale=.75]{$1$} -- (1.5,0.2); \draw (3,-0.2) node[below,scale=.75]{$2$} -- (3,0.2); \draw (-0.2,1.5) node[left,scale=.75]{$1$} -- (0.2,1.5); \draw (-0.2,3) node[left,scale=.75]{$2$} -- (0.2,3); \draw[fill,color=black!50] (1.5,1.5) -- (3.2,1.5) -- (3.2,3.2) -- (1.5,3.2) -- cycle; \draw (1.5,1.5) -- (1.5,3.2); \draw (1.5,1.5) -- (3.2,1.5); \draw (1.5,1.5)[fill,color=black] circle (2pt); \end{tikzpicture} \qquad \qquad \qquad \begin{tikzpicture} \draw[->] (-0.2,0) -- (3.4,0); \draw[->] (0,-0.2) -- (0,3.4); \draw (1.5,-0.2) node[below,scale=.75]{$1$} -- (1.5,0.2); \draw (3,-0.2) node[below,scale=.75]{$2$} -- (3,0.2); \draw (-0.2,1.5) node[left,scale=.75]{$1$} -- (0.2,1.5); \draw (-0.2,3) node[left,scale=.75]{$2$} -- (0.2,3); \draw[fill,color=black!50] (1.5,1.5) -- (3.2,1.5) -- (3.2,3.2) -- (1.5,3.2) -- cycle; \draw[color=white] (1.5,1.5) -- (1.5,3.2); \draw[dashed] (1.5,1.5) -- (1.5,3.2); \draw[color=white] (1.5,1.5) -- (3.2,1.5); \draw[dashed] (1.5,1.5) -- (3.2,1.5); \draw (1.5,1.5)[fill,color=white] circle (2pt); \draw (1.5,1.5) circle (2pt); \end{tikzpicture} \end{center} \ \noindent have $\dist{I_{(1,1),(0,0)}}{I_{(1,1),(1,1)}}=0$ even though $I_{(1,1),(0,0)}\neq I_{(1,1),(1,1)}$. This explains (at least partially) why we restrict to the set of finitely generated monomial ideals in our next result. \end{ex} \begin{thm}\label{tmetric} The function $\operatorname{dist}$ is a metric on the set of non-zero finitely generated monomial ideals of $R$. \begin{proof} Let $I,J,K$ be non-zero finitely generated monomial ideals of $R$. 
To see that $\dist{I}{J} \in\mathbb R_{\geq 0}$, let $\ul X^{\ul\alpha}\in\monset I$ and $\ul X^{\ul\beta}\in\monset J$. Let $\delta>0$ be given. We claim that $\dist IJ<\max\{|\ul\alpha|,|\ul\beta|\}+\delta$. \eqref{def130521a1} For every $\ul X^{\ul\gamma}\in\monset I$ there is a monomial $\ul X^{\ul\gamma+\ul\beta}=\ul X^{\ul\gamma}\ul X^{\ul\beta}\in\monset{J}$ such that $\dist{\ul\gamma+\ul\beta}{\ul\gamma}=|\ul\beta|<\max\{|\ul\alpha|,|\ul\beta|\}+\delta$. \eqref{def130521a2} For every $\ul X^{\ul\gamma}\in\monset J$ there is a monomial $\ul X^{\ul\gamma+\ul\alpha}=\ul X^{\ul\gamma}\ul X^{\ul\alpha}\in\monset{I}$ such that $\dist{\ul\gamma+\ul\alpha}{\ul\gamma}=|\ul\alpha|<\max\{|\ul\alpha|,|\ul\beta|\}+\delta$. This establishes the claim. Consequently, $\dist IJ\in\mathbb R$ with $0\leq\dist IJ\leq\max\{|\ul\alpha|,|\ul\beta|\}$, as desired. The condition $\dist{I}{J} = \dist{J}{I}$ follows from the symmetry of Definition~\ref{def130521a}. The equality $\dist{I}{I} = 0$ is similarly straightforward. Next, assume that $\dist{I}{J} = 0$. We show that $I=J$. Let $\ul{X}^{\ul{\alpha}} \in \monset I$ and let $J=(\ul{X}^{\ul{\beta}_1}, \ldots, \ul{X}^{\ul{\beta}_n})R$. For each $i,j$ we set $\beta'_{i,j}=\max\{\beta_{i,j},\alpha_j\}$. Then $\beta'_{i,j} \geq \alpha_j$ and $\beta'_{i,j} \geq \beta_{i,j}$. Therefore, for all $j,$ we have $\ul{X}^{\ul{\beta}'_j} \in (\mono{X}{\alpha} )R \subseteq I$ and $\ul{X}^{\ul{\beta}'_j} \in (\ul{X}^{\ul{\beta}_j})R \subseteq J$. Since $\dist{I}{J} = 0,$ we have that $\inf\{\dist{\ul{X}^{\ul{\alpha}}}{\ul{X}^{\ul{\gamma}}} \;|\; \ul{X}^{\ul{\gamma}} \in \monset J\} = 0$. We claim that $$\inf\{\dist{\mono{X}{\alpha}}{\mono{X}{\gamma}} \;|\; \ul{X}^{\ul{\gamma}} \in \monset J\} = \min\{\dist{\mono{X}{\alpha}}{\ul{X}^{\ul{\beta}'_i}} \; | \; 1 \leq i \leq n \}.$$ The condition $\ul{X}^{\ul{\beta}'_j}\in \monset J$ explains the inequality $\leq$. For the reverse inequality, let $\ul{X}^{\ul{\gamma}} \in \monset J$.
Then $\mono{X}{\gamma} = \mon{X}{\beta}{j}\mono{X}{\delta}$ for some $j$ and some $\mono{X}{\delta} \in \monset R$. We have that $\dist{\mono{X}{\alpha}}{\mono{X}{\gamma}} = \sqrt{(\gamma_1-\alpha_1)^2 + \cdots + (\gamma_d - \alpha_d)^2}$. If $\gamma_i \leq \alpha_i,$ then we have $\beta_{j,i} \leq \gamma_i \leq \alpha_i$, so $\beta_{j,i}' = \alpha_i$, which implies that $| \beta_{j,i}' - \alpha_i | = 0 \leq | \gamma_i - \alpha_i |$. If $\gamma_i \geq \alpha_i,$ then the condition $\gamma_i \geq \beta_{j,i}$ implies that $\gamma_i \geq \beta_{j,i}' \geq \alpha_i,$ so we have that $0 \leq \beta_{j,i}' - \alpha_i \leq \gamma_i - \alpha_i$. Therefore, for all $\mono{X}{\gamma} \in \monset J,$ we have $\dist{\mono{X}{\gamma}}{\mono{X}{\alpha}} \geq \dist{\ul{X}^{\ul{\beta}_j'}}{\mono{X}{\alpha}}$ for some $j$. This proves the claim. It follows that $\min\{\dist{\mono{X}{\alpha}}{\ul{X}^{\ul{\beta}'_i}} \; | \; 1 \leq i \leq n \} = 0$. Thus, there exists an index $i$ such that $\mono{X}{\alpha}=\ul{X}^{\ul{\beta}'_i}\in J$. As this is so for all $\ul{X}^{\ul{\alpha}} \in \monset I$, we conclude that $I \subseteq J$. By symmetry, we have $I = J$, as desired. Now, we check the triangle inequality: $\dist{I}{K} \leq \dist{I}{J} + \dist{J}{K}$. Let $\varepsilon > 0$ be given; we show that $\dist{I}{K} < \dist{I}{J} + \dist{J}{K}+\varepsilon$. Set $a=\dist{I}{J}$ and $b=\dist{J}{K}$, and let $\mono{X}{\alpha} \in \monset{I}$. We need to find a monomial $\mono{X}{\gamma} \in \monset{K}$ such that $\dist\alpha\gamma<a+b+\varepsilon$. Since $\dist{I}{J} < a + \frac{\varepsilon}{2}$, there is a monomial $\mono{X}{\beta} \in \monset{J}$ such that $\dist\alpha\beta<a + \frac{\varepsilon}{2}$. Similarly, there is a monomial $\mono{X}{\gamma} \in \monset{K}$ such that $\dist\beta\gamma<b + \frac{\varepsilon}{2}$. From the triangle inequality in $\mathbb R^d$, we conclude that \begin{align*} \dist\alpha\gamma\leq\dist\alpha\beta+\dist\beta\gamma<a+b+\varepsilon \end{align*} as desired.
\end{proof} \end{thm} \begin{thm}\label{tsemi-cont} Given a non-zero finitely generated monomial ideal $I$ in $R$, there exists $\varepsilon > 0$ such that for all non-zero monomial ideals $J$ with $\dist{I}{J} < \varepsilon,$ we have $\mdim{R/J} \geq \mdim{R/I}$. \begin{proof} Since $\mdim{R/R}=-\infty$, we assume without loss of generality that $I\neq R$. Let $I=(\ul{X}^{\ul{\alpha}_1},\ldots, \ul{X}^{\ul{\alpha}_n})R$, and set $\varepsilon = \min\{\alpha_{i,j} \; | \; \alpha_{i,j} > 0\} > 0$. Let $J$ be a non-zero monomial ideal with $\dist{I}{J} < \varepsilon$. Claim: for every m-prime ideal $Q$ of $R$ such that $Q \supseteq I,$ we have $Q \supseteq J$. Let $\mono{X}{\gamma} \in \monset J$ and choose $\mono{X}{\delta} \in \monset I$ such that $\dist{\ul\gamma}{\ul\delta} < \varepsilon$. There exists an index $i$ such that $\mono{X}{\delta}\in(\mon{X}{\alpha}{i})R\subseteq I\subseteq Q$. Since $Q$ is m-prime, there exists an index $j$ such that $\alpha_{i,j} > 0$ and $X_j^{\alpha_{i,j}} \in Q$. Hence, $X_j^t \in Q$ for all $t > 0$. Note that $0 < \varepsilon \leq \alpha_{i,j} \leq \delta_j$ and $|\gamma_j - \delta_j| \leq\dist{\ul{\gamma}}{\ul{\delta}}<\varepsilon$. Therefore, we have $- \varepsilon < \gamma_j - \delta_j < \varepsilon$, which implies that $0 \leq \delta_j - \varepsilon < \gamma_j$, so we conclude that $\mono{X}{\gamma} \in Q$, which establishes the claim. It follows that $\{ P \supseteq I \; | \; P \text{ is m-prime}\} \subseteq \{ P' \supseteq J \; | \; P' \text{ is m-prime}\}$, and hence $\mdim{R/I} \leq \mdim{R/J}$. \end{proof} \end{thm} \begin{cor}\label{cor130521a} The monomial Krull dimension function is lower semicontinuous on the set of finitely generated monomial ideals of $R$. \end{cor} \begin{ex}\label{ex130508j} Let $I$ be a non-zero finitely generated monomial ideal of~$R$. If $\mdim{R/I}=d-1$, then there is a real number $\epsilon>0$ such that for all monomial ideals $J$ in $R$ such that $\dist IJ<\epsilon$, one has $\mdim{R/J}=d-1$.
Indeed, \tref{semi-cont} provides a real number $\epsilon>0$ such that for all non-zero monomial ideals $J$ in $R$ such that $\dist IJ<\epsilon$, one has $\mdim{R/J}\geq d-1$. If $\mdim{R/J}> d-1$, then $\mdim{R/J}=d$, so Example~\ref{ex130509a} implies that $J=0$, a contradiction. If $\mathbb M^i=\mathbb R$ for all $i$, then (regardless of the value of $\mdim{R/I}$) for each real number $\epsilon>0$ there is a non-zero finitely generated monomial ideal $J\subset I$ in $R$ such that $0<\dist IJ<\epsilon$ and $\mdim{R/J}=d-1$. Indeed, consider the ideal $X_1^{\epsilon/2}I$, which is non-zero and finitely generated since $I$ is so. As in Example~\ref{ex130509c}, it is straightforward to show that $\dist{I}{X_1^{\epsilon/2}I}=\epsilon/2<\epsilon$. However, the ideal $X_1^{\epsilon/2}I$ is contained in $Q_{\{1\}}=(X_1^a\mid a>0)R$, so we have $\mdim{R/X_1^{\epsilon/2}I}\geq d-v(Q_{\{1\}})=d-1$. In particular, if $\mdim{R/I}<d-1$, then strict inequality can occur in \tref{semi-cont}. This behavior is depicted in the following two diagrams with $d=2$: the first diagram represents the ideal $I=J_{(1,1),(0,0)}=(X_1,X_2)R$ and the second one represents $X_1^{\epsilon/2}I=(X_1^{1+(\epsilon/2)},X_1^{\epsilon/2}X_2)R=(X_1^{\epsilon/2})R\cap(X_1^{1+(\epsilon/2)},X_2)R$.
\ \begin{center} \begin{tikzpicture} \draw (1.5,-0.2) node[below,scale=.75]{$1$} -- (1.5,0.2); \draw (3,-0.2) node[below,scale=.75]{$2$} -- (3,0.2); \draw (-0.2,1.5) node[left,scale=.75]{$1$} -- (0.2,1.5); \draw (-0.2,3) node[left,scale=.75]{$2$} -- (0.2,3); \draw[fill,color=black!50] (0,1.5) -- (1.5,1.5) -- (1.5,0) -- (3.2,0) -- (3.2,3.2) -- (0,3.2) -- cycle; \draw[->] (-0.2,0) -- (3.4,0); \draw[->] (0,-0.2) -- (0,3.4); \draw (0,1.5) -- (1.5,1.5); \draw (1.5,0) -- (1.5,1.5); \end{tikzpicture} \qquad \qquad \qquad \begin{tikzpicture} \draw (1.5,-0.2) node[below,scale=.75]{$1$} -- (1.5,0.2); \draw (3,-0.2) node[below,scale=.75]{$2$} -- (3,0.2); \draw (0.3,-0.2) node[below,scale=.75]{$\frac\epsilon 2$} -- (0.3,0.2); \draw (1.8,-0.2) node[below,scale=.75]{\,\,\,\,\,\,\,\,\,$1+\frac\epsilon 2$} -- (1.8,0.2); \draw (-0.2,1.5) node[left,scale=.75]{$1$} -- (0.2,1.5); \draw (-0.2,3) node[left,scale=.75]{$2$} -- (0.2,3); \draw[fill,color=black!50] (0.3,1.5) -- (1.8,1.5) -- (1.8,0) -- (3.2,0) -- (3.2,3.2) -- (0.3,3.2) -- cycle; \draw[->] (-0.2,0) -- (3.4,0); \draw[->] (0,-0.2) -- (0,3.4); \draw (0.3,1.5) -- (0.3,3.2); \draw (0.3,1.5) -- (1.8,1.5); \draw (1.8,0) -- (1.8,1.5); \end{tikzpicture} \end{center} \end{ex} \section*{Acknowledgments} We are grateful to Jon Totushek for teaching us how to create the diagrams for our examples. \bibliographystyle{plain}
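As a numerical sanity check of Example~\ref{ex130509c}, note that on monomial sets $\operatorname{dist}$ behaves like a Hausdorff distance between the exponent regions of the two ideals. The sketch below (an illustration of ours, not part of the paper's machinery) discretizes the regions of $(X_2)R$ and $(X_1)R$ on a truncated grid; the grid step and the truncation window $[0,3]^2$ are illustrative choices.

```python
import numpy as np

# Exponent regions of the two monomial sets, truncated to [0, 3]^2 with step 0.1.
g = np.arange(0, 31) / 10.0
xs, ys = np.meshgrid(g, g)
pts = np.column_stack([xs.ravel(), ys.ravel()])
A = pts[pts[:, 1] >= 1.0]  # exponents of monomials in (X_2)R: gamma_2 >= 1
B = pts[pts[:, 0] >= 1.0]  # exponents of monomials in (X_1)R: gamma_1 >= 1

def directed(U, V):
    """sup over u in U of the distance from u to the nearest v in V."""
    d = np.sqrt(((U[:, None, :] - V[None, :, :]) ** 2).sum(axis=-1))
    return d.min(axis=1).max()

# Symmetrized (Hausdorff-style) distance between the truncated regions.
hausdorff = max(directed(A, B), directed(B, A))
print(hausdorff)  # 1.0, matching dist((X_2)R, (X_1)R) = 1
```

The farthest points in either region from the other lie on the coordinate axes, exactly as in the diagrams of Example~\ref{ex130509c}.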
\section{Introduction} With the development of computing power and AI (artificial intelligence), significant progress has been made in unmanned aerial vehicles, autonomous driving and visual tracking. These applications have to deal with many challenging scenarios, including high-speed motion and low-light conditions, in real time. However, conventional frame-based cameras are a poor choice for capturing the high-speed motion of objects due to their low temporal resolution. Besides, video streams from traditional cameras are not conducive to real-time computing because their dense and redundant data brings extra computational costs to classical algorithms such as convolutional neural networks (CNNs) \cite{net1, net2, net3, net4, net5}. \\\indent Event cameras, inspired by the human visual system, can asynchronously output sparse event streams according to changes of brightness and have high temporal resolution. Hence, they are more suitable than traditional cameras for robotics and computer vision tasks in high-speed motion scenes. However, event cameras such as the dynamic vision sensor (DVS) \cite{dvs1, dvs2} cannot capture the visual texture of objects. Although some event cameras address this problem by combining a DVS with a conventional image sensor (DAVIS \cite{dvs3}) or by adding an extra photo-measurement circuit (ATIS \cite{dvs4}, CeleX \cite{dvs5}), a motion mismatch arises from the difference in sampling time resolution between the DVS and the extra photo-measurement circuit. Similar to event cameras, Spike camera \cite{spikecamera} is also inspired by biological visual systems. Specifically, Spike camera models integrate-and-fire neurons and can report per-pixel brightness accumulation by outputting sparse spike data. Hence, it not only shares the advantage of event cameras, i.e., high temporal resolution (40000 Hz), but can also capture the visual texture of objects.
Although Spike camera can theoretically sample in all kinds of scenes, its sampling model (FSM \cite{spikecamera}) is not ideal in complex environments due to the presence of quantization error (see Fig.~3) and noise (see Fig.~4). Moreover, the sampling of Spike camera is highly susceptible to noise, while human visual systems are robust to noise. Therefore, there still exists significant potential for improving the bio-inspired sampling model.\\\indent To this end, our aim is to improve the ability of Spike camera to capture texture information in high-speed motion scenes. Our main contributions are summarized as follows:\\\indent \begin{figure*}[ht] \includegraphics[width=\linewidth]{f3.pdf} \centering \caption{The connection structure of cells in the human visual system. (a) denotes the actual connection structure of cells in the retina \cite{bio5}. (b) denotes the simplified connection structure in FSM. (c) denotes the critical connection structure ignored by FSM. (d) shows ``off'' and ``on'' receptive fields. ``+'' indicates that the corresponding position is sensitive to brightening, and ``-'' indicates that the corresponding position is sensitive to darkening.}\label{fig1} \end{figure*} \begin{itemize} \item[1)] We propose a novel and robust visual sampling model inspired by the receptive field (RVSM), where a wavelet filter bank generated by DoG filters (RVSM$_{DoG}$) and a Gaussian filter bank (RVSM$_{Gauss}$) are used to mimic the receptive field mechanism of ganglion cells in the human retina. \end{itemize} \begin{itemize} \item[2)] We propose an efficient method, similar to an inverse wavelet transform, to convert spike data from RVSM into images. By comparing images reconstructed from spike data in our dataset, we find that RVSM captures much more texture detail in motion scenes than FSM. Besides, by collecting regional information, RVSM can effectively filter high-intensity noise, which is consistent with our understanding of the human visual system.
\end{itemize} \begin{itemize} \item[3)] We propose a high-speed motion spike dataset (HMD) which covers various motion scenes (single-object and multi-object motion). HMD includes spike data from RVSM$_{DoG}$, RVSM$_{Gauss}$ and FSM, generated by the Spike camera simulator, together with the corresponding image sequences. \end{itemize} \section{Related Work and Expansion} \subsection{Fovea-like Sampling Method} As a bio-inspired sampling method on Spike camera, FSM mainly mimics the fovea in the human visual system, where a photoreceptor first converts the optical signal into an electrical signal, then a bipolar cell processes the electrical signal and sends it to a connected ganglion cell, and finally the ganglion cell decides whether to output a spike \cite{fovea1, fovea2}. FSM conceives the above visual sampling process as a summation process (sometimes also called an `integration' process) combined with a mechanism that triggers action potentials above some critical voltage (see Fig.~2). \begin{figure}[htb] \includegraphics[width=8cm]{f2.pdf} \centering \caption{Fovea-like sampling workflow}\label{fig2} \end{figure} Specifically, in Spike camera, the intensity of light is converted into voltage by the photoreceptor. Once the analog-to-digital converter (ADC) completes the signal conversion and outputs the digital brightness, the accumulator at each pixel accumulates the brightness. At the moment \begin{small}$t$\end{small}, for pixel \begin{small}$(i, j)$\end{small}, if the accumulated brightness reaches a fixed threshold \begin{small}$\phi$\end{small} (as in (1)), then a spike is fired and the corresponding accumulator is reset.
\begin{eqnarray} B(i, j, t) = \int_{t_{i, j}^{pre}}^{t} I(i, j, \tau) d\tau\geq \phi, \end{eqnarray} where \begin{small}$B(i, j, t)$\end{small} is the accumulated brightness at sampling time \begin{small}$t$\end{small}, \begin{small}$I(i, j, \tau)$\end{small} refers to the brightness of pixel \begin{small}$(i, j)$ \end{small} at time \begin{small}$\tau$\end{small}, and \begin{small} $t_{i, j}^{pre}$\end{small} denotes the last time a spike was fired at pixel \begin{small}$(i, j)$ \end{small} before time \begin{small}$t$ \end{small}. If \begin{small} $t$\end{small} is the first time a spike is sent, then \begin{small}$t_{i, j}^{pre}$ \end{small} is set to 0. Further, spike data can be mathematically defined as, \begin{eqnarray} S_{FSM}(i, j, t) = \begin{cases} 1 &\mbox{ if (1) is satisfied}, \\ 0 &\mbox{ if (1) is not satisfied}, \\ \end{cases} \end{eqnarray} where, for pixel \begin{small}$(i, j)$ \end{small}, \begin{small}$S(i, j, t)$ \end{small} is set to digital signal ``1'' if a spike is output at sampling time \begin{small}$t$ \end{small}, and otherwise \begin{small} $S(i, j, t)$\end{small} is set to ``0''. Accordingly, the average brightness of pixel \begin{small}$(i, j)$ \end{small} between times \begin{small}$t$ \end{small} and \begin{small}$t_{i, j}^{pre}$ \end{small} can be calculated \cite{spikecamera}, i.e., \begin{eqnarray} \bar I(i, j) \approx \dfrac{\phi}{t - t_{i, j}^{pre}} = \dfrac{\phi}{{n\Delta t}}, \end{eqnarray} where \begin{small}$\Delta t$ \end{small} is the sampling time interval and \begin{small}$n \in \mathbb{N}$ \end{small} denotes the number of intervals. Hence, Spike camera can capture the visual texture of objects in all kinds of scenes, both static and dynamic. \begin{figure}[t] \includegraphics[width=8cm]{quantity.pdf} \centering \caption{Quantization error in Spike camera. (a) is a virtual scene. (b) is the reconstructed image from spike data.
The whole sampling process is simulated in a Spike camera simulator \cite{sim2}, the threshold is $\phi = 400$, and the reconstruction method is TFI \cite{spikecamera}. }\label{fig3} \end{figure} \begin{figure}[t] \includegraphics[width=8cm]{noise.pdf} \centering \caption{Noise in Spike camera. We use the Spike camera to sample four kinds of spike data under different light conditions; the sampled scene is a black paper. (a)(b)(c)(d) are respectively from the four kinds of spike data. (a) is sampled under no illumination in the room. (b) is sampled under weak illumination in the room. (c) is sampled under strong illumination in the room. (d) is sampled under direct illumination in the room. All white dots in (a)(b)(c)(d) come from noise, and the number of white dots increases with the illumination. (e) is our experimental device.} \end{figure} \subsection{Quantization Error} In fact, Spike camera is not ideal in complex environments, i.e., there is texture blur in reconstructed images (see Fig.~3). Especially under extreme light conditions, e.g., direct sunlight, the texture blur is more obvious. Quantization error is one of the main factors leading to this problem. Previously, quantization error in Spike camera had not been discussed in detail; in this work, we first give a corresponding theoretical analysis. Quantization error comes from discrete sampling, which means that spikes are sent at discrete times, so all values of $\bar I(i, j)$ within a certain range are estimated as the same value. Although quantization error cannot be avoided, it can be reduced by increasing the threshold. The cost of this method is an increase in the response time of Spike camera to scenes, i.e., the time from the beginning of sampling to the first spike. Hence, increasing the threshold is not a good way to improve the performance of Spike camera. The proof of the above conclusion is given in the appendix. In addition, \cite{recon1} uses a network to post-process spike data to improve the quality of reconstructed images.
However, this can introduce a lot of extra time and space costs. \subsection{Receptive Field Model} FSM, while mimicking the human visual system, ignores some important mechanisms, e.g., the receptive field mechanism of ganglion cells, which causes the loss of some functionality in sampling. Specifically, FSM assumes that a bipolar cell connects only one photoreceptor and one ganglion cell, but ganglion cells actually respond to a field of photoreceptors, referred to as the receptive field \cite{bio1, bio2, bio3, bio4, bio5} (see Fig.~1). The receptive field mechanism plays an important role in describing image texture information distinctly and robustly \cite{derf}. Different ganglion cells have receptive fields of different scales and different polarities (on or off, see Fig.~1(d)), which control the size of the response area and the sensitivity to light changes, respectively \cite{recep1, recep2, recep3, recep4}. The Gaussian filter bank (as in (4)) is a popular model for the receptive field \cite{freak, daisy}. However, the signal representation ability of a Gaussian filter bank is limited, which means the information in a scene cannot be fully characterized. Besides, the receptive field can also be simulated by a DoG filter \cite{bio1}; \cite{derf} uses a wavelet filter bank, generated by DoG filters consisting of the difference of two Gaussian filters, to simulate the receptive field of ganglion cells, achieving considerable success on local image description. \\\indent Accordingly, the process by which ganglion cells deal with the electrical signal from photoreceptors is similar to a wavelet transform \cite{bio1}. After ganglion cells have processed the electrical signal, they decide whether to fire a spike according to their own activation level \cite{bio4, bio5}. Although the wavelet transform is suitable for modeling the receptive field mechanism and its related theories are mature \cite{wave1, wave2, wave3}, the transform has not been used in the visual sampling of Spike camera.
Hence, how to embed the wavelet transform into the visual sampling process is a problem worthy of study. \begin{figure}[htb] \includegraphics[width=0.8\linewidth]{newp.jpg} \centering \caption{The function ``v2s'' in the Spike camera simulator, where image sequences can be converted to spike data.}\label{fig4} \end{figure} \begin{figure*}[htbp] \includegraphics[width=\linewidth]{workflow.pdf} \centering \caption{The workflow of RVSM. Firstly, every accumulator accumulates the brightness from the photoreceptors according to its own receptive field. Then, the accumulators satisfying the trigger condition fire spikes and reset their accumulation. The spike data records whether each accumulator sends a spike at each sampling time.}\label{fig5} \end{figure*} \subsection{Spike Camera Simulator} All our experiments are conducted in the Spike camera simulator \cite{sim2}. The Spike camera simulator provides simulation only for Spike camera and has some of the same extended functionality as ESIM \cite{sim1} (a simulator for the sampling of DVS). Specifically, the Spike camera simulator implements an approximate simulation of the sampling mechanism of the Spike camera in the time and space domains. We can introduce RVSM into the Spike camera simulator and use it to convert videos to spike data (see Fig.~5). \section{A Robust Visual Sampling Model Inspired by Receptive Field} \subsection{Model Architecture} RVSM is a bio-inspired visual sampling model which can capture the texture of objects in all kinds of scenes. Different from FSM, RVSM considers the receptive field mechanism in the human visual system and is thus closer to biological visual sampling. In RVSM, the intensity of light is converted into voltage by the photoreceptor.
Once the analog-to-digital converter (ADC) completes the signal conversion and outputs the digital luminance intensity, the accumulator at each pixel accumulates the weighted sum of intensity over its own receptive field, where the weights are determined by the filter used; we call this the ``summation process''. Here, we use the normalized DoG filter to simulate the receptive field (RVSM$_{DoG}$), and it can be generated from the Gaussian filter (as in (4)). \begin{equation} G_{\sigma}^{x_0, y_0}(i, j) = \dfrac{1}{2\pi\sigma^2}\exp\left(-\dfrac{(i - x_0)^2 + (j - y_0)^2}{2\sigma^2}\right), \end{equation} where \begin{small}$G_{\sigma}^{x_0, y_0}(i, j)$\end{small} is the Gaussian filter, \begin{small}$\sigma$ \end{small} is the standard deviation and \begin{small}$(x_0, y_0)$ \end{small} is the expectation. Accordingly, the DoG filter (as in (5)) can serve as the mother wavelet of the normalized DoG filter bank. \begin{equation} \begin{split} DoG(i, j)= {G_{a_1}^{0, 0}(i, j) - G_{a_2}^{0, 0}(i, j) }, \end{split} \end{equation} where we set \begin{small}$a_1 = 1$\end{small}, \begin{small}$a_2 = 1.5874$\end{small}. Further, we can obtain the normalized DoG filter bank by translation, scaling and normalization \begin{equation} \begin{split} DoG_{\sigma}^{x_0, y_0}(i, j)= \dfrac{DoG(\dfrac{i-x_0}{\sigma},\dfrac{j-y_0}{\sigma})}{\sum\limits_{(p, q)\in C_{\sigma}^{x_0, y_0}} \!\!\!\!\!\!\!\! |DoG(\dfrac{p-x_0}{\sigma},\dfrac{q-y_0}{\sigma})|}, \end{split} \end{equation} where \begin{small}$i, j \in \mathbb{Z}$\end{small}, \begin{small} $C_{\sigma}^{x_0, y_0} = [x_0-L_{\sigma}, x_0+L_{\sigma}] \times[y_0-L_{\sigma}, y_0+L_{\sigma}] \cap \mathbb{Z}^2$\end{small}, \begin{small}$L_{\sigma} \in \mathbb{Z}$\end{small} is the template size decided by the scale of the receptive field, and \begin{small} $\sigma$\end{small} controls the scale of the receptive field. The above summation process can be expressed as, \begin{eqnarray} A_{\sigma}^{x_0, y_0}(t) =\!\!\!\!\!\!\! \sum\limits_{(i, j)\in C_{\sigma}^{x_0, y_0}} \!\!\!\!\!\!\!DoG_{\sigma}^{x_0, y_0}(i, j) \int_{t_{DoG_{\sigma}^{x_0, y_0}}^{pre}}^{t} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\! I(i, j, \tau) d\tau, \end{eqnarray} where \begin{small}$A_{\sigma}^{x_0, y_0}(t)$\end{small} denotes the accumulation of an accumulator with receptive field \begin{small}$DoG_{\sigma}^{x_0, y_0}$\end{small} at pixel \begin{small}$(x_0, y_0)$\end{small} at sampling time \begin{small}$t$\end{small}, and \begin{small}$t^{pre}_{DoG_{\sigma}^{x_0, y_0}}$\end{small} is the last time a spike was fired by the normalized DoG filter \begin{small}$DoG_{\sigma}^{x_0, y_0}$\end{small} before sampling time \begin{small}$t$\end{small}, initially set to 0. The number of normalized DoG filters is finite, i.e., \begin{small}$\sigma$\end{small} takes finitely many values, and we denote the set of all possible values of \begin{small}$\sigma$\end{small} by \begin{small}$P$\end{small}. In particular, RVSM$_{DoG}$ is the same as FSM when we sample with only a normalized DoG filter of unit scale (the template size of the filter is 1). If the absolute value of the accumulation of an accumulator reaches a fixed threshold \begin{small}$\phi$\end{small}, a spike is fired. Hence, the spike data from RVSM$_{DoG}$ can be expressed as, \begin{eqnarray} S_{\sigma}(x_0, y_0, t) = \begin{cases} 1 &\mbox{ if $A_{\sigma}^{x_0, y_0}(t) \geq \phi$}, \\ -1 &\mbox{ if $ A_{\sigma}^{x_0, y_0}(t) \leq -\phi$}, \\ 0 &\mbox{ else}, \\ \end{cases} \end{eqnarray} where \begin{small}$\phi \geq 0$\end{small}; for pixel \begin{small}$(x_0, y_0)$\end{small}, \begin{small}$S_{\sigma}(x_0, y_0, t)$\end{small} is set to digital signal ``1'' (``-1'') if the corresponding accumulation reaches the threshold (negative threshold), and otherwise \begin{small}$S_{\sigma}(x_0, y_0, t)$\end{small} is set to ``0''. After a spike is fired, the accumulation of the corresponding accumulator is reset.
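To make the sampling loop concrete, the summation process (7) and the firing rule (8) can be sketched in plain NumPy. This is a minimal sketch of ours rather than the simulator's implementation: the filter support $L_\sigma=\lceil 3\sigma\rceil$ and the per-frame discretization of the integral in (7) are simplifying assumptions.

```python
import numpy as np

A1, A2 = 1.0, 1.5874  # Gaussian scales of the mother wavelet (Eq. (5))

def normalized_dog(sigma, L):
    """L1-normalized DoG filter on the grid [-L, L]^2 (Eqs. (4)-(6))."""
    j, i = np.mgrid[-L:L + 1, -L:L + 1]
    def gauss(s):
        return np.exp(-(i**2 + j**2) / (2.0 * s**2)) / (2.0 * np.pi * s**2)
    dog = gauss(A1 * sigma) - gauss(A2 * sigma)
    return dog / np.abs(dog).sum()

def response(img, kernel):
    """Weighted sum of brightness over each pixel's receptive field."""
    L = kernel.shape[0] // 2
    pad = np.pad(img, L, mode="edge")
    H, W = img.shape
    out = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            out[y, x] = (pad[y:y + 2 * L + 1, x:x + 2 * L + 1] * kernel).sum()
    return out

def rvsm_dog_sample(frames, sigmas, phi):
    """Integrate-and-fire sampling with DoG receptive fields (Eqs. (7)-(8)).
    frames: (T, H, W) brightness sequence; returns {-1, 0, 1} spikes of
    shape (len(sigmas), T, H, W)."""
    T, H, W = frames.shape
    kernels = [normalized_dog(s, max(1, int(np.ceil(3 * s)))) for s in sigmas]
    acc = np.zeros((len(sigmas), H, W))
    spikes = np.zeros((len(sigmas), T, H, W), dtype=np.int8)
    for t in range(T):
        for k, ker in enumerate(kernels):
            acc[k] += response(frames[t], ker)   # summation process
            pos, neg = acc[k] >= phi, acc[k] <= -phi
            spikes[k, t][pos] = 1
            spikes[k, t][neg] = -1
            acc[k][pos | neg] = 0.0              # reset after a spike
    return spikes
```

As noted above, with a single filter of unit template size the loop reduces to the FSM accumulator of (1)-(2); negative spikes then never occur, since brightness accumulation is nonnegative.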
Besides, we can also use a normalized Gaussian filter to simulate the receptive field in the sampling model (called RVSM$_{Gauss}$), i.e., replacing the normalized DoG filter with a normalized Gaussian filter. The detailed formula is in the appendix. The whole sampling workflow is shown in Fig.~6. \subsection{Visual Texture Reconstruction} \begin{figure*}[htbp] \includegraphics[width=\linewidth]{reconstruct.png} \centering \caption{Reconstructed images. From left to right, the scenes are ``Character'', ``Teapot'', ``Coin'', ``Grasshopper'', ``Flyball'' and ``Driving''. (a) is the ground truth; the reconstructed images in (b1)(b2)(b3) correspond to spike data sampled by FSM, RVSM$_{Gauss}$ (One Gauss) and RVSM$_{DoG}$ (Three DoG) respectively in the absence of noise, and the reconstructed images in (c1)(c2)(c3) correspond to spike data sampled by FSM, RVSM$_{Gauss}$ (Four Gauss) and RVSM$_{DoG}$ (Four DoG) respectively in the presence of noise, where the noise intensity uses the default settings of the Spike camera simulator.}\label{fig6} \end{figure*} \begin{table*}[ht]\large \centering \begin{spacing}{1.1} \renewcommand\tabcolsep{10.0pt} \resizebox{\hsize}{!}{ \begin{tabular}{c|c|cccccccccccc} \toprule[2pt] \hline \multirow{2}*{\textbf{Metric}} & \multirow{2}*{\textbf{Method}} & \multicolumn{12}{|c}{\textbf{Scene}} \\ \cline{3 - 14} & & \textbf{Character} & \textbf{Character(N)} & \textbf{Teapot} & \textbf{Teapot(N)} & \textbf{Coin} & \textbf{Coin(N)} & \textbf{Grasshopper} & \textbf{Grasshopper(N)} & \textbf{Flyball} & \textbf{Flyball(N)} & \textbf{Driving} & \textbf{Driving(N)} \\ \hline \multirow{9}*{\textbf{MSE}} & FSM & 129.29 & 400.43 & 187.38 & 318.33 & 352.71 & 488.72 & 60.78 & 123.58 & 432.64 & 653.92 & 186.32 & 316.69 \\ & One DoG & 170.44 & 653.81 & 216.27 & 505.21 & 381.64 & 774.12 & 68.27 & 199.09 &477.32 &1031.74 &297.91 &557.42 \\ & Two DoG & 152.06 & 586.81 & 206.24 & 455.25 & 370.52 & 686.57 & 54.38 & 166.22 &459.16 &927.13 &344.85 &568.38 \\ &
Three DoG & \textbf{96.16} & 283.18 & \textbf{87.47} & 204.87 & 135.59 & 289.90 & 56.65 & 105.88 &150.74 &373.40 &\textbf{160.69} &264.39 \\ & Four DoG & 109.86 & 209.19 & 93.34 & 130.23 & \textbf{111.12} & 132.19 & \textbf{51.46} & 60.32 &\textbf{127.85} & 182.14&217.25 &242.17 \\ & One Gauss & 119.93 & 320.79 & 182.46 & 262.76 & 355.08 & 398.87 & 59.26 & 100.40 &431.44 &533.63 &185.38 &271.96 \\ & Two Gauss & 110.56 & 139.77 & 175.69 & 129.68 & 351.99 & 185.62 & 57.59 & 46.84 &420.70 &249.24 & 196.18&\textbf{163.05} \\ & Three Gauss & 109.50 & 98.24 & 174.79 & 100.73 & 353.63 & 133.90 & 57.67 & 33.09&418.12 &179.57 &232.50 &172.86 \\ & Four Gauss & 112.74 & \textbf{89.02} & 176.92 & \textbf{95.83} & 357.77 & \textbf{118.84} & 58.55 & \textbf{28.67} &419.15 & \textbf{157.89}&277.38 &209.13 \\ \hline \hline \multirow{9}*{\textbf{PSNR}} & FSM & 27.02 & 22.11 & 25.41 & 23.10 & 22.68 & 21.26 & 30.29 & 27.21 &21.86 &20.02 &25.43 &23.12 \\ & One DoG & 25.81 & 19.97 & 24.78 & 21.09 & 22.33 & 19.25 & 29.79 & 25.14 &21.42 &18.04 &23.38 &20.67 \\ & Two DoG & 26.31 & 20.45 & 24.98 & 21.55 & 22.47 & 19.78 & 30.77 & 25.92 &21.59 &18.50 &22.75 &20.59 \\ & Three DoG & \textbf{28.30} & 23.61 & \textbf{28.71} & 25.01 & 26.83 & 23.52 & 30.59 & 27.88 &26.37 &22.43 & \textbf{26.07}&23.91 \\ & Four DoG & 27.72 & 24.92 & 28.43 & 26.98 & \textbf{27.75} & 26.96 & \textbf{31.06} & 30.32 &\textbf{27.12} &25.53 &24.76 &24.28 \\ & One Gauss & 27.34 & 23.06 & 25.51 & 25.51 & 22.65 & 22.14 & 30.40 & 28.11 & 21.88&20.91 & 25.45&23.78 \\ & Two Gauss & 27.69 & 26.67 & 25.68 & 27.00 & 22.69 & 25.46 & 30.52 & 31.42 & 21.99 &24.21 & 25.20&\textbf{26.01} \\ & Three Gauss & 27.73 & 28.19 & 25.70 & 28.09 & 22.67 & 26.88 & 30.52 & 32.93&22.02 &25.63 & 24.47&25.75 \\ & Four Gauss & 27.61 & \textbf{28.63} & 25.65 & \textbf{28.31} & 22.62 & \textbf{27.41} & 30.45 & \textbf{33.55} &22.00 &\textbf{26.19} &23.70 &24.92 \\ \hline \hline \multirow{9}*{\textbf{SSIM}} & FSM & 0.749 & 0.346 & 0.837 & 0.524 & 0.881 
& 0.362 & 0.943 &0.438 &0.770 & 0.266 &0.801 &0.581 \\ & One DoG & 0.707 & 0.267 & 0.812 & 0.434 & 0.854 & 0.288 & 0.930 & 0.318 & 0.747& 0.203 & 0.748&0.522 \\ & Two DoG & 0.724 & 0.285 & 0.824 & 0.455 & 0.857 & 0.306 & 0.931 & 0.352 &0.757 &0.214 & 0.739&0.523 \\ & Three DoG & 0.797 & 0.413 & 0.876 & 0.586 & 0.901 & 0.432 & 0.947 & 0.539 &0.811 &0.319 & 0.821&0.622 \\ & Four DoG & \textbf{0.803} & 0.535 & \textbf{0.888} & 0.703 & \textbf{0.919} & 0.626 & \textbf{0.965} & 0.752 &\textbf{0.852} &0.494 & \textbf{0.854}&0.697 \\ & One Gauss & 0.760 & 0.387 & 0.842 & 0.563 & 0.889 & 0.396 & 0.946 & 0.495 &0.778 &0.298 &0.808 &0.598 \\ & Two Gauss & 0.777 & 0.582 & 0.851 & 0.719 & 0.893 & 0.561 & 0.952 & 0.712 & 0.786&0.466 &0.816 &0.703 \\ & Three Gauss & 0.787 & 0.694 & 0.854 & 0.798 & 0.895 & 0.669 & 0.955 & 0.821 &0.791 &0.588 & 0.806&0.749 \\ & Four Gauss & 0.789 & \textbf{0.755} & 0.854 & \textbf{0.840} & 0.894 & \textbf{0.739} & 0.957 & \textbf{0.878} & 0.793& \textbf{0.671} &0.786 &\textbf{0.760} \\ \hline \bottomrule[2pt] \end{tabular} } \end{spacing} \caption{The quantitative metrics on HMD where the spike data is sampled in the presence of noise for Scene(N) and the spike data is sampled without noise for Scene.} \label{table5} \end{table*} Spike data from FSM can be easily used to reconstruct scene information according to its actual meaning, i.e., a spike means brightness accumulation is large enough. Similarly, to illustrate the validity of spike data from RVSM, we also provide an easy method to restore the captured scene according to actual meaning of spike data from RVSM. In RVSM, a spike means that the weighted accumulation of brightness in receptive field arrives activation level. Further, a spike in RVSM$_{DoG}$ means the absolute value of the coefficient of the brightness accumulation signal on a normalized DoG basis is large enough because the whole summation process is realized by an inner product operation (as (7)). 
Hence, spike data from RVSM$_{DoG}$ can report the brightness accumulation signal in the wavelet domain. We denote the coefficient matrix of the brightness accumulation signal in the wavelet domain by \begin{small}$K_{\sigma}^{i, j}(t)$\end{small}, which can be estimated as
\begin{align}
K_{\sigma}^{i, j}(t) \approx
\begin{cases}
\dfrac{\phi}{t - t_{DoG_{\sigma}^{i, j}}^{pre}} &\mbox{if $S_{\sigma}(x_0, y_0, t) = 1$}, \\\\
\dfrac{-\phi}{t - t_{DoG_{\sigma}^{i, j}}^{pre}} &\mbox{if $S_{\sigma}(x_0, y_0, t) = -1$}, \\\\
K_{\sigma}^{i, j}(t - 1) &\mbox{otherwise},
\end{cases}
\end{align}
where \begin{small}${t - t_{DoG_{\sigma}^{i, j}}^{pre}}$\end{small} is the time elapsed since the last spike: \begin{small}$K_{\sigma}^{i, j}(t)$\end{small} is updated if the accumulator with receptive field \begin{small}$DoG_{\sigma}^{i, j}$\end{small} fires a spike at sampling time \begin{small}$t$\end{small}; otherwise \begin{small}$K_{\sigma}^{i, j}(t)$\end{small} keeps the coefficient from sampling time \begin{small}$t - 1$\end{small}, and \begin{small}$K_{\sigma}^{i, j}(0)$\end{small} is set to 0. Accordingly, we can obtain the approximate brightness accumulation signal using the inverse wavelet transformation of the coefficient matrix,
\begin{eqnarray}
I(x, y, t) \approx \sum\limits_{i, j, \sigma} K_{\sigma}^{i, j}(t) DoG_{\sigma}^{i, j}(x, y),
\end{eqnarray}
where \begin{small}$I(x, y, t)$\end{small} denotes the brightness of pixel \begin{small}$(x, y)$\end{small} at sampling time \begin{small}$t$\end{small}, \begin{small}$(i, j) \in \mathbb{Z}^2 \cap ([1, height] \times [1, width])$\end{small}, \begin{small}$height \times width$\end{small} controls the resolution of sampling, and \begin{small}$\sigma \in P$\end{small}. Similarly, spike data from RVSM$_{Gauss}$ and the more general RVSM can also be used to estimate the brightness of scenes; the detailed formulas are in the appendix.
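As a concrete illustration, the coefficient update and the inverse transform above can be sketched in a few lines of NumPy. This is a simplified stand-in, not the simulator's implementation: the DoG bank, kernel size, threshold $\phi$ and array layout (`K[s, i, j]`, one slice per scale) are hypothetical choices.

```python
import numpy as np

def dog_kernel(sigma, ratio=1.6, size=7):
    """A normalized difference-of-Gaussians template (one receptive field)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = lambda s: np.exp(-(xx**2 + yy**2) / (2 * s**2)) / (2 * np.pi * s**2)
    k = g(sigma) - g(ratio * sigma)
    return k / np.linalg.norm(k)

def update_coefficients(K, spikes, last_t, t, phi=1.0):
    """K[s, i, j] <- sign(spike) * phi / (t - t_pre) where accumulator (s, i, j)
    fires at sampling time t; elsewhere the previous coefficient is kept."""
    fired = spikes != 0
    K[fired] = np.sign(spikes[fired]) * phi / (t - last_t[fired])
    last_t[fired] = t
    return K

def reconstruct(K, sigmas, size=7):
    """Approximate inverse transform: I(x, y) = sum_{sigma,i,j} K * DoG basis,
    i.e. a correlation of each coefficient map with its (symmetric) kernel."""
    S, H, W = K.shape
    out = np.zeros((H, W))
    r = size // 2
    for s, sig in enumerate(sigmas):
        ker = dog_kernel(sig, size=size)
        pad = np.pad(K[s], r)  # zero padding outside the sampling grid
        for x in range(H):
            for y in range(W):
                out[x, y] += np.sum(pad[x:x + size, y:y + size] * ker)
    return out
```

Running `update_coefficients` once per sampling step and `reconstruct` on demand mirrors the estimator above; the nested loops are kept for clarity and would be replaced by a convolution in practice.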
\subsection{The Generalization of RVSM} Although we choose the Spike camera as the carrier of RVSM due to its great potential, this does not mean that RVSM is only suitable for the Spike camera. As an idea inspired by the receptive field to collect regional information, RVSM can also be used in other neuromorphic vision sensors, e.g., DVS; we only need to make the corresponding changes according to the principle of each sensor. The related details are in the appendix. \section{Experiment} \subsection{Dataset} To fully compare the sampling performance among RVSM$_{DoG}$, RVSM$_{Gauss}$ and FSM, we provide a high-speed motion spike dataset (HMD) including six scenes. The dataset has $6 \times 9 \times 2$ spike sequences, i.e., each scene contains $9 \times 2$ spike sequences (with or without noise) captured by FSM, by RVSM$_{DoG}$ with four kinds of normalized DoG filter bank (referred to as One DoG, Two DoG, Three DoG and Four DoG, with the corresponding \begin{small}$P$\end{small} being \begin{small}$\{0.24\}$, $\{0.24,\; 0.348\}$, $\{0.24,\; 0.348,\; 0.5046\}$ and $\{0.24,\; 0.348,\; 0.5046,\; 0.7317\}$\end{small}), and by RVSM$_{Gauss}$ with four kinds of normalized Gaussian filter bank (referred to as One Gauss, Two Gauss, Three Gauss and Four Gauss), respectively. The noise configuration is the same as in the simulator. Note that, for fairness, we ensure that RVSM$_{DoG}$ and RVSM$_{Gauss}$ have the same response time to scenes as FSM; the related details are in the appendix. The above spike data is generated by the Spike camera simulator. Besides, the dataset also has 6 image sequences as ground truth.
The six scenes are named ``Character", ``Teapot", ``Coin", ``Grasshopper", ``Flyball" and ``Driving", each corresponding to 500 images: ``Character" describes characters with simple rotation (uniform rotation along one dimension), ``Teapot" describes a teapot with easy rotation (uniform rotation in three dimensions), ``Coin" describes the complex rotation of a coin, ``Grasshopper" describes the simple motion of a jumping grasshopper, ``Flyball" describes the complex motion of a flying ball, and ``Driving" describes vehicle driving in complex scenes. \subsection{The Performance of RVSM} We compare the sampling performance among RVSM$_{DoG}$, RVSM$_{Gauss}$ and FSM by calculating the metrics (PSNR, MSE and SSIM) of reconstructed images from RVSM$_{DoG}$ (One DoG, Two DoG, Three DoG and Four DoG), RVSM$_{Gauss}$ (One Gauss, Two Gauss, Three Gauss and Four Gauss) and FSM, respectively. For spike data from FSM, we use TFI \cite{spikecamera} to reconstruct images because TFP \cite{spikecamera} is not suitable for spike data sampled in high-speed scenes. For One DoG, Two DoG, Three DoG and Four DoG (One Gauss, Two Gauss, Three Gauss and Four Gauss), each successive bank introduces a normalized DoG filter (normalized Gaussian filter) with a larger scale than the previous one, and the minimum filter scale corresponds to a $3 \times 3$ template. Besides, the reconstructed images are adjusted to the same brightness level as the ground truth. \\\indent Fig.~7 shows the experimental results. In the absence of noise (Fig.~7(b1)(b2)(b3)), we find that, for complex scenes, e.g., ``Flyball", RVSM$_{DoG}$ can capture fine textures and has a higher contrast. From Table~\ref{table5}, we can also draw the consistent conclusion that the images from RVSM$_{DoG}$ (Three DoG and Four DoG) have much better quality than the images from FSM and RVSM$_{Gauss}$ in the absence of noise.
This shows that RVSM$_{DoG}$ is much less affected by quantization error, so that the model can more effectively sample the texture information in high-speed motion scenes. Although the performance of RVSM$_{Gauss}$ is far inferior to that of RVSM$_{DoG}$, RVSM$_{Gauss}$ (One Gauss) has performance similar to FSM in most scenes. The above results show that RVSM, by introducing the receptive field mechanism to sample regional information, is an effective visual sampling model for the Spike camera, and that the choice of the filter bank used to simulate the receptive field is important, since it decides the performance of RVSM. \\\indent RVSM still shows powerful performance compared with FSM in the presence of noise (see Table~\ref{table5}). The conclusion is slightly different from that without noise. First, for all scenes, the quality of reconstructed images from RVSM$_{DoG}$ (Four DoG) and RVSM$_{Gauss}$ (Four Gauss) is much better than that from FSM. It means that, in a more realistic environment (with noise), RVSM can sample the information of scenes more effectively because it suffers less quantization error and noise error, and it largely alleviates the problem that the Spike camera is sensitive to noise. Interestingly, by comparing the change of the quantitative metrics before and after adding noise, we also find that the reconstruction result from RVSM$_{Gauss}$ is better in the presence of noise than in its absence, because noise offsets part of the quantization error. Besides, as more receptive fields with large scale are used for sampling (from One DoG to Four DoG, from One Gauss to Four Gauss), the quality of reconstructed images is greatly improved for most scenes. This is because the receptive field with large scale is more robust to noise, a conclusion confirmed by the subsequent robustness experiments.
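For reference, the quantitative comparison above uses the standard definitions of the metrics; a minimal NumPy sketch (assuming 8-bit frames with peak value 255, and a simple mean-brightness match standing in for the brightness adjustment described above; SSIM is omitted for brevity):

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between ground truth and reconstruction."""
    ref, img = np.asarray(ref, float), np.asarray(img, float)
    return float(np.mean((ref - img) ** 2))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB (peak = 255 for 8-bit frames)."""
    e = mse(ref, img)
    return float('inf') if e == 0 else 10.0 * np.log10(peak**2 / e)

def match_brightness(img, ref):
    """Scale a reconstruction to the mean brightness of the ground truth
    before scoring, so all samplers are compared at the same level."""
    img, ref = np.asarray(img, float), np.asarray(ref, float)
    return img * (ref.mean() / max(img.mean(), 1e-12))
```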
\subsection{The Robustness of RVSM} \begin{figure*}[ht] \includegraphics[width=\linewidth]{f7.pdf} \centering \caption{The influence of noise intensity on ASS $I_1$, ASAS $I_2$ and ASASS $I_3$, where the scale $\sigma$ of Scale 1, Scale 2, Scale 3 and Scale 4 is 0.24, 0.348, 0.5046 and 0.7317, respectively.}\label{fig8} \end{figure*} In the actual sampling process, noise is everywhere, and a good sampling method should filter it effectively. Hence, we study the effects of different intensities of noise on RVSM$_{DoG}$ and FSM. In FSM and RVSM$_{DoG}$, noise mainly occurs in the process of light intensity accumulation \cite{sim2}. Here, we consider the noise caused by the dark electric current, the offset voltage and the capacitor. Accordingly, the light intensity accumulation process and the spike data for RVSM$_{DoG}$ can be updated as, \begin{align} \!\!\!\!A_{\sigma}^{x_0, y_0}(t) & =\!\!\!\!\!\! \sum\limits_{(i, j)\in C_{\sigma}^{x_0, y_0}} \!\!\!\!\!\!DoG_{\sigma}^{x_0, y_0}(i, j) \!\!\int_{t_{DoG_{\sigma}^{x_0, y_0}}^{pre}}^{t} \!\!\!\!\!\!\!\!\!(I(i, j, \tau) \nonumber \\ \\ & + I_{dark}(i, j, \tau)) d\tau, \nonumber \end{align} \begin{align} S_{\sigma}&(x_0, y_0, t) = \\ & \begin{cases} 1 &\mbox{ if \begin{small} $A_{\sigma}^{x_0, y_0}(t) \geq \theta(i, j) \phi + V_{OS}(i, j)$\end{small}}, \\ -1 &\mbox{ if \begin{small}$ A_{\sigma}^{x_0, y_0}(t) \leq -(\theta(i, j) \phi + V_{OS}(i, j))$\end{small}}, \\ 0 &\mbox{ else}, \nonumber \end{cases} \end{align} where $I_{dark}(i, j, \tau)$, $V_{OS}(i, j)$ and $\theta(i, j)$ are noise random variables, denoting the dark electric current, the offset voltage and the capacitor noise, respectively. For FSM, the light intensity accumulation process and the spike data are as in the simulator. \\\indent We design a simple scene to test the robustness of the sampling models in the Spike camera simulator. In this scene, the background is black, which means that the work current $I$ is $0$. Therefore, all spikes are generated by noise.
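The noisy accumulate-and-fire process above is easy to simulate for a single accumulator. The sketch below is a toy version with hypothetical noise levels and threshold (the capacitor factor is frozen for brevity); the background is black, so the work current is zero and every spike is noise-induced:

```python
import numpy as np

def count_noise_spikes(steps, kernel, rng, phi=50.0,
                       e_dark=0.1, sd_dark=0.05, sd_vos=0.5):
    """One accumulator watching a black scene (I = 0): integrate weighted
    dark-current noise and count threshold crossings, resetting after each."""
    v_os = rng.normal(0.0, sd_vos)           # fixed per-accumulator offset
    acc, spikes = 0.0, 0
    for _ in range(steps):
        dark = rng.normal(e_dark, sd_dark, size=kernel.shape)
        acc += float(np.sum(kernel * dark))  # weighted accumulation of noise
        if abs(acc) >= phi + v_os:           # fire on either polarity
            spikes += 1
            acc = 0.0
    return spikes
```

Because the weights of a DoG kernel sum to roughly zero, the mean dark current largely cancels inside the receptive field and only its fluctuations accumulate, which is the intuition for why larger-scale accumulators fire fewer noise spikes.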
Further, we use three indices to describe the robustness of sampling models, i.e., the average number of spikes per sampling (ASS, $I_1$), the average number of spikes generated by each accumulator per sampling (ASAS, $I_2$) and the average number of spikes generated by all accumulators with the same scale per sampling (ASASS, $I_3$). The ASS can be defined as, \begin{equation} I_1 = \begin{cases} \dfrac{\sum\limits_{i,j,t} S_{FSM}(i, j, t)}{T} &\mbox{ for FSM }, \\\\ \dfrac{\sum\limits_{\sigma,i,j,t} |S_{\sigma}(i, j, t)|}{T} &\mbox{ for RVSM$_{DoG}$}, \\ \end{cases} \end{equation} where \begin{small}$T$\end{small} is set to 1000 and denotes the total number of samplings, \begin{small}$\sigma \in P$\end{small}, \begin{small}$(i, j) \in \mathbb{Z}^2 \cap [1, H] \times [1, W]$\end{small}, \begin{small}$H \times W$\end{small} controls the resolution of sampling, and here \begin{small}$H$\end{small} and \begin{small}$W$\end{small} are both set to 100. Further, the ASAS can be expressed as, \begin{equation} I_2 = \begin{cases} \dfrac{I_1}{WH} &\mbox{ for FSM }, \\\\ \dfrac{I_1}{WH|P|} &\mbox{ for RVSM$_{DoG}$}, \\ \end{cases} \end{equation} where $|P|$ is the number of elements in $P$. And the ASASS can be expressed as a function of the scale \begin{small}$\sigma$\end{small} as, \begin{equation} I_3(\sigma) = \begin{cases} I_1 &\mbox{ for FSM }, \\\\ \dfrac{\sum\limits_{i,j,t}|S_{\sigma}(i, j, t)|}{WH} &\mbox{ for RVSM$_{DoG}$}. \\ \end{cases} \end{equation} Besides, for simplicity, we assume that the noise sources are independent and identically distributed.
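Given recorded spike tensors, the three indices reduce to simple averages; a sketch following the formulas above (the array layout is an assumption: `fsm` of shape `(T, H, W)` with values in {0, 1}, `rvsm` of shape `(S, T, H, W)` with values in {-1, 0, 1}, one slice per scale):

```python
import numpy as np

def ass_fsm(fsm):
    """I1 for FSM: total spikes divided by the number of samplings T."""
    return float(fsm.sum()) / fsm.shape[0]

def ass_rvsm(rvsm):
    """I1 for RVSM_DoG: spikes are signed, so |S| is counted."""
    return float(np.abs(rvsm).sum()) / rvsm.shape[1]

def asas_rvsm(rvsm):
    """I2: I1 divided by the number of accumulators W * H * |P|."""
    S, T, H, W = rvsm.shape
    return ass_rvsm(rvsm) / (W * H * S)

def asass_rvsm(rvsm, s):
    """I3(sigma): spikes of one scale, normalized by W * H as in the text."""
    S, T, H, W = rvsm.shape
    return float(np.abs(rvsm[s]).sum()) / (W * H)
```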
In the experiment, \begin{small}$I_{dark}(i, j, \tau)$\end{small}, \begin{small}$V_{OS}(i, j)$\end{small} and \begin{small}$\theta(i, j)$\end{small} are set to Gaussian distributions, i.e., \begin{align} I_{dark}(i, j, \tau) \sim N(e_1, (\beta_1 k)^2), \\ V_{OS}(i, j) \sim N(e_2, (\beta_2 k)^2), \\ \theta(i, j) \sim N(e_3, (\beta_3 k)^2), \end{align} where $e_1$, $e_2$ and $e_3$ are expectations, $\beta_1 k$, $\beta_2 k$ and $\beta_3 k$ are standard deviations, and we use $k$ to control the noise intensity. \\\indent The results are shown in Fig.~\ref{fig8}. In Fig.~\ref{fig8}(a), we find that the average number of spikes per sampling increases as the standard deviation, i.e., the amount of noise, increases. Besides, the average number of spikes per sampling in RVSM$_{DoG}$ is smaller when the standard deviation is below some fixed value, and for Three DoG and Four DoG it is only slightly larger than that of FSM when the standard deviation is large. This means RVSM$_{DoG}$ produces less noise than FSM when the noise intensity is not large. Fig.~\ref{fig8}(b) shows that the average number of spikes generated by each accumulator per sampling is smaller than that of FSM for all standard deviations, which means the accumulators in RVSM$_{DoG}$ have a better ability to filter noise. Further, we can see that, with the introduction of accumulators with a larger scale, the average number of spikes generated by each accumulator per sampling decreases. Hence, the accumulator with large scale is more resistant to noise. This conclusion is verified more directly by Fig.~\ref{fig8}(c), i.e., the average number of spikes generated by accumulators with larger scale per sampling is smaller under the same standard deviation. Besides, RVSM$_{Gauss}$ has similar robustness to RVSM$_{DoG}$, and in the appendix we give a theoretical explanation of the robustness of RVSM in a simple case.
\section{Conclusion} In this paper, we propose a novel sampling model (RVSM) for the Spike camera which uses a wavelet filter bank (RVSM$_{DoG}$) or a Gaussian filter bank (RVSM$_{Gauss}$) to simulate the receptive field, and which is closer to the human visual system than FSM. The spike data from RVSM can report the brightness accumulation signal in a function domain. Accordingly, we propose an efficient method similar to the inverse wavelet transform to convert spike data from RVSM into images. Besides, we test the performance of FSM, RVSM$_{DoG}$ and RVSM$_{Gauss}$ on the proposed HMD, a high-speed motion spike dataset including a variety of motion scenes. Interestingly, we find that sampling with the receptive field (RVSM$_{DoG}$ and RVSM$_{Gauss}$) has a better ability to capture the texture information of objects than FSM. Further, we discuss the robustness of RVSM and FSM to noise. The results show that FSM suffers from noise easily, while RVSM can filter high-intensity noise effectively by collecting regional information. Besides, RVSM is suitable not only for the Spike camera, but also for other neuromorphic vision sensors, e.g., DVS. Next, we will study the performance of RVSM in other sensors and port it to hardware. All code will be released after the paper is published. \hupar{Acknowledgments.} This work is supported by grants from the National Natural Science Foundation of China under contract No.~61806010. \bibliographystyle{ieee_fullname}
\section{Introduction} Symmetry plays an important role in physics. Sometimes it is spontaneously broken in the low energy, and as a remnant, there appears a massless Nambu-Goldstone boson. If the symmetry is a local one, it is absorbed by the corresponding gauge boson. On the other hand, if the symmetry is a global and approximate one, there remains a light pseudo-Nambu-Goldstone boson (PNGB), which has been a subject of considerable interest. PNGBs, if exist, will provide us with invaluable information on the high energy physics. Various types of global symmetries and the associated PNGBs have been considered so far. One example is the QCD axion, which arises in association with the spontaneous breakdown of the Peccei-Quinn symmetry~\cite{Peccei:1977hh,QCD-axion}. Importantly, the QCD axion is coupled to gluons and photons through anomalies, as well as to the quarks and the leptons at tree or one-loop level. The interactions are suppressed by the decay constant, which parametrizes the symmetry breaking scale. In other extensions of the standard model (SM), there arise PNGBs with similar properties, the so called axion-like particles, and especially those with couplings to photons have been studied extensively from both theoretical and experimental aspects~\cite{Jaeckel:2010ni}. The couplings of the QCD axion and the axion-like particles to photons, nucleons, and electrons are tightly constrained by cosmology, astrophysics and the ground-based experiments~\cite{Jaeckel:2010ni,Andreas:2010ms,Cadamuro:2011fd}. In particular, the astrophysical constraints are extremely tight, pushing the scale of new physics to an intermediate scale or above. Still, there may be other kind of PNGBs with different properties at a scale around or even below the intermediate scale, without any conflict with those constraints. In this paper we pursue a possibility that a PNGB associated with new physics is lurking around or below the intermediate scale. 
For this, we need to evade tight astrophysical bounds on the PNGBs. One way is to consider PNGBs, which are not directly coupled to the SM sector, but mainly coupled to a hidden sector~\cite{Weinberg:2013kea}. Instead, we want to consider here the case in which some of the SM particles are charged under a global flavor symmetry. The maximal possible flavor symmetry for the SM particles with three right-handed neutrinos is ${\rm U(3)}^6$. We consider an anomaly-free global U(1)$_F$ flavor symmetry, which is a subgroup of the maximal flavor symmetry. In particular, a leptophilic PNGB model is simple and phenomenologically interesting, and we will construct concrete models along these lines. Such leptophilic PNGBs without anomalous couplings to photons evade various experimental and astrophysical bounds coming from couplings with nucleons and photons. We will mainly focus on very light PNGBs with mass lighter than twice the electron mass.\footnote{ Experimental bounds on PNGBs with mass heavier than ${\cal O}(1)$\,MeV including leptophilic ones were studied in Ref.~\cite{Essig:2010gu}. } There is an interesting feature of the PNGB associated with an anomaly-free global symmetry. Although suppressed, such a PNGB is necessarily coupled to photons through threshold corrections. In particular, the decay into two photons can be the main decay mode if the PNGB mass is less than twice the electron mass. If such a light PNGB constitutes dark matter, it mainly decays into two photons, producing a narrow X-ray line. This can explain the recent hint for the X-ray line at about $3.5$\,keV~\cite{Bulbul:2014sua,Boyarsky:2014jta} for the PNGB mass of about $7$\,keV. As we shall see, the required decay constant is $f_a = {\cal O}(10^{10})$\,GeV if the electron is charged under the symmetry, whereas it is $f_a = {\cal O}(10^{5})$\,GeV if the electron is neutral under the symmetry.
This should be contrasted to the fact that the observed X-ray flux can also be explained by the string axion with a decay constant of order $10^{14-15}$\,GeV as first pointed out in Ref.~\cite{Higaki:2014zua}.\footnote{ The X-ray line produced by light modulus decay was studied many years ago by Kawasaki and one of the present authors (TTY) in Ref.~\cite{Kawasaki:1997ah} (see also Refs.~\cite{Hashiba:1997rp,Kusenko:2012ch}). Recently there appeared various possibilities to explain the $3.5$\,keV X-ray line~\cite{Ishida:2014dlp,Higaki:2014zua, Finkbeiner:2014sja, Jaeckel:2014qea,Lee:2014xua,Krall:2014dba}. } The rest of the paper is organized as follows. In Sec.~\ref{sec:2} we discuss the coupling of the PNGB to photons through threshold corrections, and its implications for the $3.5$\,keV X-ray line. We discuss production of PNGB dark matter in Sec.~\ref{sec:3}. In Sec.~\ref{sec:4}, we will build concrete models for leptophilic PNGBs. The last section is devoted to discussion and conclusions. \section{Couplings of PNGBs to photons} \label{sec:2} Let us consider a global U(1)$_F$ flavor symmetry under which only leptons are charged. Most importantly, we assume that the global U(1)$_F$ symmetry is anomaly free so that the PNGB coupling to photons is suppressed, evading various observational constraints. The coupling to photons is nevertheless induced by threshold corrections, which we will study in this section. Let us study the interactions of the PNGB with leptons at low energies. Later we will construct concrete flavor models.
The relevant low-energy interactions are given by \bea \label{NGlepton} - {\cal L} &=& m_e {\bar e}_R e_L e^{i q_e \frac{a}{f_a}}+ m_\mu {\bar \mu}_R \mu_L e^{i q_\mu \frac{a}{f_a}}+ m_\tau {\bar \tau}_R \tau_L e^{i q_\tau \frac{a}{f_a}} + {\rm h.c.} \end{eqnarray} where $a$ is the PNGB associated with the flavor symmetry, $f_a$ the decay constant, and $q_{e}$, $q_\mu$, and $q_{\tau}$ the coupling constants for electron, muon and tau leptons, respectively. We exclude the case of $q_e =q_\mu = q_\tau = 0$ in the following analysis. We are interested in the case where the PNGB mass is much lighter than twice the electron mass. Integrating out electron, muon, and tau leptons, therefore, we obtain the effective interaction, \bea {\cal L}_{\rm eff} &\simeq& - (q_e+q_\mu+q_\tau) \frac{ \alpha_{em} }{4 \pi f_a} a F_{\mu \nu} {\tilde F}^{\mu \nu}\non\\&& + \frac{ \alpha_{em} }{48 \pi f_a} \left( \frac{q_e}{m_e^2} + \frac{q_\mu}{m_\mu^2} + \frac{q_\tau}{m_\tau^2} \right) \left( (\partial^2 a) F_{\mu \nu} {\tilde F}^{\mu \nu} + 2a F_{\mu \nu} \partial^2{\tilde F}^{\mu \nu} \right), \label{Leff1} \end{eqnarray} where the first line corresponds to the anomaly term, and the second line arises from the threshold corrections. We require $q_e + q_\mu+q_\tau = 0$ to ensure that the flavor symmetry is anomaly-free. Then the first term in \EQ{Leff1} vanishes, and we are left with the finite threshold corrections. Therefore the PNGB coupling to photons is significantly suppressed for anomaly-free symmetry. As long as we are interested in the decay or production of the on-shell PNGB and photons, we can use their equations of motion. Then the effective interaction for the PNGB to photons becomes \bea {\cal L}_{\rm eff} &=& \frac{ \alpha_{em} m_a^2 }{48 \pi f_a} \left(\frac{q_e}{m_e^2} + \frac{q_\mu}{m_\mu^2} + \frac{q_\tau}{m_\tau^2}\right) a F_{\mu \nu} {\tilde F}^{\mu \nu} \end{eqnarray} for the on-shell PNGB and photons and $m_a^2 \ll m_e^2$. 
The PNGB coupling to photons is dominated by the first term if $q_e \ne 0$; otherwise it is dominated by the second term. Note that $q_e$ and $q_\mu$ cannot vanish simultaneously if the anomaly-free condition is to be satisfied. The decay rate of the PNGB into two photons is approximately given by \bea \Gamma_{a \to \gamma \gamma} &\simeq& \frac{\alpha_{em}^2}{9216\pi^3} \frac{m_a^7}{f_a^2} \times \left\{ \bear{cc} q_e^2 /m_e^4&~~~{\rm for~~}q_e \ne 0,\\ &\\ q_\mu^2 /m_\mu^4&~~~{\rm for~~}q_e = 0 \eear \right. \end{eqnarray} where we have approximated $m_e^2\ll m_\mu^2 \ll m_\tau^2$ and assumed that there is no large hierarchy among the U(1)$_F$ charges. Assuming that the PNGB decays mainly into photons via the above interaction, we can estimate the lifetime as \bea \tau_{a \to \gamma \gamma} &\simeq& \left\{ \bear{ll} \displaystyle{ 2.9 \times 10^{28}\, q_e^{-2} \lrfp{m_a}{7 {\rm keV}}{-7} \lrfp{f_a}{10^{10}\,{\rm GeV}}{2} {\rm sec. }}&~~~{\rm for~~}q_e \ne 0\\ \displaystyle{2.1 \times 10^{28}\, q_\mu^{-2} \lrfp{m_a}{7 {\rm keV}}{-7} \lrfp{f_a}{2 \times 10^{5}\,{\rm GeV}}{2} {\rm sec.} }&~~~{\rm for~~}q_e = 0 \eear \right. \end{eqnarray} Thus the PNGB is so long-lived that it can contribute to dark matter. We will show in the next section that, in fact, the right amount of PNGBs can be produced to explain the observed dark matter abundance. The recent hint for the X-ray line at about $3.5$\,keV can be explained by dark matter with the following mass and lifetime~\cite{Bulbul:2014sua,Boyarsky:2014jta}: \bea m_{\rm DM} &\simeq&7.1 {\rm \, keV},\\ \tau_{\rm DM} &\simeq& 4 \times 10^{27} - 4 \times 10^{28}\, {\rm sec}, \label{tau-obs} \end{eqnarray} if it decays into a pair of photons. Therefore, the $3.5$\,keV X-ray line can be explained by the decay of the PNGB dark matter with $m_a \simeq 7$\,keV and $f_a/q_e = 4\times 10^{9} - 1\times 10^{10}$\,GeV for $q_e \ne 0$, or $f_a/q_\mu = 9 \times 10^{4} - 3\times 10^{5}$\,GeV for $q_e = 0$ and $q_\mu \ne 0$.
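The quoted lifetimes can be reproduced numerically from the decay-rate formula above (natural units, $\hbar \simeq 6.582\times 10^{-25}$\,GeV$\cdot$s; as in the approximation above, only the lightest charged lepton coupled to the PNGB is kept):

```python
import math

HBAR_GEV_S = 6.582e-25          # hbar in GeV * s
ALPHA = 1.0 / 137.036           # fine-structure constant
M_E, M_MU = 5.11e-4, 0.10566    # lepton masses in GeV

def tau_a_to_gamma_gamma(m_a, f_a, q, m_lepton):
    """Lifetime of a -> gamma gamma from
    Gamma = alpha^2/(9216 pi^3) * m_a^7 / f_a^2 * q^2 / m_lepton^4 (GeV units)."""
    gamma = (ALPHA**2 / (9216.0 * math.pi**3)
             * m_a**7 / f_a**2 * q**2 / m_lepton**4)
    return HBAR_GEV_S / gamma

# 7 keV PNGB: q_e = 1 with f_a = 1e10 GeV gives ~2.9e28 s, and the q_e = 0
# case (q_mu = 1, f_a = 2e5 GeV) gives ~2.1e28 s -- both around the
# 4e27 - 4e28 s window favored by the 3.5 keV line.
tau_e = tau_a_to_gamma_gamma(7e-6, 1e10, 1.0, M_E)
tau_mu = tau_a_to_gamma_gamma(7e-6, 2e5, 1.0, M_MU)
```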
Interestingly, a relatively small decay constant below the intermediate scale is needed because of the suppression factor for the threshold corrections. This should be contrasted to the fact that the observed X-ray flux can also be explained by the string axion with a decay constant of order $10^{14-15}$\,GeV~\cite{Higaki:2014zua}. \section{PNGB dark matter} \label{sec:3} A light PNGB contributes to dark matter, if it is sufficiently long-lived. In order to explain the observed dark matter density, the right amount of PNGBs needs to be produced in the early Universe. There are two important production processes. One is non-thermal production by the initial misalignment mechanism, and the other is thermal production.\footnote{The production of PNGB dark matter was recently studied in Ref.~\cite{Jaeckel:2013uva}.} We will consider these production processes in turn. The PNGB number density to entropy ratio can be written as \bea Y_a &\simeq& 6 \times 10^{-5} \lrfp{m_a}{7 {\rm keV}}{-1} \lrf{\Omega_a h^2}{0.12}, \end{eqnarray} where $\Omega_a$ is the density parameter for the PNGB and $h$ is the reduced Hubble constant. On the other hand, if the PNGB is in equilibrium, its abundance is given by \bea Y_a^{\rm (eq)} &\simeq&2.6 \times 10^{-3} \lrfp{g_*}{106.75}{-1}, \end{eqnarray} where $g_*$ counts the relativistic degrees of freedom in thermal plasma. Therefore, if the PNGBs constitute the observed dark matter, they should not be in equilibrium; otherwise there must be late-time entropy dilution by a factor of $40$ for $m_a \simeq 7$\,keV. Let us first consider the case of $q_e \ne 0$. In this case, the decay constant suggested by the observed X-ray line is $f_a/q_e = 4\times 10^{9} - 1\times 10^{10}$\,GeV. The thermal production process depends on the charge of $\tau$. If the PNGB is directly coupled to $\tau$, the main production process will be through scatterings between leptons and Higgs bosons such as $\ell_3 H^* \to a \tau_R$.
The abundance is roughly estimated as follows \bea Y_a^{{\rm (th)}} &\sim&6 \times 10^{-5} \lrfp{g_*}{106.75}{-1} \lrf{T_R}{10^{6}\,{\rm GeV}} \lrfp{f_a}{10^{10}\,{\rm GeV}}{-2}, \end{eqnarray} where $T_R$ is the reheating temperature. Thus, the right amount of PNGBs is thermally produced for $T_R \sim 10^{6}$\,GeV and $f_a \sim 10^{10}$\,GeV. Alternatively, if the PNGB is not directly coupled to $\tau$, the abundance is suppressed by $\sim (m_\mu/m_\tau)^2$ and given by \bea Y_a^{{\rm (th)}} &\sim&2 \times 10^{-5} \lrfp{g_*}{106.75}{-1} \lrf{T_R}{10^{8}\,{\rm GeV}} \lrfp{f_a}{10^{10}\,{\rm GeV}}{-2}. \end{eqnarray} In this case successful thermal leptogenesis may be possible~\cite{Fukugita:1986hr}, with a mild degeneracy among the right-handed neutrinos. Note that the thermally produced PNGBs of $7$\,keV mass behave as warm dark matter because of their non-negligible free streaming. The PNGBs can also be produced by the initial misalignment mechanism. The PNGB starts to oscillate when the Hubble parameter becomes comparable to the mass $m_a$. In the radiation dominated Universe, this happens when $T \sim 2 \times 10^{6}\,{\rm GeV}\, (m_a/7\,{\rm keV})^{1/2}$. Therefore, for $T_R \lesssim 10^{6}$\,GeV, the oscillation starts before reheating, and the PNGB abundance is given by \bea Y_a^{\rm (mis)} &\sim& 3 \times 10^{-7} \lrf{T_R}{10^{6}\,{\rm GeV}} \lrfp{m_a}{7\,{\rm keV}}{-1} \lrfp{f_a}{10^{10}\,{\rm GeV}}{2} \theta_*^2, \end{eqnarray} where $\theta_* \equiv a_{\rm ini}/f_a$ denotes the initial oscillation amplitude. If the U(1)$_F$ symmetry is spontaneously broken after inflation, we should replace $\theta_*$ with its averaged value, $\sqrt{\langle \theta_*^2\rangle} = \pi/\sqrt{3}$.\footnote{ Recently, the BICEP2 experiment found the primordial B-mode polarization, implying that the inflation scale is about $H_{\rm inf} \sim 10^{14}$\,GeV~\cite{BICEP2}.
If this is true, the global U(1)$_F$ symmetry must become spontaneously broken after inflation to avoid generating too large isocurvature perturbations. In this case, one needs to introduce extra breaking terms to avoid the cosmological catastrophe induced by domain walls. } For $T_R \gtrsim 2 \times 10^{6}$\,GeV, the abundance of PNGBs produced by the initial misalignment mechanism becomes independent of $T_R$. Therefore, the initial misalignment mechanism is subdominant compared to the thermal production for $f_a = 10^{10}$\,GeV. Note that the dependence of the abundance on $f_a$ is different between the two production processes, and that for slightly larger values of $f_a$, the initial misalignment mechanism can dominate over the thermal production. This is the case if $q_e$ is $\sim 3$ or larger. Lastly we consider the case of $q_e = 0$. In this case the preferred value of $f_a$ is about $10^{5}$\,GeV, and the thermal production always dominates over the initial misalignment mechanism unless the anharmonic effect becomes significant~\cite{Turner:1985si,Lyth:1991ub,Visinelli:2009zm,Kobayashi:2013nva}. For $T_R$ above the weak scale, the PNGBs are thermalized. For $m_\mu < T < m_\tau$, the PNGBs can be produced by scattering processes such as $\mu + \gamma \to \mu + a$ with a rate given by \bea \Gamma_{\mu + \gamma \to \mu + a} \sim \alpha_{em} \frac{m_\mu^2}{f_a^2} T, \end{eqnarray} where $T$ is the temperature. The production through the above process is most efficient at $T = m_\mu$, and the production rate exceeds the Hubble parameter at that time if \bea f_a &\lesssim& 4 \times 10^{7}\,{\rm GeV}. \end{eqnarray} Therefore, for $T_R \gtrsim m_\mu$, the PNGBs are thermalized, and we need an additional entropy dilution by a factor of $40$.\footnote{ If the PNGB mass is of ${\cal O}(0.1)$\,eV or lighter, there is no problem even if it is thermalized.
It would contribute to hot dark matter~\cite{Archidiacono:2013cha,Jeong:2013oza} or to the effective neutrino species, $\Delta N_{\rm eff} \simeq 0.39$~\cite{Weinberg:2013kea}. Their existence is favored by recent observations~\cite{Wyman:2013lza,Hamann:2013iba,Battye:2013xqa,Ade:2013zuv}. Interestingly, hot dark matter or dark radiation can relax the tension between BICEP2 and Planck. } If $T_R = {\cal O}(10)$\,MeV, it is possible to produce the right amount of PNGBs to account for the observed dark matter abundance. \section{Anomaly-free flavor model for leptons} \label{sec:4} In this section we build anomaly-free flavor models for leptons. For simplicity we focus on a case in which electrons and muons are charged under the U(1)$_F$ symmetry, while tau leptons are neutral. The extension to a more general charge assignment is straightforward. \begin{table}[t] \begin{center} \begin{tabular}{c||c|c|c|c|c|c} &$e_R$&$\mu_R$&$\tau_R$&$\ell_1$&$\ell_2$&$\ell_3$\\ \hline $Q$&$-a$&$-b$&0&$c$&$d$&0\\ \end{tabular} \end{center} \caption{The charge assignment of leptons under the global U(1)$_F$ flavor symmetry.} \label{Q} \end{table}% Let us parametrize the global U(1)$_F$ charges of $e_{i}$ and $\ell_j$ as $Q(e_i)=(-a,-b,0)$ and $Q(\ell_j)=(c,d,0)$, where $e_{i}$ and $\ell_j$ are the right-handed charged-lepton singlets and the left-handed lepton doublets, respectively, and the subindices $i,j = 1,2,3$ represent the generation. The charge assignment is also shown in Table~\ref{Q}. As long as there are no other fermions charged under both the global U(1)$_F$ and SM gauge symmetries, the absence of the SM gauge anomalies requires \bea &&a + b = 0, \\ &&c + d = 0. \end{eqnarray} In order to write down Yukawa interactions for leptons, we need Higgs fields charged under the U(1)$_F$ symmetry. Although not mandatory, let us seek a charge assignment for which the off-diagonal elements are forbidden by the U(1)$_F$ symmetry.
We introduce three Higgs doublets, $H(0)$, $H(a+c)$, and $H(-a-c)$, and require the following conditions: \bea \label{cond1} &&a \ne 0 \\ \label{cond2} && c \ne 0\\ \label{cond3} && a \ne c\\ \label{cond4} &&2a+c \ne 0\\ \label{cond5} &&a+2c \ne 0 \end{eqnarray} For any charge assignment satisfying the above conditions, the Yukawa interactions take the diagonal form, \begin{eqnarray} {\cal L} \supset y_e {\bar e}_R \ell_1 H(-a-c)+ y_\mu {\bar \mu}_{R} \ell_2 H(a+c) + y_\tau {\bar \tau}_{R} \ell_3 H(0)+{\rm h.c.} \end{eqnarray} Let us normalize the global U(1)$_F$ charge so that $c = 1$. Then the above conditions from (\ref{cond1}) to (\ref{cond5}) read $a \ne 0, 1,-2$, and so, the allowed integer values of $a$ are $a=-1, +2, \pm3, \cdots$. Let us take up the first two cases, namely, $a=-1$ and $a=2$. \subsection{Case of $a=-1$} In this case the global U(1)$_F$ symmetry is identical to $L_e - L_\mu$, and there is only one Higgs doublet, $H(0)$. Then the PNGB does not have direct couplings with charged leptons like \EQ{NGlepton}, as it does not reside in the phase of $H(0)$. Let us extend the SM to include right-handed neutrinos $N_i$. If we assign the global U(1)$_F$ charges as $Q(N_i) = (1,-1,0)$, the neutrino Yukawa interaction is diagonal: \begin{eqnarray} {\cal L}\;\supset\; y_{i}^N {\bar N}_i \ell_i {\tilde H}(0) + {\rm h.c.} \end{eqnarray} where ${\tilde H}(0) = i \sigma_2 H(0)^*$. The observed large neutrino mixings can be explained if the Majorana mass matrix for $N_i$ contains large off-diagonal elements. To this end we introduce U(1)$_{B-L}$ gauge symmetry and the $B-L$ Higgs fields, $\phi(0)$, $\phi(\pm1)$, and $\phi(\pm2)$, where the numbers in the parentheses represent the U(1)$_F$ charge and they are assumed to have a common $B-L$ charge $+2$.
If these $B-L$ Higgs fields develop non-zero vacuum expectation values (VEVs), the U(1)$_F$ symmetry is spontaneously broken, and the Majorana mass matrix for $N_i$ is induced as \bea -{\cal L} \supset \frac{1}{2} (M_{N})_{ij} {\bar N}^c_i N_j + {\rm h.c.} \end{eqnarray} with \begin{eqnarray} M_N \sim \left( \bear{ccc} \phi(-2)&\phi(0)&\phi(-1)\\ \phi(0)&\phi(2)&\phi(1)\\ \phi(-1)&\phi(1)&\phi(0) \eear \right), \end{eqnarray} where the $B-L$ Higgs fields are understood to represent their VEVs, and we have dropped ${\cal O}(1)$ numerical coefficients in each element. If the VEVs are comparable to each other, large neutrino mixing angles are realized. The light neutrino masses can be explained by the seesaw mechanism~\cite{seesaw}. The PNGB resides in the phase of $\phi(1)$ and $\phi(2)$, and the decay constant $f_a$ is approximately given by their VEVs. In fact, the PNGB in this case is similar to the majoron. The cosmological constraints on majoron dark matter were studied in e.g. Ref.~\cite{Lattanzi:2013uza}. One can also introduce the Higgs portal couplings $\sim |\phi|^2 |H|^2$. The situation would be similar to the model proposed by Weinberg~\cite{Weinberg:2013kea}. For a certain set of parameters, massless PNGBs would contribute to the effective neutrino species, $\Delta N_{\rm eff}$. \subsection{Case of $a=+2$} In this case the charge assignment is $Q(e_{i}) = (-2,2,0)$ and $Q(\ell_i)=(1,-1,0)$, and there are three Higgs doublets, $H(0)$ and $H(\pm3)$. The charged lepton Yukawa interactions are given by \begin{eqnarray} {\cal L}\;\supset\; y_e {\bar e}_R \ell_1 H(-3) +y_\mu {\bar \mu}_R \ell_2 H(3) +y_\tau {\bar \tau}_R \ell_3 H(0) + {\rm h.c.} \end{eqnarray} The previous argument on the neutrino Yukawa interaction and the right-handed neutrino mass matrix can be applied to the present case, and the observed large neutrino mixing as well as the neutrino mass scale can be similarly explained.
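As a cross-check of the two cases above, the claim that conditions (\ref{cond1})--(\ref{cond5}) forbid every off-diagonal charged-lepton Yukawa entry can be verified by brute force. The following sketch is our own illustration (not part of the model), scanning integer charges with the normalization $c=1$:

```python
from itertools import product

def conditions_hold(a, c):
    """Conditions (cond1)-(cond5) of the text."""
    return (a != 0 and c != 0 and a != c
            and 2 * a + c != 0 and a + 2 * c != 0)

def allowed_yukawa_entries(a, c):
    """Indices (i, j) for which  ebar_i l_j H  can be U(1)_F invariant.

    Anomaly freedom fixes b = -a and d = -c, so Q(e_R) = (-a, a, 0) and
    Q(l) = (c, -c, 0); the available Higgs charges are {0, a+c, -(a+c)}.
    The entry (i, j) needs a Higgs of U(1)_F charge Q(e_i) - Q(l_j).
    """
    Qe = (-a, a, 0)
    Ql = (c, -c, 0)
    higgs = {0, a + c, -(a + c)}
    return {(i, j) for i, j in product(range(3), repeat=2)
            if Qe[i] - Ql[j] in higgs}

# With the normalization c = 1, the conditions exclude exactly a = 0, 1, -2;
# whenever they hold, only the diagonal Yukawa entries survive.
diagonal = {(0, 0), (1, 1), (2, 2)}
for a in range(-10, 11):
    if conditions_hold(a, 1):
        assert allowed_yukawa_entries(a, 1) == diagonal
```

For $a=-1$ the available Higgs charges collapse to $\{0\}$, while for $a=2$ they are $\{0,\pm 3\}$; in both cases the scan confirms that only the diagonal entries are allowed.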
The global U(1)$_F$ symmetry is spontaneously broken by both the Higgs doublets and the $B-L$ Higgs fields. We assume that the symmetry breaking scale is of order $10^{10}$\,GeV (or smaller). Then, while the PNGB resides mainly in the phase of $\phi(1)$ and $\phi(2)$, it also appears in the phase of $H(3)$ and $H(-3)$, and so, electrons and muons are coupled to the PNGB at low energies as in \EQ{NGlepton}. Since the PNGB does not have (sizable) couplings with gluons, photons, and quarks, the astrophysical bounds are considered to be rather weak. A couple of comments are in order. In order to give a mass to the PNGB, one needs an explicit U(1)$_F$ symmetry breaking. It is interesting to note that the following term \bea {\cal L}_{\rm breaking} &=& m^2 H(-3)^\dag H(3) + {\rm h.c.} \end{eqnarray} breaks the U(1)$_F$ symmetry down to the subgroup $Z_6$, giving rise to a PNGB mass $m_a \sim {\cal O}(1)$\,keV for $m \sim \langle H(-3) \rangle \sim \langle H(3) \rangle \sim 10^{2}$\,GeV and $f_a = {\cal O}(10^{10})$\,GeV. Therefore, the PNGB associated with the anomaly-free flavor symmetry broken at $f_a = {\cal O}(10^{10})$\,GeV nicely explains both the mass and the lifetime suggested by the observed $3.5$\,keV X-ray line. The off-diagonal elements of the charged lepton Yukawa matrix receive non-zero contributions, as the U(1)$_F$ symmetry is spontaneously broken. Their contributions to the lepton-flavor violating processes, however, are negligible in our model. In the presence of the $B-L$ Higgs fields, there are in general mixings between $H$ and $\phi$. Such mixings are assumed to be small in our context to keep the hierarchy between the weak scale and the flavor symmetry breaking scale. Also we assume that the lightest Higgs has properties similar to those of the SM Higgs and that the other Higgs fields are so heavy that they evade current collider searches. Some of them, however, may be within the reach of the LHC and/or the ILC.
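The keV-scale mass quoted above follows from simple power counting: ${\cal L}_{\rm breaking}$ generates a potential of order $m^2 \langle H \rangle^2$ in the PNGB direction, so $m_a \sim m \langle H \rangle / f_a$ up to ${\cal O}(1)$ angular factors. A quick numerical check of this estimate (our own, with all ${\cal O}(1)$ factors dropped):

```python
# Power-counting estimate (ours; all O(1) angular factors dropped) of the
# PNGB mass generated by  L_breaking = m^2 H(-3)^dag H(3) + h.c.:
# the induced potential is ~ m^2 <H>^2 cos(...), hence m_a ~ m <H> / f_a.
m_GeV  = 1e2    # m ~ <H(3)> ~ <H(-3)> ~ 10^2 GeV
vH_GeV = 1e2
fa_GeV = 1e10   # f_a = O(10^10) GeV

m_a_keV = (m_GeV * vH_GeV / fa_GeV) * 1e6   # 1 GeV = 10^6 keV
print(f"m_a ~ {m_a_keV:g} keV")             # ~ 1 keV, as quoted above
```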
\section{Discussion and conclusions} \label{sec:5} Some comments and discussions are in order. In the case of $q_e \ne 0$, $f_a = {\cal O}(10^{9-10})$\,GeV is needed to explain the $3.5$\,keV X-ray line. Since the couplings to photons, gluons and nucleons are suppressed, the PNGBs avoid various astrophysical and ground-based constraints. Still, it may be possible to find them in the future. Interestingly, there is a hint for an extra cooling of white dwarfs, which can be explained by light PNGBs coupled to electrons with the decay constant in this range~\cite{Isern:1992gia}.\footnote{ In Ref.~\cite{Isern:1992gia}, the QCD axion was considered, and so, the cooling rate due to the $7$\,keV axion can be much smaller for the same decay rate. There may be another PNGB, if the flavor symmetry group is larger than U(1)$_F$. } If such light PNGBs are coupled with electrons but not with photons, it is possible that they are copiously produced in the Sun, but cannot be detected by experiments using magnetic fields, such as the CAST experiment~\cite{Barth:2013sma}. In the case of $q_e = 0$ and $q_\mu \ne 0$, the preferred value of $f_a$ is of order $10^{5}$\,GeV, much smaller than in the previous case. Still, the effective PNGB coupling to the photon is so weak that the constraint from the cooling of horizontal branch stars can be satisfied~\cite{Raffelt:1996wa}. On the other hand, the bound from supernova cooling is more non-trivial since the PNGB couples to muons directly and the muons might be abundant in the supernova core~\cite{Canuto:1975kr,Colucci:2013pya}. Although the muon abundance depends sensitively on the temperature, the preferred value of $f_a$ may be in tension with the observation. As a rough estimate, we refer to the constraint on the majoron coupling constants to neutrinos from supernova cooling: it is bounded as $g_{ee} \lesssim 10^{-6}$, where $g_{ee}$ is the Yukawa coupling between the majoron and electron neutrinos~\cite{Farzan:2002wx}.
In our case, the effective coupling constant between the PNGB and the muon reads $m_{\mu}/f_a \sim 10^{-6}$. A more detailed study is needed to test the viability of this model. So far we have considered the U(1)$_F$ flavor symmetry, under which only leptons are charged, and we constructed models in which the lepton mass matrix is (almost) diagonal. It is possible to extend the models to allow larger off-diagonal terms, or to extend the flavor symmetry to the quark sector, by enlarging the flavor symmetry and adding more Higgs fields. If the actual flavor symmetry group is larger than U(1)$_F$ and if it is broken at a scale of ${\cal O}(10^{9-10})$\,GeV, there may be more PNGBs with different masses, with or without couplings to photons and/or gluons. Then it may be possible to provide a unified picture of the QCD axion as well as other PNGBs. In this case the light PNGBs can be searched for in flavor-changing processes such as $\tau \to \mu + a$, $\mu^+ \to e^+ + a$, $K^+ \to \pi^+ + a$~\cite{Gelmini:1982zz}. We have pursued the possibility that a PNGB is lurking below the intermediate scale, evading the astrophysical bounds. Along these lines we have proposed flavor models based on an anomaly-free U(1)$_F$ symmetry, where the PNGB is preferentially coupled to the leptons. In particular, its anomalous couplings to gluons and photons are absent, greatly relaxing the astrophysical bounds. We have also pointed out that, although suppressed, the PNGB coupling to photons is induced by threshold corrections. Interestingly, the recent hint for the X-ray line at about $3.5$\,keV~\cite{Bulbul:2014sua,Boyarsky:2014jta} can be explained by PNGB dark matter with $m_a \simeq 7$\,keV for the decay constant $f_a = 10^{9-10}$\,GeV ($f_a = 10^{5-6}$\,GeV) if electrons are (not) charged under the flavor symmetry. \section*{Acknowledgments} This work was supported by the Grant-in-Aid for Scientific Research on Innovative Areas (No.
21111006 [KN and FT], No.23104008 [FT], No.24111702 [FT]), Scientific Research (A) (No. 22244030 [KN and FT], 21244033 [FT], 22244021 [TTY]), JSPS Grant-in-Aid for Young Scientists (B) (No.24740135) [FT], and Inoue Foundation for Science [FT]. This work was also supported by World Premier International Center Initiative (WPI Program), MEXT, Japan.
\section{Introduction} \label{intro} Fractional quantum Hall (FQH) fluids have long constituted the best known paradigm of strongly-correlated topological systems \cite{Jain}. Nonetheless, several fundamental issues remain unresolved. These include the exact mechanism leading to the quasiparticle (or fractional electron) excitations and viable universal signatures in edge transport that are rooted in the topological characteristics of the bulk FQH fluid. This state of affairs is partially due to a dearth of rigorous microscopic approaches capable of dealing with these highly entangled systems. A case in point is a first-principles computation of the quasielectron exchange statistics. In \cite{BCTONS}, the {\it Entangled Pauli Principle} (EPP) was advanced as an organizing principle for FQH ground states. The EPP provides information about the pattern of entanglement of the complete subspace of zero-energy modes, i.e., ground states, of quantum Hall parent Hamiltonians for both Abelian and non-Abelian fluids. Those states are generated from the so-called ``DNA'' \cite{BCTONS}, or root patterns \cite{Bernevig2008,ONDS}, which encode the elementary topological characteristics of the fluid. In this work we advance second-quantization many-body techniques that allow for new fundamental insights into the nature of quasiparticle excitations of FQH liquids. In particular, we present an exact fractionalization procedure that allows for a very natural fusion mechanism of quasiparticle generation. We determine the quasihole and quasiparticle operators that explicitly flesh out Laughlin's flux insertion/removal mechanism and provide the associated quasielectron wave function. The quasielectron that we find differs from Laughlin's original proposal \cite{laughlin}. We determine the Berry connection of this quasielectron wave function, considered as an Ehresmann connection on a principal fiber bundle, and as a result a natural fusion mechanism unfolds.
This, in turn, leads to the exact determination of the quasielectron fractional charge. We perform Monte Carlo simulations to numerically confirm this fusion mechanism of fractionalization. In addition, we introduce an unequivocal diagnostic for characterizing and detecting the topological order of the FQH fluid in terms of a {\it condensation of a non-local operator} and present a constructive subspace bosonization (fermionization) dictionary for the bulk fluid that highlights the topological nature of the underlying theory. Our organizing EPP and the corresponding fluid's DNA encode universal features of the bulk FQH state and its edge excitations. Here we formulate a conjecture that enables a demonstration of the universal long-distance behavior of edge excitations in weak confining potentials. This is based on the exact computation of the edge Green's function over the DNA or root state of the topological fluid. Although our main results are derived in a field-theoretical manner, we will reformulate some of our conclusions in a first quantization language, where states become wave functions. For clarity, we will occasionally use a mixed representation. \section{States and Operator Algebra \\ in the LLL} \label{OperatorAlgebra} The LLL is spanned by single-particle orbitals $\phi_r(x,y)$ whose functional form depends on geometry \cite{ONDS}. We consider genus zero manifolds such as those of the disk and the cylinder. Lengths are measured in units of the magnetic length $\ell=\sqrt{\frac{\hbar c}{|e| B}}$, where $B$ is the magnetic field strength, $\hbar$ the reduced Planck constant, $c$ the speed of light, and $|e|$ the magnitude of the elementary charge. For ease of presentation, we will primarily focus on the disk geometry \footnote{One can always apply a similarity transformation to map to the cylinder \cite{ONDS}.}. 
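As a numerical aside (the text sets $\ell = 1$ throughout), the magnetic length can be evaluated in SI units, where $\ell = \sqrt{\hbar/(|e|B)}$; at laboratory fields it comes out to tens of nanometers:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
E    = 1.602176634e-19  # elementary charge, C

def magnetic_length_nm(b_tesla):
    """Magnetic length l = sqrt(hbar / (|e| B)) in nanometers (SI units)."""
    return math.sqrt(HBAR / (E * b_tesla)) * 1e9

# l ~ 25.7 nm at 1 T and shrinks as 1/sqrt(B)
print(magnetic_length_nm(1.0), magnetic_length_nm(10.0))
```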
Then, $\phi_r(z=x+ {\sf i} y)= z^r/\mathcal{N}_r$, $\mathcal{N}_r=\sqrt{2\pi 2^r r!}$, with $r\ge 0$ a non-negative integer labeling the angular momentum and $z \in \mathbb{C}$ \footnote{Normalization is defined as $\int {\cal D}[z] (z^*)^r z^{r'} ={\cal N}_r^2 \delta_{r,r'}$ with ${\cal D}[z]=d^2z \, e^{-\frac{1}{2} |z|^2}$, $z^*=x-{\sf i} y$, and magnetic unit length $\ell=1$.}. $N$-particle states (elements of the Hilbert space $\mathcal{H}_{\mathrm{LLL}}$) belong to either the totally symmetric (bosons) or anti-symmetric (fermions) representations of the permutation group $S_N$. Whenever results apply to either representation, we use second-quantized creation (annihilation) $a_r^\dagger$ ($a_r^{\;}$) operators instead of the usual $c_r^\dagger$ ($c_r^{\;}$), $b_r^\dagger$ ($b_r^{\;}$), for fermions and bosons, respectively. The field operator \begin{eqnarray} \Lambda^{\;}(z)=\sum_{r\ge 0} \phi_r(z) a_r^{\;}, \end{eqnarray} and its adjoint $\Lambda^\dagger(z)$ satisfy canonical (anti-)commutation relations: $[\Lambda^{\;}(z),\Lambda^{\;}(z')]_\pm=0$, $[\Lambda^{\;}(z),\Lambda^{\dagger}(z')]_\pm=\{z'|z\}$, where $\{z'|z\}=\frac{1}{2\pi}e^{\frac{zz'^\ast}{2}}$ is a bilocal kernel satisfying $\int \mathcal{D}[z]\, \Lambda(z)\{z|z'\}=\Lambda(z')$ \cite{stone1}. Many-particle states $|\psi\rangle \in \mathcal{H}_{\mathrm{LLL}}$ are characterized by the number of particles $N$ and the maximal occupied orbital $r_\mathrm{max}$, defining a filling factor $\nu=(N-1)/r_{\mathrm{max}}$. Given an antisymmetric holomorphic function $\psi$, one can construct the states \begin{eqnarray} |\psi\rangle= \int \big( \prod\limits_{i=1}^N \mathcal{D}[z_i]\big) \psi(z_1,...,z_N)\Psi^\dagger(z_1) \dots \Psi^\dagger(z_N) |0\rangle, \nonumber \end{eqnarray} in terms of the fermionic field operators $\Psi(z)$. Similarly, one can construct states for bosons in terms of permanents and field operators $\Phi(z)$. 
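The normalization constants ${\cal N}_r$ can be checked directly: in polar coordinates, $\int d^2z \, e^{-\frac{1}{2}|z|^2} |z|^{2r} = 2\pi \int_0^\infty \rho^{2r+1} e^{-\rho^2/2} d\rho = 2\pi\, 2^r r! = {\cal N}_r^2$. A simple numerical quadrature (our own sanity check) confirms the first few values:

```python
import math

def norm_sq_numeric(r, rho_max=30.0, n=100_000):
    """Midpoint quadrature of 2*pi * int_0^inf rho^(2r+1) exp(-rho^2/2) drho,
    i.e. int d^2z exp(-|z|^2/2) |z|^(2r) in polar coordinates."""
    drho = rho_max / n
    total = 0.0
    for k in range(n):
        rho = (k + 0.5) * drho
        total += rho ** (2 * r + 1) * math.exp(-0.5 * rho * rho)
    return 2.0 * math.pi * total * drho

# Compare with N_r^2 = 2*pi * 2^r * r! for the first few orbitals.
for r in range(6):
    exact = 2.0 * math.pi * 2 ** r * math.factorial(r)
    assert abs(norm_sq_numeric(r) / exact - 1.0) < 1e-6
```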
We now introduce the operator algebra necessary for the LLL operator fractionalization and constructive bosonization. We first review the operator equivalents of the multivariate power-sum, $p_d(z)$, and elementary, $s_d(z)$, symmetric polynomials ($d\ge 0$). As shown in \cite{MONS}, these are, respectively, given by \begin{eqnarray} \mathcal{O}_d&=&\sum_{r\ge 0}\bar a_{r+d}^\dagger \bar a^{\;}_r \ , \ \ \mbox{and } \nonumber \\ e_d&=&\frac{1}{d!} \!\! \sum_{r_1,\dots ,r_d\ge0} \!\!\!\! \bar a_{r_1+1}^\dagger \dots \bar a_{r_d+1}^\dagger \bar a^{\;}_{r_d}\dots \bar a^{\;}_{r_1} \end{eqnarray} (with $\bar a_r^\dagger=\mathcal{N}_r a_r^\dagger$, $\bar a_r^{\;}=\mathcal{N}_r^{-1} a_r^{\;}$). The operator Newton-Girard relations $de_d +\sum_{k=1}^d (-1)^k \mathcal{O}_k e_{d-k}=0$ (with $e_0=\mathds{1}$) link these operators with each other. The second-quantized extensions of the Newton-Girard relations are similar to dualities \cite{CONPRL,CON,dual^2} in that applying them twice in a row yields back the original operators. Interestingly, the operators $\mathcal{O}_d$ can be expressed in terms of Bell polynomials in $e_d$'s (Appendix \ref{sec:OA}). Consequently, any quantity expressible in terms of $\mathcal{O}_d$'s can be also written in terms of $e_d$'s and vice versa. Both the $\mathcal{O}_d$ and $e_d$ operators generate the same commutative algebra $\mathsf{A}$. Furthermore, they satisfy the commutation relations $[\mathcal{O}_d,\bar a_r]_-=-\bar a_{r-d}$, $[\mathcal{O}_d, \bar a_r^\dagger]_-=\bar a^\dagger_{r+d}$ and $[e_d,\bar a^\dagger_r]_-=\bar a_{r+1}^\dagger e_{d-1}$, $[e_d,\bar a_r]_-=-e_{d-1}\bar a_{r-1}$. A set of first-quantized symmetric operators, of relevance to Laughlin's quasielectron and conformal algebras, involves derivatives in $z$. 
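Before passing to their operator extensions, the scalar Newton--Girard identity $d\, e_d + \sum_{k=1}^{d}(-1)^k p_k\, e_{d-k} = 0$ can be sanity-checked numerically on sample values (our own check, not part of the derivation):

```python
from itertools import combinations
from functools import reduce

def e_poly(d, zs):
    """Elementary symmetric polynomial e_d(z_1, ..., z_N)."""
    if d == 0:
        return 1.0
    return sum(reduce(lambda x, y: x * y, c) for c in combinations(zs, d))

def p_poly(k, zs):
    """Power sum p_k(z_1, ..., z_N)."""
    return sum(z ** k for z in zs)

# Newton-Girard:  d*e_d + sum_{k=1}^{d} (-1)^k p_k e_{d-k} = 0
zs = [0.3, -1.7, 2.4, 0.9, -0.5]
for d in range(1, len(zs) + 1):
    lhs = d * e_poly(d, zs) + sum(
        (-1) ** k * p_poly(k, zs) * e_poly(d - k, zs) for k in range(1, d + 1))
    assert abs(lhs) < 1e-9
```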
Similar to the operators defined above, we introduce symmetric polynomials $p_d(\partial_z)$ and $s_d(\partial_z)$ whose second-quantized representations are \begin{eqnarray} \mathcal{Q}_d&=&\sum_{r>d}r(r-1)\dots(r-d)\bar a_{r-d}^\dagger \bar a^{\;}_r \ , \ \mbox{ and} \nonumber \\ f_d&=&\frac{1}{d!}\sum_{r_1,\dots ,r_d>0} r_1 \dots r_d \, \bar a_{r_1-1}^\dagger \dots \bar a_{r_d-1}^\dagger \bar a^{\;}_{r_d}\dots \bar a^{\;}_{r_1} , \nonumber \end{eqnarray} and are Newton-Girard-related, $d f_d +\sum_{k=1}^d (-1)^k \mathcal{Q}_k f_{d-k}=0$, with $f_0=\mathds{1}$. One can, analogously, define operators mixing polynomials and derivatives as in the positive ($d,d'\geq 0$) Witt algebra $[\ell_d,\ell_{d'}]_-=(d-d')\ell_{d+d'}$. These Witt algebra generators are $\ell_d=-\sum_{i=1}^N z_{i}^{d+1}\partial_{z_i}$. Their second-quantized version is $\hat{\ell}_d=-\sum_{r\ge 0}r \, \bar a_{r+d}^\dagger \bar a^{\;}_r$. Physically, the operators ${\cal O}_d, e_d, \hat \ell_d$ (${\cal Q}_d, f_d$) increase (decrease) the total angular momentum or ``add (subtract) fluxes''. Rigorous mathematical proofs appear in Appendix \ref{sec:OA}. Symmetric operators stabilizing incompressible FQH fluids as their eigenvector with lowest eigenvalue are known as ``parent FQH Hamiltonians''. The EPP \cite{BCTONS,BONS} is an organizing principle for generating both Abelian and non-Abelian FQH states as zero modes (ground states) of frustration-free positive-semidefinite microscopic Hamiltonians. The Hamiltonian stabilizing Laughlin states of filling factor $\nu=1/M$, with $M$ a positive integer, is $H_{M}=\sum_m H_m$. Here $H_m$ are the Haldane pseudopotentials and the sum is performed over all $0\le m<M$ sharing the (even/odd) parity of $M$. 
As demonstrated in \cite{ONDS}, $H_m=\sum_{0<j<r_{\rm max}} T_{j,m}^{+}T_{j,m}^{-}$, where $T_{j,m}^{-}=\sum_{k} \eta_{k}(j,m) a_{j-k}a_{j+k}=(T_{j,m}^{+})^\dagger$ with $j=\frac{1}{2},1,\frac{3}{2},\dots,r_{\rm max}-\frac{1}{2}$ ($2k$ shares the parity of $2j$), and $\eta_{k}(j,m)$ are geometry-dependent form factors. For odd (even) $m$, the operator $a_r=c_r$ ($b_r$). The space $\mathcal{Z}_M$ of all zero modes of $H_M$ is generated by the states $|\psi\rangle\in \mathcal{H}_{\mathrm{LLL}}$ satisfying $T_{j,m}^{-}|\psi\rangle=0$. This space contains the Laughlin state $|\psi_{M}^N\rangle$ as its minimal total angular momentum, $J=MN(N-1)/2$, state. All other zero modes are obtained by the action of some linear combination of products of $\mathcal{O}_d$, equivalently $e_d$, operators onto $|\psi_{M}^N\rangle$ \cite{ONDS,MONS}. Inward squeezing is an angular-momentum preserving operation generated by \begin{eqnarray} A^{d}_{r,r'}=\bar a_{r}^\dagger \bar a_{r'}^\dagger \bar a^{\;}_{r'+d}\bar a^{\;}_{r-d}, \ r\le r' , \mbox{ and } d>0 , \end{eqnarray} whose multiple actions on the root partition $|\widetilde{\psi}_{M}^N\rangle= \prod_{i=1}^N \bar a_{M(N-i)}^\dagger|0\rangle$ generate all occupation number eigenstates $|\lambda\rangle$ in the expansion of Laughlin state $|\psi_{M}^N\rangle=|\widetilde{\psi}_{M}^N\rangle +\sum_{\lambda}C_\lambda |\lambda\rangle$, with integers $C_\lambda$ \cite{ONDS,MONS}. By angular momentum conservation, $\langle \psi_{M}^N|\bar a_r^\dagger \bar a^{\;}_s|\psi_{M}^N \rangle =\alpha (r) \delta_{r,s} \|\psi_{M}^N\|^2$. In the thermodynamic limit ($N,r_{\rm max} \rightarrow \infty$ such that $\nu$ remains constant) $\alpha=N/(r_{\max}+1)\rightarrow \nu$. \section{Operator Fractionalization and Topological Order} \label{OperatorFractionalization} Our next goal is to construct second-quantized quasihole and quasiparticle operators. 
Fractionalization, following Laughlin's insertion/removal of magnetic fluxes, is the notion behind that construction. Repeating this procedure $M$ times should yield an object with quantum numbers corresponding to a hole or a particle. Surprisingly, as we will show, the case of quasielectron excitations does not coincide with Laughlin's proposal (nor with other proposals). As a byproduct, we will obtain a compact representation of Laughlin states (bosonic and fermionic) that emphasizes a sort of {\it condensation} of a non-local quantity related to the topological nature of the FQH fluid. As shown in \cite{MONS}, the second-quantized version of the quasihole operator $U_N(\eta)=\prod_{i=1}^N(z_i-\eta)$, $\eta\in \mathbb{C}$, is $\widehat{U}_N(\eta)=\sum_{d=0}^N(-\eta)^{N-d}e_d$, and satisfies $[\mathcal{O}_d, \widehat{U}_N(\eta)]_-=0$. Moreover \cite{MONS}, $\widehat{U}_N (\eta)\bar a_r^\dagger =-\eta \bar a_r^\dagger \widehat{U}_N (\eta) +\bar a_{r+1}^\dagger \widehat{U}_N (\eta)$ and $\bar a_r\widehat{U}_N(\eta)=-\eta \widehat{U}_N(\eta) \bar a_r +\widehat{U}_N(\eta) \bar a_{r-1}$ (see Appendix \ref{sec:OB}). The action of the quasihole operator on the field operator is given by \footnote{We remark that in \cite{stone1} orbitals $\phi_r(z)$ include Gaussian factors in contrast to our convention. That implies a change in the differential operator to $D^{(z)}=2\partial_{z^\ast}+\frac{1}{2}z$.} \begin{eqnarray} \widehat{U}_N(\eta) \Lambda^\dagger(z)\!&=&\!-\eta \Lambda^\dagger(z) \widehat{U}_N(\eta)+\! \sum_{r\ge 1} \mathcal{N}_{r-1}^{-1}\phi_{r-1}^\ast(z) \bar a_r^\dagger \widehat{U}_N(\eta) \nonumber \\ &=& (D^{(z)}-\eta)\Lambda^\dagger(z) \widehat{U}_N(\eta), \end{eqnarray} where $D^{(z)}=2\partial_{z^\ast}.$ The latter operator identity can be replaced by $\widehat{U}_N(\eta)\Lambda^\dagger(z)\overset{\int}{=} \left(z-\eta\right)\Lambda^\dagger(z)\widehat{U}_N(\eta)$, where the symbol $\overset{\int}{=}$ stresses validity following a $z$ integration \cite{stone1}.
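The expansion $\widehat{U}_N(\eta)=\sum_{d=0}^N(-\eta)^{N-d}e_d$ mirrors the classical polynomial identity $\prod_{i=1}^N(z_i-\eta)=\sum_{d=0}^N(-\eta)^{N-d}e_d(z_1,\dots,z_N)$, which is easy to confirm numerically (our own illustration):

```python
from itertools import combinations
from functools import reduce
import random

def e_poly(d, zs):
    """Elementary symmetric polynomial e_d (complex arguments allowed)."""
    if d == 0:
        return complex(1.0)
    return sum(reduce(lambda x, y: x * y, c) for c in combinations(zs, d))

# U_N(eta) = prod_i (z_i - eta) = sum_{d=0}^{N} (-eta)^(N-d) e_d(z),
# the first-quantized shadow of U_N-hat(eta) = sum_d (-eta)^(N-d) e_d.
random.seed(1)
zs = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(6)]
eta = 0.7 - 1.2j
N = len(zs)

prod_form = reduce(lambda x, y: x * y, ((z - eta) for z in zs))
sum_form = sum((-eta) ** (N - d) * e_poly(d, zs) for d in range(N + 1))
assert abs(prod_form - sum_form) < 1e-9
```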
Having established the properties of $\widehat{U}_N$, we introduce the operator $\widehat{K}_{\mathcal{M}}(\eta)=\Lambda^\dagger(\eta) \widehat{U}_N(\eta)^\mathcal{M}$, for any positive integer $\mathcal{M}$. For odd $\mathcal{M}=M$ and $\Lambda(\eta)=\Psi(\eta)$, it agrees with Read's non-local operator for the LLL \cite{read}. One can show that for odd (even) $\mathcal{M}$, the commutator (anticommutator) $[\widehat{K}_{\mathcal{M}}(\eta),\widehat{K}_{\mathcal{M}}(\eta')]_{-}=0$ ($[\widehat{K}_{\mathcal{M}}(\eta),\widehat{K}_{\mathcal{M}}(\eta')]_{+}=0$) in the fermionic case, while opposite commutation relations hold for bosons. This is a consequence of the composite particle nature induced by the flux insertion mechanism \cite{Jain}. One can prove (Appendix \ref{sec:OC}) that the Laughlin state can be expressed as \begin{eqnarray} |\psi_{M}^N\rangle = \frac{1}{N!}{K}_{M,N-1}{K}_{M,N-2}\ldots {K}_{M,0}|0\rangle , \label{Laughlinstate} \end{eqnarray} where ${K}_{M,N}=\int \mathcal{D}[z]\, \widehat{{K}}_{M}(z)$. This indicates that the Laughlin state does not feature a local particle condensate of ${K}_{M,N}$. This impossibility is made evident by a counting argument. Each operator ${K}_{M,N}$ adds a maximum of $M N$ units of angular momentum. Thus, a condensation of these objects would lead to a state with maximum total angular momentum $M N^2$. On the other hand, a state such as \eqref{Laughlinstate} has angular momentum $\sum_{i=0}^{N-1} M i=J$, as it should. This illustrates the above-noted impossibility. The Laughlin state, however, can be understood as a condensate of non-local objects. Consider ${\mathcal{K}}_{M}= \int \mathcal{D}[z]\, \widehat{\mathcal{K}}_{M}(z)$ with $\widehat{\mathcal{K}}_{M}(z)=\Lambda^\dagger(z) \widehat{\mathcal{U}}_M(z)$, and $\widehat{\mathcal{U}}_M(z)=\sum_{N\ge 0}\widehat{U}_N(z)^M|\psi_{M}^N\rangle \langle \psi_{M}^N|$ the flux-number non-conserving quasihole operator.
Then, for both bosons and fermions, \begin{eqnarray} |\psi_{M}^N\rangle = \frac{1}{N!}{\mathcal{K}}_{M}^N |0\rangle . \end{eqnarray} Although illuminating, this representation depends on $|\psi_{M}^N\rangle$ itself through $\widehat{\mathcal{U}}_M(z)$ (Appendix \ref{sec:OC}). This condensation of non-local objects is behind the intrinsic topological order of Laughlin fluids. One can show this by studying the long-range order behavior of Read's operator \cite{read}. Before doing so, we need a result (Appendix \ref{sec:OD}) that justifies calling $\widehat{U}_N(\eta)$ the quasihole operator. Had one created $M$ quasiholes at position $\eta$, one should generate an object with the quantum numbers of a hole \footnote{In first-quantization language this first appeared in \cite{Girvin84}; an early discussion can be found in \cite{Anderson83}. For a rigorous and general proof of Eq. (\ref{quasiholefractionalization}) see Appendix \ref{sec:OD}.}. That is, \begin{eqnarray} \Lambda(\eta)|\psi_{M}^{N+1}\rangle = \widehat{U}_N(\eta)^M|\psi_{M}^{N}\rangle. \label{quasiholefractionalization} \end{eqnarray} Studying the long-range order of Read's operator \cite{Girvin1987} amounts to establishing that $\langle\widehat{\mathcal{K}}_M(z)^\dagger\widehat{\mathcal{K}}_M(0)\rangle$ approaches a non-zero constant at large $|z|$ \cite{CS2019}, or alternatively, the condensation of $\widehat{\mathcal{K}}_M(0)$ in the U(1) coherent state $|\theta\rangle = \sum_{N\ge 0}\sigma_{M,N} e^{-{\sf i} N\theta} |\psi^{N}_{M}\rangle$, where $\sigma_{M,N}=\alpha_N \|\psi_{M}^N\|^{-1}$ with $\alpha_N\in \mathbb{C}$ and $\theta \in \mathbb{R}$. We next expand on Read's arguments. Let us choose $\alpha_N$ such that $\gamma_{M,N}=\sigma^\ast_{M,N}\sigma^{\;}_{M,N-1}\|\psi_M^N\|^{2}$ represents a probability distribution concentrated around (an assumed large) $\overline{N}$.
Using the operator fractionalization relation, $\langle\theta |\widehat{\mathcal{K}}_M(0)|\theta\rangle=e^{{\sf i}\theta}\sum_{N\ge 1}\gamma_{M,N}\|\psi_M^N\|^2\langle\psi_M^N|\Lambda^\dagger(0)\Lambda(0)|\psi_{M}^N\rangle$. Leading contributions to the sum come from terms with $N$ close to $\overline{N}$, in which case $\langle \psi_{M}^{N}|\Lambda^\dagger(0)\Lambda(0)|\psi_{M}^{N}\rangle\cong \frac{\nu}{2\pi}\|\psi_{M}^N\|^2$. Therefore, $\langle\theta|\widehat{\mathcal{K}}_M(0)|\theta\rangle\rightarrow \frac{\nu}{2\pi}e^{{\sf i}\theta}$ for $\overline{N}\rightarrow\infty$. Obviously, $\langle\theta|\widehat{\mathcal{K}}_M(0)|\theta\rangle$ is not a {\it local order parameter} \cite{read}. Do we have a similar operator fractionalization relation for the quasiparticle operator $\widehat{V}_N(\eta)$, which reduces to Laughlin's quasielectron in the case of fermions? Since within the LLL one has $\Lambda(z)\widehat{U}^\dagger_N(\eta)=(2\partial_z -\eta^\ast)\widehat{U}^\dagger_N(\eta) \Lambda(z)$ it seems natural, by analogy to the quasihole, to define quasiparticles as the second-quantized version of $W_N(\eta)=\prod_{i=1}^N(2\partial_{z_i}-\eta^\ast)$, Laughlin's original proposal \cite{laughlin}. Note, though, that the second-quantized representation of this operator is $\widehat{W}_N(\eta)=\sum_{d=0}^N (-\eta^\ast)^{N-d}2^d f_d$, and not $\widehat{U}^\dagger_N(\eta)$. This proposal does not satisfy the operator fractionalization relation $\Lambda^\dagger(\eta) |\psi_{M}^{N-1}\rangle =\widehat{W}_{N}(\eta)^M|\psi_{M}^{N}\rangle$ since total angular momenta do not match. A simple modification $\Lambda^\dagger(\eta) |\psi_{M}^{N-1}\rangle =\widehat{W}_{N-1}(\eta)^M|\psi_{M}^{N}\rangle$, can be made to match total angular momenta as can be easily verified by localizing the quasiparticle at $\eta=0$. 
A close inspection of the case $N=5$ shows that such a modification cannot work since, albeit conserving the total angular momentum, individual component states display different angular momentum distributions (Appendix \ref{sec:OE}). A proper embodiment of the quasiparticle should satisfy \begin{eqnarray} \Lambda^\dagger(\eta) |\psi_{M}^{N-1}\rangle \!\!&=& \!\! \widehat{V}_{N-1}(\eta)^M|\psi_{M}^{N}\rangle \ \ \mbox{ with} \nonumber \\ \widehat{V}_{N-1}(\eta)^M \!\! &=& \!\!\Lambda^\dagger(\eta) \widehat{U}_{N-1}(\eta)^{-M}\Lambda(\eta), \end{eqnarray} as can be derived from the quasihole (i.e., hole fractionalization) relation. Indeed, this operator is well-defined when acting on the $N$-particle Laughlin state. Can $\widehat{V}_{N-1}(\eta)^M$ be written as the $M$-th power of another operator? Suppose that one wants to localize a quasiparticle at $\eta=0$; then $\widehat{U}_{N-1}(0)^M=e_{N-1}^M$ and the problem reduces to proving that $\bar a_0^\dagger e^{-M}_{N-1}\bar a^{\;}_0=(\bar a_0^\dagger e_{N-1}^{-1} \bar a^{\;}_0)^{M}$. Recall that any Laughlin state can be obtained by an inward squeezing process of a root partition. Even in the bosonic case, any term in such an expansion has the zeroth angular momentum orbital either empty or singly occupied. In the first (empty) case, the action of $\bar a^{\;}_0$ annihilates such a term, while in the second (singly occupied) case we are left with an $(N-1)$-particle state. The action of $e_{N-1}^{-1}$ on such a state reduces each remaining orbital component by a unit of flux. Since any such state has the smallest occupied orbital with $r\ge M$, the consecutive actions of $\bar a_0^\dagger$ and $\bar a^{\;}_0$ are well defined. It follows from the above that we can replace $\bar a_0^\dagger e_{N-1}^{-1} \bar a^{\;}_0$ by $\bar a_0^\dagger e_{N-1}^\dagger \bar a^{\;}_0$.
Therefore, \begin{equation} \Lambda^\dagger(0) |\psi_{M}^{N-1}\rangle =(\Lambda^\dagger(0) \widehat{U}^\dagger_{N-1}(0)\Lambda(0))^{M}|\psi_{M}^{N}\rangle. \end{equation} Analogous considerations apply to $\eta\neq 0$, as long as one can argue that the action of $\widehat{V}_{N-1}(\eta)^{k}$ is well-defined on the Laughlin state $|\psi_{M}^{N}\rangle$, for $k=1,\dotsc, M$. Indeed, if $T(\eta)$ is the magnetic translation operator by $\eta$, the translated state $T(-\eta)|\psi_{M}^{N}\rangle$ is still a zero mode of the Laughlin state parent Hamiltonian. Thus by the same squeezing argument, $\widehat{V}_{N-1}(0)^{k}T(-\eta)|\psi_{M}^{N}\rangle$ is well-defined. Since (up to phases) $T(\eta)\widehat{U}_{N-1}(0)T(-\eta)$ equals $\widehat{U}_{N-1}(\eta)$, this behavior under translation carries over to $\widehat{U}_{N-1}(\eta)^{-k}$, $(\widehat{U}_{N-1}(\eta)^{\dagger})^k$, and $\widehat{V}_{N-1}(\eta)^{k}$. Thus, the stated relations for the actions of these operators on the Laughlin state extend to finite $\eta$. We would like to stress that our quasiparticle (quasielectron) operator $\widehat{V}_{N-1}(\eta)$ does {\bf not} constitute an arbitrary Ansatz. It has been rigorously derived from the exact kinematic constraint that $M$ quasiparticles located at $\eta$ in an $N$-particle vacuum should be equivalent to the addition of one particle at the same location in an $(N-1)$-particle vacuum, i.e., the ``exact inverse" process advocated for a quasihole. From a physics standpoint, this constraint represents Laughlin's flux removal/insertion mechanism and is a universal property of the ground state independent of the Hamiltonian. \section{Quasiparticles Wave Functions} \label{QuasiparticleWf} The field-theoretic approach provides an elegant formalism to prove the exact mechanism behind particle fractionalization. We next illustrate how this mechanism is translated in a first-quantized language. 
To this end, we start using a {\it mixed} representation of the quasiparticle wave function. In this representation the corresponding quasiparticle (quasielectron) wave function, localized at $\eta \in \mathbb{C}$, is given by \begin{equation} \Psi^{\mathrm{qp}}_{\eta}(Z_N) = \Lambda^\dagger(\eta) \Psi_{\eta}^{(M-1)\mathrm{qh}}(Z_{N-1}), \end{equation} where $Z_N=\{z_1,z_2,\ldots,z_N\}$, $\Lambda^\dagger(\eta)$ creates a particle in the state $\psi_\eta^{0}(z)={\cal N}_0 \, e^{-\frac{1}{4}|z-\eta|^2}$ \footnote{In first quantization the Gaussian factor is typically not included in the integration measure. } and \begin{eqnarray} \Psi_{\eta}^{(M-1)\mathrm{qh}}(Z_{N-1})\!=\!\mathcal{N}_{\eta,N-1}^{(M-1)\mathrm{qh}}\! \prod\limits_{k=1}^{N-1} \! (z_k-\eta)^{M-1} \Psi_M(Z_{N-1}) \nonumber \end{eqnarray} is the wave function of $M-1$ quasiholes located at $\eta$ for $N-1$ particles, and Laughlin's (un-normalized) state reads \begin{eqnarray} \Psi_{M}(Z_{N-1})= \prod\limits_{1\le i<j\le N-1}(z_i-z_j)^{M}e^{-\frac{1}{4}\sum_{k=1}^{N-1}|z_k|^2} . \nonumber \end{eqnarray} By the definition of the operator $\Lambda^\dagger(\eta)$, then, \begin{equation} \Psi^{\mathrm{qp}}_{\eta}(Z_N) = \sqrt{N}\hat{ \mathcal{A}}\left[\psi_\eta^{0}(z_N)\Psi_{\eta}^{(M-1)\mathrm{qh}}(Z_{N-1})\right], \end{equation} where, for fermions for instance, \begin{equation} \hat{\mathcal{A}}\Phi(Z_N)=\frac{1}{N!}\sum\limits_{\sigma\in S_N}\mathrm{sgn}(\sigma)\Phi(z_{\sigma(1)},\ldots, z_{\sigma(N)}). \end{equation} This straightforwardly gives the first-quantized quasiparticle wave function \begin{equation} \begin{split} \Psi^{\mathrm{qp}}_{\eta}(Z_N)=\sqrt{N} \mathcal{N}_{\eta,N-1}^{(M-1)\mathrm{qh}}{\cal N}_0 \, e^{-\frac{|\eta|^2}{4}}e^{-\frac{1}{4} \sum_{k=1}^N|z_k|^2}&\\ \times \hat{\mathcal{A}}\left[e^{\frac{z_N \eta^\ast}{2}}\prod\limits_{k=1}^{N-1}(z_k-\eta)^{M-1}\prod\limits_{1\le i<j\le N-1}(z_i-z_j)^M\right]&, \end{split} \end{equation} with all normalization factors included.
We claim that this wave function is properly normalized. Indeed, we have \begin{equation} \langle \Psi^{\mathrm{qp}}_{\eta} | \Psi^{\mathrm{qp}}_{\eta}\rangle = \langle\Psi^{(M-1)\mathrm{qh}}_{\eta} |\Lambda(\eta)\Lambda^\dagger(\eta)|\Psi^{(M-1)\mathrm{qh}}_{\eta}\rangle. \end{equation} Since the orbital $\psi_\eta^0$ is unoccupied in $\Psi_{\eta}^{(M-1)\mathrm{qh}}$, $|\Psi_{\eta}^{(M-1)\mathrm{qh}}\rangle$ is an eigenstate of $\Lambda(\eta)\Lambda^\dagger(\eta)$ with eigenvalue $1$. Therefore, \begin{equation} \langle \Psi^{\mathrm{qp}}_{\eta} | \Psi^{\mathrm{qp}}_{\eta}\rangle = \langle \Psi_{\eta}^{(M-1)\mathrm{qh}}|\Psi_{\eta}^{(M-1)\mathrm{qh}}\rangle \end{equation} and $\Psi_{\eta}^{\mathrm{qp}}$ is normalized if $\Psi_{\eta}^{(M-1)\mathrm{qh}}$ is normalized. One can rewrite the (un-normalized) quasiparticle (quasielectron) wave function $\bar \Psi_{\eta}^{\mathrm{qp}}$ in an enlightening manner \begin{equation} \bar \Psi^{\mathrm{qp}}_\eta(Z_N)=\Gamma_\eta^\dagger(Z_N) \Psi_{M}(Z_N) , \end{equation} with the {\it quasiparticle (quasielectron) operator} \begin{equation} \Gamma_\eta^\dagger(Z_N)=\sum\limits_{i=1}^N e^{\frac{z_i\eta^\ast}{2}}\prod\limits_{j\neq i}\frac{(z_j-\eta)^{M-1}}{(z_j-z_i)^M}, \end{equation} which clearly shows how it differs significantly from prior proposals \cite{laughlin, Hansson17, Jain03, Hansson, Nielsen, Girvin86} (see Appendix \ref{sec:OE}). But this is not the whole story. It is even more illuminating to understand the precise mechanism leading to this remarkable quasiparticle, which, we emphasize once more, is not an Ansatz. Before doing so, we will first compute the charge of this excitation using the Berry connection idea proposed in \cite{Arovas1984} and further elaborated in Section 2.4 of \cite{stone1} for the quasihole, that is, the effective Aharonov-Bohm charge coupled to the magnetic flux.
We will then show a remarkable exact property of the charge density that will shed light on the underlying fractionalization mechanism. \subsection{Berry connection for one quasiparticle} For pedagogical reasons, we next focus on the fermionic (electron) case. Consider an adiabatic process (in time $t$) where the position of the quasiparticle, $\eta=\eta(t)$, is encircling an area enclosing a magnetic flux $\phi$. We will next show that the Berry connection decomposes into \begin{equation} \left\langle \Psi^{\mathrm{qp}}_{\eta} \biggr|\frac{d}{dt} \Psi^{\mathrm{qp}}_{\eta}\right\rangle={\sf i} \, {\cal A}_1+{\sf i} \, \tilde{\cal A}_{M-1}. \end{equation} As we will explain, ${\cal A}_1$ describes the Berry phase contribution from a single particle (electron) and $\tilde{\cal A}_{M-1}$ is the contribution from $M-1$ quasiholes. It is convenient to demonstrate this relation in second quantization, where only at the end is $\tilde{\cal A}_{M-1}$ computed using first-quantization methods \cite{Arovas1984, Rigolin2008,Rigolin2010}. Let $|\Psi_{\eta}^{(M-1)\mathrm{qh}}\rangle=\widehat{\psi}_\eta^\dagger |0\rangle$, where $\widehat{\psi}_\eta^\dagger$ is an element in the algebra generated by the $c_j^\dagger$'s, where $c_j^\dagger$ creates a particle in the orbital $\psi_0^j(z)$. Thus, \begin{equation} \widehat{\psi}_\eta^\dagger=\sum\limits_{j_1,\ldots,j_{N-1}}F_{j_1,\ldots,j_{N-1}}c_{j_1}^\dagger\ldots c_{j_{N-1}}^\dagger \end{equation} with some coefficients $F_\bullet$ dependent on $\eta$. The statement made earlier that $\psi_\eta^{0}(z)$ is not occupied in $|\Psi^{(M-1)\mathrm{qh}}_{\eta}\rangle$ is equivalent to saying that \begin{equation} \!\!\! \Lambda(\eta)\widehat{\psi}_\eta^\dagger =(-1)^{N-1}\widehat{\psi}_\eta^\dagger\Lambda(\eta), \ \ \Lambda^\dagger(\eta)\widehat{\psi}_\eta=(-1)^{N-1}\widehat{\psi}_\eta\Lambda^\dagger(\eta).
\nonumber \end{equation} Trivially, also, $\Lambda(\eta)$ has the same relation with $\widehat{\psi}_\eta$, and $\Lambda^\dagger(\eta)$ (or $\frac{d}{dt}\Lambda^\dagger(\eta)$) with $\widehat{\psi}_\eta^\dagger$. From normalization, \begin{equation} \Lambda(\eta)\Lambda^\dagger(\eta)|0\rangle =|0\rangle = \widehat{\psi}_\eta\widehat{\psi}_\eta^\dagger |0\rangle. \end{equation} Thus, \begin{eqnarray} \hspace*{-0.8cm} \left\langle \Psi^{\mathrm{qp}}_{\eta} \biggr|\frac{d}{dt} \Psi^{\mathrm{qp}}_{\eta}\right\rangle &=&\left\langle 0 \biggr| \widehat{\psi}_\eta\Lambda(\eta) \left(\frac{d}{dt}\Lambda^\dagger(\eta)\right)\widehat{\psi}_\eta^\dagger \biggr|0 \right\rangle \nonumber \\ && \hspace*{-2cm} + \left\langle 0 \biggr| \widehat{\psi}_\eta\Lambda(\eta)\Lambda^\dagger(\eta) \left(\frac{d}{dt}\widehat{\psi}_\eta^\dagger\right) \biggr|0 \right\rangle\equiv {\sf i} \, {\cal A}_1+{\sf i} \, \tilde{\cal A}_{M-1}, \end{eqnarray} where \begin{equation} \begin{split} {\sf i}\, {\cal A}_1&=\left\langle 0 \biggr| \widehat{\psi}_\eta\Lambda(\eta) \left(\frac{d}{dt}\Lambda^\dagger(\eta)\right)\widehat{\psi}_\eta^\dagger \biggr|0 \right\rangle\\ &=\left\langle 0 \biggr| \widehat{\psi}_\eta \widehat{\psi}_\eta^\dagger \Lambda(\eta) \left(\frac{d}{dt}\Lambda^\dagger(\eta)\right) \biggr|0 \right\rangle\\ &=\left\langle 0 \biggr| \Lambda(\eta) \left(\frac{d}{dt}\Lambda^\dagger(\eta)\right) \biggr|0 \right\rangle=\left\langle \psi_\eta^0\biggr|\frac{d}{dt}\psi_\eta^0\right\rangle, \end{split} \end{equation} and \begin{eqnarray} \hspace*{-0.1cm} {\sf i}\, \tilde{\cal A}_{M-1} \hspace*{-0.05cm}&=&\hspace*{-0.05cm}\left\langle0\biggr| \widehat{\psi}_\eta \Lambda(\eta)\Lambda^\dagger(\eta) \left(\frac{d}{dt}\widehat{\psi}_\eta^\dagger\right)\biggr|0\right\rangle \nonumber\\ &=&\hspace*{-0.05cm}\left\langle0\biggr| \Lambda(\eta)\Lambda^\dagger(\eta)\widehat{\psi}_\eta \left(\frac{d}{dt}\widehat{\psi}_\eta^\dagger\right)\biggr|0\right\rangle \\ 
&=&\hspace*{-0.05cm}\left\langle0\biggr|\widehat{\psi}_\eta \left(\frac{d}{dt}\widehat{\psi}_\eta^\dagger\right)\biggr|0\right\rangle\! = \!\left\langle \Psi_{\eta}^{(M-1)\mathrm{qh}}\biggr|\frac{d}{dt}\Psi_{\eta}^{(M-1)\mathrm{qh}} \right\rangle. \nonumber \end{eqnarray} This finishes the proof. Therefore, the quasiparticle charge $e^*$ has a contribution from a particle of charge $e$ and $M-1$ quasiholes of charge $-e/M$, i.e., $e^*=e-e (M-1)/M=e/M$, as expected. In simple terms, the channel fusing two quasiholes with one electron leads to a quasielectron of charge $e/3$ in a $\nu=1/3$ Laughlin fluid. This is a very intuitive (and exact) mechanism that has been overlooked until now. Notice that we proved that the evaluation of the quasiparticle Berry connection is exact for any $N$, while the quasihole charge $-e/M$ is only exact asymptotically in the limit $N\rightarrow \infty$ (see Section 2.4 of \cite{stone1}). \begin{figure*}[htb] \centering \includegraphics[width=1.5\columnwidth]{Fig1_new.pdf} \caption{(Color online.) Density profiles (in units of $\rho_0=\frac{\nu}{2\pi}$) for 1 quasielectron located at the position $\eta$ of an incompressible $\nu=\frac{1}{3}$ Laughlin fluid with $N=7$ particles (left and middle panels). The right panel depicts 2 quasiholes in an otherwise $\nu=\frac{1}{3}$ Laughlin fluid with $N=6$ particles (adding the electron charge density $\frac{1}{2\pi}e^{-\frac{1}{2}|z-\eta|^2}$ leads to the exact same middle panel). Lower panels are contour plots of their 3D plots above. Monte Carlo simulations averaged over more than $2\times 10^{10}$ equilibrated configurations.} \label{fig:03} \end{figure*} \subsection{Charge density} A consequence of this effective fusion mechanism manifests in the calculation of the quasiparticle charge density $\rho_{\mathrm{qp}}(z)$. We appeal once more to the fact that \begin{equation} \Lambda(\eta)|\Psi^{(M-1)\mathrm{qh}}_{\eta}\rangle=0.
\end{equation} This can be expressed as \begin{equation} \label{eq:int} \int d^2 r_j \psi_\eta^0(z_j)^\ast\Psi^{(M-1)\mathrm{qh}}_{\eta}(Z_{N-1})=0, \end{equation} for $j=1,\ldots,N-1$. Here $d^2r_j=\frac{1}{2{\sf i}}dz_j^\ast \wedge dz_j$ is the usual two dimensional measure on the complex plane. We can write the quasielectron wave function as \begin{equation} \Psi^{\mathrm{qp}}_{\eta}(Z_N)=\frac{1}{\sqrt{N}}\sum\limits_{j=1}^N(-1)^j \psi_\eta^0(z_j) \Psi^{(M-1)\mathrm{qh}}_{\eta}(Z_{N-1},\widehat{z_j}), \end{equation} where $\widehat{z_j}$ means that coordinate $z_j$ is absent. We want to evaluate \begin{eqnarray} \ \rho_{\mathrm{qp}}(z) &=& \! \sum\limits_{j=1}^N\int d^2 r_1 \ldots d^2 \widehat{r_j}\ldots d^2 r_N \, |\Psi^{\mathrm{qp}}_{\eta}(Z_N)|^2 \\ =&& \hspace*{-0.4cm} N \int d^2 r_1\ldots d^2 r_{N-1} \, |\Psi_{\eta}^{\mathrm{qp}}(Z_{N-1},z_N=z)|^2 \nonumber\\ =&& \hspace*{-0.4cm} \Big[ \sum\limits_{j,j'=1}^N(-1)^{j+j'}\int d^2 r_1 \ldots d^2 r_{N-1}\psi_\eta^0(z_j)^\ast\psi_\eta^0(z_{j'}) \nonumber \\ \times&& \hspace*{-0.4cm} \Psi_{\eta}^{(M-1)\mathrm{qh}}(Z_{N-1},\widehat{z_j})^\ast \Psi^{(M-1)\mathrm{qh}}_{\eta}(Z_{N-1},\widehat{z_{j'}}) \Big]_{z_N=z}. \nonumber \end{eqnarray} We now see that terms with $j\neq j'$ do not contribute. This is so since in such a case, at least one of them is not equal to $N$, say $j\neq N$, and then \eqref{eq:int} gives zero. In the $j=j'=N$ term, the integrals give a value of unity for reasons of normalization and we get \begin{equation} \psi_\eta^0(z)^\ast \psi_\eta^0(z)=\langle \psi_\eta^0|\widehat{\rho}(z) |\psi_\eta^0\rangle , \end{equation} where $\widehat{\rho}(z)$ is the density operator. For $j=j'\neq N$, the integral over the $j$th variable gives $1$, and the rest precisely gives $\langle \Psi^{(M-1)\mathrm{qh}}_{\eta}|\widehat{\rho}(z) |\Psi^{(M-1)\mathrm{qh}}_{\eta}\rangle$. 
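The orthogonality in Eq. \eqref{eq:int}, which is what kills the $j\neq j'$ cross terms, can be illustrated by quadrature in its single-particle form. The sketch below (Python/NumPy) assumes the holomorphic coherent-state convention for $\psi_\eta^0$, implicit in the factor $e^{z\eta^\ast/2}$ of the explicit wave function, and uses an arbitrary holomorphic factor $g(z)=z^2$ for illustration:

```python
import numpy as np

M = 3
eta = 0.6 + 0.2j   # illustrative quasiparticle position

def psi0(z, eta):
    # LLL coherent state (holomorphic convention, assumed)
    return np.exp(-np.abs(z)**2/4 - abs(eta)**2/4 + z*np.conj(eta)/2) / np.sqrt(2*np.pi)

# quadrature grid on the complex plane
x = np.linspace(-8, 8, 400)
X, Y = np.meshgrid(x, x)
Z = X + 1j*Y
dA = (x[1] - x[0])**2

# single-particle analogue of Eq. (eq:int): the factor (z-eta)^(M-1)
# annihilates the overlap with psi^0_eta for any holomorphic g (here g = z^2)
integrand = np.conj(psi0(Z, eta)) * (Z - eta)**(M-1) * Z**2 * np.exp(-np.abs(Z)**2/4)
vanishing = abs(np.sum(integrand) * dA)      # ~ 0

# control: without the (z-eta)^(M-1) factor the overlap is O(1)
control = abs(np.sum(np.conj(psi0(Z, eta)) * Z**2 * np.exp(-np.abs(Z)**2/4)) * dA)
print(vanishing, control)
```

The vanishing reflects the reproducing-kernel property of the LLL: the overlap is proportional to the holomorphic factor evaluated at $\eta$, where $(z-\eta)^{M-1}$ has a zero.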
We have just shown that \begin{equation} \rho_{\mathrm{qp}}(z)=\left\langle\psi_\eta^0|\widehat{\rho}(z) |\psi_\eta^0\right\rangle+\left\langle \Psi^{(M-1)\mathrm{qh}}_{\eta}\bigr|\widehat{\rho}(z) \bigr|\Psi^{(M-1)\mathrm{qh}}_{\eta}\right\rangle. \end{equation} Here, the first term is just $\frac{1}{2\pi}e^{-\frac{1}{2}|z-\eta|^2}$, while the second one is the local particle density at $z$ of $\Psi_{\eta}^{(M-1)\mathrm{qh}}(Z_{N-1})$, which is governed by a plasma analogy. This picture is physically appealing. On one hand, there is no (local) plasma analogy for a state such as $\Psi^{\mathrm{qp}}_{\eta}(Z_N)$, but certain properties such as the Berry connection or the quasiparticle charge density simplify because of the fusion mechanism of fractionalization for any finite $N$. On the other hand, this same mechanism facilitates numerical computations, such as Monte Carlo \cite{Ortiz1993}, of certain physical properties. For example, in Fig. \ref{fig:03} we have checked numerically that the fusion mechanism works for the charge density of an $N=7$ electron system and $\nu=1/3$. In this way, we can simulate an arbitrarily large system of electrons because there is an ``effective plasma analogy'' and the Monte Carlo updates become quite efficient. Figure \ref{fig:04} shows Monte Carlo simulations of the radial density for a system of $N=50$ electrons. We can measure the charge of the quasiparticle by using the expression $\delta \rho_{\mathrm{qp}} =2\pi \int_{0}^{r_{\mathrm{cut-off}}} \left[\rho_{\mathrm{qp}}(r)-\rho_L(r)\right]r\,dr$ where, in a finite system, the cut-off radius $r_{\mathrm{cut-off}}$ must completely enclose the quasiparticle and, at the same time, be sufficiently far from the boundary to avoid boundary effects \cite{KivelsonSchrieffer82}. Using the Monte Carlo data for $N=400$ particles (see Fig.
4 in Appendix \ref{sec:OF}) and choosing $r_{\mathrm{cut-off}}\leq 30 \ell$, we get a saturation of the fractional charge at the value $\delta\rho_{\mathrm{qp}}=0.3330(30) e$. Similarly, for two quasiholes we get $\delta\rho_{\mathrm{2qh}}=-0.6634(30) e$. \begin{figure}[htb] \centering \includegraphics[trim={0 5.2cm 0 0}, width=1.0\columnwidth]{Fig_2_to_be_trimmed.pdf} \caption{ (Color online.) Left panel: Radial charge density $\rho(r)$ (in units of $\rho_0=\frac{\nu}{2\pi}$) of a $\nu=1/3$ Laughlin fluid ($\rho_L(r)$ with $N=50$ electrons), 2 quasiholes ($N=49$ electrons), 1 electron (of density $\frac{1}{2\pi}e^{-\frac{1}{2}|z|^2}$), and 1 quasielectron ($N=50$ electrons) localized at $\eta=0$. The fusion mechanism dictates that the sum of 2 quasiholes and 1 electron is identical to 1 quasielectron. Right panel: Quasiparticle localized at $\eta=4+3{\sf i}$. } \label{fig:04} \end{figure} \subsection{Berry connection for two quasiparticles: \\ The problem of statistics} Our mechanism for particle fractionalization suggests the following form of the wave function for a system of $N_{\mathrm{qp}}\ll N$ well-separated quasiparticles \begin{equation} \begin{split} \Psi^{N_{\mathrm{qp}}\mathrm{qp}}_{\eta_1,\ldots,\eta_{N_{\mathrm{qp}}}}(Z_N)={\cal N}^e_{\eta_1\dotsc\eta_{N_{\mathrm{qp}}}}&\Lambda^\dagger(\eta_1)\ldots \Lambda^\dagger(\eta_{N_{\mathrm{qp}}})\\ &\times \Psi^{N_{\mathrm{qp}}(M-1)\mathrm{qh}}_{\eta_1,\ldots,\eta_{N_{\mathrm{qp}}}}(Z_{N-N_{\mathrm{qp}}}),\label{unocc} \end{split} \end{equation} where $\Psi^{N_{\mathrm{qp}}(M-1)\mathrm{qh}}_{\eta_1,\ldots,\eta_{N_{\mathrm{qp}}}}(Z_N)$ denotes the state with $M-1$ quasiholes at $\eta_1$, $M-1$ quasiholes at $\eta_2$ and so on, up to $M-1$ quasiholes at $\eta_{N_{\mathrm{qp}}}$. ${\cal N}^e_{\eta_1\dotsc\eta_{N_{\mathrm{qp}}}}$ is a normalization factor associated with the electron creation operators, as shown below.
To address the quasiparticle (a composite of one electron and $M-1$ quasiholes) exchange statistics, we next focus on $N_{\mathrm{qp}}=2$, in which case we get \begin{equation} \begin{split} &\Psi^{2\mathrm{qp}}_{\eta_1,\eta_2}(Z_N)=\sqrt{N(N-1)}\;\mathcal{N}^{2(M-1)\mathrm{qh}}_{\eta_1,\eta_2,N-2}\; {\cal N}^e_{\eta_1,\eta_2} \\ &\times e^{-\frac{1}{4}\sum\limits_{k=1}^N|z_k|^2}\hat{\mathcal{A}}\left(e^{\frac{\eta_1^\ast z_N+\eta_2^\ast z_{N-1}}{2}} \prod\limits_{k=1}^{N-2}(z_k-\eta_1)^{M-1}\right. \\ &\left.\times\prod\limits_{l=1}^{N-2}(z_l-\eta_2)^{M-1} \prod\limits_{1\leq i<j \leq N-2}(z_i-z_j)^M \right). \end{split} \end{equation} Similar to the one-particle case, $\Psi^{2(M-1)\mathrm{qh}}_{\eta_1,\eta_2}(Z_{N-2})$ has orbitals $\psi_{\eta_i}^0$, $i=1,2$, unoccupied, owing to the presence of factors $\prod_k (z_k-\eta_i)$, so that \begin{equation} \Lambda(\eta_i)\Lambda^\dagger(\eta_i)|\Psi^{2(M-1)\mathrm{qh}}_{\eta_1,\eta_2}\rangle =|\Psi^{2(M-1)\mathrm{qh}}_{\eta_1,\eta_2}\rangle, \quad i=1,2. \end{equation} By a straightforward computation, in the mixed representation, we get \begin{equation} \begin{split} &\langle \Psi^{2\mathrm{qp}}_{\eta_1,\eta_2}| \Psi^{2\mathrm{qp}}_{\eta_1,\eta_2}\rangle =\langle \Psi^{2(M-1)\mathrm{qh}}_{\eta_1,\eta_2}|\Psi^{2(M-1)\mathrm{qh}}_{\eta_1,\eta_2}\rangle ({\cal N}^e_{\eta_1,\eta_2})^2 \\ &\hspace{70pt}\times \left(1-\{\Lambda(\eta_1),\Lambda^\dagger(\eta_2)\}\{\Lambda(\eta_2),\Lambda^\dagger(\eta_1)\}\right)\\ &=\langle \Psi^{2(M-1)\mathrm{qh}}_{\eta_1,\eta_2}|\Psi^{2(M-1)\mathrm{qh}}_{\eta_1,\eta_2}\rangle=1, \end{split} \end{equation} where we choose real normalization factors such that $\mathcal{N}^{2(M-1)\mathrm{qh}}_{\eta_1,\eta_2,N-2}$ normalizes the quasihole cluster state $|\Psi^{2(M-1)\mathrm{qh}}_{\eta_1,\eta_2}\rangle$ and $({\cal N}^e_{\eta_1,\eta_2})^2$ cancels the second line. 
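The normalization factor can be made fully explicit, since the anticommutator $\{\Lambda(\eta_1),\Lambda^\dagger(\eta_2)\}$ equals the overlap $\langle\psi^0_{\eta_1}|\psi^0_{\eta_2}\rangle$ of two LLL coherent states. A quadrature sketch (Python/NumPy; the holomorphic convention for $\psi^0_\eta$ and the standard closed form of the coherent-state overlap are assumptions made explicit here, with illustrative $\eta_1,\eta_2$):

```python
import numpy as np

eta1, eta2 = 1.5 + 0.0j, -0.5 + 1.0j    # illustrative quasiparticle positions

def psi0(z, eta):
    # LLL coherent state (holomorphic convention, assumed)
    return np.exp(-np.abs(z)**2/4 - abs(eta)**2/4 + z*np.conj(eta)/2) / np.sqrt(2*np.pi)

# standard closed form of the coherent-state overlap (assumed)
closed = np.exp(-abs(eta1)**2/4 - abs(eta2)**2/4 + eta1*np.conj(eta2)/2)

# quadrature check of <psi^0_eta1 | psi^0_eta2>
x = np.linspace(-10, 10, 500)
X, Y = np.meshgrid(x, x)
Z = X + 1j*Y
dA = (x[1] - x[0])**2
overlap = np.sum(np.conj(psi0(Z, eta1)) * psi0(Z, eta2)) * dA

# resulting normalization of the two-quasiparticle state
Ne2 = 1.0 / (1.0 - abs(closed)**2)      # (N^e_{eta1,eta2})^2 >= 1
print(overlap, closed, Ne2)
```

Since $|\langle\psi^0_{\eta_1}|\psi^0_{\eta_2}\rangle|^2=e^{-|\eta_1-\eta_2|^2/2}$, the correction to $({\cal N}^e_{\eta_1,\eta_2})^2=1$ is exponentially small for well-separated quasiparticles.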
The latter is just the normalization of the 2-fermion state $\Lambda(\eta_1)^\dagger\Lambda(\eta_2)^\dagger|0\rangle$, so this choice of ${\cal N}^e_{\eta_1,\eta_2}$ can also be expressed as \begin{equation} ({\cal N}^e_{\eta_1,\eta_2})^2 \Lambda(\eta_2)\Lambda(\eta_1) \Lambda(\eta_1)^\dagger\Lambda(\eta_2)^\dagger|0\rangle=|0\rangle \label{2fermnorm} \end{equation} and/or its Hermitian adjoint, which will be useful in the following. For the computation of the Berry connection, just as in the one-quasiparticle case, one can write $|\Psi^{2(M-1)\mathrm{qh}}_{\eta_1,\eta_2}\rangle =\hat{\psi}^\dagger_{\eta_1,\eta_2}|0\rangle$ for some $N-2$ particle operator $\hat{\psi}^\dagger_{\eta_1,\eta_2}$ in the algebra generated by the $\Lambda(\eta)^\dagger$, in terms of which \eqref{unocc} can be equivalently stated as \begin{equation} \Lambda(\eta_i)\hat{\psi}^\dagger_{\eta_1,\eta_2}=(-1)^{N-2}\hat{\psi}^\dagger_{\eta_1,\eta_2}\Lambda(\eta_i), \quad i=1,2.\label{unocc2} \end{equation} Then, utilizing the last two equations, the calculation of the Berry connection proceeds analogously to the single-particle case. In particular, one obtains two contributions \begin{equation} \left\langle \Psi^{2\mathrm{qp}}_{\eta_1,\eta_2} \biggr|\frac{d}{dt} \Psi^{2\mathrm{qp}}_{\eta_1,\eta_2}\right\rangle ={\sf i}\mathcal{A}_2+{\sf i} \tilde{\mathcal{A}}_{2(M-2)}, \end{equation} where \begin{equation} {\sf i}\mathcal{A}_2= \langle \eta_1,\eta_2|\frac{d}{dt}|\eta_1,\eta_2\rangle \end{equation} is the Berry connection of a normalized 2-electron state $ |\eta_1,\eta_2\rangle=\mathcal{N}^e_{\eta_1,\eta_2}\Lambda^\dagger(\eta_1)\Lambda^\dagger(\eta_2) |0\rangle $, and \begin{equation} {\sf i}\tilde{\mathcal{A}}_{2(M-2)}=\langle \Psi^{2(M-1)\mathrm{qh}}_{\eta_1,\eta_2}| \frac{d}{dt}\Psi^{2(M-1)\mathrm{qh}}_{\eta_1,\eta_2}\rangle \end{equation} is that of a state of two clusters of $M-1$ quasiholes.
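Anticipating the evaluation carried out next, the exchange statistics of the composite reduces to modular arithmetic once the two contributions are known: $1$ (in units of $\pi$) for a fermion pair, $0$ for a boson pair, and $(M-1)^2/M$ for the two $(M-1)$-quasihole clusters. A minimal exact-arithmetic check (Python):

```python
from fractions import Fraction

# statistical parameter gamma (in units of pi), defined modulo 2
for M in (3, 5, 7):      # fermionic Laughlin states, M odd
    gamma = (Fraction(1) + Fraction((M - 1)**2, M)) % 2
    assert gamma == Fraction(1, M)

for M in (2, 4, 8):      # bosonic Laughlin states, M even
    gamma = (Fraction(0) + Fraction((M - 1)**2, M)) % 2
    assert gamma == Fraction(1, M)

print("gamma = 1/M in all cases")
```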
For large $|\eta_1-\eta_2|$, both contributions are analytically under control, the 2-electron one $ {\sf i}\mathcal{A}_2$ trivially so, and the one from the quasihole cluster state, ${\sf i}\tilde{\mathcal{A}}_{2(M-2)}$, via methods along the lines of Arovas-Schrieffer-Wilczek \cite{stone1,Arovas1984}. Dropping Aharonov-Bohm contributions, and defining the statistical phase as $e^{{\sf i}\pi\gamma}$, the contribution to $\gamma$ from the 2-electron state is $1$ (assuming, for the time being, that the underlying particles are fermions with $M$ odd), and that of the quasihole-cluster is $(M-1)^2/M$ \cite{Su1986}. Thus, \begin{equation} \pi \gamma\equiv \pi+ (M-1)^2 \cdot \frac{\pi}{M} \!\!\!\!\pmod{2 \pi} \equiv \frac{\pi}{M} \!\!\!\! \pmod{2 \pi}, \end{equation} as expected for a quasielectron. The same final result $\pi/M$ would be obtained for bosonic states and even $M$. \section{Constructive subspace bosonization} A bosonization map is an example of a duality \cite{CON}. Typically, dualities are {\it dictionaries} constructed as isometries of bond algebras acting on the whole Hilbert space \cite{CON}. A weaker notion may involve subspaces defined from a prescribed vacuum and is, thus, Hamiltonian-dependent. This is the case of Luttinger's bosonization \cite{delft}, which describes, in the thermodynamic limit, collective low-energy excitations about a gapless fermion ground state. Our bosonization is performed with respect to a radically different vacuum: that of the gapped Laughlin state. Unlike most treatments, we will not bosonize the one-dimensional FQH edge (by assuming it to be a Luttinger system) but rather bosonize the entire two-dimensional FQH system.
Contrary to gapless collective excitations about the one-dimensional Fermi gas ground state associated with the Luttinger bosonization scheme, our bosonization does not describe modes of arbitrarily low finite energy but rather only the zero-energy (topological) excitations \cite{MONS} that are present in the gapped Laughlin fluid. As illustrated in \cite{ONDS,MONS}, the zero-mode subspace $\mathcal{Z}=\bigoplus_{N=0}^\infty \mathcal{Z}_N$ is generated by the action of the commutative algebra $\mathsf{A}$ on the Laughlin state $|\psi_{M}^N\rangle$ for different particle numbers $N$. Yet another notable difference from the conventional Luttinger bosonization (and conjectured extensions to 2+1 dimensions \cite{SSWW}) is that, somewhat similarly to earlier continuum renditions (as opposed to our discrete case), e.g., \cite{Fuentes}, the indices parameterizing our bosonic excitations, $d\ge 0$, are taken from the discrete positive half-line (angular momentum values) instead of the continuous full real line of the Luttinger system (or plane of \cite{SSWW}). Each zero-energy state in our original (fermionic/bosonic) Hilbert space has an image in the mapped bosonized Hilbert space. Consider the following bosonic creation (annihilation) operators $\mathfrak{b}^\dagger_d=\mathcal{O}_{d}/\sqrt{d \nu}$ ($\mathfrak{b}^{\;}_d=\mathcal{O}^\dagger_{d}/\sqrt{d \nu}$). Then, $d \nu [\mathfrak{b}^{\;}_d, \mathfrak{b}^\dagger_d]_-= \sum_{r=0}^{d-1}\bar a_r^\dagger \bar a^{\;}_r$ and, in the thermodynamic limit, \begin{equation} \langle \psi_{M}^N|[\mathfrak{b}^{\;}_d, \mathfrak{b}^\dagger_d]_- |\psi_{M}^N \rangle\|\psi_{M}^N\|^{-2}\rightarrow 1. \end{equation} The commutator $[\mathfrak{b}^{\;}_d, \mathfrak{b}^\dagger_{d'}]_-$ does not preserve total angular momentum when $d\ne d'$. It follows that, in the thermodynamic limit, within the Laughlin state subspace, $[\mathfrak{b}^{\;}_d, \mathfrak{b}^\dagger_{d'}]_-=\delta_{d,d'}$.
The field operator $\varphi(z)=\sum_{d\ge 0}\phi_d(z) \mathfrak{b}_d$ and its adjoint $\varphi^\dagger(z)$ satisfy $[\varphi(z),\varphi(z')]_-=0$ and $[\varphi(z), \varphi^\dagger(z')]_-=\{z'|z\}$. We next construct the operators connecting different particle sectors, that is, the Klein factors that commute with the bosonic degrees of freedom $\mathfrak{b}^{\;}_d, \mathfrak{b}^{\dagger}_d$ and are $N$-independent. Since $|\psi_{M}^{N+1}\rangle = \frac{1}{N+1}{K}_{M,N}|\psi_{M}^N\rangle$ we define ${F}_{M,N}^\dagger=\frac{1}{N+1}{K}_{M,N}$ and $\mathcal{F}^\dagger_{M}=\sum_{N\ge 0} {F}^\dagger_{M,N} |\psi_{M}^N \rangle \langle \psi^N_{M}|$. This illustrates the relation between the Klein factors of bosonization and the (non-local) Read operator. We then get $\langle\psi_{M}^{N+1}|[\mathcal{O}_d,{F}^\dagger_{M,N}]_-|\psi_{M}^{N}\rangle=0$ and $\langle\psi_{M}^{N+1}|[\mathfrak{b}^{\dagger}_d, \mathcal{F}^\dagger_{M}]_-|\psi_{M}^{N}\rangle=0$. One can prove a similar relation for $\mathcal{F}^{\;}_M:=(\mathcal{F}_{M}^\dagger)^\dagger$ and, analogously, for $\mathfrak{b}^{\dagger}_d$ replaced by $\mathfrak{b}^{\;}_d$ (see Appendix \ref{sec:OG}). Since the $\widehat{U}_N(\eta)$ operators can be expressed in terms of $\mathfrak{b}_d^\dagger$'s, the fractionalization equations (both for quasiparticle as well as quasihole) can be thought of as the dictionary, at the field operator level, for our bosonization. We reiterate that this bosonization within the zero-mode subspace reflects its purely topological character. Indeed, the only Hamiltonian that commutes with the generators of $\mathsf{A}$ is the null operator. \section{Universal Edge Behavior} An understanding of the bulk-boundary correspondence in interacting topological matter is a long-standing challenge. For FQH fluids, Wen's hypothesis \cite{Wen} of Luttinger physics at the edge, complemented by further effective edge Hamiltonian descriptions \cite{FBS,Jain01}, constitutes our best guide for the edge physics.
We now advance a conjecture enabling direct analytical calculations. We posit that the asymptotic long-distance behavior of the single-particle edge Green's function may be calculated by evaluating it for the root partition (the DNA) of the corresponding FQH state. As we next illustrate, our computed long-distance behavior shows remarkable agreement with Wen's hypothesis. Our (root pattern) angular momentum basis calculations do not include the effects of boundary confining potentials (if any exist). Most notably, we do not, at all, assume that the FQH edge is a Luttinger liquid or another effective one-dimensional system. Consider the fermionic Green's function \begin{eqnarray} - {\sf i}G(z,z')=\rho(z,z')&=& \langle\psi_{M}^N|\Psi^\dagger(z)\Psi(z')|\psi_{M}^N \rangle\|\psi_{M}^N\|^{-2} \nonumber \\ && \times e^{-\frac{1}{4}(|z|^2 +|z'|^2)}, \end{eqnarray} and coordinates $z={R}e^{{\sf i}\theta}$, $z'={R}e^{{\sf i}\theta'}$, where ${R}=\sqrt{2(r_{\rm max}+1)}$ is the radius of the last occupied orbital and can be identified with the classical radius of the droplet, i.e., it satisfies $\pi R^2 \cdot \alpha=N$ with $\alpha=N/(2\pi(r_{\mathrm{max}}+1))$ being the average density of the (homogeneous) droplet. Then, \begin{equation}\rho(z,z')=\frac{e^{-\frac{1}{2}{R}^2}}{2\pi} \sum_{r=0}^{r_\mathrm{max}} \left(\frac{R^2}{2}\right)^r \frac{e^{{\sf i}(\theta'-\theta)r}}{r!} \\ \frac{\langle \psi_{M}^N|\bar c_r^\dagger \bar c^{\;}_r|\psi_{M}^N\rangle}{\|\psi_{M}^N\|^{2}}.\end{equation} Similarly, the edge Green's function associated with the root partition $|\widetilde{\psi}_{M}^N\rangle$ is \begin{equation} \tilde \rho(z,z')\!= \!\frac{e^{-\frac{1}{2}{R}^2}e^{{\sf i} NM(\theta'-\theta)}}{2\pi} \! \sum_{k=1}^{N} \!
\left(\frac{{R}^2}{2}\right)^{(N-k) M} \!\!\!\frac{ e^{{\sf i} k M(\theta-\theta')}}{[(N-k) M]!}, \end{equation} where we used $\langle \widetilde{\psi}_{M}^N|\bar c_r^\dagger \bar c^{\;}_r|\widetilde{\psi}_{M}^N\rangle\|\widetilde{\psi}_{M}^N\|^{-2}=1$ for $r=0,M,\ldots, (N-1)M$, and $0$ otherwise. Thus far, our root partition calculation is exact. We next perform asymptotic analysis. For large $k$, the largest phase oscillations appear when $\cos(M k(\theta-\theta'))=(-1)^k$, i.e., for $|\theta-\theta'|=\tilde \theta$ near $\pi\frac{1+2l}{M}$ with $l=0,\ldots, m$ and $M=2m+1$. This implies that the dominant contributions to the sum originate from small $k$ values. We can then apply Stirling's approximation $\left[(N-k)M\right]!\cong\sqrt{\pi} R \left(R^2/2\right)^{(N-k)M}e^{-R^2/2}$, where we used $R^2\nu\approx 2N$ (valid since $1-\nu \ll R^2$), leading to \begin{equation} \label{sM1} | \tilde \rho (z,z')|\cong\frac{1}{4\pi^{3/2}R \left|\sin \frac{M \tilde \theta}{2}\right|}. \end{equation} Long distances correspond to $\tilde \theta$ near $\pi$. As \begin{eqnarray} \left|\sin\left(\frac{M\tilde \theta}{2}\right)\right|&=&\left|\sin\left(\frac{\tilde \theta}{2}\right)\right|^M \\ &&-\frac{1}{8}M(M-1)(\tilde \theta-\pi)^2 + \mathcal{O}((\tilde \theta-\pi)^4), \nonumber \end{eqnarray} the edge Green's function \begin{equation} \label{sM2} |\tilde \rho (\tilde \theta)|=\frac{1}{4\pi^{3/2}R \left|\sin\left(\frac{\tilde \theta}{2}\right)\right|^M}\left(1+\mathcal{O}((\tilde \theta-\pi)^2)\right), \end{equation} or, equivalently, $|\widetilde{\rho}(\widetilde{\theta})|\propto |z-z'|^{-M}$. This is only valid in the vicinity of $\widetilde{\theta}=\pi$ (e.g., demanding the corrections to be $\leq 1\%$, for $M=3$, restricts us to $[0.96\pi,\pi]$), while Eq. \eqref{sM1} spans a broader range -- see Fig. \ref{fig:01}.
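The quality of Eq. \eqref{sM1} can be gauged directly against the exact root-partition sum, without any extrapolation. A short numerical sketch (Python; the values $N=8$, $M=3$ are illustrative, evaluated at $\tilde\theta=\pi$, where $|\sin(M\tilde\theta/2)|=1$):

```python
import math

M, N = 3, 8                      # Laughlin exponent and particle number
r_max = M * (N - 1)              # last occupied orbital of the root pattern
R2 = 2 * (r_max + 1)             # R^2 = 2(r_max + 1)

# exact root-partition sum at theta~ = pi: e^{i k M pi} = (-1)^k for odd M
S = sum((-1)**k * (R2 / 2)**((N - k) * M) / math.factorial((N - k) * M)
        for k in range(1, N + 1))
exact = math.exp(-R2 / 2) / (2 * math.pi) * abs(S)

# asymptotic law of Eq. (sM1) at theta~ = pi, where |sin(M pi/2)| = 1
asym = 1.0 / (4 * math.pi**1.5 * math.sqrt(R2))

print(exact, asym)               # agree at the percent level already for N = 8
```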
The Green's function was computed by using the tables of characters for permutation groups $S_{N(N-1)}$ for $M=3$ (up to $N=8$ and then extrapolating the results), adjusting the method in \cite{Dunne}. The difference between $|\rho|$ and $|\widetilde{\rho}|$ vanishes at $\pi$ as $N^{-1/2}$. \begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{plotF.pdf} \caption{\label{fig:01} (Color online.) Edge Green's function corresponding to the Laughlin (blue line) and the root (orange dotted line) state, for 8 particles. Both the $(\sin(\frac{\tilde{\theta}}{2}))^{-3}$ (green dashed line) and $(\sin(\frac{3\tilde{\theta}}{2}))^{-1}$ (red dashed line) laws are also shown. The latter is a better approximation of $|\widetilde{\rho}|$ in a broader range around $\pi$. In the vicinity of the blue points, Stirling's approximation is valid. Insets: $\log|G(\widetilde{\theta})|$ as a function of $\log|\sin(\widetilde{\theta}/2)|$ in the range (Right): $[0.6,1]$ for $\sin(\widetilde{\theta}/2)$, for both $N=8$ (red dashed line) and $N=7$ (blue line); (Left): $[0.967,1]$ for $N=8$ (red dashed line) and $[0.991,1]$ for $N=7$ (blue line).} \end{figure} Nevertheless, the long-distance ($\widetilde{\theta} \sim \pi$) behavior of the Green's function, in the thermodynamic limit, cannot be reliably determined from small $N$ calculations \cite{wan}. For instance, by examining the slope $\mu$ of $\log|G(\widetilde{\theta})|$ when plotted as a function of $\log|\sin(\widetilde{\theta}/2)|$ for $N=8$ (Fig. \ref{fig:01}), we get $\mu=-3.88$ when using the range $[0.967,1]$ for $\sin(\widetilde{\theta}/2)$, while the value for $N=7$ in the range $[0.991,1]$ is $\mu=-6$. The deduced numerical value is highly dependent on the range used in the fitting procedure, e.g., for $N=8$ and the range $[0.6,1]$ we obtain $\mu=-3.23$ (for linear scale of $\widetilde{\theta}$).
We established that the asymptotic long-distance behavior of the edge Green's function corresponding to the root state coincides with Wen's conjecture \cite{Wen}. \subsection{Beyond the LLL} The aforementioned behavior also holds beyond the LLL, which forms the focus of our work. Indeed, repeating the above calculation using the DNA \cite{chen17,BCTONS} of Jain's $2/5$ state, we found $\mu=-3$, in agreement with Wen's hypothesis \cite{Wen}. In this Jain-state example, our computation captures the (EPP) entanglement structure of the root partition \cite{BCTONS} not present in Laughlin states. In this case we need the exact form of the following orbitals: \begin{equation} \psi_{0,r}(z)=\frac{z^r}{\mathcal{N}_{0,r}}, \qquad \psi_{1,r}(z)=\frac{{z}^*z^{r+1}-2(r+1)z^r}{\mathcal{N}_{1,r}}, \end{equation} with $\mathcal{N}_{0,r}=\sqrt{2\pi 2^r r!}$ and $\mathcal{N}_{1,r}=\sqrt{2\pi 2^{r+2}(r+1)!}$. The fermionic field operator is now $\Psi(z)=\sum\limits_{n,r}\psi_{n,r}(z)\overline{c}_{n,r}$, which leads to a Green's function of the form: \begin{equation} \begin{split} \rho(z,z')=\sum\limits_{n,n'}\sum\limits_{r,r'}&\psi_{n,r}^\ast(z)\psi_{n',r'}(z')\frac{\langle \psi |\overline{c}^\dagger_{n,r}\overline{c}_{n',r'} |\psi\rangle}{\|\psi\|^2}\\ &\times e^{-\frac{1}{4}(|z|^2+|z'|^2)}, \end{split} \end{equation} where $|\psi\rangle$ is the corresponding ground state.
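The quoted normalization constants, as well as the orthogonality of $\psi_{0,r}$ and $\psi_{1,r}$ at equal angular momentum, can be verified by quadrature. A sketch (Python/NumPy; the Gaussian is kept in the integration measure here, and $r=3$ is an arbitrary sample value):

```python
import numpy as np
from math import factorial, pi, sqrt

r = 3                                    # sample angular momentum
x = np.linspace(-12, 12, 600)
X, Y = np.meshgrid(x, x)
Z = X + 1j*Y
dA = (x[1] - x[0])**2
g = np.exp(-np.abs(Z)**2 / 2)            # Gaussian kept in the measure

psi0r = Z**r / sqrt(2*pi*2**r*factorial(r))
psi1r = (np.conj(Z)*Z**(r+1) - 2*(r+1)*Z**r) / sqrt(2*pi*2**(r+2)*factorial(r+1))

norm0 = np.sum(np.abs(psi0r)**2 * g) * dA            # -> 1
norm1 = np.sum(np.abs(psi1r)**2 * g) * dA            # -> 1
ortho = abs(np.sum(np.conj(psi0r)*psi1r * g) * dA)   # -> 0
print(norm0, norm1, ortho)
```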
By angular momentum conservation, $r=r'$ in the above summation, so that \begin{equation} \begin{split} \rho(z,z')=\sum\limits_{n,n'}\sum\limits_{r}&\psi_{n,r}^\ast(z)\psi_{n',r}(z')\frac{\langle \psi |\overline{c}^\dagger_{n,r}\overline{c}_{n',r} |\psi\rangle}{\|\psi\|^2}\\ &\times e^{-\frac{1}{4}(|z|^2+|z'|^2)}. \end{split} \end{equation} For the ``DNA'' of the ground state $|\psi\rangle$ we get \begin{equation} \begin{split} \widetilde{\rho}(z,z')=\sum\limits_{n,n'}\sum\limits_{r}&\psi_{n,r}^\ast(z)\psi_{n',r}(z')\frac{\langle \mathrm{DNA} |\overline{c}^\dagger_{n,r}\overline{c}_{n',r} |\mathrm{DNA}\rangle}{\|\mathrm{DNA}\|^2}\\ &\times e^{-\frac{1}{4}(|z|^2+|z'|^2)}. \end{split} \end{equation} The ``DNA'' of the ground state of Jain's $2/5$ state \cite{BCTONS,chen17} is of the form $|\mathrm{DNA}\rangle = \prod\limits_{k\ge 0} \widehat{\varphi_{5k+3}}|0\rangle$ with \begin{equation} \begin{split} \widehat{\varphi_r}=\alpha_{0,0}(r)&\left(\overline{c}_{0,r-1}^\dagger \overline{c}_{0,r+1}^\dagger +\frac{\sqrt{r+2}}{2}\overline{c}_{0,r-1}^\dagger \overline{c}_{1,r+1}^\dagger\right.\\ &\left.-\frac{\sqrt{r}}{2}\overline{c}_{1,r-1}^\dagger \overline{c}_{0,r+1}^\dagger\right), \end{split} \end{equation} where $\alpha_{0,0}$ is an $r$-dependent factor. As a result, \begin{equation} \label{rholl} \begin{split} &\widetilde{\rho}(z,z')=\frac{e^{-\frac{1}{4}(|z|^2+|z'|^2)}}{2\pi}\\ &\times\left\{ \sum\limits_{0\le 5l+2\le r_{\mathrm{max}}}\frac{({z}^*z')^{5l+2}}{2^{5l+2}h_l(5l+2)!}\left[1+(5l+5)^2\right.\right.\\ &\left.\left.+ \frac{2(5l+3)-|z'|^2}{4} + (2(5l+3)-|z|^2)(5 l +3)\right]\right.\\ &\left.+\sum\limits_{0\le 5l+4\le r_{\mathrm{max}}}\frac{({z}^*z')^{5l+4}}{2^{5l+4}h_l(5l+4)!}\left[1+(5l+3)^2\right.\right.\\ &\left.\left.+ \frac{|z'|^2-2(5l+5)}{4} + (|z|^2-2(5l+5))(5 l +5)\right] \right\}, \end{split} \end{equation} where $h_l=1+(5l+3)^2+(5l+5)^2$.
Henceforth, we will focus on points $z=R e^{{\sf i} \theta}$ and $z'=R e^{{\sf i} \theta'}$ that lie on the edge. We next discuss the two contributions to $\widetilde{\rho}(z,z')$ in Eq. (\ref{rholl}). We start by discussing the contribution to $\widetilde{\rho}(z,z')$ of exponent $5l+2$. With the above polar substitution for the boundary points $z$ and $z'$, this becomes \begin{equation} \begin{split} &\frac{e^{-\frac{1}{2}R^2}}{2\pi}\sum\limits_{k=1}^{\mathcal{N}}\left(\frac{R^2}{2}\right)^{5(\mathcal{N}-k)+2}\frac{ e^{-{\sf i}(5(\mathcal{N}-k)+2)(\theta-\theta')} }{h_{\mathcal{N}-k}\left(5(\mathcal{N}-k)+2\right)!}\\ &\times \left[1+ \left(5(\mathcal{N}-k)+5\right)^2\right. \\ &\left.+\frac{2\left(5(\mathcal{N}-k)+3\right)-R^2}{4}\cdot (20(\mathcal{N}-k)+13)\right] , \end{split} \end{equation} where $\mathcal{N}=\lfloor \frac{1}{5}\left(\frac{N-1}{\nu}-2\right)\rfloor+1$ with $\nu=\frac{2}{5}$, and we have used the same change of summation index as in the case of the LLL. Assume now that only small integers $k$ are relevant in the above summation; we will check the validity of this assumption later. Then, using the Stirling approximation, the fact that $N\cong \frac{R^2 \nu}{2} \gg 1$ and $\mathcal{N}-k\cong \mathcal{N}$, we get \begin{equation} \left[5(\mathcal{N}-k)+2\right]!\cong \sqrt{\pi} R \left(\frac{R^2}{2}\right)^{5(\mathcal{N}-k)+2} e^{-\frac{R^2}{2}}, \end{equation} and, as a result, for the part with the exponent $5l+2$, we end up with \begin{equation} \frac{e^{-{\sf i} 5\mathcal{N}(\theta -\theta')}}{2R \pi^{3/2}}\kappa_2(\mathcal{N},R)\sum\limits_{k=1}^{\mathcal{N}} e^{{\sf i}(5k-2)(\theta -\theta')}, \end{equation} where $\kappa_2(\mathcal{N},R)$ is a certain rational function in $\mathcal{N}$.
Similarly to the above, for the part having $5l+4$ as an exponent, we get \begin{equation} \frac{e^{-{\sf i} 5\widehat{\mathcal{N}}(\theta-\theta')}}{2R \pi^{3/2}}\kappa_4(\widehat{\mathcal{N}},R) \sum\limits_{k=1}^{\widehat{\mathcal{N}}} e^{{\sf i}(5k-4)(\theta-\theta')} \end{equation} with a rational, in $\widehat{\mathcal{N}}=\lfloor\frac{1}{5}\left(\frac{N-1}{\nu}-4\right)\rfloor+1$, function $\kappa_4(\widehat{\mathcal{N}},R)$. Next, we observe that for large radius $R$ we can, without loss of generality, assume that $\widehat{\mathcal{N}}=\mathcal{N}$, so that the factors $e^{-{\sf i} 5\widehat{\mathcal{N}}(\theta-\theta')}$ lead to an irrelevant global phase, since at the very end we will be interested in the absolute value of the Green's function. We now argue that in the thermodynamic limit, $\frac{\kappa_2}{\kappa_4}\rightarrow 1$. Indeed, since $R^2\sim \frac{2N}{\nu}$ and $\nu=\frac{2}{5}$, we have $R^2\sim 5N$. Moreover, we know that $\mathcal{N}\sim \frac{N}{5\nu}\sim \frac{N}{2}$. Hence $R^2\sim 10\mathcal{N}$. This shows that $\frac{\kappa_2}{\kappa_4}\rightarrow 1$. Furthermore, this also shows that, in the limit $\mathcal{N}\rightarrow \infty$, we have for $\widetilde{\rho}$: \begin{equation} \frac{e^{-{\sf i} 5\mathcal{N}(\theta -\theta')}}{4R \pi^{3/2}}\left( \sum\limits_{k=1}^{\infty} e^{{\sf i}(5k-2)(\theta-\theta')}+\sum\limits_{k=1}^\infty e^{{\sf i}(5k-4)(\theta-\theta')}\right), \end{equation} since both $\kappa_2$ and $\kappa_4$ tend to $\frac{1}{2}$ in this limit. We next explain why the assumption $\mathcal{N}-k\cong \mathcal{N}$ is valid. Towards this end, one needs to verify that the only $k$ values that matter are the small ones, i.e., that $\cos((5k-2)(\theta-\theta'))\cong(-1)^k$. This is indeed true (in particular around $\theta-\theta'=\pi$, which is exactly our point of interest). Analogous considerations apply also to the term of exponent $5l+4$.
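In this limit the two sums are geometric series. Regularizing them with a damping factor $r\rightarrow 1^-$ (Abel summation), their combined modulus tends to $|\cos(\theta-\theta')/\sin[5(\theta-\theta')/2]|$, which is the closed form used in the next step. A quick numerical check (the angle and damping factor below are illustrative):

```python
import cmath
import math

def damped_sum(phi, r, kmax=50_000):
    """Abel-regularized sum_{k>=1} r^{5k} [e^{i(5k-2)phi} + e^{i(5k-4)phi}]."""
    s = 0j
    for k in range(1, kmax + 1):
        s += (r ** (5 * k)) * (cmath.exp(1j * (5 * k - 2) * phi)
                               + cmath.exp(1j * (5 * k - 4) * phi))
    return s

def closed_form_abs(phi):
    # modulus of the resummed series: |cos(phi) / sin(5 phi / 2)|
    return abs(math.cos(phi) / math.sin(2.5 * phi))

phi, r = 2.0, 0.9999  # illustrative angle (away from zeros of sin(5 phi/2))
print(abs(damped_sum(phi, r)), closed_form_abs(phi))
```

As $r\rightarrow 1^-$ the damped partial sum approaches the closed form wherever $\sin(5\phi/2)\neq 0$.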
Therefore, the problem reduces to the evaluation of \begin{equation} \begin{split} &\frac{1}{4R \pi^{3/2}}\left|\sum\limits_{k=1}^\infty e^{{\sf i}(5k-2)(\theta-\theta')}+e^{{\sf i}(5k-4)(\theta-\theta')}\right|\\ &=\frac{1}{4R \pi^{3/2}}\left| \frac{{\sf i} e^{{\sf i} \frac{3}{2}(\theta-\theta')} \cos(\theta-\theta')}{\sin\left(\frac{5 (\theta-\theta')}{2}\right)} \right|. \label{eq:0} \end{split} \end{equation} To ascertain the long-distance behavior, we examine angular differences $|\theta-\theta'|=\tilde \theta$ close to $\pi$, where this asymptotically becomes \begin{equation} \begin{split} \frac{1}{4R \pi^{3/2}}\left|\frac{1-2\sin^2\left(\frac{\widetilde{\theta}}{2}\right)}{\sin\left(\frac{5\widetilde{\theta}}{2}\right)}\right|\cong \frac{1}{2R \pi^{3/2}}\frac{1}{\left|\sin\left(\frac{\widetilde{\theta}}{2}\right)\right|^3}. \end{split} \end{equation} The result derived above is in agreement with Wen's conjecture \cite{Wen} for Jain's $2/5$ FQH liquid. \section{Conclusions} Our approach sheds light on the elusive exact mechanism underlying fractionalized quasielectron excitations in FQH fluids (and formalizes the fractionalization of quasihole excitations \cite{Girvin84}). By solving an outstanding open problem \cite{Hansson,Nielsen}, our construct underscores the importance of a systematic operator-based microscopic approach complementing Laughlin's original quasiparticle wave function Ansatz. The algebraic structure of the LLL is deeply tied to the Newton-Girard relations. We have shown that there are numerous pairs of ``dual'' operators that are linked to each other via these relations (including operators associated with the Witt algebra). The Newton-Girard relations typically convert a local operator into a non-local ``dual'' operator. A main message of the present work is that ``derivative operations'' on FQH vacua do not lead to exact quasiparticle excitations.
The precise mechanism leading to charge fractionalization consists of the joint process of flux and (original) particle insertions. In other words, an elementary fusion channel of quasiholes and an electron generates a quasielectron excitation. For instance, to generate one quasielectron excitation in a $\nu=1/M$ Laughlin fluid one needs to insert $M-1$ fluxes, in an ($N-1$)-electron fluid, and fuse them with an additional electron. A fundamental difference between quasihole and quasiparticle excitations can be traced back to their $M$-clustering properties \cite{BCTONS}. While quasiholes preserve the $M$-clustering property of the incompressible (ground state) fluid, quasiparticle states break it. This is at the origin of the asymmetry between these two kinds of excitations. Equivalently, while quasihole wave functions sustain a (local) plasma analogy, this is not the case for quasielectrons. We explicitly {\it constructed the quasiparticle (quasielectron) wave function}. The fusion mechanism of quasiparticle generation that we found is not only a mathematically exact (for an arbitrary number of particles) field-theoretic operator procedure, but also underlies the exact analytic computation of other quasiparticle properties, such as the charge density and Berry connections, leading to the correct fractional charge and exchange statistics. This is a remarkable result, which we have numerically confirmed via detailed Monte Carlo simulations. Intriguingly, within our field-theoretic framework, we find that the Laughlin state is a condensate of a non-local Read-type operator. Our approach allows for a constructive (zero-energy) subspace bosonization of the full two-dimensional system that further evinces the non-local topological character of the problem and, once again, cements links to Read's operator.
The constructed Klein operator associated with this angular-momentum (and flux-counting) root-state-based bosonization scheme is none other than Read's non-local operator. We suspect that this angular-momentum (flux-counting) based mapping might relate to real-space flux attachment (and attendant Chern-Simons) type bosonization schemes \cite{fradkin,SSWW}. Lastly, we illustrated how the long-distance behavior of edge excitations associated with the root partition component (DNA) of the {\it bulk} FQH ground state may be readily calculated. Strikingly, the asymptotic long-distance edge physics derived in this manner matches Wen's earlier hypothesis in the cases that we tested. This agreement hints at a possibly general and powerful {\it computational recipe for predicting edge physics}. \section{Acknowledgements} We thank J. Jain, H. Hansson, and S. Simon for useful comments. G.O. acknowledges support from the US Department of Energy grant DE-SC0020343. A.B. acknowledges the Polish-US Fulbright Commission for the possibility of visiting Indiana University, Bloomington, during the Fulbright Junior Research Award scholarship. Work by A.S. has been supported by the National Science Foundation Grant No. DMR-2029401.
\section{Introduction} When a massive particle falls into a black hole, it induces changes in the spacetime metric that, although considered as an ``ordinary'' perturbation, can produce a phenomenology hardly comparable to what it would be in a flat background. This fact, even if dealing with elementary concepts of General Relativity, found its first serious applications in a celebrated paper of 1957 by T. Regge and J. A. Wheeler \cite{rw} that in a sense founded the theory of the stability of the Schwarzschild black hole. However, due to hard complications in the calculations, this investigation came to a first end, after a number of intermediate stages (in particular \cite{mathews, edelvish, zerilli1, zerilli2}), only in 1970-1974 with the work of F. J. Zerilli, who, in collaboration with Wheeler and R. Ruffini \cite{jorufzer1, jorufzer2}, attacked the problem of considering in greater detail the perturbation of the e.m. and gravitational fields produced during the infall. In \cite{zerilli3}, in particular, a realistic treatment is made, considering the first-order stress-energy tensor contributions of the perturbing particle, but the ideas lying at the base of the angular decoupling of the equations still remain cryptic in some of their aspects; it is now possible, also with the help of quite popular computer algebra tools ({\it Mathematica} by S. Wolfram and the tensor calculus-dedicated package {\it MathTensor} by L. Parker and S. Christensen), to account for some hidden aspects of the logical path followed by the authors and fix some errors.
\section{Formulating the problem} The perturbation analysis is performed by writing the Einstein system (in geometric units: $G=c=1$) as \begin{equation}\label{eins} G_{\mu\nu}(\mathbf{h})=8\,\pi\,T_{\mu\nu}(\mathbf{h}) \end{equation} where $\mathbf{h}$ is the perturbation tensor accounting for the presence of a massive particle, possibly endowed with an electric charge, that acts directly on the spacetime geometry in addition to the Schwarzschild metric tensor: $\,g_{\mu\nu}=g^{\mbox{\bf \tiny S}}_{\mu\nu}+h_{\mu\nu}\,$ where $\,g^{\mbox{\bf \tiny S}}_{\mu\nu}\,$ corresponds to the line element $$\,ds^2=-e^{\nu(r)}dt^2+e^{\lambda(r)}dr^2+ r^2(d\theta^2+\sin^2\theta\,d\phi^2)$$ with $\nu(r)=-\lambda(r)=\ln (1-2\,m/r)$, $m$ being the mass of the BH.\\ Historically speaking, almost all the ideas developed to perform a pure metric perturbation analysis ({\it i.e.} with $T_{\mu\nu}=0$) in the original paper of 1957 were adopted by subsequent works, in particular the ``harmonic syntax'' of the angular terms and the distinction between two different kinds of perturbations, corresponding to different choices among the angular operators that generate the multipole expansion itself. \section{Nature and aims of the angular bases} To write the equations in a smart basis, accounting for the angular properties of a static, spherically symmetric problem (by this fact allowing the separation of the radial parts), a set of harmonic ``objects'' is originally introduced by Regge-Wheeler, through a mechanism of parity splitting and repeated differentiations of the scalar harmonics; those objects are seen to form a useful basis for rank-2 tensors in the Euclidean 3-dim. space and can subsequently be modified to form a basis of the Minkowski spacetime. They are called the ``tensor multipole'' ({\it TM}) basis. Later, Frank J.
Zerilli modified the original choice in two of the ten elements and noticed that the way those tensors are constructed must be consistent with the procedure adopted by Jon Mathews \cite{mathews}, consisting of a chain of external products of elements, starting from the spherical vector basis, aimed at forming a set which, in its spatial Euclidean version, transforms under the irreducible representations of $SO(3)$. This set is called the ``tensor harmonic'' ({\it TH}) basis.\\ The procedure to obtain the {\it TM} basis relies on the combined action of the time and radial projection, ${\bf P_t}$, ${\bf P_r}$, the covariant derivative $\boldsymbol{\nabla}$ and orbital angular momentum $\mathbf{L}=-i\,{\bf r}\times\boldsymbol{\nabla}$ operators on the scalar harmonics; the resulting objects turn out to be linear combinations of the {\it TH}s. Exploring the relationships between bases that originate from external products of unit vectors and those derived from operator compositions, one soon becomes aware of the different characteristics of those sets concerning parity and with respect to scalar products in $\mathcal{E}_c\otimes\mathcal{E}_c$ and in $\mathcal{L}_2(\mathcal{S}^2)$ (respectively, the Euclidean complexified tensor product space and the square-integrable functions' Hilbert space on the unit 2-sphere). This topic was formerly treated in a couple of papers (\cite{dm1, dm2}) that unfortunately contain a number of errors and misprints. Some of those results (those, in particular, about the construction of a new $TH$ basis of $\mathcal{E}_c^3$ with improved features) are revisited and presented in a correct, implementable form in Appendix A.
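Since both constructions ultimately rest on Clebsch-Gordan coupling, the orthonormality of the resulting sets reduces to the unitarity of the Clebsch-Gordan coefficients. This can be spot-checked symbolically; the sketch below (with illustrative quantum numbers) uses {\tt sympy}, an assumption about the available tooling rather than anything employed in the original papers:

```python
from sympy import S, simplify
from sympy.physics.quantum.cg import CG

# Unitarity of the 1 (x) 1 coupling: sum over m1 + m2 = M of
# <1 m1 1 m2|J M><1 m1 1 m2|J' M> equals delta_{J J'}.
def overlap(J, Jp, M):
    total = S.Zero
    for m1 in range(-1, 2):
        m2 = M - m1
        if abs(m2) > 1:
            continue
        total += CG(1, m1, 1, m2, J, M).doit() * CG(1, m1, 1, m2, Jp, M).doit()
    return simplify(total)

print(overlap(0, 0, 0), overlap(2, 2, 0), overlap(2, 0, 0))  # -> 1 1 0
```

The vanishing cross term is what guarantees that objects coupled to different total $J$ are orthogonal.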
\section{Scalar, vector and tensor harmonics in $\mathcal{E}_c^3$} To reach a satisfying definition of rank-2 {\it TH}s, it is natural to hierarchically construct them from simpler objects belonging to the same family: scalar and vector harmonics.\\ The basis of the Hilbert space of the square-integrable complex functions on the unit sphere embedded in $\mathcal{E}^3$, whose elements possess the property of being eigenfunctions of $\mathbf{L}^2$ and $L_z$, is indicated by the double-indexed function $Y_{JM}(\theta,\phi)$ (the scalar spherical harmonics), and the full expression normally adopted for it (with the Legendre polynomials that appear in it made explicit) is: \begin{equation}\label{scalhar} \begin{split} Y_{JM}(\theta,\phi)=&\,(-1)^{M/2}\sqrt{\frac{2\,J+1}{4\pi}}\, \sqrt{\frac{(J-|M|)!}{(J+|M|)!}}\;(\cos^2\theta-1)^{|M|/2}\cdot\\ &\cdot\left\{\frac{\partial^{|M|}}{\partial(\cos\theta)^{|M|}} \left[\frac{1}{2^J\,J!}\, \frac{\partial^J}{\partial(\cos\theta)^J}\,(\cos^2\theta-1)^J \right]\right\}\,e^{i M\phi} \end{split} \end{equation} in which the spherical coordinates are, conventionally, $\theta\in [0,\pi]$ (the polar angle referred to the z-axis) and $\phi\in [0,2\pi)$ (the azimuthal angle referred to the x-axis of a rectangular Cartesian frame).\\ Indeed, it is well known that the practical problem of separating the angular parts in a set of equations that must be reduced to a pure radial form can be treated efficiently once a gauge choice is made that transforms those angular parts into algebraic expressions depending only on spherical harmonics of the same $(J,\,M)$ and their derivatives (usually up to second order in $\theta$).
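One of the identities behind such reductions, namely the recurrence $\frac{\partial Y_{JM}}{\partial\theta}=J\cot\theta\,Y_{JM}-l(J,\,M)\csc\theta\,Y_{J-1,\,M}$ quoted below, is easy to verify numerically by finite differences. A sketch built on scipy's associated Legendre functions (quantum numbers and angles are illustrative; $M\ge 0$ for simplicity):

```python
import math
import numpy as np
from scipy.special import lpmv

def Y(J, M, theta, phi):
    """Scalar spherical harmonic (Condon-Shortley convention, M >= 0)."""
    norm = math.sqrt((2 * J + 1) / (4 * math.pi)
                     * math.factorial(J - M) / math.factorial(J + M))
    return norm * lpmv(M, J, math.cos(theta)) * np.exp(1j * M * phi)

def l_coeff(J, M):
    # l(J, M) = sqrt((2J+1)(J^2 - M^2) / (2J-1))
    return math.sqrt((2 * J + 1) * (J ** 2 - M ** 2) / (2 * J - 1))

J, M = 4, 2              # illustrative quantum numbers
theta, phi = 1.1, 0.7    # illustrative angles
h = 1e-6
# central finite difference for dY/dtheta versus the recurrence
dY = (Y(J, M, theta + h, phi) - Y(J, M, theta - h, phi)) / (2 * h)
rhs = (J / math.tan(theta)) * Y(J, M, theta, phi) \
      - l_coeff(J, M) / math.sin(theta) * Y(J - 1, M, theta, phi)
print(abs(dY - rhs))  # ~ 0
```

The recurrence is insensitive to the Condon-Shortley phase, since $Y_{JM}$ and $Y_{J-1,M}$ share the same order $M$.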
All combinations of those objects can be rewritten in terms of, at most, two different harmonics, in $(J,\,M)$ and $(J-1,\,M)$, making use of well-known recurrence relations between Legendre polynomials of different $J$, leading to the following formulas: \begin{equation}\label{dty} \frac{\partial\,Y_{JM}}{\partial\,\theta}=J\,\cot\theta\,Y_{JM}- l(J,\,M)\,\csc\theta\,Y_{J-1,\,M} \end{equation} \begin{equation} \frac{\partial^2\,Y_{JM}}{\partial\,\theta^2}=\{J^2\,\cot^2\theta- [J\,(J+1)-M^2]\,\csc^2\theta\}\,Y_{JM}+l(J,\,M)\,\cot\theta\, \csc\theta\,Y_{J-1,\,M} \end{equation} with $l(J,\,M)=\sqrt{\frac{(2\,J+1)\,(J^2-M^2)}{2\,J-1}}$.\\[1mm] These two expressions can be combined, once it is noticed that $\frac{\partial^2 Y_{JM}}{\partial\phi^2}=-M^2\,Y_{JM}$, into a second-order partial differential equation satisfied by the $Y_{JM}$: \begin{equation}\label{rel} \frac{\partial^2\,Y_{JM}}{\partial\theta^2}+\csc^2\theta\, \frac{\partial^2\,Y_{JM}}{\partial\phi^2}+\cot\theta\, \frac{\partial\,Y_{JM}}{\partial\theta}+J\,(J+1)\,Y_{JM}=0\ .
\end{equation} This relation will be used later to write Zerilli's $TM$s in a more compact form.\\[3mm] The second step on the way to the {\it TH}s was made by Blatt and Weisskopf \cite{blwei}, who introduced a basis of the space of the complex vector fields on $\mathcal{S}^2$, which is built as: \begin{equation} Y^l_{JM}(\theta,\phi)=\sum_{m,\,n}\langle l,m,1,n\,|\,J,M\rangle \;Y_{lm}(\theta,\phi)\;\mathbf{e}_n \end{equation} where $\mathbf{e}_i$ symbolizes the generic element of the basis of the complexified Euclidean space $\mathcal{E}_c^3$ composed by the simultaneous eigenvectors of the spin operators $\mathbf{S}^2,\ S_z$ with $S=1$: \begin{align} &e_1=-(\hat{x}+i\;\hat{y})/\sqrt{2}\nonumber\\ &e_0=\hat{z}\nonumber\\ &e_{-1}=(\hat{x}-i\;\hat{y})/\sqrt{2} \end{align} and the symbol $\langle\,|\,\rangle$ is the bracketed Dirac notation of a {\it Clebsch-Gordan coefficient}.\\ After this definition, the rank-2 tensor space $\mathcal{E}_c^3\otimes \mathcal{E}_c^3$ can be given a proper basis by considering orthonormalized external products of the $\mathbf{e}_i$; keeping the formalism of \cite{zerilli1}: \begin{equation} t^{\,j}_m=\sum_{\mu=-1}^1\;\langle 1,\mu,1,m-\mu\,|\,j, m \rangle\;\mathbf{e}_{m-\mu}\otimes\mathbf{e}_\mu \end{equation} Now, the {\it harmonic} tensor spherical basis, spanning the space of the finite-dimensional irreducible representations of $SO(3)$, can be defined as: \begin{equation} Y^{\,j\;l}_{\,J\,M}\,(\theta,\,\phi)=\sum_{m=-j}^j \langle l,M-m,j,m\,|\,J,M\rangle\;t^{\,j}_m\;Y_{l\,M\!-m}(\theta,\phi) \end{equation} The orthonormality in $\mathcal{L}_2^2(\mathcal{S}^2)$ of these objects is guaranteed once a scalar product in this tensor space is defined as (the overbar meaning complex conjugation): \begin{equation}\label{norm} (T,S)=\int T:S\;d\Omega\equiv \int_0^{\,2\pi}\hspace{-2mm}\int_0^{\,\pi} \overline{T}^{\;\rho\sigma}S_{\rho\sigma}\,\sin\theta\,d\theta\,d\phi\ .
\end{equation} It is worth noting that such a frame {\it is not} orthogonal in $\mathcal{E}_c^3\otimes \mathcal{E}_c^3$ endowed with the scalar product $(T,S)=T:S$, for fixed values of $J,\,M$. \section{Tensor multipoles} It is not difficult to directly write Zerilli's {\it TM} covariant basis in $\mathcal{M}^4$ once a proper definition of $\mathbf{L}$ acting on scalar functions is adopted: \begin{equation} \mathbf{L}\,f=-i\,\mathbf{r}\wedge\mathbf{\nabla}\,f \stackrel{\mbox{\tiny comp}}{=}-i\,r\,{E_\mu}^\rho\,f_{\,;\,\rho} \end{equation} where ${E_\mu}^\nu$ is a rank-2 component of the well-known Levi-Civita tensor $\epsilon$: \begin{equation} {E_\mu}^\nu=\eta_{\,\mu\rho}\;{\epsilon}^{\,\rho\nu 01}=\left( \begin{array}{cccc} 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&\sin \theta \\ 0&0&-\frac{1}{\sin \theta} &0 \end{array}\right)\ , \end{equation} $\eta_{\,\mu\nu}$ being the covariant Minkowski metric in spherical coordinates. With this specification, the {\it TM}s are defined as follows (the symbol ``${}_{()}$'' means that only the symmetric part of the tensor is considered): \begin{eqnarray} && \\[1mm] && \left. \begin{array}{l} TM_1=({\bf P_t}\circ {\bf P_t})\,Y_{JM}\\[1mm] TM_2=({\bf P_r}\circ {\bf P_r})\,Y_{JM}\\[1mm] TM_3=\sqrt{2}\,(i\,{\bf P_t}\circ {\bf P_r})_{()}\,Y_{JM} \end{array} \right\}\mbox{scalar components}\nonumber\\[2mm] && \left. \begin{array}{l} TM_4=m(J)\,(i\,r\,{\bf P_t}\circ\boldsymbol{\nabla})_{()}\,Y_{JM}\\[1mm] TM_5=m(J)\,(r\,{\bf P_r}\circ\boldsymbol{\nabla})_{()}\,Y_{JM} \end{array} \right\}\mbox{vector electric components}\nonumber\\[2mm] && \left. \begin{array}{l} TM_6=m(J)\,(-i\,{\bf P_t}\circ \mathbf{L})_{()}\,Y_{JM}\\[1mm] TM_7=m(J)\,({\bf P_r}\circ \mathbf{L})_{()}\,Y_{JM} \end{array} \right\}\mbox{vector magnetic components}\nonumber\\[2mm] && \hspace{2.1mm}TM_8=n(J)\left[({\bf P_r}\circ \mathbf{L})_{()}+r\,(\mathbf{L}\circ\boldsymbol{\nabla})_{()}\right]\, Y_{JM}\hspace{5mm}\mbox{tensor magnetic component}\nonumber\\[2mm] && \left.
\begin{array}{l} TM_9=\frac{n(J)}{2}\left[(\mathbf{L}\circ \mathbf{L})_{()}+3\,r\,({\bf P_r}\circ \boldsymbol{\nabla})_{()}+r^2\,\boldsymbol{\nabla}\circ \boldsymbol{\nabla}\right]\,Y_{JM}\\[1mm] TM_{10}=\frac{m^2(J)}{2\,\sqrt{2}}\left[(\mathbf{L}\circ \mathbf{L})_{()}-r\,({\bf P_r}\circ \boldsymbol{\nabla})_{()}-r^2\,\boldsymbol{\nabla}\circ \boldsymbol{\nabla}\right]\,Y_{JM}\nonumber \end{array} \right\}\mbox{tensor electric components} \end{eqnarray} where $m(J)=\sqrt{\frac{2}{J\,(J+1)}}$, $n(J)=\sqrt{\frac{2}{(J-1)\,J\,(J+1)\,(J+2)}}$.\\ The explicit matrix form, with rows and columns labelled as $(t,\,r,\,\theta,\,\phi)$, of these objects, with the help of (\ref{rel}), can be cast as:\small $$ \begin{array}{c} TM_1=\left(\begin{array}{cccc} Y_{JM}&0&0&0\vspace{2mm}\\ 0&0&0&0\vspace{2mm}\\ 0&0&0&0\vspace{2mm}\\ 0&0&0&0 \end{array}\right) TM_2=\left(\begin{array}{cccc} 0&0&0&0\vspace{2mm}\\ 0&Y_{JM}&0&0\vspace{2mm}\\ 0&0&0&0\vspace{2mm}\\ 0&0&0&0 \end{array}\right) TM_3=\left(\begin{array}{cccc} 0&\frac{i\,Y_{JM}}{\sqrt{2}}&0&0 \vspace{2mm}\\ *&0&0&0\vspace{2mm}\\ 0&0&0&0\vspace{2mm}\\ 0&0&0&0 \end{array}\right)\vspace{5mm}\\ TM_4=\left(\begin{array}{cccc} 0&0&i\,r\,U&i\,r\,V\vspace{2mm}\\ 0&0&0&0\vspace{2mm}\\ *&0&0&0\vspace{2mm}\\ *&0&0&0 \end{array}\right)\hspace{2mm} TM_5=\left(\begin{array}{cccc} 0&0&0&0\vspace{2mm}\\ 0&0&r\,U&r\,V \vspace{2mm}\\ 0&*&0&0\vspace{2mm}\\ 0&*&0&0 \end{array}\right)\vspace{5mm}\\ TM_6=\left(\begin{array}{cccc} 0&0&\frac{r}{\sin\theta}\,V &-r\,\sin\theta\,U\vspace{2mm}\\ 0&0&0&0\vspace{2mm}\\ *&0&0&0\vspace{2mm}\\ *&0&0&0 \end{array}\right)\hspace{2mm} TM_7=\left(\begin{array}{cccc} 0&0&0&0\vspace{2mm}\\ 0&0&\frac{i\,r}{\sin\theta}\,V&-i\,r\,\sin\theta\,U\vspace{2mm}\\ 0&*&0&0\vspace{2mm}\\0&*&0&0 \end{array}\right)\vspace{5mm}\\ TM_8=\left(\begin{array}{cccc} 0&0&0&0\vspace{2mm}\\ 0&0&0&0\vspace{2mm}\\0&0&\frac{i\,r^2}{\sin\theta}\,X&-i\, r^2\sin\theta\,W\vspace{2mm}\\0&0&*&-i\,r^2\sin\theta\,X \end{array}\right)\hspace{2mm} 
TM_9=\left(\begin{array}{cccc} 0&0&0&0\vspace{2mm}\\ 0&0&0&0\vspace{2mm}\\0&0&r^2 W&r^2 X\vspace{2mm}\\ 0&0&*&-r^2\sin^2\theta\,W\vspace{2mm} \end{array}\right) \end{array}$$ $$ \begin{array}{c} TM_{10}=\left(\begin{array}{cccc} 0&0&0&0\vspace{2mm}\\ 0&0&0&0\vspace{2mm}\\ 0&0&\frac{r^2 Y_{JM}}{\sqrt{2}}&0\vspace{2mm}\\ 0&0&0&\frac{r^2\sin^2\theta\,\,Y_{JM}}{\sqrt{2}}\vspace{2mm} \end{array}\right) \hspace{4mm}\mbox{with}\hspace{0.8mm}\left\{\begin{array}{l} U=\frac{m(J)}{2}\,\frac{\partial\,Y_{JM}}{\partial \theta}\\[2.5mm] V=\frac{m(J)}{2}\,\frac{\partial\,Y_{JM}}{\partial \phi}\\[2.5mm] X=n(J)\left[\frac{\partial}{\partial\theta}- \cot\theta\right]\frac{\partial\,Y_{JM}}{\partial \phi}\\[2mm] W=n(J)\left[\frac{\partial^2}{\partial\theta^2}+\frac{J\, (J+1)}{2}\right]Y_{JM} \end{array}\right. \end{array} $$\normalsize\\[2mm] where ``$\,*\,$'' denotes a symmetric component of a tensor.\\ From a geometric point of view, the {\it TM}s can be classified as: \begin{itemize} \item Space longitudinal ({\it i.e.} orthogonal to the unit 2-sphere) elements:\\$TM_2,\,TM_5,\,TM_7$ (in Zerilli's notation ``$a$'', ``$b$'', ``$c$''); \item Space transverse ({\it i.e.} tangent to the unit 2-sphere) elements:\\ $TM_8,\,TM_9,\,TM_{10}$ (``$d$'',``$f$'',``$g$''); \item Time addition elements:\\$TM_1,\,TM_3,\,TM_4,\,TM_6$ (``$a_0$'',``$a_1$'',``$b_0$'',``$c_0$'').
\end{itemize} Since the ideas underlying the formation of this set are mainly based on the {\it Euclidean} transformation properties under {\bf L} and $\boldsymbol{\nabla}$, orthonormalization problems were expected to arise for the components involved in the $\mathcal{E}_c^3\rightarrow\mathcal{M}_c^4$ extension: indeed, it is straightforward to see that the last three multipoles cited ($TM_{3,\,4,\,6}$) have norm (defined by (\ref{norm})) equal to $-1$.\\ \subsection{Regge-Wheeler perturbation tensors} In the {\it TM} basis, the well-known Regge-Wheeler perturbation tensors \cite{rw} ${\bf h}^m,\;{\bf h}^e$, which split (\ref{eins}) into {\it magnetic} and {\it electric} parts, totally decoupled from each other, can be written as: \begin{eqnarray} && \\ && {\bf h}^m = \frac{2}{m(J)\,r}\,\big[-h_0(t,r)\,TM_6+i\,h_1(t,r) \,TM_7\big]-\frac{i}{n(J)\,r^2}\,h_2(t,r)\,TM_8 \nonumber\\[3mm] && {\bf h}^e = e^{\nu(r)}H_0(t,r)\,TM_1+e^{\lambda(r)}H_2(t,r)\,TM_2-i\, \sqrt2\,H_1(t,r)\,TM_3 \nonumber\\ && \hspace{8mm}+\frac{2}{m(J)\,r}\,\left[-i\,h_0(t,r)\, TM_4+h_1(t,r)\,TM_5\right] \nonumber\\ && \hspace{8mm}+\frac{1}{n(J)}\,G(t,r)\,TM_9+\sqrt{2}\,\left[K(t,r)-\frac{1}{m^2(J)}\, G(t,r)\right]\, TM_{10} \nonumber \end{eqnarray} and, once the celebrated gauge choices ($h_2=0$ for ${\bf h}^m$ and $h_0=h_1=G=0$ for ${\bf h}^e$) that share the same names are applied, they reduce to: \begin{eqnarray} && \\ && {\bf h}^m = \frac{2}{m(J)\,r}\,\big[-h_0(t,r)\,TM_6+i\,h_1(t,r) \,TM_7\big] \nonumber\\[3mm] && {\bf h}^e = e^{\nu(r)}H_0(t,r)\,TM_1+e^{\lambda(r)}H_2(t,r)\,TM_2+ \sqrt2\,\left[-i\,H_1(t,r)\,TM_3+K(t,r)\,TM_{10}\right] \nonumber \end{eqnarray} making evident that they belong to different classes of transformations of $SO(3)$.\\It is easier to explain their decoupling in those terms than by invoking an ``opposite parity'' feature, which is hardly recognizable once we refer to the eigenvalues (if they exist) of the operator $P$ that acts on a function defined on $\mathcal{S}^2$ as
$P(f(\theta,\,\phi)) =f(\pi-\theta,\,\phi+\pi)$: in the case of ${\bf h}^e$, it is manifestly $P({\bf h}^e_{\mu\nu})=(-1)^J\,{\bf h}^e_{\mu\nu}$ for any choice of $\mu,\,\nu$, since every component is proportional to $Y_{JM}$, while ${\bf h}^m$ has no definite parity, since it is immediately seen from (\ref{dty}) that $P(\partial_\theta Y_{JM})=(-1)^{J+1}\,\partial_\theta Y_{JM}$, while $\partial_\phi Y_{JM}=i\,M\,Y_{JM}$ so that $P(\partial_\phi Y_{JM})=(-1)^J\,\partial_\phi Y_{JM}$.
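The parity statements above are also easy to confirm numerically: under $(\theta,\,\phi)\rightarrow(\pi-\theta,\,\phi+\pi)$ the harmonic $Y_{JM}$ picks up $(-1)^J$, while $\partial_\theta Y_{JM}$ picks up $(-1)^{J+1}$. A small scipy-based sketch (quantum numbers and angles are illustrative; $M\ge 0$ for simplicity):

```python
import math
import numpy as np
from scipy.special import lpmv

def Y(J, M, theta, phi):
    """Scalar spherical harmonic (Condon-Shortley convention, M >= 0)."""
    norm = math.sqrt((2 * J + 1) / (4 * math.pi)
                     * math.factorial(J - M) / math.factorial(J + M))
    return norm * lpmv(M, J, math.cos(theta)) * np.exp(1j * M * phi)

def parity_map(f, theta, phi):
    # P(f)(theta, phi) = f(pi - theta, phi + pi)
    return f(math.pi - theta, phi + math.pi)

J, M = 3, 1                  # illustrative quantum numbers
theta, phi, h = 0.9, 0.4, 1e-6

# parity (-1)^J of Y_JM itself
print(abs(parity_map(lambda t, p: Y(J, M, t, p), theta, phi)
          - (-1) ** J * Y(J, M, theta, phi)))          # ~ 0

# parity (-1)^(J+1) of its theta-derivative (finite difference)
def dY(t, p):
    return (Y(J, M, t + h, p) - Y(J, M, t - h, p)) / (2 * h)

print(abs(parity_map(dY, theta, phi) - (-1) ** (J + 1) * dY(theta, phi)))  # ~ 0
```

Both residuals vanish to machine/finite-difference precision, consistent with the argument in the text.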
\section{Supplemental Material for ``Coupled superconducting spin qubits with spin-orbit interaction''} \author{Maria Spethmann} \affiliation {Department of Physics, University of Basel, 4056 Basel, Switzerland} \author{Xian-Peng Zhang} \thanks{These two authors contributed equally} \affiliation {Department of Physics, University of Basel, 4056 Basel, Switzerland} \author{Jelena Klinovaja} \affiliation {Department of Physics, University of Basel, 4056 Basel, Switzerland} \author{Daniel Loss} \affiliation {Department of Physics, University of Basel, 4056 Basel, Switzerland} \subsection{Details of the perturbation theory} In this section we show the details of the perturbation theory that leads to the spin interaction [see Eq.~\eqref{spinexchangecouplingsmv1} of the main text]. Our Hamiltonian is, as a reminder, % \begin{align} H_D&=\sum_{n s}\epsilon\, d_{n s}^{\dagger}d_{n s}+\sum_{n}C_d\, d^{\dagger}_{n\uparrow}d_{n\uparrow}d^{\dagger}_{n\downarrow}d_{n\downarrow}\\\nonumber H_{L}&=\sum_{j\vec{k}}\left[ \sum_{ s}\xi_k c_{j\vec{k} s}^{\dagger}c_{j\vec{k} s}-\left(\Delta_{j} c_{j\vec{k}\uparrow}^{\dagger}c_{j\,\shortminus\vec{k} \downarrow}^{\dagger} + H.c.\right)\right]\\\nonumber H_T&=\sum_{jn\vec{k}}\begin{psmallmatrix} c_{j\vec{k}\uparrow}^{\dagger}&c_{j\vec{k}\downarrow}^{\dagger}\end{psmallmatrix} t_{jn} \,U_{jn}\begin{psmallmatrix}d_{n\uparrow}\\d_{n\downarrow}\end{psmallmatrix}+H.c. \end{align} Here, $d_{n s}^{\dagger}$ creates an electron on dot $n\in\{1,2\}$ with spin $s\in\{\uparrow,\downarrow\}$ and energy $\epsilon<0$. The Coulomb repulsion is given by $C_d$ ($C_d\gg\Delta, |\epsilon|$). An electron is created by $c_{j\vec{k} s}^{\dagger}$ at the superconducting lead $j\in\{L,R\}$ with spin $s$, momentum $\vec{k}$, and energy $\xi_k$. % The pairing potential on lead $j$ is $\Delta_j=\Delta e^{-i\varphi_j}$ and $t_{jn}$ describes the tunnel amplitude between dot $n$ and lead $j$. The unitary matrix $U_{jn} \in SU(2)$ describes a rotation in spin space.
First, we perform three subsequent basis transformations, which do not change the Hamiltonian of the dot or superconductors because they are assumed to be rotationally invariant: \begin{align} \label{basistransformationsSM} \begin{psmallmatrix} \Tilde{c}_{R\vec{k}\uparrow}^{} \\ \Tilde{c}_{R\vec{k}\downarrow}^{} \end{psmallmatrix}&= U^{\dagger}_{R1}\begin{psmallmatrix} c_{R\vec{k}\uparrow}^{} \\ c_{R\vec{k}\downarrow}^{} \end{psmallmatrix},\nonumber\\ \begin{psmallmatrix}\Tilde{d}_{2\uparrow}\\ \Tilde{d}_{2\downarrow}\end{psmallmatrix}&= U_{R1}^{\dagger}U^{}_{R2} \begin{psmallmatrix}d_{2\uparrow}\\ d_{2\downarrow}\end{psmallmatrix},\\ \begin{psmallmatrix} \Tilde{c}_{L\vec{k}\uparrow}^{} \\ \Tilde{c}_{L\vec{k}\downarrow}^{} \end{psmallmatrix} &=U^{\dagger}_{R1} U_{R2}^{} U^{\dagger}_{L2} \begin{psmallmatrix} c_{L\vec{k}\uparrow}^{} \\ c_{L\vec{k}\downarrow}^{} \end{psmallmatrix}. \nonumber \end{align} We also diagonalize the Hamiltonian of the superconductors via a Bogoliubov transformation ($\Tilde{c}_{j\vec{k} s}=u_k\gamma_{j\vec{k} s}+ s v_{jk}\gamma_{j\shortminus\vec{k}\shortminus s}^{\dagger}$ with $u_k=\sqrt{\frac{E_k+\xi_k}{2E_k}}$, $v_{jk}=\sqrt{\frac{E_k-\xi_k}{2E_k}}e^{-i\varphi_j}$, $E_k=\sqrt{({\xi_k})^2+\Delta^2}\,$), \begin{align} H_{L}=\sum_{j\vec{k}\sigma} E_k \gamma_{j\vec{k}\sigma}^{\dagger}\gamma_{j\vec{k}\sigma}. \end{align} Excitations from the superconducting ground state are described by Bogoliubov quasiparticles ($\gamma_{j\vec{k} s}^{\dagger}$). The tunnel Hamiltonian becomes (with $\Tilde{d}_{1\sigma}=d_{1\sigma}$) \begin{align} H_T=\sum_{\substack{jn\vec{k}s\\(j,n)\neq(L,1)}}\left[t_{jn}(u_k\gamma_{j\vec{k} s}^{\dagger} + s v_{jk}^*\gamma_{j\shortminus\vec{k}\shortminus s}) \Tilde{d}_{n s} + H.c.\right] + \sum_{\vec{k} s s'}\left[t_{L1}(u_k\gamma_{L\vec{k} s}^{\dagger}+s v_{Lk}^*\gamma_{L\shortminus\vec{k}\shortminus s})U_{s s'}\Tilde{d}_{1 s'} + H.c.\right].
\end{align} We see that the total unitary basis transformation $U=U_{R1}^{\dagger}U_{R2}U_{L2}^{\dagger}U_{L1}$ now only acts on the spins of electrons that tunnel between the left superconductor and dot 1. We now want to calculate the spin exchange interaction between the dots mediated by the Cooper pairs in the superconductors. For this, we evaluate the fourth-order perturbation theory contribution in the weak coupling limit $t_{jn},\Gamma_{j}\ll \Delta,|\epsilon|$ \cite{Choi2000,probst2016signatures}, with $\Gamma_{j}=\pi\rho_Ft_{j1}t_{j2}$ and $\rho_F$ being the normal density of states per spin of the leads at the Fermi energy: \begin{align}\label{eq:pertubation_theory} H_{\text{eff}}=P H_T \left(\frac{1-P}{E_0-H_0}H_T\right)^3 P. \end{align} Here, the unperturbed Hamiltonian is $H_0=H_L+H_D$ with the ground state energy $E_0$. The operator $P$ projects to the spin-1/2 low-energy subspace $\{|s_1,s_2;0\rangle\}_{s_1 s_2}$, where both dots are occupied with one electron with spins $s_1$ and $s_2$ (in the $d_{1 s}$, $\Tilde{d}_{2 s}$-basis) and both superconductors are in their ground state. In other words, we evaluate $ \langle s'_1,s'_2;0|H_{\text{eff}}|s_1,s_2;0\rangle$ by summing up virtual tunneling paths. We take the perturbation theory to fourth order because it is the smallest order at which spin interaction between the two quantum dots is possible. The first $H_T$ will always destroy one electron on one of the dots (because in the $C_d\rightarrow\infty$ limit we do not allow double occupancy) and create one Bogoliubov quasiparticle in the superconductors. Thus, the first virtual intermediate state will be $|s_1,0;\gamma_{j\vec{k} s}\rangle$ or $|0,s_2;\gamma_{j\vec{k}s}\rangle$. Next, in the limit $\Delta\gg |\epsilon|$, the electron on the other dot will tunnel to the superconductor and destroy the Bogoliubov particle again, in total creating a Cooper pair. The second virtual intermediate state is then $|0,0;0\rangle$.
This process will dominate as it costs more energy to create a second Bogoliubov quasiparticle than to remove an electron from the dot. The third virtual intermediate state involves a Bogoliubov quasiparticle and an electron, similar to the first virtual intermediate state, and after the fourth tunneling we get back to our ground state. For virtual paths that involve the right superconductor only [see Fig.~\ref{pic01}(d)] one can permute the order of the first two tunneling events and of the third and fourth tunneling events, giving a total of four virtual tunneling paths. It turns out that each of them gives the same contribution to $ \langle s'_1,s'_2;0|H_{\text{eff}}|s_1,s_2;0\rangle$, namely \begin{align} t_{R1}^2t_{R2}^2\sum_{\vec{k},\vec{q}}\frac{u_kv^*_{Rk}u_qv_{Rq}}{(E_k-\epsilon)2\epsilon(E_q-\epsilon)} s_1 s_1'\delta_{s_1,-s_2}\delta_{s_1',-s_2'}. \end{align} We evaluate \begin{align} \sum_{\vec{k},\vec{q}}\textstyle\frac{|u_k v_{Rk} u_q v_{Rq}|}{(E_k-\epsilon)2\epsilon(E_q-\epsilon)}&={\displaystyle\frac{1}{8\epsilon}}\Bigg(\sum\limits_{\vec{k}}\frac{1}{\sqrt{\left(\frac{\xi_k}{\Delta}\right)^2+1}\left(\sqrt{\left(\frac{\xi_k}{\Delta}\right)^2+1}-\frac{\epsilon}{\Delta}\right)\Delta }\Bigg)^2 = {\displaystyle\frac{1}{8\epsilon}}\Bigg[\int\limits_{-\Lambda}^{\Lambda}\frac{\rho(\xi)}{ \sqrt{\left(\frac{\xi}{\Delta}\right)^2+1} \left(\sqrt{\left(\frac{\xi}{\Delta}\right)^2+1} -\frac{\epsilon}{\Delta} \right)\Delta} \text{d}\xi \Bigg] ^2\\ &\approx \frac{\rho_F^2}{8\epsilon}\Bigg[\int\limits_{-\infty}^{\infty}\frac{\text{d}x}{\sqrt{x^2+1}(\sqrt{x^2+1}-\frac{\epsilon}{\Delta})}\Bigg]^2\overset{\Delta\gg|\epsilon|}{\approx}-\frac{\rho_F^2\pi^2}{8|\epsilon|}.\notag \end{align} Here, $\Lambda$ is some cut-off energy (of the order of the Fermi energy) and $\rho(\xi)$ is the density of states per spin, which we assume to be approximately constant, $\rho(\xi)\approx\rho(0)=\rho_F$, in the region around the Fermi energy, which contributes most to the integral.
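The last approximation is easy to verify directly: as $\epsilon/\Delta\rightarrow 0^-$ the dimensionless integral tends to $\int_{-\infty}^{\infty}\mathrm{d}x/(x^2+1)=\pi$. A quick scipy check (the ratio $\epsilon/\Delta$ below is illustrative, not a value from the text):

```python
import math
from scipy.integrate import quad

def bcs_integral(eps_over_delta):
    """The dimensionless integral over x = xi/Delta from the exchange amplitude."""
    integrand = lambda x: 1.0 / (math.sqrt(x * x + 1.0)
                                 * (math.sqrt(x * x + 1.0) - eps_over_delta))
    val, _ = quad(integrand, -math.inf, math.inf)
    return val

# eps < 0, so eps/Delta is a small negative number when Delta >> |eps|
print(bcs_integral(-0.01), math.pi)
```

The integrand decays as $1/x^2$, so the infinite-range quadrature converges quickly, and the value approaches $\pi$ as the ratio goes to zero.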
Further, we can replace $s_1 s_1'\delta_{s_1,-s_2}\delta_{s_1',-s_2'} \to -2\vec{S}_1\cdot\Tilde{\vec{S}}_2-\frac{1}{2}$, where the $-\frac{1}{2}$ is an irrelevant constant. The operator $\vec{S}_1$ is the spin operator of dot $n=1$ and $\Tilde{\vec{S}}_2$ is the spin operator of dot $n=2$ in the rotated basis [see Eq.~\eqref{basistransformationsSM}]: $\tilde{\mathbf{S}}_2=U_{R2}U^{\dagger}_{R1}\mathbf{S}_2 U_{R1}^{\dagger}U_{R2}$. All virtual paths corresponding to processes shown in Fig.~\ref{pic01}(d) together give \begin{align} \frac{\rho_F^2\pi^2}{|\epsilon|}t_{R1}^2t_{R2}^2\vec{S}_1\cdot\Tilde{\vec{S}}_2. \end{align} For virtual paths involving the left superconductor only, [see Fig.~\ref{pic01}(c)] we can use the result for the right lead and simply rotate the spin operator of the upper dots by $U$, as the `left-lead-paths' and `right-lead-paths' are equivalent up to a basis rotation: \begin{align} \frac{\rho_F^2\pi^2}{|\epsilon|}t_{L1}^2t_{L2}^2(U^{\dagger} \vec{S}_1 U)\cdot\Tilde{\vec{S}}_2. \end{align} For virtual paths, where a Cooper pair transfers from the left to the right superconductor [see Fig.~\ref{pic01}(e)], there exist four permutations of tunneling paths. Each of them contributes with \begin{align}\label{eq:left-to-right-contr} -\frac{\rho_F^2\pi^2}{8|\epsilon|}t_{L1}t_{L2}t_{R1}t_{R2}e^{-i\varphi}(-1)s_1 s_2'\delta_{s_1-s_2}U^*_{\shortminus s_2' s_1'}. 
\end{align} Using that \begin{align} U=\textstyle\cos(\frac{\alpha}{2})+2 i\vec{u}\cdot\vec{S}_1\sin(\frac{\alpha}{2})=\begin{pmatrix}\cos{(\frac{\alpha}{2})}+iu_z\sin{(\frac{\alpha}{2})}&(u_y+iu_x)\sin{(\frac{\alpha}{2})}\\(-u_y+iu_x)\sin{(\frac{\alpha}{2})}&\cos{(\frac{\alpha}{2})}-iu_z\sin{(\frac{\alpha}{2})}\end{pmatrix}, \end{align} we simplify \begin{align} s_1 s_2'\delta_{s_2-s_1}U^*_{\shortminus s_2' s_1'} \to \,\, &\textstyle 2\cos(\frac{\alpha}{2})(\vec{S}_1\cdot\Tilde{\vec{S}}_2-\frac{1}{4}) +u_z\sin(\frac{\alpha}{2})(2S_{1y}\Tilde{S}_{2x}-2S_{1x}\Tilde{S}_{2y}+iS_{1z}-i\Tilde{S}_{2z}) \\\nonumber &+u_y\sin\textstyle(\frac{\alpha}{2})(2S_{1x}\Tilde{S}_{2z}-2S_{1z}\Tilde{S}_{2x}+iS_{1y}-i\Tilde{S}_{2y}) +u_x\sin(\frac{\alpha}{2})(2S_{1z}\Tilde{S}_{2y}-2S_{1y}\Tilde{S}_{2z}+iS_{1x}-i\Tilde{S}_{2x}) . \end{align} When we reverse the tunnel sequence such that Cooper pairs travel from the right to the left superconductor, this adds the hermitian conjugate to the expression above [Eq.~\eqref{eq:left-to-right-contr}]. Multiplying with the permutation factor 4 we get: \begin{align} \frac{2\rho_F^2\pi^2}{|\epsilon|} t_{L1}t_{L2}t_{R1}t_{R2}\Bigg\{\!\cos{\varphi} \Big[\!\cos\textstyle(\frac{\alpha}{2})&\vec{S}_1\!\cdot\Tilde{\vec{S}}_2\!-\!\sin(\textstyle\frac{\alpha}{2})\vec{u}\cdot\!(\vec{S}_1\!\times\!\Tilde{\vec{S}}_2)\Big] +\frac{1}{2}\sin(\varphi)\sin(\textstyle\frac{\alpha}{2})\vec{u}\cdot\!(\vec{S}_1\!-\!\Tilde{\vec{S}}_2)\Bigg\}+g(\varphi),\\\nonumber &g(\varphi)=-\frac{\rho_F^2\pi^2}{2|\epsilon|}t_{L1}t_{L2}t_{R1}t_{R2}\cos{\varphi}\cos{\left(\frac{\alpha}{2}\right)}. 
\end{align} Adding all contributions together, we calculate the effective Hamiltonian in the spin-1/2 subspace to be \begin{align} &H= \frac{\rho_F^2\pi^2}{|\epsilon|}\Bigg(t_{R1}^2t_{R2}^2\vec{S}_1\cdot\Tilde{\vec{S}}_2 +t_{L1}^2t_{L2}^2 (U^{\dagger}\vec{S}_1 U) \cdot\Tilde{\vec{S}}_2\\\nonumber &+ 2t_{L1}t_{L2}t_{R1}t_{R2}\Big\{\!\cos{\varphi} \Big[\!\cos(\textstyle\frac{\alpha}{2})\vec{S}_1\!\cdot\Tilde{\vec{S}}_2\!-\!\sin(\textstyle\frac{\alpha}{2})\vec{u}\cdot\!(\vec{S}_1\!\times\!\Tilde{\vec{S}}_2)\Big] +\frac{1}{2}\sin(\varphi)\sin(\textstyle\frac{\alpha}{2})\vec{u}\cdot\!(\vec{S}_1\!-\!\Tilde{\vec{S}}_2)\Big\}\Bigg)+g(\varphi). \end{align} This equation is Eq.~\eqref{spinexchangecouplingsmv1} of the main text for $t_{L1}t_{L2}=t_{R1}t_{R2}$. We note that the spin Hamiltonian in Eq.~\eqref{vfdnakfvm01} can be viewed as a form of Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction \cite{ruderman1954indirect,kasuya1956theory,yosida1957magnetic} between the dot spins, which is mediated by Cooper pairs that virtually split and recombine or vice versa. \subsection{Aharonov-Bohm flux} In this section, we present a more general result for the effective Hamiltonian obtained from perturbation theory: {\it without} the restriction $\Delta\gg|\epsilon|$ and accounting for an Aharonov-Bohm flux $f$ enclosed by the two quantum dots and two superconductors. We note that, in the limit $\Delta\gg|\epsilon|$, those terms that depend on the Aharonov-Bohm flux are higher-order contributions \cite{Choi2000} and correspond to virtual tunneling paths as depicted in Fig.~\ref{pic01}(f) of the main text. We adopt a more general description of the tunnel Hamiltonian, which is $\vec{k}$-dependent and includes a Peierls phase in the presence of an external magnetic vector potential $\vec{A}$.
For this, we replace $t_{jn}$ by $t_{jn}\exp\left(\shortminus i\vec{k}\cdot\vec{r}_j-{\textstyle \frac{i\pi}{\phi_0}}\int\limits_{\vec{r}_n}^{\vec{r}_j}\!\text{d}\vec{l}\cdot\!\vec{A}\right)$, where $\vec{r}_j$ is the position on the superconductor that couples to both quantum dots, each located at position $\vec{r}_n$. In addition, $\phi_0=\frac{hc}{2e}$ is the superconducting flux quantum. (We assume the associated Zeeman splitting on the dot to be small and neglect it.) From the symmetry arguments presented in the main part, it still follows that the total Hamiltonian has a similar form as Eq. \eqref{spinexchangecouplingsmv1} in the main text. The generalized result then becomes ({\it without} the restriction $\Delta\gg|\epsilon|$): \begin{align} \label{fvnkdv} H&=\textstyle\Gamma_{R}^2(\frac{C}{|\epsilon|}+\frac{C_0+C_1}{\Delta})\mathbf{S}_1\cdot\Tilde{\mathbf{S}}_2+\Gamma_{L}^2(\frac{C}{|\epsilon|}+\frac{C_0+C_1}{\Delta})(U^{\dagger}\mathbf{S}_1 U)\cdot\Tilde{\mathbf{S}}_2\\ &\textstyle+ 2\Gamma_L\Gamma_R\left[\frac{C_1}{\Delta}\cos (f)+(\frac{C}{|\epsilon|}+\frac{C_0}{\Delta})\cos(\varphi)\right] \left[\cos(\frac{\alpha}{2})\mathbf{S}_1\cdot\Tilde{\mathbf{S}}_2-\sin(\frac{\alpha}{2})\mathbf{u}\cdot(\mathbf{S}_1\times\Tilde{\mathbf{S}}_2)\right]\notag\\ &\textstyle+\Gamma_L\Gamma_R\sin\left(\frac{\alpha}{2}\right)\left[\frac{C_1}{\Delta}\sin(f)\mathbf{u}\cdot(\mathbf{S}_1+\Tilde{\mathbf{S}}_2)+(\frac{C}{|\epsilon|}+\frac{C_0}{\Delta})\sin(\varphi)\mathbf{u}\cdot(\mathbf{S}_1-\Tilde{\mathbf{S}}_2) \right]+g(\varphi,f). \notag \end{align} In order to reveal the structure of the spin interaction, we rewrite Eq. \eqref{fvnkdv} in a form similar to Eq. 
\eqref{vfdnakfvm01} in the main text: \begin{equation} \label{vysfdnakfvm01} H=h(S^{\Vert}_{1}-\tilde{S}^{\Vert}_{2})+h_z(S^{\Vert}_{1}+\tilde{S}^{\Vert}_{2})+J_{\Vert} S^{\Vert}_{1}\tilde{S}^{\Vert}_{2}+ J_{\perp} \mathbf{S}^{\perp}_{1}\cdot \tilde{\mathbf{S}}^{\perp}_{2}+J_{\text{DM}}(\mathbf{S}_1\times\tilde{\mathbf{S}}_2)^{\Vert}+g(\varphi,f). \end{equation} Here, the effective magnetic fields, spin exchange constant, and a spin-irrelevant constant $g(\varphi,f)$, respectively, are given by \begin{subequations} \begin{align} &h=\textstyle\Gamma_L\Gamma_R\left(\frac{C}{|\epsilon|}+\frac{C_0}{\Delta}\right)\sin\left(\frac{\alpha}{2}\right)\sin(\varphi), \\ &h_z=\textstyle\Gamma_L\Gamma_R\frac{C_1}{\Delta}\sin\left(\frac{\alpha}{2}\right)\sin(f), \\ &J_{\Vert}= \textstyle(\Gamma_{R}^2+\Gamma_{L}^2)\left(\frac{C}{|\epsilon|}+\frac{C_0+C_1}{\Delta}\right)+2\Gamma_L\Gamma_R\left[\frac{C_1}{\Delta}\cos (f)+\left(\frac{C}{|\epsilon|}+\frac{C_0}{\Delta}\right)\cos(\varphi)\right] \cos(\frac{\alpha}{2}), \\ &J_{\perp}=\textstyle\left[\Gamma_{R}^2+\Gamma_{L}^2\cos\left(\alpha\right)\right]\left(\frac{C}{|\epsilon|}+\frac{C_0+C_1}{\Delta}\right)+2\Gamma_L\Gamma_R\left[\frac{C_1}{\Delta}\cos (f)+\left(\frac{C}{|\epsilon|}+\frac{C_0}{\Delta}\right)\cos(\varphi)\right] \cos(\frac{\alpha}{2}),\\\ &J_{\text{DM}}=-\textstyle\Gamma_{L}^2\left(\frac{C}{|\epsilon|}+\frac{C_0+C_1}{\Delta}\right)\sin(\alpha)-2\Gamma_L\Gamma_R\left[\frac{C_1}{\Delta}\cos (f)+\left(\frac{C}{|\epsilon|}+\frac{C_0}{\Delta}\right)\cos(\varphi)\right] \sin(\frac{\alpha}{2}),\\ &g(\varphi,f)=\textstyle\frac{1}{2}\Gamma_L\Gamma_R\left[\frac{C_1}{\Delta}\cos(f)\cos(\frac{\alpha}{2})-(\frac{C_0}{\Delta}+\frac{C}{|\epsilon|})\cos(\varphi)\cos(\frac{\alpha}{2})\right]+(t_{L1}^2t_{R1}^2+t_{L2}^2t_{R2}^2)\frac{\rho_F^2\pi^2C_0}{2\Delta}\cos(\varphi). \end{align} \end{subequations} We have parameterized the tunneling coupling by $\Gamma_j=\rho_F\pi t_{j1}t_{j2}$. 
The dimensionless parameters $C$, $C_0$, and $C_1$ are given by \begin{align} C&=\left[\int\limits_{-\infty}\limits^{\infty}\frac{\text{d}x}{\pi h_0(x)g_0(x)}\right]^2\\ C_0&=\int\limits_{-\infty}\limits^{\infty}\int\limits_{-\infty}\limits^{\infty}\frac{\text{d}x\,\text{d}y}{\pi^2 h_0(x)h_0(y)g_0(x)g_0(y)[h_0(x)+h_0(y)]}\\ C_1&=\int\limits_{-\infty}\limits^{\infty}\int\limits_{-\infty}\limits^{\infty}\frac{\text{d}x\,\text{d}y\,[h_0(x)-\frac{|\epsilon|}{\Delta}]}{\pi^2[h_0(x)+h_0(y)][g_0(x)]^2g_0(y)}, \end{align} where $h_0(x)=\sqrt{x^2+1}$ and $g_0(x)=\sqrt{x^2+1}+\frac{|\epsilon|}{\Delta}$ \cite{Choi2000}. The phase $\varphi$ and the Aharonov-Bohm flux $f$, respectively, are given by \begin{align} \varphi=\varphi_{L}-\varphi_R-\frac{\pi}{\phi_{0}} \int_{\mathbf{r}_{R}}^{\mathbf{r}_{L}}(d \vec{\ell}_{1} +d \vec{\ell}_{2})\cdot \mathbf{A}, \end{align} \begin{align} f=\frac{\pi}{\phi_{0}} \int_{\mathbf{r}_{R}}^{\mathbf{r}_{L}}(d \vec{\ell}_{1} - d \vec{\ell}_{2})\cdot \mathbf{A}. \end{align} Here, $\vec{\ell}_{n}$ corresponds to the path $\vec{r}_{R}\rightarrow\vec{r}_{n}\rightarrow\vec{r}_{L}$, and $f$ is the dimensionless Aharonov-Bohm flux running through the area between the dots and the superconductors \cite{Choi2000}. In addition to the effective staggered magnetic fields [first term in Eq. \eqref{vysfdnakfvm01}], there exists a symmetry-allowed Zeeman term [second term in Eq. \eqref{vysfdnakfvm01}]. Generally, the external flux $f$ offers one more experimentally tunable parameter, which increases the tunability of our scalable architecture. Again, if $\Gamma_L= \Gamma_R$, we can achieve a purely Ising coupling with $J_\perp=J_{DM}=0$: \begin{align} \left(\frac{C}{|\epsilon|}+\frac{C_0+C_1}{\Delta}\right)\cos\left(\frac{\alpha}{2}\right)=-\left[\frac{C_1}{\Delta}\cos (f)+\left(\frac{C}{|\epsilon|}+\frac{C_0}{\Delta}\right)\cos(\varphi)\right]. \label{is} \end{align} In the absence of flux, Eq. (\ref{is}) reproduces the criterion derived in the main part.
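The vanishing of $J_\perp$ and $J_{\text{DM}}$ at the Ising point can be verified numerically. The following sketch uses arbitrary, purely illustrative parameter values (not taken from the text), solves Eq.~\eqref{is} for $\cos\varphi$, and evaluates the coupling constants given above for $\Gamma_L=\Gamma_R$:

```python
import math

# Hypothetical parameter values, chosen only for illustration.
C, C0, C1 = 1.0, 0.2, 0.1
abs_eps, Delta = 1.0, 10.0
alpha, f = 2 * math.pi / 3, math.pi / 2
G = 1.0  # Gamma_L = Gamma_R = G

A = C / abs_eps + (C0 + C1) / Delta   # prefactor of the single-lead terms
B0 = C / abs_eps + C0 / Delta         # coefficient of cos(phi)

# Solve the Ising condition, Eq. (is), for cos(phi):
cos_phi = (-A * math.cos(alpha / 2) - (C1 / Delta) * math.cos(f)) / B0
assert abs(cos_phi) <= 1, "parameters must keep Eq. (is) solvable"
B = (C1 / Delta) * math.cos(f) + B0 * cos_phi  # bracketed two-lead term

J_perp = (G**2 + G**2 * math.cos(alpha)) * A + 2 * G * G * B * math.cos(alpha / 2)
J_DM = -G**2 * A * math.sin(alpha) - 2 * G * G * B * math.sin(alpha / 2)

print(J_perp, J_DM)  # both vanish at the Ising point
```

Algebraically, Eq.~\eqref{is} gives $B=-A\cos(\alpha/2)$, so $J_\perp\propto 1+\cos\alpha-2\cos^2(\alpha/2)=0$ and $J_{\text{DM}}\propto-\sin\alpha+2\sin(\alpha/2)\cos(\alpha/2)=0$, which the script reproduces to machine precision.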
The flux $f$ gives us one more parameter to be used to achieve the optimal regime with high fidelities (see Fig. \ref{pic02}). We also note that, if $\Gamma_L\neq \Gamma_R$, it is not possible to achieve the pure Ising regime. \subsection{Josephson supercurrent} The detection of spin interactions can also be realized by observing the Josephson supercurrent $\hat I$, defined as the derivative of $H$ with respect to $\varphi$, i.e., $\hat I=\partial H/\partial \varphi$. Using Eq. \eqref{vysfdnakfvm01}, we obtain for the supercurrent \begin{align} \label{dfvmkfmvk} \hat{I}={\hat I}_s\sin\varphi+I_0\sin\left(\frac{\alpha}{2}\right)(S^{\Vert}_{1}-\tilde{S}^{\Vert}_{2})\cos\varphi, \end{align} with \begin{align} \label{fdjvo2} \hat I_s=-2I_0 \left[ \cos\left(\frac{\alpha}{2}\right)\mathbf{S}^{}_{1}\cdot \tilde{\mathbf{S}}^{}_{2}-\sin\left(\frac{\alpha}{2}\right)(\mathbf{S}_1\times\tilde{\mathbf{S}}_2)^{\Vert}-\frac{1}{4}\cos\left(\frac{\alpha}{2}\right)\right]-I_1. \end{align} Note that the Josephson current $\hat I$ is still an operator in spin space, as indicated by the hat. This current involves only `two-lead' paths, as manifested by the current amplitudes, which simultaneously contain tunneling amplitudes from the right and left superconducting leads: \begin{align} I_{0}=\Gamma_L\Gamma_R\left(\frac{C}{|\epsilon|}+\frac{C_0}{\Delta}\right), \end{align} \begin{align} I_{1}=(t_{L1}^2t_{R1}^2+t_{L2}^2t_{R2}^2)\frac{\rho_F^2\pi^2C_0}{2\Delta}. \end{align} In the presence of SOI ($\alpha\neq0$), the supercurrent contains an anomalous term proportional to $\cos \varphi$. This means that the supercurrent $\hat I$ is finite even if $\varphi=0$. This phase shift in the supercurrent can be exploited to detect the presence of the SOI, depending on the orientations of the dot spins relative to each other.
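To make the spin dependence of the phase shift explicit, note that at $\varphi=0$ only the anomalous term of Eq.~\eqref{dfvmkfmvk} survives, and $S^{\Vert}_{1}$, $\tilde{S}^{\Vert}_{2}$ take eigenvalues $\pm\frac{1}{2}$ on product states along the respective quantization axes. A minimal illustrative evaluation (the amplitude and SOI angle are arbitrary choices of ours):

```python
import math

I0, alpha = 1.0, math.pi / 3  # illustrative amplitude and SOI angle

def current_at_zero_phase(s1, s2):
    """<I> at phi = 0 for product states with eigenvalues s1 of S1_par and
    s2 of S2~_par (each +-1/2 along its own quantization axis); only the
    anomalous cos(phi) term contributes at phi = 0."""
    return I0 * math.sin(alpha / 2) * (s1 - s2)

print(current_at_zero_phase(+0.5, -0.5))  # anti-aligned spins: maximal magnitude
print(current_at_zero_phase(+0.5, +0.5))  # aligned spins: the current vanishes
```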
In turn, this spin dependence of the phase shift can also be used as a readout of the spin states of the dots: if one spin is aligned along its quantization axis, while the other one is anti-aligned along its respective quantization axis, the amplitude of the Josephson current at $\varphi=0$ will be maximal for finite SOI. In the opposite extreme case, when the spins are both aligned along their respective quantization axes, the current is minimal at $\varphi=0$, i.e. it vanishes. For the special case considered in the main text, $\Gamma_L=\Gamma_R=\Gamma$ and $\vert \epsilon\vert \ll\Delta$, we have $C_0\simeq 0$ and $C\simeq 1$ (and $C/\vert \epsilon\vert \gg C_0/\Delta)$. Then, Eq. \eqref{dfvmkfmvk} becomes \begin{align} \hat I=\hat{I}_s\sin\varphi+J\sin\left(\frac{\alpha}{2}\right)(S^{\Vert}_{1}-\tilde{S}^{\Vert}_{2})\cos\varphi, \end{align} while Eq. \eqref{fdjvo2} reduces to \begin{align} \hat {I}_s=-2J \left[ \cos\left(\frac{\alpha}{2}\right)\mathbf{S}^{}_{1}\cdot \tilde{\mathbf{S}}^{}_{2}-\sin\left(\frac{\alpha}{2}\right)(\mathbf{S}_1\times\tilde{\mathbf{S}}_2)^{\Vert}-\frac{1}{4}\cos\left(\frac{\alpha}{2}\right)\right], \end{align} where, again, $J\simeq \Gamma^2/\vert \epsilon \vert$. This is the expression given in the main text. \end{document}
\section{Introduction} Integer programming continues to be a very popular way to obtain a schedule for a round robin tournament. The ability to straightforwardly model such a tournament, and next solve the resulting formulation using an integer programming solver, greatly facilitates practitioners. Moreover, it is usually possible to add all kinds of specific local constraints to the formulation that help addressing particular challenges. We substantiate this claim of the widespread use of integer programming by mentioning some of the works that use integer programming to arrive at a schedule for a round robin tournament. Indeed, from the literature, it is clear that for national football leagues (which are predominantly organized according to a so-called double round robin format), integer programming-based techniques are used extensively to find schedules. Without claiming to be exhaustive we mention Alarc\'on et al.~\cite{AlarconEtAl2017}, Della Croce and Oliveri~\cite{DellaCroceOliveri2006}, Dur\'an et al~\cite{Duranetal2007,Duranetal2017,DuranEtAl2021}, Goossens and Spieksma~\cite{GoossensSpieksma2009}, Rasmussen~\cite{Rasmussen2008}, Recalde et al.~\cite{RecaldeEtAl2013}, Ribeiro and Urrutia~\cite{RibeiroUrrutia2012}. Other sport competitions that are organized in a round robin fashion (or a format close to a round robin) have also received ample attention: we mention Cocchi et al.~\cite{CocchiEtAl2018} and Raknes and Pettersen~\cite{RaknesPettersen2018} who use integer programming for scheduling volleyball leagues, Fleurent and Ferland~\cite{FleurentFerland1993} who use integer programming for scheduling a hockey league, Kim~\cite{Kim2019} and Bouzarth et~al.~\cite{bouzarth2021scheduling} for baseball leagues, Kostuk and Willoughby~\cite{KostukWilloughby2012} for Canadian football, Nemhauser and Trick~\cite{NemhauserTrick1998} and Westphal~\cite{Westphal2014} for basketball leagues. 
Further, there has been work on studying properties of the traditional formulation, among others, by Trick~\cite{trick2002integer} and Briskorn and Drexl~\cite{bridre2009}. Well-known surveys are given by Rasmussen and Trick~\cite{rastri2008}, Kendall et al.~\cite{kenetal2010} and Goossens and Spieksma~\cite{GoossensSpieksma2012}; we also refer to Knust~\cite{knust}, who maintains an elaborate classification of literature on sports scheduling. More recently, the international timetabling competition~\cite{ITC2021} featured a round robin sports timetabling problem, and most of the submissions for this competition used integer programming in some way to obtain a good schedule. All this shows that integer programming is one of the most preferred ways to find schedules for competitions organized via a round robin format. In this paper, we aim to take a fresh look at the problem of finding an optimal schedule for round robin tournaments using integer programming techniques. Depending upon how often a pair of teams is required to meet, different variations of a round robin tournament arise: in case each pair of teams meets once, the resulting format is called a Single Round Robin, in case each pair of teams is required to meet twice, we refer to the resulting variation as a Double Round Robin. These formats are the ones that occur most in practice; in general we speak of a $k$-Round Robin to describe the situation where each pair of teams is required to meet $k$ times. We have organized the paper as follows. In Section~\ref{sec:models}, we precisely define the problem corresponding to the Single Round Robin tournament, and we present three integer programming formulations for it. We call them the traditional formulation (Section~\ref{sec:traditionalformulation}), the matching formulation (Section~\ref{sec:matchingformulation}), and the permutation formulation (Section~\ref{sec:permutationformulation}); the latter two formulations are, to the best of our knowledge, new. 
We show that their linear relaxations can be solved in polynomial time. We prove in Section~\ref{sec:strength} that the matching formulation is stronger than the other formulations. In Section~\ref{sec:inequalities} we provide a class of valid inequalities for the matching formulation. We show in Section~\ref{sec:kRR} how our results extend to the $k$-Round Robin tournament. In Section~\ref{sec:computationalresults}, we generate instances of our problem with two goals in mind: (i) to experimentally assess the quality of the bounds found by our models (Section~\ref{sec:computationalcomparison}), and (ii) to report on the performance of a branch-and-price algorithm (Section~\ref{sec:branchandprice}). We conclude in Section~\ref{sec:conclusion}. \section{Problem definition and formulations} \label{sec:models} In this section, we provide a formal definition of our problem and introduce the necessary terminology and notation. We start by describing the so-called Single Round Robin (SRR) tournament, where every pair of teams has to meet exactly once, and we return to the general version of the problem, where every pair of teams has to meet $k$ times ($k \geq 1$), in Section~\ref{sec:kRR}. Throughout the entire paper, we assume that~$n$ is an even integer that denotes the number of teams; for reasons of convenience we assume $n \geq 4$. We denote the set of all teams by~$\ensuremath{T}$. A \emph{match} is a set consisting of two distinct teams and the set of all \emph{matches} is denoted by~$\ensuremath{\mathcal{M}}$, in formulae, $\ensuremath{\mathcal{M}} = \{ m=\{i,j\} : i,j \in \ensuremath{T},\; i \neq j\}$. We denote, for each $i \in \ensuremath{T}$, by $\ensuremath{\mathcal{M}}_i = \{\{i,j\} : j \in \ensuremath{T} \setminus \{i\}\}$ the set of matches played by team $i$. 
As we assume in this section that every pair of teams meets once, and as $n$ is even, the matches can be organized in~$n-1$ rounds, which we denote by~$\ensuremath{R}$; hence, we deal in this section with a {\em compact} single round robin tournament. Prepared with this terminology and notation, we are able to provide a formal definition of the SRR problem. \begin{problem} (SRR) \label{prob:srr} Given an even number~$n \geq 4$ of teams with corresponding matches~$\ensuremath{\mathcal{M}}$, a set of~$n-1$ rounds~$\ensuremath{R}$, as well as an integral cost~$c_{\ensuremath{m},r}$ for every match~$\ensuremath{m}\in\ensuremath{\mathcal{M}}$ and round~$r \in \ensuremath{R}$, the \emph{single round robin (SRR)} problem is to find an assignment~$\mathcal{A} \subseteq \ensuremath{\mathcal{M}} \times \ensuremath{R}$ of matches to rounds that minimizes the cost~$\sum_{(\ensuremath{m},r) \in \mathcal{A}} c_{\ensuremath{m},r}$ such that every team plays a single match per round and each match is played in some round. \end{problem} Since the SRR problem is \ensuremath{\mathrm{NP}}-hard (see Easton~\cite{easton2002}, Briskorn et~al.\@\xspace~\cite{briskorn2010round}, and Van Bulck and Goossens~\cite{bulgoo2020}), there does not exist a polynomial time algorithm to find an optimal assignment unless~$\P = \ensuremath{\mathrm{NP}}$. For this reason, several researchers have investigated integer programming (IP) techniques for finding an optimal assignment of matches to rounds. We follow this line of research and discuss three different IP formulations for the SRR problem: a traditional formulation with polynomially many variables and constraints (Section~\ref{sec:traditionalformulation}) as well as two formulations that involve exponentially many variables (Sections~\ref{sec:matchingformulation} and \ref{sec:permutationformulation}). To the best of our knowledge, the latter models have not been discussed in the literature before. 
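Before turning to integer programming formulations, it is instructive to see that feasible assignments for Problem~\ref{prob:srr} always exist: the classical circle method fixes one team and rotates the remaining $n-1$ teams through the rounds. A small illustrative sketch (the function name is ours):

```python
import itertools

def circle_schedule(n):
    """Circle method: team n-1 stays fixed, the others rotate. The result
    is a feasible SRR assignment with n-1 rounds in which every team plays
    exactly once per round and every pair of teams meets exactly once."""
    assert n % 2 == 0 and n >= 4
    fixed, others = n - 1, list(range(n - 1))
    rounds = []
    for r in range(n - 1):
        lineup = others[r:] + others[:r]
        # pair the head of the rotated lineup with the fixed team ...
        matches = [frozenset((lineup[0], fixed))]
        # ... and fold the rest of the lineup onto itself
        for k in range(1, n // 2):
            matches.append(frozenset((lineup[k], lineup[n - 1 - k])))
        rounds.append(matches)
    return rounds

for rnd in circle_schedule(6):
    print(sorted(tuple(sorted(m)) for m in rnd))
```

Of course, such a schedule is merely feasible; the formulations below are needed to find a cost-minimal one.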
\subsection{The traditional formulation} \label{sec:traditionalformulation} The \emph{traditional formulation} of the SRR problem has been discussed, among others, by Trick~\cite{trick2002integer} and Briskorn and Drexl~\cite{bridre2009}. To model an assignment of matches to rounds, this formulation introduces, for every match~$\ensuremath{m}\in\ensuremath{\mathcal{M}}$ and round~$r \in \ensuremath{R}$, a binary decision variable~$x_{\ensuremath{m},r}$ to model whether match~$\ensuremath{m}$ is played at round~$r$ ($x_{\ensuremath{m},r} = 1$) or not ($x_{\ensuremath{m},r} = 0$). With these variables, problem SRR can be modeled as: \begin{subequations} \makeatletter \def\@currentlabel{T} \makeatother \renewcommand{\theequation}{T\arabic{equation}}% \label{tra} \begin{align} \label{tra:obj} \min \sum_{\ensuremath{m}\in\ensuremath{\mathcal{M}}}\sum_{r \in \ensuremath{R}} c_{\ensuremath{m},r}x_{\ensuremath{m},r} &&&\\ \sum_{r \in R} x_{\ensuremath{m},r} &= 1, && \ensuremath{m}\in\ensuremath{\mathcal{M}},\label{tra:matchplayed}\\ \sum_{m \in \ensuremath{\mathcal{M}}_i} x_{m,r} &= 1, && i \in \ensuremath{T}, r \in \ensuremath{R},\label{tra:teamplays}\\ x_{\ensuremath{m},r} &\in \{0, 1\}, && \ensuremath{m} \in \ensuremath{\mathcal{M}}, r \in \ensuremath{R}.\label{tra:binary} \end{align} \end{subequations} Constraints~\eqref{tra:matchplayed} ensure that each pair of teams meets once, and Constraints~\eqref{tra:teamplays} imply that each team plays in each round. This model has~$O(n^2)$ constraints and~$O(n^3)$ variables. Note that Constraints~\eqref{tra:binary} can be replaced by~$x_{\ensuremath{m},r} \in \mathds{Z}_+$ as the upper bound~$x_{\ensuremath{m},r} \leq 1$ is implicitly imposed via Constraints~\eqref{tra:matchplayed} and non-negativity of variables. The linear programming relaxation of \eqref{tra} arises when we replace~(\ref{tra:binary}) by $x_{m,r} \geq 0$; given an instance $I$ of SRR, we denote the resulting value by $v^{LP}_{tra}(I)$. 
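For very small instances, formulation~\eqref{tra} can be checked against plain enumeration, which is useful when testing solver output. The sketch below (illustrative only; all names are ours) enumerates, for each round, the match sets allowed by Constraints~\eqref{tra:teamplays} and then imposes Constraints~\eqref{tra:matchplayed} across rounds:

```python
import itertools

def solve_srr_bruteforce(teams, cost):
    """Exhaustive solver for tiny SRR instances. Constraints (T3) say every
    team plays exactly one match per round, i.e. each round's schedule is a
    perfect matching on the teams; constraint (T2) says every match is
    played exactly once across all rounds."""
    n = len(teams)
    matches = [frozenset(p) for p in itertools.combinations(teams, 2)]
    # candidate round schedules satisfying (T3): perfect matchings
    round_options = [c for c in itertools.combinations(matches, n // 2)
                     if len(frozenset().union(*c)) == n]
    best_val, best_sched = float("inf"), None
    for sched in itertools.product(round_options, repeat=n - 1):
        played = [m for rnd in sched for m in rnd]
        if len(set(played)) != len(matches):  # (T2) violated
            continue
        v = sum(cost[m, r] for r, rnd in enumerate(sched) for m in rnd)
        if v < best_val:
            best_val, best_sched = v, sched
    return best_val, best_sched

teams = [0, 1, 2, 3]
cost = {(m, r): (3 * sum(m) + 5 * r) % 7  # arbitrary illustrative costs
        for m in (frozenset(p) for p in itertools.combinations(teams, 2))
        for r in range(len(teams) - 1)}
val, sched = solve_srr_bruteforce(teams, cost)
print(val)
```

For $n=4$ this explores only $27$ candidate schedules, of which six are feasible; beyond toy sizes, the integer programming machinery discussed next is indispensable.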
\subsection{The matching formulation} \label{sec:matchingformulation} Consider the complete graph that results when associating a node to each team, say $K_n = (T, \ensuremath{\mathcal{M}})$. Clearly, a single round of a feasible schedule can be seen as a perfect matching in this graph. This observation allows us to build a matching based formulation by introducing a binary variable for every perfect matching in $K_n$; we denote the set of all perfect matchings in $K_n$ by~$\ensuremath{\mathfrak{M}}$. We employ a binary variable~$y_{\ensuremath{M},r}$ for each perfect matching~$\ensuremath{M}\in\ensuremath{\mathfrak{M}}$ and round~$r \in \ensuremath{R}$. If~\mbox{$y_{\ensuremath{M},r} = 1$}, the model prescribes that matching~$\ensuremath{M}$ is used for the schedule of round~$r$, whereas \mbox{$y_{\ensuremath{M},r} = 0$} encodes that a different schedule is used. To be able to represent the cost of round~$r \in R$, the total cost of all matches in~$\ensuremath{M}$ is denoted by~$d_{\ensuremath{M},r} \coloneqq \sum_{\ensuremath{m}\in\ensuremath{M}} c_{\ensuremath{m},r}$, which leads to the model \begin{subequations} \makeatletter \def\@currentlabel{M} \makeatother \renewcommand{\theequation}{M\arabic{equation}}% \label{mat} \begin{align} \label{mat:obj} \min \sum_{\ensuremath{M}\in\ensuremath{\mathfrak{M}}} \sum_{r \in \ensuremath{R}} d_{\ensuremath{M},r}y_{\ensuremath{M},r} &&&\\ \sum_{\ensuremath{M} \in \ensuremath{\mathfrak{M}}} y_{\ensuremath{M},r} &= 1, && r \in \ensuremath{R},\label{mat:matchingonround}\\ \sum_{\substack{\ensuremath{M} \in \ensuremath{\mathfrak{M}}\colon \\ \ensuremath{m} \in \ensuremath{M}}} \sum_{r \in R} y_{\ensuremath{M},r} &= 1, && \ensuremath{m} \in \ensuremath{\mathcal{M}}, \label{mat:matchplayed}\\ y_{\ensuremath{M},r} &\in \{0, 1\}, && \ensuremath{M} \in \ensuremath{\mathfrak{M}}, r \in \ensuremath{R}.\label{mat:binary} \end{align} \end{subequations} Constraints~\eqref{mat:matchingonround} ensure that a matching is selected in each 
round, while Constraints~\eqref{mat:matchplayed} enforce that each pair of teams meets in some round. Similarly to the traditional formulation, we can replace~\eqref{mat:binary} by~$y_{\ensuremath{M},r} \in \mathds{Z}_+$. In this way, the linear programming relaxation of \eqref{mat} arises when replacing~(\ref{mat:binary}) by $y_{\ensuremath{M},r} \geq 0$; given an instance $I$ of SRR, the resulting value is denoted by $v^{LP}_{mat}(I)$. Notice that this formulation uses an exponential number of variables, as the number of matchings grows exponentially in $n$. Thus, a relevant question is whether we can find $v^{LP}_{mat}$ in polynomial time. The following observation shows that it can be answered affirmatively. \begin{lemma} \label{lem:pricematch} The LP relaxation of the matching formulation~\eqref{mat} can be solved in polynomial time. \end{lemma} \begin{proof} Due to the celebrated result by Gr\"otschel et al.~\cite{GroetschelLovaszSchrijver1981}, it is sufficient to show that the separation problem for the constraints of the dual of the linear relaxation of Model~\eqref{mat} can be solved in polynomial time. To avoid an exponential number of variables in the dual, we replace Constraint~\eqref{mat:binary} by~$y_{\ensuremath{M},r} \in \mathds{Z}_+$ as explained above. Then, by introducing dual variables~$\alpha_r$, $r \in \ensuremath{R}$, corresponding to Constraints~\eqref{mat:matchingonround} and~$\beta_\ensuremath{m}$, $\ensuremath{m}\in\ensuremath{\mathcal{M}}$, corresponding to Constraints~\eqref{mat:matchplayed}, the constraints of the dual of the LP relaxation of Model~\eqref{mat} are: % \begin{align*} \alpha_r + \sum_{\ensuremath{m} \in \ensuremath{M}} \beta_{\ensuremath{m}} & \leq d_{\ensuremath{M},r}, && \ensuremath{M}\in\ensuremath{\mathfrak{M}}, r \in \ensuremath{R}. \end{align*} % Given values for the dual variables, say~$(\bar{\alpha}, \bar{\beta})$, the separation problem is to decide whether it satisfies all dual constraints. 
For fixed~$r \in \ensuremath{R}$, we show that this problem can be solved in polynomial time. Thus, the assertion follows as there are only~$O(n)$ rounds. Indeed, if~$r \in \ensuremath{R}$ is fixed, the problem reduces to checking whether there exists a matching~$\ensuremath{M}\in\ensuremath{\mathfrak{M}}$ such that \[ \bar{\alpha}_r + \sum_{\ensuremath{m}\in\ensuremath{M}} \bar{\beta}_\ensuremath{m} > d_{\ensuremath{M},r} = \sum_{\ensuremath{m}\in\ensuremath{M}} c_{\ensuremath{m},r} \quad \Leftrightarrow \quad \sum_{\ensuremath{m} \in \ensuremath{M}}(\bar{\beta}_\ensuremath{m} - c_{\ensuremath{m},r}) > -\bar{\alpha}_r. \] The latter inequality asks whether there exists a perfect matching of teams with weight greater than~$-\bar{\alpha}_r$, where an edge~$\ensuremath{m}$ between two teams is assigned weight~$(\bar{\beta}_\ensuremath{m} - c_{\ensuremath{m},r})$. This problem can be solved in polynomial time by Edmonds' blossom algorithm~\cite{edmonds1965maximum,edmonds1965paths}, which concludes the proof. \end{proof} \subsection{The permutation formulation} \label{sec:permutationformulation} Instead of fixing the schedule of a round, the \emph{permutation formulation} fixes, for a given team, the order of the teams against which the given team plays its successive matches. That is, it introduces a variable for each team~$i$ and each permutation of~$\ensuremath{T} \setminus \{ i \}$. We denote the set of all such permutations by~$\permswo{i}$. Moreover, for a team $j \in \ensuremath{T}$ and round $r \in \ensuremath{R}$, we denote the set of all permutations in which $j$ occurs at position $r$ by $\permswotor{i}{j}{r}$. Permutations from $\permswotor{i}{j}{r}$ thus encode that team~$i$ plays against team~$j$ in round $r$. For a permutation $\pi \in \permswo{i}$ and round~$r \in R$, we refer to the opponent of team~$i$ at round~$r$ as~$\pi_r \in \ensuremath{T} \setminus \{i\}$.
The cost of a schedule encoded via permutations~$\permswo{i}$ for a team~$i \in \ensuremath{T}$ is then given by~$e_{i,\pi} \coloneqq \sum_{r \in \ensuremath{R}} c_{\{i,\pi_r\}, r}$. Using binary variables~$z_{i,\pi}$, where~$i \in \ensuremath{T}$ and~$\pi \in \permswo{i}$, that encode whether~$i$ plays against its opponents in order~$\pi$ ($z_{i,\pi} = 1$) or not ($z_{i,\pi}=0$), the permutation formulation is \begin{subequations} \makeatletter \def\@currentlabel{P} \makeatother \renewcommand{\theequation}{P\arabic{equation}}% \label{per} \begin{align} \min \frac{1}{2}\sum_{i \in \ensuremath{T}}\sum_{\pi \in \permswo{i}} e_{i,\pi} z_{i,\pi} &&& \label{per:obj}\\ \sum_{\pi \in \permswo{i}} z_{i,\pi} &= 1, && i \in \ensuremath{T}, \label{per:scheduleforteam}\\ \sum_{\pi \in \permswotor{i}{j}{r}} z_{i,\pi} &= \sum_{\pi \in \permswotor{j}{i}{r}} z_{j,\pi},&& \{i, j\} \in \ensuremath{\mathcal{M}}, r \in \ensuremath{R},\label{per:linking}\\ z_{i,\pi} &\in \{0, 1\}, && i \in \ensuremath{T}, \pi \in \permswo{i}.\label{per:binary} \end{align} \end{subequations} Constraints~\eqref{per:scheduleforteam} ensure that a permutation is selected for each team, while Constraints~\eqref{per:linking} enforce that, given a round and a pair of teams, these teams meet in that round, or they do not meet in that round. Due to rescaling the objective by~$\frac{1}{2}$, we find the cost of an optimal SRR schedule. Moreover, we can again replace Constraint~\eqref{per:binary} by~$z_{i,\pi} \in \mathds{Z}_+$. The linear programming relaxation of \eqref{per} then arises when replacing Constraints~(\ref{per:binary}) by $z_{i,\pi} \geq 0$; given an instance $I$ of SRR, we denote the resulting value by $v^{LP}_{per}(I)$. Since this model has~$n!$ variables, we again investigate whether its LP relaxation can be solved efficiently. \begin{lemma} \label{lem:pricepermutation} The LP relaxation of the permutation formulation~\eqref{per} can be solved in polynomial time. 
\end{lemma} \begin{proof} As in the proof of Lemma~\ref{lem:pricematch}, it is sufficient to show that the separation problem corresponding to the constraints of the dual of the relaxation of the permutation formulation can be solved in polynomial time. Again, to avoid exponentially many variables in the dual, we replace~\eqref{per:binary} by~$z_{i,\pi} \in \mathds{Z}_+$. We introduce dual variables~$\alpha_i$ for each constraint of type~\eqref{per:scheduleforteam} and~$\beta_{\{i,j\},r}$ for each constraint of type~\eqref{per:linking}. To normalize Constraint~\eqref{per:linking}, we assume it to be given by~$\sum_{\pi \in \permswotor{i}{j}{r}} z_{i,\pi} - \sum_{\pi \in \permswotor{j}{i}{r}} z_{j,\pi} = 0$ with~$i < j$. Then, the dual constraints are given by % \begin{align*} \alpha_i + \sum_{\substack{r \in \ensuremath{R}\colon\\ i < \pi_r}} \beta_{\{i,\pi_r\},r} - \sum_{\substack{r \in \ensuremath{R}\colon\\ i > \pi_r}} \beta_{\{i,\pi_r\},r} &\leq \frac{1}{2}e_{i,\pi} && i \in \ensuremath{T}, \pi \in \permswo{i}. \end{align*} % If~$i \in \ensuremath{T}$ is fixed, the separation problem for dual values~$(\bar{\alpha},\bar{\beta})$ is to decide whether there exists a permutation~$\pi \in \permswo{i}$ such that \[ \sum_{\substack{r \in \ensuremath{R}\colon\\ i < \pi_r}} \bar{\beta}_{\{i,\pi_r\},r} - \sum_{\substack{r \in \ensuremath{R}\colon\\ i > \pi_r}} \bar{\beta}_{\{i,\pi_r\},r} - \frac{1}{2} \sum_{r \in \ensuremath{R}} c_{\{i,\pi_r\},r} > -\bar{\alpha}_i \] due to the definition of~$e_{i,\pi}$. To answer this question, it is sufficient to find a permutation maximizing the left-hand side expression. 
Such a permutation can be found by computing a maximum weight perfect matching in the complete bipartite graph with node bipartition~$(\ensuremath{T} \setminus \{i\}) \cup \ensuremath{R}$ and edge weights defined for each~$j \in \ensuremath{T} \setminus \{i\}$ and~$r \in \ensuremath{R}$ by \[ w_{j,r} = \begin{cases} -\frac{1}{2} c_{\{i,j\},r} + \bar{\beta}_{\{i,j\},r}, & \text{if } i < j,\\ -\frac{1}{2} c_{\{i,j\},r} - \bar{\beta}_{\{i,j\},r}, & \text{otherwise.} \end{cases} \] Since this problem can be solved in polynomial time, the assertion follows by solving this problem for each of the~$n$ teams. \end{proof} \section{Comparing the strength of the different formulations} \label{sec:strength} In the previous section, we have introduced three different models for finding an optimal schedule for problem SRR. While the traditional formulation contains both polynomially many variables and constraints, the matching and permutation formulation make use of an exponential number of variables. The aim of this section is to investigate whether the increase in the number of variables in comparison with the traditional formulation leads to a stronger formulation. We measure the strength of a formulation based on the value of its LP relaxation, where a higher value of the LP relaxation indicates a stronger formulation as the LP relaxation's value is closer to the optimum value of the integer program, as encapsulated by the following definitions. \begin{definition} Let~$f$ and~$g$ be mixed-integer programming formulations of the SRR problem and denote by~$\lprelaxation{f}(I)$ and~$\lprelaxation{g}(I)$ the value of the respective LP relaxations for an instance~$I$ of~SRR. % \begin{itemize} \item We say that~$f$ and~$g$ are \emph{relaxation-equivalent} if, for each instance $I$ of problem SRR, the value of the linear programming relaxations are equal, i.e., $\lprelaxation{f}(I) = \lprelaxation{g}(I)$. 
\item We say that~$f$ is stronger than (or dominates) $g$ if \begin{enumerate*}[label=(\roman*), ref=(\roman*)] \item for each instance $I$ of problem SRR, $\lprelaxation{f}(I) \geq \lprelaxation{g}(I)$, and \item there exists an instance $I$ of problem SRR for which~$\lprelaxation{f}(I) > \lprelaxation{g}(I)$. \end{enumerate*} \end{itemize} \end{definition} We now proceed by formally comparing the strength of the formulations from Section~\ref{sec:models} using the terminology of these definitions. We state our results in three lemmata and summarize them in Theorem~\ref{th:overallstrength}. First, we show that the traditional and permutation formulation have equivalent LP relaxations. \begin{lemma} \label{lem:equitraandper} The permutation formulation~\eqref{per} is relaxation-equivalent to the traditional formulation~\eqref{tra}. \end{lemma} \begin{proof} To prove this lemma, we show that feasible solutions of the traditional formulation's and the permutation formulation's LP relaxations can be transformed into each other while preserving the objective value. First, we construct a solution of the LP relaxation of the traditional formulation from a solution~$z$ of the LP relaxation of the permutation formulation. To this end, define the solution~$x \in \mathds{R}^{\ensuremath{\mathcal{M}} \times \ensuremath{R}}$ via~$x_{\{i,j\},r} = \sum_{\pi \in \permswotor{i}{j}{r}} z_{i,\pi}$ for each~$\{i,j\} \in \ensuremath{\mathcal{M}}$ and~$r \in \ensuremath{R}$. Note that~$x$ is non-negative as all~$z$-variables are non-negative. Moreover, it is well-defined as~$\sum_{\pi \in \permswotor{i}{j}{r}} z_{i,\pi} = \sum_{\pi \in \permswotor{j}{i}{r}} z_{j,\pi}$ due to~\eqref{per:linking}.
Finally, all constraints of type~\eqref{tra:matchplayed} and~\eqref{tra:teamplays} are satisfied since % \begin{align*} &\text{for each } \{i,j\} \in \ensuremath{\mathcal{M}}: & \sum_{r \in \ensuremath{R}} x_{\{i,j\},r} = \sum_{r \in \ensuremath{R}} \sum_{\pi \in \permswotor{i}{j}{r}} z_{i,\pi} = \sum_{\pi \in \bigcup_{r \in \ensuremath{R}} \permswotor{i}{j}{r}} z_{i,\pi} = \sum_{\pi \in \permswo{i}} z_{i, \pi} &\overset{\eqref{per:scheduleforteam}}{=} 1,\\ &\text{for each } i \in \ensuremath{T},\; r \in \ensuremath{R}: &\sum_{j \in T \setminus \{i\}} x_{\{i,j\},r} = \sum_{j \in T \setminus \{i\}} \sum_{\pi \in \permswotor{i}{j}{r}}z_{i,\pi} = \sum_{\pi \in \permswo{i}} z_{i, \pi} &\overset{\eqref{per:scheduleforteam}}{=} 1. \end{align*} % We conclude the proof by constructing a feasible solution for the LP relaxation of the permutation formulation from a feasible solution~$x$ of the traditional formulation's LP relaxation. Let~$x$ be such a solution and let~$i \in \ensuremath{T}$. Consider the matrix~$X^i \in \mathds{R}^{(\ensuremath{T} \setminus \{i\}) \times \ensuremath{R}}$ with entries~$X^i_{j,r} = x_{\{i,j\},r}$. Due to all constraints of the traditional formulation's LP relaxation, $X^i$ is a doubly stochastic matrix and is thus contained in the Birkhoff polytope, see~\cite{Ziegler1995}. Consequently, $X^i$ can be written as a convex combination of permutation matrices. That is, if~$P^{i,\pi}$ is the permutation matrix associated with~$\pi \in \permswo{i}$, there exist multipliers~$\lambda^i_\pi \geq 0$, $\pi \in \permswo{i}$, such that~$X^i = \sum_{\pi \in \permswo{i}} \lambda^i_\pi P^{i,\pi}$ and~$\sum_{\pi \in \permswo{i}} \lambda^i_\pi = 1$. Based on these multipliers, we define a solution~$z$ of the permutation formulation via~$z_{i,\pi} = \lambda^i_{\pi}$. To conclude the proof, we need to show that this solution~$z$ is feasible for the permutation formulation's LP relaxation and has the same objective value as~$x$.
Observe that~$z$ is non-negative since all~$\lambda$'s are non-negative. Constraints~\eqref{per:scheduleforteam} and~\eqref{per:linking} are satisfied as % \begin{align*} &\text{for each } i \in \ensuremath{T}:& \sum_{\pi\in\permswo{i}} z_{i,\pi} &= \sum_{\pi\in\permswo{i}} \lambda^i_\pi = 1,\\ &\text{for each } \{i,j\} \in\ensuremath{\mathcal{M}},\; r \in \ensuremath{R}:& \sum_{\pi \in \permswotor{i}{j}{r}} z_{i,\pi} &= \sum_{\pi \in \permswotor{i}{j}{r}} \lambda^i_\pi = x_{\{i,j\},r} = \sum_{\pi \in \permswotor{j}{i}{r}} \lambda^j_\pi = \sum_{\pi\in\permswotor{j}{i}{r}} z_{j,\pi} \end{align*} % since~$x_{\{i,j\},r} = X^i_{j,r}$ is the total weight of the permutation matrices~$P^{i,\pi}$ that assign team~$j$ to round~$r$, and, analogously, $x_{\{i,j\},r} = X^j_{i,r}$ is the total weight of the permutation matrices~$P^{j,\pi}$ that assign team~$i$ to round~$r$. Consequently, $z$ is feasible for the permutation formulation's LP relaxation. Finally, both~$x$ and~$z$ have the same objective value because % \begin{align*} \frac{1}{2} \sum_{i \in\ensuremath{T}} \sum_{\pi \in \permswo{i}} e_{i,\pi}z_{i,\pi} &= \frac{1}{2} \sum_{i \in\ensuremath{T}} \sum_{\pi \in \permswo{i}} e_{i,\pi}\lambda^i_\pi = \frac{1}{2} \sum_{i \in\ensuremath{T}} \sum_{\pi \in \permswo{i}} \sum_{r \in \ensuremath{R}} c_{\{i,\pi_r\},r}\lambda^i_\pi\\ &= \frac{1}{2} \sum_{i \in\ensuremath{T}} \sum_{j \in \ensuremath{T}\setminus\{i\}} \sum_{r \in \ensuremath{R}} c_{\{i,j\},r} \sum_{\substack{\pi\in\permswo{i}\colon\\ \pi_r = j}} \lambda^i_\pi = \frac{1}{2} \sum_{i \in\ensuremath{T}} \sum_{j \in \ensuremath{T}\setminus\{i\}} \sum_{r \in \ensuremath{R}} c_{\{i,j\},r} x_{\{i,j\},r}\\ &= \sum_{\{i,j\} \in \ensuremath{\mathcal{M}}}\sum_{r \in \ensuremath{R}} c_{\{i,j\},r}x_{\{i,j\},r}, \end{align*} % which proves that both formulations are relaxation-equivalent.
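Before doing so, we remark that the Birkhoff decomposition used in the proof of Lemma~\ref{lem:equitraandper} is constructive. The following Python sketch (purely illustrative and not part of the formulations; it assumes NumPy and SciPy are available, and the function name is ours) greedily peels permutation matrices off a doubly stochastic matrix, which is one way to compute the multipliers~$\lambda^i_\pi$ for a given~$X^i$:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def birkhoff_decomposition(X, tol=1e-9):
    """Write a doubly stochastic matrix X as a convex combination of
    permutation matrices by greedily peeling one permutation at a time."""
    X = np.array(X, dtype=float)
    n = X.shape[0]
    weights, perms = [], []
    while X.max() > tol:
        # Find a permutation supported on the positive entries of X by
        # solving an assignment problem that penalizes zero entries.
        cost = np.where(X > tol, 0.0, 1.0)
        rows, cols = linear_sum_assignment(cost)
        if cost[rows, cols].sum() > 0.5:
            raise ValueError("X is not doubly stochastic")
        theta = X[rows, cols].min()   # largest weight that can be peeled off
        weights.append(theta)
        perms.append(cols.copy())
        X[rows, cols] -= theta        # zeroes at least one positive entry
    return weights, perms
```

Each iteration zeroes at least one positive entry, so the loop terminates after at most quadratically many steps and returns polynomially many permutations with weights summing to~1.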
\begin{lemma} \label{lem:strength} For each $n \geq 6$, the matching formulation~\eqref{mat} is stronger than the traditional formulation~\eqref{tra}. \end{lemma} \begin{proof} First, we show that we can transform any feasible solution of the matching formulation's LP relaxation to a feasible solution of the traditional formulation's LP relaxation. Afterwards, to show that the matching formulation is stronger than the traditional formulation, we show that, for any even~$n \geq 6$, there exists an instance of SRR for which the LP relaxation of the matching formulation has a strictly larger value than the traditional formulation's LP relaxation. Let~$y$ be a feasible solution of the matching formulation's LP relaxation. We construct a solution~$x$ for the traditional formulation by setting~$x_{\ensuremath{m},r} = \sum_{\ensuremath{M}\in\ensuremath{\mathfrak{M}}\colon \ensuremath{m}\in\ensuremath{M}} y_{\ensuremath{M},r}$. Since~$y$ is non-negative, also~$x$ is non-negative. Moreover, Conditions~\eqref{tra:matchplayed} and~\eqref{tra:teamplays} are satisfied as % \begin{align*} &\text{for each } \ensuremath{m}\in\ensuremath{\mathcal{M}}:& \sum_{r \in \ensuremath{R}} x_{\ensuremath{m},r} &= \sum_{r \in \ensuremath{R}} \sum_{\substack{\ensuremath{M}\in\ensuremath{\mathfrak{M}}\colon\\ \ensuremath{m}\in\ensuremath{M}}} y_{\ensuremath{M},r} \overset{~\eqref{mat:matchplayed}}{=} 1,\\ &\text{for each } i\in\ensuremath{T},\; r \in\ensuremath{R}:& \sum_{j \in T \setminus \{i\}} x_{\{i,j\},r} &= \sum_{j \in T \setminus \{i\}} \sum_{\substack{\ensuremath{M}\in\ensuremath{\mathfrak{M}}\colon\\ \{i,j\}\in\ensuremath{M}}} y_{\ensuremath{M},r} = \sum_{\ensuremath{M}\in\ensuremath{\mathfrak{M}}} y_{\ensuremath{M},r} \overset{~\eqref{mat:matchingonround}}{=} 1.
\end{align*} % Finally, both~$x$ and~$y$ have the same objective value as \[ \sum_{\ensuremath{m}\in\ensuremath{\mathcal{M}}}\sum_{r\in\ensuremath{R}} c_{\ensuremath{m},r}x_{\ensuremath{m},r} = \sum_{\ensuremath{m}\in\ensuremath{\mathcal{M}}}\sum_{r\in\ensuremath{R}} c_{\ensuremath{m},r}\sum_{\substack{\ensuremath{M}\in\ensuremath{\mathfrak{M}}\colon\\ \ensuremath{m}\in\ensuremath{M}}} y_{\ensuremath{M},r} = \sum_{\ensuremath{M}\in\ensuremath{\mathfrak{M}}}\sum_{r\in\ensuremath{R}} \sum_{\ensuremath{m}\in\ensuremath{M}}c_{\ensuremath{m},r} y_{\ensuremath{M},r} = \sum_{\ensuremath{M}\in\ensuremath{\mathfrak{M}}}\sum_{r\in\ensuremath{R}} d_{\ensuremath{M},r} y_{\ensuremath{M},r}, \] that is, the traditional formulation cannot be stronger than the matching formulation. To prove that the matching formulation dominates the traditional formulation for even~$n\geq 6$, we distinguish three cases. In the first case, assume~$n \geq 10$. Consider the pairs of teams given by \[ P = \big\{\{1,2\}, \{2,3\}, \{1,3\}\big\} \cup \big\{\{4,5\}, \{5,6\}, \{4,6\}\big\} \cup \big\{\{7,8\},\{8,9\},\dots,\{n-1,n\},\{7,n\}\big\}. \] Interpreting~$P$ as the edges of an undirected graph, $P$ defines three connected components consisting of two 3-cycles and an even cycle. We construct an instance of the SRR problem by specifying the cost function~$c \in \mathds{R}^{\ensuremath{\mathcal{M}}\times\ensuremath{R}}$ via \[ c_{\ensuremath{m},r} = \begin{cases} 1, & \text{if } \ensuremath{m} \notin P \text{ and } r \in \{1,2\},\\ 0, & \text{otherwise.} \end{cases} \] It is easy to verify that~$x \in \mathds{R}^{\ensuremath{\mathcal{M}} \times \ensuremath{R}}$ given by \[ x_{\ensuremath{m},r} = \begin{cases} \tfrac{1}{2}, & \text{if } \ensuremath{m} \in P \text{ and } r \in \{1,2\},\\ \tfrac{1}{n-3}, & \text{if } \ensuremath{m} \notin P \text{ and } r \notin \{1,2\},\\ 0, & \text{otherwise}, \end{cases} \] is feasible for the LP relaxation of the traditional formulation (every team is incident to exactly two pairs in~$P$, so each team plays with total weight~1 in every round) and has objective value~0.
Hence, $x$ is optimal. Solving the LP relaxation of the matching formulation for this instance, however, results in an objective value that is at least~2. Indeed, each perfect matching $\ensuremath{M}\in\ensuremath{\mathfrak{M}}$ contains at least one match $\ensuremath{m} \in \ensuremath{M}$ with $\ensuremath{m} \in \{\{i,j\} : (i,j) \in\{1, 2, 3 \} \times \{4, 5, \dots, n\}\}$. Since $c_{\ensuremath{m}, 1} = c_{\ensuremath{m},2} = 1$ for such a match, it follows that in both rounds~1 and~2 the selected matchings, which have total weight~1 by~\eqref{mat:matchingonround}, incur cost at least~1, leading to a solution with total cost at least~2. In the second case, we consider~$n = 6$. To prove the statement, we use the same construction as before; however, we do not require the even cycle anymore. That is, $P$ defines two 3-cycles and the argument remains the same as before. In the last case~$n = 8$, we consider the set of pairs \[ P = \big\{\{1,2\}, \{1,3\}, \{2,3\}\big\} \cup \big\{ \{4,5\}, \{5,6\}, \{6,7\}, \{7,8\}, \{4,8\}\big\}. \] If we interpret~$P$ as edges of an undirected graph, the corresponding graph has two connected components being a~3-cycle and a~5-cycle, respectively. We choose the cost-coefficients~$c \in \mathds{R}^{\ensuremath{\mathcal{M}} \times \ensuremath{R}}$ to be \[ c_{\ensuremath{m}, r} = \begin{cases} 1, & \text{if } \ensuremath{m} \notin P \text{ and } r \in \{1,2\},\\ 0, & \text{otherwise.} \end{cases} \] Simple calculations show that an optimal solution of the traditional formulation's LP relaxation is~$x \in \mathds{R}^{\ensuremath{\mathcal{M}} \times \ensuremath{R}}$ with \[ x_{\ensuremath{m}, r} = \begin{cases} \tfrac{1}{2}, & \text{if } \ensuremath{m} \in P \text{ and } r\in \{1,2\},\\ \tfrac{1}{5}, & \text{if } \ensuremath{m} \notin P \text{ and } r\notin\{1,2\},\\ 0, & \text{otherwise,} \end{cases} \] which has objective value~0, whereas the matching formulation's LP relaxation has value at least~2.
\end{proof} The previous two lemmata completely characterize the relative strength of the three different formulations except for~$n=4$. The status of this missing case is settled next. \begin{lemma} \label{lem:n=4} For $n=4$, the traditional formulation and the matching formulation are relaxation-equivalent. \end{lemma} \begin{proof} Observe that exactly the same arguments as in the proof of Lemma~\ref{lem:strength} can be used to show that the matching formulation is at least as strong as the traditional formulation if~$n=4$. Hence, it remains to show that every solution~$x$ of the traditional formulation's LP relaxation can be turned into a solution of the LP relaxation of the matching formulation if~$n=4$. Let~$x$ be such a solution. Let~$\ensuremath{M} = \big\{\{i,j\}, \{k,l\}\big\} \in \ensuremath{\mathfrak{M}}$. Since~$x$ satisfies Equations~\eqref{tra:teamplays}, summing the equations for~$i,j$ and subtracting the equations for~$k,l$ yields~$2x_{\{i,j\},r} - 2x_{\{k,l\},r} = 0$. That is, the~$x$-variables for the two matches of a matching within the same round have the same value. Consequently, the solution~$y \in \mathds{R}^{\ensuremath{\mathfrak{M}} \times \ensuremath{R}}$ given by~$y_{\ensuremath{M},r} = x_{\{i,j\},r}$ is well-defined and it is immediate to check that~$y$ is feasible for the matching formulation's LP relaxation and has the same objective value as~$x$. \end{proof} Summarizing the previous results of this section, we can provide a complete comparison of the strength of the traditional, matching, and permutation formulation. \begin{theorem} \label{th:overallstrength} For each~$n\geq 6$, the traditional and permutation formulation are relaxation-equivalent, whereas the matching formulation is stronger than either of them. For~$n=4$, the traditional, matching, and permutation formulation for problem SRR are relaxation-equivalent. 
\end{theorem} Besides verifying that all three models are equivalent for~$n=4$, we can also show that the matching formulation's integer hull is already completely characterized by~\eqref{mat:matchingonround}, \eqref{mat:matchplayed}, as well as non-negativity inequalities for all variables. \begin{proposition} For~$n=4$, Equations~\eqref{mat:matchingonround} and~\eqref{mat:matchplayed} as well as non-negativity inequalities define an integral polyhedron. That is, the matching formulation's LP relaxation coincides with its integer hull. \end{proposition} \begin{proof} To prove the proposition's statement, we show that the constraint matrix of~\eqref{mat} is totally unimodular. The result follows then by the Hoffman-Kruskal theorem~\cite{Schrijver1987} as all right-hand side values in~\eqref{mat} are integral. For $n=4$, the set of all matchings $\ensuremath{\mathfrak{M}}$ consists of exactly the three matchings \begin{align*} \ensuremath{M}_1 &= \big\{ \{1, 2\}, \{3, 4\} \big\}, & \ensuremath{M}_2 &= \big\{ \{1, 3\}, \{2, 4\} \big\}, & \ensuremath{M}_3 &= \big\{ \{1, 4\}, \{2, 3\} \big\}. \end{align*} The non-trivial constraints from Formulation~\eqref{mat} are Equations~\eqref{mat:matchingonround} and~\eqref{mat:matchplayed}, which yield system \[ \begin{pmatrix} 1& & &1& & &1& & \\ &1& & &1& & &1& \\ & &1& & &1& & &1 \\ 1&1&1& & & & & & \\ & & &1&1&1& & & \\ & & & & & &1&1&1 \\ & & & & & &1&1&1 \\ & & &1&1&1& & & \\ 1&1&1& & & & & & \\ \end{pmatrix} \begin{pmatrix} y_{\ensuremath{M}_1, 1} \\ y_{\ensuremath{M}_1, 2} \\ y_{\ensuremath{M}_1, 3} \\ y_{\ensuremath{M}_2, 1} \\ y_{\ensuremath{M}_2, 2} \\ y_{\ensuremath{M}_2, 3} \\ y_{\ensuremath{M}_3, 1} \\ y_{\ensuremath{M}_3, 2} \\ y_{\ensuremath{M}_3, 3} \\ \end{pmatrix} = \begin{pmatrix} 1\\1\\1\\1\\1\\1\\1\\1\\1 \end{pmatrix} . 
\quad \begin{array}{l} (\eqref{mat:matchingonround}, r=1) \\ (\eqref{mat:matchingonround}, r=2) \\ (\eqref{mat:matchingonround}, r=3) \\ (\eqref{mat:matchplayed}, m=\{1, 2\}) \\ (\eqref{mat:matchplayed}, m=\{1, 3\}) \\ (\eqref{mat:matchplayed}, m=\{1, 4\}) \\ (\eqref{mat:matchplayed}, m=\{2, 3\}) \\ (\eqref{mat:matchplayed}, m=\{2, 4\}) \\ (\eqref{mat:matchplayed}, m=\{3, 4\}) \\ \end{array} \] Note that the last three equations are redundant and can be removed. The constraint matrix of the remaining equations is the node-edge incidence matrix of a bipartite graph and hence totally unimodular, which concludes the proof. \end{proof} Thus, for $n=4$, simply solving the LP relaxation of the matching formulation by the simplex method suffices to find an optimal integral solution. \section{Strengthening the formulations} \label{sec:inequalities} In this section, we continue our investigations of the structure of the formulations. In Section~\ref{sec:matchingcontinued}, we derive an exponentially sized class of valid inequalities for the matching formulation. Also, we show in Section~\ref{sec:tracontinued} that adding the so-called odd-cut inequalities to the traditional formulation yields a formulation that is relaxation-equivalent to the matching formulation. \subsection{Strengthening the matching formulation} \label{sec:matchingcontinued} Observe that Theorem~\ref{th:overallstrength} does not rule out the possibility that, for $n\geq 6$, every vertex of the matching formulation's LP relaxation is integral. That, however, already fails for~$n = 6$, as we show next. To this end, we first provide a fractional point~$y^\star$ that is contained in the LP relaxation of the matching formulation for~$n=6$. Afterwards, we derive a class of valid inequalities for the matching formulation's integer hull, and finally, we provide one such inequality that is violated by~$y^\star$. \begin{example} \label{ex:examplesolution} Let $n=6$.
Then, the set of teams and rounds is given by~$\ensuremath{T} = \{1, \dots, 6\}$ and rounds~$\ensuremath{R} = \{1, \dots, 5\}$, respectively. In Figure~\ref{fig:examplesolution}, we depict a fractional solution of the matching formulation's LP relaxation. For each round~$r \in \ensuremath{R}$, we provide two perfect matchings between the teams~$T$, the blue and green (dashed) matching~$\ensuremath{M}$, whose corresponding variables~$y_{\ensuremath{M},r}$ have value~$\frac{1}{2}$ in the corresponding solution; all remaining variables have value~0. It is easy to verify that this fractional solution is indeed feasible for the LP relaxation of~\eqref{mat}. \begin{figure}[ht] \begin{subfigure}{\textwidth/5} \centering \begin{tikzpicture} \node (0) at ({sin(0*360/6)}, {cos(0*360/6)}) {1}; \node (1) at ({sin(1*360/6)}, {cos(1*360/6)}) {2}; \node (2) at ({sin(2*360/6)}, {cos(2*360/6)}) {3}; \node (3) at ({sin(3*360/6)}, {cos(3*360/6)}) {4}; \node (4) at ({sin(4*360/6)}, {cos(4*360/6)}) {5}; \node (5) at ({sin(5*360/6)}, {cos(5*360/6)}) {6}; \draw[draw=darkblue, ultra thick] (2) edge (3); \draw[draw=darkblue, ultra thick] (1) edge (4); \draw[draw=darkblue, ultra thick] (0) edge (5); \draw[draw=darkgreen, dashed, ultra thick] (2) edge (4); \draw[draw=darkgreen, dashed, ultra thick] (0) edge (3); \draw[draw=darkgreen, dashed, ultra thick] (1) edge (5); \end{tikzpicture} \caption{round 1} \end{subfigure}% \begin{subfigure}{\textwidth/5} \centering \begin{tikzpicture} \node (0) at ({sin(0*360/6)}, {cos(0*360/6)}) {1}; \node (1) at ({sin(1*360/6)}, {cos(1*360/6)}) {2}; \node (2) at ({sin(2*360/6)}, {cos(2*360/6)}) {3}; \node (3) at ({sin(3*360/6)}, {cos(3*360/6)}) {4}; \node (4) at ({sin(4*360/6)}, {cos(4*360/6)}) {5}; \node (5) at ({sin(5*360/6)}, {cos(5*360/6)}) {6}; \draw[draw=darkblue, ultra thick] (1) edge (2); \draw[draw=darkblue, ultra thick] (0) edge (4); \draw[draw=darkblue, ultra thick] (3) edge (5); \draw[draw=darkgreen, dashed, ultra thick] (2) edge (5); 
\draw[draw=darkgreen, dashed, ultra thick] (1) edge (3); \draw[draw=darkgreen, dashed, ultra thick] (0) edge (4); \end{tikzpicture} \caption{round 2} \end{subfigure}% \begin{subfigure}{\textwidth/5} \centering \begin{tikzpicture} \node (0) at ({sin(0*360/6)}, {cos(0*360/6)}) {1}; \node (1) at ({sin(1*360/6)}, {cos(1*360/6)}) {2}; \node (2) at ({sin(2*360/6)}, {cos(2*360/6)}) {3}; \node (3) at ({sin(3*360/6)}, {cos(3*360/6)}) {4}; \node (4) at ({sin(4*360/6)}, {cos(4*360/6)}) {5}; \node (5) at ({sin(5*360/6)}, {cos(5*360/6)}) {6}; \draw[draw=darkblue, ultra thick] (0) edge (2); \draw[draw=darkblue, ultra thick] (1) edge (4); \draw[draw=darkblue, ultra thick] (3) edge (5); \draw[draw=darkgreen, dashed, ultra thick] (0) edge (2); \draw[draw=darkgreen, dashed, ultra thick] (3) edge (4); \draw[draw=darkgreen, dashed, ultra thick] (1) edge (5); \end{tikzpicture} \caption{round 3} \end{subfigure}% \begin{subfigure}{\textwidth/5} \centering \begin{tikzpicture} \node (0) at ({sin(0*360/6)}, {cos(0*360/6)}) {1}; \node (1) at ({sin(1*360/6)}, {cos(1*360/6)}) {2}; \node (2) at ({sin(2*360/6)}, {cos(2*360/6)}) {3}; \node (3) at ({sin(3*360/6)}, {cos(3*360/6)}) {4}; \node (4) at ({sin(4*360/6)}, {cos(4*360/6)}) {5}; \node (5) at ({sin(5*360/6)}, {cos(5*360/6)}) {6}; \draw[draw=darkblue, ultra thick] (2) edge (4); \draw[draw=darkblue, ultra thick] (1) edge (3); \draw[draw=darkblue, ultra thick] (0) edge (5); \draw[draw=darkgreen, dashed, ultra thick] (0) edge (1); \draw[draw=darkgreen, dashed, ultra thick] (4) edge (5); \draw[draw=darkgreen, dashed, ultra thick] (2) edge (3); \end{tikzpicture} \caption{round 4} \end{subfigure}% \begin{subfigure}{\textwidth/5} \centering \begin{tikzpicture} \node (0) at ({sin(0*360/6)}, {cos(0*360/6)}) {1}; \node (1) at ({sin(1*360/6)}, {cos(1*360/6)}) {2}; \node (2) at ({sin(2*360/6)}, {cos(2*360/6)}) {3}; \node (3) at ({sin(3*360/6)}, {cos(3*360/6)}) {4}; \node (4) at ({sin(4*360/6)}, {cos(4*360/6)}) {5}; \node (5) at ({sin(5*360/6)}, 
{cos(5*360/6)}) {6}; \draw[draw=darkblue, ultra thick] (0) edge (1); \draw[draw=darkblue, ultra thick] (2) edge (5); \draw[draw=darkblue, ultra thick] (3) edge (4); \draw[draw=darkgreen, dashed, ultra thick] (4) edge (5); \draw[draw=darkgreen, dashed, ultra thick] (1) edge (2); \draw[draw=darkgreen, dashed, ultra thick] (0) edge (3); \end{tikzpicture} \caption{round 5} \end{subfigure}% \caption{A feasible point for the LP relaxation of Formulation~\eqref{mat}.} \label{fig:examplesolution} \end{figure} \end{example} To describe our class of valid inequalities, consider the following lemma. \begin{lemma} \label{lem:simpleCGcut} Let~$\ensuremath{m}_1, \ensuremath{m}_2 \in \ensuremath{\mathcal{M}}$ be disjoint and let~$r' \in \ensuremath{R}$. Then, % \begin{equation} \label{eq:cgcutmat} \sum_{r \in \ensuremath{R}\setminus\{r'\}}\sum_{\substack{\ensuremath{M}\in\ensuremath{\mathfrak{M}}\colon\\ \ensuremath{m}_1 \in \ensuremath{M} \text{ or } \ensuremath{m}_2\in\ensuremath{M}}} y_{\ensuremath{M},r} + \sum_{\substack{\ensuremath{M}\in\ensuremath{\mathfrak{M}}\colon\\ \ensuremath{m}_1 \notin \ensuremath{M} \text{ or } \ensuremath{m}_2\notin\ensuremath{M}}} y_{\ensuremath{M},r'} + \sum_{\substack{\ensuremath{M}\in\ensuremath{\mathfrak{M}}\colon\\ \ensuremath{m}_1, \ensuremath{m}_2 \in \ensuremath{M}}} 2y_{\ensuremath{M},r'} \geq 2 \end{equation} % is a valid inequality for~\eqref{mat}. In particular, it is a Chv\'atal-Gomory cut derived from the LP relaxation of~\eqref{mat}. \end{lemma} \begin{proof} It is sufficient to prove that~\eqref{eq:cgcutmat} is indeed a Chv\'atal-Gomory cut. 
To this end, we multiply Equation~\eqref{mat:matchingonround} for round $r'$ and Equations~\eqref{mat:matchplayed} for matches~$\ensuremath{m}_1$ and~$\ensuremath{m}_2$ by~$\frac{1}{2}$ and sum the resulting equations to obtain \[ \sum_{r \in \ensuremath{R}\setminus\{r'\}}\sum_{\substack{\ensuremath{M}\in\ensuremath{\mathfrak{M}}\colon\\ \ensuremath{m}_1 \in \ensuremath{M} \text{ or } \ensuremath{m}_2\in\ensuremath{M}}} \frac{C_{\ensuremath{M}}}{2} y_{\ensuremath{M},r} + \sum_{\substack{\ensuremath{M}\in\ensuremath{\mathfrak{M}}\colon\\ \ensuremath{m}_1 \notin \ensuremath{M} \text{ or } \ensuremath{m}_2\notin\ensuremath{M}}} \frac{1 + C_{\ensuremath{M}}}{2}y_{\ensuremath{M},r'} + \sum_{\substack{\ensuremath{M}\in\ensuremath{\mathfrak{M}}\colon\\ \ensuremath{m}_1, \ensuremath{m}_2 \in \ensuremath{M}}} \tfrac{3}{2}y_{\ensuremath{M},r'} = \frac{3}{2}, \] where~$C_{\ensuremath{M}} = \card{\ensuremath{M} \cap \{\ensuremath{m}_1,\ensuremath{m}_2\}}$; note that this quantity does not depend on the round. Since all~$y$-variables are non-negative, we can turn this equation into a~$\geq$-inequality by rounding up the left-hand side coefficients. Moreover, since in a feasible solution for~\eqref{mat} all variables attain integer values, we can increase the right-hand side from~$\frac{3}{2}$ to~2, which yields the desired inequality. \end{proof} Using this class of inequalities, we can show that the point~$y^\star$ presented in the previous example is indeed not contained in the matching formulation's integer hull. Select~$r' = 1$, $\ensuremath{m}_1 = \{1,6\}$, and~$\ensuremath{m}_2 = \{3,5\}$, and let~$\ensuremath{M}$ be the blue and~$\ensuremath{M}'$ be the green matching of the first round as well as~$\ensuremath{M}''$ the blue matching of round~$4$. Then, the corresponding inequality's left-hand side evaluates at~$y^\star$ to $y^\star_{M'',4} + y^\star_{M,1} + y^\star_{M',1} = \frac{3}{2}$. Hence, $y^\star$ violates the corresponding inequality as~$\frac{3}{2} \ngeq 2$.
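This violation can also be checked mechanically. The following Python snippet (purely illustrative; the matchings are transcribed from Figure~\ref{fig:examplesolution} and the function name is ours) first verifies that~$y^\star$ plays every match with total weight~1 and then evaluates the left-hand side of the cut for~$r' = 1$, $m_1 = \{1,6\}$, $m_2 = \{3,5\}$:

```python
from fractions import Fraction
from itertools import combinations

HALF = Fraction(1, 2)
# y*: per round, the blue and the green matching of Figure 1, weight 1/2 each
rounds = {
    1: [{(3, 4), (2, 5), (1, 6)}, {(3, 5), (1, 4), (2, 6)}],
    2: [{(2, 3), (1, 5), (4, 6)}, {(3, 6), (2, 4), (1, 5)}],
    3: [{(1, 3), (2, 5), (4, 6)}, {(1, 3), (4, 5), (2, 6)}],
    4: [{(3, 5), (2, 4), (1, 6)}, {(1, 2), (5, 6), (3, 4)}],
    5: [{(1, 2), (3, 6), (4, 5)}, {(5, 6), (2, 3), (1, 4)}],
}

# Feasibility: every match is played with total weight 1 over all rounds.
for match in combinations(range(1, 7), 2):
    assert sum(HALF for Ms in rounds.values() for M in Ms if match in M) == 1

def cg_cut_lhs(r_prime, m1, m2):
    """Left-hand side of the cut from the lemma above, evaluated at y*."""
    lhs = Fraction(0)
    for r, matchings in rounds.items():
        for M in matchings:
            hits = len({m1, m2} & M)            # |M cap {m1, m2}|
            if r != r_prime:
                coeff = 1 if hits >= 1 else 0   # ceil(hits / 2)
            else:
                coeff = 2 if hits == 2 else 1   # ceil((1 + hits) / 2)
            lhs += coeff * HALF
    return lhs

print(cg_cut_lhs(1, (1, 6), (3, 5)))  # 3/2, which falls short of the RHS 2
```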
Note that Inequality~\eqref{eq:cgcutmat} is a so-called~$\{0, \frac{1}{2}\}$-cut~\cite{CapraraFischetti1996} as all multipliers used in the derivation are~$\frac{1}{2}$ (and~0 for inequalities/equations that have not been used). By taking more equations into account when generating a valid inequality, we can generalize~\eqref{eq:cgcutmat} to an exponentially large class of inequalities. \begin{proposition} Let~$A \subseteq \ensuremath{\mathcal{M}}$ be a set of pairwise disjoint matches and let~$B \subseteq \ensuremath{R}$. If~$\card{A} + \card{B}$ is odd, then % \begin{equation} \label{eq:generalcg} \sum_{\ensuremath{M}\in\ensuremath{\mathfrak{M}}} \sum_{r \in B} \left\lceil \frac{1 + \card{\ensuremath{M} \cap A}}{2}\right\rceil y_{\ensuremath{M},r} + \sum_{\ensuremath{M}\in\ensuremath{\mathfrak{M}}} \sum_{r \in \ensuremath{R}\setminus B} \left\lceil \frac{\card{\ensuremath{M} \cap A}}{2}\right\rceil y_{\ensuremath{M},r} \geq \frac{1 + \card{A} + \card{B}}{2}, \end{equation} % is a valid inequality for~\eqref{mat}. In particular, it is a Chv\'atal-Gomory cut derived from the LP relaxation of~\eqref{mat}. \end{proposition} \begin{proof} We follow the line of the proof of Lemma~\ref{lem:simpleCGcut} and multiply each constraint of type~\eqref{mat:matchplayed} with index in~$A$ and each constraint of type~\eqref{mat:matchingonround} with index in~$B$ by~$\frac{1}{2}$ and sum all resulting equations. This leads to \[ \sum_{j = 0}^{\nicefrac{n}{2}} \sum_{\substack{\ensuremath{M}\in\ensuremath{\mathfrak{M}}\colon\\ \card{\ensuremath{M}\cap A} = j}} \sum_{r \in B} \frac{1 + j}{2} y_{\ensuremath{M},r} + \sum_{j = 1}^{\nicefrac{n}{2}} \sum_{\substack{\ensuremath{M}\in\ensuremath{\mathfrak{M}}\colon\\ \card{\ensuremath{M}\cap A} = j}} \sum_{r \in \ensuremath{R}\setminus B} \frac{j}{2} y_{\ensuremath{M},r} = \frac{\card{A} + \card{B}}{2}.
\] Since all~$y$-variables are non-negative, we derive the inequality \[ \sum_{\ensuremath{M}\in\ensuremath{\mathfrak{M}}} \sum_{r \in B} \left\lceil \frac{1 + \card{\ensuremath{M} \cap A}}{2}\right\rceil y_{\ensuremath{M},r} + \sum_{\ensuremath{M}\in\ensuremath{\mathfrak{M}}} \sum_{r \in \ensuremath{R}\setminus B} \left\lceil \frac{\card{\ensuremath{M} \cap A}}{2}\right\rceil y_{\ensuremath{M},r} \geq \frac{\card{A} + \card{B}}{2}, \] and by integrality of the~$y$-variables, we can round up the right-hand side, which leads to the desired inequality. \end{proof} While Inequalities~\eqref{eq:cgcutmat} can trivially be separated in polynomial time, an efficient separation algorithm for~\eqref{eq:generalcg} is not immediate. We leave the complexity status of separating~\eqref{eq:generalcg} open for future research. \subsection{Strengthening the traditional formulation} \label{sec:tracontinued} Revisiting the proof of Lemma~\ref{lem:strength}, it becomes clear that it is possible to assign, for a fixed round, each edge (match) of an odd cycle in~$K_n$ a weight of~$\frac{1}{2}$. That is, the traditional formulation can assign an odd cycle of length~$k$ a weight of~$\frac{k}{2}$. Such a solution, however, cannot be written as a convex combination of integer feasible solutions, because each such solution defines a perfect matching on the matches of a fixed round, i.e., the total weight of an odd cycle can be at most~$\frac{k-1}{2}$. To strengthen the traditional formulation, one can thus add facet defining inequalities for the perfect matching polytope~$P_M$ to Model~\eqref{tra}, which results in the additional inequalities \begin{align} \label{eq:blossom} \sum_{i \in U} \sum_{j \in T \setminus U} x_{\{i,j\},r} &\geq 1, && U \subseteq T \text{ with } \card{U}\text{ odd}, r \in \ensuremath{R}, \end{align} which correspond to the odd-cut inequalities for the matching polytope and can be separated in polynomial time. \begin{lemma} Let~$n \geq 6$. 
The traditional formulation~\eqref{tra} extended by~\eqref{eq:blossom} is relaxation-equivalent to the matching formulation. \end{lemma} \begin{proof} We use the same proof strategy as for Lemma~\ref{lem:strength}. Therefore, consider again the solution~$x \in \mathds{R}^{\ensuremath{\mathcal{M}} \times \ensuremath{R}}$ given by~$x_{\ensuremath{m},r} = \sum_{\ensuremath{M}\in\ensuremath{\mathfrak{M}}\colon \ensuremath{m}\in\ensuremath{M}} y_{\ensuremath{M},r}$ for a solution~$y$ of the matching formulation's LP relaxation. Due to the proof of Lemma~\ref{lem:strength}, it is sufficient to show that~$x$ satisfies~\eqref{eq:blossom} to prove that the matching formulation is at least as strong as the enhanced traditional formulation. Let~$U \subseteq \ensuremath{T}$ have odd cardinality. Since every $\ensuremath{M} \in \ensuremath{\mathfrak{M}}$ is a perfect matching and~$\card{U}$ is odd, at least one team in~$U$ is matched with a team outside of~$U$. Hence, for each~$\ensuremath{M} \in \ensuremath{\mathfrak{M}}$, there is a match~$\{i,j\} \in \ensuremath{M}$ with~$i \in U$ and~$j \notin U$. Then, % \begin{align*} \sum_{i \in U} \sum_{j \in \ensuremath{T} \setminus U} x_{\{i,j\},r} &= \sum_{i \in U} \sum_{j \in \ensuremath{T} \setminus U} \sum_{\substack{\ensuremath{M} \in \ensuremath{\mathfrak{M}}\colon\\ \{i,j\} \in \ensuremath{M}}} y_{\ensuremath{M},r}\\ &= \sum_{\ensuremath{M}\in\ensuremath{\mathfrak{M}}} \sum_{i \in U} \sum_{\substack{j \in T \setminus U\colon\\ \{i,j\}\in \ensuremath{M}}} y_{\ensuremath{M},r} \geq \sum_{\ensuremath{M}\in\ensuremath{\mathfrak{M}}} y_{\ensuremath{M},r} \overset{~\eqref{mat:matchingonround}}{=} 1. \end{align*} % Consequently, the matching formulation is at least as strong as the enhanced traditional formulation. To prove that the enhanced traditional formulation is not weaker than the matching formulation, we use a strategy similar to the one pursued in the proof of Lemma~\ref{lem:equitraandper}.
Since the enhanced traditional formulation contains, per round~$r$, all facet defining inequalities as well as equations for the perfect matching polytope~$P_M$, each vector~$X^r \in \mathds{R}^{\ensuremath{\mathcal{M}}}$ given by~$X^r_{\{i,j\}} = x_{\{i,j\},r}$ is contained in~$P_M$. Hence, there exist non-negative multipliers~$\lambda^r \in \mathds{R}_+^{\ensuremath{\mathfrak{M}}}$ with~$\sum_{\ensuremath{M} \in \ensuremath{\mathfrak{M}}} \lambda^r_{\ensuremath{M}} = 1$ such that~$X^r = \sum_{\ensuremath{M}\in\ensuremath{\mathfrak{M}}} \lambda^r_\ensuremath{M} V_\ensuremath{M}$, where~$V_\ensuremath{M}$ is the vertex of~$P_M$ corresponding to the perfect matching~$\ensuremath{M}$. We claim that~$y \in \mathds{R}^{\ensuremath{\mathfrak{M}} \times \ensuremath{R}}$ given by~$y_{\ensuremath{M},r} = \lambda^r_\ensuremath{M}$ is feasible for the LP relaxation of the matching formulation. Because~$X^r = \sum_{\ensuremath{M} \in \ensuremath{\mathfrak{M}}} \lambda^r_\ensuremath{M} V_\ensuremath{M}$ implies~$X^r_{\ensuremath{m}} = \sum_{\ensuremath{M}\in\ensuremath{\mathfrak{M}}\colon \ensuremath{m}\in\ensuremath{M}} \lambda^r_\ensuremath{M}$, both~\eqref{mat:matchingonround} and~\eqref{mat:matchplayed} are satisfied as % \begin{align*} \sum_{\ensuremath{M} \in \ensuremath{\mathfrak{M}}} y_{\ensuremath{M},r} &= \sum_{\ensuremath{M} \in \ensuremath{\mathfrak{M}}} \lambda^r_\ensuremath{M} = 1,\\ \sum_{\substack{\ensuremath{M}\in\ensuremath{\mathfrak{M}}\colon\\ \ensuremath{m} \in \ensuremath{M}}} \sum_{r \in \ensuremath{R}} y_{\ensuremath{M},r} &= \sum_{\substack{\ensuremath{M}\in\ensuremath{\mathfrak{M}}\colon\\ \ensuremath{m} \in \ensuremath{M}}} \sum_{r \in \ensuremath{R}} \lambda^r_\ensuremath{M} = \sum_{r \in \ensuremath{R}} x_{\ensuremath{m},r} \overset{~\eqref{tra:matchplayed}}{=} 1. 
\end{align*} % Moreover, $y$ is non-negative as the~$\lambda$'s form a convex combination and both~$x$ and~$y$ have the same objective value since \[ \sum_{\ensuremath{M}\in\ensuremath{\mathfrak{M}}}\sum_{r\in\ensuremath{R}} d_{\ensuremath{M},r}y_{\ensuremath{M},r} = \sum_{\ensuremath{M}\in\ensuremath{\mathfrak{M}}}\sum_{r\in\ensuremath{R}} \sum_{\ensuremath{m}\in\ensuremath{M}}c_{\ensuremath{m},r}\lambda^r_\ensuremath{M} = \sum_{\ensuremath{m}\in\ensuremath{\mathcal{M}}}\sum_{r\in\ensuremath{R}}\sum_{\substack{\ensuremath{M}\in\ensuremath{\mathfrak{M}}\colon\\ \ensuremath{m}\in\ensuremath{M}}} c_{\ensuremath{m},r}\lambda^r_\ensuremath{M} = \sum_{\ensuremath{m}\in\ensuremath{\mathcal{M}}}\sum_{r\in\ensuremath{R}} c_{\ensuremath{m},r}x_{\ensuremath{m},r}, \] which concludes the proof. \end{proof} \begin{remark} Since the traditional and permutation formulation are equivalent, one might wonder whether also the permutation formulation can be enhanced by odd-cut inequalities. Indeed, using the transformation~$x_{\{i,j\},r} = \sum_{\pi \in \permswotor{i}{j}{r}} z_{i,\pi}$ as in the proof of Lemma~\ref{lem:equitraandper}, one can show that the corresponding version of odd-cut inequalities is given by % \begin{align*} \sum_{i \in U} \sum_{j \in \ensuremath{T}\setminus U} \sum_{\pi \in \permswotor{i}{j}{r}} z_{i,\pi} &\geq 1, && U \subseteq T \text{ with } \card{U}\text{ odd}, r \in \ensuremath{R}, \end{align*} % and that the enhanced traditional and permutation formulation are equivalent. \end{remark} \section{An extension: $k$-round robin tournaments} \label{sec:kRR} In this section, we generalize the models for single round robin tournaments to $k$-round robin tournaments, where each pair of teams is required to meet exactly $k$ times, for $k \geq 1$. As a consequence, the total number of matches that need to be scheduled becomes $\frac12kn(n-1)$, and we set $\ensuremath{R} \coloneqq \{1,2, \ldots, k(n-1)\}$. 
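Feasibility of this round structure follows from a standard construction: build a single round robin with the classical circle method and repeat it $k$ times. The following stdlib sketch (our own illustration, not part of the paper's formulations; function names are ours) produces $k(n-1)$ rounds in which every pair of teams meets exactly $k$ times and every team plays once per round:

```python
def single_round_robin(n):
    """Classical circle method: n even teams, n - 1 rounds,
    each pair of teams meets exactly once, one match per team per round."""
    teams = list(range(n))
    rounds = []
    for _ in range(n - 1):
        # pair position i with position n - 1 - i
        rounds.append({frozenset((teams[i], teams[n - 1 - i])) for i in range(n // 2)})
        # keep the first team fixed and rotate the remaining ones
        teams = [teams[0], teams[-1]] + teams[1:-1]
    return rounds

def k_round_robin(n, k):
    """Repeat the SRR schedule k times: k(n - 1) rounds in which
    every pair of teams meets exactly k times."""
    return single_round_robin(n) * k
```

Of course, such a fixed schedule ignores the costs $c_{\ensuremath{m},r}$; the integer programming formulations below are needed to find a cost-minimal assignment.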
\begin{problem}[$k$RR] \label{prob:krr} Let~$n \geq 4$ be an even integer and let~$k \geq 1$ be integral. Given~$n$ teams with corresponding matches~$\ensuremath{\mathcal{M}}$, a set of~$k(n-1)$ rounds~$\ensuremath{R}$, as well as an integral cost~$c_{\ensuremath{m},r}$ for every~$\ensuremath{m}\in\ensuremath{\mathcal{M}}$ and round~$r \in \ensuremath{R}$, the $k$-\emph{round robin ($k$RR)} problem is to find an assignment~$\mathcal{A} \subseteq \ensuremath{\mathcal{M}} \times \ensuremath{R}$ of matches to rounds such that (i) every team plays a single match per round and (ii) each match is played in~$k$ pairwise distinct rounds, while total cost $\sum_{(\ensuremath{m},r) \in \mathcal{A}} c_{\ensuremath{m},r}$ is minimized. \end{problem} \noindent Problem~\ref{prob:srr} (SRR) is the special case of $k$RR that arises when $k = 1$. Another prominent special case arises when $k=2$, the so-called Double Round Robin tournament, denoted hereafter by DRR. In principle, it is easy to generalize the models from Section~\ref{sec:models} to account for meeting $k$ times instead of once. Indeed, by replacing the right-hand side of constraints \eqref{tra:matchplayed} or the right-hand side of constraints \eqref{mat:matchplayed} by $k$, or by redefining $\permswo{i}$ and $\permswotor{i}{j}{r}$ to ordered lists for team $i$ that feature every opponent $j$ exactly $k$ times, the resulting formulations for $k$RR directly arise. It is not difficult to verify that the results concerning the polynomial solvability of the linear relaxations (Lemmata~\ref{lem:pricematch} and~\ref{lem:pricepermutation}) as well as the strength of the relaxations (Theorem~\ref{th:overallstrength}) carry over to $k$RR for each $k \geq 1$. However, in practice, a number of additional properties become relevant when considering $k$-round robin tournaments: \emph{phased} tournaments and tournaments where playing \emph{home} or \emph{away} matters.
We now discuss these properties, and their consequences for the formulations, in more detail. \paragraph{Phased} (PH) The tournament is split into $k$ parts such that each pair of teams meets once in each part. Here a {\em part} of the tournament refers to $n-1$ consecutive rounds, starting at round $\ell(n-1) +1$, for $\ell \in \{0, \dots, k-1\}$. Moreover, we use $\ensuremath{R}_{\ell} \coloneqq \{\ell (n-1)+1, \dots, (\ell+1)(n-1)\}$ to denote the rounds in part $\ell \in \{0, \dots, k-1\}$, and $\ensuremath{R} \coloneqq \bigcup_{\ell=0}^{k-1} \ensuremath{R}_{\ell}$. In the absence of any additional constraints, a phased tournament trivially decomposes into multiple single round robin tournaments: one for each set of rounds~$\ensuremath{R}_\ell$. \paragraph{Home-away} (HA) Each team has a home venue, implying that it is no longer sufficient to specify the matches in each round; instead, one also has to specify, for each match, which team plays home and which team plays away. We denote this by redefining a match between teams $i, j \in \ensuremath{T}$ ($i \neq j$), where $i$ is the home-playing team, as an ordered pair $(i, j)$ (in contrast to an unordered pair $\{i, j\}$). Let~$k$ be a positive integer. For $n$ teams and a $k$-round robin setting, denote $\ensuremath{T} \coloneqq \{1, \dots, n\}$ and $\ensuremath{R} \coloneqq \{1, \dots, k(n-1) \}$. Let $\ensuremath{\mathcal{M}} \coloneqq \{ (i, j) : i, j \in \ensuremath{T}, i \neq j \}$ be the set of ordered matches. The assignment of match $(i, j) \in \ensuremath{\mathcal{M}}$ to round $r \in \ensuremath{R}$ comes at a cost $c_{(i, j), r}$, and in contrast to the SRR case, $c_{(i, j), r}$ and $c_{(j, i), r}$ can be different.
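Both properties can be illustrated on a classical mirrored schedule: play a single round robin in part $\ell = 0$ and replay it in part $\ell = 1$ with all home-away orientations flipped, so that for $k=2$ each ordered match is played exactly once. A minimal stdlib sketch (our own illustration, not a method from the paper):

```python
def circle_srr(n):
    """Circle-method single round robin with an orientation:
    in each match tuple (i, j), team i is taken to play at home."""
    teams = list(range(n))
    rounds = []
    for _ in range(n - 1):
        rounds.append([(teams[i], teams[n - 1 - i]) for i in range(n // 2)])
        teams = [teams[0], teams[-1]] + teams[1:-1]
    return rounds

def mirrored_drr(n):
    """Phased double round robin: part 1 replays part 0 with
    all home-away orientations flipped."""
    first = circle_srr(n)
    second = [[(j, i) for (i, j) in rnd] for rnd in first]
    return first + second
```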
We proceed by describing the phased $k$-round robin problem with home-away patterns ($k$RR-PH-HA): \begin{problem}[$k$RR-PH-HA] \label{prob:krrha} Given an even number~$n \geq 4$ of teams with corresponding matches~$\ensuremath{\mathcal{M}}$, a set of~$k(n-1)$ rounds~$\ensuremath{R}$ ($k \geq 1$), as well as an integral cost~$c_{\ensuremath{m},r}$ for every~$\ensuremath{m}\in\ensuremath{\mathcal{M}}$ and round~$r \in \ensuremath{R}$, the $k$RR-PH-HA problem is to find an assignment~$\mathcal{A} \subseteq \ensuremath{\mathcal{M}} \times \ensuremath{R}$ of matches to rounds such that (i) every team plays a single match per round, (ii) each pair of teams meets once in each part~$\ensuremath{R}_{\ell}$, $\ell \in \{0, \ldots, k-1\}$, and (iii) each ordered match is played $\lfloor \frac{k}{2} \rfloor$ or~$\lceil \frac{k}{2} \rceil$ times so that each pair of teams meets in total $k$ times, while total cost $\sum_{(\ensuremath{m},r) \in \mathcal{A}} c_{\ensuremath{m},r}$ is minimized. \end{problem} We will show how the three formulations of Sections~\ref{sec:traditionalformulation}--\ref{sec:permutationformulation} can be adapted to deal with these properties. \paragraph{Extending the traditional formulation for $k$RR tournaments} For convenience, we assume $k$ is even; this implies that for each pair of distinct teams $i,j \in T$, match~$(i,j)$ and match~$(j,i)$ each need to occur $\frac{k}{2}$ times in any feasible schedule.
\begin{subequations} \makeatletter \def\@currentlabel{$k$T} \makeatother \renewcommand{\theequation}{$k$T\arabic{equation}}% \label{trakRR} \begin{align} \min \sum_{(i,j) \in \ensuremath{\mathcal{M}}} \sum_{r \in \ensuremath{R}} c_{(i,j),r}x_{(i,j),r} &&&\label{trakRR:obj}\\ \sum_{r \in \ensuremath{R}} x_{(i,j),r} &= \frac{k}{2}, && (i,j) \in \ensuremath{\mathcal{M}},\label{trakRR:eachmatch}\\ \sum_{r \in \ensuremath{R}_\ell} (x_{(i,j),r} + x_{(j,i),r}) &= 1, && i,j \in \ensuremath{T},\; i\neq j,\; \ell \in \{0,\dots,k-1\}, \label{trakRR:eachpart}\\ \sum_{j \in \ensuremath{T}\setminus\{i\}} (x_{(i,j),r} + x_{(j,i),r}) &= 1, && i \in \ensuremath{T},\; r \in \ensuremath{R},\label{trakRR:eachteameachround}\\ x_{(i,j),r} &\in \{0,1\}, && (i,j) \in \ensuremath{\mathcal{M}},\; r \in \ensuremath{R}. \label{trakRR:int} \end{align} \end{subequations} Constraints \eqref{trakRR:eachmatch} ensure that each match is played $\frac{k}{2}$ times, Constraints \eqref{trakRR:eachpart} express that each pair of teams has to meet once in each part (the ``phased'' property), and Constraints \eqref{trakRR:eachteameachround} prescribe that each team plays a single match in each round. \paragraph{Extending the matching formulation for $k$RR tournaments} We now assume that $K_n=(T,A)$ is a {\em directed} multi-graph, where each arc $(i,j)$ with~$i,j \in T$, $i \neq j$ is present $\frac{k}{2}$ times; a (directed) matching $\ensuremath{M}$ is now defined as a set of $\frac{n}{2}$ arcs incident to each node once, and $\ensuremath{\mathfrak{M}}$ now stands for the set of all (directed) matchings. 
\begin{subequations} \makeatletter \def\@currentlabel{$k$M} \makeatother \renewcommand{\theequation}{$k$M\arabic{equation}}% \label{matkRR} \begin{align} \min \sum_{\ensuremath{M}\in\ensuremath{\mathfrak{M}}} \sum_{r\in\ensuremath{R}} d_{\ensuremath{M},r} y_{\ensuremath{M},r} &&&\label{matkRR:obj}\\ \sum_{\ensuremath{M}\in\ensuremath{\mathfrak{M}}} y_{\ensuremath{M},r} &=1, && r\in\ensuremath{R},\label{matkRR:eachround}\\ \sum_{r\in\ensuremath{R}}\sum_{\substack{\ensuremath{M}\in\ensuremath{\mathfrak{M}}\colon\\(i,j)\in\ensuremath{M}}} y_{\ensuremath{M},r} &= \frac{k}{2}, && (i,j) \in \ensuremath{\mathcal{M}},\label{matkRR:eachmatch}\\ \sum_{r\in\ensuremath{R}_{\ell}} \sum_{\substack{\ensuremath{M}\in\ensuremath{\mathfrak{M}}\colon\\(i,j)\in\ensuremath{M} \text{ or } (j,i)\in\ensuremath{M}}} y_{\ensuremath{M},r} &=1, && i,j \in T,\; i \neq j,\; \ell \in \{0,\dots,k-1\},\label{matkRR:eachpart}\\ y_{\ensuremath{M},r} &\in \{0,1\}, && \ensuremath{M}\in\ensuremath{\mathfrak{M}},\; r \in\ensuremath{R}. \end{align} \end{subequations} Constraints \eqref{matkRR:eachround} ensure that a (directed) perfect matching is selected in each round, Constraints~\eqref{matkRR:eachmatch} ensure that each match occurs $\frac{k}{2}$ times, and Constraints \eqref{matkRR:eachpart} model the fact that each pair of teams has to meet once in each part. \paragraph{Extending the permutation formulation for $k$RR tournaments} We have to redefine $\permswo{i}$: each entry needs to specify home or away, and of course, the fact that each team meets all other teams in each set of rounds~$\ensuremath{R}_\ell$, $\ell \in \{0,\dots,k-1\}$, needs to be taken into account. Other than that, Formulation \eqref{per} remains unchanged. Without giving formal proofs, we claim that the linear relaxations of the extension of the matching formulation, as well as the linear relaxation of the extension of the permutation formulation, can be solved in polynomial time.
We also claim that the extension of the matching formulation is stronger than the other two formulations---the adaptations to the proofs of Lemmata~\ref{lem:pricematch} and \ref{lem:pricepermutation} and Theorem~\ref{th:overallstrength} are straightforward. \section{Computational results} \label{sec:computationalresults} In this section, we report the outcomes of our computational experiments. Section~\ref{sec:instances+solvers} describes the test set that we have used in our experiments. Afterwards, we investigate the quality of the LP relaxations of the different models and compare their corresponding values in Section~\ref{sec:computationalcomparison}. Finally, we discuss our experience with solving instances of the SRR problem using the matching formulation, i.e., not only solving the LP relaxation but also the corresponding integer program. To this end, we have implemented a branch-and-price algorithm whose details are described in Section~\ref{sec:branchandprice}. \subsection{Test set} \label{sec:instances+solvers} We have generated~\num{1000} instances\footnote{The instances as well as the implementation of our algorithms are publicly available at \url{https://github.com/JasperNL/round-robin}; all experiments have been conducted using the code with githash \texttt{1657b4d7}. } of the SRR problem to evaluate the quality of the LP relaxations of the different models. Our test set comprises instances of different sizes and thus different levels of difficulty, which are parameterized by a tuple~$(n, \rho)$ and have cost coefficients attaining values~$0$ or~$1$. Parameter~$n$ encodes the number of teams and has range~$n \in \{6, 12, 18, 24\}$; parameter~$\rho$ controls the number of~1-entries in the objective.
More precisely, we pick a subset~$S \subseteq \ensuremath{\mathcal{M}} \times \ensuremath{R}$ of match-round pairs of size~$\lfloor \rho \cdot \card{\ensuremath{\mathcal{M}} \times \ensuremath{R}} \rfloor$ uniformly at random, where $\rho \in \{0.5, 0.6, 0.7, 0.8, 0.9\}$. The generated instance consists of $n$ teams and has cost coefficients~$c_{m,r} = 1$ if~$(m, r) \in S$ and~$c_{m,r} = 0$ otherwise. For each combination of $n \in \{6, 12, 18, 24\}$ and $\rho \in \{0.5, 0.6, 0.7, 0.8, 0.9\}$, we have generated 50 instances. \subsection{A computational comparison of the linear relaxations} \label{sec:computationalcomparison} In this section, we provide a computational comparison between the LP relaxation values of the traditional formulation~\eqref{tra} (and thus by Lemma~\ref{lem:equitraandper} also of the permutation formulation) and the LP relaxation values of the matching formulation~\eqref{mat}, and compare these to the actual optimal (or best found) integral solutions. Before discussing our numerical results, we first provide details about our implementation as well as about how we find optimal integral solutions. \paragraph{Implementation details} To find the LP relaxation values, we implemented both formulations in Python~3 using the \texttt{PySCIPOpt} 4.1.0 package~\cite{MaherMiltenbergerPedrosoRehfeldtSchwarzSerrano2016} for \texttt{SCIP}~8.0.0~\cite{BestuzhevaEtal2021OO}, with \texttt{CPLEX} 20.1.0.0 as LP solver. The traditional formulation is implemented as a compact model. For the matching formulation, we use a column generation procedure that receives a subset of all variables, solves the corresponding LP relaxation restricted to these variables, and adds further variables until it can prove that an optimal LP solution has been found. To identify whether new variables need to be added, we solve the so-called pricing problem, which corresponds to a separation problem for the current dual solution.
The separation problem can be solved by finding a maximum weight perfect matching as detailed in the proof of Lemma~\ref{lem:pricematch}. We start with the empty set of variables, which means that the primal problem is initially infeasible. Analogously to the proof of Lemma~\ref{lem:pricematch}, we resolve infeasibility by adding variables whose associated dual constraints are violated by a dual unbounded ray of this infeasible problem. The column generation procedure has been embedded in a so-called \texttt{pricer} plug-in of \texttt{SCIP}, which adds newly generated variables to the matching formulation. The maximum weight perfect matchings are computed using \texttt{NetworkX}~2.5.1, which provides an implementation of Edmonds' blossom algorithm. \paragraph{Finding optimal integer solutions} To obtain the optimal integer solution value of as many instances as possible, we have used two different solvers to solve the integer program of Model~\eqref{tra}. On the one hand, we have used \texttt{SCIP} as described in the above setup. On the other hand, we have modeled~\eqref{tra} using \texttt{Gurobi}~9.1.2 via its Python~3 interface. For each instance and solver, we have imposed a time limit of~\SI{48}{\hour} to find an optimal integer solution. Using \texttt{SCIP}, we managed to solve 852 of the 1000 instances to optimality. With \texttt{Gurobi}, we were able to solve 866 of the 1000 instances to optimality. There were 45 instances where \texttt{SCIP} found a better primal objective value, and 79 instances where \texttt{Gurobi} found a better primal objective value. All experiments have been run on a compute cluster with identical machines, using one (resp. two) thread(s) on Xeon Platinum 8260 processors, with \SI{10.7}{\giga\byte} (resp. \SI{21.4}{\giga\byte}) memory, respectively for \texttt{SCIP} and \texttt{Gurobi}.
\begin{table}[!tbp] \caption{Comparison of the LP relaxation values of the traditional and matching formulation.} \label{tab:comp} \footnotesize \centering \begin{tabular}{ @{} r@{\hspace{4pt}} r r@{\ }r@{\ }r c@{\ }c c@{\hspace{4pt}} r@{\ }r@{\ }r c@{\ }c r@{\hspace{4pt}}r @{} } \toprule &&\multicolumn{5}{c}{all instances} &\multicolumn{8}{c}{restricted to instances with $\lprelaxation{\rm tra} < \ensuremath{v^{\mathrm{IP}}}$} \\ \cmidrule(l{4pt}r{4pt}){3-7} \cmidrule(l{4pt}r{4pt}){8-15} &&\multicolumn{3}{c}{average value} &\multicolumn{2}{c}{solved} &&\multicolumn{3}{c}{average value} &\multicolumn{2}{c}{solved} &\multicolumn{2}{c}{gap closed} \\ \cmidrule(l{4pt}r{4pt}){3-5} \cmidrule(l{4pt}r{4pt}){6-7} \cmidrule(l{4pt}r{4pt}){9-11} \cmidrule(l{4pt}r{4pt}){12-13} \cmidrule(l{4pt}r{4pt}){14-15} $n$ & $\rho$ & $\lprelaxation{\rm tra}$ & $\lprelaxation{\rm mat}$ & $\ensuremath{v^{\mathrm{IP}}}$ & O & T & \# & $\lprelaxation{\rm tra}$ & $\lprelaxation{\rm mat}$ & $\ensuremath{v^{\mathrm{IP}}}$ & O & T & average & maximal \\ \midrule 6 & 0.5 & 2.227 & 2.297 & 2.380 & 50 & 0 & 14 & 2.452 & 2.702 & 3.000 & 14 & 0 & 43.45\% & 100.00\% \\ 6 & 0.6 & 3.802 & 3.865 & 3.920 & 50 & 0 & 12 & 3.924 & 4.188 & 4.417 & 12 & 0 & 52.08\% & 100.00\% \\ 6 & 0.7 & 5.430 & 5.510 & 5.540 & 50 & 0 & 9 & 5.611 & 6.056 & 6.222 & 9 & 0 & 66.67\% & 100.00\% \\ 6 & 0.8 & 7.620 & 7.635 & 7.660 & 50 & 0 & 4 & 7.250 & 7.438 & 7.750 & 4 & 0 & 37.50\% & 100.00\% \\ 6 & 0.9 & 10.003 & 10.040 & 10.060 & 50 & 0 & 6 & 10.361 & 10.667 & 10.833 & 6 & 0 & 66.67\% & 100.00\% \\ \midrule 12 & 0.5 & 0.080 & 0.080 & 0.080 & 50 & 0 & 0 & -- & -- & -- & 0 & 0 & -- & -- \\ 12 & 0.6 & 2.018 & 2.213 & 3.480 & 50 & 0 & 49 & 2.019 & 2.217 & 3.510 & 49 & 0 & 15.29\% & 100.00\% \\ 12 & 0.7 & 8.022 & 8.342 & 9.500 & 50 & 0 & 50 & 8.022 & 8.342 & 9.500 & 50 & 0 & 22.27\% & 70.77\% \\ 12 & 0.8 & 17.184 & 17.474 & 18.340 & 50 & 0 & 50 & 17.184 & 17.474 & 18.340 & 50 & 0 & 29.89\% & 100.00\% \\ 12 & 0.9 & 31.459 & 31.654 & 
31.840 & 50 & 0 & 31 & 31.096 & 31.410 & 31.710 & 31 & 0 & 56.87\% & 100.00\% \\ \midrule 18 & 0.5 & 0.000 & 0.000 & 0.000 & 50 & 0 & 0 & -- & -- & -- & 0 & 0 & -- & -- \\ 18 & 0.6 & 0.060 & 0.060 & 0.060 & 50 & 0 & 0 & -- & -- & -- & 0 & 0 & -- & -- \\ 18 & 0.7 & 2.045 & 2.292 & 5.600 & 50 & 0 & 50 & 2.045 & 2.292 & 5.600 & 50 & 0 & 6.68\% & 15.66\% \\ 18 & 0.8 & 19.831 & 20.330 & 23.900 & 50 & 0 & 50 & 19.831 & 20.330 & 23.900 & 50 & 0 & 12.37\% & 28.19\% \\ 18 & 0.9 & 52.700 & 53.066 & 54.500 & 50 & 0 & 49 & 52.673 & 53.047 & 54.510 & 49 & 0 & 21.55\% & 100.00\% \\ \midrule 24 & 0.5 & 0.000 & 0.000 & 0.000 & 50 & 0 & 0 & -- & -- & -- & 0 & 0 & -- & -- \\ 24 & 0.6 & 0.000 & 0.000 & 0.000 & 50 & 0 & 0 & -- & -- & -- & 0 & 0 & -- & -- \\ 24 & 0.7 & 0.200 & 0.200 & 4.340 & 0 & 50 & 49 & 0.163 & 0.163 & 4.408 & 0 & 49 & 0.00\% & 0.00\% \\ 24 & 0.8 & 12.352 & 12.893 & 24.180 & 0 & 50 & 50 & 12.352 & 12.893 & 24.180 & 0 & 50 & 4.57\% & 7.91\% \\ 24 & 0.9 & 69.327 & 69.922 & 74.860 & 29 & 21 & 50 & 69.327 & 69.922 & 74.860 & 29 & 21 & 10.69\% & 21.68\% \\ \bottomrule \end{tabular} \end{table} \paragraph{Numerical results} Table~\ref{tab:comp} shows the aggregated computational results of our experiments. For each number of teams $n$ and ratio $\rho$, we provide the average of the objective values of the relaxation of the traditional formulation (column ``$\lprelaxation{\rm tra}$''), the average of the objective values of the relaxation of the matching formulation (column ``$\lprelaxation{\rm mat}$''), and the average optimum value (column ``$\ensuremath{v^{\mathrm{IP}}}$''). Notice that for $n=24$ we have not been able to solve all instances to optimality; in this case, we use the value of the best known solution instead of the (unknown) optimum for that instance in the~$\ensuremath{v^{\mathrm{IP}}}$ column. Recall that each value is an average over 50 instances. 
The number of optimally solved instances (resp.\ instances not terminating within the time limit) is shown in column ``O'' (resp.\ ``T''). To assess the strength of the matching formulation compared to the traditional formulation, we focus, on the right side of the table, on those instances for which $\lprelaxation{\rm tra} < \ensuremath{v^{\mathrm{IP}}}$; their number (out of 50) is given in the column labeled ``\#''. From this column, we see that the fraction~$\rho$ that leads to instances with a gap between $\lprelaxation{\rm tra}$ and $\ensuremath{v^{\mathrm{IP}}}$ slowly increases with $n$. Indeed, for $n=6$, most instances do not have a gap, for $n=12$, almost all instances with $\rho \in \{0.6, 0.7, 0.8\}$ have a gap, and for $n=18$, almost all instances with $\rho \in \{0.7, 0.8, 0.9\}$ have a gap. We use the notion of the \emph{relative gap} that is closed by the matching formulation relative to the traditional formulation, given by \[ {\rm rgap}(I) \coloneqq \frac{ \lprelaxation{\rm mat}(I) - \lprelaxation{\rm tra}(I)}{ \ensuremath{v^{\mathrm{IP}}}(I) - \lprelaxation{\rm tra}(I)}\ % {\rm for~an~instance~}I~{\rm of~SRR} \text{ with } \ensuremath{v^{\mathrm{IP}}}(I) - \lprelaxation{\rm tra}(I) > 0. \] A value of zero for ${\rm rgap}(I)$ implies that the relaxation values of the traditional formulation and the matching formulation are equal, while a value of one (i.e., 100\%) implies that the relaxation of the matching formulation is equal to the true objective of the optimal integral solution. The column ``average'' gives the average {\rm rgap}, whereas column ``maximal'' shows the maximum relative closed gap for an instance of this sub test set. For $n=6$, there are few instances with a gap. However, for those instances for which there is a gap, it is clear that a sizable part of that gap is closed by the relaxation of the matching formulation. For larger values of~$n$, many instances have a gap.
We observe that a significant percentage of the gap is closed by the relaxation of the matching formulation. As~$n$ grows, however, both the average and the maximal closed gap decrease. We conclude that for small values of~$n$, and thus for many realistic applications, the matching formulation provides a much better relaxation value than the traditional formulation. \subsection{A branch-and-price algorithm} \label{sec:branchandprice} Since the matching formulation can dominate the traditional formulation, a natural question is whether the stronger formulation also allows one to solve the SRR problem faster than the traditional formulation. For this reason, we have implemented a branch-and-price algorithm (in the computational setup as described above) to compute optimal integral solutions of the matching formulation. That is, we use a branch-and-bound algorithm to solve the matching formulation, where each LP relaxation is solved using a column generation procedure. \paragraph{Implementation details} In classical branch-and-bound algorithms, the most common way to implement the branching scheme is to select a variable~$x_i$ whose value~$x^\star_i$ in the current LP solution is non-integral and to generate two subproblems by additionally enforcing either~$x_i \leq \lfloor x^\star_i \rfloor$ or~$x_i \geq \lceil x^\star_i \rceil$. In principle, this strategy is also feasible for the matching formulation, where the subproblems correspond to forbidding a schedule~$\ensuremath{M}\in\ensuremath{\mathfrak{M}}$ for a round~$r \in \ensuremath{R}$ or fixing the schedule in round~$r$ to be~$\ensuremath{M}$. This branching scheme, however, leads to a very unbalanced branch-and-bound tree as the former subproblem only rules out a very specific schedule, while the latter one fixes the matches of an entire round.
Another difficulty of the classical scheme is that it might affect the structure of the pricing problem in the newly generated subproblems. Ideally, the pricing problem should not change, so that the same algorithm can be used for adding new variables to the problem. We will address both issues next. To obtain a more balanced branch-and-bound tree, we have implemented a custom branching rule following the Ryan-Foster branching scheme~\cite{RyanFoster1981}: Our scheme selects a match~$\{i, j\} \in \ensuremath{\mathcal{M}}$ at a round~\mbox{$r \in \ensuremath{R}$} and creates two children. In the left child, we forbid that $\{i, j\}$ is played in round~$r$, and in the right child, we enforce that $\{i, j\}$ is played in round $r$. Note that for all matchings~$\ensuremath{M} \in \ensuremath{\mathfrak{M}}$ this branching decision fixes all variables $y_{\ensuremath{M},r}$ to zero if $\{i, j\} \in \ensuremath{M}$ for the left child, and $\{i, j\} \notin \ensuremath{M}$ for the right child. Using this branching strategy, the structure of the pricing problem at each subproblem remains a matching problem. At the root node of the branch-and-bound tree, we need to solve a maximum weight perfect matching problem in a weighted version of~$K_n$ as described above. At other nodes of the branch-and-bound tree, we have added branching decisions that enforce that two teams~$i$ and~$j$ either do meet or do not meet in a round~$r \in \ensuremath{R}$. These decisions can easily be incorporated by deleting edges from~$K_n$. When generating variables for round~$r$, we remove edge~$\{i,j\}$ from~$K_n$ if~$i$ and~$j$ shall not meet in this round; if the match~$\{i,j\}$ shall take place, then we remove all edges incident with~$i$ and~$j$ except for~$\{i,j\}$. Consequently, our branching strategy allows us to solve the LP relaxations of all subproblems in polynomial time.
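The edge deletions described above can be sketched as follows (a simplified stand-alone illustration; the function name and data representation are ours):

```python
def allowed_edges(n, forbidden, enforced):
    """Edge set of K_n for one round after applying branching decisions.
    forbidden/enforced are sets of frozenset matches; an enforced match
    {i, j} removes every other edge incident to i or j."""
    edges = {frozenset((i, j)) for i in range(n) for j in range(i + 1, n)}
    edges -= forbidden
    for m in enforced:
        # keep m itself and all edges disjoint from {i, j}
        edges = {e for e in edges if e == m or not (e & m)}
    return edges
```

Pricing for the round then amounts to a maximum weight perfect matching on this reduced edge set, so the subproblem keeps the same structure as at the root.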
Since our Python implementation of the traditional and matching formulation took too much time to be used in a branching scheme, we decided to implement our branch-and-price algorithm as a plug-in using the C-API of \texttt{SCIP}. The pricer plug-in is analogous to the Python version, and maximum weight perfect matchings are now computed using the \texttt{LEMON}~1.3.1 graph library. To ensure that the branching decisions are taken into account, we also implemented a constraint that fixes $y_{\ensuremath{M}, r}$ to zero if the matching~$\ensuremath{M}$ violates the branching decisions for round~$r$, and added a plug-in that implements the branching decisions. The branching rule sketched above admits some degrees of freedom in selecting the match~$\{i,j\}$ and round~$r$. In our implementation, we decided to mimic two well-known branching rules: most infeasible branching and strong branching on a selection of variables, see Achterberg et~al.\@\xspace~\cite{achterberg2005branchingrulesrevised} for an overview on branching rules. Most infeasible branching branches on a binary variable with fractional value in an LP solution that is closest to~0.5, and strong branching branches on the variable that yields the largest dual bound improvement based on some metric. Since strong branching requires significant computational effort, it is common to preselect a limited number of branching candidates and apply strong branching only to those.
\begin{algorithm}[!tbp] \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \caption{Determining the branching candidate for an LP node.} \label{alg:branchingrule} \Input{An LP solution $y^\star$ in a branch-and-bound tree node at depth $d$ with objective ${\rm obj}$.} \Output{The branching decision (match $m \in \ensuremath{\mathcal{M}}$ on round $r \in \ensuremath{R}$), or detected integrality.} \tcp{fractional assignment of match $m$ to round $r$} compute ${\rm assign}_{m, r} \gets \sum_{M \in \ensuremath{\mathfrak{M}}\colon \ensuremath{m}\in\ensuremath{M}} y^\star_{M,r}$ for $m \in \ensuremath{\mathcal{M}}$ and $r \in \ensuremath{R}$% \; \BlankLine \If{${\rm assign}_{m, r}$ is $0$ or $1$ for all $m \in \ensuremath{\mathcal{M}}$ and $r \in \ensuremath{R}$}% { \Return integral solution found\; } \BlankLine \tcp{fractional part of ${\rm assign}_{m, r}$} compute ${\rm frac}_{m, r} \gets \min\{ {\rm assign}_{m, r}, 1.0 - {\rm assign}_{m, r} \}$ for $m \in \ensuremath{\mathcal{M}}$ and $r \in \ensuremath{R}$% \; \BlankLine \tcp{score for every match-round pair} compute $ {\rm score}_{m, r} \gets {\rm frac}_{m, r} \cdot (1.0 + | c_{m, r} |) \cdot ({\rm assign}_{m, r})^2 $ for $m \in \ensuremath{\mathcal{M}}$ and $r \in \ensuremath{R}$% \label{alg:branchingrule:score} \; \BlankLine \tcp{strong branching candidate selection} $ {\rm number\_of\_candidates} \gets \max\{ 1, \lfloor 0.1 \cdot \card{\ensuremath{\mathcal{M}} \times \ensuremath{R}} \cdot 0.65^{d} \rfloor \} $\; \If{${\rm number\_of\_candidates} > 1$}{ pick ${\rm number\_of\_candidates}$ candidates $(m, r) \in \ensuremath{\mathcal{M}} \times \ensuremath{R}$ with highest ${\rm score}_{m, r}$ as strong branching candidates\; \ForEach{strong branching candidate $(m, r)$}{ \If{${\rm score}_{m, r} = 0.0$}{ \tcp{then ${\rm assign}_{m, r}$ is 0 or 1, skip this candidate} \textbf{continue}\; } apply strong branching on $(m, r)$, with objectives ${\rm obj}_{\rm forbid}$, ${\rm
obj}_{\rm enforce}$ in the two children\; compute ${\rm score}_{m, r}^\star \gets ({\rm obj}_{\rm forbid} - {\rm obj} + 1.0) \cdot ({\rm obj}_{\rm enforce} - {\rm obj} + 1.0)$\; \label{alg:branchingrule:scorestar} } \Return branch on strong branching candidate $(m, r)$ with maximal~${\rm score}_{m, r}^\star$\; } \Else{ \tcp{do not apply strong branching deep in the branch-and-bound tree} \Return branch on $(m, r) \in \ensuremath{\mathcal{M}} \times \ensuremath{R}$ with maximal ${\rm score}_{m, r}$\; } \end{algorithm} The pseudocode for our branching rule is given in Algorithm~\ref{alg:branchingrule}. We start by computing the fractional match-on-round assignment values induced by the $y$-variables. Then, we make a selection of potentially good branching candidates~$(m, r) \in \ensuremath{\mathcal{M}} \times \ensuremath{R}$, that is based on the ${\rm score}_{m, r}$ metric shown in Line~\ref{alg:branchingrule:score}: this score prioritizes match-on-round assignments for which \begin{enumerate*}[label=(\roman*), ref=(\roman*)] \item\label{score1} the assignment value is close to $0.5$, \item\label{score2} the cost coefficients are large, and \item\label{score3} the assignment values are relatively high. \end{enumerate*} Using this score, we hope to resolve fractionality soon (by~\ref{score1}). By~\ref{score2}, we want to enforce a significant change of the objective value in the child that forbids match~$\ensuremath{m}$, whereas the child enforcing that~$\ensuremath{m}$ is played selects a match that is most likely played due to~\ref{score3}. Experiments show that full strong branching leads to a smaller number of nodes to solve the problem, but this turns out to be very costly computationally. Therefore, we only apply strong branching for branch-and-bound tree nodes close to the root, and only evaluate a subset of branching candidates that have the highest ${\rm score}_{m, r}$-metric. This is our candidate pre-selection. 
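The scoring and candidate pre-selection of Algorithm~\ref{alg:branchingrule} can be sketched as follows (a simplified stand-alone illustration; the function name, tie handling, and tolerance are our assumptions):

```python
import math

def preselect_candidates(assign, cost, depth, eps=1e-6):
    """Score every match-round pair as in the branching rule and return
    the strong-branching candidate list, highest scores first."""
    scores = {}
    for (m, r), a in assign.items():
        frac = min(a, 1.0 - a)
        if frac < eps:  # integral assignment: not a branching candidate
            continue
        # frac * (1 + |c|) * assign^2, as in the score line of the algorithm
        scores[(m, r)] = frac * (1.0 + abs(cost[(m, r)])) * a * a
    # candidate budget shrinks geometrically with the node depth
    k = max(1, math.floor(0.1 * len(assign) * 0.65 ** depth))
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k]
```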
The higher the depth of the considered branch-and-bound tree node, the smaller the number of candidates considered. Of those branching candidates, we pick the candidate that maximizes~${\rm score}_{m, r}^{\star}$ as defined in Line~\ref{alg:branchingrule:scorestar}. The goal of this score is to choose the candidate for which the objective values of the two hypothetical children differ most from the current node's objective. By taking the product of the two differences, we prioritize candidates for which both hypothetical children change the objective, rather than only one of them. If the number of candidates is only one, no strong branching is applied and that candidate match-on-round assignment is chosen for branching. All experiments have been run on a Linux cluster with Intel Xeon E5 \SI{3.5}{\GHz} quad core processors and~\SI{32}{\giga\byte} memory. The code was executed using a single thread and the time limit for all computations was~\SI{2}{\hour} per instance. \paragraph{Numerical results} Table~\ref{tab:comp:branch} summarizes the results for our instances with~$n \in \{6,12,18\}$. We distinguish the instances by their parameters~$n$ and~$\rho$, and we report on the number of instances that could be solved (resp.\ could not be solved) within the time limit in column ``O'' (resp. ``T''). Moreover, we report on the minimum, mean, and maximum running time per parameterization. The mean of all running times~$t_i$ is reported as the shifted geometric mean~$\prod_{i = 1}^{50} (t_i + s)^{\frac{1}{50}} - s$ with a shift of~$s = \SI{10}{\second}$ to reduce the impact of instances with very small running times.
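As an aside, the shifted geometric mean used above can be computed as follows. This is a small illustrative sketch (the log-space formulation is an implementation choice of ours, and the sample times in the assertions are made up), not part of the evaluation scripts.

```python
import math

def shifted_geometric_mean(times, shift=10.0):
    """Shifted geometric mean prod_i (t_i + s)^(1/n) - s with shift s.

    Computed in log space to avoid overflow in the product for
    large collections of running times.
    """
    n = len(times)
    mean_log = sum(math.log(t + shift) for t in times) / n
    return math.exp(mean_log) - shift
```

Compared with the arithmetic mean, a single time-limit hit of \SI{7200}{\second} influences this aggregate far less, while the shift damps the contribution of near-zero running times.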
\begin{table}[t] \caption{Computational results for the branch-and-price algorithm for Model~\eqref{mat}.} \label{tab:comp:branch} \footnotesize \centering \begin{tabular}{@{}rrcccrrr@{}} \toprule &&&\multicolumn{2}{c}{Solved}& \multicolumn{3}{c}{Solving time (s)}\\ \cmidrule(l{4pt}r{4pt}){4-5} \cmidrule(l{4pt}r{4pt}){6-8} $n$ & $\rho$ & \# & O & T & min & mean & max \\ \midrule 6&0.5&50&50& 0& 0.00& 0.00& 0.01\\ 6&0.6&50&50& 0& 0.00& 0.00& 0.01\\ 6&0.7&50&50& 0& 0.00& 0.00& 0.01\\ 6&0.8&50&50& 0& 0.00& 0.00& 0.01\\ 6&0.9&50&50& 0& 0.00& 0.00& 0.01\\ \midrule 12&0.5&50&50& 0& 1.25& 2.38& 5.73\\ 12&0.6&50&50& 0& 0.12& 4.09& 8.28\\ 12&0.7&50&50& 0& 0.21& 3.33& 7.38\\ 12&0.8&50&50& 0& 0.14& 2.05& 7.63\\ 12&0.9&50&50& 0& 0.11& 0.54& 2.21\\ \midrule 18&0.5&50&50& 0& 54.61& 106.09& 210.77\\ 18&0.6&50&48& 2& 103.41& 866.21& 7200.00\\ 18&0.7&50& 3&47& 930.53& 6854.47& 7200.09\\ 18&0.8&50&22&28& 312.65& 4084.21& 7200.06\\ 18&0.9&50&50& 0& 6.74& 332.98& 3990.61\\ \bottomrule \end{tabular} \end{table} We observe that instances with~6 and~12 teams can be solved very efficiently: within fractions of a second in the former case and within a few seconds in the latter. Instances with~18 teams are more challenging, in particular if the ratio~$\rho \in \{0.7, 0.8\}$. In these cases, only~3 and~22 instances could be solved, respectively, but note that not all instances are equally difficult. For instance, for~$n = 18$ and~$\rho = 0.8$, there exists an instance that can be solved within roughly five minutes, whereas the mean running time is more than an hour. To fully benefit from the strong LP relaxation of the matching formulation, additional algorithmic enhancements might be needed to further improve the performance of the branch-and-price algorithm. \section{Conclusion} \label{sec:conclusion} The use of integer programming for finding schedules of round robin tournaments is widespread.
We have introduced and analyzed two new formulations for this problem, one of which (the matching formulation) is stronger than the other formulations. We have proposed a class of valid inequalities for the matching formulation, which may be of use when developing cutting-plane based techniques for this problem. By randomly generating instances, we studied the strength of the formulations, and we implemented a branch-and-price algorithm based on the matching formulation to assess its efficiency. Although this algorithm is able to solve small-scale instances rather efficiently, solving large instances of the SRR efficiently remains a challenge. Possible directions of future research are thus to further strengthen our integer programming formulations/techniques. On the one hand, one can investigate additional cutting planes to strengthen both the traditional and matching formulation. For the matching formulation, cutting planes will in particular affect the pricing problem and thus might change its structure. Thus, the trade-off between the strength of cutting planes and the difficulty of solving the pricing problem needs to be investigated. On the other hand, one can enhance our branch-and-price algorithm in several directions, e.g., by developing more sophisticated branching rules or heuristics for producing good schedules. \paragraph{Acknowledgment} The fourth author's research is supported by the Dutch Research Council (NWO) through Gravitation grant NETWORKS-024.002.003. \bibliographystyle{abbrv}
\section{Introduction} \label{sec:introduction} \renewcommand{\theitheorem}{\Alph{itheorem}} We initiate a program which relates the geometry of affine Grassmannians with the representation theory of shifted Yangians. More precisely, we study slices in affine Grassmannians which arise naturally in geometric representation theory; they correspond to weight spaces of irreducible representations under the geometric Satake correspondence. Our main result is that certain subquotients of Yangians quantize these slices. There is a general program to study symplectic resolutions by means of the representation theory of their quantizations, generalizing the interplay between semisimple Lie algebras and nilpotent cones. We believe that the representation theory of shifted Yangians and its relationship to the geometry of slices in the affine Grassmannian will prove to be a very fruitful area of inquiry. \subsection{Slices in the affine Grassmannian} Let $ G $ be a complex semisimple group and consider its \textbf{thick affine Grassmannian} $ \Gr = G((t^{-1}))/G[t] $. Attached to each pair of dominant coweights $\la\geq \mu$, we have Schubert varieties $\Gr^\la,\Gr^\mu\subset \Gr$, with $\Gr^\mu\subset \overline{\Gr^\la}$. The neighborhood in $\overline{\Gr^\la}$ of a point in $\Gr^\mu$ is encapsulated in a transversal slice to the latter variety in the former, which we denote by $\Grlmbar$. This slice is an important object of study in geometric representation theory because under the geometric Satake correspondence it is related to the $\mu$ weight space in the irreducible representation of $ G^\vee $ of highest weight $ \lambda $. The Manin triple $ (\fg[t], t^{-1}\fg[[t^{-1}]], \fg((t^{-1})) ) $ provides $ \Gr $ with the structure of a Poisson variety. The slice $ \Grlmbar $ is an affine Poisson subvariety and thus its coordinate ring is naturally a Poisson algebra. The purpose of this paper is to explicitly describe quantizations of this Poisson algebra.
\subsection{Quotients of shifted Yangians} The slice $ \Grlmbar $ is defined as the intersection $ \overline{\Gr^\la} \cap \Gr_\mu $, where $ \Gr_\mu $ is an orbit of the group $ \Gm$, the first congruence subgroup of $ G[[t^{-1}]] $. Thus on the level of functions $ \O(\Grlmbar) $ is a quotient of $ \O(\Gr_\mu) $, and $ \O(\Gr_\mu) $ is a subalgebra of $ \O(\Gm) $. In order to quantize $ \Grlmbar $ we follow a three-step procedure which mirrors this construction. We first construct a version $Y $ of the Yangian, which is a subalgebra of the Drinfeld Yangian. Next, we define natural subalgebras $Y_\mu\subset Y$, called {\bf shifted Yangians}, which quantize $ \Gr_\mu $. This generalizes the shifted Yangian for $\mathfrak{gl}_n$ introduced by Brundan-Kleshchev \cite{BK}. Finally, we define a quotient $Y_\mu^\lambda$ of $Y_\mu $ using some remarkable representations of $ Y $ as difference operators, constructed by Gerasimov-Kharchev-Lebedev-Oblezin \cite{GKLO}. \begin{itheorem} The algebras defined above are all quantizations of the analogous geometric objects. That is: \begin{enumerate} \item The Yangian $Y$ quantizes $\Gm$. \item The shifted Yangian $Y_\mu$ quantizes $\Gr_\mu$. \item The quotient $Y^\la_\mu $ quantizes a (possibly non-reduced) scheme supported on $ \Grlmbar$. \end{enumerate} \end{itheorem} Item (1) above is proven using the Drinfeld-Gavarini duality for quantum groups, (2) follows simply from (1), and (3) follows using the GKLO representation. In fact, we produce a family $ Y^\la_\mu(\mathbf c) $ of quantizations which we conjecture to map surjectively to the universal family in the sense of Bezrukavnikov-Kaledin \cite{BK04a}. Unfortunately, we are not able to prove that the scheme quantized by $Y^\la_\mu $ is reduced. However, we do provide a conjectural description of the generators of the ideal of $ \Grlmbar $ inside $ \Gr_\mu $ and prove that this conjecture implies that $Y^\la_\mu $ quantizes the reduced scheme structure on $ \Grlmbar$.
Moreover, we prove that this conjecture gives a simple description for the ideal defining $ Y^\la_\mu$. \subsection{Motivation and relation to other work} Brundan-Kleshchev \cite{BK} construct an isomorphism between quotients of shifted Yangians of $ \mathfrak{gl}_n $ and $ W$-algebras of $ \mathfrak{gl}_m $. On the one hand, it is known that $ W$-algebras are quantizations of Slodowy slices. On the other hand, by the work of Mirkovi\'c\xspace-Vybornov \cite{MVy} we have an isomorphism between Slodowy slices for $ \mathfrak{sl}_m $ and slices in the affine Grassmannian for $ GL_n $. Thus via these results, we see that quotients of shifted Yangians for $ \mathfrak{gl}_n $ quantize slices in the affine Grassmannian for $ GL_n $. This motivated us to look for a direct construction of quantizations of affine Grassmannian slices (for any semisimple $ G $) using quotients of shifted Yangians. (The idea that the Brundan-Kleshchev isomorphism should be thought of as a quantization of the Mirkovi\'c\xspace-Vybornov isomorphism was independently observed by Losev \cite[Remark 5.3.4]{Lo}.) If we take a limit of $ \Grlmbar$ as $ \lambda \rightarrow \infty $ with $ \lambda-\mu $ fixed, then the slice $ \Grlmbar $ becomes the Zastava space $ Z_{\lambda - \mu}$. Finkelberg-Rybnikov \cite{FR} have given conjectural quantizations of Zastava spaces (for $PGL_n$) using quotients of Borel Yangians, which are a limit of shifted Yangians. Thus in this limit we prove their conjectures, contingent on the above-mentioned conjecture about the ideal of $ \Grlmbar$. Earlier work on shifted Yangians by Brundan and Kleshchev \cite{BK2} suggests that one natural direction for future work is the study of a version of category $\cO$ over the algebra $Y^\la_\mu$. Because of the geometric Satake correspondence, we think of category $ \cO $ for $ Y^\la_\mu $ as a categorification of a weight space in a representation of the Langlands dual group $G^\vee$.
Thus we expect that these categories (with $\la $ fixed) carry categorical $ \fg^\vee $-actions. Moreover, conjectures of Braden, Licata, Proudfoot and the second author \cite{BLPW} suggest that category $ \cO $ for $ Y^\la_\mu $ should be Koszul dual to similar categories constructed from quiver varieties (in type A, we expect that this reduces to parabolic-singular duality of Beilinson-Ginzburg-Soergel \cite{BGS}). \subsection{Acknowledgements} We would like to thank Alexander Braverman, Pavel Etingof, Mikhail Finkelberg, Ivan Mirkovi\'c\xspace, Sergey Oblezin, Travis Schedler and Catharina Stroppel for extremely useful conversations. \section{Symplectic structure on slices in the affine Grassmannian} \subsection{Notation}\label{sec:notation} For any group $ H $, we will write $ H((t^{-1})) = H(\C((t^{-1}))) $ for its loop group and write $ H[t] = H(\C[t]) $ and $ H[[t^{-1}]] = H(\C[[t^{-1}]]) $ for its usual subgroups. Let $ H_1[[t^{-1}]] $ denote the first congruence subgroup of $ H[[t^{-1}]] $, i.e. the kernel of the evaluation at $ t^{-1} = 0 $, $ H[[t^{-1}]] \rightarrow H $. Throughout $ G $ will denote a fixed complex semisimple group, with opposite Borel subgroups $ B, B_-$, unipotent subgroups $N, N_-$, maximal torus $ T$, coweight lattice $ X $, Weyl group $ W $, set of roots $ \Delta $, simple roots $ \{\alpha_i\}_{i\in I} $. We write $ \{\om_i \}_{i \in I} $ for the fundamental weights of the simply connected form of $ G $. Following Drinfeld, we use generators $e_i, f_i, h_i$ for $\fg$ where $$ [h_i, e_j] = (\alpha_i, \alpha_j) e_j, \quad [h_i, f_j] = -(\alpha_i,\alpha_j) f_j, \quad [e_i, f_j] = \delta_{ij} h_i $$ along with the usual Serre relations. Let $(a_{ij})_{1 \leq i,j \leq n}$ be the Cartan matrix of $\fg$, and let $d_i$ be the unique coprime positive integers such that $b_{ij} = d_i a_{ij}$ is a symmetric matrix. 
Then the associated invariant form on $\fg$ is defined by $(e_i, f_j) = \delta_{ij}$, and $(\alpha_i, \alpha_j) = (h_i, h_j) = d_i a_{ij}$, and in particular $h_i$ is the image of $\alpha_i$ under the identification of $\fh$ and $\fh^\ast$. This is as opposed to the standard Chevalley generators $e_i', f_i', h_i'$, which we will identify as $$e_i = - d_i^{1/2} e_i',\quad f_i = - d_i^{1/2} f_i',\quad h_i = d_i h_i'$$ In this way we have fundamental weights $\omega_i(h_j') = \delta_{ij}$, and a lift of the Weyl group defined via $\overline{s_i} = \exp{(f_i')}\exp{(-e_i')}\exp{(f_i')}$. If $ \mu $ is a weight or coweight, we write $ \mu^* = -w_0 \mu $. Likewise, we write $ i^* $ for the index with $\alpha_{i^*} = -w_0 \alpha_i $. Let $ V $ be a representation of $ G $, and let $ v \in V $, $ \beta \in V^*$. The matrix entry $ \Delta_{\beta, v} $ is a function on $ G $ given by $ \Delta_{\beta,v}(g) = \langle \beta, g v \rangle $. If $w_1,w_2 \in W $ and $ \tau $ is a dominant weight, we define $$ \Delta_{w_1\tau, w_2\tau}(g) = \langle\overline{w_1} v_{-\tau}, g\, \overline{w_2} v_\tau\rangle $$ using the lift described above, where $v_\tau$ is the highest weight vector for the irreducible representation $V(\tau)$ and $ v_{-\tau} $ is the dual lowest weight vector in $ V(\tau^*) $. Using this matrix entry (also known as generalized minor), we define the function $ \Delta_{\beta,v}^{(s)} $ on $G((t^{-1})) $, for $ s \in \mathbb{Z} $, whose value at $ g $ is the coefficient of $ t^{-s} $ in $ \Delta_{\beta, v}(g)$. More precisely, these are given by the formula \begin{equation*} \Delta_{\beta,v}(g) = \sum_{s=-\infty}^{\infty} \Delta_{\beta, v}^{(s)}(g) t^{-s} \end{equation*} \subsection{Slices in the affine Grassmannian} Let $ G $ be a semisimple complex group. In this paper, we will work with the \textbf{thick affine Grassmannian} $ \Gr = G((t^{-1})) / G[t] $.
We have an embedding of the usual thin affine Grassmannian into the thick affine Grassmannian $$ G((t))/G[[t]] \cong G[t,t^{-1}]/G[t] \hookrightarrow G((t^{-1})) / G[t] $$ In this paper, we work with the thick affine Grassmannian since it is forced upon us by the non-commutative algebras we consider. One manifestation of this is the fact that the thick Grassmannian is an honest scheme, while the thin Grassmannian is only an ind-scheme. However, at a first reading, this difference will be of little importance, and the reader can pretend that we are working with the usual thin affine Grassmannian. Any coweight $ \lambda$ can be thought of as a $\C[t,t^{-1}]$-point of $G$, which we can think of as a $\C((t^{-1}))$-point as well. To avoid confusion, we use $t^\lambda$ to denote this point in $ G((t^{-1})) $. We also use $t^\lambda$ for the image of $ t^\la $ in $ \Gr $. Let $\la$ and $\mu$ denote dominant coweights. Define \[ \Gr^\lambda = \Gp t^\lambda, \qquad \Gr_{\mu} = \Gm t^{ w_0 \mu}. \] Recall that the thin affine Grassmannian is precisely $\cup_\lambda \Gr^\lambda $. Our main object of interest will be \[ \Grlmbar := \overline{\Gr^\lambda} \cap \Gr_\mu.\] This variety is a transverse slice to $\Gr^\mu $ inside of $ \overline{\Gr^\lambda}$ since $\Gr_\mu$ intersects every $\Gr^\nu$ transversely, and the intersection $\Gr^{\overline{\mu}}_\mu$ is just the point $t^{w_0\mu}$. In particular, this variety is non-empty if and only if $\mu\leq \la$, that is, if $\Gr^\mu\subset \overline{\Gr^\la}$. These varieties arise naturally under the geometric Satake correspondence of Lusztig \cite{L}, Ginzburg \cite{G}, and Mirkovi\'c-Vilonen \cite{MV}: the intersection homology of $\Grlmbar$ is identified with the $\mu$ weight space of the irreducible $G^\vee$-representation of highest weight $\la$. Note that $ \kb^\times $ acts on $\Gr $ by loop rotation. This action preserves the $ G[t] $ and $ \Gm $ orbits and so $ \kb^\times $ acts on $ \Grlmbar $.
The following result is standard. \begin{proposition}\mbox{} \begin{enumerate} \item $ \Grlmbar $ is an affine variety of dimension $ 2\langle \rho, \lambda - \mu \rangle$. \item The action of $ \kb^\times $ on $ \Grlmbar $ contracts $ \Grlmbar$ to the unique fixed point $ t^{w_0 \mu} $. \end{enumerate} \end{proposition} \begin{example} \label{eg:Kleinian} If $\la=\mu+\al_i^\vee$, then $ \Grlmbar $ is isomorphic to the Kleinian singularity $\C^2/(\Z/(n+2))$ where $ n= \langle \mu, \alpha_i \rangle$. To see this, first we identify $$ \C^2/(\Z/(n+2)) = \{ (u, v, w) : uv + w^{n+2} = 0 \} $$ and then we define the isomorphism \begin{align*} \C^2/(\Z/(n+2)) &\rightarrow \Grlmbar \\ (u,v,w) &\mapsto \phi_i \left( \left[ \begin{smallmatrix} 1 - wt^{-1} & v t^{-(n+1)} \\ ut^{-1} & 1 + wt^{-1} + \dots + w^{n+1} t^{-(n+1)} \end{smallmatrix} \right] \right)t^{w_0 \mu} \end{align*} where $ \phi_i : SL_2 \rightarrow G $ denotes the $ SL_2 $ corresponding to $ \alpha_i $. Note that the displayed matrix indeed lies in $SL_2((t^{-1}))$: its determinant equals $(1 - wt^{-1})(1 + wt^{-1} + \dots + w^{n+1} t^{-(n+1)}) - uv\, t^{-(n+2)} = 1 - (uv + w^{n+2})\, t^{-(n+2)} = 1$. \end{example} Let $ G((t^{-1}))_\mu $ denote the stabilizer of $ t^{w_0 \mu} $ inside of $ G((t^{-1}))$. The following easy result describes the stabilizer on the Lie algebra level. \begin{lemma} \label{le:LieStab} $\operatorname{Lie}(G((t^{-1}))_\mu) = \mathfrak{t}[t] \oplus \bigoplus_{\alpha \in \Delta} t^{\langle \alpha, w_0 \mu \rangle} \fg_\alpha[t]$. \end{lemma} \begin{proof} The result follows immediately after observing that for $ g \in G((t^{-1}))$, we have $ g \in G((t^{-1}))_\mu $ if and only if $ t^{-w_0\mu} g t^{w_0 \mu} \in G[t]$. \end{proof} In what follows, we will need the following set-theoretic description of $ \overline{\Gr^\lambda} $ due to Finkelberg-Mirkovi\'c\xspace \cite{FM}. As we shall see, it is much trickier to find a description of this variety with its natural reduced scheme structure. \begin{proposition} \label{pr:SetTheoryGrlam} Let $ g \in G((t^{-1})) $.
We have $ [g] \in \overline{\Gr^\lambda} $ if and only if $ \Delta_{\beta, v}^{(s)}(g) = 0 $ for all dominant weights $ \tau $, for all $ v \in V(\tau), \beta \in V(\tau)^* $ and for all $ s < \langle \lambda, w_0 \tau \rangle $. \end{proposition} \begin{proof} Fix $ \tau $ and let $ k $ be the minimal $ s $ such that there exists $ \beta \in V(\tau)^*, v \in V(\tau) $ with $ \Delta_{\beta, v}^{(s)} (g) \ne 0 $ (if such a minimum exists). It is easy to see that $ k $ only depends on the $G[t] $ double coset containing $ g $. Thus if $ [g] \in \Gr^\lambda $, we have that $ k = \langle \lambda, w_0 \tau \rangle $. The result follows. \end{proof} The proof makes it clear that the Proposition holds even if $ \tau $ only ranges over a set of dominant weights which spans (over $ \mathbb Q $) the weight lattice. \subsection{Symplectic structure on the affine Grassmannian} \label{se:SymplecticStructure} There is a non-degenerate pairing on $\fg((t^{-1})) $ coming from the residue pairing and the invariant form on $\fg $. Hence the Lie algebras $ \fg[t] $, $ t^{-1}\fg[[t^{-1}]] $, and $ \fg((t^{-1})) $ form a Manin triple (see \cite{Dr}). This induces a Poisson-Lie structure on $G((t^{-1}))$ with $G[t]$ and $\Gm$ as Poisson subgroups. In particular, it coinduces a Poisson structure on $\Gr$, by standard calculations which date back to work of Drinfeld \cite{Drpoisson}. Let us state a couple of results concerning the interaction between this symplectic structure and the geometry considered in the previous section. These results were originally obtained by Mirkovi\'c\xspace \cite{MPC}. \begin{theorem} The subvarieties $ \Gr^\lambda_\mu = \Gr^\lambda \cap \Gr_\mu $ are symplectic leaves of $ \Gr$. \end{theorem} \begin{proof} First we note that $ \Gr^\lambda_\mu $ are connected by \cite[1.4]{Rich}, since $\fg((t^{-1}))=\fg[t]\oplus t^{-1}\fg[[t^{-1}]]$. The argument is stated there for finite dimensional groups, but carries through to the loop situation without issues.
Then the result follows from \cite[Corollary 2.9]{LY}. \end{proof} These are not all symplectic leaves of $\Gr$, since not every $\Gm$-orbit contains a point $t^{w_0\mu}$ and not every $ G[t] $ orbit contains a point $ t^\lambda $. A general symplectic leaf which lies in the thin affine Grassmannian is of the form $ \Gr^\lambda \cap \Gm gt^{w_0\mu} $ where $ g \in G $. Let $ S^\mu = N((t^{-1})) t^{w_0\mu}$. An {\bf MV cycle} is a component of $ \overline{\Gr^{\lambda}} \cap S^\mu $. By Mirkovi\'c\xspace-Vilonen, these MV cycles give a basis for weight spaces of irreducible representations of the Langlands dual group. As we now see, the MV cycles are Lagrangians in $ \Grlmbar $. \begin{proposition} $\overline{\Gr^\lambda} \cap S^\mu $ is a Lagrangian subvariety of $ \Grlmbar $. \end{proposition} \begin{proof} First we prove that $ \overline{\Gr^\la} \cap S^\mu \subset \Grlmbar $. Since $ N$ is unipotent, we have that $ N((t^{-1})) = N_1[[t^{-1}]] N[t]$. Now by Lemma \ref{le:LieStab}, we have that $ N[t]t^{w_0 \mu} = t^{w_0 \mu} $. Hence $N((t^{-1})) t^{w_0 \mu} = N_1[[t^{-1}]] t^{w_0 \mu} $ and thus $ S^\mu \subset \Gr_\mu $. From \cite{MV}, $\dim \overline{\Gr^\lambda} \cap S^\mu = \langle \rho, \lambda-\mu \rangle $ and thus the intersection $\overline{\Gr^\lambda} \cap S^\mu $ is half-dimensional in $\Grlmbar$. Hence it is Lagrangian if and only if it is coisotropic. The variety $\Grlmbar$ is affine, and so it suffices to check that the Poisson bracket of any two functions that vanish on $\overline{\Gr^\lambda} \cap S^\mu$ vanishes there as well. The functions vanishing on $S^\mu\cap \overline{\Gr^\lambda}$ are generated by all functions of negative weight under the action of the coweight $\rho^\vee:\C^\times \to G$. Since that action preserves the Poisson structure, the Poisson bracket of two negative weight functions is again of negative weight; this completes the proof. \end{proof} It is natural to ask whether $ \Grlmbar $ has a symplectic resolution.
Let us temporarily assume that $ G $ is of adjoint type and let us fix a sequence $\vlam= (\lambda_1, \dots, \lambda_n) $ of fundamental coweights such that $ \lambda = \lambda_1 + \cdots + \lambda_n $. Then we have the open and closed convolutions $$ \Gr^{\vlam} := \Gr^{\lambda_1} \tilde{\times} \cdots \tilde{\times} \Gr^{\lambda_n}, \qquad \overline{\Gr^{\vlam}} := \overline{\Gr^{\lambda_1}} \tilde{\times} \cdots \tilde{\times} \overline{\Gr^{\lambda_n}} $$ along with the convolution morphisms $ m :\Gr^{\vlam} \rightarrow \overline{\Gr^\lambda} $ and $ \bar{m} : \overline{\Gr^{\vlam}} \rightarrow \overline{\Gr^\lambda} $. Let \[ \Gr^{\vlam}_\mu := m^{-1}(\Gr_\mu), \qquad \Gr^{\overline{\vlam}}_\mu := \bar{m}^{-1}(\Gr_\mu) .\] Recall that a normal variety $X$ with a fixed symplectic structure $\Omega$ on its smooth locus is said to have {\bf symplectic singularities} if, locally on $X$, there are resolutions of singularities $p\colon U\to X$ where $p^*\Omega$ is the restriction of a closed 2-form on $U$ (which is not assumed to be non-degenerate on the exceptional locus). A variety $X$ is said to have {\bf terminal singularities} if there is a resolution of singularities of $X$ such that each irreducible exceptional divisor has positive discrepancy, that is, $X$ is as close to being smoothly resolved as is crepantly possible. A {\bf terminalization} $X\to Y$ is a map which is birational, proper, and crepant with $X$ having terminal singularities. We say a variety $X$ is {\bf $\mathbb{Q}$-factorial} if every Weil divisor on $X$ has an integer multiple which is Cartier. \begin{theorem} \label{th:terminalization} The variety $ \Grlmbar $ has symplectic singularities, and $\Gr^{\overline{\vlam}}_\mu $ is a $\mathbb{Q}$-factorial terminalization of $ \Grlmbar $. \end{theorem} \begin{proof} First, we claim that $\Gr^{\overline{\vlam}}_\mu $ has singular locus in codimension $\geq 4$.
Since $\Gr_\mu$ is transverse to every $G[t]$-orbit, the codimension of the singular locus cannot jump when we pass to $\Gr^{\overline{\vlam}}_\mu$, so we need only establish the same result for $\Gr^{\overline{\vlam}}$, for which it suffices to consider the case of a fundamental coweight. If $\om_i$ is a fundamental coweight, and $\nu$ is a dominant coweight such that $\Gr^\nu\subset \Gr^{\overline{\om_i}}$, then we have that $\langle \rho, \om_i-\nu\rangle\geq 2$, since $\om_i-\al_j^\vee$ is never dominant. Thus, the singular locus $\Gr^\nu$ has codimension at least $4$. As Beauville notes \cite[(1.2)]{Beau}, since $\Gr^{\overline{\vlam}}_\mu $ is regular in codimension 3 and normal, the existence of a symplectic form on its smooth locus implies that it has symplectic singularities. Since we have a Poisson map $\Gr^{\overline{\vlam}}_\mu\to \Grlmbar $, this variety also has symplectic singularities. By a result of Namikawa \cite{Nanote}, this regularity in codimension 3 also implies that $\Gr^{\overline{\vlam}}_\mu $ is terminal. Since each local singularity in $\Gr^{\overline{\vlam}}_\mu $ is a local singularity in $\Gr^{\overline{\vlam}}$, and these are the product of local singularities in $\Gr^{\overline{\om_i}}$, we need only prove $\mathbb{Q}$-factoriality in this case. The group of Weil divisors of $\Gr^{\overline{\om_i}}$ is the same as that of $\Gr^{{\om_i}}$ which is an affine bundle over $G/P_i$ where $P_i$ is the maximal parabolic containing all negative simple root spaces but $\mathfrak{g}_{-\al_i}$. Thus, the Weil divisor group of $G/P_i$ is isomorphic to $\Z$. Since $\Gr^{\overline{\om_i}}$ is projective, {\it some} Weil divisor on $\Gr^{\overline{\om_i}}$ is Cartier. Thus, the group generated by any non-trivial Weil divisor must intersect the image of the Cartier divisors, and so $\Gr^{\overline{\om_i}}$ is $\mathbb{Q}$-factorial\footnote{We thank Alexander Braverman for suggesting this portion of the argument to us.}.
Furthermore, the map $\Gr^{\overline{\vlam}}_\mu \to \Grlmbar$ is well-known to be proper and birational. The preimage of $\Gr^\nu$ for $\nu\neq \la, \la-\al_i^\vee$ has codimension $\geq 4$, so any exceptional divisor must be the closure of a component of the preimage of $\Gr^{\la-\al_i^\vee}$. The coefficients of these divisors in the discrepancy can thus be computed locally in a neighborhood of $x\in \Gr^{\la-\al_i^\vee}$, but the germ of the map is equivalent to the minimal resolution of a Kleinian singularity by Example \ref{eg:Kleinian}. The minimal resolutions of Kleinian singularities are known to be crepant. \end{proof} An obvious question is when $ \Grlmbar $ has a symplectic resolution. First, we make the following conjecture. \begin{conjecture} Any symplectic resolution of $ \Grlmbar $ is of the form $\Gr^{\overline{\vlam}}_\mu$. \end{conjecture} We can easily see when $\Gr^{\overline{\vlam}}_\mu$ is actually a resolution. \begin{theorem} The following are equivalent. \begin{enumerate} \item $ \Grlmbar$ possesses a symplectic resolution of singularities. \item $\Gr^{\overline{\vlam}}_\mu$ is smooth and thus is a symplectic resolution of singularities of $ \Grlmbar$. \item $ \Gr^{\vlam}_\mu = \Gr^{\overline{\vlam}}_\mu$. \item There do not exist coweights $ \nu_1, \dots, \nu_n $ such that $ \nu_1 + \cdots + \nu_n = \mu $, such that for all $k $, $ \nu_k $ is a weight of $ V(\lambda_k) $, and such that for some $ k $, $ \nu_k $ is not an extremal weight of $ V(\lambda_k) $. \end{enumerate} \end{theorem} \begin{proof} \noindent (1) $\Rightarrow$ (2): If $ \Grlmbar$ has a symplectic resolution, then by \cite[5.6]{NaP} any $\mathbb{Q}$-factorial terminalization of $ \Grlmbar$, in particular $\Gr^{\overline{\vlam}}_\mu$, is smooth. \noindent (2) $\Rightarrow$ (1): In this case, $\Gr^{\overline{\vlam}}_\mu$ is an example of a symplectic resolution of singularities. \noindent (2) $\Rightarrow$ (3): It is well-known that the smooth locus of $ \overline{\Gr^\lambda} $ is precisely $ \Gr^\lambda $.
Thus the smooth locus of $ \Gr^{\overline{\vlam}} $ is precisely $ \Gr^{\vlam} $. Now assume that there is a point $x$ in $\Gr^{\overline{\vlam}}_\mu $ not in $\Gr^{\vlam}_\mu$; we know that $\Gr^{\overline{\vlam}}$ is not smooth at $x$. By the transversality of the $ G_1[[t^{-1}]] $ and $ G[t] $ orbits, the completion of $\Gr^{\overline{\vlam}}$ at $x$ is the same as the completion of $\Gr^{\overline{\vlam}}_\mu $ at $x$ times something smooth. Therefore $ \Gr^{\overline{\vlam}}_\mu$ cannot be smooth at $x$ either. \noindent (3) $\Rightarrow$ (2): clear. \noindent (3) $\Rightarrow$ (4): If there exist $ \nu_1, \dots, \nu_n $ as in (4), then $ (t^{\nu_1}, t^{\nu_1 + \nu_2}, \dots, t^{\mu}) \in \Gr^{\overline{\vlam}}_\mu \smallsetminus \Gr^{\vlam}_\mu$, contradicting (3). \noindent (4) $\Rightarrow$ (3): Suppose that there exists $$ (L_1, \dots, L_n) \in \Gr^{\overline{\vlam}}_\mu \smallsetminus \Gr^{\vlam}_\mu. $$ Recall that we have a $ \C^\times \times T $ action on $ \Gr $ where the first factor acts by loop rotation. Consider a map $ \C^\times \rightarrow \C^\times \times T $ which is the identity into the first factor and a generic dominant coweight into the second factor. We get a resulting $ \C^\times $ action on $ \Gr $ whose attracting sets are the $ I_- $ orbits, where $ I_- $ is the preimage of $ B $ under $ G[[t^{-1}]] \rightarrow G $. Let $ (t^{\mu_1}, \dots, t^{\mu_n}) = \lim_{s \rightarrow 0} s \cdot (L_1, \dots, L_n) $. From the definition of $\Gr^{\overline{\vlam}}_\mu$, we see that $ \mu_n = \mu $. Also, we see that for each $k $, $ d(t^{\mu_{k-1}}, t^{\mu_k}) \le \lambda_k $ (where $ d : \Gr \times \Gr \rightarrow X_+ $ denotes the $ G((t^{-1})) $-invariant distance function on $\Gr $) and so $ \nu_k := \mu_k - \mu_{k-1} $ is a weight of $ V(\lambda_k)$. Thus we obtain $ \nu_1, \dots, \nu_n $ with $ \nu_1 + \cdots + \nu_n = \mu $.
Moreover, since $ (L_1, \dots, L_n) \notin \Gr^{\vlam}_\mu $, for some $ k $, $ d(L_{k-1}, L_k) < \lambda_k $ and so $ \nu_k $ is a non-extremal weight of $ V(\lambda_k) $. \end{proof} If $ \lambda $ is a sum of minuscule coweights, then the above conditions hold. For any simple $G $ not of type A, there are non-minuscule fundamental coweights $ \lambda $; for such $ \lambda $, we can choose $ \mu $ such that the above conditions do not hold. So there exist $ \Grlmbar $ which do not admit symplectic resolutions. \subsection{Beilinson-Drinfeld Grassmannian} Using the Beilinson-Drinfeld Grassmannian, we can define a family of Poisson varieties over $\mathbb{A}^n $ whose special fibre is $ \Grlmbar $. In this work, this family will only be used as motivation for a similar family of quantizations of $ \Grlmbar $; as illustrated in works such as \cite{BK04a, BPW, Lo}, understanding the universal symplectic deformation of a symplectic singularity as a symplectic variety is intimately tied to understanding its quantizations (see section \ref{se:universality}). From this perspective, a natural next step (beyond the scope of this paper) would be to study quantizations of the total spaces of these deformations, not just of a single fibre. Recall that we have the moduli interpretation of the affine Grassmannian \begin{multline*} \Gr = \{ (E, \phi) : E \text{ is a principal $G$-bundle on $ \pp $ and}\\\text{ $\phi: E |_{\pp \smallsetminus \{0 \}} \rightarrow E^0|_{\pp \smallsetminus \{0\}} $ is an isomorphism} \} \end{multline*} where $ E^0 $ denotes the trivial $ G $-bundle. We say that $ (E, \phi) $ has \textbf{Hecke type} $ \lambda $ at $0 $ if $ (E, \phi) $ gives a point in $ \Gr^\lambda $ under the above identification. Note that the action of $ G[[t^{-1}]] $ by left multiplication in the homogeneous space definition becomes change of trivialization in the new definition.
Thus the $ G[[t^{-1}]] $ orbit of $ (E, \phi) $ is determined by the isomorphism class of the $ G $-bundle $E $, which is given by a dominant coweight. Note also that the action of $\Gm $ corresponds to changes of trivialization which do not change anything at $ \infty$. Let $ \mu $ be a dominant coweight and let $ P $ be the corresponding standard parabolic subgroup (so that $ W_P $ is the stabilizer of $ \mu $ in the Weyl group). Let $ E $ be a principal $ G$-bundle of type $ \mu $. Then $ E $ has a canonical $ P $-structure. Now let $ (E, \phi) \in \Gr $. Let $ \mu $ be the isomorphism type of $ E $. Then $ \phi_\infty $ carries the parabolic structure at $ \infty $ to a parabolic subgroup of $ G $ of type $ \mu $. Hence we see that the $ \Gm $ orbits on $ \Gr$ are labelled by a pair consisting of a dominant coweight $\mu $ and a parabolic subgroup of $ G $ of type $ \mu $. In particular $ \Gr_\mu $ is the locus of those $ (E, \phi) $ where $ E $ has isomorphism type $ \mu $ and the parabolic subgroup produced is the standard one. We will now consider the Beilinson-Drinfeld deformation of the affine Grassmannian. This is a family $ \Gr_{\mathbb{A}^n} $ over $ \mathbb{A}^n $ whose fibre at $ (a_1, \dots, a_n) \in \mathbb{A}^n $ is given as follows: \begin{equation*} \begin{aligned} \Gr_{a_1, \dots, a_n} = \{ (E, \phi) :&E \text{ is a principal $G$-bundle on $\pp$ and} \\ &\phi: E|_{\pp \smallsetminus \{a_1, \dots, a_n \} } \rightarrow E^0|_{\pp \smallsetminus \{a_1, \dots, a_n\}} \text{ is an isomorphism } \} \end{aligned} \end{equation*} Let $\Gr_{\mu, \mathbb{A}^n} $ be the locus of $ (E, \phi) $ where $ E $ has isomorphism type $ \mu $ and the parabolic subgroup at $ \infty$ is the standard one. Specializing to one choice of parameters, we can consider changes of trivialization acting on $ \Gr_{a_1, \dots, a_n} $.
Let $ G_1(\pp \smallsetminus \{a_1, \dots, a_n \}) $ denote the kernel of $ G(\pp \smallsetminus \{a_1, \dots, a_n \}) \rightarrow G $ given by evaluation at $ \infty $. Then, $ \Gr_{\mu, (a_1, \dots, a_n)} $ is an orbit of $ G_1(\pp\smallsetminus \{a_1, \dots, a_n \} )$. We may also think of this locus in terms of the $ \kb^\times $ action. We have an action of $ \kb^\times $ on $ \Gr_{\mathbb{A}^n} $ coming from the action of $ \kb^\times $ on $ \pp$. Note that this action moves the base $ \mathbb{A}^n$. On the central fibre $ \Gr_{(0, \dots, 0)} = \Gr $ this action of $ \kb^\times $ restricts to the loop rotation action on $ \Gr $. Hence the fixed points of this $ \kb^\times $ action are the same as the fixed points of the loop rotation action, namely the sets $ G t^\mu $ inside the affine Grassmannian. Moreover, we have that $ \Gr_{\mu,\mathbb{A}^n} $ is the attracting set for $ t^{w_0 \mu} $ under the $ \kb^\times $ action. We have a fiberwise Poisson structure on $ \Gr_{\mathbb{A}^n}$ using the Manin triples described in Etingof-Kazhdan \cite{EK}, Corollary 2.10 and Proposition 2.12. As in Section \ref{se:SymplecticStructure}, we get a Poisson structure on $\Gr_{\mu, (a_1, \dots, a_n)}$. Now, let us choose an expression $ \lambda = \lambda_1 + \cdots + \lambda_n $, where $ \lambda_1, \dots, \lambda_n $ are fundamental coweights. This gives us an $ X_+ $-colored divisor $ D $ on $ \pp $ defined by $ D = \sum \lambda_i a_i $. We will think of $ D $ as a function $ \pp \rightarrow X_+$. Now we define \begin{equation*} \Gr_{\mu, (a_1, \dots, a_n)}^{\lambda_1, \dots, \lambda_n} \\:= \{ (E, \phi) \in \Gr_{\mu, (a_1, \dots, a_n)} : (E,\phi) \text{ has Hecke type } D(x)\text{ for all } x\in \pp \} \end{equation*} From the above analysis, it is possible to show that these are symplectic leaves in $ \Gr_{\mu, (a_1, \dots, a_n)}$. Fixing $ (\lambda_1, \dots, \lambda_n) $ and letting $ (a_1, \dots, a_n) $ vary, this forms a family over $ \mathbb{A}^n $.
The central fibre of this family is $ \Gr_\mu^{\lambda} $. Now, define \begin{equation*} \Gr_{\mu, (a_1, \dots, a_n)}^{\overline{\lambda_1, \dots, \lambda_n}}\\ := \{ (E, \phi) \in \Gr_{\mu, (a_1, \dots, a_n)} : (E,\phi) \text{ has Hecke type } \le D(x) \text{ } \text{ for all } x\in \pp \} \end{equation*} Then we obtain a flat family of symplectic varieties over $ \mathbb{A}^n $ whose central fibre is $ \Grlmbar $. \subsection{Direct system on slices and Zastava spaces} We will now look at what happens to $ \Grlmbar $ when we increase $ \lambda, \mu $, keeping $ \lambda - \mu $ fixed. Let us fix $ \nu $ in the positive coroot cone. Let $ \mu, \mu' $ be dominant coweights with $ \mu' - \mu $ dominant. From Lemma \ref{le:LieStab}, the stabilizer of $ t^{w_0\mu'} $ in $\Gm $ contains the stabilizer of $ t^{w_0\mu} $ in $\Gm $. So we can define a map $ \Gr_\mu \rightarrow \Gr_{\mu'}$ by $ gt^{w_0\mu} \mapsto gt^{w_0 \mu'} $. From Proposition \ref{pr:SetTheoryGrlam}, we see that this restricts to a map $ \Gr_\mu^{\overline{\mu + \nu}} \rightarrow \Gr_{\mu'}^{\overline{\mu' + \nu}}$. By construction, it is a Poisson map. Clearly these maps are compatible with composition. Thus with $ \nu $ fixed we get a direct system of slices $ \left\{ \Gr_\mu^{\overline{\mu + \nu}} \right\}_\mu $. The limit of this system is an ind-scheme, but in general it will not be represented by a scheme. On the other hand, we can consider the Zastava space $ Z_\nu $, an affine variety, as defined in \cite{FM}. It is a compactification of the moduli space $ Z_\nu^{\circ} $ of based maps from $ \mathbb{P}^1$ into $ G/B $ of degree $ \nu $. The variety $ Z_\nu $ carries an action of $ \kb^\times $, extending the action of $ \kb^\times $ on $Z_\nu^{\circ} $ which rotates the source of the map. The following result is Theorem 2.8 from Braverman-Finkelberg \cite{BF}. It shows that the algebras of functions $ \O(\Gr_\mu^{\overline{\mu + \nu}}) $ stabilize to $ \O(Z_\nu)$. 
\begin{theorem} \label{th:maptoZastava} There exists a map $ \Gr_\mu^{\overline{\mu + \nu}} \rightarrow Z_\nu $. These maps are compatible with the above direct system on the slices and with the actions of $ \mathbb{C}^\times $. Moreover, the induced maps $ \O(Z_\nu)_N \rightarrow \O(\Gr_\mu^{\overline{\mu+\nu}})_N $ are isomorphisms if $ N \le \langle \alpha_i, \mu \rangle$ for all $ i $. \end{theorem} \begin{remark} The theorem provides $ Z_\nu $ with a Poisson structure. On the other hand, $Z_\nu^{\circ} $ carries a symplectic structure as described in \cite{FKMM}. It is expected that these two structures are compatible. \end{remark} \begin{example} \label{ex:1} Let us take $ G = PGL_2 $ and $ \nu = \alpha^\vee $, the simple coroot. Then (as in Example \ref{eg:Kleinian}), for $ n \ge 0$, $$ \Gr_{n \omega^\vee }^{\overline{n\omega^\vee + \alpha^\vee}} \cong \{(u,v,w) : uv + w^{n+2} = 0 \} $$ Moreover, for $ m \ge n $, the map $ \Gr_{n \omega^\vee}^{\overline{n\omega^\vee + \alpha^\vee}} \rightarrow \Gr_{m \omega^\vee}^{\overline{m\omega^\vee + \alpha^\vee}} $ is given by $ (u,v,w) \mapsto (u,vw^{m-n}, w) $. This is because we have an equality in $ \Gr_{PGL_2} $ \begin{equation*} \left[ \begin{smallmatrix} 1 - wt^{-1} & v t^{-(n+1)} \\ ut^{-1} & 1 + wt^{-1} + \dots + w^{n+1} t^{-(n+1)} \end{smallmatrix} \right] \left[\begin{smallmatrix} 1 & 0 \\ 0 & t^m \end{smallmatrix} \right] = \left[ \begin{smallmatrix} 1 - wt^{-1} & vw^{m-n} t^{-(m+1)} \\ ut^{-1} & 1 + wt^{-1} + \dots + w^{m+1} t^{-(m+1)} \end{smallmatrix} \right] \left[\begin{smallmatrix} 1 & 0 \\ 0 & t^m\end{smallmatrix} \right]. \end{equation*} On the other hand, the Zastava space $ Z_\alpha $ is $ \mathbb{A}^2$. The map in Theorem \ref{th:maptoZastava} is given by $ (u,v,w) \mapsto (u,w) $. With respect to the $ \kb^\times $ action on $$ \Gr_{n \omega^\vee}^{\overline{n\omega^\vee + \alpha^\vee}} = \{(u,v,w) : uv + w^{n+2} =0 \} $$ the variables $ u, w $ have weight 1 and $ v $ has weight $ n+1 $. 
So, we can see that $$ \O(Z_\alpha) = \kb[u,w] \rightarrow \O(\Gr_{n \omega^\vee}^{\overline{n\omega^\vee + \alpha^\vee}}) = \kb[u,v,w]/ (uv+w^{n+2}) $$ is an isomorphism in degrees $ 0, \dots, n $ as predicted by Theorem \ref{th:maptoZastava}. The Poisson structure on $ \Gr_{n \omega^\vee}^{\overline{n\omega^\vee + \alpha^\vee}} $ is given by $$ \{ w, u \} = u \quad \{w, v \} = - v \quad \{u, v \} = (n+2)w^{n+1} $$ while the Poisson structure on $ Z_\alpha $ is given by $$ \{ w, u \} = u. $$ Finally, note that the $ \kb$-points of the ind-scheme $ \lim_n \Gr_{n \omega^\vee}^{\overline{n\omega^\vee + \alpha^\vee}} $ are $$ \{(u,w) : u \in \kb^{\times}, w \in \kb \} \cup \{(0,0) \} $$ which is a proper subset of $ \kb^2 $ and hence this ind-scheme is not equal to $ \mathbb{A}^2 $. \end{example} \subsection{Description of the Poisson structure} \label{sec:Poisson structure} We would like to describe the Poisson structure on $ \Gm $ in a little more detail. Let $C \in \fg \otimes \fg $ be the Casimir element for the bilinear form. Picking dual bases, we may represent this element as $ C = \sum J_a \otimes J^a $; this Casimir element allows us to describe the Poisson bracket of two minors. The brackets are written most compactly using the series $ \Delta_{\beta, v}(u)=\sum_{s\geq 0} \Delta_{\beta, v}^{(s)}u^{-s}$. Note that $ \Delta^{(0)}_{\beta, v} = \langle \beta, v \rangle $ is a constant function. \begin{proposition} \label{th:Poissonminor} In $ \O(\Gm)[[u_1^{-1}, u_2^{-1}]]$, the Poisson bracket $\{ \Delta_{\beta_1, v_1}(u_1), \Delta_{\beta_2,v_2}(u_2) \}$ is equal to \begin{equation*} \frac{1}{u_1-u_2} \sum_a \Delta_{ \beta_1,J_a v_1}(u_1) \Delta_{ \beta_2,J^a v_2}(u_2) - \Delta_{J_a\beta_1, v_1}(u_1) \Delta_{J^a\beta_2, v_2 }(u_2) \end{equation*} \end{proposition} \begin{proof} The cobracket $\fg((t^{-1}))\to \fg((u_1))\otimes \fg((u_2))$ is coboundary.
If we let $r(u_1,u_2)=\frac{C}{u_1-u_2}$, it is given by \[a(t)\mapsto \left[a(u_1)\otimes 1+1\otimes a(u_2),r(u_1,u_2)\right].\] As described earlier, the Lie algebra $\fg((t^{-1}))$ carries an inner product $(f,g)_t=- \operatorname{res}_{t=0} (f,g)$ for which $\gm$ is Lagrangian and complementary to $\gp$; this realizes $\fg((t^{-1}))$ as the (topological) Drinfeld double of $\gm$. In particular, $\Gm\subset G((t^{-1}))$ is a Poisson subgroup, and the Poisson bracket of any two functions on $\Gm$ can be calculated by taking the bracket of any two extensions to all of $G((t^{-1}))$ and then restricting to $\Gm$. Thus, the Poisson structure on $G((t^{-1}))$ is defined by $r^L(u_1,u_2)-r^R(u_1,u_2)$, the difference of the left translation and right translation of the element $r(u_1,u_2)$ considered as a bivector at the identity. If $ X \in \gm $, and $ g \in \Gm$, we identify $ X $ with a tangent vector at $ g $ by left translation. Then we have $$ (d \Delta_{\beta, v})_g(X) = \langle \beta, gXv \rangle. $$ Hence \begin{multline*} \{ \Delta_{\beta_1,v_1}(u_1), \Delta_{\beta_2, v_2}(u_2) \}(g) = \bigl( (d \Delta_{\beta_1, v_1})_g \otimes (d \Delta_{\beta_2, v_2})_g \bigr) \bigl( r^L(u_1,u_2) - r^R(u_1,u_2) \bigr) \\ = \frac{1}{u_1-u_2} \Big( \sum_a \langle \beta_1, g(u_1) J_a v_1 \rangle \langle \beta_2, g(u_2) J^a v_2 \rangle - \sum_a \langle \beta_1, J_a g(u_1)v_1 \rangle \langle \beta_2, J^a g(u_2) v_2\rangle \Big) \end{multline*} and then the proposition follows from the invariance of the pairing between dual representations. \end{proof} We can unpack Proposition \ref{th:Poissonminor} into the following equations: \begin{equation}\label{eq:minor-bracket} \{ \Delta_{ \beta_1, v_1}^{(r+1)}, \Delta_{\beta_2, v_2}^{(s)} \} - \{ \Delta_{\beta_1, v_1}^{(r)}, \Delta_{ \beta_2, v_2}^{(s+1)} \} = \sum_a \Delta_{ \beta_1, J_a v_1}^{(r)} \Delta_{\beta_2, J^a v_2}^{(s)} - \Delta_{J_a\beta_1, v_1}^{(r)} \Delta_{J^a\beta_2, v_2}^{(s)} \end{equation} for $ r, s \ge 0$. These equations specify all the desired Poisson brackets.
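To spell out how such coefficient relations follow from Proposition \ref{th:Poissonminor} (a routine verification, recorded here for convenience): multiply the identity of the proposition by $(u_1 - u_2)$ and compare coefficients of $u_1^{-r} u_2^{-s}$. On the left-hand side,

```latex
% Extracting coefficients from the series form of the bracket:
(u_1 - u_2)\, \{ \Delta_{\beta_1, v_1}(u_1), \Delta_{\beta_2, v_2}(u_2) \}
  = \sum_{r, s \ge 0} \Bigl( \{ \Delta_{\beta_1, v_1}^{(r+1)}, \Delta_{\beta_2, v_2}^{(s)} \}
    - \{ \Delta_{\beta_1, v_1}^{(r)}, \Delta_{\beta_2, v_2}^{(s+1)} \} \Bigr) u_1^{-r} u_2^{-s}.
```

No positive powers of $u_1$ or $u_2$ survive on the left: the terms that would produce them are brackets with $\Delta^{(0)}_{\beta_1, v_1}$ or $\Delta^{(0)}_{\beta_2, v_2}$, which vanish since these are constant functions.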
\subsection{A conjectural description of the ideal of \texorpdfstring{$\Grlmbar$}{Gr}} In this section, we give a conjectural description of the ideal of $ \Gr^{\overline{\lambda}}_\mu $ as a subvariety of $ \Gr_0 = \Gm$. Let $ G_{sc} $ denote the simply connected cover of $ G $. Note that the natural map $ G_{sc}[[t^{-1}]]_1 \rightarrow G[[t^{-1}]]_1 $ is an isomorphism. This allows us to consider $ \Delta_{\om_i, \om_i}^{(s)} $ as functions on $\Gm $, even if $ \om_i $ are not weights of $ G $ (for example if $ G $ is of adjoint type). We begin with the case of $ \mu = 0 $. Let $J^\lambda_0 $ denote the ideal in $ \O(\Gm) $ Poisson generated by $ \Delta_{\om_i, \om_i}^{(s)} $ for $ s > \langle \lambda, \om_{i^*} \rangle$ and for $ i \in I $. \begin{conjecture} \label{co:MainConj} The ideal of $ \Gr^{\overline{\lambda}}_0 $ in $\O(\Gm) $ is $J^\lambda_0$. \end{conjecture} Let us make some comments on this conjecture. First, we have the following result. \begin{proposition} \label{pr:J0gen} $J^\lambda_0$ is generated as an ordinary ideal by $ \Delta_{\beta, v}^{(s)} $ for $ i \in I $ and $ s > \langle \lambda, \om_{i^*} \rangle$, where $ \beta, v $ range over bases for $ V(\om_i)^* $ and $ V(\om_i)$. \end{proposition} \begin{proof} Let $I$ be the ideal generated as an ordinary ideal by $ \Delta_{\beta, v}^{(s)} $ for $ s > \langle \lambda, \om_{i^*} \rangle$. We first show that this ideal is contained in $J^\lambda_0 $. To begin, we claim that $ \Delta^{(s)}_{\om_i, v} \in J^\lambda_0 $ for all $ v \in V(\om_i) $ and $ s > \langle \la, \om_{i^*} \rangle $. We proceed by downward induction on the weight of $ v $. The base case, where $ v $ is a highest weight vector, follows by definition. For the inductive step, suppose that $v$ is not a highest weight vector. In this case, $v=\sum f_j v_j$ for some $v_j$ of higher weight than $ v $. Fix $ s $ with $ s > \langle \la, \om_{i^*} \rangle $.
Using \eqref{eq:minor-bracket} and the expression for the Casimir (for notation see Section \ref{PBW Yangian}) \begin{equation*} C = C_\fh + \sum_{\alpha \in \Phi_+} C_\alpha e_\alpha \otimes f_\alpha + C_\alpha f_\alpha \otimes e_\alpha, \end{equation*} where $(e_\alpha, f_\alpha) = C_\alpha^{-1}$, we see that \[ \{\Delta^{(s)}_{\om_i,v_j},\Delta^{(1)}_{\om_j,s_j\om_j}\}=-\Delta^{(s)}_{\om_i,f_j v_j}.\] Thus we see that $$\Delta^{(s)}_{\om_i, v} = \sum_j \Delta^{(s)}_{\om_i, f_jv_j} = - \sum_j \{ \Delta^{(s)}_{\om_i, v_j}, \Delta^{(1)}_{\om_j, s_j \om_j} \}. $$ All the terms on the right hand side lie in $J^\lambda_0$ by the inductive assumption, and thus $ \Delta^{(s)}_{\om_i, v} \in J^\lambda_0 $. Now we claim that $ \Delta^{(s)}_{\beta, v} \in J^\lambda_0 $ for all $ \beta \in V(\om_i)^*, v \in V(\om_i)$ and $ s > \langle \la, \om_{i^*} \rangle $. We have already proven this claim when $ \beta = v_{-\om_i} $, so we proceed by induction on the weight of $ \beta$. Suppose that $ \beta \in V(\om_i)^* $ is not lowest weight and assume that the claim holds for all $ \beta $ of lower weight. In this case, we can write $ \beta = \sum e_j \beta_j $ for some $ \beta_j $ of lower weight. Fix $ s $ with $ s > \langle \la, \om_{i^*} \rangle $. Again using the above expression for the Casimir we find that \[ \{\Delta^{(s)}_{\beta_j,v},\Delta^{(1)}_{s_j\om_j,\om_j}\}=\Delta^{(s)}_{\beta_j,e_j v}-\Delta^{(s)}_{e_j\beta_j,v}.\] Thus we see that $$\Delta^{(s)}_{\beta, v} = \sum_j \Delta^{(s)}_{e_j \beta_j,v} = \sum_j \Delta^{(s)}_{\beta_j,e_j v} - \{ \Delta^{(s)}_{\beta_j, v}, \Delta^{(1)}_{s_j \om_j, \om_j} \}. $$ All the terms on the right hand side lie in $J^\lambda_0$ by the inductive assumption, and thus $ \Delta^{(s)}_{\beta, v} \in J^\lambda_0 $. This shows that $I\subset J^\lambda_0$. It remains to show that $ I $ is a Poisson ideal.
Since the $ \Delta^{(s)}_{\beta, v} $, for $ \beta \in V(\om_i)^*, v \in V(\om_i), i \in I$, generate $ \O(G_1[[t^{-1}]]) $, it suffices to check that $ I $ is closed under Poisson bracket with these elements. This follows immediately from \eqref{eq:minor-bracket}. \end{proof} Combining this proposition with Proposition \ref{pr:SetTheoryGrlam}, we obtain the following. \begin{corollary} \label{co:J0vanish} The vanishing set of $J^\lambda_0 $ is $ \Gr^{\overline{\lambda}}_0 $. \end{corollary} Thus in order to establish Conjecture \ref{co:MainConj}, it only remains to show that $J^\lambda_0$ is radical. \begin{remark} Let $ G = SL_n $. By an observation which goes back to Lusztig, we know that there is an isomorphism $ \Gr^{\overline{n\omega_1}}_0 \cong \mathcal{N} $, the nilpotent cone of $ \mathfrak{sl}_n $. For any dominant coweight $ \lambda $ with $ \lambda \le n \omega_1$, under this isomorphism $ \Gr^{\overline{\lambda}}_0 $ is taken to a nilpotent orbit closure. Thus, the above conjecture provides generators for the ideal of a nilpotent orbit closure inside the nilpotent cone of $ \mathfrak{sl}_n $. From this perspective, one can see that Conjecture \ref{co:MainConj} would imply the main result of Weyman \cite{W}, which gives generators for the ideals of nilpotent orbit closures. This gives additional evidence toward the conjecture, but also suggests it will be difficult to prove. \end{remark} \begin{remark} One could imagine a similar conjecture for the ideal of $ \Gr^{\bar \lambda} $ inside of the homogeneous coordinate ring of $ \Gr $. However, this conjecture is false, already for $ SL_2 $ and $ \lambda = \alpha $. \end{remark} We will need the following generalization of Conjecture \ref{co:MainConj} which describes the ideal of $ \Grlmbar$. Consider the subgroup $ \Gm_\mu $ defined as the stabilizer in $ \Gm $ of $ t^{w_0 \mu} $. Note that by Lemma \ref{le:LieStab}, $\Gm_\mu \subset N_1[[t^{-1}]]$.
By the orbit-stabilizer theorem, we see that $\Gr_\mu = \Gm / \Gm_\mu $ and so $ \O(\Gr_\mu) = \O(\Gm)^{\Gm_\mu} $. Moreover the map $ \Gm \rightarrow \Gr_\mu $ is Poisson and thus $ \O(\Gr_\mu) $ is a Poisson subalgebra of $ \O(\Gm) $. \begin{lemma} \label{le:GrmuMinors} The subalgebra $ \O(\Gr_\mu) $ contains \begin{gather*} \Delta^{(s)}_{s_i\om_i, \om_i}, \text{ for all } i \in I, s > 0, \quad \Delta^{(s)}_{\om_i, \om_i}, \text{ for all } i \in I, s > 0, \\ (\Delta_{\om_i, s_i\om_i}/\Delta_{\om_i, \om_i})^{(s)}, \text{ for all } i \in I, s > \langle \mu^*, \alpha_i \rangle \end{gather*} \end{lemma} Later we will see that these elements generate $ \O(\Gr_\mu) $ as a Poisson algebra. \begin{proof} Note that the action of $ \Gm_\mu $ on $ \O(\Gm) $ is given by $ (k \cdot f)(g) = f(gk) $ for $ k \in \Gm_\mu$, $ f \in \O(\Gm) $ and $ g \in \Gm $. In particular, we see that $ k \cdot \Delta_{\beta, v} = \Delta_{\beta, kv} $. Since $ \Gm_\mu \subset N_1[[t^{-1}]]$, the minors $ \Delta_{\om_i, \om_i} $ and $ \Delta_{s_i \om_i, \om_i} $ will be $ \Gm_\mu$-invariant. Hence the coefficients $\Delta^{(s)}_{s_i\om_i, \om_i}, \Delta^{(s)}_{\om_i, \om_i} $ all lie in $ \O(\Gr_\mu) $. On the other hand, let us consider the coefficients of the $\Delta_{\om_i, s_i \om_i} $ minor. If $k\in \Gm_\mu$, then we have $k\cdot v_{s_i \om_i}=v_{s_i \om_i}+\Delta_{\om_i,s_i \om_i}(k)v_{\om_i}$. Hence if $ g \in \Gm $, then \begin{equation*} \frac{\Delta_{\om_i,s_i\om_i}(gk)}{\Delta_{\om_i,\om_i}(gk)} =\frac{\Delta_{\om_i,s_i\om_i}(g)+\Delta_{\om_i,\om_i}(g)\Delta_{\om_i,s_i \om_i}(k)}{\Delta_{\om_i,\om_i}(g)} =\frac{\Delta_{\om_i,s_i\om_i}(g)}{\Delta_{\om_i,\om_i}(g)}+\Delta_{\om_i,s_i \om_i}(k) \end{equation*} By Lemma \ref{le:LieStab}, we have $\val \Delta_{\om_i,s_i\om_i}(k)\ge \langle w_0\mu, \alpha_i \rangle $. Hence the coefficient of $t^{-s} $ in $ \Delta_{\om_i, s_i \om_i}/\Delta_{\om_i, \om_i} $ is invariant under the action of $ \Gm_\mu$ for $ s > \langle \mu^*, \alpha_i \rangle $.
Thus $ (\Delta_{\om_i, s_i\om_i}/\Delta_{\om_i, \om_i})^{(s)} \in \O(\Gr_\mu) $ for $ s > \langle \mu^*, \alpha_i \rangle$. \end{proof} Let $ J_\mu^\lambda $ denote the ideal of $\O(\Gr_\mu) $ Poisson generated by $ \Delta_{\omega_i, \omega_i}^{(s)} $ for $ i \in I $ and $ s > \langle \lambda - \mu, \om_{i^*} \rangle =m_i$. \begin{conjecture} \label{co:main2} The ideal of $ \Grlmbar $ in $\O(\Gr_\mu) $ is $ J_\mu^\lambda $. \end{conjecture} This conjecture generalizes Conjecture \ref{co:MainConj}. When $ \mu \ne 0 $, we do not have a set of (ordinary) generators for $ J_\mu^\lambda $ as in Proposition \ref{pr:J0gen}. However, we will now establish an analog of Corollary \ref{co:J0vanish}. \begin{proposition}\label{pr:GenJmulam} The vanishing locus of $ J_\mu^\lambda $ is $ \Grlmbar $. \end{proposition} \begin{proof} The vanishing locus of $ J_\mu^\lambda $ is the union of the symplectic leaves in the vanishing locus of $ \Delta_{\omega_i, \omega_i}^{(s)} $ for $ i \in I $ and $ s > \langle \lambda - \mu, \om_{i^*} \rangle =m_i$; after all, the vanishing set is a union of symplectic leaves and if these functions vanish on a symplectic leaf, so do all Poisson brackets with them. These generalized minors vanish on $\Grlmbar $ by Proposition \ref{pr:SetTheoryGrlam}. So it suffices to prove the vanishing locus of our generators does not contain $ \Gr^\nu_\mu $ for some $ \nu \nleq \la $. Fix $ \nu \nleq \la $ such that $ \mu \le \nu $. Then for some $i$, $d=\langle \nu - \mu, \om_{i^*} \rangle > \langle \lambda - \mu, \om_{i^*} \rangle$. We will prove that there exists a point in $\Gr^\nu_\mu $ on which $ \Delta_{\omega_i, \omega_i}^{(d)} $ is non-zero. Let $I_+^+ = I \subset G((t^{-1}))$ denote the standard Iwahori and let $ I_-^+ = w_0 I_+^+ w_0^{-1} $ be the preimage of $ B_- $ in $G[t]$.
We claim that it suffices to prove that \begin{equation} \label{eq:nonempty} I_-^+ t^{w_0 \nu} I_+^+ \cap G_1[[t^{-1}]] t^{w_0 \mu} \ne \emptyset \text{ in } G((t^{-1})) \end{equation} To see that (\ref{eq:nonempty}) suffices, let $ g \in G_1[[t^{-1}]] $ such that $ gt^{w_0\mu} $ lies in the above intersection. As $I_-^+, I_+^+ \subset G[t] $, we see that $ gt^{w_0 \mu} \in \Gr^\nu_\mu $. Finally, we can write $ g = b_- t^{w_0\nu} b_+ t^{-w_0 \mu} $ for $ b_- \in I_-^+, b_+\in I_+^+ $ and an elementary computation shows that $ \Delta^{(d)}_{\om_i, \om_i}(b_- t^{w_0\nu} b_+ t^{-w_0 \mu} ) \ne 0$. To prove (\ref{eq:nonempty}), we work in the affine flag variety $ G((t^{-1}))/I $ and note that (\ref{eq:nonempty}) is equivalent to non-emptiness of the intersection $ I_-^+ t^{w_0 \nu} \cap G_1[[t^{-1}]] t^{w_0 \mu} $ in $ G((t^{-1}))/I $. Let $ I_+^- $ denote the preimage of $ B $ in $G[[t^{-1}]] $ under evaluation at $ t^{-1} = 0 $. Since $ \mu $ is dominant, $ B $ fixes $ t^{w_0 \mu} $ and thus $ G_1[[t^{-1}]] t^{w_0 \mu} = I^-_+ t^{w_0 \mu} $. Thus we reduce to proving that $$ I_-^+ t^{w_0 \nu} \cap I_+^- t^{w_0 \mu} \ne \emptyset \text{ in } G((t^{-1}))/I. $$ Twisting by $ w_0 $, we reduce to proving that $$ I_+^+ w_0 t^{w_0 \nu} \cap I_-^- w_0 t^{w_0 \mu} \ne \emptyset \text{ in } G((t^{-1}))/I. $$ where $I_-^-$ is the preimage of $ B_- $ in $G[[t^{-1}]] $. From general theory of flag varieties, this is equivalent to $ w_0 t^{w_0 \nu} \ge w_0 t^{w_0 \mu} $ in the Bruhat order on the (extended) affine Weyl group. This last fact is easily verified under our hypothesis that $ \mu, \nu $ are dominant and $ \nu \ge \mu $: indeed $t^\nu\ge t^\mu$, the desired inequality is arrived at by right multiplication by $w_0$, and $t^\nu$ is a minimal double coset representative for $W$ in the extended affine Weyl group.
\end{proof} \section{Yangians} \subsection{The Drinfeld Yangian} As mentioned in the introduction, we will study subquotients of Yangians in order to quantize our slices. We will actually need a slight variant on the usual Yangian, which will be produced via a theory developed by Gavarini \cite{G1,G2}. We begin with the usual Yangian which we call the ``Drinfeld Yangian'' to avoid confusion with the Yangian we wish to consider. We define the {\bf Drinfeld Yangian} $ U_{\hh} \gp $ as the associative $\C[[\hh]]$-algebra with generators $ e_i^{(s)}, h_i^{(s)}, f_i^{(s)} $ for $ i \in I $ and $ s \in \mathbb{N} $ and relations \begin{align*} [h_i^{(r)}, h_j^{(s)}] &= 0, \\ [e_i^{(r)}, f_j^{(s)}] &= \delta_{ij} h_i^{(r+s)}, \\ [h_i^{(0)}, e_j^{(s)}] &= (\alpha_i, \alpha_j) e_j^{(s)}, \\ [h_i^{(r+1)},e_j^{(s)}] - [h_i^{(r)}, e_j^{(s+1)}] &= \frac{\hh (\alpha_i, \alpha_j)}{2} (h_i^{(r)} e_j^{(s)} + e_j^{(s)} h_i^{(r)}) , \\ [h_i^{(0)}, f_j^{(s)}] &= - (\alpha_i, \alpha_j) f_j^{(s)}, \\ [h_i^{(r+1)},f_j^{(s)}] - [h_i^{(r)}, f_j^{(s+1)}] &= -\frac{\hh (\alpha_i, \alpha_j)}{2} (h_i^{(r)} f_j^{(s)} + f_j^{(s)} h_i^{(r)}) , \\ [e_i^{(r+1)}, e_j^{(s)}] - [e_i^{(r)}, e_j^{(s+1)}] &= \frac{\hh (\alpha_i, \alpha_j)}{2} (e_i^{(r)} e_j^{(s)} + e_j^{(s)} e_i^{(r)}), \\ [f_i^{(r+1)}, f_j^{(s)}] - [f_i^{(r)}, f_j^{(s+1)}] &= -\frac{\hh (\alpha_i, \alpha_j)}{2} (f_i^{(r)} f_j^{(s)} + f_j^{(s)} f_i^{(r)}), \\ i \neq j, N = 1 - a_{ij} \Rightarrow \operatorname{sym} &[e_i^{(r_1)}, [e_i^{(r_2)}, \cdots [e_i^{(r_N)}, e_j^{(s)}]\cdots]] = 0 \\ i \neq j, N = 1 - a_{ij} \Rightarrow \operatorname{sym} &[f_i^{(r_1)}, [f_i^{(r_2)}, \cdots [f_i^{(r_N)}, f_j^{(s)}]\cdots]] = 0 \end{align*} where $\operatorname{sym}$ denotes symmetrization with respect to $r_1,\dots,r_N$. The following result of Drinfeld will be our starting point. \begin{theorem} $ U_\hh \gp $ is a quantization of $ \gp $.
More precisely, there is an isomorphism of co-Poisson Hopf algebras $U_\hh \gp / \hh U_\hh \gp \cong U \gp$, where $ U \gp $ carries the co-Poisson structure coming from the Manin triple $ (\fg[t], t^{-1}\fg[[t^{-1}]], \fg((t^{-1}))) $. \end{theorem} \subsection{PBW basis for the Drinfeld Yangian}\label{PBW Yangian} Fix any order on the nodes of the Dynkin diagram; for each non-simple positive root $\al$, we let $\check{\al}$ denote the smallest simple root such that $\hat{\al}=\al-\check{\al}$ is again a positive root. We define $ e_\alpha \in \fg $ for $ \alpha \in \Delta_+ $ recursively, by $$ e_{\alpha_i} = e_i \text{ and } e_\alpha = [e_{\hat{\alpha}}, e_{\check{\alpha}}] $$ We extend this definition to $ U_\hh \gp $ by defining $$ e_{\alpha_i}^{(r)} = e_i^{(r)} \text{ and } e_{\alpha}^{(r)} = [e_{\hat{\alpha}}^{(r)}, e_{\check{\al}}^{(0)}]. $$ Similarly, we define $ f_\alpha $ and $ f_\alpha^{(r)} $. We have the following PBW theorem for the Drinfeld Yangian: \begin{proposition}\mbox{} \begin{enumerate} \item Under the isomorphism $U_\hh \gp / \hh U_\hh \gp \cong U \gp$, $e_\alpha^{(r)} $ corresponds to $ e_\alpha t^r $. \item Ordered monomials in the $ e_\alpha^{(r)}, h_i^{(r)}, f_\beta^{(r)} $ form a PBW basis for $ U_\hh \gp $. \end{enumerate} \end{proposition} \subsection{Drinfeld-Gavarini duality} Our goal is to give a quantization of the Poisson-Hopf algebra $ \O(\Gm) $ using the Drinfeld Yangian $ U_\hh \gp $. For this we will use the quantum duality principle of Drinfeld and Gavarini. We briefly describe one half of Drinfeld-Gavarini duality \cite{Dr, G1,G2}. Let $(H,\Delta,\ep)$ be a Hopf algebra over $\C[[\hh]]$. Consider maps $\Delta^n:H\rightarrow H^{\otimes n}$ for $n\geq 0$ defined by $\Delta^0=\ep$, $\Delta^1 = \textrm{id}_H$, and $\Delta^n = (\Delta\otimes \textrm{id}^{\otimes(n-2)})\circ \Delta^{n-1}$ for $n\geq2$.
Let $\delta^n = (\textrm{id}_H - \ep)^{\otimes n}\circ \Delta^n$, and define the Hopf subalgebra $$ H' = \left\{ a\in H \mid \delta^n(a) \in \hh^n H^{\otimes n} \text{ for all } n \geq 0 \right\}\!. $$ In general, $H'/ \hh H'$ is a commutative Hopf algebra over $\C$ and can be given the Poisson bracket $$ \{a+\hh H',b+\hh H'\} = \hh^{-1} [a,b] + \hh H'. $$ Suppose that $G$ is a Poisson affine algebraic group, namely the maximal spectrum of a Poisson commutative Hopf algebra $\O(G)$, and let $\fg, \fg^*$ be its tangent and cotangent Lie bialgebras. Let $U_\hh = U_\hh(\fg)$ be a quantization of $U(\fg)$. \begin{theorem}[{\cite[Theorem 2.2]{G1}}] \label{DG Duality} There is an isomorphism of Poisson-Hopf algebras \[ {U_\hh}'/\hh {U_\hh}' \cong \O(G^\ast) \] where $G^\ast$ is a connected algebraic group with tangent Lie bialgebra $\fg^\ast$. \end{theorem} By \cite{G2}, for any basis $\left\{\overline{x}_\alpha\right\}$ of $\fg$, there exists a lift $\left\{x_\alpha\right\}$ in $U_\hh$ such that \begin{itemize} \item $\epsilon(x_\alpha) = 0$, \item ${U_\hh}'$ is generated by $\left\{ \hh x_\alpha \right\}$, and \item ordered monomials in these generators span ${U_\hh}'$ over $\kb[[\hh]]$. \end{itemize} In particular, if $\left\{\overline{x}_i\right\}$ generates $\fg$, then $\left\{\hh x_i + \hh {U_\hh}'\right\}$ generates ${U_\hh}'/\hh {U_\hh}'$ as a Poisson algebra. To allow for easier identification of ${U_\hh}'/\hh {U_\hh}'$ and $ \O(G^\ast)$, we can reformulate Theorem \ref{DG Duality} as follows. Consider $$ \mathcal{L} = \textrm{Der} ({U_\hh}'/\hh {U_\hh}') := \left\{\left. \varphi: {U_\hh}'/\hh {U_\hh}'\rightarrow\C \right| \varphi(ab) = \varphi(a)\ep(b)+\ep(a)\varphi(b)\right\} $$ with Lie bracket $$ [\varphi, \phi](a) = (\varphi\otimes\phi)(\Delta(a) - \Delta^{op}(a)) $$ and cobracket $$ \delta(\varphi)(a\otimes b) = \varphi(\{a,b\}) $$ This is the Lie bialgebra of the Poisson algebraic group $ Spec({U_\hh}'/\hh {U_\hh}') $.
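As a sanity check on this construction, it is instructive to apply it to the undeformed Hopf algebra $H = U(\fg)[[\hh]]$ (a standard example, recorded here for orientation):

```latex
% For primitive x \in \fg, \Delta(x) = x \otimes 1 + 1 \otimes x and \ep(x) = 0, so
\delta^1(x) = x, \qquad \delta^n(x) = 0 \quad (n \ge 2).
% Hence x itself fails the condition \delta^1(x) \in \hh H, but \hh x \in H'.
% One checks that H' is the subalgebra topologically generated by \hh \fg, so that
H'/\hh H' \cong S(\fg) = \O(\fg^*), \qquad
\{\, \overline{\hh x}, \overline{\hh y} \,\} = \overline{\hh\, [x,y]},
```

which is the Kirillov--Kostant Poisson structure on $\fg^*$, consistent with Theorem \ref{DG Duality}: for the trivial cobracket on $\fg$, the dual group $G^*$ is the additive group of $\fg^*$.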
The isomorphism described in Theorem \ref{DG Duality} can be rephrased as follows. \begin{corollary} \label{DG Duality2} There is an isomorphism of Lie bialgebras $\fg^\ast \cong \mathcal{L}$ defined by \[ \overline{y} \longmapsto \Big( \hh x + \hh {U_\hh}' \longmapsto \langle \overline{y}, \overline{x}\rangle \Big) \] for $x$ a lift of $\overline{x}\in\fg$, extended by the Leibniz rule. This isomorphism yields a perfect Poisson--Hopf pairing $\langle\cdot,\cdot\rangle : U(\fg^\ast)\times {U_\hh}'/\hh {U_\hh}' \rightarrow \C$. \end{corollary} \subsection{Our Yangian} \label{OurYangian} We will now apply this theory to the Drinfeld Yangian $U_\hh \gp$. We let $Y := (U_\hh \gp)'$. We will refer to $ Y $ as the Yangian from now on. Note that it is a subalgebra of the usual Yangian. For $ x = e_\alpha, h_i, f_\alpha $ and $ r \ge 1 $, we define $ X^{(r)} = \hh x^{(r-1)}$, where $ X = E_\alpha, H_i, F_\alpha $ respectively. From the general remarks above, these elements generate $Y$ and monomials in these generators give a PBW basis for $ Y $. We define a grading on $ Y $ where $ X^{(r)}$ has degree $ r $.
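For instance (a two-line check), the rescaling $X^{(r)} = \hh x^{(r-1)}$ converts the Drinfeld Yangian relation $[e_i^{(r)}, f_j^{(s)}] = \delta_{ij} h_i^{(r+s)}$ into the corresponding relation of Theorem \ref{OurYPresentation} below:

```latex
[E_i^{(r)}, F_j^{(s)}]
  = \hh^2 \, [e_i^{(r-1)}, f_j^{(s-1)}]
  = \hh^2 \, \delta_{ij} \, h_i^{(r+s-2)}
  = \hh \, \delta_{ij} \, H_i^{(r+s-1)},
% using H_i^{(r+s-1)} = \hh\, h_i^{(r+s-2)}.
```

The other relations rescale in the same way.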
\begin{theorem} \label{OurYPresentation} The $ X^{(r)} $ generate $ Y $ subject to the relations \begin{align*} [H_i^{(r)}, H_j^{(s)}] &= 0, \\ [E_i^{(r)}, F_j^{(s)}] &= \hh \delta_{ij} H_i^{(r+s-1)}, \\ [H_i^{(1)}, E_j^{(s)}] &= \hh (\alpha_i, \alpha_j) E_j^{(s)}, \\ [H_i^{(r+1)},E_j^{(s)}] - [H_i^{(r)}, E_j^{(s+1)}] &= \frac{\hh (\alpha_i, \alpha_j)}{2} (H_i^{(r)} E_j^{(s)} + E_j^{(s)} H_i^{(r)}) , \\ [H_i^{(1)}, F_j^{(s)}] &= -\hh (\alpha_i, \alpha_j) F_j^{(s)}, \\ [H_i^{(r+1)},F_j^{(s)}] - [H_i^{(r)}, F_j^{(s+1)}] &= -\frac{\hh (\alpha_i, \alpha_j)}{2} (H_i^{(r)} F_j^{(s)} + F_j^{(s)} H_i^{(r)}) , \\ [E_i^{(r+1)}, E_j^{(s)}] - [E_i^{(r)}, E_j^{(s+1)}] &= \frac{\hh (\alpha_i, \alpha_j)}{2} (E_i^{(r)} E_j^{(s)} + E_j^{(s)} E_i^{(r)}), \\ [F_i^{(r+1)}, F_j^{(s)}] - [F_i^{(r)}, F_j^{(s+1)}] &= -\frac{\hh (\alpha_i, \alpha_j)}{2} (F_i^{(r)} F_j^{(s)} + F_j^{(s)} F_i^{(r)}),\\ i \neq j, N = 1 - a_{ij} \Rightarrow \operatorname{sym} &[E_i^{(r_1)}, [E_i^{(r_2)}, \cdots [E_i^{(r_N)}, E_j^{(s)}]\cdots]] = 0 \\ i \neq j, N = 1 - a_{ij} \Rightarrow \operatorname{sym} &[F_i^{(r_1)}, [F_i^{(r_2)}, \cdots [F_i^{(r_N)}, F_j^{(s)}]\cdots]] = 0 \\ E_{\alpha_i}^{(r)} &= E_i^{(r)} \\ [E_{\hat{\alpha}}^{(r)}, E_{\check{\al}}^{(1)}] &= \hh E_{\alpha}^{(r)} \\ F_{\alpha_i}^{(r)} &= F_i^{(r)} \\ [F_{\hat{\alpha}}^{(r)}, F_{\check{\al}}^{(1)}] &= \hh F_{\alpha}^{(r)} \end{align*} \end{theorem} \addtocounter{equation}{1} We can repackage these generators and relations using generating series. Let $$ E_i(u) = \sum_{s = 1}^\infty E_i^{(s)} u^{-s}, \quad H_i(u) = 1 + \sum_{s = 1}^\infty H_i^{(s)} u^{-s}, \quad F_i(u) = \sum_{s = 1}^\infty F_i^{(s)} u^{-s} $$ Then the above relations can be written in series form.
For example the series version of the commutator relation between $ E_i $ and $ F_i $ is \begin{equation} [E_i(u), F_j(v)] = -\delta_{ij} \frac{\hh}{u-v} \bigl( H_i(u) - H_i(v) \bigr). \label{EFseries} \end{equation} \begin{remark} Note that the Drinfeld Yangian $ U_\hh \gp $ and our Yangian $ Y $ have natural $ \C[\hh] $-forms; moreover their $ \hh = 1 $ specializations $ U_1 \gp $ and $ Y_1 $ coincide as Hopf algebras. The gradings on $ U_\hh \gp $ and on $ Y $ give rise to two different filtrations on $ Y_1$. In the work of Brundan-Kleshchev \cite{BK}, these filtrations appear as the ``loop filtration'' and the ``Kazhdan filtration'', respectively. \end{remark} \subsection{Identification of Yangian with functions of \texorpdfstring{$ \Gm$}{ G\_1[t\textasciicircum {-1}] }} From the results described above, we can deduce that there is a perfect Hopf pairing between $U(\gm)$ and $Y/\hh Y$, as per Corollary \ref{DG Duality2}. Let us denote by $Q$ the root lattice for $\fg$, let $ Q_+ $ denote the positive root cone, and let $ Q_> = Q_+ \smallsetminus \{0\} $, $ Q_< = - Q_> $. \begin{lemma} \label{Y0 graded} The Drinfeld Yangian $U_\hh \gp $, $Y$, and $Y/\hh Y$ are all $Q$-graded Hopf algebras (all tensor products being graded by total degree). The pairing between $U(\gm)$ and $Y/\hh Y$ respects this grading. \begin{proof} The Hopf grading on these spaces is induced by the action of the elements $h_i^{(0)}$ (resp. $H_i^{(1)}$). In each case, coproducts preserve total degree since the coproduct is a homomorphism and the above elements are primitive. It is clear from the formulas of Corollary \ref{DG Duality2} that the pairing between $U(\gm)$ and $Y/\hh Y$ respects the grading for pairings $\left\langle y, x\right \rangle$, when $y\in \gm$, $x\in Y/\hh Y$. The result follows for monomials $y_1\cdots y_k\in U(\gm)$ by induction on $k$. \end{proof} \end{lemma} For $\alpha\in Q$, let $Y(\alpha)$ be the corresponding component of $Y/\hh Y$ as per Lemma \ref{Y0 graded}.
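For orientation, the generators sit in these graded components as follows (an immediate consequence of the defining relations):

```latex
% The Q-grading of Lemma \ref{Y0 graded} on the generators:
E_i^{(r)} \in Y(\alpha_i), \qquad H_i^{(r)} \in Y(0), \qquad F_i^{(r)} \in Y(-\alpha_i),
% since, e.g., \{ H_j^{(1)}, E_i^{(r)} \} = (\alpha_j, \alpha_i)\, E_i^{(r)}.
```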
\begin{proposition} \label{DG pairing} In $Y/hY$ we have: \[\Delta(H_i^{(r)}) = H_i^{(r)} \otimes 1 + 1 \otimes H_i^{(r)} + \sum_{s=1}^{r-1}{ H_i^{(s)}\otimes H_i^{(r-s)}} + \bigoplus_{\substack{\alpha+\beta = 0\\\alpha\in Q_<, \beta\in Q_>}} Y(\alpha)\otimes Y(\beta) \] \[\Delta(E_i^{(r)}) = E_i^{(r)}\otimes 1 + 1\otimes E_i^{(r)} + \sum_{s=1}^{r-1}{ H_i^{(s)}\otimes E_i^{(r-s)}} + \bigoplus_{\substack{\alpha + \beta = \alpha_i\\ \alpha\in Q_<, \beta\in Q_>}} Y(\alpha)\otimes Y(\beta) \] \[\Delta(F_i^{(r)}) = F_i^{(r)}\otimes 1 + 1\otimes F_i^{(r)} + \sum_{s=1}^{r-1}{ F_i^{(s)}\otimes H_i^{(r-s)}} + \bigoplus_{\substack{\alpha+\beta = -\alpha_i\\ \alpha\in Q_<, \beta\in Q_>}} Y(\alpha)\otimes Y(\beta)\] \end{proposition} \begin{proof} To begin we recall that $\Delta(X^{(1)}) = X^{(1)}\otimes 1 + 1 \otimes X^{(1)}$ for all $x\in\fg$. Also, using the presentation of $U_\hh \gp$ with generators $x, J(x)$ for $x\in \fg$ (for which the coproduct is known), a direct calculation yields \[ \Delta(H_i^{(2)}) = H_i^{(2)} \otimes 1 + 1\otimes H_i^{(2)} + H_i^{(1)}\otimes H_i^{(1)} - \sum_{\beta\in\Phi_+}{C_\beta (\beta,\alpha_i) F_\beta^{(1)} \otimes E_\beta^{(1)}} \] where $(e_\beta,f_\beta) = C_\beta^{-1}$. We prove the coproduct for $E_i^{(r)}$ by induction on $r$, using the identity \[ E_i^{(r+1)} = \frac{1}{(\alpha_i,\alpha_i)}\left\{H_i^{(2)}, E_i^{(r)}\right\} - H_i^{(1)} E_i^{(r)} \] The coproduct of the right side is expanded using the Poisson-Hopf algebra relations, the formula for $\Delta(H_i^{(2)})$, and the inductive hypothesis. The above identity is then applied again to reduce the terms in the result, and yields the form as claimed. An analogous induction proves the case of $\Delta(F_i^{(r)})$. Finally, we take the coproduct of the identity \[ H_i^{(r)} = \left\{E_i^{(r)},F_i^{(1)}\right\} \] to finish the proof.
\end{proof} Recall that the pairing between $U(\gm)$ and $Y/hY$ is determined, as per Corollary \ref{DG Duality2}, by the pairing between $\gm$ and $\gp$ given in Section \ref{sec:Poisson structure}. Choose an ``FHE'' total ordering on the generators $f_\alpha t^r, h_i t^r, e_\alpha t^r$ for $U(\gm)$. Then it is easy to see that the previous lemma and proposition completely control the pairing between $U(\gm)$ and $Y/hY$ for the corresponding PBW basis. For example, $- F_i^{(r)}$ acts as the dual of the basis element $e_i t^{-r}$, etc. \begin{theorem} \label{th:YangianQuantize} There is an isomorphism of $\mathbb N $-graded Poisson Hopf algebras $\phi \colon Y/hY \cong \O(\Gm)$ such that \begin{align*} \phi(H_i(u)) &= \prod_j{ \Delta_{\omega_j, \omega_j}(u)^{-a_{ji}}} \\ \phi(F_i(u)) &= d_i^{-1/2}\frac{\Delta_{\omega_i,s_i \omega_i}(u)}{\Delta_{\omega_i, \omega_i}(u)} \\ \phi(E_i(u)) &= d_i^{-1/2}\frac{\Delta_{s_i \omega_i, \omega_i}(u)}{ \Delta_{\omega_i,\omega_i}(u)} \end{align*} where $ \O(\Gm) $ is graded using the loop rotation $\C^\times $ action. \end{theorem} \begin{proof} We check explicitly that the right-hand sides act as described by the previous proposition. Let $X =(x_1 t^{r_1})\cdots(x_k t^{r_k})\in U(\gm)$ be a basis monomial with the FHE order as chosen above. Then we have \[ \frac{\Delta_{\omega_i, s_i\omega_i}(u)}{\Delta_{\omega_i, \omega_i}(u)}(X) = -d_i^{-1/2}\left. \frac{\partial^k}{\partial z_1\cdots\partial z_k} \frac{\langle v_{-\omega_i}, (1+z_1 u^{r_1} x_1)\cdots (1+z_k u^{r_k} x_k) f_i v_{\omega_i}\rangle}{\langle v_{-\omega_i}, (1+z_1 u^{r_1} x_1)\cdots (1+z_k u^{r_k} x_k) v_{\omega_i}\rangle}\right|_{z_1=\ldots=z_k=0} \] noting that $\overline{s_i}v_{\omega_i} = f_i' v_{\omega_i} = - d_i^{-1/2} f_i v_{\omega_i}$ in the generalized minor (see Section \ref{sec:notation}). 
Since we have an FHE order, to get something nonzero in the right-hand numerator $x_k$ must be a multiple of $e_i$, since $e_i f_i v_{\omega_i} = h_i v_{\omega_i} = d_i v_{\omega_i}$. In this case $z_k u^{r_k} e_i$ does not contribute to the denominator, and the remaining factors cancel leaving \[ \frac{\Delta_{\omega_i, s_i\omega_i}(u)}{\Delta_{\omega_i, \omega_i}(u)}(X) = -d_i^{1/2} \left. \frac{\partial^k}{\partial z_1\cdots\partial z_k} z_k u^{r_k} \right|_{z_1=\ldots=z_k=0} \] so $X$ must have been $e_i t^{-r}$ to start with. But this is precisely how $ d_i^{1/2} F_i(u)$ acts on $X$. Similar computations hold in the two remaining cases. To prove the equality for $H_i(u)$ one can also work in $\O(\Gm)$, and build off the known results for $E_i(u)$ and $F_i(u)$, since we must have \[ \frac{\phi(H_i(v)) - \phi(H_i(u))}{u-v} = \Big\{\phi(E_i(u)), \phi(F_i(v))\Big\} \] We can then use formula \eqref{eq:minor-bracket} and identities for generalized minors. The nondegeneracy of both Hopf pairings implies that $\phi$ is an injection. It follows that $\phi$ is an isomorphism from a dimension count; both $Y/hY$ and $\mathcal{O}(\Gm)$ have Hilbert series for the loop grading given by \[\prod_{i=1}^\infty \frac{1}{(1-q^{i})^{\dim\fg}}.\] Indeed for $Y/hY$ this follows from the PBW theorem coming from $Y$, since $Y$ is a free $\C[[h]]$-algebra. On the other hand, the Hilbert series of $ \mathcal{O}(\Gm) $ is the same as the Hilbert series for $ \textrm{Sym} (\gm) $: since $ \Gm $ is pro-unipotent, we have an isomorphism of vector spaces. \end{proof} \subsection{Shifted Yangians} The Yangian has a very interesting class of subalgebras: the shifted Yangians. Let $\mu$ be a dominant weight. We will now redefine elements \begin{equation} \label{Fredefine} F_\al^{(s)}=\frac{1}{h}\big[F_{\hat{\al}}^{(s-\langle \mu^*, \check{\alpha} \rangle)}, F_{\check{\al}}^{(\langle \mu^*, \check{\alpha}\rangle+1)} \big].
\end{equation} for $ \al $ a positive non-simple root and for $ s > \langle \mu^*, \al \rangle $. Note that these $ F_\al^{(s)} $ depend on $ \mu $. \begin{definition} The {\bf shifted Yangian} $Y_\mu $ is the subalgebra of $ Y $ generated by $E_\alpha^{(s)}$ for all $ \alpha, s$, $H_i^{(s)}$ for all $ i, s$, and $F_\alpha^{(s)}$ for $ s > \langle \mu^*, \alpha \rangle$. \end{definition} \begin{proposition} \begin{enumerate} \item Ordered monomials in the $ E_{\alpha}^{(s)}, H_i^{(s)}, F_\alpha^{(s)} $ give a basis for $ Y_\mu $. \item The natural map $ Y_\mu/\hh Y_\mu \rightarrow Y / hY $ is injective. \end{enumerate} \end{proposition} \begin{proof} We first construct a PBW basis for $ Y $ slightly different from the one described in Section \ref{OurYangian}. The generators $E_{\alpha}^{(s)}$ are defined as usual (cf. Section \ref{OurYangian}). The generators $F_{\alpha}^{(s)}$ are given the usual definition when $s\leq\langle\mu^*,\alpha\rangle$, but for $s>\langle\mu^*,\alpha\rangle$ we take definition (\ref{Fredefine}). By the general remarks following Theorem \ref{DG Duality}, ordered monomials in generators $F_{\alpha}^{(s)}, H_{i}^{(s)}, E_{\alpha}^{(s)}$ are a PBW basis of $Y$. Any element $x\in Y_\mu$ can be expressed as a linear combination of these PBW monomials. We now show that the monomials appearing in such an expression contain no factors of the form $F_{\alpha}^{(s)}$ for $s\leq\langle \mu^*,\alpha\rangle$. By definition, $x$ is a linear combination of (unordered) monomials in $F_{\alpha}^{(s)}, H_{i}^{(t)}, E_{\alpha}^{(u)}$, where $s>\langle\mu^*,\alpha\rangle$. To put $x$ in PBW form one has to commute these generators past each other. By definition, when $s>\langle\mu^*,\alpha\rangle$, $F_{\alpha}^{(s)}$ is a linear combination of monomials built from $F_i^{(t)}$, where $t>\langle\mu^*,\alpha_i\rangle$.
Therefore it suffices to show that when commuting such $F_i^{(t)}$ past the other generators of $Y_\mu$ one never obtains factors of the form $F_{j}^{(u)}$ for $u\leq\langle \mu^*,\alpha_j\rangle$. This is a direct consequence of the relations appearing in Theorem \ref{OurYPresentation}. This proves the first statement of the proposition. The second part is a direct consequence of the first. \end{proof} In the limit as $ \mu \rightarrow \infty $, we obtain $ Y_\infty $ which is the subalgebra generated by all $ E_\alpha^{(s)}, H_i^{(s)} $. This is called the {\bf Borel Yangian} in \cite{FR}. We will now show that this shifted Yangian is a quantization of $ \Gr_\mu $. Recall that $ \O(\Gr_\mu) $ is embedded as a Poisson subalgebra of $ \O(\Gm) $. \begin{theorem} \label{th:shiftedYangianQuantize} The isomorphism $ \phi $ restricts to an isomorphism of Poisson algebras from $Y_\mu / \hh Y_\mu $ to $\O(\Gr_\mu)$. \end{theorem} \begin{proof} First note that $ Y_\mu / \hh Y_\mu $ is generated as a Poisson algebra by all $ E_i^{(s)}$, $H_i^{(s)}$, and those $ F_i^{(s)} $ for $ s > \langle \mu^*, \alpha_i \rangle $. We note that Lemma \ref{le:GrmuMinors} shows that the images of these generators under $\phi $ land in the subalgebra $ \O(\Gr_\mu) $. Since $ \O(\Gr_\mu) $ is a Poisson subalgebra of $ \O(\Gm) $, we see that $ \phi $ restricts to a map $ Y_\mu/hY_\mu \rightarrow \O(\Gr_\mu) $. This map is injective, since it is the restriction of an injective map. Thus, we only need to show that it is surjective, which we do by a dimension count. Note that by Lemma \ref{le:LieStab} the isotropy Lie algebra of $t^{w_0\mu}$ in $\Gm$ is the finite dimensional nilpotent Lie algebra $$\bigoplus_{\al \in \Delta_+} \bigoplus_{i = 1}^{-\langle w_0\mu,\al\rangle} t^{-i}\fg_\al. $$ As a $\C^*$-module, the functions on the group are identical to those on the Lie algebra by the unipotence of the stabilizer.
Thus, if we let $d(k)$ be the number of positive roots $\al$ such that $\langle w_0\mu,\al\rangle \le -k$, the Hilbert series of the functions on the stabilizer is \[\prod_{i=1}^\infty \frac{1}{(1-q^{i})^{d(i)}}.\] The Hilbert series of $\O(\Gm)^{\Gm_\mu}$ is the quotient of that of $\O(\Gm)$ by that of functions on the stabilizer. That is, it is \[\prod_{i=1}^\infty \frac{1}{(1-q^{i})^{\dim\fg-d(i)}}.\] On the other hand, the PBW basis for the shifted Yangian gives us the same Hilbert series for $ Y_\mu $. \end{proof} Thus the shifted Yangian $ Y_\mu $ gives a quantization of $ \Gr_\mu $. \begin{remark} We should note that it is this theorem that forces us to use the thick Grassmannian; it will fail if we take the analogue of $ \Gr_\mu $ in the thin affine Grassmannian, since this has ``too many'' functions, and will correspond to a completion of $Y_\mu$. \end{remark} \subsection{Deformation of the Yangian} \label{sec:bd-yangian} We consider a deformation of the Yangian, which we think of as related to the Beilinson-Drinfeld Grassmannian deforming the affine Grassmannian. We consider for each node $i$ in the Dynkin diagram an infinite sequence of parameters $r^{(1)}_i,r^{(2)}_i,\dots \in \C[[h]] $ and their generating series $r_i(u)=1+r_i^{(1)}u^{-1}+\cdots $. Now consider the algebra $Y(\mathbf r)$ generated by the coefficients of $E_i(u), F_i(u),H_i(u)$. The relations are as in the previous section, with the relation \eqref{EFseries} replaced by \begin{equation} (u-v)[E_i(u), F_i(v)] = \hh(r_i(v)H_i(v) - r_i(u)H_i(u)), \label{EFprime} \end{equation} and let $Y_\mu(\mathbf r) $ be the shifted analogue of this algebra. $Y(\mathbf r) $ is actually isomorphic to the trivial deformation of the Yangian via the map $H_i(u)\mapsto H_i(u)/r_i(u)$. \section{Quantization of slices} In order to quantize the slices $ \Grlmbar $, we will need to define a quotient of $ Y_\mu $ (and its deformations $ Y_\mu(\mathbf r)$).
To do this we will use the work of Gerasimov-Kharchev-Lebedev-Oblezin \cite{GKLO}. \subsection{Change of Cartan generators} It will be convenient for us to change the Cartan generators of $ Y $. Following \cite{GKLO}, we define $ A_i^{(s)} $ by the equation \begin{equation} \label{eq:AfromH} H_i(u) = \frac{\prod_{j \ne i} \prod_{p = 1}^{-a_{ji}} A_j(u -\frac{\hh}{2} (\alpha_i + p \alpha_j, \alpha_j)) }{A_i(u ) A_i(u -\frac{\hh}{2}(\alpha_i, \alpha_i)) } \end{equation} where $ A_i(u) = 1 + \sum_{s = 1}^\infty A_i^{(s)} u^{-s} $. \begin{example} In the $ G = SL_2 $ case, this gives $ H(u) = \frac{1}{A(u)A(u-\hh)}$ and so for example we have \begin{align*} H^{(1)} = -2A^{(1)}, \quad H^{(2)} = 3{A^{(1)}}^2 - \hh A^{(1)} - 2A^{(2)} \end{align*} \end{example} \begin{proposition}[\mbox{\cite[Lemma 2.1]{GKLO}}] Equation \ref{eq:AfromH} uniquely determines all the $ A_i^{(s)} $. \qed \end{proposition} One can think of the new generators $ A_i^{(s)} $ as being related to the fundamental coweights of $ G $, whereas the $H_i^{(s)}$ match with the simple coroots. In particular, we have the following result which follows by setting $h = 0 $ in (\ref{eq:AfromH}). \begin{proposition} \label{th:ImageAi} Let $ \phi : Y/hY \rightarrow \O(\Gm) $ be the isomorphism from Theorem \ref{th:YangianQuantize}. Then $ \phi(A_i^{(s)}) = \Delta_{\om_i, \om_i}^{(s)} $.\qed \end{proposition} \subsection{The GKLO representation} \label{GKLOrep} In this section, we describe certain representations via difference operators of shifted Yangians, based on work of Gerasimov-Kharchev-Lebedev-Oblezin \cite{GKLO}. Fix an orientation of the Dynkin diagram; we will write $ i \leftarrow j $ to denote arrows in this quiver. This will replace the ordering on the simple roots in \cite{GKLO}. Fix a dominant weight $\la$ such that $\mu\leq \la$ and let $m_i = \langle \lambda - \mu, \omega_{i^*} \rangle$ and let $ \lambda_i = \langle \lambda, \alpha_{i^*} \rangle $. 
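Returning to the $SL_2$ example above, the expansion of $H(u) = \frac{1}{A(u)A(u-\hh)}$ can be checked mechanically. The following is a quick sketch using the sympy library (the symbols \texttt{A1}, \texttt{A2}, \texttt{hbar} stand for $A^{(1)}, A^{(2)}, \hh$; commutative symbols suffice here, since the Cartan generators commute among themselves):

```python
# Check the SL_2 example: H(u) = 1/(A(u) A(u - hbar)) forces
#   H^{(1)} = -2 A^{(1)},   H^{(2)} = 3 (A^{(1)})^2 - hbar A^{(1)} - 2 A^{(2)}.
# We work in the variable w = u^{-1} and truncate everything past w^2.
import sympy as sp

w, hbar, A1, A2 = sp.symbols('w hbar A1 A2')

# A(u) and A(u - hbar), expanded to order w^2:
#   1/u = w,  1/(u - hbar) = w + hbar*w^2 + O(w^3)
A_u = 1 + A1 * w + A2 * w**2
A_shift = 1 + A1 * (w + hbar * w**2) + A2 * w**2

P = sp.expand(A_u * A_shift)       # A(u) A(u - hbar), modulo O(w^3)
x = sp.expand(P - 1)               # begins at order w
H = sp.expand(1 - x + x**2)        # 1/(1 + x) modulo w^3, since x^3 = O(w^3)

H1, H2 = H.coeff(w, 1), H.coeff(w, 2)
assert H1 == -2 * A1
assert sp.expand(H2 - (3 * A1**2 - hbar * A1 - 2 * A2)) == 0
```

The same truncated-series computation, with the full product over $j \ne i$ in (\ref{eq:AfromH}), can be used to extract $A_i^{(s)}$ in terms of the $H_i^{(s)}$ in higher rank.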
Define a $\C[[\hh]]$-algebra $ D_\mu^\lambda $, with generators $z_{i,k}, \beta_{i,k}, \beta_{i,k}^{-1} $, for $ i \in I $ and $ 1 \le k \le m_i $, and $ (z_{i,k} - z_{i,l})^{-1} $, and relations that all generators commute except that $ \beta_{i,k} z_{i,k} = (z_{i,k} + d_i h) \beta_{i,k} $. This algebra $ D_\mu^\lambda $ is an algebra of $ \hh$-difference operators. \begin{proposition} The algebra $ D_\mu^\lambda $ is a free $ \C[[h]]$-algebra and we have an isomorphism of Poisson algebras $$ D_\mu^\lambda / h D_\mu^\lambda \cong \C[z_{i,k}, (z_{i,k} - z_{i,l})^{-1} , \beta_{i,k}, \beta_{i,k}^{-1}] $$ where the right hand side is given the Poisson structure defined by $ \{ \beta_{i,k}, z_{i,k} \} = d_i \beta_{i,k} $ and all other generators Poisson commute. \end{proposition} \begin{proof} Obviously, we have a map \[\C[z_{i,k}, (z_{i,k} - z_{i,l})^{-1} , \beta_{i,k}, \beta_{i,k}^{-1}] \to D_\mu^\lambda / h D_\mu^\lambda \] by observing that $D_\mu^\lambda / h D_\mu^\lambda $ is commutative. From the Bergman diamond lemma, we see that the algebra $ D_\mu^\lambda$ has a PBW basis consisting of \[h^p\cdot \prod\beta_{i,k}^{\pm a_{i,k}}\cdot \prod z_{j,k}^{b_{j,k}}\cdot \prod_{k<\ell} (z_{i,k}-z_{i,\ell})^{e_{i,k,\ell}}\] subject to restriction that if $b_{j,k}\neq 0$, then $k$ must be maximal in its equivalence class for the relation given by the transitive closure of the binary relation $k\sim \ell$ if $e_{j,k,\ell}\neq 0$. Freeness over $ \C[[h]] $ follows immediately and since the same monomials give a basis of $\C[[h]][z_{i,k}, (z_{i,k} - z_{i,l})^{-1} , \beta_{i,k}, \beta_{i,k}^{-1}]$, this confirms that we have the desired isomorphism. The Poisson bracket calculation follows immediately from the relations. \end{proof} Fix some complex numbers $ c_i^{(r)} $ for $ i \in I $, $ 1 \le r \le \lambda_i $. 
For any variable $x$, consider the monic degree $ \lambda_i $ polynomial whose coefficients are the numbers $ c_i^{(r)} $, $C_i(x) = x^{\lambda_i} + c_i^{(1)} x^{\lambda_i - 1} + \dots + c_i^{(\lambda_i)}.$ Note that $ x^{-\lambda_i} C_i(x) = 1 + c_i^{(1)} x^{- 1} + \dots + c_i^{(\lambda_i)}x^{-\lambda_i}$. We also introduce polynomials $Z_i(x)=\prod_{k=1}^{m_i}(x-z_{i,k})$ and $Z_{i,k}(x)=\prod_{\ell\neq k}(x-z_{i,\ell})$. Let $\mu_i=\left<\mu,\alpha_{i^*} \right>$ and set $F_{\mu,i}(u)=\sum_{s=1}^{\infty}F_i^{(s+\mu_i)}u^{-s}$. Finally, for any $ \mathbf c $ as above, define $ \mathbf r $ by \begin{equation} \label{eq:rfromc} r_i(u) = u^{-\lambda_i} C_i(u) \frac{ \prod_{j \ne i} \prod_{p=1}^{-a_{ji}} (1- u^{-1}( \hh d_i\frac{a_{ij}}{2} + hd_jp))^{m_j}}{(1-\hh d_{i} u^{-1})^{m_i}} \end{equation} We are now ready to define the \textbf{GKLO representation}: \begin{theorem} There is a map of $ \C[[\hh]] $-algebras, $\Psi_\mu^\lambda: Y_\mu(\mathbf r) \rightarrow D_\mu^\lambda $ defined by: \begin{align*} A_i(u) &\mapsto u^{-m_i}Z_i(u)\\ E_i(u) &\mapsto d_i^{-1/2} \sum_{k=1}^{m_i}\frac{\displaystyle \prod_{j \rightarrow i} \prod_{p=1}^{-a_{ji}}Z_j(z_{i,k}-\hh d_i \frac{a_{ij}}{2}-hd_jp) }{\displaystyle (u-z_{i,k})Z_{i,k}(z_{i,k}) }\beta^{-1}_{i,k} \end{align*} And, $F_{\mu,i}(u)$ maps to $$ -d_i^{-1/2} \sum_{k=1}^{m_i} C_i(z_{i,k} + hd_{i}) \frac{\prod_{j \leftarrow i} \prod_{p=1}^{-a_{ji}} Z_j(z_{i,k}-\hh d_i(\frac{a_{ij}}{2} -1)-hd_jp)}{(u-z_{i,k} - hd_{i})Z_{i,k}(z_{i,k}) }\beta_{i,k} $$ \end{theorem} \begin{proof} When $\mu=0$ this is a reformulation of \cite[Theorem 3.1.(i)]{GKLO}. Suppose then that $\mu\neq0$. Then the proof of \cite[Theorem 3.1]{GKLO} applies to all the relations in $Y_\mu $ except for the commutator relation between $E_i(u)$ and $F_{\mu,i}(v)$. 
In the shifted Yangian this relation takes the form \begin{equation} \label{RelI} (u-v)[E_i(u),F_{\mu,i}(v)]=h(J_{\mu,i}(v)-J_{\mu,i}(u)) \end{equation} where $J_i(v)=r_i(v)H_i(v)=\sum_{p=0}^{\infty}J_i^{(p)}v^{-p} $ and $$ J_{\mu,i}(v)=\sum_{p=1}^{\infty}J_i^{(p+\mu_i)}v^{-p}. $$ To express the left hand side of (\ref{RelI}) we set, for each $1 \le k \le m_i$: \begin{eqnarray*} L_i(v)&=&\frac{C_i(z_{i,k}+hd_i)\prod_{j\neq i}\prod_{p=1}^{-a_{ji}}Z_j(z_{i,k}-hd_i(\frac{a_{ij}}{2}-1)-hd_jp)}{Z_{i,k}(z_{i,k}+hd_i)Z_{i,k}(z_{i,k})(v-z_{i,k}-hd_i)} \\ R_i(v)&=&\frac{C_i(z_{i,k})\prod_{j \neq i}\prod_{p=1}^{-a_{ji}}Z_j(z_{i,k}-hd_i\frac{a_{ij}}{2}-hd_jp)}{Z_{i,k}(z_{i,k}-hd_i)Z_{i,k}(z_{i,k})(v-z_{i,k})} \end{eqnarray*} Then the left hand side of (\ref{RelI}) is equal to $$ d_i^{-1}\sum_{k=1}^{m_i} \Big[(L_i(v)-R_i(v))-(L_i(u)-R_i(u))\Big] $$ Note that we expressed this sum as a ``$v$-part'' minus a ``$u$-part''. Now we consider the right hand side of (\ref{RelI}). Note that $$\lambda_i=\mu_i+2m_i+\sum_{j \leftrightarrow i}a_{ji}m_j$$ Therefore, $$ r_i(u) = u^{-\mu_i}\frac{C_i(u)\prod_{j\neq i}\prod_{p=1}^{-a_{ji}}(u- \hh d_i\frac{a_{ij}}{2} - hd_jp)^{m_j}}{u^{m_i}(u-hd_{i})^{m_i}} $$ Now $$ H_i(u) \mapsto \frac{u^{m_i}(u-hd_{i})^{m_i}}{\prod_{j\neq i}\prod_{p=1}^{-a_{ji}}(u-hd_i\frac{a_{ij}}{2}-hd_jp)^{m_j}} \frac{\prod_{j \neq i}\prod_{p=1}^{-a_{ji}}Z_j(u-hd_i\frac{a_{ij}}{2}-hd_jp)}{Z_i(u)Z_i(u-hd_i)} $$ and hence $$ r_i(u)H_i(u) \mapsto u^{-\mu_i}C_i(u)\frac{\prod_{j \neq i}\prod_{p=1}^{-a_{ji}}Z_j(u-hd_i\frac{a_{ij}}{2}-hd_jp)}{Z_i(u)Z_i(u-hd_i)} $$ Therefore $$ C_i(u)\frac{\prod_{j \neq i}\prod_{p=1}^{-a_{ji}}Z_j(u-hd_i\frac{a_{ij}}{2}-hd_jp)}{Z_i(u)Z_i(u-hd_i)}=\sum_{p=0}^{\infty}J_i^{(p)}u^{\mu_i-p} $$ On the other hand $$ J_{\mu,i}(u)=\sum_{p=\mu_i+1}^{\infty}J_i^{(p)}u^{\mu_i-p} $$ showing that $J_{\mu,i}(u)$ is a truncation of $C_i(u)\frac{\prod_{j \neq i}\prod_{p=1}^{-a_{ji}}Z_j(u-hd_i\frac{a_{ij}}{2}-hd_jp)}{Z_i(u)Z_i(u-hd_i)}$.
More precisely, for $r=1,2,...$ \[ hJ_{\mu,i}(u)\Big\vert_{u^{-r}}=hC_i(u)\frac{\prod_{j \neq i}\prod_{p=1}^{-a_{ji}}Z_j(u-hd_i\frac{a_{ij}}{2}-hd_jp)}{Z_i(u)Z_i(u-hd_i)}\Big\vert_{u^{-r}} \] Using partial fractions we have that $\frac{hd_{i}}{Z_i(u)Z_i(u-hd_{i})}$ equals $$ \sum_{k=1}^{m_i}\frac{1}{Z_{ik}(z_{ik})Z_{ik}(z_{ik}+hd_{i})(u-z_{ik}-hd_{i})}-\frac{1}{Z_{ik}(z_{ik})Z_{ik}(z_{ik}-hd_{i})(u-z_{ik})} $$ Therefore for $r=1,2,...$ the $u^{-r}$-coefficient of $hd_{i}J_{\mu,i}(u)$ is equal to the $u^{-r}$-coefficient of \begin{eqnarray*} \sum_{k=1}^{m_i}\frac{C_i(u)\prod_{j \neq i}\prod_{p=1}^{-a_{ji}}Z_j(u-hd_i\frac{a_{ij}}{2}-hd_jp)}{Z_{ik}(z_{ik})Z_{ik}(z_{ik}+hd_{i})(u-z_{ik}-hd_{i})}&-&\\\frac{C_i(u)\prod_{j \neq i}\prod_{p=1}^{-a_{ji}}Z_j(u-hd_i\frac{a_{ij}}{2}-hd_jp)}{Z_{ik}(z_{ik})Z_{ik}(z_{ik}-hd_{i})(u-z_{ik})} \end{eqnarray*} Now observe that for any polynomial $p(u)$ and for $r=1,2,...$ $$ \frac{p(u)}{u-z}\Big|_{u^{-r}} = \frac{p(z)}{u-z}\Big|_{u^{-r}} $$ Therefore for $r=1,2,...$ the $u^{-r}$-coefficient of $hd_{i}J_{\mu,i}(u)$ is equal to the $u^{-r}$-coefficient of \begin{eqnarray*} \sum_{k=1}^{m_i}\big(L_i(u)-R_i(u)\big)\end{eqnarray*} proving (\ref{RelI}). \end{proof} \begin{example} \label{ExampleA1} If $\fg=\mathfrak{sl}_2$ and $\la=\alpha^\vee, \mu=0$, then the formulas above simplify considerably. In this case, \[ A(u) \mapsto 1-z u^{-1} \qquad E(u) \mapsto \frac{1}{u-z}\beta^{-1} \] and $F(u) \mapsto -((z+\hh)^2 + c^{(1)} (z+\hh) + c^{(2)})\frac{1 }{u-z-h}\beta$. In particular, \begin{equation*} H^{(1)} \mapsto 2z \qquad E^{(1)} \mapsto \beta^{-1} \qquad F^{(1)} \mapsto -((z+h)^2 + c^{(1)} (z+h) + c^{(2)}) \beta \end{equation*} Restrict this representation to the copy of $ \mathfrak{sl}_2 $ generated by $E^{(1)},H^{(1)}+c^{(1)}+h, F^{(1)} $, and consider these as difference operators acting on the polynomial ring $ \C[z] $. (More precisely, these act on $\C[[h]][z]$, but one can specialize $h$ to $1$.)
This is a standard Whittaker module for $ \mathfrak{sl}_2 $ with a generic nilpotent character. \end{example} \begin{remark} We can define a $\Z$-grading on $ D_\mu^\lambda $ by setting $$ \deg \hh = 1, \ \deg z_{i,k} = 1, \ \deg \beta_{i,k} = m_i + \sum_{i \rightarrow j} a_{ij}m_j + \lambda_i - \mu_i $$ With this definition, the GKLO representation preserves grading. \end{remark} \subsection{Quantization of the slices \texorpdfstring{$\Grlmbar$}{Gr}} For any $ \mathbf c $ as above, let $Y_\mu^\lambda(\mathbf c) $ be the image of $ Y_\mu(\mathbf r) $ in $ D_\mu^\la $ under the GKLO representation $ \Psi_\mu^\la$ and let $I_\mu^\la(\mathbf c) $ denote the kernel of $ \Psi_\mu^\la$ (here $ \mathbf r $ is determined from $ \mathbf c $ by (\ref{eq:rfromc})). Note that $ Y^\lambda_\mu(\mathbf c) $ is free as a $\C[[\hh]] $-algebra since it is a subalgebra of $ D_\mu^\lambda $, a free $ \C[[\hh]]$-algebra. We have the isomorphism $ Y_\mu(\mathbf c) \rightarrow Y_\mu $ from Section \ref{sec:bd-yangian} and thus we get an isomorphism of Poisson algebras $ Y_\mu(\mathbf c) / h Y_\mu(\mathbf c) \rightarrow \O(\Gr_\mu) $ from Theorem \ref{th:shiftedYangianQuantize}. On the other hand, because $ Y^\lambda_\mu(\mathbf c) $ is free as a $\C[[\hh]]$-algebra, we get a surjection of Poisson algebras $ Y_\mu(\mathbf c)/ hY_\mu(\mathbf c) \rightarrow Y_\mu^\lambda(\mathbf c)/ h Y_\mu^\la(\mathbf c) $. We will now establish the following theorem which shows that $ Y^\la_\mu $ is a quantization of a scheme supported on $ \Grlmbar$. \begin{theorem}\label{Ylm-quant} There is a surjective map of Poisson algebras $ Y^\la_\mu(\mathbf c) / \hh Y^\la_\mu(\mathbf c) \rightarrow \O(\Gr^{\bar \la}_\mu)$ which is an isomorphism modulo the nilradical of the left hand side.
\end{theorem} \begin{remark} Consider the map $$ Y^\la_\mu(\mathbf c) / \hh Y^\la_\mu(\mathbf c) \rightarrow \C[z_{i,k}, (z_{i,k} - z_{i,l})^{-1} , \beta_{i,k}, \beta_{i,k}^{-1}] $$ obtained by reducing the GKLO representation mod $\hh$. If we knew that this map was injective, then we would know that $ Y^\la_\mu(\mathbf c) / \hh Y^\la_\mu(\mathbf c) $ was reduced and that the map from Theorem \ref{Ylm-quant} was an isomorphism. We will in fact make a stronger conjecture. \end{remark} If Conjecture \ref{co:main2} holds, then we can strengthen Theorem \ref{Ylm-quant} as follows. \begin{theorem} \label{Ylm-quant2} If Conjecture \ref{co:main2} holds then \begin{enumerate} \item There is an isomorphism of Poisson algebras $Y^\la_\mu(\mathbf c) / \hh Y^\la_\mu(\mathbf c) \rightarrow \O(\Gr^{\bar \la}_\mu)$. \item $ Y^\la_\mu(\mathbf c) $ is the quotient of $ Y_\mu(\mathbf c)$ by the 2-sided ideal generated by $ A^{(s)}_i $ for $ s > m_i, i \in I $. \end{enumerate} \end{theorem} \begin{proof}[Proof of Theorem \ref{Ylm-quant}] Via the isomorphism $ Y_\mu(\mathbf c)/h Y_\mu(\mathbf c) \rightarrow \O(\Gr_\mu) $, we can regard $ Y_\mu^\la(\mathbf c)/\hh Y_\mu^\la(\mathbf c) $ as a quotient of $ \O(\Gr_\mu) $ by an ideal, which we denote $ I_2 $. First, note that $ \Psi_\mu^\la(A_i^{(s)}) = 0 $ for $ i \in I$, $ s > m_i $ and thus $ \Delta_{\omega_i, \omega_i}^{(s)} \in I_2 $ for $ i \in I $ and $ s > m_i$. Since $ I_2 $ is a Poisson ideal, we see that $ J_\mu^\la \subset I_2 $. By Proposition \ref{pr:GenJmulam}, we see that the vanishing locus of $ J_\mu^\la $ is $ \Grlmbar $ and thus the vanishing locus of $ I_2 $ is contained in $ \Grlmbar$. Thus it suffices to show that the vanishing locus of $ I_2 $ is not strictly contained in $ \Grlmbar $. Since $ I_2 $ is a Poisson ideal, we see that $ V(I_2) $ is a Poisson subvariety of $ \Grlmbar $ and thus is the union of $ \Gr^{\bar \nu}_\mu $, for $ \nu \le \lambda $. Suppose that we have $ V(I_2) = \cup_j \Gr^{\bar \nu_j}_\mu $ for $ \nu_j < \la $.
For each $ j$, there exists $ i $ such that $ \langle \nu_j - \mu, \om_{i^*} \rangle < \langle \la - \mu, \om_{i^*} \rangle = m_i $. Thus applying Proposition \ref{pr:SetTheoryGrlam}, $ \prod_i \Delta_{\omega_i, \omega_i}^{(m_i)} $ vanishes on $\cup_j \Gr^{\bar \nu_j}_\mu $. Hence for some $ k $ we have $ (\prod_i \Delta_{\omega_i, \omega_i}^{(m_i)})^k \in I_2$. On the other hand, we see that under the GKLO representation $$\Psi_\mu^\la(A_i^{(m_i)}) = (-1)^{m_i} z_{i,1} \cdots z_{i, m_i} $$ and thus under the map $$ \O(\Gr_\mu) \cong Y_\mu(\mathbf c)/h Y_\mu(\mathbf c) \rightarrow D^\la_\mu/hD^\la_\mu \cong \C[z_{i,k}, (z_{i,k} - z_{i,l})^{-1}, \beta_{i,k}, \beta_{i,k}^{-1}] $$ we see that $ (\prod_i \Delta_{\omega_i, \omega_i}^{(m_i)})^k $ is mapped to a monomial in the $ z_{i,k} $. In particular, this shows that $ (\prod_i \Delta_{\omega_i, \omega_i}^{(m_i)})^k$ does not lie in $ I_2 $, contradicting the previous paragraph. Thus we conclude that $ V(I_2) = \Grlmbar $ as desired. \end{proof} \begin{proof}[Proof of Theorem \ref{Ylm-quant2}] Let $I_1 $ be the ideal of $ \Grlmbar $ in $ \O(\Gr_\mu) $. Let $ K $ be the ideal in $ Y_\mu(\mathbf c) $ generated by $ A_i^{(s)} $ for $ s > m_i, i \in I $. Then we have an inclusion $ K \subset I^\la_\mu(\mathbf c) $ and a resulting map $$ K/hK \rightarrow I^\la_\mu(\mathbf c)/hI^\la_\mu(\mathbf c) = I_2 $$ which may not be injective. Let $ I_3 $ denote the image of this map. From the definitions, we see that $I_3 \subset I_2 $. Moreover, we have that $ J^\la_\mu \subset I_3 $, since $ I_3 $ is a Poisson ideal and it contains the generators of $ J^\la_\mu $. In the previous proof we have shown that $ I_2 \subset I_1 $. Thus we have a chain of inclusions $ J^\la_\mu \subset I_3 \subset I_2 \subset I_1 $. On the other hand, Conjecture \ref{co:main2} shows us that $ I_1 = J^\la_\mu $. Hence we conclude that $ I_1 = I_2 = I_3 = J^\la_\mu $. So the first assertion holds.
For the second assertion, note that $ I_3 = I_2 $ implies that $ K/hK \rightarrow I^\la_\mu(\mathbf c)/hI^\la_\mu(\mathbf c)$ is surjective. Let $ L = I^\la_\mu(\mathbf c)/K $. The long exact sequence for $ \otimes_{\C[[h]]} \C $ gives $$ K/hK \rightarrow I^\la_\mu(\mathbf c) / h I^\la_\mu(\mathbf c) \rightarrow L/hL \rightarrow 0 $$ and thus $ L/hL = 0 $. From Nakayama's lemma, we conclude that $L = 0 $ and thus $ K = I^\la_\mu(\mathbf c) $ as desired. \end{proof} \subsection{Universality of the quantization} \label{se:universality} There is already a rich literature on the theory of deformation quantizations of symplectic varieties. The most relevant work for us is that of Bezrukavnikov and Kaledin \cite{BK04a}, who show the existence and uniqueness of deformation quantizations of symplectic resolutions. This theory can be applied directly to a smooth convolution variety $ \Gr^{\overline{\vlam}}_\mu $. Moreover, as noted by Braden, Proudfoot and the second author \cite[3.4]{BPW}, it can be extended in a very straightforward way to the non-smooth case $\Gr^{\overline{\vlam}}_\mu $, since we know that $ \Gr^{\overline{\vlam}}_\mu $ is a terminalization (Theorem \ref{th:terminalization}). This shows that the variety $\Gr^{\bar \la}_\mu$ has a canonical family of quantizations which extend to a deformation quantization sheaf on $\Gr^{\bar \vlam}_\mu$. The base of this family is the same as the base for the universal deformation of $\Grlmbar$ as a symplectic singularity (as constructed by Kaledin-Verbitsky \cite{KV02} or Namikawa \cite{NaP}). By \cite[1.1]{NaWeyl}, this base $\mathbb{B}$ is an affine space modulo the action of a finite group.
This group can be described by looking at the codimension 2 strata of $\Grlmbar$, which are $\Gr^{\la-\al_i}_\mu$, and taking the product of the Weyl groups attached to them by the McKay correspondence, which (using Example \ref{eg:Kleinian}) in our case results in the product of symmetric groups $S_{\la,\mu}=\prod_{i\colon m_i>0} S_{\la_i}$. Here we use the fact that these strata are simply connected. For the remainder of this section, let us regard the complex numbers $ r_i^{(s)} $ and $ c_i^{(s)} $ as variables and let $ \tilde{Y}_\mu $ be the $\C[r_i^{(s)}] $-algebra which recovers the old $ Y_\mu(\mathbf r ) $ upon specializing the variables. Let $\tilde{Y}^\la_\mu = \tilde{Y}_\mu \otimes_{\C[r_i^{(s)}]} \C[c_i^{(s)}] / (\{A_i^{(s)} : s > m_i \}) $ (here we use a map $ \C[r_i^{(s)}] \rightarrow \C[c_i^{(s)}] $ given by (\ref{eq:rfromc})). If Conjecture \ref{co:main2} (and hence Theorem \ref{Ylm-quant2}) holds, then $ \tilde{Y}^\la_\mu $ can be specialized (via a map $ \C[c_i^{(s)}] \rightarrow \C $) to each of the $ Y^\la_\mu(\mathbf c) $. We conjecture that $ \tilde{Y}^\la_\mu $ is related to the above universal quantization as follows. First note that the BD analogue $\Gr_{\mu;\mathbb{A}^{\rho(\la)}}^{\bar\vlam}$ is a symplectic deformation of $\Grlmbar$ over the base $\mathbb{A}^{\rho(\la)}$, and thus is the pull-back of the universal deformation by a map $b\colon \mathbb{A}^{\rho(\la)}\to \mathbb{B}$. \begin{conjecture} \begin{enumerate} \item The map $ b : \mathbb{A}^{\rho(\la)} \rightarrow \mathbb{B} $ descends to a surjective map $ \tilde b : \mathbb{A}^{\rho(\la)}/S_{\la, \mu} \rightarrow \mathbb{B} $. \item The algebra $\tilde{Y}^\la_\mu$ is the base change along $\tilde b $ of the universal, Bez\-ru\-kav\-ni\-kov-Kaledin-type quantization. \end{enumerate} \end{conjecture} \begin{example} We continue Example \ref{ExampleA1}, so $ G = SL_2 $ and $ \lambda = \alpha^\vee, \mu = 0 $.
Note that in $ Y^\lambda_\mu $, we have that $E^{(s)}=(-A^{(1)})^{s-1}E^{(1)}$, and $F^{(s)}=F^{(1)}(-A^{(1)})^{s-1},$ and so $Y^\lambda_\mu$ is generated by $ E^{(1)} $ and $ F^{(1)} $. Let $ U_\hh \mathfrak{sl}_2 $ denote the $ \hh$-version of the universal enveloping algebra of $ \mathfrak{sl}_2 $. Let $ C = EF + FE + \frac{1}{2}H^2 $ be its Casimir element. For any complex number $ c $, let $ Z_c $ denote the ideal in $U_\hh \mathfrak{sl}_2$ generated by the central element $C - c $. Standard results give that $ U_\hh \mathfrak{sl}_2 / Z_c $ is a quantization of the nilpotent cone of $ \mathfrak{sl}_2 $, which is isomorphic as a Poisson variety to $ \Grlmbar $. The map $$ E^{(1)} \mapsto E, \quad H^{(1)} \mapsto H +c^{(1)} + h, \quad F^{(1)} \mapsto F$$ defines an isomorphism $ Y^\lambda_\mu \cong U_h\mathfrak{sl}_2/Z_{c} $, where $c=\frac{1}{2}(c^{(1)})^2-2c^{(2)}-\frac{1}{2}h^2$. If we don't specialize, then the same formulas combined with the assignment $$ c^{(2)}\mapsto -\frac{1}{2}C+\frac{1}{4}(c^{(1)})^2-\frac{1}{4}h^2 $$ give an isomorphism \[\tilde{Y}^2_0 \cong U_\hh(\mathfrak{sl}_2)[c^{(1)}].\] In this example, $U_\hh(\mathfrak{sl}_2)$ is the universal quantization, and $c^{(1)}$ a trivial deformation parameter. The universal family is \[\mathfrak{sl}_2\overset{\operatorname{tr}(a^2)}\longrightarrow\C. \] Since the fiber of the BD analogue over $(x,y)\in \mathbb{A}^2$ can be identified with matrices with eigenvalues $x$ and $y$, the map $b$ is just $b(x,y)=\nicefrac{1}{4}(x-y)^2$. Thus, choosing $x+y$ and $(x-y)^2$ as generators of symmetric functions, $\tilde b$ is just the projection map $\mathbb{A}^2\to \mathbb{A}^1$. \end{example} The sum of the $c_i^{(1)}$ is always a trivial deformation parameter; usually this is the only such parameter, but there are degenerate cases where other parameters can be trivialized as well (for example, if $\la=\mu$).
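The $\mathfrak{sl}_2$-triple underlying Example \ref{ExampleA1} and the example above can be checked by realizing $\beta$ concretely as the shift operator $z \mapsto z + h$, so that $\beta z = (z+h)\beta$ as in the definition of $D_\mu^\lambda$. The following sketch (using sympy; the dictionary representation of operators and the helper names \texttt{mul}, \texttt{comm}, \texttt{scale} are ad hoc) verifies that the images of $E^{(1)}$, $H^{(1)}+c^{(1)}+h$, $F^{(1)}$ satisfy the $\hh$-deformed $\mathfrak{sl}_2$ relations:

```python
# Check the sl_2 triple of the GKLO example inside the algebra of
# h-difference operators: beta acts by z -> z + h, so beta * z = (z + h) * beta.
# An operator sum_a p_a(z) beta^a is stored as {a: p_a}.
import sympy as sp

z, h, c1, c2 = sp.symbols('z h c1 c2')

def mul(X, Y):
    # (p(z) beta^a)(q(z) beta^b) = p(z) q(z + a*h) beta^(a+b)
    out = {}
    for a, p in X.items():
        for b, q in Y.items():
            out[a + b] = sp.expand(out.get(a + b, 0) + p * q.subs(z, z + a * h))
    return {k: v for k, v in out.items() if v != 0}

def comm(X, Y):
    XY, YX = mul(X, Y), mul(Y, X)
    diff = {k: sp.expand(XY.get(k, 0) - YX.get(k, 0)) for k in set(XY) | set(YX)}
    return {k: v for k, v in diff.items() if v != 0}

def scale(c, X):
    return {k: sp.expand(c * v) for k, v in X.items()}

P = (z + h)**2 + c1 * (z + h) + c2    # C(z + h) with C(x) = x^2 + c1 x + c2
e = {-1: sp.Integer(1)}               # E^{(1)} -> beta^{-1}
f = {1: -P}                           # F^{(1)} -> -C(z + h) beta
H = {0: 2 * z + c1 + h}               # H^{(1)} + c^{(1)} + h -> 2z + c1 + h

assert comm(H, e) == scale(2 * h, e)  # [H, E] = 2h E
assert comm(H, f) == scale(-2 * h, f) # [H, F] = -2h F
assert comm(e, f) == scale(h, H)      # [E, F] = h H
```

The last assertion is the rank-one instance of relation (\ref{RelI}): the constant $c^{(1)} + h$ appearing in $H$ is exactly the coefficient $r^{(1)}$ produced by (\ref{eq:rfromc}).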
\subsection{Quantization of Zastava spaces} In this section, we assume that Conjecture \ref{co:main2} holds, and thus we may use the conclusions of Theorem \ref{Ylm-quant2}. Let us fix $ \nu $ in the positive coroot cone. Choose some $ \mu_0 $ such that $ \mu_0 + \nu $ is dominant. Let $ \mathbf c $ be a collection of complex numbers as above and consider $ Y_{\mu_0}^{\mu_0 + \nu}(\mathbf c) $. Now for any dominant $ \mu$ with $ \mu \ge \mu_0 $, we extend $\mathbf c $ by 0 and (slightly abusing notation) consider $ Y_{\mu}^{\mu+\nu}(\mathbf c) $. Since the generators of $ Y_{\mu}^{\mu+\nu}(\mathbf c)$ are a subset of the generators of $ Y_{\mu_0}^{\mu_0 + \nu}(\mathbf c) $ and the relations are the same, we obtain a map $ Y_{\mu}^{\mu+\nu}(\mathbf c)\rightarrow Y_{\mu_0}^{\mu_0 + \nu}(\mathbf c) $. It is easy to see that this map is an isomorphism on the $ N$th filtered piece if $ \langle \mu, \alpha_i \rangle \ge N $ for all $i $. Thus this system stabilizes to the algebra $ Y^{\infty + \nu}_\infty $, which is the quotient of the Borel Yangian $ Y_\infty $ by the 2-sided ideal generated by $ A_i^{(s)} $ for $ s > \langle \nu, \alpha_i \rangle $; perhaps surprisingly, this limit doesn't depend on $ \mathbf c $ or our starting $ \mu_0 $. Combining Theorem \ref{Ylm-quant2} with Theorem \ref{th:maptoZastava}, we obtain the following (dependent on Conjecture \ref{co:main2}), which was conjectured in \cite{FR} for $ G = SL_n $ (and proven for $ G =SL_2 $). \begin{theorem} $Y_\infty^{\infty + \nu}/ \hh Y_\infty^{\infty + \nu} $ is isomorphic to the Poisson algebra $ \O(Z_\nu) $.
\qed \end{theorem} \begin{remark} As mentioned above, the GKLO representation gives rise to a map of graded Poisson algebras $$ Y^\lambda_\mu(\mathbf c) / \hh Y^\lambda_\mu(\mathbf c) \rightarrow D^\lambda_\mu(\mathbf c) / \hh D^\lambda_\mu(\mathbf c) $$ (which we expect is an inclusion) and thus to a $\C^\times $-equivariant map of Poisson varieties $$ \prod_i (\C^{m_i} \smallsetminus \Delta) \times (\C^\times)^{m_i} \rightarrow \Gr^\lambda_\mu $$ which we expect to be \'etale. If we then compose with the map $ \Gr^\lambda_\mu \rightarrow Z_{\lambda-\mu} $, we obtain $ \prod_i (\C^{m_i} \smallsetminus \Delta) \rightarrow Z_{\lambda-\mu} $, which was studied in \cite{GKLO}. \end{remark}
\section{Introduction} Organic conductors of the Bechgaard salts family, $(TMTSF)_2 X$ where $TMTSF$ = tetramethylselenafulvalene, are quasi-one-dimensional (quasi-1D) systems, which have been found over the last few years to exhibit fascinating properties under magnetic field \cite{review,gorkov,gm91,pl96,hlm1}. The typical hierarchy of their transfer integrals is $t_a=3000$K, $t_b=300$K, $t_c=10$K. In three members of this family ($X=ClO_4,PF_6,ReO_4$), the metallic phase is destroyed by a moderate magnetic field $H$ applied along the $c$ direction, perpendicular to the most conducting planes ($a,b$). A cascade of magnetic phases, separated by first order transitions, appears as the field intensity is stepped up: within each sub-phase, which I have called ``Ultra Quantum Crystal'' (UQC in the following), a Field-Induced Spin-Density Wave (FISDW) phase (i.e. a Quantum Crystal) is stabilized with a peculiar electronic structure, characterized by a small number of exactly filled Landau levels (bands in fact) \cite{pl96}. Each UQC sub-phase exhibits a Quantized Hall conductivity, which is the first example of a Quantum Hall Effect in a 3D system. This cascade of quantized phases results from an interplay between the nesting properties of the Fermi Surface (FS) and the quantization of electronic orbits in the field: the wave vector of the SDW varies with field so that unpaired carriers in a subphase are always organized in completely filled Landau bands. As a result the number of carriers in each subphase is quantized, and so is the Hall conductivity: $\sigma_{xy}=2Ne^2/h$. The factor 2 accounts for spin degeneracy \cite{review,pl96}. The condensation of the UQC phases results from the peculiar electronic structure of an open Fermi Surface metal under magnetic field: because of the Lorentz force, the electronic motion becomes periodic and confined along the high conductivity direction of the chains ({\bf a} direction) \cite{gorkov}.
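To fix orders of magnitude, the field-induced wave vector $G=eHb/\hbar$ and the associated energy scale $\hbar\omega_c=\hbar v_FG/2$ can be estimated numerically. The short sketch below uses representative values $b\approx 7.7$ \AA\ and $v_F\approx 10^5$ m/s; these are illustrative assumptions, not fitted parameters of the present work.

```python
# Order-of-magnitude estimate of the magnetic wave vector G = e*H*b/hbar
# and of the energy scale hbar*omega_c = hbar*v_F*G/2, for representative
# (assumed, not fitted) Bechgaard-salt parameters.
e = 1.602176634e-19      # C
hbar = 1.054571817e-34   # J*s
kB = 1.380649e-23        # J/K

b = 7.7e-10              # interchain distance, m (assumed)
vF = 1.0e5               # Fermi velocity, m/s (assumed order of magnitude)
H = 10.0                 # field, T

G = e * H * b / hbar                  # m^-1
omega_c_K = vF * G / 2.0 * hbar / kB  # hbar*omega_c in kelvin

print(f"G ~ {G:.2e} m^-1, hbar*omega_c ~ {omega_c_K:.1f} K")
```

The resulting energy scale of a few kelvin at 10 T is the scale against which the transfer integrals above should be compared.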
The periodic motion of the electrons in real space is characterized by a wave vector $G=eHb/\hbar$, $b$ being the interchain distance. (In the rest of this Letter, wave vectors will be expressed in units of $G$.) As a result, the static bare susceptibility of the normal phase, $\chi _0({\bf Q})$, can be expressed as a sum over weighted strictly 1D bare susceptibilities which diverge at quantized values of the longitudinal component of the wave vector $Q^n_{\vert \vert}=2k_F+n$ \cite{gm91,pl96,hlm1}. The largest divergence signals the appearance of a SDW phase with quantized vector $Q_{\vert \vert}=2k_F+N$. This Quantized Nesting Model (QNM) \cite{hlm1} describes most of the features of the phase diagram in a magnetic field. It has been shown recently to explain the experimental observation of the Hall plateaux sign reversal when the field varies \cite{zm}. Most plateaux exhibit the same sign. (By convention I will refer to these plateaux as positive ones.) The sign reversal was discovered by Ribault in $(TMTSF)_2ClO_4$ under certain conditions of cooling rate \cite{rib2}. Negative plateaux have been reproduced and also found in $(TMTSF)_2PF_6$, where their existence depends crucially on pressure \cite{pivetau,cooper,bali}. Recently, Balicas et al. \cite{bali} have shown that there exists a range of pressure for which, in the $PF_6$ salt, the sequence of observed plateaux when the field decreases can be identified with the quantum numbers $N=1,2,-2,3,4,5,6,7$. An earlier experiment showed a sequence of phases $N=1,2,-2,4,-4,5,6$ \cite{cooper}. Hereafter, I will refer to the UQC phases with negative Hall numbers as ``Ribault Phases''.
Zanchi and Montambaux \cite{zm} have shown that the negative plateaux can be understood within the QNM assuming the dispersion relation in the normal phase to be: \begin{eqnarray} \label{model} \epsilon({\bf k})& =& v_F(\vert k_x\vert - k_F)+ \epsilon_{\perp }({\bf k_{\perp}}),\\ \epsilon_{\perp }({\bf k_{\perp}})& =& -2t_b\cos k_yb -2t_c\cos k_zc - 2 t'_b\cos 2k_yb \nonumber \\ &&-2t_3\cos 3k_yb -2t_4\cos 4k_yb \nonumber \end{eqnarray} $\epsilon_{\perp }({\bf k_{\perp}})$ is a periodic function which describes a warped FS. With $t_3=t_4=0$, Eq.(\ref{model}) cannot lead to sign reversals, as $sign(N)=sign(Q_{\vert \vert}-2k_F)=sign(t'_b)$ \cite{zm}. However small values of $t_3\simeq 0.2t'_b=2$K and $t_4=0.2$K are sufficient to account for the experimental results of Balicas et al. \cite{bali,zm}. The normal metal--FISDW instability line $T_{cN}(H)$ is given by: \begin{equation} \label{ki} \chi _0({\bf Q},T_{cN}, H)= \Sigma _nI_n^2 (Q_{\perp })\chi _0^{1D}(Q_{\vert \vert}-n, T_{cN})=1/\lambda \end{equation} $\lambda$ is the electronic interaction constant. Eq.(\ref{ki}) exhibits the structure of $\chi _0$ as the sum of one dimensional terms $\chi _0^{1D}$ shifted by the magnetic field wave vector $G=eHb/\hbar$; $\chi_0^{1D}\propto -\ln(\max \{ v_F(2k_F-q),T \}/\epsilon_F)$. In Eq.(\ref{ki}), the coefficient $I_n$ depends on the dispersion relation and $H$: \begin{eqnarray} I_n(Q_{\perp}) &=& \nonumber \\ \langle \exp i\left[(T_{\perp}(p+Q_{\perp}/2)+ T_{\perp}(p-Q_{\perp}/2))+np \right]\rangle && \end{eqnarray} where $T_{\perp}(p) = (1/\hbar \omega _c)\int_0^p\epsilon_{\perp}(p')dp'$ and $\langle ...\rangle$ denotes the average over $p$.
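The coefficients $I_n$ can be evaluated by direct numerical averaging over $p$. The sketch below keeps only the $t_b$ and $t'_b$ harmonics of $\epsilon_\perp$ and uses illustrative values of the ratios $t_b/\omega_c$ and $t'_b/\omega_c$ (assumed for the example, not the values quoted above); since the averaged quantity is a pure phase and $T_\perp$ is odd, the $I_n$ come out real and Parseval's theorem gives $\sum_n I_n^2=1$, which serves as a check.

```python
import cmath, math

# Numerical evaluation of the coefficients I_n(Q_perp) by averaging over p,
# keeping only the t_b and t'_b harmonics of eps_perp. The ratios below are
# illustrative choices, not the values fitted in the text.
tb_over_wc = 2.0     # t_b / omega_c (assumed)
tbp_over_wc = 0.2    # t'_b / omega_c (assumed)

def T_perp(p):
    # T(p) = (1/omega_c) * int_0^p eps_perp(p') dp'
    # with eps_perp(p) = -2 t_b cos p - 2 t'_b cos 2p   (p = k_y b)
    return -2.0 * tb_over_wc * math.sin(p) - tbp_over_wc * math.sin(2.0 * p)

def I(n, Q_perp, steps=4000):
    total = 0j
    for k in range(steps):
        p = 2.0 * math.pi * k / steps
        phase = T_perp(p + Q_perp / 2.0) + T_perp(p - Q_perp / 2.0) + n * p
        total += cmath.exp(1j * phase)
    return total / steps

Q_perp = 1.0
coeffs = {n: I(n, Q_perp) for n in range(-20, 21)}
# since T_perp is odd, the I_n are real, and Parseval gives sum I_n^2 = 1
norm = sum(c.real ** 2 for c in coeffs.values())
print(round(norm, 4))
```

The same routine, inserted into Eq.(\ref{ki}), reproduces the quantized divergences of $\chi_0$ at $Q_{\vert\vert}=2k_F+n$.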
Let me define a generalized instability temperature $T_{N \pm m}$: \begin{eqnarray} \label{Tc} 1/\lambda &=& I^2_{N\pm m}(Q^N_{\perp } \pm q_{\perp }) \ln \left( \frac{2\gamma E_0}{\pi T_{N \pm m}} \right) + \nonumber \\ && \sum_{n \neq 0} I^2_{N\pm m +n} (Q_{\perp }^N \pm q_{\perp})\ln \left( \frac{E_0}{\vert n\vert \omega _c}\right) \end{eqnarray} In Eq.(\ref{Tc}), $\gamma$ is Euler's constant, $E_0$ is a high energy cut-off, and $\omega_c =v_FG/2$. For $m=0$ and $q_{\perp}=0$, $T_{N \pm m}=T_{cN}$, the ordering temperature for the $N$th subphase. For $m\neq 0$, $T_{N \pm m}(q_{\perp})$ generalizes the definition of the critical temperatures on either side of phase $N$ in the $(T,H)$ plane. $T_{N \pm m}(q_{\perp})$ are at most equal to the virtual transition lines $T_{N\pm m}$ which can be drawn in the $N$th subphase part of the phase diagram and which represent virtual transition lines to phases with slightly larger free energy than the $N$th subphase \cite{pl87}. In the ($T,H$) plane, there is an infinite number of continuous lines crossing the phase diagram. The upper limit of this family is the actual (continuous non analytic) transition line from the normal metal to the UQC; this line coincides piecewise with the transition lines labelled by the successive integers describing the Quantum Hall conductivity. See Fig. (1). An example of a computed network of transition lines was given in \cite{lm}. Using this network of real and virtual transition lines, I showed, with Poilblanc, that the UQC collective modes exhibit, aside from the usual Goldstone modes linear in wave vector, at least one Magneto-Roton (hereafter MR) mode within the single particle gap, located at $q_{\vert \vert}=G=1$, $q_z=\pi /c$, and $q_y$ at some optimum value \cite{pl87,dppl}.
This mode, together with the usual Goldstone modes, is the signature of the novel nature of the electron-hole condensate in UQC: a Spin Density Wave driven by orbital quantization and an Integer Quantum Hall system driven by electronic interactions. The MR mode is at once a consequence of quasi 1D electronic periodic orbital motion, gauge invariance, and the long range crystalline order of the Field Induced Phase \cite{dppl}. One obvious question comes to mind: {\it are the Ribault phases the same objects as the phases with positive plateaux?} At first sight, the answer seems to be positive: both have a periodic modulation of Spin Density, both exhibit Quantized Hall Plateaux, the only difference being the sign of the carriers, in turn connected to a quantized wave vector $Q_{\vert \vert}$ which is larger (smaller) than $2k_F$, for $N$ positive (negative). The purpose of this Letter is to point out that {\it the Ribault Phase of the UQC differs from the usual UQC in one important aspect: its collective modes exhibit at least {\bf two} (sometimes {\bf four}, or more) low lying MR modes within the gap}, with different wave vector components $q_{1\vert \vert} =M_1$ and $q_{2\vert \vert}=M_2$ at the magneto-roton minima, with $M_1,M_2$ integers $>1$ (sometimes $q_{i\vert \vert} =M_i >1$, $i$ from 1 to 4, etc.), as opposed to one low-lying mode for the usual UQC, with wave vector $q_{\vert \vert}=1$. Both MR minima, relative to the single particle gap, {\it vary with field, with opposite signs of the derivative, as opposed to an almost field independent MR mode in the usual UQC}. To my knowledge this is the first example of such a rich collective mode structure in a quantum condensate.
This will result in possibly markedly different physical properties for the Ribault Phase, as well as its neighbouring phases, as compared with the usual UQC \cite{pl97,plcmc}. In order to prove my point, let me recall a few results on the UQC collective mode theory \cite{pl87,dppl}. \section{New MR Collective Modes} The collective modes are obtained from the poles of the spin-spin correlation function in the ordered phase, in the RPA \cite{pl87,dppl}. The equation is: \begin{eqnarray} \label{rot} &(1-\lambda \hat{\chi }^0_{+-}({\bf Q_N +q}, \omega ) )(1-\lambda \hat{\chi }^0_{+-}({\bf Q_N-q}, \omega )) & \nonumber \\ & - \lambda^2 \hat{ \Gamma}^0_{+-}({\bf q}, \omega )\hat{ \Gamma}^0_{-+}({\bf q},\omega ) = 0& \end{eqnarray} where ${\bf q= Q - Q_N}$ is the collective mode wave vector. In Eq.(\ref{rot}) $\hat{\chi }_{+-}^0$ are the irreducible bubbles renormalized by all possible scatterings on the mean field potentials connected to the various gaps. Likewise $\hat{\Gamma}^0_{+-}({\bf q}, \omega )$ is the extraordinary bubble, also renormalized with all possible scatterings. The simplest approximation resums to all orders the gap $\delta_N= \Delta I_N$ at the Fermi level and takes all other gaps into account to second order in perturbation \cite{pl87,dppl}. Then \begin{eqnarray} \hat{\chi}^0_{+-}({\bf Q}_N+ {\bf q}, \omega)&=&\Sigma_n I^2_{N+n}(Q_{\perp }^N+q_{\perp }) \tilde{\bar{\chi }}^0\left( n -q_{\vert \vert}, \omega \right) \end{eqnarray} and \begin{eqnarray} \hat{\Gamma}^0_{+-} ({\bf q}, \omega )=& &\nonumber \\ \Sigma_n I_{N+n}(Q_{\perp}^N +q_{\perp})I_{N-n}(Q_{\perp }^N-q_{\perp }) \tilde{\bar{ \Gamma^0}} \left( n - q_{\vert \vert}, \omega \right) && \end{eqnarray} $\tilde{\bar{\chi}}^0$ and $\tilde{\bar{\Gamma}}^0$ are, for $n=0$, the objects discussed in \cite{lra} in connection with collective modes of SDW.
For $q\ll 1$ and $\omega \ll 2\delta_N$, Eq.(\ref{rot}) describes the usual phase and amplitude modes $\omega^2=v_F^2{\bf q}^2$ and $\omega^2=v_F^2{\bf q}^2 + 4\delta^2_N$. {\it New physics appears for $q_{\vert \vert} =m +\delta $}, with $\delta \ll 1$ and $m$ integer. In that case, $\hat{\chi}^0_{+-}({\bf Q}_N+ {\bf q}, \omega )\neq \hat{\chi}^0_{+-}({\bf Q}_N- {\bf q}, \omega )$, so that {\it Eq.(\ref{rot}) does not factorize anymore.} Then an interaction with the gap at $N\pm m$ allows the collective mode to propagate in a medium almost identical to the case $m=0$ and $q_{\vert \vert}\ll 1 $. A second interaction allows the outgoing oscillation to retrieve the momentum lost with the first interaction. {\it The mode with $m\neq 0$ would have exactly the same energy as that with $m=0$ and $q_{\vert \vert} \ll 1$ if all $I_N$ were equal}. Such is not the case, so that the {\it phase and amplitude modes of the order parameter are not decoupled anymore} for $m\neq 0$ and, {\it instead of a zero energy mode at $q_{\vert \vert}=m$, a local minimum appears}. More precisely Eq.(\ref{rot}) reduces to: \begin{eqnarray} \label{rot2} \left( \ln \left( \frac{2\gamma E_0}{\pi T_{N+m }} \right) - \tilde{\bar{\chi_0}}(\delta,\omega) \right)&& \nonumber \\ \times \left( \ln \left( \frac{2\gamma E_0}{\pi T_{N-m}} \right) -\tilde{\bar{\chi_0}}(\delta, \omega) \right) & =& \left( \tilde{\bar {\Gamma}} (\delta ,\omega ) \right)^2 \end{eqnarray} where $T_{N \pm m}$ is defined in Eq.(\ref{Tc}), $q_{\vert \vert}=m +\delta$, and $\delta \ll 1$. For simplicity, I restrict the discussion here to $T=0$ K \cite{pl97,plcmc}. Then Eq.(\ref{rot2}) yields, setting $x=\omega_{MR}(m,\delta=0)/2\delta_N$ ($x<1$) \cite{pl87}: \begin{eqnarray} \label{rot3} \left(\ln \left(\frac{T_{cN}}{T_{N+m}}\right) -(x^2-1/2)h(x) \right) && \nonumber \\ \times \left( \ln\left( \frac{T_{cN}}{T_{N-m}}\right)-(x^2-1/2)h(x) \right) =h^2(x)/4 && \end{eqnarray} where $h(x)=\frac{\sin^{-1}x}{x(1-x^2)^{1/2}}$.
Using (\ref{rot2}) and (\ref{rot3}), one proves the existence of at least one low lying MR mode at $m=1$ in the usual UQC case with no Ribault phase \cite{pl87}. The possibility of other MR minima with $m>1$ was mentioned, but until now no proof was given for their existence at energies well inside the single particle gap \cite{pl87}. The field dependence of the MR mode at $m=1$ is easy to find when $\epsilon_{\pm 1}=(T_{cN}-T_{N\pm 1})/T_{cN}\ll 1$. The MR minimum is $x^2_0=(\epsilon_{+1}+\epsilon_{-1})/2$. Since $\epsilon_{+1}$ and $\epsilon_{-1}$ have opposite and nearly equal variations with field within the phase $N$ (see \cite{lm}), $x^2_0$ is almost constant within a given phase, equal to the value of $\epsilon_{\pm 1}$ at their crossing point. Consider now the Ribault Phase $N=-2$ studied by Balicas et al. \cite{bali}, with the sequence of quantum numbers $1,2,-2,3,4,5,6,7$. The lowest lying modes are not at $q_{\vert \vert}=1$ any more, but at $q_{\vert \vert,\alpha}=4$ and $q_{\vert \vert,\beta}=5$. Consider one of these, say $q_{\vert \vert,\alpha}$. Define $\epsilon_{\alpha } (H)=(T_{c,-2}-T_2)/T_{c,-2}$ for $T_{c,-2} \geq T_2$; $\epsilon_{\alpha}$ goes to zero at the transition between phases $N=-2$ and $N=2$; it varies roughly linearly with field, with a negative slope of order a few tenths of a K per T; assume for simplicity $\epsilon_{\alpha}\ll 1$ in the whole Ribault phase. Define also $L_{\alpha }=\ln(T_{c,-2}/T_{-6})$. $L_{\alpha}$ is certainly larger than 1 and slowly varying in the whole Ribault phase. Assume it is constant, for simplicity, with no loss in generality. The equation for the corresponding MR minimum $a(H)=\omega_{MR}(4) /2\delta_{-2}$ is: \begin{eqnarray}\label{rot4} [\epsilon_{\alpha}-(a^2-1/2)h(a)][L_{\alpha}-(a^2-1/2)h(a)]&=&h^2(a)/4 \end{eqnarray} In the (realistic) limit $L_{\alpha}\gg 1$, there is always a solution $a=a_0 +s_{\alpha }\epsilon_{\alpha}(H)$, where $a_0\leq 1/\sqrt{2}$ and $s_{\alpha}\simeq 1$.
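The small-$\epsilon$ result quoted above can be checked numerically: solving Eq.(\ref{rot3}) by bisection for illustrative values $\epsilon_{+1}=0.04$ and $\epsilon_{-1}=0.06$ (assumed for the sake of the example) reproduces $x_0^2\simeq(\epsilon_{+1}+\epsilon_{-1})/2$ to within a few percent.

```python
import math

# Numerical check: solve Eq. (rot3) by bisection for the magneto-roton
# minimum x = omega_MR/(2*delta_N), with illustrative (assumed) values of
# eps_p = ln(T_cN/T_{N+m}) and eps_m = ln(T_cN/T_{N-m}).
def h(x):
    return math.asin(x) / (x * math.sqrt(1.0 - x * x))

def F(x, eps_p, eps_m):
    g = (x * x - 0.5) * h(x)
    return (eps_p - g) * (eps_m - g) - h(x) ** 2 / 4.0

def solve(eps_p, eps_m, lo=1e-6, hi=0.7):
    # F > 0 at lo and F < 0 at hi for small eps: plain bisection
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if F(lo, eps_p, eps_m) * F(mid, eps_p, eps_m) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

eps_p, eps_m = 0.04, 0.06
x0 = solve(eps_p, eps_m)
# compare with the small-eps expression x0^2 = (eps_p + eps_m)/2
print(round(x0 ** 2, 4), round((eps_p + eps_m) / 2.0, 4))
```

The same routine, applied to Eq.(\ref{rot4}) with $L_\alpha\gg 1$, locates the Ribault-phase minima $a(H)$ and $b(H)$ discussed next.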
At most, $a$ may have a weak additional field dependence $\propto L^{-1}_{\alpha}$. Similarly, for $q_{\vert \vert,\beta}$, I find a second MR minimum $b=b_0 +s_{\beta}\epsilon_{\beta}(H)$, with $\epsilon_{\beta}=(T_{c,-2}- T_3)/T_{c,-2}$, which has the same magnitude but a slope opposite to that of $\epsilon_{\alpha}$. See Fig. (1). Now $L_{\beta}=\ln(T_{c,-2}/T_{-7})$ and is large. Around each minimum, the MR dispersion is strongly anisotropic \cite{pl97}. For $\delta \ll 1$, $\omega_{MR}^2(m,\delta)=\omega_{MR}^2(m,0)+ s(v_F\delta)^2$ ($s\sim 1$) \cite{pl87}. Interesting things should happen when the two magneto-roton minima, which have commensurate parallel components of their wave vectors (here 4 and 5), cross, as they are likely to do, somewhere within the Ribault phase \cite{pl97}. Fig. (2) summarizes the results for the Ribault phase $N=-2$. The Ribault Phase contaminates the neighbouring normal phases: within the $N=3$ phase, close to the (3,-2) transition, Eq.(\ref{rot3}), on top of the mode at $q_{\vert \vert}=1$, yields a MR mode at $q_{\vert \vert}=5$. Since $\ln(T_{c,3}/T_8)$ is not small \cite{lm,plcmc}, the field dependence of that mode should be similar in magnitude to the corresponding mode in the Ribault phase, but opposite in sign. The mode at $q_{\vert \vert}=1$ should keep its much weaker field dependence, because the line $T_{2}(H)$ passes through the $N=3$ phase rather close to the actual metal-UQC instability line. The Ribault phase also exerts its influence on the normal UQC when, although thermodynamically unstable, it is close to being stable. This situation can be experimentally realized by tuning the pressure. The $N=-2$ phase is actually quite close to being stable even when $t_3=t_4=0$ in Eq.(\ref{model}) (see Ref. \cite{lm}).
As the virtual transition line to a Ribault Phase, say with $N=-2$, nears the usual UQC-normal metal transition lines, say $N=3$ and $N=2$, from below, secondary MR modes appear from the bottom of the conduction band, at the wave vector of the relevant transition: within the $3$ (resp. $2$) phase, at $q_{\vert \vert}=5$ (resp. $q_{\vert \vert}=4$). Call $\eta_3$ (resp. $\eta_2$) the smallest relative distance between the virtual line $T_{-2}$ and the lines $T_{c3}$ (resp. $T_{c2}$). Then the lowest new MR mode energy is: $\omega_3(\eta_{3}, 5)/2\delta_3= \omega_3(0,5)/2\delta_3+ s_{\eta_{3}} \eta_3$ (resp. $\omega_2 (\eta_2, 4)/2\delta_2= \omega_2(0,4)/2\delta_2 + s_{\eta_2}\eta_2$). The field dependence follows along similar lines. I expect a still richer structure of MR modes within the single particle gap for the sequence $1,2,-2,4,-4,5,6$ \cite{cooper}. If the corresponding virtual transition lines in a given subphase are reasonably close to the true critical line, I find, for example in phase $-4$, {\bf four} modes at $q_{\vert \vert}=2, 7, 8, 9$. Applying the method of this paper, I find that the mode at $q_{\vert \vert}=2$ should have a much weaker $H$ dependence than the modes 7, 8, 9. The occurrence of the MR modes of the UQC could have measurable consequences on various properties, such as transport ($T$ dependence of the longitudinal resistivity $\rho_{xx}$), specific heat, NMR relaxation time, etc. \cite{pl96,pl97}. The success of the QNM in explaining the phase diagram and the Hall quantization gives added confidence that the new type of MR modes described here exists, so that a renewed experimental effort is called for to detect them and check theoretical predictions on this unique electron-hole condensate, the Ultra Quantum Crystal. \acknowledgments I am grateful to C. M. Chaves and the Departamento de Fisica da PUC-Rio, and to Gilson M. Carneiro and the Instituto de Fisica da UFRJ for their hospitality during the completion of this work.
\section{Introduction} Hydrogen fluoride (HF) is one of the simplest molecules capable of forming hydrogen bonds. Despite the simplicity of the compound, the theoretical description of its behavior, especially in the liquid phase, is still far from being fully satisfactory. Much relevant theoretical work has concentrated on the problem of determining a potential model suitable for computer simulations. For this purpose a model is needed that reproduces correctly the main features of the real interaction potential and which is simple enough to be computed efficiently. In the last two decades several empirical potentials have been developed for Molecular Dynamics (MD) or Monte Carlo (MC) simulations of {\it liquid} HF. \cite{Klein1979,Cournoyer1984,Jedlovszky1997a,Jedlovszky1997b,Sun1992,Jorgensen1978,Jorgensen1979,Honda1992} Of these models, only the three-site ones \cite{Klein1979,Cournoyer1984,Jedlovszky1997a,Jedlovszky1997b} are able to reproduce correctly the dipole and quadrupole moments of the monomer. These models represent the charge distribution of each monomer by fractional charges placed at three sites on the molecular axis, two on the F and H nuclei and the third one at an appropriate position X along the F-H bond. A first three-site model (called HF3) was developed by Klein and McDonald \cite{Klein1979} by fitting {\ai} results for the potential energy surface of the (HF)$_2$ dimer. Cournoyer and Jorgensen \cite{Cournoyer1984} proposed a second three-site model (called HFC), with a simplified non-Coulombic part consisting of a single Lennard-Jones interaction between the fluorines. The parameters of the model were fitted directly to experimental thermodynamic data for the liquid, while simultaneously providing a reasonable equilibrium geometry of the dimer. Recently, Jedlovszky and Vallauri \cite{Jedlovszky1997a,Jedlovszky1997b} presented two further models, hereafter referred to as HF-JV1 and HF-JV2, respectively. 
HF-JV1 is a variant of HFC, with charges reproducing both the monomer dipole and quadrupole and with an accurate treatment of the long range part of the Coulombic interactions, neglected in the original work on HFC. \cite{Cournoyer1984} The HF-JV2 model includes molecular polarizability, by adding an induced point dipole moment at each F site, while keeping the charge distribution of HF-JV1 unchanged. The scalar polarizability of the molecules was set equal to its experimental value, while the two parameters of the Lennard-Jones interaction between F atoms were fitted to the experimental values of the liquid density and internal energy. Unfortunately, none of the available models is able to reproduce, in a fully satisfactory way, both the thermodynamics and the structure of liquid and gaseous HF. In the search for a new potential, the essential ingredients can be identified by reviewing the known properties of the gas, liquid and solid phases of HF. The simplest associated form of HF is the gas phase dimer, (HF)$_2$, whose structure and rovibrational spectrum were first characterized by the microwave spectra of Dyke, Howard and Klemperer. \cite{Dyke1972,Howard1984} The equilibrium configuration of the isolated dimer F-H$'\cdots$F-H$''$ is planar but bent, as a consequence of a competition between dipolar and quadrupolar electrostatic interactions \cite{Howard1984,Kolebrander1988,Barton1982} (whenever ambiguity is possible, H$'$ denotes the hydrogen atom involved in the hydrogen bond, while H$''$ is the other one). The atoms F-H$'\cdots $F form a nearly linear arrangement, with H$'$ placed slightly off the FF axis, and with a distance between the hydrogen-bonded fluorines $r_{{\rm FF}}\approx $ 2.7 \AA . The second hydrogen atom forms an angle $\FFH \approx 115^{\circ}$ and the H--F bonds are slightly stretched with respect to the monomer. 
\cite{Howard1984,Huber1979,Kofranek1988} The (HF)$_2$ bent dimer appears to be the basic structural motif of all the best known associated forms of HF, namely the gas phase (HF)$_6$ cyclic hexamer, \cite{Janzen1969} the low temperature crystal \cite{Johnson1975} and even the liquid. \cite{Deraman1985} Pairs of adjacent HF units in the hexamer exhibit a bent arrangement similar to that of the dimer. The same structure is found in crystalline deuterium fluoride (DF) which is made up of infinite zig-zag chains of DF units. \cite{Johnson1975} Interchain F--F distances ($\approx 3.2$ \AA ) are much larger than intrachain F--F distances ($\approx 2.5$ \AA ), an indication that interchain interactions are weaker than the interactions between adjacent HF units within the chains. The very small entropy difference between the solid and the liquid \cite{Vanderzee1970} suggests that the liquid is largely associated. Best fit analysis of dielectric constant data \cite{Cole1973} and of Raman and IR spectra \cite{Desbat1983} support this conclusion and indicate that acyclic zig-zag chains are abundant in the liquid. Finally, the neutron scattering measurements \cite{Deraman1985} of the radial correlation function of liquid DF yield H--F and F--F average neighbor distances consistent with a most probable local structure similar to that of the other forms of HF. Since all associated forms of HF share the (HF)$_2$ dimer as a common structural unit, any satisfactory potential model should reproduce the main features of the (HF)$_2$ potential energy surface. {\Ai} quantum mechanical calculations have contributed significantly to clarify the interactions present in the isolated dimer (HF)$_2,$ by determining accurately its equilibrium properties \cite{Peterson1995,Collins1995} and by mapping out the entire potential energy surface. 
\cite{Kofranek1988,Bunker1988,Bunker1990,Jensen1990} Much of this work has been directed to obtain analytical models for the potential surface, usually by fitting the numerical results to a chosen functional form. \cite{Kolebrander1988,Peterson1995,Bunker1988,Bunker1990,Jensen1990,Quack1990} The analytical expressions of these models, which have been developed mainly to study the rovibrational states of (HF)$_2 $, \cite{Kolebrander1988,Barton1982,Jensen1990,Quack1990,Zhang1995} are usually very complex and therefore not suitable for simple MD or MC simulations. Fortunately, as already mentioned, there is evidence that the main features of the real molecular interactions may be approximately reproduced by much simpler empirical models with effective potentials. In fact, it has been long known \cite{Howard1984,Kolebrander1988,Barton1982} that the bent structure of the (HF)$_2$ dimer is dominated by the classical two-body electrostatic interactions between the permanent multipole moments of the monomer (mainly the dipolar and quadrupolar terms). The incorporation of many-body polarization effects gives rather small refinements on the predicted equilibrium angles, \cite{Kolebrander1988} a finding consistent with the experimental observation that the dipole moment of the dimer is only weakly enhanced relative to that of the isolated monomer. \cite{Dyke1972} The effects of the environment on the HF molecules do not seem to be large. However, a comparison of the structures in the sequence ``monomer $\rightarrow $ dimer $\rightarrow $ hexamer $\rightarrow $ liquid $\rightarrow $ solid'' (as shown by Tables \ref{t:gas} and \ref{t:condensed} below) evidences a tendency in which a larger degree of association implies a small increase of the H--F bond distance, together with a much larger decrease of the F--F distance (from 2.8 {\AA} in the gas phase dimer to 2.49 {\AA} in the solid). 
Unfortunately, as stressed by R\"{o}thlisberger and Parrinello, \cite{Rothlisberger1997} the available three-site models \cite{Klein1979,Cournoyer1984,Jedlovszky1997a,Jedlovszky1997b} refer to {\it rigid} molecules and, consequently, they cannot reproduce the relaxation of the interatomic distances in going from the gas phase dimer to the condensed phases. The previous observations suggest a relatively clear picture of the ingredients required for a satisfactory potential model for HF. First, a charge distribution is required which approximately reproduces the first few multipolar moments of the HF molecule. Second, the H--F bond cannot be considered as rigid and {\it intra}-molecular interactions must be included to allow for the observed variations of the H--F and F--F distances in the various aggregation forms of HF. Finally, further atom--atom interactions must be introduced to model the remaining non-Coulombic intermolecular forces. Having all this in mind, we have tried to construct a new three-site model by simultaneously using data on the gas, liquid and solid phases. For this purpose: (a) the molecules are not rigid, but the H--F bond length can vary, and, (b) the parameters are fitted to theoretical and experimental data, including the {\ai} structure of the HF dimer, \cite{Kofranek1988} the room temperature density of liquid DF \cite{Deraman1985} and the experimental structure of solid DF at 4 K. \cite{Johnson1975} For the structure of the dimer we have used the {\ai} (HF)$_2$ potential energy surface developed by Bunker, Jensen, Karpfen, Kofranek and Lishka (BJKKL). \cite{Kofranek1988,Bunker1988,Bunker1990,Jensen1990} The BJKKL calculations, in which the potential energy of the (HF)$_2$ complex has been determined for over 1000 different configurations, represent the most complete and accurate scan of the energy surface to date. These quantum mechanical results are in excellent agreement with the experiments on gas phase (HF)$_2$. 
The decision to add to the fit some data on solid and liquid HF (more precisely, DF) has been taken because of the partial failure of our preliminary models fitted only to the {\ai} surface. These models reproduced well the zig-zag (HF)$_\infty $ chains characteristic of the crystal, but gave totally wrong inter-chain distances and, furthermore, did not agree with the experimental density of the liquid. \cite{Deraman1985} This behavior was to be expected. In fact, the (HF)$_2$ potential surface \cite{Kofranek1988,Bunker1988,Bunker1990} only accounts for the basic HF--HF interactions {\it within} a chain, and obviously exclude the weak long-range interactions responsible for the distances between different (HF)$_\infty $ chains. Since the density of the liquid is affected by interactions between distant pairs of HF molecules in relative orientations which are not sampled in the solid, it is also understandable that only by fine-tuning the long range potential it has been possible to reproduce the experimental density of the liquid. Finally, it must be mentioned that the addition of data on solid and liquid HF to the fit gave only a small deterioration of the agreement with the {\ai} data on the dimer. This fact confirms that solid and liquid data add information on regions of the potential surface that are not sampled by the dimer. \section{Methods and Calculations} \subsection{Potential model} \label{ss:pot} The potential model is represented by intra- and inter-molecular parts: \begin{equation} V_{\text{HF}}^{\rm intra}(r_{\text{HF}})=D_e\left\{ 1-\exp[-\alpha (r_{\text{HF}}-r_e)]\right\}^2, \end{equation} \begin{equation} V_{AB}^{\rm inter}=V_{AB}^{\rm Coul}+V_{AB}^{\rm non-Coul}, \end{equation} \begin{equation} V_{AB}^{\rm Coul}=\sum_{i\in A} \sum_{j\in B} \frac{q_iq_j}{r_{ij}}, \end{equation} \begin{equation} V_{AB}^{\rm non-Coul}=A_{{\rm FF}}\exp(-B_{{\rm FF}}~r_{{\rm FF}})-C_{{\rm FF}}~r_{{\rm FF}}^{-6}. 
\end{equation} BJKKL \cite{Bunker1990} fitted the intra-molecular part of their own {\ai} surface with a Morse potential, eq. (1). Here $r_e$ represents the equilibrium H--F distance, $D_e$ the dissociation energy and $\alpha $ is an effective range parameter. Because of their simplicity and accuracy, the BJKKL functional form and parameter values are adopted in this paper. The Coulombic interactions between molecules $A$ and $B$ are modeled through three point charges for each HF monomer, two at the nuclear positions $\R_{{\rm H}}$ and $\R_{{\rm F}}$, and the third at a position $\R_{{\rm X}}$ along the H--F bond. In eq. (3) $q_i$ and $q_j$ are the fractional charges on the $i$th site of molecule $A$ and $j$th site of molecule $B$, respectively; $r_{ij}$ is the distance between these sites. The motion of the site X is constrained so that it remains at the same relative position along the bond, \begin{equation} \R_{{\rm X}}=\beta \R_{{\rm F}}+(1-\beta )\R_{{\rm H}}, \end{equation} \noindent where $\beta $ is an adjustable parameter between 0 and 1. The charges are $+q$ at both the H and F nuclei, and $-2q$ at the third site to preserve neutrality. This three-site charge model is related to that of Refs. \onlinecite{Klein1979,Cournoyer1984,Jedlovszky1997a,Jedlovszky1997b,Sun1992}. By allowing for changes in the H--F bond length, the present model effectively accounts for a part of the polarization effects. For solid and liquid DF Ewald's method \cite{Born1954,Signorini1991,Allen1987} has been used to ensure complete convergence of the Coulombic interactions (which have an infinite range). The remaining non-Coulombic part of the inter-molecular potential is represented in a simplified way, using only a Buckingham ``$\exp\!-6$'' atom-atom interaction between the fluorines, eq. (4). This term is meant to represent the interactions between the electronic clouds around two far away atoms.
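As an illustration of how eqs. (1)-(5) combine, the sketch below evaluates the separate pieces of the model energy for a single configuration. Every numerical value in it (the Morse parameters other than $r_e$, the charge $q$, the site parameter $\beta$, the $\exp\!-6$ constants and the dimer geometry) is a placeholder chosen only to make the sketch runnable; the fitted values are not reproduced here.

```python
import math

# Sketch of the three-site HF model, eqs. (1)-(5). Every numerical value
# below (D_e, alpha, q, beta, A_FF, B_FF, C_FF, geometry) is a placeholder
# chosen for illustration only, NOT a fitted parameter of the paper.
# Units: kcal/mol, angstrom; Coulomb constant in kcal*mol^-1*A*e^-2.
COUL = 332.06

D_e, alpha, r_e = 135.0, 2.2, 0.917     # Morse intramolecular part, eq. (1)

def v_intra(r_hf):
    return D_e * (1.0 - math.exp(-alpha * (r_hf - r_e))) ** 2

A_FF, B_FF, C_FF = 50000.0, 4.0, 100.0  # exp-6 between fluorines, eq. (4)

def v_exp6(r_ff):
    return A_FF * math.exp(-B_FF * r_ff) - C_FF / r_ff ** 6

q, beta = 0.59, 0.4                     # charges +q, +q, -2q; X site, eq. (5)

def sites(r_f, r_h):
    # site X stays at a fixed relative position along the H-F bond
    x = tuple(beta * f + (1.0 - beta) * h for f, h in zip(r_f, r_h))
    return [(r_f, q), (r_h, q), (x, -2.0 * q)]

def v_coul(sites_a, sites_b):           # intermolecular Coulomb sum, eq. (3)
    return sum(COUL * qa * qb / math.dist(ra, rb)
               for ra, qa in sites_a for rb, qb in sites_b)

# a roughly bent dimer arrangement (F-H'...F nearly linear, theta ~ 115 deg)
mono_a = sites((0.0, 0.0, 0.0), (0.917, 0.0, 0.0))
mono_b = sites((2.75, 0.0, 0.0), (2.3625, 0.8311, 0.0))
print("monomer Morse energy at r_e:", v_intra(r_e))
print("dimer Coulomb + exp-6:", round(v_coul(mono_a, mono_b) + v_exp6(2.75), 3))
```

By construction the intramolecular term vanishes at $r_e$ and each monomer carries zero net charge, which is the hierarchy of interactions exploited in the fit.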
Since the hydrogens in HF are essentially bare nuclei, it makes good physical sense to avoid atom-atom interactions involving them. As a matter of fact, no improvement in the quality of the fit is found by adding similar H--H and H--F non-Coulombic interactions. A rather well-defined hierarchy of interactions may be identified in the chosen potential model. The length of the HF monomer is solely determined by the intra-molecular potential. The structure of the dimer is also influenced by the position of the charge site X and by the F--F equilibrium distance, i.e. by the position of the minimum of the $\exp\!-6$ interaction between the fluorines. Finally, the charge and the remaining properties of the $\exp\!-6$ model, mainly the strength of the long-range attractive term $C_{{\rm FF}}~r_{{\rm FF}}^{-6}$, affect the structure and density of solid and liquid HF. The presence of this hierarchy implies that small changes in the charge and in the long-range attraction can be compensated by the remaining parameters to maintain the correct monomer and dimer structures. \subsection{Potential optimization} The potential model contains five adjustable parameters, $\beta $, $q$, $A_{{\rm FF}}$, $B_{{\rm FF}}$ and $C_{{\rm FF}}$, and three parameters fixed at the {\ai} values, \cite{Bunker1990} $r_e$, $\alpha $ and $D_e$. In a first series of attempts, we fitted the present model (as well as other preliminary ones) only to the {\ai} potential energy of the dimer. The $\chi ^2$ deviation between the model and the {\ai} surface was computed, for each given combination of parameter values, with the same relative weights used by BJKKL. \cite{Bunker1990} The $\chi^2$ was minimized by searching the parameter space with the Nelder-Mead simplex method. \cite{Press1986} The resulting potential was then tested by computing some properties of the other associated forms of HF.
In particular, the liquid phase was studied by isothermal-isobaric MD simulations, as described in the next Subsection. As discussed in the introduction, these preliminary models fitted only to the {\ai} surface gave unsatisfactory results, so that it was decided to add more data to the fit. For this purpose, the equilibrium geometries at $T=0$ K of the HF dimer, hexamer and crystal were determined as a function of the potential parameters by minimizing, with the WMIN program, \cite{Busing1984} the total potential energy with respect to the structural parameters. The deviations of the calculated geometries from the {\ai} dimer structure \cite{Kofranek1988} and from the experimental DF crystal structure \cite{Johnson1975} at 4 K were then added to $\chi ^2$, with weights subjectively chosen to make the contributions from the surface, the dimer and the crystal roughly equal. Since the new set of parameters, although more satisfactory, still did not give the correct density of the liquid, it became necessary to tune the long-range interactions. In a further set of minimization runs, a range of $q$ and $C_{{\rm FF}}$ values was searched, again with the Nelder-Mead method. The three remaining parameters $\beta $, $A_{{\rm FF}}$ and $B_{{\rm FF}}$ were determined as a function of $q$ and $C_{{\rm FF}}$ by fitting the {\ai} and crystal data. Each complete set of five parameters was then used in a short MD simulation to determine the equilibrium density of the liquid at 293 K and to add to the $\chi^2$ the deviation from the relevant experimental value. \cite{Deraman1985} This method, which involves two nested fit procedures, was found to be reasonably efficient and converged to a minimum in about fifty cycles. The main problem encountered in the fit was the noise in the computed liquid density due to insufficient MD equilibration within our computer time constraints.
To reduce this noise, the MD equilibration was done in parallel with the potential optimization, by accepting after successful $\chi^2$ minimization cycles the final configuration of the MD run as the initial configuration for the next run. With this strategy the current MD configuration was always the one with the best potential parameters so far. Since the parameters change rather slowly, the simulated system tended to remain close to equilibrium. No structural data on the liquid, besides the density, have been included in the fit. This rather drastic choice avoids the repeated calculation of equilibrated radial correlation functions, which would have required even longer MD runs than those needed for equilibrated densities. As a technical detail, it should be noted that no attempt was made to embed potential surface calculation, MD simulation, dimer and crystal energy minimizations and the two nested Nelder-Mead procedures into a single monolithic program, which would have been unmanageably complex. Separate programs, calling each other as distinct processes at the operating system level, \cite{Bourne1982} were used instead. The two Nelder-Mead procedures, in particular, were actually a single program invoking a second copy of itself. No special changes were required for the surface, MD and energy minimization programs. The optimal set of parameter values is shown in Tab. \ref{t:pot} and represents a compromise among the best results that can be obtained separately for the dimer, the crystal and the liquid. As usual, no special physical significance should be attributed to the potential parameters. In fact, because of the possible compensation among different terms in the potential model, slightly different alternative sets of parameters might have been used. The model may be simply regarded as a tool to reproduce the observed data and predict new results.
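The structure of the two nested Nelder-Mead procedures can be outlined schematically as follows. This is only a sketch: \texttt{chi2\_inner} and \texttt{liquid\_density\_md} are toy quadratic surrogates of our own invention, standing in for the actual surface/WMIN computations and the short NPT MD runs, which cannot be reproduced here; only the nesting logic corresponds to the procedure described above.

```python
import numpy as np
from scipy.optimize import minimize

RHO_EXP = 1.0106  # experimental liquid DF density at 293 K (g/cm^3)

def chi2_inner(x, q, c_ff):
    """Placeholder for the chi^2 over surface, dimer and crystal data
    at fixed (q, C_FF): a toy quadratic centered on Tab. t:pot values."""
    ref = np.array([0.16245, 167017.0, 4.148])     # beta, A_FF, B_FF
    scale = np.array([0.01, 1e4, 0.1])
    return (np.sum(((x - ref) / scale)**2)
            + 0.1 * (q - 0.59456)**2 + 1e-4 * (c_ff - 547.9)**2)

def liquid_density_md(params):
    """Placeholder for a short NPT MD run returning the liquid density."""
    beta, a_ff, b_ff, q, c_ff = params
    return RHO_EXP + 0.5 * (q - 0.59456) - 1e-4 * (c_ff - 547.9)

def chi2_outer(y):
    """Outer search over (q, C_FF): run the inner Nelder-Mead over
    (beta, A_FF, B_FF), then add the MD density deviation to chi^2."""
    q, c_ff = y
    inner = minimize(chi2_inner, x0=[0.16, 1.6e5, 4.1],
                     args=(q, c_ff), method='Nelder-Mead')
    rho = liquid_density_md([*inner.x, q, c_ff])
    return inner.fun + ((rho - RHO_EXP) / 0.01)**2

result = minimize(chi2_outer, x0=[0.6, 550.0], method='Nelder-Mead')
```

With smooth surrogates the outer search drives $q$ and $C_{\rm FF}$ back toward the reference values; in the real fit each outer evaluation was far noisier, which is why the MD equilibration strategy described above was needed.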
\subsection{Molecular Dynamics} \label{ss:md} The MD calculations employed 500 {\it deuterium} fluoride (DF) molecules, in a cube with periodic boundary conditions, and, using Andersen's isothermal-isobaric (NPT) method, \cite{Andersen1980,Brown1984,Fox1984} simulated a liquid sample in contact with a heat bath and subject to a hydrostatic pressure of 1 atm ($\approx 10^{-4}$ GPa). The simulated liquid was obtained by melting and equilibrating at 293 K an initial crystalline configuration. The behavior of the system as a function of temperature was then determined by raising or lowering the bath temperature in steps of $10$ K. Each temperature was maintained for at least 5 ps for equilibration and a further 5 ps for analysis. The equations of motion were integrated using the velocity Verlet algorithm, \cite{Allen1987,Brown1984,Fox1984,Verlet1967} with a time step of 0.25 fs. As previously described, each DF molecule carried three point charges, at the D, F and X sites. The method of Ciccotti, Ferrario and Ryckaert for linear constraints \cite{Ciccotti1982} has been used to maintain each massless X charge at a fixed fraction $\beta $ of the DF bond. \section{Results} The most important properties calculated with the present potential model for the monomer, dimer, planar (HF)$_n$ rings, hexamer, crystal and liquid forms of HF (or DF) are compared in Tables \ref{t:gas}, \ref{t:rings} and \ref{t:condensed} with the available experimental and {\ai} data. As described in section \ref{ss:md}, the properties of the liquid have been determined through MD calculations, while the equilibrium geometries at 0 K of the other forms of HF are found by minimizing the potential energy. \subsection{Monomer, dimer and cyclic polymers of HF} The excellent results (Tab.
\ref{t:gas}) for the equilibrium bond length, dissociation energy and spectroscopic parameters of the monomer, which all depend only on the parameters fixed to the BJKKL values, \cite{Bunker1990} indicate that the Morse model accurately reproduces the main features of the true intra-molecular potential. The spectroscopic parameters $\nu_e$ (harmonic frequency) and $x_e$ (anharmonicity constant) for the energy levels of a Morse oscillator, $E_n=h\nu_e[(n+\frac{1}{2})-x_e(n+\frac{1}{2})^2]$, are directly obtained from $D_e$ and $\alpha $ through $\nu_e=(\alpha/2\pi) \sqrt{2D_e/\mu}$ and $x_e=h\nu_e/4D_e$, where $\mu $ is the reduced HF mass. \cite{Child1984} The multipole moments computed from our three-charge model (taking the molecular center of mass as origin) are very close to the experimental and {\ai} moments of the monomer. \cite{Bunker1988,Gray1984} The dipole and quadrupole values follow the trend of the {\ai} results and therefore are slightly underestimated with respect to the experimental data. The octupole and hexadecupole moments are also in reasonable agreement with the {\ai} calculations. This overall agreement is an indication of a good match between the electrostatic interactions in real HF and in the model. The computed minimum energy structure of the dimer (Tab. \ref{t:gas}) compares well with the experimental and {\ai} data. \cite{Howard1984,Kofranek1988,Bunker1988,Pine1986} The experimental F-F distance is excellently reproduced, as well as both the angles ${\HFF}$ and ${\FFH}$ of the bent equilibrium configuration. Moreover, the increased length of the HF molecule in going from the monomer to the dimer, and the slight length difference between the two HF intramolecular bonds, \cite{Kofranek1988} are well predicted. Since these length changes were the primary reason for allowing non-rigid molecules, such a behavior must be considered very satisfactory.
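These Morse relations are easily verified numerically against the model entries of Tab. \ref{t:gas}. In the sketch below, the atomic masses and unit conversion factors are standard values supplied by us; the Morse parameters are those of Tab. \ref{t:pot}:

```python
import numpy as np

# Morse parameters in atomic units (Tab. t:pot)
DE, ALPHA = 0.22306, 1.174        # Hartree, 1/a0
HARTREE_CM = 219474.631           # Hartree -> cm^-1
AMU = 1822.888486                 # atomic mass unit in electron masses

# reduced mass of H-19F in atomic units (electron masses)
mu = (1.007825 * 18.998403) / (1.007825 + 18.998403) * AMU

# harmonic wavenumber: omega = alpha * sqrt(2 D_e / mu) in a.u.,
# then converted from Hartree to cm^-1
nu_e = ALPHA * np.sqrt(2.0 * DE / mu) * HARTREE_CM

# anharmonicity: x_e * nu_e = (h nu_e)^2 / (4 D_e), expressed in cm^-1
xe_nu_e = nu_e**2 / (4.0 * DE * HARTREE_CM)

# fundamental 0 -> 1 transition of the Morse oscillator
nu_01 = nu_e - 2.0 * xe_nu_e
```

The computed values reproduce the model entries of Tab. \ref{t:gas} ($\nu_e\approx 4120$ cm$^{-1}$, $x_e\nu_e\approx 86.7$ cm$^{-1}$, $\Delta\nu_{0\rightarrow 1}\approx 3947$ cm$^{-1}$) to within rounding of the mass and conversion constants.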
Unfortunately, in spite of the excellent dimer geometry, the dimerization energy is clearly underestimated. This drawback was not completely unexpected. In fact, in real HF and in quantum mechanical models dimerization is accompanied by hydrogen bonding, which is not explicitly incorporated in the present classical treatment. As a further test of the potential, we have computed the equilibrium geometry of the planar (HF)$_n$ rings, with $n=2,3\cdots8$. As shown in Tab. \ref{t:rings}, the structural parameters computed for planar rings, with $C_{nh}$ symmetry, are in good agreement with the available {\ai} results. \cite{Kofranek1988,Karpfen1990} Though the model systematically underestimates the {\ai} binding energies, it nevertheless reproduces correctly the relative stability of the different structures. The smallest ring, which is the cyclic dimer, has a binding energy of $\Delta E=-3.18$ kcal/mole, and is thus substantially less stable than the bent dimer (Tab. \ref{t:gas}). For cyclic (HF)$_3$ the binding energy for each hydrogen bond, $-\Delta E/3$, is slightly less than the binding energy of the bent dimer. The bond stabilization energy, $-\Delta E/n$, increases for larger rings, up to the hexamer, and then decreases again. The particularly favorable stability of the (HF)$_n$ rings with $n\approx6$ is readily understood by noticing that for these rings the ${\HFF}$ and ${\FFH}$ angles (Tab. \ref{t:rings}) are close to those of the bent dimer (Tab. \ref{t:gas}). Planar (HF)$_n$ rings must satisfy the geometric constraint ${\FFH}-{\HFF}=\alpha_n$, where $\alpha_n=180^\circ-360^\circ/n$ is the inner angle of the regular $n$-sided polygon. The hexamer, for which $\alpha_n=120^\circ$, can be obtained by joining essentially undeformed bent dimers, and is the most stable structure. The hexamer can be stabilized even further by allowing for non-planar structures.
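The geometric constraint and the stability trend just discussed can be checked directly against the model values of Tab. \ref{t:rings}:

```python
# Inner angle of the regular n-sided polygon entering the planar-ring
# constraint FFH - HFF = alpha_n
def inner_angle(n):
    return 180.0 - 360.0 / n

# Model bond stabilization energies Delta E / n (kcal/mol, Tab. t:rings)
de_over_n = {2: -1.59, 3: -3.79, 4: -4.71, 5: -4.90,
             6: -4.92, 7: -4.89, 8: -4.86}

# The hexamer (alpha_6 = 120 deg) gives the largest stabilization per bond
most_stable = min(de_over_n, key=de_over_n.get)
```

As expected, the per-bond stabilization grows up to $n=6$ and decreases beyond it, singling out the hexamer.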
We find that the most stable structure has a non-symmetric ``chair'' shape, with average bond lengths and angles (Tab. \ref{t:gas}) which compare well with the available experiments \cite{Janzen1969} and which are almost identical to those found in the dimer. A ``boat'' structure slightly above in energy, at $-4.96$ kcal/mole, is also found, with lengths and angles close to those of the ``chair'' structure. \subsection{Crystal and liquid DF} Low temperature crystals of DF are orthorhombic, space group $Cmc2_1$ ($C_{2v}^{12}$), with four molecules per unit cell on the $\sigma_v$ plane sites. \cite{Johnson1975} The minimum energy structure computed for the DF crystal (see Tab. \ref{t:condensed}) is close to the experimental structure at low temperature. \cite{Johnson1975} The discrepancies in the lengths of the cell axes partially compensate each other, yielding a density only slightly smaller than the experimental one. The increased H--F intra-molecular lengths with respect to both monomer and dimer, and the F--F distance smaller than in the dimer, are well reproduced. In Fig. \ref{f:density} the densities predicted by the present model over the whole temperature range of liquid HF (which, at atmospheric pressure, goes from the freezing point at $-83$ $^{\circ}$C up to the boiling point at $19.75$ $^{\circ}$C) are compared with the experimental data. The experimental densities show a nearly linear dependence on $T.$ The data are from two sources, covering different temperature ranges. \cite{Simons1932,Sheft1973} The slight vertical shift between the two data sets is due to the experimental uncertainties. It is very satisfactory to note that, although our potential model has been fitted only to a single density at $293$ K, the straight line corresponding to its predictions is somewhat vertically shifted, but has essentially the same slope as the experimental density.
This slope is not reproduced by the HFC, HF-JV1 and HF-JV2 models, \cite{Cournoyer1984,Jedlovszky1997a,Jedlovszky1997b} since their density straight lines (obtained by joining the corresponding results, which, unfortunately, are available only at $203$ and $273$ K) intersect the experimental curve. Another pleasant feature of the present model, also shown in Fig. \ref{f:density}, is that at $303$ K and above the average density continued to decrease over the whole MD analysis period. At each $T<303$ K, the density oscillated around an equilibrium value. In our opinion, this behavior indicates that the simulated liquid boils at some point in the range $293$--$303$ K, in good agreement with the experimental normal boiling point of HF ($T_b=292.9$ K). With regard to the internal energy $U,$ our MD results exhibit a nearly linear dependence on $T.$ However, their absolute values (Tab. \ref{t:condensed}) are underestimated with respect to the available experimental data \cite{Jedlovszky1997a} (a similar drawback is also present in the HF3 model \cite{Klein1979}). Fig. \ref{f:radial} reports the partial pair radial correlation functions $g_{ij}$(r) computed for the liquid at 293 K, together with the real space function $d(r)\equiv 4\pi \rho_m r\left[ G(r)-1\right] $. Here $\rho_m$ is the molecular number density, and $G(r)$ is a composite (or total) pair correlation function, obtained by adding the three partial pair correlation functions with the appropriate nuclear weights \cite{Deraman1985} and then convolving the sum with the same experimental resolution function used in the neutron scattering experiments. \cite{Deraman1985} The resulting theoretical prediction for $d(r)$ is compared in Fig. \ref{f:radial} with the corresponding neutron diffraction data for liquid DF at 293 K, which are the only available real space data.
\cite{Deraman1985} The first peak at $r\approx 0.95$ {\AA} is due to the intra-molecular H--F bond distance and here the agreement between simulation and experiment is excellent. The second and third peaks of the experimental $d(r)$ occur at $r\approx 1.6$ {\AA} and $r\approx 2.55$ {\AA}, and correspond to the hydrogen-bond $r_{{\rm HF}}$ inter-molecular distance and to the $r_{{\rm FF}}$ separation, respectively. Unfortunately, the model (with the present choice of parameters) fails to reproduce these peaks and the complex liquid structure at longer distances. The reason for such a shortcoming may be found by comparing the partial pair correlation functions $g_{ij}(r)$ of the present model (Fig. \ref{f:radial}) with the analogous {\ai} MD results of R\"{o}thlisberger and Parrinello. \cite{Rothlisberger1997} The position of the first inter-molecular peak of all our $g_{ij}(r)$ is shifted toward larger $r$ values (a similar trend occurs in the polarizable HF-JV2 model \cite{Jedlovszky1997b}). In addition, the height of the hydrogen-bond peak of $g_{{\rm HF}}(r)$ is nearly half the correct value, and the height of the first peak of $g_{{\rm HH}}(r)$ is also underestimated. It should be recalled that the HF3 potential \cite{Klein1979} reproduces the three principal peaks of the experimental data \cite{Deraman1985} and gives the best performance for $d(r)$ among the available models. This quite good agreement was obtained by modeling the hydrogen-bond interaction with a Morse term. \cite{Klein1979} To our knowledge, no other empirical model takes the hydrogen bond explicitly into account. Before concluding this Section, it may be useful to summarize the performances of the best available models for liquid HF.
The HF3 potential gives, as already mentioned, quite good agreement with neutron diffraction structural data at the normal boiling point (293 K), \cite{Deraman1985} but systematically fails to reproduce thermodynamics: \cite{Klein1979} the predicted internal energies are largely underestimated (i.e., their absolute values are too small) and the pressures in MD calculations at constant volume are several kilobars too high, indicating that the model system is less strongly bound than real HF. In comparison with HF3, the HFC model \cite{Cournoyer1984} yields a slightly better thermodynamics but a slightly worse structure for the liquid, with charges corresponding to a dipole moment enhanced with respect to the monomer. The predictions of HF-JV1 for the liquid phase \cite{Jedlovszky1997a} are not very different from the HFC ones, but HF-JV1 fails completely to reproduce the properties of the isolated dimer. Finally, whereas HF-JV1 works reasonably well only at room temperature, the polarizable model potential HF-JV2 represents a true improvement \cite{Jedlovszky1997b} as regards the predicted density at low temperature, i.e. at $203$ K. However, problems with the pair correlation functions are encountered also for HF-JV2. \section{Conclusions} We have shown that a simple potential model suitable for MD or MC simulations can reproduce, quantitatively or semiquantitatively, many physical properties of hydrogen fluoride over the whole set of its solid, liquid and gaseous associated forms. To cover such a wide range of environmental conditions, it has been necessary to consider a molecular model with variable bond length. The present investigation confirms the plausibility of assuming the dimer as the basic structural unit, but also stresses the need to fit the potential to a set of experimental and theoretical data which includes information on the condensed phases.
In the liquid, the correct trend of the density is very encouraging, but the underestimated energies and the problems with the radial distribution functions indicate that some physical effect is still misrepresented by the model. Unfortunately, it is not likely that adding data on the liquid structure could improve the results with the current type of model (without a hydrogen-bond term). In fact, our experience with the fit shows that the few parameter sets which gave better liquid structures were incompatible with the gas and crystal data. We believe that the essential shortcoming of the model is the neglect of any {\it explicit} representation of the hydrogen bond, which cannot be reduced to purely electrostatic multipole interactions. Such an explicit modeling of the hydrogen bond is also lacking in most other three-site potentials for HF, with the exception of the HF3 model. \cite{Klein1979} Although hydrogen bonding has a quantum mechanical origin, an approximate classical treatment is nevertheless possible and may have significant consequences, as seen from the rather good structural results for the HF3 potential. \cite{Klein1979} The inclusion of potential terms representing the hydrogen bond interactions therefore appears to be the next necessary step toward more accurate HF models. More data on the liquid, including at least the radial correlation function, need to be incorporated in the fit. In conclusion, our model cannot be considered a definitive solution of the problem, but it can be seen as a significant step towards a really satisfactory potential for MD or MC simulations. It has the merit of pointing out the importance of a variable molecular length and of the hydrogen bond. Moreover, it shows that the strategy of a simultaneous fit to data covering all the associated forms of HF can be successful and should be considered the appropriate way to fully accomplish the difficult task of finding a potential model for such a strongly associating system.
\acknowledgments Work done with funds from MURST (Ministero dell'Universit\`a e della Ricerca Scientifica e Tecnologica) through the INFM (Istituto Nazionale di Fisica della Materia), from CNR and from the University of Bologna (``Finanziamento speciale alle strutture''). We thank Bunker, Jensen, Karpfen, Kofranek, and Lishka for providing their {\ai} data. \bigskip \begin{figure} \caption{Density of liquid DF as a function of temperature $T$. The center and half width of the error bars represent the average and standard deviation of the MD results, respectively, sampled over the last 5 ps at each bath temperature. The arrow at 303 K spans the range of densities sampled at this $T$, and indicates that the density was decreasing through the whole analysis period. The filled circle represents the experimental DF density, \protect\cite{Deraman1985} whereas the empty circles represent HF measurements, \protect\cite{Simons1932,Sheft1973} multiplied by the DF/HF mass ratio to obtain consistent units.} \label{f:density} \end{figure} \begin{figure} \caption{Upper panel: total neutron pair radial correlation function $d(r)$ for liquid DF (continuous line, simulation; dots, experiment \protect\cite{Deraman1985}). Lower panel: $g_{ij}(r)$ partial pair correlation functions from the simulation. The curves for $g_{\rm HH}(r)$ and $g_{\rm FF}(r)$ have been vertically displaced.} \label{f:radial} \end{figure} \bigskip \begin{table}[ht] \caption{Potential parameters. The meaning of the parameters is described in section \protect\ref{ss:pot}. The Morse parameters $r_e$, $\alpha$ and $D_e$ have been fixed to the values of the corresponding parameters ($c_1$, $c_3$ and $k_8^{000}/8\pi^{3/2}$, respectively) of the BJKKL fit to the {\ai} surface. 
\protect\cite{Bunker1990}} \begin{tabular}{ll} Monomer equilibrium length, $r_e$ ($a_0$) & 1.73727 \\ Monomer Morse parameter, $\alpha$ ($a_0^{-1}$) & 1.174 \\ Monomer dissociation energy, $D_e$ (Hartree) & 0.22306 \\ Position of the X site, $\beta = {\rm XF/HF}$ & 0.16245 \\ Charges $q$, $-2q$, $q$ on F, X, H; $q$ ($e$ units) & 0.59456 \\ Buckingham parameter, $A_{\rm FF}$ (kcal/mol) & 167017 \\ Buckingham parameter, $B_{\rm FF}$ (\AA$^{-1}$) & 4.148 \\ Buckingham parameter, $C_{\rm FF}$ (\AA$^6$kcal/mol) & 547.9 \\ \end{tabular} \label{t:pot} \end{table} \bigskip \begin{table}[ht] \caption{Properties of gas-phase HF monomer and polymers. The {\ai} hexamer data \protect\cite{Karpfen1990} are for a planar ring, whereas the model results are for the minimum energy ``chair'' structure.} \begin{tabular}{llllll} Monomer & Model & Experimental & Ref. & {\Ai} & Ref. \\ \hline HF equilibrium distance, $r_e$ (\AA) & 0.9193 & 0.91680 & [\onlinecite{Huber1979}] & 0.9194 & [\onlinecite{Kofranek1988}] \\ dissociation energy, $D_e$ (kcal/mole) & 140.0 & 141.6 & [\onlinecite{Huber1979}] & 141.2 & [\onlinecite{Feller1997}] \\ harmonic frequency, $\nu_e$ (cm$^{-1}$) & 4120.16 & 4138.32 & [\onlinecite{Huber1979}] & 4135~~ & [\onlinecite{Kofranek1988}] \\ anharmonicity constant, $x_e\nu_e$ (cm$^{-1}$) &~~~86.69 &~~~89.88 & [\onlinecite{Huber1979}] &~~~90.1 & [\onlinecite{Feller1997}] \\ fundamental frequency, $\Delta\nu_{0\rightarrow 1}$ (cm$^{-1}$) & 3946.78 & 3961.42 & [\onlinecite{Blanc1994}] & & \\ dipole moment, $\mu_{z}$ (D) & 1.772 & 1.826 & [\onlinecite{Gray1984}] & 1.7728 & [\onlinecite{Bunker1988}] \\ quadrupole moment, $Q_{zz}$ (D \AA) & 2.122 & 2.36~ & [\onlinecite{Gray1984}] & 2.3048 & [\onlinecite{Bunker1988}] \\ octupole moment, $\Omega_{zzz}$ (D \AA$^2$) & 1.894 & & & 1.7327 & [\onlinecite{Bunker1988}] \\ hexadecupole moment, $\Phi_{zzzz}$ (D \AA$^3$) & 1.658 & & & 1.87~~ & [\onlinecite{Gray1984}] \\ \\ Dimer & Model & Experimental & Ref. & {\Ai} & Ref.
\\ \hline HF$'$ distance (\AA) & 0.9270 & & & 0.9236 & [\onlinecite{Kofranek1988}] \\ HF$''$ distance (\AA) & 0.9222 & & & 0.9220 & [\onlinecite{Kofranek1988}] \\ FF distance (\AA) & 2.6850 & 2.72$\pm$0.03 & [\onlinecite{Howard1984}] & 2.7919 & [\onlinecite{Kofranek1988}] \\ ${\HFF}$ angle (degrees) &~~~7.52 & ~~10$\pm$6 & [\onlinecite{Howard1984}] &~~~6.81 & [\onlinecite{Kofranek1988}] \\ ${\FFH}$ angle (degrees) & 110.86 & ~117$\pm$6 & [\onlinecite{Howard1984}] & 114.45 & [\onlinecite{Kofranek1988}] \\ dimerization energy $\Delta E$ (kcal/mole) & $-$4.03 & $-$4.56 & [\onlinecite{Pine1986}] & $-4.32$ & [\onlinecite{Bunker1988}] \\ \\ Hexamer & Model & Experimental & Ref. & {\Ai} & Ref. \\ \hline HF distance (\AA) & 0.9333 & 0.973$\pm$0.009 & [\onlinecite{Janzen1969}] & 0.948 & [\onlinecite{Karpfen1990}] \\ FF distance (\AA) & 2.6217 & 2.535$\pm$0.003 & [\onlinecite{Janzen1969}] & 2.475 & [\onlinecite{Karpfen1990}] \\ ${\HFF}$ angle (degrees) &~~~6.15 & & & ~~~2.4 & [\onlinecite{Karpfen1990}] \\ ${\FFH}$ angle (degrees) & 111.24 & 104 & [\onlinecite{Janzen1969}] & 117.6 & [\onlinecite{Karpfen1990}] \\ binding energy $\Delta E$ (kcal/mole) & $-4.98$ & $-7.20$ & [\onlinecite{Redington1981}] & $-8.3$ & [\onlinecite{Karpfen1990}] \\ \end{tabular} \label{t:gas} \end{table} \noindent \bigskip \begin{table}[ht] \caption{Interatomic distances, angles, and bond stabilization energies of (HF)$_n$ planar rings with $C_{nh}$ symmetry.} \begin{tabular}{ccccccccccl} & \multicolumn{4}{c}{Model} &~~~& \multicolumn{4}{c}{{\Ai}} \\ $n$ & HF & FF & $\FFH$ & $\Delta E/n$ && HF & FF & $\FFH$ & $\Delta E/n$ & Ref.\\ & (\AA) & (\AA) & (degrees) & (kcal/mol) && (\AA) & (\AA) & (degrees) & (kcal/mol) & \\ \hline 2 & 0.9247 & 2.6950 & ~49.34 & $-1.59$ && 0.9223 & 2.796 & ~54.23 & $-3.30$ & [\onlinecite{Kofranek1988}] \\ 3 & 0.9321 & 2.6568 & ~81.37 & $-3.79$ && 0.932~ & 2.616 & ~83.6 & $-5.1$ & [\onlinecite{Karpfen1990}] \\ 4 & 0.9335 & 2.6246 & 101.04 & $-4.71$ && 0.943~ & 2.522 & 101.6 & 
$-7.2$ & [\onlinecite{Karpfen1990}] \\ 5 & 0.9326 & 2.6338 & 114.41 & $-4.90$ && \\ 6 & 0.9315 & 2.6474 & 123.90 & $-4.92$ && 0.948~ & 2.475 & 122.4 & $-8.3$ & [\onlinecite{Karpfen1990}] \\ 7 & 0.9308 & 2.6599 & 131.02 & $-4.89$ && \\ 8 & 0.9302 & 2.6688 & 136.50 & $-4.86$ && \\ \end{tabular} \label{t:rings} \end{table} \bigskip \begin{table}[ht] \caption{Properties of solid and liquid DF} \begin{tabular}{llll} Solid DF at 4.2K & Model & Experimental & Ref. \\ \hline HF distance (\AA) & 0.933 & 0.95$\pm$0.02 & [\onlinecite{Johnson1975}] \\ FF distance (\AA) & 2.662 & 2.50$\pm$0.01 & [\onlinecite{Johnson1975}] \\ ${\FFH}$ angle (degrees) & 113.1 & 116.6$\pm$1.0 & [\onlinecite{Johnson1975}] \\ unit cell axis $a$ (\AA) & 3.29 & 3.31 & [\onlinecite{Johnson1975}] \\ unit cell axis $b$ (\AA) & 4.50 & 4.26 & [\onlinecite{Johnson1975}] \\ unit cell axis $c$ (\AA) & 5.31 & 5.22 & [\onlinecite{Johnson1975}] \\ density (g/cm$^3$) & 1.77 & 1.89 & [\onlinecite{Johnson1975}] \\ binding energy $\Delta E$ (kcal/mole) & $-7.41$ & $-10.7$ & [\onlinecite{Zunger1975}] \\ \\ Liquid DF at 293K & Model & Experimental & Ref. \\ \hline HF distance (\AA) & 0.930$\pm$0.022 & 0.958$\pm$0.002 & [\onlinecite{Deraman1985}] \\ FF distance (\AA) & 2.777$\pm$0.169 & 2.56 & [\onlinecite{Deraman1985}] \\ binding energy $\Delta E$ (kcal/mole) & $-3.70$ & $-6.93$ & [\onlinecite{Cournoyer1984}] \\ density (g/cm$^3$) & $1.02\pm0.03$ & 1.0106 & [\onlinecite{Deraman1985}] \\ boiling temperature $T_b$ (K) & $\le303$ & 292.90 & [\onlinecite{Sheft1973}] \\ \end{tabular} \label{t:condensed} \end{table}
\section{Introduction}\label{intro} \IEEEPARstart{G}{eometric} signal processing tools have found broad applications in data analysis to uncover obscure or hidden structures from complex datasets \cite{c1}. Various data sources, such as social networks, traffic flows, and biological images, often feature complex structures that pose challenges to traditional signal processing tools. Recently, graph signal processing (GSP) has emerged as an effective tool built on graph signal representations \cite{c3}. For a signal with $N$ samples, a graph of $N$ nodes can be formed to model their underlying interactions \cite{c2}. In GSP, a graph Fourier space is also defined from the spectral space of the representing matrix (adjacency/Laplacian) for signal processing tasks \cite{c4}, such as denoising \cite{c5}, resampling \cite{c6}, and classification \cite{c7}. Generalizations of the more traditional GSP include signal processing over hypergraphs \cite{c8} and simplicial complexes \cite{c9}, which are suitable to model high-degree multi-lateral node relationships. Traditional graph signal processing tools generally describe signals as graph nodes connected by a single type of edge. However, real-life systems and datasets may feature multi-facet interactions \cite{c11}. For example, in a video dataset modeled by the spatial-temporal graph shown in Fig. \ref{ex1}, the nodes may exhibit different types of spatial connections at different temporal steps. It is harder for single-layer graphs to model such multi-facet connections. To model multiple layers of signal connectivity, we explore a high-dimensional graph representation known as multilayer networks \cite{c10}. A multilayer network (MLN) is a geometric model containing correlated layers with different structures and physical meanings, unlike traditional single-layer graphs. A typical example is a smart grid consisting of the two layers shown in Fig. \ref{ex2}: the power grid and the computation network.
These two layers have different physical connectivity and rules \cite{c12}. Still, signal interactions across the multiple layers in MLN can be strongly correlated. Thus, separate representations by multiple single-layer graphs may fail to capture such characteristics. Consider a network consisting of a physical power layer and a cyber layer: the failure of one layer could trigger the failure of the other \cite{c13}. One example was the power line damage caused by a storm on September 28, 2003. Not only did it lead to the failure of several power stations, but it also disrupted communications as a result of power station breakdowns that eventually affected 56 million people in Europe \cite{c14}. \begin{figure*}[htbp] \centering \subfigure[]{ \label{ex1} \includegraphics[height=2.5cm]{video.png}} \hspace{3cm} \subfigure[]{ \label{ex2} \includegraphics[height=2.5cm]{CPS.png}} \caption{Multilayer Networks and Applications: (a) Video: each layer represents one frame of the video and the edges capture the spatial-temporal relationships; (b) Cyber-Physical System (CPS): each layer represents one component in CPS and the edges capture the physical connections.} \label{ex_mln} \vspace{-3mm} \end{figure*} The complexity and multi-level interactions of MLN make the data reside on irregular, high-dimensional structures that do not directly lend themselves to standard GSP tools. For example, even though one can represent an MLN by a supra-graph unfolding all the layers \cite{d1}, traditional GSP would treat interlayer and intralayer interactions equivalently in one spectral space without differentiating the spectra of interlayer and intralayer signal correlations. Recently, there has been growing interest in developing advanced GSP tools to process such multi-level structures. In \cite{c16}, a two-step transform is proposed to process spatial-temporal graphs.
The graph Fourier transform (GFT) is applied first in the spatial domain (intralayer) and then in the temporal domain (interlayer). In this framework, different graph Fourier spaces are defined for interlayer and intralayer connections respectively. However, all the spatial interactions reside within a single graph structure, which limits the generalization to MLN. In \cite{c19}, a joint time-vertex Fourier transform (JFT) is defined by implementing GFT and DFT consecutively. Although JFT can process time-varying datasets, it cannot handle the more general temporal (interlayer) connectivity of a generic multilayer network. Alternatively, a tensor-based multi-way graph signal processing framework (MWGSP) relies on product graphs \cite{c18}. In this framework, separate factor graphs are constructed for each mode of a tensor-represented signal, and a joint spectrum combines the spectra of all factor graphs. Since MWGSP focuses on the product of all factor graphs estimated from signals, it is not well suited to a multilayer network with a given structure. Another challenge in MLN signal processing lies in the need for a suitable mathematical representation. Traditional methods start with connectivity matrices. For example, in \cite{d2}, a supra-adjacency matrix is defined to represent all layers equivalently while ignoring the distinct nature of different layers. One can also represent each layer with an individual adjacency matrix \cite{c20}. However, such matrix-based representations mainly focus on the intralayer connections and lack a representation of interlayer interactions \cite{c20}. A more natural and general approach may start with a tensor representation \cite{c10}, which is particularly attractive in handling complex MLN graph analysis. Our goal is to generalize graph signal processing for multilayer networks to model, analyze, and process signals based on the {\em intralayer} and {\em interlayer} signal interactions.
To address the aforementioned challenges and to advance MLN processing, we present a novel tensor framework for graph signal processing over multilayer networks (M-GSP). We summarize the main contributions of this work as follows: \begin{itemize} \item Leveraging the tensor representation of MLN, we introduce the definitions of signals and signal shifting over MLN. \item We define new concepts of spectral space and spectral transform for M-GSP. For interpretability of the spectral space, we analyze the resulting MLN spectral properties and their distinctions from existing GSP tools. \item We also present fundamentals of filter design in M-GSP, and suggest several practical applications based on the proposed framework. \end{itemize} We organize the technical contents as follows. Section \ref{prelim} first summarizes the preliminaries of traditional GSP and tensor analysis. We then introduce the fundamentals of M-GSP and frequency analysis in Section \ref{funda} and Section \ref{mln_spec}, respectively. We next present MLN filter design in Section \ref{fter}. We provide the physical insights and spectrum interpretation of M-GSP concepts in Section \ref{discus}. Within the proposed M-GSP framework, we present several example applications and demonstrate its effectiveness in Section \ref{app}, before summarizing our conclusions in Section \ref{con}. \section{Preliminaries} \label{prelim} \subsection{Overview of Graph Signal Processing} Signal processing on graphs \cite{c1,c2,c3} studies signals that are discrete in some dimensions by representing the irregular signal structure using a graph $\mathcal{G}=\{\mathcal{V},\mathbf{F}\}$, where $\mathcal{V}=\{v_1,v_2,\cdots, v_N\}$ is a set of $N$ nodes, and $\mathbf{F}\in\mathbb{R}^{N\times N}$ is the representing matrix (e.g., adjacency/Laplacian) describing the geometric structure of the graph $\mathcal{G}$. Graph signals are the attributes of the nodes residing on the graph structure.
A graph signal can be written as a vector $\mathbf{s}=[s_1, s_2,\cdots, s_N]^\mathrm{T}\in\mathbb{R}^{N}$, where the superscript $()^\mathrm{T}$ denotes matrix/vector transpose. With a graph representation $\mathbf{F}$ and a signal vector $\mathbf{s}$, the basic graph filtering is defined via $\mathbf{s}'=\mathbf{Fs}$. The graph spectral space, also known as the graph Fourier space, is defined based on the eigenspace of the representing matrix. Let the eigen-decomposition of $\mathbf{F}$ be given by $\mathbf{F}=\mathbf{V}\mathbf{\Lambda}\mathbf{V}^{-1}$, where $\mathbf{V}$ is the matrix with eigenvectors of $\mathbf{F}$ as columns, and the diagonal matrix $\mathbf{\Lambda}$ consists of the corresponding eigenvalues. The graph Fourier transform (GFT) is defined as \begin{equation}\label{GFT} \hat{\mathbf{s}}=\mathbf{V}^{-1}\mathbf{s}, \end{equation} whereas the inverse GFT is given by $\mathbf{s}=\mathbf{V}\hat{\mathbf{s}}$. From the definition of GFT, other concepts, such as sampling theory \cite{c23}, filter design \cite{c24}, and frequency analysis \cite{c4}, can be developed for signal processing and data analysis tasks. \subsection{Introduction of Tensor Basics} Before introducing the fundamentals of M-GSP, we first review some basics on tensors that are useful for multilayer network analysis. Tensors can be viewed as multi-dimensional arrays. The order of a tensor is the number of indices needed to label a component of that array \cite{c25}. For example, a third-order tensor has three indices. More specifically, a scalar is a zeroth-order tensor; a vector is a first-order tensor; a matrix is a second-order tensor; and an $M$-dimensional array is an $M$th-order tensor.
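As a quick numerical illustration of the GFT in Eq. (\ref{GFT}), the transform and its inverse reduce to two matrix products. The following NumPy sketch (a toy 4-node directed cycle graph, assumed purely for illustration) verifies that the inverse GFT recovers the original signal:

```python
import numpy as np

# Toy graph (assumed): adjacency matrix F of a 4-node directed cycle.
F = np.roll(np.eye(4), 1, axis=0)

# Eigendecomposition F = V diag(lambda) V^{-1}; complex-valued for directed graphs.
lam, V = np.linalg.eig(F)

s = np.array([1.0, 2.0, 3.0, 4.0])     # graph signal, one sample per node
s_hat = np.linalg.inv(V) @ s           # GFT:  s_hat = V^{-1} s
s_rec = V @ s_hat                      # inverse GFT: s = V s_hat
print(np.allclose(s_rec, s))           # True: the round trip recovers s
```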
For convenience, we use bold letters to represent tensors other than scalars, i.e., $\mathbf{A}\in\mathbb{R}^{I_1\times I_2\cdots\times I_N}$ represents an $N$th-order tensor with $I_k$ being the dimension of the $k$th order, and use $A_{{i_1}\cdots i_N}$ to represent the entry of $\mathbf{A}$ at position $(i_1,i_2,\cdots,i_N)$ with $1\leq i_k\leq I_k$ in this work. If $\mathbf{A}_f$ has a subscript $f$, we use $[A_f]_{{i_1}\cdots i_N}$ to denote its entries. We now start with some useful definitions and tensor operations for the M-GSP framework \cite{c25}. \subsubsection{Super-diagonal Tensor} An $N$th-order tensor $\mathbf{A}\in\mathbb{R}^{I_1\times I_2\cdots\times I_N}$ is \textit{super-diagonal} if its entries $A_{i_1i_2\cdots i_N}\neq 0$ only for $i_1=i_2=\cdots=i_N$. \subsubsection{Symmetric Tensor} A tensor is \textit{super-symmetric} if its elements remain constant under any permutation of the indices. For example, a third-order $\mathbf{A}\in \mathbb{R}^{I\times I\times I}$ is \textit{super-symmetric} if $A_{ijk}=A_{ikj}=A_{jik}=A_{jki}=A_{kij}=A_{kji}$. In addition, tensors can be \textit{partially symmetric} in two or more modes as well. For example, a third-order tensor $\mathbf{A}\in\mathbb{R}^{I\times I\times J}$ is \textit{symmetric} in orders one and two if $A_{ijk}=A_{jik}$, for $1\leq i,j\leq I$ and $1\leq k\leq J$. \subsubsection{Tensor Outer Product} The \textit{tensor outer product} between a $P$th-order tensor $\mathbf{U}\in \mathbb{R}^{I_1\times \cdots\times I_P}$ with entries $U_{i_1 ... i_P}$ and a $Q$th-order tensor $\mathbf{V} \in\mathbb{R}^{J_1\times \cdots\times J_Q}$ with entries $V_{j_1 ... j_Q}$ is denoted by \begin{equation} \mathbf{W}=\mathbf{U} \circ \mathbf{V}. \end{equation} The result $\mathbf{W}\in \mathbb{R}^{I_1\times \cdots\times I_P\times J_1\times \cdots\times J_Q} $ is a $(P+Q)$th-order tensor, whose entries are calculated by $W_{i_1 ... i_P j_1 ... j_Q}= U_{i_1 ... i_P} \cdot V_{j_1 ...
j_Q}.$ The tensor outer product is useful to construct a higher-order tensor from several lower-order tensors. \subsubsection{n-mode Product} The \textit{n-mode product} between a tensor $\mathbf{U}\in \mathbb{R}^{I_1\times \cdots \times I_P}$ and a matrix $\mathbf{V}\in \mathbb{R}^{J\times I_n}$ is denoted by \begin{equation} \mathbf{W}=\mathbf{U}\times_n \mathbf{V}\in \mathbb{R}^{I_1\times \cdots\times I_{n-1}\times J \times I_{n+1}\times \cdots \times I_P}. \end{equation} Each element in $\mathbf{W}$ is given by $W_{i_1 i_2 \cdots i_{n-1} j i_{n+1} \cdots i_P}=\sum_{i_n=1}^{I_n}U_{i_1\cdots i_P}V_{j i_n}.$ Note that the $n$-mode product is a different operation from the matrix product. \subsubsection{Tensor Contraction} In M-GSP, the contraction (inner product) between a fourth-order tensor $\mathbf{A}\in\mathbb{R}^{M\times N\times M \times N}$ and a matrix $\mathbf{x} \in \mathbb{R}^{M\times N}$ in the third and fourth orders is defined as \begin{equation}\label{contract} \mathbf{y}=\mathbf{A}\diamond \mathbf{x}\in \mathbb{R}^{M\times N}, \end{equation} where $y_{\alpha i}=\sum_{\beta=1}^M\sum_{j=1}^N A_{\alpha i \beta j}x_{\beta j}$. In addition, the contraction between two fourth-order tensors $\mathbf{U},\mathbf{V}\in\mathbb{R}^{M\times N\times M \times N}$ is defined as \begin{equation}\label{oodot} \mathbf{W}=\mathbf{U}\odot \mathbf{V}\in\mathbb{R}^{M\times N\times M \times N}, \end{equation} whose entries are $W_{\alpha i \epsilon p}=\sum_{\beta j} U_{\alpha i \beta j} V_{\beta j \epsilon p}$. \subsubsection{Tensor Decomposition} Tensor decompositions are useful tools to extract the underlying information of tensors. In particular, the CANDECOMP/PARAFAC (CP) decomposition decomposes a tensor into a sum of rank-one tensors, each formed as a tensor outer product of vectors \cite{c25,c26}. Another important decomposition is the Tucker decomposition, which takes the form of a higher-order PCA.
More specifically, Tucker decomposition decomposes a tensor into a core tensor multiplied by a matrix along each mode \cite{c25}. Other typical decompositions include Higher-Order SVD (HOSVD) \cite{c27}, orthogonal CP-decomposition \cite{c28}, and Tensor-Train decomposition \cite{c29}. Interested readers are referred to the tutorial \cite{c25} for more details. Due to the page limit, additional examples and illustrations of tensor decomposition in M-GSP are provided in \textbf{Appendix B}. \section{Fundamentals of M-GSP} \label{funda} In this section, we introduce the basic definitions in the proposed M-GSP framework. \begin{figure}[t] \centering \subfigure[]{ \label{mln1} \includegraphics[height=3cm]{MLN.png}} \hspace{2cm} \subfigure[]{ \label{mlp1} \includegraphics[height=3cm]{MLP.png}} \caption{Example of multilayer networks: (a) A three-layer interconnected network; (b) a three-layer multiplex network.} \label{ex_mln1} \vspace{-5mm} \end{figure} \subsection{Multilayer Network} Before introducing the foundations of M-GSP, we first provide definitions of multilayer networks (MLN) \cite{c25}. \begin{definition}[Multilayer Network] A multilayer network with $K$ nodes and $M$ layers is defined as $\mathcal{M}=\{\mathcal{V},\mathcal{L},\mathbf{F}\}$, where $\mathcal{V}=\{v_1,v_2,\cdots,v_K\}$ is the set of nodes, $\mathcal{L}=\{l_1,l_2,\cdots,l_M\}$ denotes the set of layers with each layer $l_i=\{v_{i_1},\cdots,v_{i_n}\}$ being a subset of $\mathcal{V}$, whereas $\mathbf{F}$ is the algebraic representation describing node interactions. \end{definition} Note that we mainly focus on the layer-disjoint multilayer network \cite{c11}, where each node exists in exactly one layer, since layers denote different phenomena. For example, in a smart grid, a station with functions in both the power grid and the communication network is usually modeled as two nodes in a two-layer network for network analysis \cite{d3}.
In multilayer networks, edges connect nodes in the same layer (intralayer edges) or nodes of different layers (interlayer edges) \cite{c30}. There are two main types of multilayer networks: the \textit{multiplex network} and the \textit{interconnected network} \cite{c31}. In a \textit{multiplex network}, each layer has the same number of nodes, and each node connects only with its one-to-one matching counterparts in other layers to form interlayer connections. Typically, multiplex networks characterize different types of interactions among the same (or a similar) set of physical entities. For example, the spatial-temporal connections among a set of nodes can be intuitively modeled as a multiplex network \cite{c31}. In an \textit{interconnected network}, each layer may have a different number of nodes without one-to-one counterparts, and the interlayer connections can be more flexible. Examples of a three-layer multiplex network and a three-layer interconnected network are shown in Fig. \ref{ex_mln1}, where different colors represent different layers, solid lines represent intralayer connections, and dashed lines indicate interlayer connections. \subsection{Algebraic Representation} To capture the high-dimensional `multilayer' interactions between different nodes, we use tensors as the algebraic representation of MLN for the proposed M-GSP framework \cite{c11}. \subsubsection{MLN with the Same Number of Nodes in Each Layer} To better interpret the tensor representation of a multilayer network, we start from a simpler type of MLN, in which each layer contains the same number of nodes. A multilayer network $\mathcal{M}=\{\mathcal{V},\mathcal{L}\}$ with $|\mathcal{L}|=M$ layers and $N$ nodes in each layer, i.e., $|l_i|=N$ for $1\leq i \leq M$, can be interpreted as embedding the interactions between a set of $N$ `entities' (not nodes) into a set of $M$ layers. The nodes in different layers can be viewed as the projections of the entities.
For example, video datasets can be modeled by embedding the spatial connections between objects (entities) into different temporal frames (layers). Mathematically, the process of embedding (projecting) entities can be viewed as a tensor product, and network connections can be represented by tensors \cite{c10}. For convenience, we use Greek letters $\alpha,\beta,\cdots$ to indicate each layer and Latin letters $i,j,\cdots$ to indicate each interpretable `entity' with a corresponding node in each layer. Given a set of entities $\mathcal{X}=\{x_1,x_2,\cdots,x_N\}$, one can construct a vector $\mathbf{e}_i\in \mathbb{R}^{N}$ to characterize each entity $i$. Thus, interactions of two entities can be represented by a second-order tensor $\mathbf{A}_X=\sum_{i,j=1}^N a_{ij} \mathbf{e}_i \circ \mathbf{e}_j\in\mathbb{R}^{N\times N}$, where $a_{ij}$ is the intensity of the relationship between entities $i$ and $j$. Similarly, given a set of layers $\mathcal{L}=\{l_1,l_2,\cdots,l_M\}$, a vector $\mathbf{e}_{\alpha}\in \mathbb{R}^{M}$ can capture the properties of layer $\alpha$, and the connectivity between two layers can be represented by $\mathbf{A}_L=\sum_{\alpha,\beta=1}^M b_{\alpha\beta}\mathbf{e}_\alpha \circ \mathbf{e}_\beta \in \mathbb{R}^{M\times M}$. Following this approach, connectivity between the projected nodes of the entities in the layers can be represented by a {fourth-order} tensor \begin{equation}\label{tensor_construction} \mathbf{A}=\sum_{\alpha,\beta=1}^{M}\sum_{i,j=1}^N w_{\alpha i \beta j}\mathbf{e}_\alpha \circ \mathbf{e}_i\circ \mathbf{e}_\beta\circ\mathbf{e}_j \in \mathbb{R}^{M\times N\times M \times N}, \end{equation} where $w_{\alpha i \beta j}$ is the weight of the connection between entity $i$'s projected node in layer $\alpha$ and entity $j$'s projected node in layer $\beta$.
More specifically, if we select the vector $\mathbf{e}_i=[0,\cdots,0,1,0,\cdots,0]^\mathrm{T}$, in which the only nonzero element is the $i$th element (equal to 1), for both layers and entities, the {fourth-order} tensor becomes the adjacency tensor of the multilayer network, where each entry $A_{\alpha i \beta j}=w_{\alpha i \beta j}$ characterizes the edge between entity $i$'s projected node in layer $\alpha$ and entity $j$'s projected node in layer $\beta$. Thus, similar to the adjacency matrix, whose 2-D entries indicate whether and how two nodes are pairwise connected by a simple edge in normal graphs, we adopt an adjacency tensor $\mathbf{A}\in\mathbb{R}^{M\times N\times M \times N}$ to represent a multilayer network with the same number of nodes in each layer as follows. \begin{definition}[Adjacency Tensor] A multilayer network $\mathcal{M}$, with $|\mathcal{L}|=M$ layers and $|l_i|=N$ nodes in each layer $i$, can be represented by a {fourth-order} adjacency tensor $\mathbf{A}\in \mathbb{R}^{M\times N\times M \times N}$ defined as \begin{equation} \mathbf{A}=(A_{\alpha i \beta j}), \quad 1\leq \alpha,\beta\leq M, 1\leq i,j\leq N. \end{equation} Here, each entry $A_{\alpha i \beta j}$ of the adjacency tensor $\mathbf{A}$ indicates the intensity of the edge between entity $j$'s projected node in layer $\beta$ and entity $i$'s projected node in layer $\alpha$. \end{definition} Clearly, for a single-layer graph/network, $\mathbf{e}_\alpha$ is a scalar $1$ and the {fourth-order} tensor degenerates to the adjacency matrix of normal graphs. Similar to $A_{ij}$ in an adjacency matrix, which indicates the direction from node $v_j$ to $v_i$, $A_{\alpha i \beta j}$ also indicates the direction from node $v_{\beta j}$ to node $v_{\alpha i}$ in a network. Note that vectors $\mathbf{e}_i$ and $\mathbf{e_\alpha}$ are not eigenvectors of the adjacency tensor.
They are merely the vectors characterizing features of the entities and layers, respectively. We shall discuss the MLN-based spectral space in Section \ref{mln_spec}. Given an adjacency tensor, we can define the Laplacian tensor of a multilayer network similar to that of a single-layer graph. Denoting the degree (or multi-strength) of entity $i$'s projected node $v_{\alpha i}$ in layer $\alpha$ as $d(v_{\alpha i})$, which is a summation over weights of different natures (inter- and intralayer edges), the degree tensor $\mathbf{D}\in \mathbb{R}^{M\times N\times M \times N}$ is defined as a diagonal tensor with entries $D_{\alpha i \alpha i}=d(v_{\alpha i})$ for $1\leq i\leq N, 1\leq \alpha \leq M$, whereas its other entries are zero. The Laplacian tensor can be defined as follows. \begin{definition}[Laplacian Tensor] A multilayer network $\mathcal{M}$, with $|\mathcal{L}|=M$ layers and $|l_i|=N$ nodes in each layer $i$, can be represented by a {fourth-order} Laplacian tensor $\mathbf{L}\in \mathbb{R}^{M\times N\times M \times N} $ defined as $\mathbf{L=D-A}$, where $\mathbf{A}$ is the adjacency tensor and $\mathbf{D}$ is the degree tensor. \end{definition} The Laplacian tensor is useful for analyzing propagation processes such as diffusion or random walks \cite{c10}. Both adjacency and Laplacian tensors are important algebraic representations of the MLN, depending on the dataset and user objectives. \subsubsection{Representation of General MLN} Representing a general multilayer network with different numbers of nodes in each layer remains a challenge if one aims to distinguish the interlayer and intralayer connection features. In JFT \cite{c19} and MWGSP \cite{c18}, all layers must reside on the same underlying graph structure, which restricts the number of nodes to be the same in each layer. Similarly, a reconstruction is also needed to represent a general MLN by the fourth-order tensor in M-GSP.
Note that although M-GSP also needs a reconstruction to represent a general MLN, we allow different layers with heterogeneous graph structures, which provides more flexibility than JFT and MWGSP. There are two main ways to reconstruct: 1) add isolated nodes to layers with fewer nodes to reach $N$ nodes \cite{c12} and set the augmented signals to zero; and 2) aggregate several nodes into super-nodes for layers with $|l_i|>N$ \cite{c33} and merge the corresponding signals. Since isolated nodes do not interact with any other nodes, adding them does not change the topological structure of the original multilayer architecture in the sense of signal shifting, although the corresponding spectral space may still change. The aggregation method depends on how efficiently we can aggregate redundant or similar nodes, and different methods can be applied depending on specific tasks. For example, if one wants to explore cascading failures in a physical system, the method based on isolated nodes is more suitable. For applications such as video analysis, where pixels can be intuitively merged into superpixels, the aggregation method is also practical. In addition, although the {fourth-order} representing tensor can be viewed as the projection of several entities into different layers in Eq. (\ref{tensor_construction}), the entities and layers can be virtual, and not necessarily physical, to capture the underlying structures of the datasets. The information within the multilayer networks, together with definitions of the underlying virtual entities and layers, should only depend on the structure of the multilayer networks. We will illustrate this further in Section \ref{exn_la}. \subsection{Flattening and Analysis} \label{mln_flat} In this part, we introduce the flattening of the multilayer network, which can simplify some operations in the tensor-based M-GSP.
For a multilayer network $\mathcal{M}=\{\mathcal{V},\mathcal{L},\mathbf{F}\}$ with $M$ layers and $N$ nodes in each layer, its {fourth-order} representing tensor $\mathbf{F}\in \mathbb{R}^{M\times N\times M \times N}$ can be flattened into a second-order matrix to capture the overall edge weights. There are two main flattening schemes, defined in the sense of layers and entities, respectively: \begin{itemize} \item Layer-wise Flattening: The representing tensor $\mathbf{F}$ can be flattened into $\mathbf{F}_{FL} \in \mathbb{R}^{MN\times MN}$ with each element \begin{equation}\label{lwise} {[F_{FL}]}_{{N(\alpha-1)+i, N(\beta-1)+j}}=F_{\alpha i \beta j}. \end{equation} \item Entity-wise Flattening: The representing tensor $\mathbf{F}$ can be flattened into $\mathbf{F}_{FN} \in \mathbb{R}^{NM\times NM}$ with each element \begin{equation}\label{ewise} {[F_{FN}]}_{{M(i-1)+\alpha, M(j-1)+\beta}}=F_{\alpha i \beta j}. \end{equation} \end{itemize} These two flattening methods provide two ways to interpret the network structure. In the first method, the flattened multilayer network has $M$ clusters with $N$ nodes in each cluster, where nodes in the same cluster have the same function (belong to the same layer). In the second method, the flattened network has $N$ clusters with $M$ nodes in each cluster, where nodes in the same cluster come from the same entity. Examples of the tensor flattening of a two-layer network with $3$ nodes in each layer are shown in Fig. \ref{flat}. From the examples, we can see that, through {\em layer-wise flattening}, the diagonal blocks in $\mathbb{R}^{N\times N}$ contain the intralayer connections of each layer and the other blocks describe the interlayer connections; in {\em entity-wise flattening}, the diagonal blocks in $\mathbb{R}^{M\times M}$ describe the `intra-entity' connections and the other elements represent the `inter-entity' connections.
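In implementation terms, the two flattenings are simple array reshapes. A minimal NumPy sketch (0-indexed, with assumed toy sizes $M=2$ and $N=3$; the index maps mirror Eq. (\ref{lwise}) and Eq. (\ref{ewise})):

```python
import numpy as np

M, N = 2, 3                                   # assumed toy sizes: 2 layers, 3 nodes per layer
rng = np.random.default_rng(0)
F = rng.random((M, N, M, N))                  # representing tensor F_{alpha i beta j}

# Layer-wise flattening, Eq. (lwise): row index N*alpha + i (0-indexed),
# i.e., a plain reshape that groups nodes by layer.
F_FL = F.reshape(M * N, M * N)

# Entity-wise flattening, Eq. (ewise): row index M*i + alpha (0-indexed),
# i.e., swap the layer and entity orders first, then reshape.
F_FN = F.transpose(1, 0, 3, 2).reshape(N * M, N * M)

# Sanity check of the index maps at one entry:
a, i, b, j = 1, 2, 0, 1
print(F_FL[N * a + i, N * b + j] == F[a, i, b, j])   # True
print(F_FN[M * i + a, M * j + b] == F[a, i, b, j])   # True
```

The two matrices differ only by a relabeling (permutation) of the $MN$ vertices.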
Although these two flattening schemes define the same MLN with a different indexing of vertices, they are still helpful for analyzing the MLN spectral space. For example, in \cite{c12}, approximations of the spectral radius are derived based on the different structures of these two flattened matrices. \begin{figure}[t] \centering \includegraphics[width=3.5in]{flattening.png} \caption{Example of Multilayer Network Flattening.} \vspace{-2mm} \label{flat} \vspace{-4mm} \end{figure} \subsection{Signals and Shifting over the Multilayer Networks} Based on the tensor representation, we now define signals and signal shifting over multilayer networks. In GSP, each signal sample is the attribute of one node. Typically, a graph signal can be represented by an $N$-length vector for a graph with $N$ nodes. Recall that in traditional GSP \cite{c2}, basic signal shifting is defined with the representing matrix as the shifting filter. Thus, in M-GSP, we can also define signals and signal shifting based on the filter implementation. In M-GSP, each signal sample is also related to one node in the multilayer network. Intuitively, if there are $K=MN$ nodes, there are $MN$ signal samples in total. Similar to GSP, we use the representing (adjacency/Laplacian) tensor $\mathbf{F}\in \mathbb{R}^{M \times N\times M \times N}$ as the basic MLN-filter. Since the input and output signals of the MLN-filter should be consistent in tensor size, we define a special form of M-GSP signals to work with the representing tensor as follows.
\begin{definition} [Signals over Multilayer Networks] For a multilayer network $\mathcal{M}=\{\mathcal{V},\mathcal{L},\mathbf{F}\}$, with $|\mathcal{L}|=M$ layers and $|l_i|=N$ nodes in each layer $i$, a multilayer network signal is defined as a second-order tensor \begin{equation} \mathbf{s}=(s_{\alpha i})\in\mathbb{R}^{M\times N}, \quad 1\leq \alpha \leq M, 1\leq i\leq N, \end{equation} where the entry $s_{\alpha i}$ is the signal sample at the projected node of entity $i$ in layer $\alpha$. \end{definition} Note that if the multilayer network degenerates to a single-layer graph with $M=1$, the multilayer network signal becomes an $N$-length vector, similar to that in traditional GSP. Similar to the representing tensor, the tensor signal $\mathbf{s}\in\mathbb{R}^{M\times N}$ can also be flattened as a vector in $\mathbb{R}^{MN}$: \begin{itemize} \item Layer-wise flattening: $\mathbf{s}_L\in \mathbb{R}^{MN}$ whose entries are calculated as $[s_L]_{N(\alpha-1)+i}=s_{\alpha i}$. \item Entity-wise flattening: $\mathbf{s}_N\in \mathbb{R}^{NM}$ whose entries are calculated as $[s_N]_{M(i-1)+\alpha}=s_{\alpha i}$. \end{itemize} Given the definitions of multilayer network signals and filters, we now introduce signal shifting in M-GSP. In traditional GSP, signal shifting is defined as the product between the signal vector and the representing matrix. Similarly, we define shifting in the multilayer network based on the contraction (inner product) between the representing tensor and the tensor signal.
\begin{definition}[Signal Shifting over Multilayer Networks] Given the representing tensor $\mathbf{F}\in\mathbb{R}^{M\times N\times M \times N}$ and the tensor signal $\mathbf{s}\in\mathbb{R}^{M\times N}$ defined over a multilayer network $\mathcal{M}$, the signal shifting is defined as the contraction (inner product) between $\mathbf{F}$ and $\mathbf{s}$ in one entity-related order and one layer-related order, i.e., \begin{equation} \label{shift_mln} \mathbf{s}'=\mathbf{F}\diamond \mathbf{s}\in \mathbb{R}^{M\times N}, \end{equation} where $\diamond$ is the contraction between $\mathbf{F}$ and $\mathbf{s}$ defined in Eq. (\ref{contract}). \end{definition} The elements in the shifted signal $\mathbf{s}'$ are calculated as \begin{equation} \label{diffuse} s'_{\alpha i}=\sum_{\beta=1}^M\sum_{j=1}^N F_{\alpha i \beta j}s_{\beta j}. \end{equation} From Eq. (\ref{diffuse}), two important factors construct the shifted signal: 1) the signals at the neighbors of the node $v_{\alpha i}$; and 2) the intensity of interactions between the node $v_{\alpha i}$ and its neighbors. Thus, signal shifting is related to the diffusion process over the multilayer network. More specifically, if $\mathbf{F}$ is the adjacency tensor, signals shift in the directions of the edges. Meanwhile, if $\mathbf{F}$ is the Laplacian tensor, Eq. (\ref{diffuse}) can be written as \begin{equation} s'_{\alpha i}=\sum_{\beta=1}^M\sum_{j=1}^N A_{\alpha i \beta j} (s_{\alpha i}-s_{\beta j}), \end{equation} which is the weighted average of differences with neighbors. \section{Multilayer Network Spectral Space} \label{mln_spec} In traditional GSP, the graph spectral space is defined according to the eigenspace of the representing matrix \cite{c2}. Similarly, we define the MLN spectral space based on the decomposition of the representing tensor.
Since tensor decomposition is less stable when exploring the factorization of a specific order or when extracting separate features in asymmetric tensors, we will mainly focus on the spectral properties of undirected multilayer networks in this section for simplicity and clarity of presentation. For directed MLN, we provide alternative spectral definitions in \textbf{Appendix C} and leave the frequency analysis for future work. Meanwhile, all proofs of the properties in this part are listed in \textbf{Appendix A}. \subsection{Joint Spectral Analysis in M-GSP} \label{JSA} For a multilayer network $\mathcal{M}=\{\mathcal{V},\mathcal{L},\mathbf{F}\}$ with $M$ layers and $N$ nodes, the eigen-tensor $\mathbf{V}\in\mathbb{R}^{M\times N}$ of the representing tensor $\mathbf{F}$ is defined in the tensor-based multilayer network theory \cite{c10} as $\mathbf{F}\diamond \mathbf{V}=\lambda \mathbf{V}$. More specifically, $\mathbf{F}\in\mathbb{R}^{M\times N\times M\times N}$ can be decomposed as \begin{align}\label{ted} \mathbf{F} =\sum_{k=1}^{MN}\lambda_k \mathbf{V}_k \circ\mathbf{V}_k=\sum_{\alpha=1}^M\sum_{i=1}^N \lambda_{\alpha i} \mathbf{V}_{\alpha i} \circ\mathbf{V}_{\alpha i}, \end{align} where $\lambda_k$ are the eigenvalues and $\mathbf{V}_k\in\mathbb{R}^{M\times N}$ are the corresponding eigen-tensors. Note that $\mathbf{V}_{\alpha i}$ merely relabels the index of $\mathbf{V}_k$, and there is no specific order for $\mathbf{V}_{\alpha i}$ here. Similar to traditional GSP, where the graph Fourier space is defined by the eigenvectors of the representing matrix, we define the joint MLN Fourier space as follows.
\begin{definition}[Joint Multilayer Network Fourier Space] For a multilayer network $\mathcal{M}=\{\mathcal{V},\mathcal{L},\mathbf{F}\}$ with $M$ layers and $N$ nodes, the MLN Fourier space is defined as the space consisting of all spectral tensors $\{\mathbf{V}_1,\cdots,\mathbf{V}_{MN}\}$, which characterizes the joint features of entities and layers. \end{definition} Recall that in GSP, the GFT is defined based on the product of $\mathbf{V}^{-1}$ and the signal $\mathbf{s}$, as in Eq. (\ref{GFT}). Similarly, we can define the M-GFT based on the spectral tensors of the representing tensor $\mathbf{F}$ to capture the joint features of inter- and intralayer interactions as follows. \begin{definition}[Joint M-GFT] Let $\mathbf{U}_\mathcal{F}=(\mathbf{V}_{\alpha i})\in\mathbb{R}^{M\times N \times M \times N}$ consist of the spectral tensors of the representing tensor $\mathbf{F}$, where $[U_\mathcal{F}]_{\alpha i \beta j}=[V_{\alpha i}]_{\beta j}$. The joint M-GFT is defined as the contraction between $\mathbf{U}_\mathcal{F}$ and the tensor signal $\mathbf{s}\in\mathbb{R}^{M\times N}$, i.e., \begin{equation}\label{M-GFT} \hat{\mathbf{s}}=\mathbf{U}_\mathcal{F} \diamond \mathbf{s}. \end{equation} \end{definition} Now, we show how to obtain the eigen-tensors. Implementing the flattening analysis, we have the following properties. \begin{property} The two types of flattened tensors in Eq. (\ref{lwise}) and Eq. (\ref{ewise}) lead to the same eigenvalues. \end{property} This property shows that the flattened tensors are reshapes of the original representing tensor and capture some of its spectral properties, as follows. \begin{property}\label{eig} Given an eigenpair $(\lambda_{FL}, \mathbf{x})$ of the layer-wise flattened tensor, the corresponding eigenpair $(\lambda,\mathbf{V})$ of the original representing tensor can be calculated as $\lambda=\lambda_{FL}$, and $V_{\alpha i}=x_{N(\alpha-1)+i}$.
Similarly, given an eigenpair $(\lambda_{FN}, \mathbf{y})$ of the entity-wise flattened tensor, the corresponding eigenpair $(\lambda,\mathbf{V})$ of the original representing tensor can be calculated as $\lambda=\lambda_{FN}$, and $V_{\alpha i}=y_{M(i-1)+\alpha}$. \end{property} Property \ref{eig} shows that we can calculate the eigen-tensors from the flattened tensor to simplify the decomposition operations. Moreover, the joint M-GFT is a bijection of the GFT on the flattened MLN, with vertices indexed by both the layers and the entities. However, such a joint M-GFT analyzes the inter- and intralayer connections jointly, ignoring the individual features of entities and layers. Next, we show how to implement order-wise frequency analysis in M-GSP based on tensor decomposition. \subsection{Order-wise Spectral Analysis in M-GSP}\label{OAA} In an undirected multilayer network, the representing tensor (adjacency/Laplacian) $\mathbf{F}$ is partially symmetric between orders one and three, and between orders two and four. Then, taking the multilayer network structure into consideration, the representing tensor can be written under the orthogonal CP-decomposition \cite{c28} as follows: \begin{align}\label{decompose1} \mathbf{F}&\approx\sum_{\alpha=1}^{M}\sum_{i=1}^N \lambda_{\alpha i} \cdot \mathbf{f}_\alpha \circ\mathbf{e}_i\circ \mathbf{f}_\alpha \circ\mathbf{e}_i\\ &=\sum_{\alpha=1}^M\sum_{i=1}^N \lambda_{\alpha i} \tilde{\mathbf{V}}_{\alpha i} \circ\tilde{\mathbf{V}}_{\alpha i}, \end{align} where the $\mathbf{f}_\alpha \in\mathbb{R}^M$ are orthonormal, the $\mathbf{e}_i\in\mathbb{R}^N$ are orthonormal, and $\tilde{\mathbf{V}}_{\alpha i}=\mathbf{f}_{\alpha}\circ \mathbf{e}_i\in\mathbb{R}^{M\times N}$. The CP decomposition factorizes a tensor into a sum of component rank-one tensors, which describe the underlying features of each order.
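For the undirected case, the eigen-tensors of Section \ref{JSA} are directly computable via Property \ref{eig}: eigendecompose the flattened matrix and reshape each eigenvector back to $\mathbb{R}^{M\times N}$. A minimal NumPy sketch (a toy symmetrized tensor is assumed; `einsum` implements the contraction $\diamond$ of Eq. (\ref{contract})):

```python
import numpy as np

M, N = 2, 3                                   # assumed toy sizes
rng = np.random.default_rng(1)
F = rng.random((M, N, M, N))
F = 0.5 * (F + F.transpose(2, 3, 0, 1))       # enforce undirected symmetry F_{ai,bj} = F_{bj,ai}

# Eigendecomposition of the layer-wise flattened matrix (symmetric, so eigh applies).
lam, X = np.linalg.eigh(F.reshape(M * N, M * N))

# Reshape each eigenvector back into an eigen-tensor V in R^{M x N} (Property eig).
V = [X[:, k].reshape(M, N) for k in range(M * N)]

# Verify F <> V_k = lam_k V_k, with the contraction <> written as an einsum over (beta, j).
for k in range(M * N):
    shifted = np.einsum('aibj,bj->ai', F, V[k])
    print(np.allclose(shifted, lam[k] * V[k]))   # True for every k
```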
Although only approximate algorithms are available to obtain the optimal decomposition, CP decomposition has achieved great success in real scenarios, such as feature extraction \cite{d5} and tensor-based PCA analysis \cite{d4}. A detailed discussion of tensor decomposition and its implementation in M-GSP is provided in \textbf{Appendix B}. In Eq. (\ref{decompose1}), $\mathbf{f}_\alpha$ and $\mathbf{e}_i$ capture the features of layers and entities, respectively, and can be interpreted as the subspaces of the MLN. Further discussion of the frequency interpretation of the order-wise M-GSP spectrum and its connections to the MWGSP spectrum is presented in Section \ref{dep}. Note that if there is only one layer in the multilayer network, Eq. (\ref{decompose1}) reduces to the eigendecomposition of a normal single-layer graph, i.e., $\mathbf{F}=\sum_{i=1}^N\lambda_i\mathbf{e}_i \circ \mathbf{e}_i$. With the decomposed representing tensor in Eq. (\ref{decompose1}), the order-wise MLN spectrum is defined as follows. \begin{definition}[Order-wise MLN Spectral Pair] For a multilayer network $\mathcal{M}=\{\mathcal{V},\mathcal{L},\mathbf{F}\}$ with $M$ layers and $N$ nodes, the order-wise MLN spectral pairs are defined by $\{\lambda_{\alpha i},\mathbf{f}_\alpha,\mathbf{e}_i\}$, where $\{\mathbf{f}_1,\cdots,\mathbf{f}_M\}$ and $\{\mathbf{e}_1,\cdots,\mathbf{e}_N\}$ characterize the features of layers and entities, respectively. \end{definition} With the definition of the order-wise MLN spectral pair, we now explore its properties. Considering $\tilde{\mathbf{V}}_{\alpha i}=\mathbf{f}_{\alpha}\circ \mathbf{e}_i$, we have the following property, which indicates the availability of a joint MLN analysis based on the order-wise spectrum. \begin{property} The factor tensor $\tilde{\mathbf{V}}_{\alpha i}$ of the representing tensor $\mathbf{F}$ is an approximated eigen-tensor of $\mathbf{F}$.
\end{property} By constructing a {fourth-order} tensor $\tilde{\mathbf{U}}_{\mathcal{F}}\in\mathbb{R}^{M\times N\times M \times N}$ with $\tilde{\mathbf{V}}_{\alpha i}$ as its elements, i.e., $[\tilde{U}_{\mathcal{F}}]_{\alpha i \beta j}=[\tilde{V}_{\alpha i}]_{\beta j}$, we have the following property. \begin{property}\label{ortho} Let $\mathbf{W}=\tilde{\mathbf{U}}_{\mathcal{F}}\otimes \tilde{\mathbf{U}}_{\mathcal{F}}$, where $\otimes$ is the contraction over the third and fourth orders with $W_{\alpha i \beta j}=\sum_{p,\theta}[\tilde{U}_{\mathcal{F}}]_{\beta j \theta p}\times [\tilde{U}_{\mathcal{F}}]_{\alpha i \theta p}$. Then, $\mathbf{W}$ is super-diagonal with super-diagonal elements all equal to one. \end{property} This property generalizes the orthogonality of the spectral matrix in GSP to the spectral tensor. We now introduce the order-wise MLN spectral transform. Similar to Eq. (\ref{M-GFT}), the joint transform can be defined as \begin{equation}\label{O-GFT} \hat{\mathbf{s}}=\tilde{\mathbf{U}}_\mathcal{F} \diamond \mathbf{s}. \end{equation} Note that each element of $\hat{\mathbf{s}}$ in Eq. (\ref{O-GFT}) can be calculated as \begin{align} \hat{s}_{\alpha i}&=\sum_{\beta,j} [\tilde{U}_{\mathcal{F}}]_{\alpha i \beta j}s_{\beta j} =\sum_{\beta,j} [\tilde{V}_{\alpha i}]_{\beta j}s_{\beta j}\\ &=\sum_{\beta,j} [f_\alpha]_\beta \cdot [e_i]_j\cdot s_{\beta j}. \end{align} Let $\mathbf{E}_f=[\mathbf{f}_1\cdots\mathbf{f}_M]\in\mathbb{R}^{M\times M}$ and $\mathbf{E}_e=[\mathbf{e}_1\cdots\mathbf{e}_N]\in\mathbb{R}^{N\times N}$. We then have $\hat{\mathbf{s}}'=\mathbf{E}_f^{\mathrm{T}}\mathbf{s}\mathbf{E}_e$, with each element $\hat{s}'_{\alpha i}=\sum_{\beta,j}[f_\alpha]_\beta \cdot [e_i]_j\cdot s_{\beta j}$. Clearly, the joint M-GFT can be obtained as $\hat{\mathbf{s}}=\hat{\mathbf{s}}'=\mathbf{E}_f^{\mathrm{T}}\mathbf{s}\mathbf{E}_e$. Then, we have the following definition of M-GFT based on the order-wise spectrum.
\begin{definition} [Order-wise M-GFT] Given the spectral vectors $\mathbf{E}_f=[\mathbf{f}_1\cdots\mathbf{f}_M]\in\mathbb{R}^{M\times M}$ and $\mathbf{E}_e=[\mathbf{e}_1\cdots\mathbf{e}_N]\in\mathbb{R}^{N\times N}$, the layer-wise M-GFT can be defined as \begin{equation} \hat{\mathbf{s}}_L=\mathbf{E}_f^{\mathrm{T}}\mathbf{s}\in\mathbb{R}^{M\times N}, \end{equation} and the entity-wise M-GFT can be defined as \begin{equation} \hat{\mathbf{s}}_N=\mathbf{s}\mathbf{E}_e\in\mathbb{R}^{M\times N}. \end{equation} The joint M-GFT based on the order-wise spectrum is defined as \begin{equation} \label{ffff} \hat{\mathbf{s}}=\mathbf{E}_f^{\mathrm{T}}\mathbf{s}\mathbf{E}_e\in\mathbb{R}^{M\times N}. \end{equation} \end{definition} If there is only one layer in the multilayer network, the M-GFT is calculated with $\mathbf{s}^{\mathrm{T}}\in\mathbb{R}^{N}$ as $(\hat{\mathbf{s}}_N)^{\mathrm{T}}=(\mathbf{s}\mathbf{E}_e)^{\mathrm{T}}\in\mathbb{R}^{N}$, which has the same form as the traditional GFT in Eq. (\ref{GFT}). In addition, since $\mathbf{f}_\alpha$ and $\mathbf{e}_i$ form orthonormal bases of an undirected MLN, the inverse M-GFT can be calculated as \begin{equation}\label{igft} \mathbf{s}'=\mathbf{E}_f\hat{\mathbf{s}}\mathbf{E}_e^{\mathrm{T}}. \end{equation} Different from the joint MLN Fourier space in Section \ref{JSA}, the order-wise MLN spectrum provides individual analyses of layers and entities separately, as well as a reliable approximate joint analysis of the underlying MLN structure. \subsection{MLN Singular Tensor Analysis} In addition to the eigen-decomposition, the singular value decomposition (SVD) is another important decomposition to factorize a matrix. In this part, we provide the higher-order SVD (HOSVD) \cite{c27} of the representing tensor as an alternative definition of spectrum for multilayer networks.
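Before turning to HOSVD, note that the order-wise M-GFT above and its inverse in Eq. (\ref{igft}) reduce to two small matrix products. A quick numerical sanity check, assuming randomly generated orthonormal layer and entity spectra (illustrative only):

```python
import numpy as np

M, N = 3, 5
rng = np.random.default_rng(1)

# Hypothetical orthonormal layer spectrum E_f (M x M) and entity spectrum E_e (N x N).
E_f, _ = np.linalg.qr(rng.standard_normal((M, M)))
E_e, _ = np.linalg.qr(rng.standard_normal((N, N)))

s = rng.standard_normal((M, N))   # MLN signal

s_hat_L = E_f.T @ s               # layer-wise M-GFT
s_hat_N = s @ E_e                 # entity-wise M-GFT
s_hat = E_f.T @ s @ E_e           # joint order-wise M-GFT

# Orthonormality makes the transform invertible: s = E_f s_hat E_e^T.
s_rec = E_f @ s_hat @ E_e.T
assert np.allclose(s_rec, s)
```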
Given the multilayer network $\mathcal{M}=\{\mathcal{V},\mathcal{L},\mathbf{F}\}$ with $M$ layers and $N$ nodes in each layer, its representing tensor $\mathbf{F}\in\mathbb{R}^{M \times N\times M \times N}$ can be decomposed via HOSVD as \begin{equation}\label{decomposeS} \mathbf{F}= \mathbf{S}\times_1 \mathbf{U}^{(1)}\times_2 \mathbf{U}^{(2)}\times_3 \mathbf{U}^{(3)}\times_4 \mathbf{U}^{(4)}, \end{equation} where $\mathbf{U}^{(n)}=[\mathbf{U}^{(n)}_1\quad \mathbf{U}^{(n)}_2\quad\cdots\quad \mathbf{U}^{(n)}_{I_n}]$ is a unitary $(I_n\times I_n)$ matrix, with $I_1=I_3=M$ and $I_2=I_4=N$. Here, $\mathbf{S}$ is a complex $(I_1\times I_2\times I_3 \times I_4)$-tensor whose subtensors $\mathbf{S}_{i_n=\alpha}$, obtained by fixing the $n$th index to $\alpha$, satisfy \begin{itemize} \item $<\mathbf{S}_{i_n=\alpha},\mathbf{S}_{i_n=\beta}>=0$ for $\alpha\neq\beta$; \item $||\mathbf{S}_{i_n=1}||\geq||\mathbf{S}_{i_n=2}||\geq\cdots\geq ||\mathbf{S}_{i_n=I_n}||\geq 0$. \end{itemize} The Frobenius norm $\sigma_i^{(n)}=||\mathbf{S}_{i_n=i}||$ is the $i$th $n$-mode singular value, and the columns of $\mathbf{U}^{(n)}$ are the corresponding $n$-mode singular vectors. For an undirected multilayer network, the representing tensor is symmetric for every 2-D combination. Thus, there are two modes of singular spectrum, i.e., $(\gamma_\alpha, \mathbf{f}_\alpha)$ for modes $1,3$, and $(\sigma_i,\mathbf{e}_i)$ for modes $2,4$. More specifically, $\mathbf{U}^{(1)}=\mathbf{U}^{(3)}=(\mathbf{f}_\alpha)$ and $\mathbf{U}^{(2)}=\mathbf{U}^{(4)}=(\mathbf{e}_i)$. Since the joint singular tensor captures the consistent information of entities and layers, it can be calculated as \begin{equation}\label{decom} (\lambda_{\alpha i}, \hat{\mathbf{V}}_{\alpha i})=(\gamma_\alpha\cdot\sigma_i, \mathbf{f}_\alpha \circ \mathbf{e}_i). \end{equation} Note that the diagonal entries of $\mathbf{S}$ are not the eigenvalues or frequency coefficients of the representing tensor in general. The multilayer network singular space is defined as follows.
\begin{definition}[Multilayer Network Singular Space] For a multilayer network $\mathcal{M}=\{\mathcal{V},\mathcal{L},\mathbf{F}\}$ with $M$ layers and $N$ nodes, the MLN singular space is defined as the space consisting of all singular tensors $\{\hat{\mathbf{V}}_1\cdots\hat{\mathbf{V}}_{MN}\}$ obtained from Eq. (\ref{decom}). The singular vectors $\{\mathbf{f}_1,\cdots,\mathbf{f}_M\}$ and $\{\mathbf{e}_1,\cdots,\mathbf{e}_N\}$ in Eq. (\ref{decomposeS}) characterize layers and entities, respectively. \end{definition} Similar to the order-wise spectral analysis in Section \ref{OAA}, we can define the MLN singular tensor transform (M-GST) based on the singular tensors as follows. \begin{definition}[M-GST] Suppose that $\mathbf{U}_s=(\mathbf{f}_\alpha\circ\mathbf{e}_i) \in\mathbb{R}^{ M\times N \times M \times N}$ consists of the singular vectors of the representing tensor $\mathbf{F}$ in Eq. (\ref{decomposeS}), where $[U_s]_{\alpha i \beta j}=[f_\alpha]_\beta\cdot [e_i]_j$. The M-GST can be defined as the contraction between $\mathbf{U}_s$ and the tensor signal $\mathbf{s}\in\mathbb{R}^{M\times N}$, i.e., \begin{equation} \label{ssss} \check{\mathbf{s}}=\mathbf{U}_s \diamond \mathbf{s}. \end{equation} Collecting the singular vectors in $\mathbf{W}_f=[\mathbf{f}_1\cdots\mathbf{f}_M]\in\mathbb{R}^{M\times M}$ and $\mathbf{W}_e=[\mathbf{e}_1\cdots\mathbf{e}_N]\in\mathbb{R}^{N\times N}$, the layer-wise M-GST can be defined as \begin{equation} \check{\mathbf{s}}_L=\mathbf{W}_f^{\mathrm{T}}\mathbf{s}\in\mathbb{R}^{M\times N}, \end{equation} and the entity-wise M-GST can be defined as \begin{equation} \check{\mathbf{s}}_N=\mathbf{s}\mathbf{W}_e\in\mathbb{R}^{M\times N}. \end{equation} \end{definition} \noindent The inverse M-GST can be defined similarly as in Eq. (\ref{igft}) with unitary $\mathbf{W}_e$ and $\mathbf{W}_f$. Compared to the eigen-tensors in Eq.
(\ref{ted}), the singular tensors come from combinations of the singular vectors and are thus capable of capturing information of layers and entities more efficiently. Eigen-decomposition, however, focuses more on the joint information and approximates the separate information of layers and entities. We shall provide further discussion on the performance of different decomposition methods in Section \ref{app}, together with additional discussions in \textbf{Appendix B}. The intuition of applying HOSVD in MLN analysis and its connections to GSP are also provided in Section \ref{i_hosvd}. \subsection{Spectrum Ranking in the Multilayer Network} \label{ex_fre} In traditional GSP, the frequencies are defined by the eigenvalues of the shift, whereas the total variation is an alternative measurement of the order of the graph frequencies \cite{c2}. Similarly, we use the total variation of the spectral tensors to rank the MLN frequencies. Let $|\lambda|_{max}$ be the joint singular/eigenvalue with the largest magnitude. The M-GSP total variation is defined as follows: \begin{align}\label{tv} TV(\mathbf{V}_{\alpha i})&=||\mathbf{V}_{\alpha i}-\frac{1}{|\lambda|_{max}} \mathbf{F}\diamond \mathbf{V}_{\alpha i}||_1\\ &=\left|1-\frac{\lambda_{\alpha i}}{|\lambda|_{max}}\right|\cdot||\mathbf{V}_{\alpha i}||_1, \end{align} where $||\cdot||_1$ is the $l_1$ norm. Other norms could also be used to define the total variation. For example, the $l_2$ norm could be efficient in signal denoising \cite{c2}. The MLN frequency related to $\lambda_{\alpha i}$ is said to be higher if its total variation $TV(\mathbf{V}_{\alpha i})$ is larger, and its corresponding spectral tensor $\mathbf{V}_{\alpha i}$ is then a higher-frequency spectral component. We shall provide more details on the interpretation of MLN frequency in Section \ref{fre_int}. \section{Filter Design} \label{fter} In this section, we introduce MLN filter designs together with their properties based on signal shifting.
\subsection{Polynomial Filter Design} Polynomial filters are basic filters in GSP \cite{c7,d6}. In M-GSP, first-order filtering consists of basic signal filtering, i.e., $\mathbf{s}'=f_1(\mathbf{s})=\mathbf{F}\diamond \mathbf{s}$. Similarly, a second-order filter can be defined as additional filtering of the first-order filtered signal, i.e., \begin{align} \mathbf{s}''=f_2(\mathbf{s})=\mathbf{F}\diamond(\mathbf{F}\diamond \mathbf{s}), \end{align} whose entries $s_{\alpha i}''$ are calculated as \begin{align} s_{\alpha i}''&=\sum_{\beta=1}^{M}\sum_{j=1}^N F_{\alpha i \beta j} s_{\beta j}' =\sum_{\beta ,j} F_{\alpha i \beta j}\sum_{\epsilon ,p} F_{\beta j \epsilon p}s_{\epsilon p}\\ &=\sum_{\epsilon, p}s_{\epsilon p}\sum_{\beta, j} F_{\alpha i \beta j} F_{\beta j \epsilon p} =[(\mathbf{F}\odot\mathbf{F})\diamond \mathbf{s}]_{\alpha i}, \end{align} where $\odot$ is the contraction defined in Eq. (\ref{oodot}). Let $\mathbf{F}^{[2]}=\mathbf{F}\odot \mathbf{F}$. From Eq. (\ref{ted}), we have: \begin{align} F^{[2]}_{\alpha i\beta j}&=\sum_{\theta,p} F_{\alpha i \theta p} F_{\theta p \beta j}\\ &=\sum_{\theta,p}(\sum_k\lambda_k[{V}_k]_{\alpha i}[V_k]_{\theta p})(\sum_t \lambda_t[V_t]_{\beta j}[V_t]_{\theta p})\nonumber\\ &=\sum_{k,t}\lambda_k\lambda_t[ {V}_k]_{\alpha i}[ {V}_t]_{\beta j}(\sum_{\theta,p}[{V}_t]_{\theta p}[{V}_k]_{\theta p})\nonumber\\ &=\sum_k \lambda_k^2[ {V}_k]_{\alpha i}[ {V}_k]_{\beta j}. \end{align} Similarly, for the $\tau$th-order term $\mathbf{F}^{[\tau]}$, its entry $F^{[\tau]}_{\alpha i\beta j}$ can be calculated as $F^{[\tau]}_{\alpha i\beta j}=\sum_k \lambda_k^\tau[\mathbf{V}_k]_{\alpha i}[\mathbf{V}_k]_{\beta j}$. Now we have the following property. \begin{property} The $\tau$th-order basic shifting filter $f_\tau(\mathbf{s})$ can be calculated as \begin{align}\label{polyf} f_\tau(\mathbf{s})&=\mathbf{F}^{[\tau]}\diamond\mathbf{s} =(\sum_{k=1}^{MN}\lambda_k^{\tau}\mathbf{V}_k\circ\mathbf{V}_k)\diamond\mathbf{s}.
\end{align} \end{property} This property is the M-GSP counterpart of the traditional linear-system interpretation that complex exponential signals are eigenfunctions of linear systems \cite{c2}, and it provides a faster implementation of higher-order shifting. With the $k$th-order polynomial terms, the adaptive polynomial filter is defined as \begin{equation} h(\mathbf{s})=\sum_k \alpha_k\mathbf{F}^{[k]}\diamond\mathbf{s}, \end{equation} where $\{\alpha_k\}$ are parameters to be estimated from data. Adaptive polynomial filters are useful in semi-supervised classification \cite{d7} and exploit underlying geometric topologies. We will illustrate further and provide application examples based on MLN polynomial filtering in Section \ref{app}. \subsection{Spectral Filter Design}\label{fefi} Filtering in the graph spectral space is useful in GSP frequency analysis. For example, ordering the Laplacian graph spectrum $\mathbf{V}_\mathcal{G}=[\mathbf{e}_1, \cdots, \mathbf{e}_N]\in\mathbb{R}^{N\times N}$ in descending order of the graph total variation \cite{c2}, i.e., from high frequency to low frequency, the GFT of $\mathbf{s}\in\mathbb{R}^{N}$ is calculated as $\hat{\mathbf{s}}=\mathbf{V}_\mathcal{G}^\mathrm{T}\mathbf{s}$. By removing $k$ elements in the low-frequency part, i.e., $\hat{\mathbf{s}}'=[\hat{s}_1,\cdots, \hat{s}_{N-k}, 0, \cdots, 0]^{\mathrm{T}}$, a high-pass filter can be designed as \begin{align} \mathbf{s}'&=\mathbf{V}_\mathcal{G}\hat{\mathbf{s}}' =\mathbf{V}_\mathcal{G}\Sigma_k\mathbf{V}_\mathcal{G}^\mathrm{T}\mathbf{s}, \end{align} where $\Sigma_k={\rm diag}([\sigma_1,\cdots,\sigma_N])$ is a diagonal matrix with $\sigma_i=1$ for $i=1,\cdots,N-k$; otherwise, $\sigma_i=0$. Similarly, in M-GSP, a spectral filter is designed by filtering in the spectral space together with the inverse M-GFT. With Eq. (\ref{ffff}) and Eq.
(\ref{ssss}), spectral filtering of $\mathbf{s}$ is defined as \begin{align} &\mathbf{s}'=\nonumber\\ &\mathbf{E}_f \begin{bmatrix} g(\gamma_1) & \cdots & 0\\ \vdots & \ddots & \vdots\\ 0&\cdots&g(\gamma_M) \end{bmatrix} \mathbf{E}_f^{\mathrm{T}}\mathbf{s} \mathbf{E}_e \begin{bmatrix} f(\sigma_1) & \cdots & 0\\ \vdots & \ddots & \vdots\\ 0&\cdots&f(\sigma_N) \end{bmatrix} \mathbf{E}_e^{\mathrm{T}}, \end{align} where the functions $g(\cdot)$ and $f(\cdot)$ are designed for specific tasks. For example, if one wants to design a layer-wise filter capturing the smoothness of signals in the MLN singular space, the function $g(\cdot)$ can be designed as $\mathbf{g}=[1,\cdots,1, 0, \cdots, 0]$ by ordering the layer-wise singular vectors in descending order of the singular values. More discussions and examples are presented in Section \ref{discus} and Section \ref{app}. In addition to polynomial and spectral filters, filters designed through optimization based on geometric information play an important role in semi-supervised signal processing. Interested readers can refer to \textbf{Appendix E} for a short discussion. \section{Discussion and Interpretative Insights} \label{discus} \subsection{Interpretation of M-GSP Frequency} \label{fre_int} \subsubsection{Interpretation of Graph Frequency}\label{gfe} To better understand its physical meaning, we start with the total variation in digital signal processing (DSP). The total variation in DSP is defined as the difference among signals over time \cite{c34}. Moreover, the total variations of frequency components are in one-to-one correspondence with the frequencies in the order of their values: if the total variation of a frequency component is larger, the corresponding frequency with the same index is higher. That is, a higher-frequency component changes faster over time and exhibits a larger total variation.
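This correspondence can be checked numerically with sampled sinusoids; the sketch below (plain numpy, not from the paper) uses the cyclic first-difference as the total variation:

```python
import numpy as np

N = 64
n = np.arange(N)

def total_variation(v):
    # Sum of absolute differences between each sample and its cyclic shift.
    return np.abs(v - np.roll(v, 1)).sum()

# Sampled sinusoids of increasing digital frequency k/N.
tvs = [total_variation(np.cos(2 * np.pi * k * n / N)) for k in (1, 4, 16)]

# A faster-oscillating (higher-frequency) component has a larger total variation.
assert tvs[0] < tvs[1] < tvs[2]
```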
Interested readers could refer to \cite{c2,c8} or \textbf{Appendix D} for a detailed interpretation of total variation in DSP. Now, let us elaborate on the graph frequency motivated by the cyclic graph. Rewriting the finite signals in DSP as vectors, i.e., $\mathbf{s}=[s_1, \cdots,s_{N}]^{\mathrm{T}}\in\mathbb{R}^{N}$, signal shifting can be interpreted as the shift filtering corresponding to the cyclic graph shown in Fig. \ref{cirg}. Suppose that its adjacency matrix is written as \begin{align} { \mathbf{C}_N= \left[\begin{smallmatrix} 0&0&\cdots&0&1\\ 1&0&\cdots&0&0\\ \vdots&\ddots&\ddots&\ddots&\vdots\\ 0&0&\ddots&0&0\\ 0&0&\cdots&1&0 \end{smallmatrix}\right]}. \end{align} Then, the shifted signal over the cyclic graph is calculated as $\mathbf{s}'=\mathbf{C}_N\mathbf{s}=[s_{N}\quad s_1\quad\cdots\quad s_{N-1}]^{\mathrm{T}}$, which shifts the signal at each node to its next node. More specifically, $\mathbf{C}_N$ can be decomposed as $\mathbf{C}_N=\mathbf{V\Lambda}\mathbf{V}^{-1}$, where the eigenvalues are $\lambda_n=e^{-j\frac{2\pi n}{N}}$ and $\mathbf{V}^{-1}=\frac{1}{\sqrt{N}}[e^{-j\frac{2\pi kn}{N}}]$ is the discrete Fourier matrix. Inspired by DSP, the eigenvectors in $\mathbf{V}$ are the spectral components (spectrum) of the cyclic graph, and the eigenvalues are related to the graph frequencies \cite{c2}. Generalizing the adjacency matrix of the cyclic graph to the representing (Laplacian/adjacency) matrix $\mathbf{F}_M$ of an arbitrary graph, the graph Fourier space consists of the eigenvectors of $\mathbf{F}_M$, and the graph frequencies are related to the eigenvalues. More specifically, the graph Fourier space can be interpreted as the manifold or spectral space of the representing matrix.
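The cyclic shift and its spectrum can be verified in a few lines; the following sketch (illustrative, with numpy) builds $\mathbf{C}_N$, applies the shift, and checks that its eigenvalues are $N$-th roots of unity:

```python
import numpy as np

N = 8
# Adjacency matrix C_N of the cyclic graph: C[n, n-1] = 1 (cyclically).
C = np.roll(np.eye(N), 1, axis=0)

s = np.arange(1.0, N + 1)
s_shift = C @ s
# The graph shift realizes the time shift [s_N, s_1, ..., s_{N-1}].
assert np.allclose(s_shift, np.roll(s, 1))

# The eigenvalues of C_N are the N-th roots of unity e^{-j 2 pi n / N}.
eigvals = np.linalg.eigvals(C)
assert np.allclose(np.abs(eigvals), 1.0)   # unit magnitude
assert np.allclose(eigvals ** N, 1.0)      # lambda^N = 1
```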
Since, as aforementioned, the total variations of frequency components reflect the order of frequencies, we can also use the total variation, i.e., $TV(\mathbf{e}_i)=||\mathbf{e}_i-\frac{1}{|\lambda|_{max}}\mathbf{F}_M\mathbf{e}_i||_1$, to rank the graph frequencies, where $\mathbf{e}_i$ is the spectral component related to the eigenvalue $\lambda_i$ of $\mathbf{F}_M$. Similar to DSP, the graph frequency indicates the oscillations over the vertex set, i.e., how fast the signals change over the graph shifting. \begin{figure}[t] \centering \includegraphics[width=1.5in]{CIRC.jpg} \vspace{-2mm} \caption{Example of Cyclic Graph.} \label{cirg} \vspace{-4mm} \end{figure} \subsubsection{Interpretation of MLN Frequency} Now, we return to M-GSP. Given the spectral tensors $\mathbf{V}_k\in\mathbb{R}^{M\times N}$ of a multilayer network, a signal $\mathbf{s}\in\mathbb{R}^{M\times N}$ can be written as a weighted sum of the spectrum, i.e., $\mathbf{s}=\sum_k a_k\mathbf{V}_k$. Viewing a spectral tensor as a signal component, the total variation in Eq. (\ref{tv}) takes the form of the difference between the original signal and its shifted version. If the signal component changes faster over the multilayer network, the corresponding total variation is larger. Since we associate a higher-frequency component with a larger total variation, the MLN frequency indicates how fast the signal propagates over the multilayer network under the representing tensor. If a signal $\mathbf{s}$ contains more high-frequency components, it changes faster under the representing tensor.
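As a toy numerical sketch of this ranking (a random supra-Laplacian of the flattened MLN, not the paper's experiment), smaller Laplacian eigenvalues indeed yield larger total variation:

```python
import numpy as np

M, N = 2, 3
rng = np.random.default_rng(2)

# Toy symmetric supra-Laplacian of the flattened MLN (hypothetical weights).
A = rng.random((M * N, M * N))
A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)
L = np.diag(A.sum(axis=1)) - A

lam, V = np.linalg.eigh(L)                 # ascending; lam >= 0 for a Laplacian
tv = np.abs(1 - lam / np.abs(lam).max())   # TV(V_k) up to the ||V_k||_1 factor

# Smaller Laplacian eigenvalues give larger total variation, i.e.,
# they correspond to higher MLN frequencies under this shift.
assert np.all(np.diff(tv) <= 1e-9)
```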
\begin{figure}[t] \centering \includegraphics[width=1.8in]{differece.jpg} \vspace{-2mm} \caption{Example of MLN Frequencies.} \label{diff} \vspace{-4mm} \end{figure} \begin{table}[t] \centering \caption{Total Difference of all Nodes} \vspace{-2mm} \begin{tabular}{|l|l|l|l|} \hline Eigenvalue & 0.5726 & 2.7500 & 5.7259 \\ \hline Total Differences & 3.7439 & 2.6846 & 0.8293 \\ \hline \end{tabular} \label{t_diff} \vspace{-4mm} \end{table} Here, we use an example for further illustration. We randomly generate a multiplex network with six layers and five nodes in each layer. Each node has a probability of $30\%$ of connecting to other nodes for intralayer connections and to its counterparts in other layers for interlayer connections. We use the Laplacian tensor as the representing tensor and the $l_1$-norm of the flattened signal to calculate the total variation. Then, we have $\lambda_k\geq 0$ and $TV(\mathbf{V}_{k})=|1-\frac{\lambda_k}{\lambda_{max}}|\cdot||\mathbf{V}_{k}||_1$ for $1\leq k \leq MN$. Clearly, the total variation is larger if the eigenvalue is smaller, i.e., smaller eigenvalues correspond to higher frequencies, similar to GSP \cite{c2}. We next evaluate the differences between the filtered signal and the original signal by treating different eigen-tensors as signals. From the results in Fig. \ref{diff}, the shifted signal sample of each node changes from its original sample. Also, from the results given in Table \ref{t_diff}, we can see that a higher-frequency signal component exhibits a larger total difference between itself and its shift, indicating larger oscillations and faster propagation over the MLN under its representing tensor. \subsubsection{Interpretation of MLN Singular Tensors}\label{i_hosvd} As discussed in Section \ref{gfe}, the name of graph Fourier space arises from the adjacency matrix of the cyclic graph.
However, when the algebraic representation is generalized to an arbitrary graph, especially the Laplacian matrix, the definition of graph spectrum is less related to the Fourier space in DSP and can instead be interpreted as the manifold or subspace of the representing matrix. In the literature, SVD is an efficient method to obtain the spectrum for signal analysis, such as spectral clustering \cite{d11} and PCA analysis \cite{d12}. It is straightforward to generalize graph spectral analysis to the graph singular space, especially for the Laplacian matrix. In M-GSP, the order-wise singular vectors can be interpreted as subspaces characterizing features of layers and entities, respectively. Since HOSVD is robust and efficient, transforming signals to the MLN singular space (M-GST) for the analysis of underlying structures can be a useful alternative to the M-GFT. \vspace{-3mm} \subsection{Interpretation of Entities and Layers} \label{exn_la} To gain better physical insight into entities and layers, we discuss two categories of datasets: \begin{itemize} \item In most physical systems and datasets, signals can be modeled with a specific physical meaning in terms of layers and entities. In a smart grid, for example, each station can be an entity, connected in two layers of computation and power transmission, respectively. Another example is video, in which each geometric pixel position is an entity and each video frame forms a layer, so that each node in a layer carries the pixel value of that frame. M-GSP can be an intuitive tool for these datasets and systems. \item In some scenarios, however, the datasets only have a definition of layers without meaningful entities. In particular, for multilayer networks with different numbers of nodes, we may insert isolated artificial nodes to augment the multilayer network. Often in such applications, it may be harder to identify the physical meaning of entities.
Here, the entities may be virtual and embedded in the underlying structure of the multilayer network. Although the definition of a virtual entity may vary with the chosen adjacency tensor, it relates to the topological structure in terms of global spectral information. For example, in Fig. \ref{le}, we can use two different definitions of virtual entities. Although the representing tensors for these two definitions differ, their eigenvalues remain the same. Considering also layer-wise flattening, the two supra-matrices are related by reshaping, i.e., by exchanging the fourth and fifth columns and rows. They still have the same eigenvalues, and the eigen-tensors can also be made the same by applying the reshaping operations. Note that, to capture distinct information from entities, the spectra would change with different definitions of virtual entities. \end{itemize} \begin{figure}[t] \centering \includegraphics[width=2in]{le.png} \vspace{-1mm} \caption{Example of Different Entities.} \vspace{-5mm} \label{le} \end{figure} \vspace{-3mm} \subsection{Distinctions from Existing GSP Works} \label{dep} \subsubsection{Graph Signal Processing} Generally, M-GSP extends traditional GSP to multilayer networks. Although one can stack all MLN layers to represent them with a supra-matrix, such a matrix representation makes GSP inefficient in extracting features of layers and entities separately. Given a supra-matrix of the MLN, the layers of nodes cannot be identified directly from its indices since all nodes are treated equivalently. By contrast, the tensor representation provides a clear identification of layers in its indices. Moreover, in GSP, one can only implement a joint transform to process inter- and intra-layer connections together, whereas M-GSP provides more flexible choices between joint and order-wise analysis.
The joint M-GSP analysis introduced in Section \ref{JSA} can be viewed as a bijection of the GFT in the flattened MLN, with vertices indexed by both layers and entities. Beyond that, we provide flexible order-wise spectral analysis based on tensor decompositions, which allows order-wise analysis of layers and nodes. One can select suitable MLN-based tools depending on the task. The joint spectral analysis can be implemented if we aim to explore layers and entities fully, whereas the order-wise spectral and singular analyses are more efficient in characterizing layers and entities separately. \subsubsection{Joint Time-Vertex Fourier Analysis} In \cite{c19}, a joint time-vertex Fourier transform (JFT) is defined by implementing the GFT and DFT consecutively. As discussed in Section \ref{gfe}, time sequences can be interpreted under a cyclic graph, so that joint time-vertex signals reside on an MLN structure. However, JFT assumes that all the layers have the same intra-layer connections, which limits the generality of the MLN analysis. In contrast, the tensor-based representation allows heterogeneous structures for the intra-layer connections, which makes M-GSP more general. \subsubsection{Multi-way Graph Signal Processing} In \cite{c18}, MWGSP has been proposed to process high-dimensional signals. Given a $K$th-order high-dimensional signal, one can decompose the tensor signal in different orders and construct one graph for each. The graph signal is then said to reside on a high-dimensional product graph obtained by the product of all individual factor graphs. Although MW-GFT is similar to M-GFT for $K=2$, there are still notable differences in terms of spectrum. First, MWGSP can only process signals without exploiting a given structure, since multiple graph spectra arise from each order of the signals. For a multilayer network with a given structure, such as a physical network with heterogeneous intralayer connections, MWGSP does not naturally process it efficiently and cohesively.
In MWGSP, the order-wise spectra come from the factor graphs of each order, whereas the M-GSP spectra are calculated from the tensor representation of the whole MLN. Second, MWGSP assumes that all layers reside on a homogeneous factor graph, which restricts the types of manageable MLN structures. For example, in a spatial-temporal dataset, a product graph formed by the product of spatial connections and temporal connections assumes the same topology in each layer. However, many practical systems and datasets feature more complex geometric interactions. M-GSP provides a more intuitive and natural framework for such MLNs. In summary, despite the similarities between MW-GFT and M-GFT in some scenarios, they serve different purposes and are suitable for different underlying data structures. \begin{figure*}[t] \centering \subfigure[Original Image.]{ \label{SPEC1} \includegraphics[height=2.3cm,width=2.4cm]{blood_ori.jpg}} \hfill \subfigure[Superpixel.]{ \label{SPEC2} \includegraphics[height=2.3cm,width=2.4cm]{blood_super.jpg}} \hfill \subfigure[K-Means.]{ \label{CELL1} \includegraphics[height=2.3cm,width=2.7cm]{KM1.jpg}} \hfill \subfigure[GSP.]{ \label{CELL2} \includegraphics[height=2.3cm,width=2.7cm]{GSP1.jpg}} \hfill \subfigure[MLN-FZ.]{ \label{CELL3} \includegraphics[height=2.3cm,width=2.7cm]{MLN_CP1.jpg}} \hfill \subfigure[MLN-SVD.]{ \label{CELL4} \includegraphics[height=2.3cm,width=2.7cm]{MLN_HOSVD.jpg}} \vspace{-2mm} \caption{Example of BCCD Datasets and Segmented Images: (a) the original image; (b) the boundaries of each superpixel; (c)-(f) segmented images under different methods (WBCs are marked yellow, RBCs are marked green, and Platelets (P) are marked blue).} \vspace{-4mm} \label{SPEC} \end{figure*} \section{Application Examples} \label{app} We now provide some illustrative application examples within our M-GSP framework. \vspace{-3mm} \subsection{Image Segmentation} In this part, we introduce MLN spectral clustering for unsupervised RGB image segmentation.
To model an RGB image using an MLN, we can directly treat its three color channels as three layers. To reduce the number of nodes for computational efficiency, we first build $N$ superpixels for a given image and represent each superpixel as an entity in the multilayer network, as shown in Fig. \ref{SPEC2}. Here, we define the feature of a superpixel according to its RGB pixel values. For interlayer connections, each node connects with its counterparts in other layers. For intralayer connections in layer $\ell$, we calculate the Gaussian-based distance between two superpixels according to $W_{ij}= \exp\left(-\frac{|\mathbf{s}_i(\ell)-\mathbf{s}_j(\ell)|^2}{\delta_\ell^2}\right)$ if $|\mathbf{s}_i(\ell)-\mathbf{s}_j(\ell)|^2\leq t_\ell$; otherwise, $W_{ij}=0$, where $\mathbf{s}_i(\ell)$ is the superpixel value in layer $\ell$, $\delta_\ell$ is an adjustable parameter, and $t_\ell$ is a predefined threshold. Different layers may have different structures. The threshold $t_\ell$ is set to be the mean of all pairwise distances. As such, an RGB image is modeled as a multiplex network with $M=3$ layers and $N$ entities. We now consider MLN-based spectral clustering. For image segmentation, we focus on the properties of entities (i.e., superpixels), and implement spectral clustering over the entity-wise spectrum as proposed in Algorithm \ref{basic4}. The preceding construction is summarized in Steps 1-3. In Step 4, different schemes may be used to calculate the spectrum, including the spectral vectors via tensor factorization in Eq. (\ref{decompose1}) and the singular vectors in Eq. (\ref{decomposeS}). Step 5 determines $K$ based on the largest arithmetic gap in the eigenvalues. Traditional clustering methods, such as $k$-means clustering \cite{d11}, can be carried out in Step 6.
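A minimal sketch of this intra-layer construction (the superpixel extraction itself is omitted; the `delta` value and the random superpixel features are illustrative):

```python
import numpy as np

def intralayer_adjacency(s_l, delta):
    """Gaussian-kernel intra-layer weights from one layer's superpixel values s_l (N,)."""
    d2 = (s_l[:, None] - s_l[None, :]) ** 2       # pairwise squared distances
    t = d2[np.triu_indices_from(d2, 1)].mean()    # threshold: mean pairwise distance
    W = np.exp(-d2 / delta ** 2)
    W[d2 > t] = 0.0                               # sparsify beyond the threshold
    np.fill_diagonal(W, 0.0)
    return W

rng = np.random.default_rng(3)
N, M = 6, 3                                       # superpixels, RGB layers
s = rng.random((N, M))                            # mean RGB value per superpixel

# One intra-layer adjacency per color channel; the layers may differ in structure.
A_layers = np.stack([intralayer_adjacency(s[:, l], delta=0.5) for l in range(M)])
assert A_layers.shape == (M, N, N)
```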
\begin{table}[t] \caption{Results of Image Segmentation in Image BSD300: (F) for all, and (C) for coarse} \vspace{-2mm} \scriptsize \begin{tabular}{|l|l|l|l|l|l|} \hline & N=100 (F) & N=300 (F) & N=100 (C) & N=300 (C) & N=900 (C) \\ \hline GSP & 0.1237 & 0.1149 & 0.3225 & 0.3087 & 0.3067 \\ \hline K-MEANS & 0.1293 & 0.1252 & 0.3044 & 0.3105 & 0.3124 \\ \hline MLN-SVD & \textbf{ 0.1326} & \textbf{0.1366} & \textbf{ 0.3344} & \textbf{0.3394} & \textbf{ 0.3335} \\ \hline MLN-CP & 0.1321 & 0.1293 & 0.3195 & 0.3256 & 0.3243 \\ \hline \end{tabular} \label{BSD} \end{table} \begin{algorithm}[t] \begin{algorithmic}[1] \caption{MLN-based Unsupervised Image Segmentation}\label{basic4} \STATE {\bf{Input}}: RGB Image $\mathbf{I}\in\mathbb{R}^{P\times Q\times 3}$; \STATE Build $N$ superpixels for the image $\mathbf{I}$ and calculate the value of each superpixel as the mean of all pixels inside that superpixel, i.e., $\mathbf{s}\in\mathbb{R}^{N\times 3}$; \STATE Construct a multilayer network $\mathbf{A}\in\mathbb{R}^{M\times N \times M \times N}$; \STATE Find the entity-wise spectrum $\mathbf{E}=[\mathbf{e}_1,\cdots,\mathbf{e}_N]\in\mathbb{R}^{N\times N}$; \STATE Select the $K$ leading spectra based on the eigenvalues (singular values) of $\mathbf{E}$ as $\mathbf{C}\in\mathbb{R}^{N\times K}$; \STATE Cluster the rows of $\mathbf{C}$, and assign the $i$th superpixel to the $j$th cluster if the $i$th row of $\mathbf{C}$ is clustered into the $j$th group; \STATE Assign all pixels inside one superpixel to the cluster of that superpixel; \STATE {\bf{Output}}: The segmented image. \end{algorithmic} \end{algorithm} To test the proposed Algorithm \ref{basic4}, we first compare its results with those of the GSP-based method and traditional $k$-means clustering using the public BCCD blood cell dataset shown in Fig. \ref{SPEC1}. In this dataset, there are mainly three types of objects, i.e., White Blood Cells (WBC), Red Blood Cells (RBC), and Platelets (P).
We set the number of clusters to 3 and $N=1000$. For GSP-based spectral clustering, we construct graphs based on the Gaussian model by using information from all three color values, i.e., $\sum_{\ell=1}^3|\mathbf{s}_i(\ell)-\mathbf{s}_j(\ell)|^2$, to form edge connections in a single graph; there is thus only a single $\delta$ and a single $t$ in the Gaussian model. For M-GSP, we use the MLN singular vectors (MLN-SVD) and tensor factorization (MLN-FZ) for spectral clustering, separately. Their respective results are compared in Fig. \ref{CELL1}-\ref{CELL4}. {WBCs are marked yellow, and RBCs are marked green. Platelets (P) are marked blue.} From the illustrative results, the MLN methods exhibit better robustness and are better at detecting regions under noise. Comparing the results from different MLN-based methods, we find MLN-FZ to be less stable than MLN-SVD, partly due to the approximation algorithms used for tensor factorization. Overall, MLN-based methods show reliable and robust performance compared with the GSP-based method and $k$-means. Additional visualized results on BCCD images can be found in \textbf{Appendix F}. In addition to visual inspection of results for such images, we are further interested in numerically evaluating the performance of the proposed methods against some state-of-the-art methods on several more complex datasets that contain more classes. For this purpose, we test our methods on the BSD300 dataset \cite{d13}. We first cluster each image, and label each cluster with the best mapping of cluster orders against the ground truth. Numerically, we use the mIOU (mean Intersection-over-Union), also known as the mean Jaccard index, over all clusters in each image to measure the performance. The Jaccard index between two groups $A$ and $B$ is defined as $J(A,B)=\frac{|A\cap B|}{|A\cup B|}$. A larger mIOU indicates stronger performance. To better illustrate the results, we consider two dataset setups, i.e., one containing fewer classes (coarse) and one containing all images (all).
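The mIOU metric can be sketched directly from its definition (toy cluster assignments, illustrative only):

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| between two pixel-index sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Toy predicted clusters (already mapped to the ground-truth label order).
pred = {0: [0, 1, 2, 3], 1: [4, 5]}
gt = {0: [0, 1, 2], 1: [3, 4, 5]}

# mIOU: mean Jaccard index over the clusters of one image.
miou = np.mean([jaccard(pred[c], gt[c]) for c in gt])
assert abs(miou - (3 / 4 + 2 / 3) / 2) < 1e-12
```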
We compare our methods together with $k$-means and GSP-based spectral clustering. The best performance is marked in bold. From the results of Table \ref{BSD}, we can see that the larger number of clusters in the first two columns leads to worse performance. There are two natural reasons. First, mapping the best order of cluster labels is more difficult for more classes. Second, graph-based spectral clustering is sensitive to the number $K$ of leading spectra and to the structure of the graphs. Regardless, MLN-based methods still demonstrate better performance. Moreover, even when we use the same total number of nodes in the single-layer graph and the multilayer network for a fairer comparison in terms of complexity, i.e., $N=300$ for the graph and $N=100$ for the MLN, MLN-based methods still perform better than graph-based methods in this example application. \subsection{Semi-Supervised Classification} Semi-supervised classification is an important practical application for graph-based signal processing. In this application, we apply MLN polynomial filters for semi-supervised classification. Traditional GSP defines an adaptive filter as $f(\mathbf{s})=\sum_i a_i\mathbf{W}^i\mathbf{s}$, where $\mathbf{W}$ is an adjacency matrix based on pairwise distances or a representing matrix constructed from the adjacency matrix. Here, signals are defined as labels or confidence values of nodes, i.e., $\mathbf{s}=[\mathbf{s}_{L}^{\mathrm{T}} \: \mathbf{0}_{UL}^{\mathrm{T}}]^{\mathrm{T}}$, with unlabeled signals set to zero. To estimate the parameters $a_i$ of $f(\cdot)$, an optimization problem can be formulated to minimize, e.g., the mean square error (MSE) with respect to the ground-truth labels $\mathbf{y}_L$ \begin{equation} \min_\mathbf{a}\quad ||M(f(\mathbf{s}))_L-\mathbf{y}_L||_2^2, \end{equation} where $M(\cdot)$ is a mapping of filtered signals to their discrete labels. For example, in a $\{\pm 1\}$ binary classification, one can assign a label to a filtered signal by comparing it against a threshold (e.g., $0$).
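As an illustration of the GSP baseline, the following sketch fits the coefficients $a_i$ of $f(\mathbf{s})=\sum_i a_i\mathbf{W}^i\mathbf{s}$ by least squares on the labeled nodes of a toy two-cluster graph; since the plain MSE objective is linear in the coefficients, a closed-form fit suffices in this simplified setting (the graph, labels, and polynomial order are illustrative assumptions):

```python
import numpy as np

def fit_poly_filter(W, s, labeled, y, order=3):
    """Fit a_i of f(s) = sum_i a_i W^i s by least squares on the
    labeled nodes, then return the coefficients and the filtered signal."""
    feats, v = [], s.copy()
    for _ in range(order + 1):        # columns W^0 s, W^1 s, ..., W^order s
        feats.append(v)
        v = W @ v
    F = np.stack(feats, axis=1)       # N x (order+1) feature matrix
    a, *_ = np.linalg.lstsq(F[labeled], y, rcond=None)
    return a, F @ a

# Two disconnected triangles; node 0 is labeled +1 and node 3 is -1.
W = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], float)
s = np.array([1., 0, 0, -1., 0, 0])   # unlabeled entries set to zero
a, s_filt = fit_poly_filter(W, s, labeled=[0, 3], y=np.array([1., -1.]))
pred = np.sign(s_filt)                # threshold the filtered signal at 0
```

The filtered signal spreads the two seed labels through their respective components, so thresholding at zero recovers the two clusters.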
Some other objective functions include labeling uncertainty, Laplacian regularization, and total variation. Using the estimated parameters, we can filter the signal once more to determine labels for the unlabeled data by following the same process. \begin{figure}[t] \centering \includegraphics[width=2in]{class3.jpg} \vspace{-2mm} \caption{Results of Classification.} \label{cl} \vspace{-5mm} \end{figure} Similarly, in an MLN, we can also apply polynomial filters for label estimation. Given an arbitrary dataset $\mathbf{X}=[\mathbf{x}_1,\cdots,\mathbf{x}_N]\in\mathbb{R}^{K \times N}$ with $N$ signal points and $K$ features for each node, we can construct an MLN by defining $M=K$ layers based on features and $N$ entities based on signal points. The inter- and intra-layer connections are calculated by the Gaussian distance with different parameters. Let $\mathbf{A}\in\mathbb{R}^{M\times N\times M \times N}$ be its adjacency tensor. A signal is defined by \begin{equation}\label{sss} \mathbf{s}=\begin{bmatrix} \mathbf{s}_{L}&\cdots& \mathbf{s}_{L}\\ \mathbf{0}_{UL}&\cdots&\mathbf{0}_{UL} \end{bmatrix}^{\mathrm{T}}\in\mathbb{R}^{M\times N}, \end{equation} which is an extended version of a graph signal. Note that we do not necessarily need to order signals by placing zeros in the rear; we only write the signal as Eq. (\ref{sss}) for notational convenience. We now apply polynomial filters on the signals, i.e., $\mathbf{s}_1=h_1(\mathbf{s})=\sum_i a_i\mathbf{A}^{[i]}\diamond\mathbf{s}$ and $\mathbf{s}_2=h_2(\mathbf{s})=\mathbf{A}^{[i]}\diamond\mathbf{s}$. For a filtered signal $\mathbf{s}_X\in\mathbb{R}^{M\times N}$ {($X=1,2$)}, we define a function mapping the 2-D signal into 1-D by calculating the column-wise mean of $\mathbf{s}_X$, i.e., $\mathbf{\bar s}_X={\rm mean}_{col}(\mathbf{s}_X)\in\mathbb{R}^{1\times N}$. Next, we can define a function $M(\cdot)$ on $\mathbf{\bar s}_X$ and consider certain objective functions in the filter design.
To validate the efficacy of polynomial filtering in the MLN framework, we test $h_1(\cdot)$ and $h_2(\cdot)$ on the binary classification problem of the Cleveland Heart Disease Dataset. This dataset contains $297$ data points with 13 feature dimensions. We directly build an MLN with $N=297$ nodes in each of the $M=13$ layers. More specifically, we directly use the labels as $\mathbf{s}$. For $h_1(\cdot)$ (AF), we set $a_i\neq 0$ for at least one $i>0$. Using the MSE as the objective function, we apply a greedy algorithm to estimate the parameters $\{a_i\}$. We limit the highest polynomial order to ${10}$. For $h_2(\cdot)$ (APF), we estimate a classification threshold via the mean of $\bar{ \mathbf{s}}_X$ by setting the polynomial order $i=10$. We compare our methods with the GSP-based method in setups similar to the aforementioned examples. The only difference is that we use $\bar{ \mathbf{s}}_X$ in M-GSP and $\mathbf{s}'=f([ \mathbf{s}_{L}^T\; \mathbf{0}_{\rm UL}^T ]^T)$ in GSP for mapping and classification. We also present the results of label propagation and SVM for comparison. We randomly split the test and training data over 100 rounds. From the results shown in Fig. \ref{cl}, GSP-based and M-GSP-based methods exhibit better performance than traditional learning algorithms, particularly when the fraction of training samples is small. In general, M-GSP-based methods demonstrate superior performance among all methods, owing to their strength in extracting `multilayer' features. \vspace{-3mm} \subsection{Dynamic Point Cloud Analysis} \begin{figure}[t] \centering \includegraphics[width=3in]{Figure2.png} \vspace{-2mm} \caption{Example of Transformed Signals in a Dynamic Point Cloud.} \label{ex} \vspace{-5mm} \end{figure} Spectral analysis of signals is one of the basic tools in data analysis. Here, we propose a short-time M-GST method to analyze dynamic point clouds.
Given a dynamic point cloud with $M$ frames and at most $N$ points in each frame, we model it as a multilayer network with $M$ layers and $N$ nodes in each layer. More specifically, we test the singular spectrum analysis on the motion sequences of subject 86 in the CMU database \cite{CMU}. To implement the M-GST, we first divide the motion sequence into several shorter sequences of $N_f$ frames. Next, for each shorter sequence, we model interlayer connections by connecting points with the same label among successive frames. For points in the same frame, we connect two points based on the Gaussian kernel within a Euclidean threshold $\tau_s$ \cite{c6}. Let $\mathbf{x}_i$ be the 3D coordinates of the $i$th point. We assign an edge weight between two points $\mathbf{x}_i$ and $\mathbf{x}_j$ as a nonzero $A_{ij} = \exp(-\Vert \mathbf{x}_i-\mathbf{x}_j \Vert^2_2/{\sigma^2} )$ only if $\Vert \mathbf{x}_i-\mathbf{x}_j \Vert^2_2\leq\tau_s$. Next, we estimate the spatial and temporal basis vectors of each shorter-term sequence by HOSVD in Eq. (\ref{decomposeS}). Finally, we use the 3D coordinates of all points in each shorter-term sequence as signals and calculate their M-GST. To illustrate the results of the M-GST, we examine a spectrogram similar to that of the short-time Fourier transform (STFT) \cite{c36}. In Fig. \ref{ex}, we transform the signal defined by the coordinates in the $Z$ dimension via M-GST and illustrate the transformation results for the divided frame sequence. From Fig. \ref{ex}, one can easily identify different motions based on the MLN singular analysis. Our future work shall target more interpretable analysis of these results. For example, the physical meaning of nodes can be identified via filtered signals under an MLN highpass filter, as shown in \textbf{Appendix F}.
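The thresholded Gaussian-kernel construction of the intra-frame edge weights can be sketched as follows (the example points and parameter values are purely illustrative):

```python
import numpy as np

def gaussian_adjacency(X, sigma, tau):
    """Thresholded Gaussian-kernel adjacency for one frame of 3D points.

    A_ij = exp(-||x_i - x_j||^2 / sigma^2) if ||x_i - x_j||^2 <= tau,
    and zero otherwise (with a zero diagonal).
    """
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    A = np.exp(-d2 / sigma ** 2)
    A[d2 > tau] = 0.0                 # drop edges beyond the threshold
    np.fill_diagonal(A, 0.0)          # no self-loops
    return A

# Two nearby points and one far-away point.
pts = np.array([[0., 0., 0.], [1., 0., 0.], [10., 0., 0.]])
A = gaussian_adjacency(pts, sigma=1.0, tau=4.0)
```

Only the nearby pair receives a nonzero weight of $\exp(-1)$; the distant point stays disconnected.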
\section{Conclusion} \label{con} In this work, we present a novel tensor-based framework of multilayer network signal processing (M-GSP) that naturally generalizes traditional GSP to multilayer networks. We first present the basic foundations and definitions of M-GSP, including MLN signals, signal shifting, spectral space, singular space, and filter design. We also provide interpretable discussion and physical insights through numerical results and examples to illustrate the strengths and benefits of the novel M-GSP framework. We further demonstrate the exciting potential of M-GSP in data processing applications through experimental results in several practical scenarios. \IEEEpeerreviewmaketitle \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{Introduction} The increasing number of galaxies covered by large-scale spectroscopic surveys \citep{SDSS_DR15,BOSS} provides an opportunity to revisit the fundamental plane of early-type galaxies and to explore new ideas on how to improve this scaling relation as a distance indicator. Furthermore, various other surveys and programmes have accumulated a huge amount of additional distance information, to which we can compare our data. During the pioneering work on the scaling-relations of early-type galaxies, the Faber-Jackson relation \citep{FaberJackson:1976,Schechter:1980,TonryDavis:1981} and the Kormendy relation \citep{Kormendy:1977} were discovered. Nowadays, they are seen as projections of the fundamental plane, which was properly defined and discussed in \citet{Dressler:1987} and \citet{Djorgovski:1987}, after being first mentioned in \citet{Terlevich:1981}. Its functional form is often given in the following way: \begin{equation} \textrm{log}_{10}\left(R_{e}\right) = a \cdot \textrm{log}_{10}\left(\sigma_{0}\right) + b \cdot \mu_{e} + c . \label{fundamentalplane} \end{equation} The fundamental plane is an empirical relation between three global parameters of elliptical galaxies: the central velocity dispersion $\sigma_{0}$, the physical effective radius $R_{e}$, and the mean surface brightness $\mu_{e}$ within the effective radius, which is occasionally written as $ \textrm{log}_{10}\left(I_{e}\right) = - \mu_{e} / 2.5 $ in the literature\footnote{The corresponding fundamental plane coefficient $b'=-2.5 b$ is then called $b$ in the literature, which can lead to some confusion.}. The coefficients $a$, $b$, and $c$ of the fundamental plane are obtained by fitting the relation to some set of early-type galaxies, whose distances are (approximately) known due to another distance indicator. 
The central velocity dispersion and the mean surface brightness\footnote{Corrected for the Tolman-effect, which is a cosmological effect that dims surface brightness proportional to $(1+z)^{4}$.} are distance-independent quantities. Consequently, one can use the fundamental plane as a distance indicator (standard rod) by comparing the predicted effective radius with the observed one. After its discovery the fundamental plane quickly became a complementary tool to the Tully-Fisher relation \citep{Tully_Fisher}, which uses late-type galaxies, for measuring extragalactic distances. From a more theoretical point of view, the Virial equilibrium predicts correlations between the three parameters $R_{e}$, $\sigma_{0}$, and $\mu_{e}$. Assuming a constant luminosity-independent mass-to-light (M/L) ratio for all early-type galaxies, the virial equilibrium condition would predict the following values for the coefficients: $a=2$ and $b=0.4$. In the literature, one typically finds values for $a$ ranging between 1 and 1.5 (depending on the fitting method) and for $b$ around 0.3 \citep{Saulder:2013}. This discrepancy between the theoretical prediction and observations is called the tilt of the fundamental plane. The reasons for this tilt have been a matter of substantial debate, especially in the context of galaxy evolution \citep{Ciotti:1996,Busarello:1997,Busarello:1998,Graham:1997,Trujillo:2004,DOnofrio:2006,Cappellari:2006,Magoulas:2013} or environmental dependence \citep{Lucey:1991,Jorgensen:1996,Pahre:1998,deCarvalho:1992,LaBarbera:2010b,Magoulas:2013,Hou:2015,Joachimi:2015,Samir:2016,Kipper:2016}. The fundamental plane is still a topic of ongoing research \citep{DOnofrio:2008,LaBarbera:2008,Gargiulo:2009,Hyde:2009,LaBarbera:2010,FraixBurnet:2010,Magoulas:2012,Hyde:2009b,Cappellari:2013} and there have been numerous discussions \citep{Jorgensen:1993,Jorgensen:1996,Jorgensen:2006,Pahre:1998,Bolton:2008,DOnofrio:2013,DOnofrio:2017} on how to understand this scaling relation. 
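As a sketch of this standard-rod idea, the snippet below predicts a physical effective radius from the distance-independent observables via Eq. (\ref{fundamentalplane}) and divides it by the observed angular effective radius; the coefficient values and units are purely illustrative placeholders, not our calibrated values:

```python
import numpy as np

# Illustrative fundamental-plane coefficients only (placeholders).
A_FP, B_FP, C_FP = 1.0, 0.3, -8.0

def fp_distance(sigma0, mu_e, theta_e):
    """Angular diameter distance from the fundamental plane.

    sigma0 : central velocity dispersion in km/s
    mu_e   : mean surface brightness within R_e in mag/arcsec^2
    theta_e: observed angular effective radius in radians
    """
    log_re = A_FP * np.log10(sigma0) + B_FP * mu_e + C_FP  # Eq. (1)
    r_e = 10.0 ** log_re       # predicted physical effective radius (kpc)
    return r_e / theta_e       # small-angle standard-rod distance (kpc)

# A galaxy with sigma0 = 200 km/s, mu_e = 20 mag/arcsec^2 and an
# observed effective radius of 2 arcsec:
theta = 2.0 * np.pi / (180.0 * 3600.0)   # arcsec -> radians
d_a = fp_distance(200.0, 20.0, theta)
```

With these placeholder coefficients the predicted radius is 2 kpc, so the ratio of predicted physical to observed angular size directly yields the angular diameter distance.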
Recently, the focus has shifted towards studying the formation \citep{Bezanson:2013,vdSande:2014,Beifiori:2015,Beifiori:2017} and evolution \citep{Zahid:2015,Zahid:2016,Oldham:2017} of the fundamental plane and how to reconcile observations with simulations \citep{Taranu:2015,Desmond:2017}. Our understanding of early-type galaxies has significantly improved during the last decade. Thanks to the first integral field spectroscopic surveys \citep{SAURON1,ATLAS1}, it became clear that the majority of early-type galaxies exhibit significant rotation \citep{ATLAS3,SAURON10} and are not primarily pressure-supported systems. Furthermore, the importance of stellar populations \citep{Springob:2012} and the luminosity dependence of the mass-to-light ratio \citep{Hyde:2009,Cappellari:2013,Schechter:2014,Desmond:2017} of early-type galaxies became crucial in understanding the tilt and the scatter of the fundamental plane. Additionally, it was found by \citet{Padmanabhan:2004} and \citet{Gallazzi:2006} that the stellar-to-dynamical mass ratio is not constant across all populations of early-type galaxies, and \citet{DOnofrio:2008} and \citet{Nigoche:2009} showed that the fundamental plane depends on the range in velocity dispersion and luminosity. A constant stream of data from currently ongoing large integral field spectroscopic surveys \citep{Manga,SAMI} continues to improve our knowledge of the very complex interplay between the global parameters of early-type galaxies and their internal kinematics \citep{Scott:2015,vdSande:2017,Graham:2018}. Additionally, some modifications of gravity are discussed as alternatives \citep{Jovanovic:2016,Chiu:2017} to a luminosity dependence of the mass-to-light ratio. Furthermore, a connection between the stellar and dark matter halo has also been proposed \citep{Schechter:2016} to explain the shape of the fundamental plane.
Since its discovery, the coefficients of the fundamental plane have been calibrated using various samples, ever increasing in size or quality. For example, some of the most notable works providing these coefficients are \citet{Djorgovski:1987,Dressler:1987,Smith:2001,Smith:2004,Hudson:1997,Gibbons:2001,Lucey:1991,Guzman:1993,Jorgensen:1996,Muller:1998,DOnofrio:2008,LaBarbera:2008,Gargiulo:2009,Hyde:2009,LaBarbera:2010,Pahre:1998,Kelson:2000,Colless:2001,Bernardi:2003c,Magoulas:2012,Campbell:2014,Scodeggio:1998,FraixBurnet:2010,Saulder:2013,Saulder:2015,Zahid:2016}. The actual values of the coefficients vary notably due to the different fitting methods \citep{Sheth:2012}, but for the application as a distance indicator, a direct fit \citep{Bernardi:2003c,Sheth:2012} that minimizes scatter in the physical radius $R_{e}$ is the optimal choice, because scatter in $R_{e}$ directly translates into scatter in distance. The fundamental plane can be used as an efficient tool to measure peculiar motions in the local universe \citep{Campbell:2014,Mutabazi:2014}. The Sloan Digital Sky Survey (SDSS) has been continuously providing new data and recently made its DR15 \citep{SDSS_DR15} available to the public. While not including new galaxies in our range of interest since the completion of BOSS \citep{BOSS}, it provides updated photometric and spectroscopic fits for all galaxies. The largest previously published sample of fundamental plane distances, released along with a limited group catalogue in \citet{Saulder:2016}, was effectively limited to the sample size of DR7 \citep{SDSS_DR7} by the use of GalaxyZoo-I \citep{GalaxyZoo}. By improving the selection criteria for early-type galaxies, one will be able to cover many galaxies for which fundamental plane distances have never been calculated. Furthermore, it provides an opportunity to improve the quality of the distance measurements using the fundamental plane by better considering the selection effects of SDSS.
Different calibration methods can be tested, as well as how to best take into account known biases affecting the fundamental plane, such as the impact of the mass-to-light ratio \citep{Hyde:2009,Cappellari:2013,Schechter:2014,Desmond:2017} and environmental effects \citep{Joachimi:2015}. To investigate the latter, a state-of-the-art group catalogue that covers at least the same volume as the early-type galaxies used for the fundamental plane is required. This can also be used to further improve the distance estimates to rich clusters by using statistics. Throughout this paper, we assumed a flat $\Lambda$-CDM cosmology with a relative dark energy density of $\Omega_{\Lambda}=0.7$ and a relative matter density of $\Omega_{M}=0.3$ as well as a present-day Hubble parameter of $H_{0}=70\, \textrm{km}\,\, \textrm{s}^{-1}\,\, \textrm{Mpc}^{-1}$. This paper is structured in the following way: in Section \ref{sec_data}, we present a description of the various datasets used for this work, with additional details provided in Appendix \ref{app_groupdatasel}. Our methods are explained in Section \ref{sec_method}. We present the main results of our work in Section \ref{sec_results}, with a more detailed description of our catalogues provided in Appendix \ref{catalog}. We discuss our methods and results in Section \ref{sec_discussion} and provide a brief summary and conclusions in Section \ref{sec_sum_and_concl}. Alternative approaches that we tested are briefly discussed in Appendix \ref{app_add_fp}, and the transformations between SDSS and 2MRS colours that were required as a tool are provided in Appendix \ref{colourtrans}.
\section{Data} \label{sec_data} Our primary source of data was SDSS DR15 \citep{SDSS_DR15} from which we selected an essentially unconstrained (aside from the intrinsic selection criteria of SDSS) spectroscopic sample of galaxies up to a redshift of 0.51\footnote{This value was reduced to 0.5 after the redshifts had been moved to the CMB rest frame to avoid an anisotropic cut-off.} as well as a sample of early-type galaxies, defined by colour-cuts and likelihoods for luminosity profile fits. Additionally, we used the value-added catalogue by \citet{Graham:2018}, which is based on MaNGA data \citep{Manga}, and the value-added catalogue by \citet{Simard:2011}, which provides additional parameters for SDSS galaxies. For the calibration of our group finder algorithm, we also required simulated data. To this end, we took the re-run of the Millennium simulation \citep{millennium} presented by \citet{Guo:2013}, who updated it to the WMAP7 cosmology \citep{WMAP_7}. Several additional datasets are used for comparison and testing of our derived distances. The mostly unfiltered galaxy sample was used to run our group finder algorithm. The resulting group catalogue may also be used for applications beyond the scope of this paper. We selected galaxies in SDSS DR15 using the set of criteria listed in Appendix \ref{app_groupdatasel}. With these criteria we found 1 527 251 objects (see Figure \ref{allgal_absmag_i}) in SDSS, for which we obtained their positions and basic photometric parameters (see Appendix \ref{app_groupdatasel} for a detailed list). \begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth]{allSDSS_absmag_i.pdf}\includegraphics[width=0.45\textwidth]{unfiltered_etg_absmag_i.pdf}\\ \caption{Redshift-absolute magnitude distribution of our samples. Left panel: the initial galaxy sample used for the group finder (SDSS/BOSS only). Right panel: initial early-type galaxy sample. 
The bluish heatmap represents the relative number densities of galaxies, with dark blue tones indicating higher numbers.} \label{allgal_absmag_i} \end{center} \end{figure*} Our sample of early-type galaxies is a subsample of the previous galaxy sample; hence, all the above criteria were applied. Additionally, we required these galaxies to fulfil the set of criteria presented in Appendix \ref{app_etgsel}. With these selection criteria, we found 334 388 objects (see Figure \ref{allgal_absmag_i}) in the SDSS database. Additional constraints (finer cuts) were applied later in the calibration process to remove outliers and possible misclassifications. For the selected objects, we obtained their coordinates, basic spectroscopic and photometric parameters, and their stellar masses according to the spectro-photometric Wisconsin method \citep{Chen:2012} using Maraston stellar mass models \citep{Maraston:2011}. A detailed list of the obtained parameters is provided in Appendix \ref{app_etgsel}. This sample of early-type galaxies formed the basis for our fundamental plane calibrations. \citet{Graham:2018} published a catalogue providing additional kinematic parameters for 2 774 galaxies, which were observed using integral-field spectroscopy as part of the MaNGA programme \citep{Manga}. Since MaNGA is part of SDSS, we could easily cross-match their sample with ours. We used the additional kinematic parameters and more precise stellar masses provided in their catalogue for supplementary tests of our calibrations. The value-added catalogue of \citet{Simard:2011} contains S\'ersic-profile fits based on SDSS DR7 data \citep{SDSS_DR7} for 1 123 718 galaxies. We used their data for additional tests of our calibrations, but since we could not improve our calibrations with them, their impact on our analysis was minimal.
We found it useful to supplement the SDSS data with data from the 2MRS \citep{2MRS}, which was a spectroscopic follow-up to 2MASS \citep{2MASS}, in order to compensate for the saturation bias of the SDSS spectroscopic sample. Therefore, we included 43 533 galaxies from 2MRS with their 2MASS magnitudes (J, H, and K$_{\textrm{S}}$) and redshifts in our database. We crossmatched them with our complete SDSS sample and found that 5 890 galaxies were identified\footnote{Using a tolerance of 10 arcseconds of angular separation and 300 km/s in radial velocity.} within the spectroscopic data of both surveys. We used these galaxies to calibrate the colour transformation (see Appendix \ref{colourtrans}) and thereby calculate SDSS magnitudes for all 2MRS galaxies. Excluding the galaxies that were detected in both surveys, we found that 8 948 galaxies of 2MRS lie either in or within one degree of the SDSS spectroscopic footprint. We added these galaxies to our main sample, which was then used as a basis for our group catalogue. Since the cosmological parameters of the first run of the Millennium simulation \citep{millennium} are slightly outdated \citep{Planck_cosmo,Suzuki:2012}, we decided to use its re-run by \citet{Guo:2013}, which assumed the cosmological parameters found by WMAP7 \citep{WMAP_7}. The re-run also provides semi-analytical galaxy models based on \citet{Guo:2011}, which we used to build mock-catalogues for the calibration of our group finder algorithm. We selected every galaxy with an absolute magnitude brighter than -15 mag in the i band from all snapshots between 61 (corresponding to redshift 0) and 46 (corresponding to redshift 0.5086). For these simulated galaxies, we gathered the parameters listed in Appendix \ref{app_millsel}. Each snapshot contains more than 10 million objects from which we selected the galaxies to construct our mock-catalogues.
In addition to all the data required to calibrate and apply the fundamental plane, we also obtained various datasets using other distance indicators to test our own calibrations. These include the catalogue of 740 Type Ia supernova distances by \citet{Betoule:2014}, the 56 124 distance measurements using the Tully-Fisher relation found in the latest version of the NASA/IPAC Extragalactic Database, and the distance measurements using various methods for 17 669 galaxies collected by the CosmicFlows project \citep{Cosmicflows3}. \section{Method} \label{sec_method} \subsection{Mock catalogues} The first step in building a group catalogue is to ensure that the group finder algorithm is well calibrated for the dataset to which it is applied. The SDSS/BOSS data we used are the product of a series of selection criteria, which define the sparsity of the sample and thereby the optimal linking lengths of our FoF algorithm. To this end, we created a series of mock catalogues based on the data we obtained from the WMAP7 re-run of the Millennium simulation \citep{Guo:2013}. We built a set of (mostly) independent mock-catalogues from the available data. To this end, we decided to treat every snapshot independently as a representation of its particular redshift range. Each snapshot is a cube with a side length of 500$/h$ Mpc. We calculated that it is possible to create 4 slices with no overlap at the lower redshifts and only limited overlap at the higher redshifts from each snapshot by applying the following procedure. We defined centres for each slice at the following co-moving Cartesian coordinates: (100$/h$ Mpc, 250$/h$ Mpc, 250$/h$ Mpc), (200$/h$ Mpc, 250$/h$ Mpc, 250$/h$ Mpc), (300$/h$ Mpc, 250$/h$ Mpc, 250$/h$ Mpc), and (400$/h$ Mpc, 250$/h$ Mpc, 250$/h$ Mpc). Then we rescaled each snapshot to physical units. We call the redshift corresponding to the time at which a snapshot was taken its central redshift.
We defined upper and lower limits for the redshift range associated with each snapshot by taking the average value between the central redshifts of two neighbouring snapshots. Then we translated the central redshift as well as the upper and lower limits into co-moving distances. Our virtual observer is located at a point that is the co-moving distance of the central redshift (in the negative x-direction) away from the centre of each slice. All galaxies closer to the virtual observer than the lower redshift limit (as a co-moving distance) were removed from that slice, as well as all galaxies further away than the upper redshift limit. The resulting four slices only share a few galaxies with each other (especially considering the magnitude and colour-cuts introduced in the next step). We repeated the entire procedure in the y- and z-directions as well and ended up with 12 largely independent slices for each snapshot. Before we could introduce the SDSS/BOSS selection effects into our mock-catalogues, we had to obtain the redshift dependence of the uncertainties of the observed magnitudes in SDSS. To this end, we made use of the error-bars of the Petrosian magnitudes supplied by the catalogues. We split them into redshift bins, calculated the median, and interpolated between the bins. In the next step, we calculated the impact of the peculiar motions on the mock data. \begin{equation} z_{\textrm{real}} = \left(\left(1+z_{\textrm{cosmo}}\right) \cdot \left(1 + \frac{-\vec{v_{\textrm{pec}}} \cdot \vec{n_{\textrm{view}}}}{c} \right)\right) - 1 \label{mockredshift} \end{equation} While we could simply transform the co-moving distance to the virtual observer into a cosmological redshift $z_{\textrm{cosmo}}$, we had to take into account the peculiar motions of galaxies to obtain the 'real' redshift $z_{\textrm{real}}$.
This is not the true observed redshift, since we still had to factor in the error of the redshift observation itself, which is done in Equation \ref{mockactuallyobservedredshift}. $\vec{v_{\textrm{pec}}}$ denotes the vector of the peculiar motions from the Millennium simulation, $\vec{n_{\textrm{view}}}$ the unit-vector of the line-of-sight from the virtual observer to the galaxy, and $c$ is the speed of light. \begin{align} m_{\textrm{app,mock}}=M_{\textrm{abs,mill}}+\Delta m\left(z_{\textrm{cosmo}}\right) \cdot \mathfrak{G} + \nonumber\\ K(z_{\textrm{real}}) + 5 \cdot \textrm{log}_{10}\left(D_{L}\left(z_{\textrm{cosmo}}\right)/\textrm{pc}\right)-5 \label{mockmag} \end{align} The apparent magnitude $m_{\textrm{app,mock}}$ of the galaxies in our mock-catalogues was obtained from the absolute magnitude $M_{\textrm{abs,mill}}$ found in the Millennium simulation by adding the observational error $\Delta m$ of the magnitudes, the K-correction $K$, and the distance modulus, which is derived from the luminosity distances $D_{L}$. We use the symbol $\mathfrak{G}$ to indicate a random Gaussian noise with a standard deviation $\sigma$ of 1. Naturally, these corrections were applied to all magnitudes in all bands. \begin{equation} z_{\textrm{obs}} = \left(\left(1+z_{\textrm{real}}\right) \cdot \left(1 + \Delta z \cdot \mathfrak{G}\right)\right) - 1 \label{mockactuallyobservedredshift} \end{equation} The actually observed redshift $z_{\textrm{obs}}$ is obtained by considering the measurement error $\Delta z$ of redshifts for the real redshifts $z_{\textrm{real}}$. In the next step, we applied the selection criteria for the various SDSS and BOSS samples on our mock-data. 
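The three steps above, Eqs. (\ref{mockredshift}), (\ref{mockmag}), and (\ref{mockactuallyobservedredshift}), can be sketched per galaxy as follows; the function and variable names are illustrative, and in practice $\Delta m$ and $\Delta z$ are interpolated from the binned SDSS uncertainties rather than passed as constants:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def mock_observables(z_cosmo, v_pec_los, m_abs, d_l_pc,
                     dm_err, dz_err, k_corr, rng):
    """Turn simulated quantities into 'observed' ones.

    v_pec_los is the line-of-sight component -v_pec . n_view in km/s,
    k_corr is a callable K(z), and dm_err/dz_err are the magnitude and
    redshift uncertainties evaluated at this redshift.
    """
    # Eq. (mockredshift): fold the peculiar motion into the redshift.
    z_real = (1 + z_cosmo) * (1 + v_pec_los / C_KMS) - 1
    # Eq. (mockmag): apparent magnitude with Gaussian noise, K-correction,
    # and distance modulus from the luminosity distance in pc.
    m_app = (m_abs + dm_err * rng.standard_normal()
             + k_corr(z_real) + 5 * np.log10(d_l_pc) - 5)
    # Eq. (mockactuallyobservedredshift): add the redshift measurement error.
    z_obs = (1 + z_real) * (1 + dz_err * rng.standard_normal()) - 1
    return z_real, m_app, z_obs

rng = np.random.default_rng(1)
z_real, m_app, z_obs = mock_observables(
    z_cosmo=0.1, v_pec_los=300.0, m_abs=-21.0, d_l_pc=4.6e8,
    dm_err=0.0, dz_err=0.0, k_corr=lambda z: 0.0, rng=rng)
```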
We considered the magnitude limit and saturation bias for the SDSS main galaxy sample; the colour, magnitude, and redshift cuts for the SDSS LRG low-z and SDSS LRG high-z samples; the colour and magnitude cuts for the BOSS low-z, BOSS CMASS, and BOSS CMASSsparse samples; the main Quasar sample; and the magnitude limits of the 2MRS sample. Afterwards, we merged the various samples in each of our slices. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{galaxy_density.pdf}\\ \caption{Galaxy density as a function of redshift in observational data and the mock catalogues.} \label{galaxy_density} \end{center} \end{figure} By directly applying the SDSS selection criteria to the mock-catalogues, which use the semi-analytic galaxy models of \citet{Guo:2011}, we found that the galaxy densities do not match and are not even reasonably near the values derived from observations (see Figure \ref{galaxy_density}). We found a notable dearth of galaxies (by almost an order of magnitude) between redshifts 0.2 and 0.45. Therefore, we had to fine-tune the selection criteria by adding some tolerances until the galaxy density in the mock catalogues was reasonably close to the observed values across the entire redshift range. In particular, we allowed for a roughly 0.2 mag wider range for all magnitude limits and colour-cuts applied to select the various SDSS samples. After the fine-tuning, we combined all pairs of non-neighbouring slices (with the two slices in each pair pointing in opposite directions on the virtual sky) into our set of 6 mock catalogues for each of our 16 snapshots. \begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth]{link_b.pdf} \includegraphics[width=0.45\textwidth]{link_R.pdf}\\ \caption{Optimized linking lengths used for our group finder algorithm. Left panel: angular linking lengths. Right panel: radial linking lengths.
The various shapes mark optimal values derived from the different mock catalogues, the dashed-blue line indicates the interpolation we used in our group finder algorithm, and the blue shaded area highlights the uncertainty.} \label{link_length} \end{center} \end{figure*} \subsection{Group finder algorithm} In the next step, we ran our group finder algorithm on these mock catalogues to obtain the optimal linking lengths. Our version of the friends-of-friends approach follows \citet{Duarte:2014}, who pre-grid the data before running the nearest-neighbour search to improve efficiency. We also used the \emph{fofID} number from the Millennium Run database to assemble groups that lie in the same dark matter halo. These halo-based groups are used as the comparison sample for identifying the best linking length for the FoF in a given snapshot. We follow \citet{Robotham:2011} and match groups between the FoF and halo-based catalogues according to a cost function based on bijective matches between groups in each catalogue. The cost function is based on matched groups that share at least $50\%$ of their galaxies and on the group `purity' (see Equations 9-15 in \citet{Robotham:2011}). After obtaining the optimal values for all of our mock catalogues, we calculated the median of the optimal linking lengths of each mock catalogue for every snapshot (see Figure \ref{link_length}). The effects of the different samples are clearly visible in the linking lengths. In the lowest redshift bin, the saturation bias causes a larger linking length than for the magnitude-limited part of the SDSS main galaxy sample. There is a consistent rise in the linking lengths with redshift, which reaches a plateau once the more volume-limited samples such as the LRG and CMASS samples start to dominate.
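A minimal illustration of the friends-of-friends idea: link every pair of galaxies that is closer than the angular linking length on the sky and closer than the radial linking length in velocity, and take the connected components as groups. The sketch below uses a naive $O(N^2)$ pair scan with union-find, without the pre-gridding of \citet{Duarte:2014} that makes the actual group finder efficient:

```python
import numpy as np

def fof_groups(ra, dec, v_rad, l_alpha, l_r):
    """Friends-of-friends: link pairs with angular separation < l_alpha
    (radians) and |v_i - v_j| < l_r (km/s); return a group id per galaxy."""
    n = len(ra)
    parent = list(range(n))

    def find(i):                          # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if abs(v_rad[i] - v_rad[j]) >= l_r:
                continue
            # great-circle separation on the sphere
            cos_d = (np.sin(dec[i]) * np.sin(dec[j])
                     + np.cos(dec[i]) * np.cos(dec[j]) * np.cos(ra[i] - ra[j]))
            if np.arccos(np.clip(cos_d, -1.0, 1.0)) < l_alpha:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Three galaxies: the first two are close on the sky and in velocity,
# the third is far away on the sky.
ra = np.radians([10.0, 10.01, 50.0])
dec = np.radians([0.0, 0.0, 0.0])
v = np.array([7000.0, 7100.0, 7000.0])
groups = fof_groups(ra, dec, v, l_alpha=np.radians(0.1), l_r=400.0)
```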
When comparing the scatter between the linking lengths of the individual mock catalogues to the galaxy densities of Figure \ref{galaxy_density}, we see that it noticeably increases once the sample gets sparser. We used cubic splines to interpolate between the different redshifts and applied these interpolations in our group finder. Before we could use our group finder on SDSS data, we had to filter and properly calibrate the observational data first. \begin{equation} m_{\textrm{extcor}}=m_{\textrm{obs}} - A_\textrm{Schlegel} . \label{extinctioncorrection} \end{equation} We corrected the SDSS\footnote{We use the index \emph{obs} to mark observational parameters directly taken from the SDSS database.} magnitudes $m_{\textrm{obs}}$ for galactic extinction $A_\textrm{Schlegel}$ according to the Schlegel maps \citep{Schlegelmaps} to obtain the extinction-corrected magnitude $m_{\textrm{extcor}}$. For galaxies that were only in the 2MRS sample, we used extrapolated SDSS magnitudes based on a fit using the H-K$_\textrm{s}$ and J-H colours and the K$_\textrm{s}$ band magnitudes instead. This fit was calibrated using galaxies in both surveys and the details of this method are explained in Appendix \ref{colourtrans}. \begin{equation} K(z,m_{f_{1}}-m_{f_{2}})=\sum\limits_{i,j} B_{ij} z^{i} (m_{f_{1}}-m_{f_{2}})^{j} \label{Kcorrection} \end{equation} \begin{equation} m_{\textrm{app}} = m_{\textrm{extcor}} - K(z_{\textrm{obs}},m_{f_{1}}-m_{f_{2}}) . \label{apperantmag} \end{equation} Afterwards, we applied a K-correction $K(z_{\textrm{obs}},m_{f_{1}}-m_{f_{2}})$ to the extinction-corrected magnitudes $m_{\textrm{extcor}}$. We used the K-correction of \citet{Chilingarian:2010}, with updated coefficients from \citet{Saulder:2013}. $m_{f_{1}}-m_{f_{2}}$ denotes any suitable colour and $z_{\textrm{obs}}$ the observed redshift directly from the SDSS pipeline.
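As a concrete illustration, Equation \ref{Kcorrection} is a plain double sum over a coefficient matrix $B_{ij}$ and can be evaluated as in the following Python sketch (function names are ours; the coefficient matrix used below is a placeholder, the published values per band/colour combination are those of \citet{Chilingarian:2010} with the updates of \citet{Saulder:2013}):

```python
import numpy as np

def k_correction(z, colour, B):
    """Evaluate K(z, colour) = sum_ij B_ij * z**i * colour**j for one band.

    B is the 2-D coefficient matrix of the chosen band/colour combination
    (a placeholder here; the published values are tabulated in
    Chilingarian et al. 2010 with updates in Saulder et al. 2013)."""
    z = np.asarray(z, dtype=float)
    colour = np.asarray(colour, dtype=float)
    K = np.zeros(np.broadcast(z, colour).shape)
    for i in range(B.shape[0]):
        for j in range(B.shape[1]):
            K = K + B[i, j] * z**i * colour**j
    return K

def apparent_magnitude(m_extcor, z_obs, colour, B):
    """K-corrected apparent magnitude: m_app = m_extcor - K(z_obs, colour)."""
    return m_extcor - k_correction(z_obs, colour, B)
```

With the real coefficient matrices, one call per band/colour pair reproduces the corrections applied to our magnitudes.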
We used the following combinations: g band: g-r colour, r band: g-r colour, i band: g-i colour, z band: g-z colour, J band: J-K$_\textrm{s}$ colour, H band: H-K$_\textrm{s}$ colour, and K$_\textrm{s}$ band: J-K$_\textrm{s}$ colour. We also transformed all redshifts to the CMB rest frame. With the CMB redshifts $z_{\textrm{cmb}}$, we calculated the angular diameter distances $D_{A}$ and the luminosity distances $D_{L}$. Using the distance modulus, we calculated the absolute magnitude $M_{\textrm{abs}}$ and derived the luminosity $L$ in solar units using the absolute magnitude of the sun \citep{SolarMag} $M_{\textrm{abs},\sun}$. \begin{equation} v_{\textrm{rad}} = c \cdot \frac{(1+z_{\textrm{cmb}})^{2}-1}{(1+z_{\textrm{cmb}})^{2}+1} \label{radialvelocity} \end{equation} Since our radial linking length was calibrated in km/s, we also transformed the redshifts into radial velocities $v_{\textrm{rad}}$ for every galaxy. To remove potentially problematic objects, we removed all galaxies with an absolute magnitude brighter than -30 mag or fainter than -15 mag in the i band. Furthermore, all objects with a g-i colour of more than 3 mag or less than -2 mag were excluded from the sample. Since we did not want to have any galaxies outside the calibrated range of our group finder, we removed all objects with a redshift $z_{\textrm{cmb}}$ below zero or above 0.5. These cuts reduced our sample to 1 480 600 galaxies. \begin{equation} l_{\alpha} = \textrm{atan}\left(\frac{l_{b}}{D_{A}} \right) \label{linkrescale} \end{equation} When we ran our FoF group finder, we could take the radial linking length $l_{R}$ directly and compare it with the radial velocity $v_{\textrm{rad}}$. However, we had to transform our angular linking length $l_{b}$ from physical units to angles $l_{\alpha}$.
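The two conversions of Equations \ref{radialvelocity} and \ref{linkrescale} amount to one line each; a minimal Python sketch (function names are ours, not part of any released pipeline):

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def radial_velocity(z_cmb):
    """Radial velocity in km/s from the CMB-frame redshift,
    v_rad = c * ((1+z)^2 - 1) / ((1+z)^2 + 1)."""
    zp1_sq = (1.0 + np.asarray(z_cmb, dtype=float))**2
    return C_KMS * (zp1_sq - 1.0) / (zp1_sq + 1.0)

def angular_linking_length(l_b, D_A):
    """Angular linking length in radians from the transverse linking
    length l_b and the angular diameter distance D_A, both given in
    the same physical units: l_alpha = atan(l_b / D_A)."""
    return np.arctan(l_b / D_A)
```

At low redshift the radial velocity reduces to the familiar $cz$, while the arctangent keeps the angular linking length well defined for nearby groups with large angular extents.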
Besides just assigning group memberships, our group finder calculated several parameters for every group that it detected, using the methods thoroughly tested in \citet{Robotham:2011} and discussed again for a similar group finder algorithm in \citet{Saulder:2016}. The radial group centre was calculated by taking the median of the redshifts of all group members. The projected group centre was found by using the centre of light of the group members and iteratively removing the members with the largest angular separation. The projected group radius was defined as the distance from that projected group centre within which $50\%$ of the group members are located. We calculated group velocity dispersions using the gapper estimator of \citet{Beers:1990} including the modification of \citet{Eke:2004a}. \begin{equation} \sigma_{\textrm{gap}}=\frac{\pi}{N_{\textrm{fof}}(N_{\textrm{fof}}-1)}\sum\limits_{i=1}^{N_{\textrm{fof}}-1} w_{i} g_{i}, \label{gapper_basic} \end{equation} \begin{equation} w_{i}=i \cdot (N_{\textrm{fof}}-i), \label{gapper_weight} \end{equation} \begin{equation} g_{i}=v_{i+1}-v_{i}, \label{gapper_vgap} \end{equation} \begin{equation} \frac{v_{i}}{c} =\frac{(1+z_{\textrm{obs},i})^{2}-1}{(1+z_{\textrm{obs},i})^{2}+1}, \label{vrad_z} \end{equation} \begin{equation} \sigma_{\textrm{group}}=\sqrt{\frac{N_{\textrm{fof}}}{N_{\textrm{fof}}-1}\sigma_{\textrm{gap}}^{2}-\sigma_{\textrm{err}}^{2}}. \label{gapper_mod} \end{equation} The gapper velocity dispersion $\sigma_{\textrm{gap}}$ of a group with $N_{\textrm{fof}}$ members was calculated by summing up the product of the weights $w_{i}$ and the radial velocity gaps $g_{i}$ for all its members. It was essential that the radial velocities $v_{i}$ were ordered for this approach, which we ensured by applying a simple sorting algorithm for each group. The radial velocities $v_{i}$ were calculated using the observed redshifts $z_{\textrm{obs},i}$.
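Equations \ref{gapper_basic} to \ref{gapper_mod} translate into a compact routine; the following Python sketch (names are ours) implements them as written, including the required ordering of the radial velocities:

```python
import numpy as np

def group_velocity_dispersion(v_members, sigma_err):
    """Group velocity dispersion from the gapper estimator.

    v_members : radial velocities of the N_fof members in km/s (any order);
    sigma_err : redshift measurement error in km/s (e.g. 30 for SDSS).
    The result is floored at sigma_err, as done for the catalogue."""
    v = np.sort(np.asarray(v_members, dtype=float))  # gaps need ordered velocities
    n = v.size
    i = np.arange(1, n)              # i = 1 ... N_fof - 1
    w = i * (n - i)                  # weights w_i = i * (N_fof - i)
    g = np.diff(v)                   # gaps g_i = v_{i+1} - v_i
    sigma_gap = np.pi / (n * (n - 1)) * np.sum(w * g)
    sigma_sq = n / (n - 1) * sigma_gap**2 - sigma_err**2
    sigma_group = np.sqrt(max(sigma_sq, 0.0))
    return max(sigma_group, sigma_err)
```

Since the weights down-weight the extreme gaps at either end of the sorted velocity list, the estimator remains robust for the small group sizes that dominate our catalogue.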
The group velocity dispersion $\sigma_{\textrm{group}}$ also took into account the measurement errors of the redshift determination $\sigma_{\textrm{err}}$, which were 30 km/s for SDSS, 65 km/s for BOSS, and $\sim 32$ km/s for 2MRS. In the case that the obtained group velocity dispersion was lower than the measurement error of the redshift determination, we set it to the corresponding $\sigma_{\textrm{err}}$. The observed group luminosity is merely the sum of the i band luminosities of all its detected members. \subsection{Basic calibrations for early-type galaxies} \begin{figure*} \begin{center} \includegraphics[width=0.90\textwidth]{SDSSBOSS_subsamples.png}\\ \caption{Our sample of early-type galaxies used for the fundamental plane calibrations split into the different subsamples. Top-left panel: SDSS main galaxy sample only; top-centre panel: SDSS LRG sample; top-right panel: other samples observed with SDSS fibres; bottom-left panel: BOSS low-z sample; bottom-centre panel: other samples observed with BOSS fibres, but no flags set; bottom-right panel: CMASS sample (regular CMASS and CMASS sparse combined). Red dotted line: official saturation limit in SDSS; green dotted line: official saturation limit in BOSS; red dashed line: limiting magnitude of the SDSS main galaxy sample; green dashed line: limiting magnitude for the BOSS low-z galaxy sample; magenta dashed-dotted line: redshift limit of our early-type galaxy sample.} \label{SDSSBOSS_subsamples} \end{center} \end{figure*} Most parameters required for the different fundamental plane calibrations are derived in the same way. The extinction correction and the K-correction were already presented in Equation \ref{extinctioncorrection} and Equation \ref{Kcorrection}, respectively. Besides these two corrections, the apparent magnitudes are typically also corrected for evolutionary effects.
As illustrated by the dearth of bright galaxies at very low redshifts in Figure \ref{allgal_absmag_i}, the saturation bias of SDSS spectroscopy removes all galaxies from the main galaxy sample with apparent magnitudes brighter than 15 mag in the g and r band and brighter than 14.5 mag in the i band. As illustrated in Figure \ref{SDSSBOSS_subsamples}, the saturation bias is different and poorly defined for the LRG sample of SDSS. The BOSS low-z sample, which also contributed galaxies to our catalogue, suffers from a saturation bias for galaxies brighter than 16 mag in the r band, but there are other galaxies observed with BOSS fibres that are not affected by this bias. At higher redshifts, the sample of early-type galaxies gets increasingly sparse, on the one hand due to the Malmquist bias, which removes the intrinsically faintest galaxies in magnitude-limited surveys, and on the other hand because the light profiles become increasingly PSF-like, which means that the likelihood for a de Vaucouleurs profile as calculated by the SDSS pipeline shrinks correspondingly. Consequently, this does not allow for an easy classification according to our criteria (see Appendix \ref{app_etgsel}). \begin{equation} m_{\textrm{app,evcor}}=m_{\textrm{app}} + Q \cdot z_{\textrm{group}} \label{evolutioncorrection} \end{equation} To obtain the evolution-corrected magnitude $m_{\textrm{app,evcor}}$, we took advantage of our group catalogue and used the group redshift $z_{\textrm{group}}$, which should be barely affected by the finger-of-god effects in clusters. All derivative quantities using the apparent magnitude were calculated in two ways, one using the evolution-corrected magnitude $m_{\textrm{app,evcor}}$ and one using the uncorrected apparent magnitude $m_{\textrm{app}}$. The evolution correction parameter $Q$ was obtained by finding a constant number density for the brightest galaxies within the redshift range for which our sample of these galaxies is the most complete.
We estimated the redshift range in which our early-type galaxy sample is complete for galaxies brighter than -23.5 mag in the z band to be between 0.07 and 0.25. Within this redshift range, we calculated the mean separation to the five nearest neighbours for all galaxies brighter than -23.5 mag after applying the evolution correction parameter $Q$ and split them into 0.01 wide redshift intervals. We varied $Q$ between 0 and 2 mag/$z$ and found that the optimal value that preserves the mean separation (hence indirectly the number density) of the brightest objects in the sample is 0.71 mag/$z$. This value is slightly lower than the evolution corrections found in \citet{Bernardi:2003c} and \citet{Saulder:2013}. We argue that this might be because we focused only on the brightest galaxies of our sample. However, this is well motivated, since these are the only galaxies that we are able to detect at higher redshifts, where the evolution correction becomes increasingly important. We also corrected the sizes and velocity dispersions for evolutionary effects. To this end, we used the corrections provided by \citet{Beifiori:2014}. \begin{equation} r_{\textrm{cor}} = r_{\textrm{sdss}} \left( 1 + z_{\textrm{group}} \right)^{-\beta} \label{rcor} \end{equation} \begin{equation} \sigma_{\textrm{cor}} = \sigma_{\textrm{sdss}} \left( 1 + z_{\textrm{group}} \right)^{-\gamma} \label{velcor} \end{equation} The corrected sizes $r_{\textrm{cor}}$ and velocity dispersions $\sigma_{\textrm{cor}}$ are rescaled from the parameters $r_{\textrm{sdss}}$ and $\sigma_{\textrm{sdss}}$ taken directly from the SDSS pipeline. We took the values of the scaling coefficients $\beta$ and $\gamma$ directly from \citet{Beifiori:2014}, which were $-0.49 \pm 0.26$ and $0.12 \pm 0.02$, respectively.
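Taken together, the evolution corrections of Equations \ref{evolutioncorrection}, \ref{rcor}, and \ref{velcor} amount to a few lines of code; a minimal Python sketch (names are ours; the coefficients are the values quoted above, i.e. our fitted $Q = 0.71$ mag/$z$ and the \citet{Beifiori:2014} exponents):

```python
import numpy as np

Q = 0.71       # evolution correction of magnitudes in mag/z (this work)
BETA = -0.49   # size evolution exponent (Beifiori et al. 2014)
GAMMA = 0.12   # velocity dispersion evolution exponent (Beifiori et al. 2014)

def evolution_corrected(m_app, r_sdss, sigma_sdss, z_group):
    """Evolution-corrected magnitude, angular size, and velocity
    dispersion, all evaluated at the group redshift z_group."""
    zp1 = 1.0 + np.asarray(z_group, dtype=float)
    m_evcor = m_app + Q * zp1 - Q          # m_app + Q * z_group
    r_cor = r_sdss * zp1**(-BETA)
    sigma_cor = sigma_sdss * zp1**(-GAMMA)
    return m_evcor, r_cor, sigma_cor
```

Note that, with the negative $\beta$, the observed sizes are scaled down and the velocity dispersions slightly down as well when going to higher redshifts.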
\begin{equation} r_{\textrm{circ}} = r_{\textrm{cor}} \sqrt{q_{b/a}} \label{rcirc} \end{equation} Following \citet{Bernardi:2003c}, we circularized the evolution-corrected angular radius $r_{\textrm{cor}}$ using the minor semi-axis to major semi-axis ratio $q_{b/a}$ to obtain the circularized radii $r_{\textrm{circ}}$, which are a more reliable quantity for comparing galaxies of different shapes. Because SDSS/BOSS uses fixed-size fibres, we had to correct for the fact that at different distances different fractions of the galaxies are covered by that fibre. Based on the work of \citet{Jorgensen:1995} and \citet{Wegner:1999}, we used the following equation: \begin{equation} \sigma_{0}=\sigma_{\textrm{cor}} \cdot \left( \frac{r_{\textrm{fiber}}}{r_{\textrm{circ}}/8} \right)^{0.04} \label{sigmacor} \end{equation} The radius $r_{\textrm{fiber}}$ of the SDSS fibres used to be 1.5 arcseconds, but with the upgrade \citep{SDSS_DR9} done for BOSS, new smaller fibres were installed. These fibres only have a radius of 1 arcsecond. SDSS marks whether a spectroscopic measurement was obtained using the SDSS fibres or the BOSS fibres. $\sigma_{0}$ denotes the corrected central velocity dispersion, while $\sigma_{\textrm{cor}}$ denotes the evolution-corrected central velocity dispersion. We also tested the slightly modified version of Equation \ref{sigmacor} from \citet{Cappellari:2006} and found that the velocity dispersions obtained from their method yield a marginally higher scatter for the fundamental plane. \begin{equation} R_{e}=D_{A}(z_{\textrm{group}}) \cdot \textrm{tan}\left(r_{\textrm{circ}}\right) \label{realradius} \end{equation} Using basic trigonometry, one can calculate the physical radii $R_{e}$ of galaxies using their angular diameter distances $D_{A}$ (derived using the median group redshifts $z_{\textrm{group}}$) and circularized radii $r_{\textrm{circ}}$.
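The chain from observed to physical structural parameters (Equations \ref{rcirc}, \ref{sigmacor}, and \ref{realradius}) can be sketched as follows in Python (names are ours; the fibre radii are the 1.5 arcsec SDSS and 1 arcsec BOSS values quoted above):

```python
import numpy as np

def circularized_radius(r_cor, q_ba):
    """Circularized angular radius, r_circ = r_cor * sqrt(b/a)."""
    return r_cor * np.sqrt(q_ba)

def central_velocity_dispersion(sigma_cor, r_circ_arcsec, boss_fibre=False):
    """Aperture-corrected central velocity dispersion,
    sigma_0 = sigma_cor * (r_fibre / (r_circ / 8))**0.04,
    with r_fibre = 1.5 arcsec for SDSS fibres and 1.0 arcsec for BOSS fibres."""
    r_fibre = 1.0 if boss_fibre else 1.5
    return sigma_cor * (r_fibre / (r_circ_arcsec / 8.0))**0.04

def physical_radius(r_circ_rad, D_A):
    """Physical radius R_e = D_A * tan(r_circ); r_circ in radians,
    R_e in the units of the angular diameter distance D_A."""
    return D_A * np.tan(r_circ_rad)
```

The weak exponent of 0.04 keeps the aperture correction small; even a galaxy whose effective radius differs from the fibre aperture by a factor of a few is corrected by only a few per cent.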
When we refer to redshift-based distances throughout this paper, we mean distances derived using the redshift-distance relation with the assumed cosmology of this paper and the median group redshifts. \begin{equation} \mu_{e}=m_{\textrm{app}} + 2.5\cdot \textrm{log}_{10}\left( 2\pi \cdot r_{\textrm{circ}}^{2} \right) - 10\cdot \textrm{log}_{10} \left( 1 + z_{\textrm{group}} \right) \label{surfacebrightness} \end{equation} When calculating the surface brightness $\mu_{e}$, one has to include a correction for the cosmological dimming of surface brightnesses, which is proportional to $(1+z)^{4}$ in any Friedmann-Lema\^{i}tre-Robertson-Walker metric-based universe \citep{Tolman:1930,Hubble:1935,Sandage:1990a,Sandage:1990b,Sandage:1991,Pahre:1996}. Before fitting any fundamental plane, we had to further clean our sample, since the basic calibrations provided additional parameters to filter on. We kept only galaxies that fulfil a set of criteria and cuts listed in Appendix \ref{app_fpgalsel}. For technical reasons, we had to merge the stellar masses provided directly by the SDSS database with the updated dataset\footnote{\url{https://www.sdss.org/DR15/spectro/galaxy_portsmouth/}} for galaxies observed with the new BOSS fibres. For a few ($\sim 2000$) galaxies of our initial sample of early-type galaxies, stellar masses were not provided, and these galaxies were also removed from the sample. Additionally, we iteratively removed all 5-$\sigma$ outliers of the two main fundamental plane calibrations presented in this paper (see the next three subsections). To this end, we used a Levenberg-Marquardt least-squares algorithm with 5-$\sigma$ clipping, as implemented in {\sc astropy} \citep{astropy:2013,astropy:2018}. After applying all these filters, we ended up with the final sample of 317 285 early-type galaxies used for all our fundamental plane calibrations.
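For illustration, the iterative 5-$\sigma$ clipping can be written as a minimal numpy stand-in for the {\sc astropy} machinery we actually used (a sketch only, with names of our own; the real calibration uses astropy's Levenberg-Marquardt fitter combined with outlier removal):

```python
import numpy as np

def clipped_plane_fit(log_sigma0, mu_e, log_Re, n_sigma=5.0, max_iter=10):
    """Least-squares fit of log10(R_e) = a log10(sigma_0) + b mu_e + c,
    iteratively dropping points whose residuals exceed n_sigma times
    the scatter of the currently kept points."""
    keep = np.ones(log_Re.size, dtype=bool)
    coeffs = np.zeros(3)
    for _ in range(max_iter):
        A = np.column_stack([log_sigma0[keep], mu_e[keep],
                             np.ones(keep.sum())])
        coeffs, *_ = np.linalg.lstsq(A, log_Re[keep], rcond=None)
        resid = log_Re - (coeffs[0] * log_sigma0
                          + coeffs[1] * mu_e + coeffs[2])
        new_keep = np.abs(resid) < n_sigma * resid[keep].std()
        if np.array_equal(new_keep, keep):
            break                       # clipping has converged
        keep = new_keep
    return coeffs, keep
```

The returned mask identifies the clipped outliers, mirroring how the final sample of early-type galaxies was assembled.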
The main contributions to our early-type galaxy sample are the SDSS main galaxy sample with 181 719 galaxies, the BOSS low-z sample with 71 311 galaxies, and the SDSS LRG sample with 60 505 galaxies. Additionally, there are minor contributions from other samples obtained using the SDSS fibres (162 galaxies), the CMASS samples (16 galaxies), and a poorly defined subsample obtained with the BOSS fibres, but no selection flags set (3 579 galaxies). As illustrated in Figure \ref{SDSSBOSS_subsamples}, the selection criteria for most of the different subsamples (aside from the SDSS main galaxy sample) are non-trivial. \subsection{Fitting the traditional fundamental plane} \label{meth_tradfp} Since we primarily intend to use the fundamental plane as a distance indicator, we aimed to minimize the scatter in the physical radii $R_{e}$. This is best achieved using a direct fit \citep{Sheth:2012} applied to Equation \ref{fundamentalplane}. By inverting Equation \ref{realradius}, we could use the physical radii predicted for given surface brightnesses and central velocity dispersions to derive the angular diameter distances for the traditional fundamental plane by comparing them to the observed angular radii. We used our group catalogue again on the resulting fundamental plane distances and calculated the median fundamental plane distance to every detected group. In this way, we improved the distance estimates to all groups containing more than one early-type galaxy by averaging the fundamental plane distances to all members, which reduces the statistical uncertainty. \subsection{Distances corrected for systematic residuals} \label{meth_corfp} \begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth]{dist_absmag_old.pdf} \includegraphics[width=0.45\textwidth]{debiased_absmag_old.pdf}\\ \caption{Ratio between the fundamental plane distance and redshift-based distance as a function of the absolute magnitude in the z band.
Left panel: traditional fundamental plane distances without any corrections; right panel: fundamental plane distances corrected for selection effects.} \label{dist_absmag} \end{center} \end{figure*} We intentionally did not apply a correction for various selection effects (e.g. Malmquist bias, saturation bias, and colour cuts). We attempted to consider these effects using the method of volume weights \citep{Sheth:2012,Saulder:2013}, but due to the various sub-samples of SDSS/BOSS contributing to our sample, their sometimes difficult-to-reproduce selection criteria, and the resulting cross-correlations with the main parameters of the fundamental plane, we were not able to get useful results. Hence, we considered the methods used in 6dFGSv \citep{Magoulas:2012,Howlett:2017,Qin:2018}. While a fully Bayesian approach to correct for the biases after the traditional fundamental plane calibrations would run into the same problems as the volume weights due to the insufficiently well-defined selection criteria of several subsamples, we settled for a slightly simpler but effective model inspired by that method. \begin{figure*} \begin{center} \includegraphics[width=0.90\textwidth]{modelfunction.png}\\ \caption{Effective correction of systematic residuals. Left panel: mean systematic residuals in each bin; right panel: fitted polynomial to correct for these systematic residuals. Red-white-blue colours: values of the residuals within the bin/at the point of the correction function; light green contour: distribution of the early-type galaxy sample; dark green contour: distribution of the most dense part of the early-type galaxy sample.} \label{modelfunction} \end{center} \end{figure*} The distances obtained using the traditional fundamental plane following the method of the previous section are systematically biased due to various selection effects (illustrated in Figure \ref{dist_absmag} for the strong dependence of the fundamental plane residuals on the absolute magnitude).
To correct for this, we measured the average systematic offsets in the apparent magnitude-redshift plane within bins (see Figure \ref{modelfunction}). We chose this parametrization because some of the selection effects create relatively clear cuts in this plane. We designed the bins to be one magnitude by 0.04 in redshift wide. We sampled at twice the resolution of the bin size, so that the data in each bin is partially shared with its neighbours. We then fitted a fourth-order (second-order has notable problems for the faintest and brightest galaxies, third-order was offset in the centre) two-dimensional polynomial to these bins and used it to obtain the correction function $f_{\textrm{cor}}\left(m_{\textrm{app,cor}},z_{\textrm{group}}\right)$. \begin{equation} \textrm{log}_{10}\left(R_{e,\textrm{cor}}\right)=a \cdot \textrm{log}_{10}\left(\sigma_{0}\right) + b \cdot \mu_{e} + c + f_{\textrm{cor}}\left(m_{\textrm{app,cor}},z_{\textrm{group}}\right). \label{correctedfundamentalplane} \end{equation} Using the correction function to obtain corrected sizes $R_{e,\textrm{cor}}$ for the fundamental plane galaxies and using them in turn to calculate distances, we were able to largely remove luminosity- and redshift-dependent biases and selection effects. As illustrated in Figure \ref{modelfunction}, our correction function reproduces the mean residuals in the bins very well within the range of our galaxy sample. Beyond the range of our sample of early-type galaxies, the function is barely constrained, but also irrelevant for our analysis. \subsection{The expanded fundamental plane} \label{meth_expfp} The simplest way to reduce scatter in a relation is to add additional terms (and thereby also free parameters) to account for previously unconsidered correlations.
\begin{equation} \textrm{log}_{10}\left(R_{e}\right)=a_{\textrm{exp}} \cdot \textrm{log}_{10}\left(\sigma_{0}\right) + b_{\textrm{exp}} \cdot \mu_{e} + d_{\textrm{exp}} \cdot \textrm{log}_{10}\left(M_{*}\right) + c_{\textrm{exp}} . \label{expandedfundamentalplane} \end{equation} To distinguish the coefficients of the expanded fundamental plane from those of the traditional fundamental plane, we added the index \emph{exp} to the coefficients in Equation \ref{expandedfundamentalplane}. When testing for systematic biases and studying the residuals of the fundamental plane, we found for our specific SDSS-based sample that the single best expansion of the fundamental plane is the stellar mass $M_{*}$ obtained by the Wisconsin method \citep{Chen:2012} using Maraston models \citep{Maraston:2011}. We also tested other stellar mass estimates provided by SDSS, such as the one based on \citet{Maraston:2009}, but found that the Wisconsin method yielded the best results for our applications. \section{Results} \label{sec_results} \subsection{Group catalogue} We used 1 473 971 galaxies from SDSS \citep{SDSS_DR15} and 6 629 galaxies from 2MRS \citep{2MRS} to create a group catalogue out to a redshift of 0.5, covering the 9 376 square degree footprint of the SDSS spectroscopic sample. The group catalogue was constructed using a friends-of-friends algorithm, for which we calibrated the linking length based on mock catalogues derived from the WMAP7 re-run of the Millennium simulation \citep{Guo:2013} and the selection criteria for the various samples that compose SDSS/BOSS and 2MRS, taking into account all significant biases. However, the direct implementation of the selection criteria yielded a far too low galaxy density (see Figure \ref{galaxy_density}), which required us to fine-tune the sample selection for the mock catalogues.
The colour cuts of the LRG, BOSS low-z and CMASS samples are especially sensitive to small systematic offsets between the semi-analytic galaxy models of \citet{Guo:2011} and observations. The inclusion of 2MRS partially compensated for the saturation bias of the SDSS spectroscopic sample by supplying redshifts of nearby bright galaxies. It ensures that the brightest group galaxies are included and that the group centres are found correctly. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{richness.pdf}\\ \caption{Richness of the detected groups as a function of the redshift. Groups with more than 100 members are mapped to 100 to keep this figure compact.} \label{richness} \end{center} \end{figure} With our optimized FoF group finder algorithm, we detected 165 132 groups and 997 161 individual galaxies within our SDSS/BOSS/2MRS dataset consisting of 1 480 600 objects. This does not necessarily mean that all the individual galaxies that are not members of detected groups are isolated galaxies; due to the Malmquist bias and other selection effects, we often only detected the brightest galaxy of a group. We find 3 467 groups with ten or more members, and 25 groups even contain more than a hundred galaxies. Naturally, the (apparently) richest groups are at lower redshifts (see Figure \ref{richness}), where the sample is the most complete, because it was derived from primarily magnitude-limited surveys. The majority of the saturation bias of SDSS was successfully corrected by the inclusion of 2MRS data. Overall, the group catalogue shows the expected properties, given the dataset used to create it. The complete group catalogue can be found in Tables \ref{tab_groupcat} and \ref{tab_gal_groupcat}.
The primary application of the group catalogue in this paper was to collapse the huge redshift space distortions (finger-of-god effect) caused by the proper motions of galaxies in clusters and to derive more accurate fundamental plane distances for rich clusters by combining the distances derived for the individual galaxies in said clusters. To be more specific, of our 318 149 early-type galaxies, 182 057 are individual galaxies and the remainder is located in 75 822 different groups. Of these groups, 43 851 contain only one early-type galaxy for which we have fundamental plane distances. This leaves 31 971 groups hosting at least two early-type galaxies, and 4 864 groups contain four or more early-type galaxies, which means that we could reduce the error of the distance measurements by about a factor of two. By combining the redshift measurements of several cluster members, we were able to largely remove the scatter introduced by the virial motions in said clusters. Additionally, by combining independent fundamental plane distance measurements to several early-type galaxies within these clusters, we were also able to reduce the scatter on the distance measurements, save for residual systematic uncertainties. 582 groups even host at least ten early-type galaxies, resulting in even better distance estimates for them. Furthermore, the group catalogue allowed for a comparison of our results with Tully-Fisher relation distances and the distances from the CosmicFlows-3 sample. With our catalogue, we reached beyond the group catalogue of \citet{Yang:2007}, which was limited by the SDSS DR7 spectroscopic sample and did not provide any groups at very low redshifts ($z<0.05$). The RedMapper catalogue \citep{Rykoff:2014} also excluded galaxies below a redshift of 0.08, but it has a larger sample, since it also contains galaxies with only photometric redshifts.
However, for the comparison with the Tully-Fisher relation and the CosmicFlows-3 dataset, nearby clusters are crucial, hence we could not just use the RedMapper catalogue. We also moved beyond the limited depth ($z<0.1$) of the \citet{Saulder:2016} catalogue, which is complete in the low-redshift range. Thereby, our improved group catalogue presented in this paper provides the ideal properties for our application to improve the fundamental plane distance measurements and compare them to other distance indicators. \subsection{Traditional fundamental plane} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{cfp_projection_z.pdf}\\ \caption{The traditional fundamental plane in z band, projected edge-on.} \label{cfp_projection_z} \end{center} \end{figure} \begin{table*} \begin{center} \begin{tabular}{c|cccc} band & $a$ & $b$ & $c$ & rms \\ \hline g & 0.889 $\pm$ 0.002 & 0.2772 $\pm$ 0.0002 & -7.129 $\pm$ 0.006 & 0.0908 \\ r & 0.958 $\pm$ 0.001 & 0.2896 $\pm$ 0.0002 & -7.311 $\pm$ 0.006 & 0.0871 \\ i & 0.986 $\pm$ 0.001 & 0.2944 $\pm$ 0.0002 & -7.355 $\pm$ 0.005 & 0.0850 \\ z & 1.004 $\pm$ 0.001 & 0.2979 $\pm$ 0.0002 & -7.371 $\pm$ 0.005 & 0.0833 \end{tabular} \end{center} \caption{Coefficients of the traditional fundamental plane optimized for usage as a distance indicator for our SDSS/BOSS sample.} \label{tfp_coeff} \end{table*} We fitted the traditional fundamental plane using Equation \ref{fundamentalplane} to our sample of 317 285 early-type galaxies. Thereby, we obtained the coefficients and root-mean squares listed in Table \ref{tfp_coeff}. The fit is illustrated in Figure \ref{cfp_projection_z}. The complete catalogue of fundamental plane distances derived using this method can be found in Table \ref{tab_tradfp_dist}. These calibrations were not yet corrected for any biases and selection effects, because, with the various overlapping samples, these effects and the cross-correlations arising from them are extremely difficult to estimate.
Hence, the coefficients obtained here are only to be used for the same SDSS/BOSS dataset, and not for any other galaxies without additional corrections. We discuss an effective correction for the distances obtained using these calibrations in the next section. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{distances_c.pdf}\\ \caption{Traditional fundamental plane distance compared to the redshift-based distances, which were used for calibration.} \label{distances_c} \end{center} \end{figure} The root-mean square of the fundamental plane is smaller in the redder filters. Hence, we use the z band for our distance measurements. In \citet{Saulder:2013}, it was shown that a combination of traditional fundamental plane distances from different filters does not improve the distance estimate beyond what one can reach in the band with the smallest root-mean square, due to tight correlations of the fundamental plane parameters between the different bands. We repeated this test with our data and could confirm their results. We find that the relative scatter of the traditional fundamental plane seems to slightly decrease for higher redshift galaxies (the absolute scatter stays constant after rising with distance at lower redshifts), as illustrated in Figure \ref{distances_c}. In the z band, we found a mean relative distance uncertainty of the fundamental plane of $18.4\%$ when combining it with our group catalogue. The distance uncertainty without the group catalogue lies at $20.2\%$, which nicely illustrates the improvement achieved by combining the fundamental plane distances with our group catalogue. About 0.3 percentage points of this uncertainty can be attributed to a systematic redshift bias, because of the hidden redshift dependences in the evolution corrections as well as the correction for the Tolman effect.
When studying the residuals of the fundamental plane, we found a notable dependence on the galaxies' absolute magnitudes (see Figure \ref{dist_absmag}). Distances to the intrinsically fainter early-type galaxies are systematically overestimated by almost a factor of two, and distances to the intrinsically brightest early-type galaxies are systematically underestimated. Considering the saturation bias of some of the SDSS spectroscopic sample as well as the Malmquist bias of the magnitude-limited parts of the survey, this causes a systematic overestimation of the distances to the most nearby objects and an underestimation of the distances to the farthest galaxies. A closer investigation of the biases and selection effects led us directly to the effective model discussed in the next section as well as to the expanded fundamental plane. \subsection{Distances corrected for systematic residuals} \label{sec_distcor} The dominant bias affecting the distances obtained using the traditional fundamental plane correlates with the absolute magnitude of the respective galaxies (see Figure \ref{dist_absmag}). Since these absolute magnitudes were calculated using the redshift-based distances, we could not use them directly to remove the residuals they create. In fact, the selection effects and cut-offs are best constrained in the redshift-apparent magnitude plane. Therefore, by applying the method described in Section \ref{meth_corfp}, we mapped the average residuals in this plane within bins and fitted a polynomial to obtain a correction function (see Figure \ref{modelfunction}). The correction function is well constrained within the range of our sample, which was sufficient for our applications. Using Equation \ref{correctedfundamentalplane}, we adjusted the predicted radii of the early-type galaxies for these systematic residuals.
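The construction of the correction function can be sketched in Python as follows (names are ours; we assume here a fourth-order polynomial in each of the two variables, and the binned mean residuals would come from the procedure described in Section \ref{meth_corfp}):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def fit_correction_function(m_app, z_group, mean_residuals, deg=4):
    """Fit a two-dimensional polynomial (degree `deg` in each variable) to
    the mean fundamental plane residuals binned in the apparent
    magnitude-redshift plane, returning f_cor(m_app, z_group)."""
    # design matrix of monomials m**i * z**j for i, j = 0 ... deg
    V = P.polyvander2d(np.asarray(m_app, float),
                       np.asarray(z_group, float), [deg, deg])
    coef, *_ = np.linalg.lstsq(V, mean_residuals, rcond=None)
    C = coef.reshape(deg + 1, deg + 1)

    def f_cor(m, z):
        return P.polyval2d(m, z, C)

    return f_cor
```

Evaluating the returned function at each galaxy's apparent magnitude and group redshift then supplies the additive term in Equation \ref{correctedfundamentalplane}.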
\begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{distances_d.pdf}\\ \caption{Corrected fundamental plane distance compared to the redshift-based distances.} \label{cor_distances} \end{center} \end{figure} We used these corrected radii to obtain distances (see Figure \ref{cor_distances}) to the early-type galaxies in our sample, which are provided in Table \ref{tab_cor_dist}. Aside from removing systematic effects created by the various selection criteria, this method also reduces the overall scatter of the fundamental plane distances to $15.9\%$ without and $14.5\%$ with the group catalogue. Since the correction function is by its very definition redshift-dependent, one might suspect that the redshift-dependent systematics would increase, but they actually slightly decrease to 0.2 percentage points of the scatter. \subsection{The expanded fundamental plane} \label{sec_expFP_res} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{sfp_mstar_res_sdss_manga.pdf}\\ \caption{Dependence of the residuals of the traditional fundamental plane on the stellar mass based on SDSS data (bluish cloud) and MaNGA data (small red stars).} \label{sfp_mstar_res_sdss_manga} \end{center} \end{figure} Here, we present the results of our calibration of the expanded fundamental plane, which was explained in Section \ref{meth_expfp}. We start by examining the shortcomings of the traditional fundamental plane, which motivated us to proceed with this alternative calibration. The residuals of the traditional fundamental plane are strongly correlated with the estimated stellar masses of the galaxies (see Figure \ref{sfp_mstar_res_sdss_manga}). Despite the notable scatter of the stellar masses from the spectro-photometric Wisconsin method \citep{Chen:2012} using Maraston models \citep{Maraston:2011}, one can clearly see a systematic effect. It becomes even more striking when one uses the higher quality stellar masses for MaNGA galaxies \citep{Graham:2018}.
As already illustrated in Figure \ref{dist_absmag}, the residuals also correlate with the absolute magnitudes, which is expected, since the stellar mass and the (redder) absolute magnitudes correlate with each other. The simplest way to incorporate this into the fundamental plane calibrations is to expand the relation with a term proportional to the logarithm of the stellar mass. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{exp_fp_projection_z.pdf}\\ \caption{The expanded fundamental plane in z band, projected edge-on. } \label{exp_fp_projection_z} \end{center} \end{figure} \begin{table*} \begin{center} \begin{tabular}{c|ccccc} band & $a_{\textrm{exp}}$ & $b_{\textrm{exp}}$ & $d_{\textrm{exp}}$ & $c_{\textrm{exp}}$ & rms \\ \hline g & -0.121 $\pm$ 0.001 & 0.1929 $\pm$ 0.0001 & 0.4100 $\pm$ 0.0004 & -7.628 $\pm$ 0.003 & 0.0454 \\ r & -0.043 $\pm$ 0.001 & 0.1971 $\pm$ 0.0001 & 0.4022 $\pm$ 0.0004 & -7.657 $\pm$ 0.003 & 0.0424 \\ i & -0.002 $\pm$ 0.001 & 0.2023 $\pm$ 0.0001 & 0.3930 $\pm$ 0.0004 & -7.681 $\pm$ 0.003 & 0.0404 \\ z & 0.022 $\pm$ 0.001 & 0.2064 $\pm$ 0.0001 & 0.3840 $\pm$ 0.0004 & -7.660 $\pm$ 0.003 & 0.0403 \end{tabular} \end{center} \caption{Coefficients of the expanded fundamental plane optimized for usage as a distance indicator for our SDSS/BOSS sample.} \label{exp_coeff} \end{table*} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{exp_distances_c.pdf}\\ \caption{Expanded fundamental plane distance compared to the redshift-based distances, which were used for calibration.} \label{exp_distances_c} \end{center} \end{figure} By fitting Equation \ref{expandedfundamentalplane} to the data, we obtained the values listed in Table \ref{exp_coeff} for the coefficients of the expanded fundamental plane.
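As a hedged numerical sketch of how the coefficients in Table \ref{exp_coeff} translate into a radius prediction, assuming the expanded relation takes the schematic form $\log R_{e} = a_{\textrm{exp}} \log \sigma_{0} + b_{\textrm{exp}} \mu_{0} + d_{\textrm{exp}} \log M_{*} + c_{\textrm{exp}}$ (the exact definition is Equation \ref{expandedfundamentalplane}, which is not restated here) and using purely hypothetical input values:

```python
import math

# z-band coefficients from the table above; the functional form used
# below is a schematic assumption for illustration only
A_EXP, B_EXP, D_EXP, C_EXP = 0.022, 0.2064, 0.3840, -7.660

def log_effective_radius(sigma0_kms, mu0_mag, log_mstar):
    """Predicted log10 of the effective radius for hypothetical input
    values (central velocity dispersion in km/s, surface brightness in
    mag/arcsec^2, log10 of the stellar mass in solar masses)."""
    return (A_EXP * math.log10(sigma0_kms) + B_EXP * mu0_mag
            + D_EXP * log_mstar + C_EXP)

# hypothetical early-type galaxy: sigma0 = 200 km/s,
# mu0 = 20 mag/arcsec^2, log10(M*/Msun) = 11
log_re = log_effective_radius(200.0, 20.0, 11.0)
print(f"predicted log10(R_e): {log_re:.3f}")
```

For these hypothetical inputs the predicted radius comes out at a few kpc, i.e. a plausible size for a massive early-type galaxy, but the numbers serve only to illustrate the arithmetic, not to reproduce any result of the paper.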
As illustrated in Figure \ref{exp_fp_projection_z}, this fit is notably tighter than for the traditional fundamental plane (see Figure \ref{cfp_projection_z}), and it reduces the uncertainty of the individual fundamental plane distances to $9.6\%$, and to $9.0\%$ when also applying the group catalogue to further reduce the scatter. This is a significant improvement in the distance estimates (see Figure \ref{exp_distances_c}). However, the explicit systematic redshift dependence becomes more complex. In contrast to the redshift-dependent systematics of the traditional fundamental plane, the magnitude of the systematics for the expanded fundamental plane correlates with the redshift itself. For nearby galaxies (redshifts below 0.03), we find a redshift-dependent systematic bias of $1.7\%$. It continuously shrinks to almost zero ($0.07\%$) for redshifts of 0.2 and higher. Averaged over the entire sample, the contribution of the systematic redshift bias to the overall scatter is, at 0.4 percentage points, of the same magnitude as for the traditional fundamental plane. This systematic bias arises from a combination of the redshift dependence of the evolution correction, the correction for the Tolman effect, and the additional systematics caused by the use of the stellar masses. We provide a complete catalogue of expanded fundamental plane distances derived using this method in Table \ref{tab_expfp_dist}. \subsection{Comparison with Tully-Fisher relation data} \label{sec_tf_comp} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{TF_FP_distances_all.pdf}\\ \caption{Comparison between Tully-Fisher relation distances and various fundamental plane and redshift-based distances.
Error bars were omitted in this figure to avoid overcrowding, but they were of about the same size as in Figure \ref{TF_FP_distances_richclusters}.} \label{TF_FP_distances} \end{center} \end{figure} We cross-matched galaxies with known Tully-Fisher relation distances in the NASA/IPAC Extragalactic Database with our group catalogue and found 4 481 objects. To be more precise, we found 20 900 Tully-Fisher relation based distance measurements for 4 481 unique galaxies. As a consistency check, we compared the Tully-Fisher distances to the redshift-based distances and found an overall scatter of $27.6\%$ for individual galaxies (and $23.5\%$ for the groups hosting them), which is about the magnitude expected, considering that the database contains distances from various sources. Since the Tully-Fisher relation only works for late-type galaxies and the fundamental plane only works for early-type galaxies, there is no direct overlap between the two distance indicators. Hence, we had to take advantage of our group catalogue. We selected every cluster that had at least one galaxy with a Tully-Fisher relation distance and at least one galaxy with a fundamental plane distance; 539 groups in our dataset fulfilled this requirement. As illustrated in Figure \ref{TF_FP_distances}, we find poor agreement with the traditional fundamental plane ($41.7\%$ error on average), mostly because the traditional fundamental plane tends to overestimate distances due to the saturation bias of SDSS and due to the parameters being optimized for the bright galaxies as a result of our sample selection. The brightest galaxies are missing in the overlapping region between our fundamental plane distances and the Tully-Fisher relation distances. After correcting for the systematic biases of the traditional fundamental plane, we still found a sizeable scatter of $37.0\%$ when comparing to the Tully-Fisher relation data.
With our expanded fundamental plane, which also considers the stellar masses of the galaxies, we obtained a $31.3\%$ scatter between the Tully-Fisher relation distances and the distances derived from the expanded fundamental plane. This value is only marginally worse than the scatter of $29.4\%$ between the redshift-based distances and the Tully-Fisher relation distances. This subsample is still plagued by occasional interlopers due to imperfections of the group catalogue. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{TF_FP_distances_richclusters.pdf}\\ \caption{Comparison between Tully-Fisher relation distances and various fundamental plane and redshift-based distances for groups that host at least three late-type and at least three early-type galaxies.} \label{TF_FP_distances_richclusters} \end{center} \end{figure} When looking at richer groups that contain at least three galaxies for which we have Tully-Fisher relation distances in our database and at least three galaxies for which we derived fundamental plane distances, we found even stronger correlations for the 45 groups that fulfil these criteria (see Figure \ref{TF_FP_distances_richclusters}). Thereby, we reduced the impact of interlopers and imperfections of our group catalogue and increased the statistical quality of the distance estimate to each cluster for all methods. To be more precise, the scatter between the redshift-based distances and the Tully-Fisher relation distances is $7.5\%$, while the scatter between the traditional fundamental plane and the Tully-Fisher relation distances is $18.7\%$. Interestingly, the scatter for the corrected fundamental plane is, at $17.0\%$, only marginally lower, but the correction visibly reduced the systematic offset present in the traditional fundamental plane (see Figure \ref{TF_FP_distances_richclusters}).
The expanded fundamental plane yields a scatter of $10.8\%$ when compared to the Tully-Fisher relation for the richer groups sample, and thereby also provides the best agreement between the two methods. \subsection{Comparison with CosmicFlows-3 data} \label{sec_cf_comp} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{CosmicFlows_distances_all.pdf}\\ \caption{Comparison between CosmicFlows-3 distances and various fundamental plane and redshift-based distances. Error bars were omitted in this figure to avoid overcrowding, but they were of about the same size as in Figure \ref{CosmicFlows_distances_richclusters}.} \label{CosmicFlows_distances_all} \end{center} \end{figure} The CosmicFlows project \citep{Cosmicflows3} collects distances from a multitude of different methods to model the matter distribution in the local universe and the peculiar motion field. We matched the 17 669 CosmicFlows-3 galaxies to our group catalogue and excluded all galaxies in the CosmicFlows sample for which the distances were obtained using only the traditional fundamental plane (marked with \emph{P} or \emph{F} in their catalogue). We found 2 955 galaxies fulfilling these requirements. When comparing the distances provided by CosmicFlows-3, after rescaling them to the cosmology used in our paper, to the redshift-based distances for the same galaxies, we found a scatter of about $27.5\%$ (and $24.9\%$ for groups). We further restricted our sample in the same way as in the previous section by selecting only groups that have at least one galaxy for which we have a fundamental plane distance and one galaxy for which we have an alternative distance estimate. This left us with 339 groups (see Figure \ref{CosmicFlows_distances_all}), which yielded correlations similar to our previous findings. The redshift-based distances agree with the CosmicFlows distances with a scatter of $19.7\%$.
The traditional fundamental plane exhibits the same bias as before, and we obtained a scatter of $36.9\%$ when comparing it to the CosmicFlows distances, again due to the same systematics already discussed for the Tully-Fisher relation distances. After correcting for the dominant systematics in the residuals of the fundamental plane, we got a scatter of $31.7\%$ between the corrected fundamental plane distances and the CosmicFlows distances. The expanded fundamental plane yields a scatter of $23.5\%$ when compared to the CosmicFlows distances. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{CosmicFlows_distances_richclusters.pdf}\\ \caption{Comparison between CosmicFlows-3 distances and various fundamental plane and redshift-based distances for groups that host at least three galaxies with fundamental plane distances and at least three galaxies with complementary distance measurements.} \label{CosmicFlows_distances_richclusters} \end{center} \end{figure} We refined our sample by restricting it to rich groups that have at least three galaxies for which we obtained fundamental plane distances and at least three galaxies for which we have alternative distance measurements from CosmicFlows-3. Thereby, we found 29 groups. The scatter between the redshift-based distances and the CosmicFlows distances was found to be $12.7\%$ for this subsample. The traditional fundamental plane clearly exhibits the same systematic offset as in the case of the Tully-Fisher distances (see Figure \ref{CosmicFlows_distances_richclusters}) and yields a scatter of $27.3\%$ compared to the CosmicFlows distances. Again, the corrected fundamental plane produces a slightly lower scatter of $26.7\%$, and the expanded fundamental plane a notably lower scatter of $18.8\%$.
\subsection{Comparison with Supernova Type Ia data} \label{sec_sn_comp} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{sn_bet15_distsdss.pdf}\\ \caption{Comparison between Supernova Type Ia distances and various fundamental plane and redshift-based distances.} \label{sn_bet15_distsdss} \end{center} \end{figure} We took the catalogue of Supernova Type Ia distances\footnote{They were derived from the distance moduli listed in the cited catalogue following the procedure explained in their paper.} from \citet{Betoule:2014} and cross-matched it with our catalogue of various fundamental plane distances. We found that 33 of our early-type galaxies hosted Supernovae Type Ia from that catalogue. Again, the traditional fundamental plane performs poorly in comparison to the Supernova Type Ia distances, with a scatter of $27.8\%$ (see Figure \ref{sn_bet15_distsdss}). The corrected fundamental plane distances have a scatter of $25.0\%$ when compared to the Supernova Type Ia distances. The expanded fundamental plane yields a scatter of $21.0\%$ compared to the Supernova Type Ia distances.
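The group-level comparisons in this and the two preceding subsections all reduce to the same operation: select groups with members in both samples, take the median distance per indicator, and compute the relative scatter between the medians. A minimal sketch (the helper name, toy values, and the membership threshold are illustrative assumptions, not the paper's code):

```python
import numpy as np

def cross_indicator_scatter(groups, min_members=1):
    """Relative scatter between two distance indicators compared at
    group level. `groups` maps a group id to two lists of member
    distances, one per indicator; only groups with at least
    `min_members` entries in both lists are used (illustrative helper)."""
    ratios = []
    for d_fp, d_alt in groups.values():
        if len(d_fp) >= min_members and len(d_alt) >= min_members:
            ratios.append(np.median(d_fp) / np.median(d_alt) - 1.0)
    return np.sqrt(np.mean(np.square(ratios)))

# toy example: two groups with member distances (Mpc) per indicator
groups = {
    1: ([100.0, 104.0, 96.0], [101.0, 99.0]),
    2: ([200.0, 190.0], [210.0, 205.0, 202.0]),
}
print(f"scatter: {cross_indicator_scatter(groups):.3f}")
```

Raising `min_members` corresponds to the "rich groups" cuts above, which trade sample size for robustness against interlopers.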
\section{Discussion} \label{sec_discussion} \begin{table*} \begin{center} \begin{tabular}{c|cccc|ccccc} distance indicator & $\overline{D_{\textrm{err,ind}}}$ & $\overline{D_{\textrm{err,group}}}$ & $\overline{D_{\textrm{err,sys}}}$ & range $D_{\textrm{err,sys}}$ & $\Delta_{\textrm{TF,all}}$ & $\Delta_{\textrm{TF,rich}}$ & $\Delta_{\textrm{CF3,all}}$ & $\Delta_{\textrm{CF3,rich}}$ & $\Delta_{\textrm{SN Ia}}$ \\ \hline traditional FP & $20.2\%$ & $18.4\%$ & $0.3\%$ & $\sim0.3\%$ & $41.7\%$ & $18.7\%$ & $36.9\%$ & $27.3\%$ & $27.8\%$ \\ corrected FP & $15.9\%$ & $14.5\%$ & $0.2\%$ & $\sim0.2\%$ & $37.0\%$ & $17.0\%$ & $31.7\%$ & $26.7\%$ & $25.0\%$ \\ expanded FP & $9.6\%$ & $9.0\%$ & $1.1\%$ & 2.0 - 0.1 $\%$ & $31.3\%$ & $10.8\%$ & $23.5\%$ & $18.8\%$ & $21.0\%$ \\ redshifts & - & - & - & - & $29.4\%$ & $7.5\%$ & $19.7\%$ & $12.7\%$ & $8.2\%$ \end{tabular} \end{center} \caption{Summary of the different methods to obtain fundamental plane distances presented in this paper as well as redshift-based distances for comparison. 
First column: name of the method; second column: overall average error in the distance estimate for individual galaxies; third column: overall average error in the distance estimate for galaxy groups; fourth column: average systematic redshift-dependent error of the distance estimate; fifth column: range of the systematic redshift-dependent error of the distance estimate due to redshift-space distortions; sixth column: scatter between the respective distance indicator and the Tully-Fisher relation distances using the complete overlapping sample; seventh column: scatter between the respective distance indicator and the Tully-Fisher relation distances using only rich clusters in the overlapping sample; eighth column: scatter between the respective distance indicator and the CosmicFlows-3 distances using the complete overlapping sample; ninth column: scatter between the respective distance indicator and the CosmicFlows-3 distances using only rich clusters in the overlapping sample; tenth column: scatter between the respective distance indicator and Supernova Type-Ia distances.} \label{overview} \end{table*} It can be difficult to tell which is the optimal way to implement the fundamental plane as a distance indicator. Several issues arise from the fact that the SDSS spectroscopic sample is mostly, but not completely, magnitude-limited due to some colour-selected subsamples, and additionally suffers from a saturation bias. Hidden and explicit redshift dependences are a problem when one intends to use the fundamental plane as a redshift-independent distance indicator. By examining the advantages and disadvantages of the various fundamental plane calibrations and definitions provided in the previous section, we want to illustrate which calibration is best-suited for which application. \subsection{Sample selection and basic methods} One of the main goals of this paper is to maximize the sample size of galaxies with fundamental plane distances.
To this end, we had to move beyond our previous selection criteria \citep{Saulder:2013,Saulder:2015}, which were dominated by the limitations of GalaxyZoo \citep{GalaxyZoo,GalaxyZoo_data}. The citizen science project GalaxyZoo only provided visual morphological classifications for the galaxies covered by SDSS DR7 \citep{SDSS_DR7}. Alternative approaches providing morphological classifications using machine learning \citep{DominguezSanchez:2018} were also limited by their restricted sample selection (again SDSS DR7). While there are advantages to a more clearly defined sample such as SDSS DR7, it excludes many valuable galaxies, even at the lower end of the redshift range. As illustrated in Figure \ref{SDSSBOSS_subsamples}, we used a composite SDSS/BOSS sample based on its latest data release \citep{SDSS_DR15}. Our selection criteria (see Appendix \ref{app_etgsel}) did not restrict our sample to any specific subset of SDSS/BOSS. This means that if data of sufficient quality was available for a galaxy in SDSS, the galaxy was used in our sample. If we had restricted ourselves to SDSS DR7, we would have missed out on 72 262 galaxies, which is a significant fraction of our dataset. The size of our sample beyond SDSS DR7 was also the reason why we used the data from \citet{Simard:2011} and \citet{Mendel:2014} only for additional tests and could not take advantage of the data of \citet{Meert:2015} and \citet{Meert:2016}. Our quality selection criteria ensured that our sample became increasingly sparse at higher redshifts, thereby avoiding problematic galaxies and uncertain parameter estimates. We barely had any CMASS galaxies in our sample of early-type galaxies and thereby avoided most of the problems described in \citet{Bernardi:2011} and \citet{MonteroDorta:2016}. We used the de Vaucouleurs magnitudes and sizes from SDSS, because we found in \citet{Saulder:2013} that they yield the best-fitting values for the fundamental plane.
The composite model and Petrosian magnitudes and sizes performed worse, and in the Appendix of \citet{Saulder:2015}, we also showed weaker fits for the Sersic models of \citet{Simard:2011}. In contrast to this, \citet{Bernardi:2017a} and \citet{Bernardi:2017b} found notable deficiencies in the profile fits provided by SDSS, especially at their outer edges. However, the alternative catalogues provided by them are limited to the SDSS DR7 spectroscopic sample, which causes the same problems as for the other catalogues mentioned earlier. We also tested various stellar mass models provided by the SDSS database. We found that the stellar masses of the Wisconsin method \citep{Chen:2012} using Maraston models \citep{Maraston:2011} work best for the expanded fundamental plane calibrations. Alternatively, we used the passive port of the stellar mass models of \citet{Maraston:2009}, which yielded an expanded fundamental plane with a larger scatter than the one provided in Section \ref{sec_expFP_res}. With the stellar masses of \citet{Maraston:2009}, we found some very interesting relations for an alternative distance calibration, briefly explained in Appendix \ref{app_add_fp}. This relation was tentative at best and did not reappear with the stellar masses of the Wisconsin method. \subsection{Traditional fundamental plane} The traditional fundamental plane has been used for about three decades, and during this time various approaches on how to calibrate and apply it have been developed. Since, in this paper, we primarily view the fundamental plane as a distance indicator, we restrict ourselves to direct fits, which, according to the very detailed work of \citet{Sheth:2012}, yield the most suitable coefficients for our applications. The selection effects due to the survey design were another issue. The common way to address them is to derive unbiased fundamental plane coefficients using volume weights.
However, the combination of colour cuts and magnitude limits makes this approach unfeasible. Hence, as illustrated in Figure \ref{dist_absmag}, our sample is clearly biased. Our sample contains a disproportionately large number of bright galaxies. Since the traditional fundamental plane residuals have a strong dependence on the stellar mass, and thereby the luminosity, we would underestimate distances for bright (and thereby on average more distant) galaxies. Hence, we did not use such bias corrections for our calibrations, because we wanted to obtain the best-suited coefficients for our biased galaxy sample, that is, the coefficients that yield the smallest error in terms of distance measurement for said sample. One of our goals for this paper is to provide the largest possible sample of fundamental plane distances that one can obtain from the latest data release of SDSS. To this end, we slightly relaxed the selection criteria for what qualifies as an early-type galaxy in some aspects (but also tightened them in other aspects) compared to previous work \citep{Saulder:2013,Saulder:2015,Saulder:2016}. The most notable difference was dropping the GalaxyZoo \citep{GalaxyZoo} classifications in favour of a more reproducible method using colours and profile fits. Thereby, we were also able to move beyond SDSS DR7 \citep{SDSS_DR7}, the basis of GalaxyZoo, and include significantly more galaxies than in previous calibrations \citep{Saulder:2013,Saulder:2015}. We identified 334 388 early-type galaxies with our method, and while calibrating the fundamental plane, we excluded notable outliers, mildly reducing our sample to 317 285 galaxies for which we were able to derive fundamental plane distances. \citet{DOnofrio:2008,Nigoche:2009} have already shown that the fundamental plane varies for different luminosity and velocity dispersion ranges.
When varying our selection criteria slightly in the luminosity (absolute magnitude) and central velocity dispersion ranges, we found that galaxies with very low central velocity dispersions have the largest impact on the quality of our calibrations. However, a cut in this parameter also affects the sample size, which we want to keep as large as reasonably possible. Therefore, we compromised on an uncorrected velocity dispersion limit of 100 km/s, which was previously used in \citet{Saulder:2015}; this only reduced the sample size by about 10 000 galaxies, while decreasing the distance uncertainty by 0.4 percentage points, which was a reasonable trade-off in our opinion. An additional improvement of the fundamental plane calibrations was achieved by our group catalogue. It allowed us to correct for the redshift-space distortions caused by the peculiar motions of galaxies in clusters. This worked in two ways. First, it helped with the calibration of the fundamental plane (or actually fundamental planes, since we also used the same method for the stellar mass fundamental plane), because we used the median group redshift instead of the individual redshifts of the galaxies when we derived the fundamental plane parameters\footnote{The magnitudes used to derive the surface brightnesses were evolution-corrected. Also, the estimated distances used to get $R_{e}$ for the calibration made use of the group redshifts.}. Additionally, we used it to reduce the distance uncertainties to groups that hosted more than one early-type galaxy for which we were able to derive a fundamental plane distance. By taking the median of the fundamental plane distances of the different early-type galaxies, we could improve the distance estimate to these groups and clusters significantly. Using the median instead of the mean has the advantage that it is less sensitive to interlopers, which plague all FoF-based group catalogues.
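The robustness argument for the median can be illustrated with a toy group (all values hypothetical): a single interloper shifts the mean of the member distances substantially, while the median is nearly untouched.

```python
import numpy as np

# toy group: five member distances in Mpc, one interloper at 250 Mpc
member_distances = np.array([101.0, 98.0, 103.0, 99.0, 250.0])

group_mean = float(np.mean(member_distances))
group_median = float(np.median(member_distances))
print(f"mean: {group_mean:.1f} Mpc, median: {group_median:.1f} Mpc")
```

Here the interloper drags the mean to 130.2 Mpc, while the median stays at 101.0 Mpc, close to the four genuine members.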
The group catalogue will also help us in our future research, when we will take a quality-selected subsample from our distance catalogue to study peculiar motions. The magnitudes used for the traditional fundamental plane were corrected for evolutionary effects using Equation \ref{evolutioncorrection}, which is based on the established method by \citet{Bernardi:2003c}. Assuming a constant number density of the brightest galaxies, we derived a $Q$ parameter of 0.71 mag/$z$, which is slightly lower than previous estimates \citep{Bernardi:2003c,Saulder:2013} using different methods. We argue that adjusting the evolution effects for the brightest galaxies is sufficient for our application, because at the higher redshifts, where evolution becomes most relevant, those galaxies are the only ones still detected within the sample. However, evolution corrections have an explicit redshift dependence, which creates a small systematic bias. Furthermore, the surface brightnesses used for the fundamental plane have to be corrected for the Tolman effect, which dims surface brightnesses as a function of the cosmological redshift (hence distance). Although the K-corrections are, by their very nature, also redshift-dependent, this is not an issue for them. The K-correction only corrects the shift in the spectral energy distribution, which depends on the observed redshift (caused by peculiar motions and the Hubble expansion). Therefore, there is no implicit pure distance dependence in this correction (it does not matter what caused the redshift). In contrast to this, the evolution correction as well as the correction for the Tolman effect depend explicitly on the cosmological redshift, which correlates with the distance. However, one cannot measure the cosmological redshift directly, because in practice the observed redshift is the sum of the cosmological redshift and the redshift caused by peculiar motions.
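Schematically, and using standard conventions rather than the paper's exact equations (Equation \ref{evolutioncorrection} defines the evolution correction precisely, so the sign conventions below are only a sketch), the redshift dependences discussed above can be summarized as:

```latex
% evolution correction: linear in the cosmological redshift (schematic sign)
M_{\mathrm{corr}} \simeq M_{\mathrm{obs}} + Q\, z_{\mathrm{cos}},
    \qquad Q = 0.71~\mathrm{mag}/z ,
% Tolman dimming: (1+z)^4 in bolometric surface brightness, i.e.
\mu_{\mathrm{corr}} \simeq \mu_{\mathrm{obs}} - 10 \log_{10}\!\left(1 + z_{\mathrm{cos}}\right) ,
% while the observable redshift mixes cosmology and peculiar motion:
(1 + z_{\mathrm{obs}}) = (1 + z_{\mathrm{cos}})\,(1 + z_{\mathrm{pec}})
    \approx 1 + z_{\mathrm{cos}} + z_{\mathrm{pec}} .
```

Only the last relation is directly observable, which is why the first two corrections inherit a hidden dependence on the unknown peculiar redshift.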
In order to estimate the systematic effect, we introduced a Gaussian scatter of the same magnitude as the average 1-dimensional peculiar velocities of the groups ($\sim340$ km/s) with the help of our mock catalogues. By comparing the distances obtained from the perturbed and unperturbed data, we found a systematic bias of $0.3\%$ on the distance estimates caused by the hidden redshift dependences and redshift-space distortions. We tested the dependences of the residuals of the traditional fundamental plane on several parameters. We focussed on parameters that are (mostly) independent of the parameters of the traditional fundamental plane. Using the data of \citet{Simard:2011}, we could not find any dependence on the Sersic parameter for our sample of early-type galaxies. There is a clear dependence on the number of (early-type) galaxies per group, which will be discussed in more detail below. Also, the dependence on the stellar masses \citep{Maraston:2011,Chen:2012} will be discussed along with the expanded fundamental plane. Using the data from MaNGA \citep{Manga}, we were also able to test the dependence on the $\lambda_{\textrm{R}}$ parameter \citep{SauronIX}, which according to \citet{Graham:2018} correlates with the stellar mass. However, using the same data, we could not find any notable dependence between the $\lambda_{\textrm{R}}$ parameter and the residuals of the traditional fundamental plane. We did not find any correlation of the residuals with galaxy colours or axis ratios. \subsection{Corrected fundamental plane} \begin{figure*} \begin{center} \includegraphics[width=0.31\textwidth]{groupsizes_etg.pdf} \includegraphics[width=0.31\textwidth]{groupsizes_etg_near.pdf} \includegraphics[width=0.31\textwidth]{groupsizes_etg_cor.pdf} \\ \caption{Dependence of the fundamental plane residuals on the number of early-type galaxies per group.
Left panel: residuals of the traditional fundamental plane using the entire sample; central panel: residuals of the traditional fundamental plane using only the galaxies with a redshift of less than 0.1; right panel: residuals of the corrected fundamental plane using the entire sample.} \label{groupsizes_etg} \end{center} \end{figure*} To account for the systematic biases of the traditional fundamental plane, we measured the mean residuals in bins in redshift-magnitude space. By adding a fitting function based on the residuals to the fundamental plane, we were not only able to remove the most dominant systematic bias, but also notably reduced the scatter. This correction also removed the systematic offset of nearby galaxies in rich clusters seen in Figures \ref{TF_FP_distances_richclusters} and \ref{CosmicFlows_distances_richclusters}. We illustrate in Figure \ref{groupsizes_etg} that the systematic bias that correlates with the richness (in early-type galaxies) of the groups/clusters is visibly reduced. Furthermore, in contrast to the traditional fundamental plane, there is no systematic offset of the residuals of the corrected fundamental plane for nearby clusters. Aside from removing notable systematics, the overall scatter of the corrected fundamental plane is reduced to $14.5\%$. Despite our correction function being redshift-dependent, the overall redshift-dependent systematics due to redshift-space distortions are, at just $0.2\%$, comparable to those of the traditional fundamental plane. The correction function we used is a simple and effective model that is best suited for our large and complex sample of early-type galaxies. There is some room for further improvement, possibly obtaining better distances using a fully Bayesian model similar to \citet{Howlett:2017} and \citet{Qin:2018} to correct for systematics, but only for a smaller and well-defined subsample.
However, to maximize the galaxy sample, the corrected fundamental plane is the best safe improvement over the systematically biased traditional fundamental plane calibrations. \subsection{Expanded fundamental plane} There is a clear (absolute) luminosity dependence of the traditional fundamental plane (see Figure \ref{dist_absmag}), which naturally causes problems for magnitude-limited surveys. Also, one cannot use the absolute magnitudes obtained from redshift-based distances to improve the (redshift-independent) fundamental plane without being plagued by countless other systematic biases. Aside from using the corrected fundamental plane, we tried to address this in many different ways, which are briefly discussed in Appendix \ref{app_add_fp}. The stellar mass roughly correlates with the absolute magnitudes, and it can be estimated by fitting spectro-photometric models of the spectral energy distribution using the method of \citet{Chen:2012} and the models of \citet{Maraston:2011}, as provided by SDSS. By using their stellar masses as an additional parameter for the fundamental plane\footnote{Strictly speaking, it is not a plane any more, but a hyper-plane.}, we could noticeably reduce the scatter of the distances obtained from this relation. As illustrated in Figure \ref{sfp_mstar_res_sdss_manga}, higher quality stellar masses \citep{Graham:2018}, such as those derived from integral-field surveys, in this case MaNGA \citep{Manga}, have the potential to further improve the distance estimates. We also tested other stellar mass estimates provided by SDSS, such as the photometric stellar masses using the methods of \citet{Maraston:2009} and \citet{Maraston:2013}. We found a notably larger scatter when using these stellar masses, and the resulting coefficients are different. Most notably, the $a_{\textrm{exp}}$ coefficient is more important with the photometric stellar masses than with the spectro-photometric stellar masses.
This makes sense, since the central velocity dispersion was used in the calibrations based on the method of \citet{Chen:2012}. For our definition of the expanded fundamental plane (see Equation \ref{expandedfundamentalplane}), we took advantage of the dominant bias and added a term for the stellar mass dependence to the traditional fundamental plane. This way we could remove some of the systematic bias at low redshift while also significantly reducing the overall scatter of our distance estimates. We found a scatter of $9.6\%$ for the distances obtained from the expanded fundamental plane, when compared to the redshift-distances used for calibration. The average systematic redshift-dependent bias is, at $1.1\%$, notably larger than for the traditional and corrected fundamental plane. However, there is a hidden redshift dependence in the stellar mass models used, which was difficult to quantify exactly. Therefore, to test its impact on the systematics, we simply rescaled the stellar masses according to the redshift perturbation introduced in the previous subsection (by considering the difference between the real and derived luminosity distances). Another problem is that the magnitude of the systematic bias depends on the redshift itself and reaches higher values (up to $2\%$) for nearby galaxies. This will have to be taken into account when deriving peculiar motions from these distances. \subsection{Comparison with other distance indicators} In order to test our fundamental plane distances, we compared them to both redshift-based distances and other distance indicators. Since we used them for calibrations, we have redshift-based distances to all galaxies in our sample at our disposal. Additionally, we obtained supernovae Type Ia distances to a small subset of our galaxies.
Furthermore, by using our group catalogue, we were able to determine Tully-Fisher relation distances to nearby groups hosting both early- and late-type galaxies and used them for comparison as well. Moreover, we took advantage of the CosmicFlows-3 \citep{Cosmicflows3} sample to test our distance estimates. The comparison with the redshift-based distances yielded an upper limit for the statistical error of our calibration, because the redshift-based distances are themselves biased by the peculiar motions of the galaxy groups\footnote{Not individual galaxies, because we used our group catalogue to correct for the redshift-space distortions in clusters, but we might get occasional additional bias from interlopers in return.}. Furthermore, the complementary distance indicators allowed us to test the quality of our calibrations and to better check for any systematic biases (see Table \ref{overview} for a brief overview and comparison of our results). It is impossible to compare Tully-Fisher relation distances and fundamental plane distances directly, because by their very definition they target mutually exclusive types of galaxies. However, our group catalogue allowed us to compare these two distance indicators for several galaxy groups and clusters. The slight disadvantage of this method is that group catalogues are not perfect and there might be interlopers affecting the dataset. The only way to minimize this effect is to use rich groups and median distances. When just merging the Tully-Fisher relation distances obtained from the NASA/IPAC Extragalactic Database with our group catalogue and comparing them to the redshift-based distances, we obtained an uncertainty of about $23.5\%$ ($23.5\%$ without the group catalogue), which is worse than the traditional fundamental plane. Considering that the Tully-Fisher relation distances are compiled from various sources, both indicators can be considered to be of about the same overall quality.
However, the traditional fundamental plane exhibits a strong systematic bias (see Figure \ref{TF_FP_distances}) at short distances, which becomes very apparent in this test, because Tully-Fisher relation data only reaches out to about 300 Mpc. This is due to the SDSS saturation bias, which excludes the brightest galaxies from the main galaxy sample in the nearby universe, as well as due to selection effects introduced by the survey design. As illustrated in Figure \ref{dist_absmag}, there is a systematic bias in the traditional fundamental plane depending on the intrinsic brightness of galaxies. Therefore, the fundamental plane distances, which are calibrated for the entire range of magnitudes\footnote{We double-checked that this is not due to the lack of a Malmquist-bias/saturation correction by also looking at the distances derived using the fundamental plane coefficients obtained using volume-weights. We found a similar (actually slightly worse) systematically biased distribution.} of the SDSS and BOSS sample, are systematically overestimated. In contrast to this, both the corrected fundamental plane and the expanded fundamental plane are not affected by this bias, not even for rich clusters, where it is most striking for the traditional fundamental plane (see Figure \ref{TF_FP_distances_richclusters}). We repeated the same procedure with the CosmicFlows-3 \citep{Cosmicflows3} dataset, from which we only excluded all fundamental plane distances. The advantage of the CosmicFlows-3 sample compared to the Tully-Fisher relation distances obtained from NED is that it is consistently calibrated. As illustrated in Figures \ref{CosmicFlows_distances_all} and \ref{CosmicFlows_distances_richclusters}, the overall behaviour is fairly similar to the Tully-Fisher relation distance sample. Due to the overlap between the two samples, this is expected.
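The group-based comparison of two distance indicators can be sketched as follows; `group_median_comparison` and its inputs are hypothetical names, and taking per-group medians is the interloper-suppression step described above:

```python
import numpy as np

def group_median_comparison(group_ids_a, dist_a, group_ids_b, dist_b):
    """Compare two distance indicators via per-group median distances.

    Medians over group members suppress interlopers. Returns fractional
    offsets (d_a - d_b) / d_b for groups present in both samples.
    Illustrative sketch only, not the paper's exact pipeline.
    """
    med_a = {g: np.median(dist_a[group_ids_a == g]) for g in np.unique(group_ids_a)}
    med_b = {g: np.median(dist_b[group_ids_b == g]) for g in np.unique(group_ids_b)}
    common = sorted(set(med_a) & set(med_b))
    return np.array([(med_a[g] - med_b[g]) / med_b[g] for g in common])
```

The scatter of these fractional offsets then serves as the figure of merit when cross-checking, e.g., fundamental plane against Tully-Fisher distances.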
Supernovae Type Ia are rare, but out of the 740 supernovae in the database of \citet{Betoule:2014}, we found 33 within our sample of galaxies with fundamental plane distances. The main advantage of the supernovae Type Ia dataset is that it covers a much wider range in distances than the Tully-Fisher relation dataset. The supernovae Type Ia dataset does not show any notable systematic biases (see Figure \ref{sn_bet15_distsdss}) for any fundamental plane. There is a minor discrepancy between the redshifts from the supernova catalogue and the SDSS redshifts, but using the redshifts from the supernova catalogue instead only marginally decreases the error between the supernovae distances and the redshifts to $7\%$\footnote{It would remain at $8\%$ if taking all 740 galaxies of the supernova catalogue.}, while slightly increasing all other errors. \section{Summary and Conclusions} \label{sec_sum_and_concl} We used the latest data release from SDSS \citep{SDSS_DR15} to derive the largest set of fundamental plane distances to date. We provided a comprehensive catalogue of fundamental plane distances to 317 285 galaxies up to a redshift of 0.4. We calculated distances using the traditional fundamental plane, as well as two alternative variants of the fundamental plane, which we called the corrected fundamental plane and the expanded fundamental plane. Additionally, we constructed a FoF group catalogue based on the SDSS spectroscopic sample up to a redshift of 0.5, which was supplemented by 2MRS \citep{2MRS} data to partially compensate for the saturation limit of SDSS spectroscopy. This group catalogue helped us to reduce the scatter of distances obtained from the traditional fundamental plane from an average of $20.2\%$ down to an average of $18.4\%$.
Additionally, it allowed us to conduct further tests of our distance calibrations by helping us to compare our fundamental plane distances to Tully-Fisher relation distances obtained from NED, distances from the CosmicFlows-3 \citep{Cosmicflows3} sample, and supernovae Type Ia distances obtained from \citet{Betoule:2014}. We defined the corrected fundamental plane to combat systematic biases affecting the traditional fundamental plane by adding a correction function that removes said biases. Although this function is explicitly redshift dependent, we did not measure any increase in the systematics due to redshift-space distortions. With the scatter of the distance estimates reduced to $14.5\%$, we consider it the best and safest way to improve the traditional fundamental plane. A more experimental way to even further reduce the uncertainties in the distance measurements is the expanded fundamental plane, which we obtained by adding a term proportional to the stellar mass to the definition of the traditional fundamental plane. While we were able to reduce the scatter of the distance measurements using the expanded fundamental plane to only $9.0\%$, which is about half the value of the traditional fundamental plane, we found it to be strongly dependent on the specific stellar mass model. Furthermore, the cross-correlations between the stellar masses and various parameters created additional problems with the systematics from redshift-space distortions. While the improvements in the overall scatter are great for the expanded fundamental plane, the increased systematics will cause problems for future peculiar motion studies using these distances. We consider the corrected fundamental plane to be the best approach for obtaining redshift-independent distances using our methods. A detailed description of our complete set of catalogues can be found in Appendix \ref{catalog}.
In the future, we hope to use quality-selected subsets of our catalogues with some of the improved fundamental plane distances for peculiar velocity studies and to further our understanding of the matter distribution in the local universe. \section*{Acknowledgments} We want to thank David Parkison, Benjamin Joachimi, and Shravan Shetty for inspiring discussions. We also acknowledge helpful advice from Suhail Dhawan and Barry F. Madore. Furthermore, we want to thank Maret Einasto and Cullan Howlett for some important comments and suggestions. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observat\'ario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona,
University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. We thank the Korea Institute for Advanced Study for providing computing resources (KIAS Center for Advanced Computation Linux Cluster System) for this work. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. The codes required for this project were written in {\sc Python}. We want to take the opportunity to thank all its developers and especially the people behind the following packages: {\sc SciPy} \citep{scipy}, {\sc NumPy} \citep{NumPy}, {\sc Matplotlib} \citep{matplotlib}, and {\sc astropy} \citep{astropy:2013,astropy:2018}, who get far too little recognition and too few citations despite their work being regularly used by a large fraction of the astronomical community. \addcontentsline{toc}{section}{References} \bibliographystyle{mnras}
\section{Introduction} Rayleigh-Taylor instability (RTI) occurs at the interface between two fluids with different densities, subjected to an acceleration directed from the lower density fluid to the higher density one. A typical case is a heavy fluid resting on top of a lighter one in the presence of a gravitational field. Under such conditions, density perturbations at the interface grow in time under the effect of gravity. The first detailed study of this instability was conducted by Rayleigh \cite{Rayleigh} in the 1880s. This study was later extended to accelerated fluids by Taylor \cite{Taylor} in 1950. The first experiment was performed by Lewis on the evolution of an unstable air-water interface \cite{Lewis}. Another experiment by Emmons \emph{et al}. confirmed these findings \cite{Emmons}. Such an instability plays a prominent role in many natural and industrial processes, such as devices for sustainable energy production, e.g., turbines \cite{JuYG} and inertial-confinement fusion (ICF) \cite{ICF}, type-Ia supernovae \cite{supernovae}, hot-wire diagnostics \cite{Kraft}, quantum magnetized plasmas \cite{WLF2014}, colloidal mixtures \cite{soft2011}, etc. In the above-mentioned fields, the compressibility effects on RTI are essential, and can even dominate \cite{Bernstein-Book,Livescu2,Benzi01,Benzi02}, deserving careful investigation. In fact, many theoretical and numerical studies have been performed, especially on the initial linear stage \cite{Jin,Hoshoudy,Livescu3,Lafay,He-Hu,Xue-Ye,Gauthier}. In those studies, the compressibility effects on the RTI growth rate are generally probed via changing the ratio of specific heats and the equilibrium pressure at the interface. Specifically, in 2007, Lafay \emph{et al.} found that, in the isothermal case, the stratification has a stabilizing effect while the compressibility has a destabilizing effect for two miscible viscous and compressible fluids \cite{Lafay}.
In 2008, He \emph{et al.} reported that, in an inviscid case, the influence of the ratio of specific heats is as follows: it mitigates the RTI when the upper heavy fluid is more compressible, while it enhances the RTI when the lower fluid is more compressible \cite{He-Hu}. In 2010, Ye \emph{et al.} demonstrated that the compressibility has destabilizing effects for an inviscid compressible fluid with an exponentially variable density profile \cite{Xue-Ye}. Although the compressibility effects have been studied extensively, several fundamental problems remain open, such as the nonequilibrium effects in RTI, especially for the case of increasing compressibility \cite{Gauthier,Abarzhi,Livescu}. For the case with strong compressibility, the interfacial dynamics becomes more complicated as the RTI unfolds, resulting in very substantial gradient forces ($\nabla \rho$, $\nabla \mathbf{u}$ and $\nabla T$) around the interfaces and very pronounced thermodynamic non-equilibrium (TNE) effects, where $\rho$, $\mathbf{u}$ and $T$ are the local density, flow velocity and temperature, respectively. The more pronounced the compressibility, the more complex the interfaces and the TNE effects as well. It is known that the Navier-Stokes model falls short of describing the complex interfaces and TNE effects \cite{Succi-Book,Review2012,XuYan2013,XuGan2013,XuLin2014,XuLin2016,XuLin2015PRE,XuGan2015}. At the same time, molecular dynamics and Monte Carlo simulations cannot access the macroscopic spatial-temporal scales of interest at affordable computational cost. Under such conditions, a kinetic approach based on a suitably simplified model Boltzmann equation is preferable. As a special discretization of the Boltzmann equation, the lattice Boltzmann (LB) method has achieved great success in various complex flows \cite{Succi-Book,Review2012,Ottaviani,Succi2015,LiQ2,LiQ3,WangY,Yeomans-group1,Yeomans-group2,Yeomans-group3,Zhangyh-group1,Zhangyh-group2,Zhangyh-group3}.
The LB applications in RTI can be classified into two groups, RTI in incompressible flows \cite{Clavin,Gunstensen,Nie,He1,He2,ZhangRY,Clark,LiQ1,LiuGuo,LiangShi,Livescu,Abarzhi} and in compressible flows \cite{Sbragaglia1,Sbragaglia2,Sbragaglia3}. In these studies, the LB method appears as an effective numerical scheme to solve the traditional hydrodynamic models. In recent works \cite{Review2012,XuYan2013,XuGan2013,XuLin2014,XuLin2016,XuLin2015PRE,XuGan2015}, the LB method was developed to probe the trans- and supercritical fluid behaviours or both the hydrodynamic non-equilibrium (HNE) and TNE behaviours, which has brought some new physical insights into the fundamental mechanisms of the system. Physically, such an extended LB kinetic model or discrete Boltzmann model (DBM) is roughly equivalent to a hydrodynamic model supplemented by a coarse-grained model of the TNE behaviours \footnote{By coarse-grained, we imply that since only a finite number of kinetic moments are retained, the fine structure of non-equilibrium phenomena cannot be fully resolved by the simulation.}. The DBM has been applied to the combustion system, phase separation system, and compressible flow system with shocks \cite{Review2012,XuYan2013,XuGan2013,XuLin2014,XuLin2016,XuLin2015PRE,XuGan2015}, but not yet to the RTI system. In this work, we further extend the DBM to investigate both the HNE and TNE behaviours in the compressible RTI system. Compared with previous studies on RTI, besides the compressibility effects, the interplays of various non-equilibrium behaviours are our main concerns. The rest of the paper is structured as follows. In Sec. II, we first briefly review the DBM used in this work, then show the basic evolutions of the so-called ``single''-mode RTI and its TNE effects. A new front-tracking scheme based on the TNE properties is presented in the same section. The effects of compressibility on RTI are studied in detail in Sec. III. Sec. IV summarizes and concludes the present paper.
\section{Evolutions of RTI and its TNE characterizations} \subsection{Discrete Boltzmann Model} Instead of using the traditional Navier-Stokes equations, in this work the compressible RTI system is described by the following discrete Boltzmann equation with the Bhatnagar-Gross-Krook model \cite{Nie,He1}, \begin{equation}\label{eq1} \dfrac{\partial f_i}{\partial t}+\mathbf{v}_{i}\cdot\dfrac{\partial f_i}{\partial \mathbf{r}}-\dfrac{\mathbf{a}\cdot(\mathbf{v}_{i}-\mathbf{u})}{T}f_{i}^{eq}=-\frac{1}{\tau}( f_i-f_i^{eq}), \end{equation} where $f_i(\mathbf{r},\mathbf{v}_{i},t)$ is the discrete distribution function, $\mathbf{r}$ the spatial coordinate, $t$ the time, $\mathbf{v}_{i}$ the discrete velocities, and $i=1,2,\cdots,N$ the index of the discrete velocity. $\mathbf{u}$ is the macroscopic velocity, $\mathbf{a}$ an external body force, $\tau$ the relaxation time, and $f_i^{eq}$ is the equilibrium distribution function. Following the ideas presented in Refs. \cite{Review2012,XuYan2013,XuGan2013,XuLin2014,XuLin2016,XuLin2015PRE,XuGan2015}, we use \begin{eqnarray}\label{eq2} \boldsymbol{\Delta}^*_{m,n} &=&\mathbf{M}^*_{m,n}(f_i)-\mathbf{M}^*_{m,n}(f_i^{eq}), \end{eqnarray} to describe the TNE effects, where $\mathbf{M}^*_{m,n}$ represent the kinetic central moments, in which the variable $\mathbf{v}$ in Eqs. (\ref{mo4})-(\ref{mo7}) (see the Appendix) is replaced by $\mathbf{v}^*=\mathbf{v}-\mathbf{u}$. ${\mathbf M}^*_{2}$, ${\mathbf M}^*_{3}$, ${\mathbf M}^*_{3,1}$ and ${\mathbf M}^*_{4,2}$ are associated with the Non-Organized Momentum Flux (NOMF), Non-Organized Stress Flux (NOSF), Non-Organized Energy Flux (NOEF), and Flux of NOEF, respectively. Here, the high-order kinetic moments reflect the molecular individualism on top of organised collective motion, which we conventionally label as non-organised (NO) modes.
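A minimal sketch of Eq. (\ref{eq2}) for the second-order central moment, assuming a generic discrete velocity set (not the specific discrete velocity model used in this work):

```python
import numpy as np

def nonequilibrium_moment2(f, feq, v, u):
    """Second-order non-equilibrium central moment (NOMF component of Eq. 2).

    f, feq : (N,) discrete distributions; v : (N, 2) discrete velocities;
    u : (2,) local flow velocity.  Returns the 2x2 tensor
    sum_i (f_i - feq_i) (v_i - u)(v_i - u)^T.
    Illustrative sketch with an arbitrary velocity set.
    """
    vstar = v - u        # peculiar (central) velocities v* = v - u
    df = f - feq         # departure from local equilibrium
    return np.einsum('i,ia,ib->ab', df, vstar, vstar)
```

At equilibrium ($f_i = f_i^{eq}$) the tensor vanishes identically; any non-zero component signals thermodynamic non-equilibrium, e.g. an excess of internal kinetic energy in one degree of freedom at the expense of the other.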
In our simulations, we study the spatiotemporal evolutions of a single-component fluid initially prepared in the hydrostatic unstable equilibrium, i.e., with a cold uniform region in the top half and a hot uniform region in the bottom half. In the two half volumes, we fix two different homogeneous temperatures, with the corresponding hydrostatic density profiles \cite{Sbragaglia2}. We consider that the lower and upper borders are kept far from the perturbed interface during the RTI process, so there is no heat exchange across them. Thus, adiabatic and non-slip boundary conditions are applied at the top and bottom walls and periodic boundary conditions on the horizontal boundaries. Specifically, at the top and bottom boundaries, we set the velocity to be zero. The forward Euler finite difference scheme and the non-oscillatory, non-free-parameter dissipative scheme \cite{NND} are used to discretize the temporal and spatial derivatives, respectively. \subsection{Evolutions of RTI} The starting configuration of RTI is a compressible flow in a 2D domain $[-d/2,d/2]\times [-2d,2d]$. We consider two layers of fluid at rest in a constant gravity field with the initial position of the interface $y_c(x)=y_0 \cos(k x)$, where $y_0=0.05d$, the wave number $k=2\pi/\lambda$, and $\lambda=d$ is the wavelength of the perturbation. The temperatures of the two half volumes are initially constant and each half is in hydrostatic equilibrium \begin{equation}\label{eq3} \partial_y P_0(y)=-g\rho_0(y). \end{equation} So the initial hydrostatic unstable configuration is given by \begin{equation}\label{eq4} \left\{ \begin{array}{l} T_0(y)=T_u,\rho_0(y)=\dfrac{P_0}{T_u}\exp{\big[\dfrac{g}{T_u}\big(2d-y\big)\big]},y>y_c(x), \\[8pt] T_0(y)=T_d,\rho_0(y)=\dfrac{P_0}{T_d}\exp\big[\dfrac{g}{T_u}\big(2d-y_c(x)\big) \\[8pt] -\dfrac{g}{T_d}\big(y-y_c(x)\big)\big],y<y_c(x), \end{array} \right. \end{equation} where $P_0$ is the initial pressure at the top boundary.
$T_u$ and $T_d$ represent the initial temperature of the upper half part and the lower half part, respectively. Under this condition, we have the same pressure at the interface \begin{equation}\label{eq5} \rho_u T_u=\rho_d T_d, \end{equation} where $\rho_u$ and $\rho_d$ are the densities of the grid cells just above and below the interface. Then the initial Atwood number can be defined as \cite{Sbragaglia2} \begin{equation}\label{eq6} At=\dfrac{\rho_u-\rho_d}{\rho_u+\rho_d}=\dfrac{T_d-T_u}{T_d+T_u}. \end{equation} Here we study both the hydrodynamic and thermodynamic behaviours of the single-component compressible RTI. In our simulations, a grid size of $256\times 1024$ is adopted. The other parameters are $n=3$, $c=1.3$, $\eta_0=15$, $\Delta x=\Delta y=0.001$, $\Delta t=2\times 10^{-5}$, $P_0=1.0$, $a_x=0.0$, $a_y=-g=-1.0$, $\tau=5\times 10^{-5}$, $T_u=1.0$ and $At=0.6$. Figure \ref{FIG1} displays the density evolution of the RTI. We observe that, at the beginning, thermal diffusion smooths the discontinuous initial density interface, then a transition layer with finite thickness appears and the local effective Atwood number decreases. The amplitude of the perturbation grows exponentially and the initial configuration remains of cosine type until $t=0.6$. After that, the RTI enters the nonlinear stage, highlighted by the outstanding spike and the appearance of the Kelvin-Helmholtz instability due to the difference of the tangential velocity at the interface.
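The initial configuration of Eqs. (\ref{eq4})-(\ref{eq6}) can be sketched as follows; the function names are our own, and we only verify the interface pressure condition of Eq. (\ref{eq5}):

```python
import numpy as np

def initial_state(y, yc, Tu, Td, P0, g, d):
    """Hydrostatic initial condition of Eq. (4) along one vertical column.

    y  : (M,) vertical coordinates; yc : local interface height y_c(x).
    Returns (T0, rho0).  Direct transcription of Eq. (4).
    """
    upper = y > yc
    T0 = np.where(upper, Tu, Td)
    rho_up = (P0 / Tu) * np.exp((g / Tu) * (2 * d - y))
    rho_dn = (P0 / Td) * np.exp((g / Tu) * (2 * d - yc) - (g / Td) * (y - yc))
    rho0 = np.where(upper, rho_up, rho_dn)
    return T0, rho0

def atwood_number(Tu, Td):
    """Initial Atwood number, Eq. (6): At = (Td - Tu) / (Td + Tu)."""
    return (Td - Tu) / (Td + Tu)
```

For the values used in the simulations ($T_u=1.0$, $At=0.6$), Eq. (\ref{eq6}) implies $T_d=4.0$, and the density branches of Eq. (\ref{eq4}) match the pressure condition $\rho_u T_u=\rho_d T_d$ at the interface.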
\begin{figure*}[!ht] \center { {\epsfig{file=FIG1-1.eps,bbllx=22pt,bblly=22pt,bburx=320pt,bbury=742pt, width=0.165\textwidth,clip=}}\hspace{0.5cm} {\epsfig{file=FIG1-2.eps,bbllx=22pt,bblly=22pt,bburx=320pt,bbury=742pt, width=0.165\textwidth,clip=}}\hspace{0.5cm} {\epsfig{file=FIG1-3.eps,bbllx=22pt,bblly=22pt,bburx=320pt,bbury=742pt, width=0.165\textwidth,clip=}}\hspace{0.5cm} {\epsfig{file=FIG1-4.eps,bbllx=22pt,bblly=22pt,bburx=320pt,bbury=742pt, width=0.165\textwidth,clip=}}\\ \vspace{0.2cm} {\epsfig{file=FIG1-5.eps,bbllx=22pt,bblly=22pt,bburx=320pt,bbury=742pt, width=0.165\textwidth,clip=}}\hspace{0.5cm} {\epsfig{file=FIG1-6.eps,bbllx=22pt,bblly=22pt,bburx=320pt,bbury=742pt, width=0.165\textwidth,clip=}}\hspace{0.5cm} {\epsfig{file=FIG1-7.eps,bbllx=22pt,bblly=22pt,bburx=320pt,bbury=742pt, width=0.165\textwidth,clip=}}\hspace{0.5cm} {\epsfig{file=FIG1-8.eps,bbllx=22pt,bblly=22pt,bburx=320pt,bbury=742pt, width=0.165\textwidth,clip=}} } \caption{(Color online) Density evolutions in the RTI simulated by the DBM at various times. The same results can also be obtained from the Navier-Stokes model. The larger the density, the stronger the inertial effects, and the more difficult it is to change the velocity. Consequently, the upward perturbation grows into a shape similar to a bubble, while the downward perturbation grows into a shape similar to a spike.} \label{FIG1} \end{figure*} Owing to the effects of gravity, the fingers of the lighter fluid continuously penetrate into the heavier fluid, while the heavier fluid falls into the lighter one with a rolling-up process, resulting in an increase of the mixing-layer amplitude and forming a pair of secondary vortices which appear at the tails of the roll-ups, just like a ``mushroom'' shape. The bubble rises due to the release of the compressive energy from the lighter fluid to the heavier one. In the later stage, owing to the effects of viscosity and thermal diffusion, the tails of both vortices gradually become less sharp and less narrow.
The simulation results are qualitatively consistent with those of the experiments \cite{Lewis,Emmons}, reflecting the basic characteristics of the real physical process. \begin{figure*}[!ht] \center { \epsfig{file=FIG2.eps,bbllx=53pt,bblly=38pt,bburx=515pt,bbury=380pt, width=0.6\textwidth,clip=}} \caption{(Color online) Temperature profiles averaged in the $x$-direction versus $y$ coordinate at different times. The mixing layer becomes wider with time. The perturbation depth in the upper part is smaller than that in the lower part.} \label{FIG2} \end{figure*} To quantitatively describe the characteristics of the mixing layer, we plot the averaged temperature profile $\overline{T}(y)$ against the $y$ axis at $t=0.0$, $0.2$, $0.6$, $1.0$, $1.3$, $1.6$, $1.9$, and $2.2$ in Fig. \ref{FIG2}. $\overline{T}(y)$ is defined as \begin{equation}\label{eq7} \overline{T}(y)=\dfrac{1}{L}\int_LT(x,y)dx. \end{equation} The profile varies from discontinuous to irregular, showing that the thickness of the mixing layer and the amplitude of the temperature oscillations increase with time. The zig-zags in the profiles indicate heat conduction from the high-temperature region to the low-temperature region and the irregularity of the mixing layer. \subsection{TNE characterizations of RTI and corresponding interface-tracking technique} Through the DBM, we can study not only the HNE behaviours, but also the TNE effects of RTI. The TNE effects can be interpreted as the manifestations of molecular thermo-fluctuations relative to the macroscopic flow velocity $\mathbf{u}$ and can therefore help in gaining a better understanding of the kinetic effects on the onset and development of the RTI.
\begin{figure*}[!ht] \center { {\epsfig{file=FIG3-1.eps,bbllx=23pt,bblly=25pt,bburx=590pt,bbury=290pt, height=0.28\textwidth,width=0.5\textwidth,clip=}}\\ {\epsfig{file=FIG3-2.eps,bbllx=50pt,bblly=25pt,bburx=530pt,bbury=390pt, height=0.25\textwidth,width=0.35\textwidth,clip=}}\hspace{0.5cm} {\epsfig{file=FIG3-3.eps,bbllx=50pt,bblly=25pt,bburx=530pt,bbury=390pt, height=0.25\textwidth,width=0.35\textwidth,clip=}}\\ {\epsfig{file=FIG3-4.eps,bbllx=50pt,bblly=25pt,bburx=530pt,bbury=390pt, height=0.25\textwidth,width=0.35\textwidth,clip=}}\hspace{0.5cm} {\epsfig{file=FIG3-5.eps,bbllx=50pt,bblly=25pt,bburx=530pt,bbury=390pt, height=0.25\textwidth,width=0.35\textwidth,clip=}} } \caption{(Color online) Profiles of macroscopic quantities and non-equilibrium effects along the central line $x=N_{x}/2$ at $t=1.6$. The gradients of macroscopic quantities work as driving forces of the TNE effects. The TNE quantities show the specific TNE status of the system. The most pronounced TNE effects occur around the interface, where the macroscopic quantities have large gradients. Because the gravity is in the $y$-direction, the ``flux'' in the $y$-direction is more pronounced. The non-zero value of $\Delta^{*}_{2xy}$ is a typical TNE quantity.} \label{FIG3} \end{figure*} In Fig. \ref{FIG3}, we illustrate the profiles of macroscopic and TNE quantities along the central line $x=N_{x}/2$ at $t=1.6$. From Fig. \ref{FIG3} (a) one can observe that all the macroscopic quantities show complex behaviours around the interface. The temperature and density profiles show the largest and second largest gradients at this time, respectively. Non-equilibrium effects are most pronounced around the interface, where the gradients of macroscopic quantities are particularly strong. Figure \ref{FIG3} (b) shows that the internal kinetic energies in the $x$ and $y$ degrees of freedom deviate from their equilibrium values with the same amplitude but opposite signs around the interface.
Each of $\Delta^{*}_{2xx}$ and $\Delta^{*}_{2yy}$ deviates from zero with opposite signs in front of and behind the interface. $\Delta^{*}_{2xy}$, which is zero at equilibrium, shows small but finite values around the interface, which is a typical TNE effect. From Figs. \ref{FIG3} (c)-(e) we can also appreciate that $\Delta^{*}_{3yyy}$, $\Delta^{*}_{3xxy}$, $\Delta^{*}_{(3,1)y}$, $\Delta^{*}_{(4,2)xx}$, $\Delta^{*}_{(4,2)yy}$ show peaks at the interface. Around the interface, $\Delta^{*}_{3yyy} > 0$, $\Delta^{*}_{3xxy} < 0$. The value of $\Delta^{*}_{3yyy} + \Delta^{*}_{3xxy} $ in Fig. \ref{FIG3} (c) can be read from the curve of $\Delta^{*}_{(3,1)y}$ in Fig. \ref{FIG3} (d). The positive peak of $\Delta^{*}_{(3,1)y}$ signals an upward internal kinetic energy flux. This is plausible, because heat transfers from the higher- to the lower-temperature regions. In Fig. \ref{FIG3} (e), besides the non-zero value of $\Delta^{*}_{(4,2)xy}$, $\Delta^{*}_{(4,2)xx}$ shows a larger amplitude than $\Delta^{*}_{(4,2)yy}$ at this time. Usually, the depth of the mixing layer is an important parameter to measure the evolution of RTI. We use the half amplitude to measure the mixing layer by capturing the spike and bubble. For incompressible RTI, this measurement is readily performed by tracing a constant density. However, in the compressible case, how to measure the mixing layer remains a thorny problem. Here we present two independent interface-tracking methods: (i) tracking the mean temperature of the upper and lower fluids, (ii) tracking the maximum values of TNE characteristic quantities, such as $\Delta^*_{(3,1)y}$. The second method is based on the fact that $\Delta^*_{(3,1)y}$ takes its maximum value at the position of the interface along the $y$-direction of the spike and bubble. We can adopt this TNE observable to capture the spike and bubble and obtain the thickness of the mixing layer, see Fig. \ref{FIG4}.
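Tracking method (ii) reduces to locating the peak of the TNE observable along a vertical line; a minimal sketch, with hypothetical array names:

```python
import numpy as np

def track_interface(delta_31y, y):
    """Locate the interface from the NOEF component Delta*_{(3,1)y}.

    delta_31y : (M,) profile along one vertical line; y : (M,) coordinates.
    The interface sits where the TNE strength peaks, so we return
    y at the maximum of |Delta*_{(3,1)y}|.  Illustrative sketch only.
    """
    return y[np.argmax(np.abs(delta_31y))]

def mixing_amplitude(delta_spike_col, delta_bubble_col, y):
    """Half amplitude of the mixing layer from the spike and bubble columns."""
    ys = track_interface(delta_spike_col, y)
    yb = track_interface(delta_bubble_col, y)
    return 0.5 * abs(yb - ys)
```

Applying this to the columns passing through the spike and bubble tips gives the perturbation amplitude shown in Fig. \ref{FIG4}.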
The agreement between the results obtained from the above two approaches shows the effectiveness of the two tracking schemes, see Fig. \ref{FIG5}. From Fig. \ref{FIG6}, we find behaviour qualitatively consistent with linear theory: from $t=0.2$ to $t=0.6$, the perturbation amplitude grows exponentially, with a linear growth rate of about 0.082. \begin{figure}[!ht] \center { \epsfig{file=FIG4.eps,bbllx=48pt,bblly=35pt,bburx=530pt,bbury=387pt, width=0.5\textwidth,clip=}} \caption{(Color online) Positions of the bubble and spike versus time. The velocity of the bubble is smaller than that of the spike. The lighter fluid is ``softer'' and thus it is easier for the spike to pass through and grow.} \label{FIG4} \end{figure} \begin{figure}[!ht] \center { \epsfig{file=FIG5.eps,bbllx=50pt,bblly=35pt,bburx=530pt,bbury=390pt, width=0.5\textwidth,clip=}} \caption{(Color online) The perturbation amplitudes obtained by two different tracking approaches. The local TNE strength can be used to track interfaces. The good agreement shows that the two approaches validate each other.} \label{FIG5} \end{figure} \begin{figure}[!ht] \center { \epsfig{file=FIG6.eps,bbllx=36pt,bblly=32pt,bburx=530pt,bbury=380pt, width=0.5\textwidth,clip=}} \caption{(Color online) The growth of the perturbation amplitude by the DBM. The good linear fit is consistent with the linear theory of the RTI. The result shows that the linear theory for incompressible flows also works for a period in the compressible flows. The DBM and the linear theory validate each other.
} \label{FIG6} \end{figure} \section{Effects of compressibility on RTI} According to the inviscid, isentropic Euler equation \begin{equation}\label{eq8} \partial_{t}\mathbf{u}+\mathbf{u}\cdot \nabla \mathbf{u}+\frac{1}{\rho } c_{s}^{2}\nabla \rho =\mathbf{g}, \end{equation} we can introduce a non-dimensional number $H_{1}$ as below: \begin{equation}\label{eq9} \dfrac{\left\vert \mathbf{g}\right\vert }{\left\vert \dfrac{\nabla \rho }{\rho }c_{s}^{2}\right\vert } \sim \frac{g}{kc_{s}^{2}}=\frac{c_{s}^{-2}}{k/g} =H_{1}. \end{equation} It is clear that $H_{1}$ can be regarded as the strength of the gravity relative to the gradient of pressure. Since $d\rho /dp=c_{s}^{-2}$ describes the compressibility of the flow system, the non-dimensional parameter $H_{1}$ can also be regarded as a relative compressibility. Since the compressibility $c_{s}^{-2}$ is dimensional, it is not suitable for studying the effects of compressibility on RTI. Besides $c_{s}$, both gravity $g$ and the wave number $k$ of the perturbation can influence the growth rate of RTI. Under such conditions, $H_{1}$ is a good non-dimensional parameter to describe the relative compressibility and is also a good parameter for studying the effects of compressibility on RTI. $H_{1}$ varies with the speed of sound (and consequently with $\gamma$) and with the stratification width $1/k$, and it can be controlled by adjusting $g$. However, it is totally unrelated to the viscosity and heat conduction. Similarly, \begin{equation}\label{eq10} H_{2}=\tau \sqrt{gk}=\frac{\tau }{\left( gk\right) ^{-1/2}}, \end{equation} can be regarded as the ratio of two time scales. Since the relaxation time $\tau$ is related to the viscosity and thermal conductivity, the non-dimensional parameter $H_{2}$ can also be regarded as a relative viscosity or thermal conductivity. Therefore, we can define $H_1=g/(kc_s^2)$ and $H_2=\tau\sqrt{gk}$ to nondimensionalize the compressibility and the viscosity effects.
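To make this bookkeeping concrete, here is a short sketch of how $H_1$ can be scanned while $H_2$ is held fixed by recomputing the relaxation time from $H_2=\tau\sqrt{gk}$; the baseline numbers are illustrative:

```python
import math

def H1(g, k, c_s):
    # relative compressibility: gravity vs. pressure-gradient term
    return g / (k * c_s ** 2)

def H2(tau, g, k):
    # relative viscosity / heat conductivity: ratio of two time scales
    return tau * math.sqrt(g * k)

def tau_for_fixed_H2(h2_target, g, k):
    # invert H2 = tau * sqrt(g k) so that H2 stays constant while g varies
    return h2_target / math.sqrt(g * k)

# Illustrative baseline (wave number assumed for this sketch)
k = 2.0 * math.pi / 0.25
tau0, g0 = 2.0e-4, 1.0
h2_ref = H2(tau0, g0, k)

# Scan the compressibility by changing g; recompute tau each time
for g in (0.25, 1.0, 4.0):
    tau = tau_for_fixed_H2(h2_ref, g, k)
    assert abs(H2(tau, g, k) - h2_ref) < 1e-12 * h2_ref
```

Since $H_1$ is linear in $g$ at fixed $k$ and $c_s$, varying $g$ alone sweeps the compressibility without touching the dissipative parameter.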
The dimensionless time is then defined as $t^*=t/\sqrt{2\pi/(kg)}$ and the dimensionless length scale is $2\pi/k$. To isolate the effect of the compressibility $H_1$, the viscosity parameter $H_2$ must be kept constant. We set the initial $\tau=2.0\times 10^{-4}$ and $g=1.0$ to fix the reference value of $H_2$, and then prescribe $H_1$ by changing $g$. To ensure that $H_2$ is constant in all simulations, $\tau$ is recalculated from $H_2=\tau\sqrt{gk}$ each time. In our numerical simulations, we set the initial Atwood number $At=0.6$, $T_u=1.0$, $P_0=1.0$ uniformly for simplicity. Meanwhile, the mesh is specified by setting $\Delta x=\Delta y=0.001$, and $N_x\times N_y=256\times 1024$. The time step is $\Delta t=2\times10^{-5}$. The other parameters used here are uniformly $n=3$, $c=1.3$ and $\eta_0=15.0$. \begin{figure}[!ht] \center { \epsfig{file=FIG7.eps,bbllx=47pt,bblly=32pt,bburx=530pt,bbury=383pt, width=0.5\textwidth,clip=}} \caption{(Color online) The growth of the dimensionless amplitude $A^*$ with different values of compressibility $H_1$. In the initial stage, the compressibility tends to inhibit the RTI, while in the later stage it tends to strengthen the RTI and the strengthening effect approaches saturation with increasing compressibility.} \label{FIG7} \end{figure} \begin{figure}[!ht] \center { {\epsfig{file=FIG8-1.eps,bbllx=32pt,bblly=22pt,bburx=516pt,bbury=380pt, width=0.43\textwidth,clip=}} {\epsfig{file=FIG8-2.eps,bbllx=32pt,bblly=22pt,bburx=516pt,bbury=380pt, width=0.43\textwidth,clip=}} } \caption{(Color online) The dimensionless amplitude versus the compressibility at (a) $t^*=0.6$ and (b) $t^*=2.5$. This figure shows more neatly the specific compressibility effects at one given time in each of the two stages.} \label{FIG8} \end{figure} Figure \ref{FIG7} displays the time evolution of the dimensionless amplitude $A^*$ with different values of the compressibility.
It is found that the effects of compressibility can be roughly divided into two stages: (i) In the first stage, for $t^*<1.1$, compressibility stabilizes the RTI; (ii) In the later stage, for $t^*>1.1$, compressibility accelerates the RTI. In particular, we show the results at two dimensionless time instants $t^*=0.6$ and $t^*=2.5$, see Fig. \ref{FIG8}, which are fitted by two typical power-law relationships. When the compressibility is sufficiently large, its effects on the growth of RTI become much less evident. To interpret the above phenomena, we focus on the evolution of the rates of change of the system's internal and kinetic energies. For the rate of the internal energy, from Eq. (\ref{A8}) (see the Appendix), we have \begin{equation}\label{eq11} \rho \dfrac{de}{dt}=-P\nabla\cdot \mathbf{u}+\nabla\cdot(\kappa \nabla T)+\nabla \mathbf{u}:\mathbf{P}^{^{\prime }}, \end{equation} where $d/dt=\partial/\partial t+\mathbf{u}\cdot \nabla$. The region that we measure lies in the middle of the system and its width is half of the system height. This region is large enough to prevent interference of the upper and lower boundaries with the spike and the bubble, i.e. no heat is supplied to or removed from the measured region as a result of any interaction with the boundaries. The heat conduction within the measured region makes no net contribution to the rate of increase of the internal energy. Therefore, when considering the rate of internal energy, the second term on the right-hand side of Eq. (\ref{eq11}) can safely be ignored. We define the rate of the compressive energy, $\dot E_c$, as \begin{equation}\label{eq12} \dot E_c=-\int P\nabla\cdot \mathbf{u}dV, \end{equation} where $dV$ is the volume element, and the rate of the internal energy due to dissipation or viscosity $\dot E_d$ as \begin{equation}\label{eq13} \dot E_d=\int \nabla \mathbf{u}:\mathbf{P}^{^{\prime }} dV. \end{equation} For the rate of the kinetic energy, from Eq.
(\ref{A8}) (see the Appendix), we have \begin{equation}\label{eq14} \rho \dfrac{d\mathbf{u}}{dt}=\rho \mathbf{a}-\nabla P+\nabla\cdot \mathbf{P}^{^{\prime }}. \end{equation} The right-hand side includes three contributions. Similarly, we define the rate of the kinetic energy change by gravity, $\dot E_{kg}$, as \begin{equation}\label{eq15} \dot E_{kg}=\int \mathbf{u} \cdot (\rho \mathbf{a}) dV, \end{equation} the rate of kinetic energy change due to dissipation, $\dot E_{kd}$, as \begin{equation}\label{eq16} \dot E_{kd}=\int \mathbf{u} \cdot (\nabla\cdot \mathbf{P}^{^{\prime }}) dV, \end{equation} and the rate of kinetic energy change by pressure, $\dot E_{kp}$, as \begin{equation}\label{eq17} \dot E_{kp}=-\int \mathbf{u}\cdot\nabla P dV. \end{equation} \begin{figure*}[!ht] \center { {\epsfig{file=FIG9-1.eps,bbllx=26pt,bblly=34pt,bburx=517pt,bbury=382pt, width=0.45\textwidth,clip=}}\\ \vspace{0.5cm} {\epsfig{file=FIG9-2.eps,bbllx=26pt,bblly=34pt,bburx=517pt,bbury=382pt, width=0.45\textwidth,clip=}}\hspace{0.5cm} {\epsfig{file=FIG9-3.eps,bbllx=26pt,bblly=34pt,bburx=517pt,bbury=382pt, width=0.45\textwidth,clip=}}\\ \vspace{0.5cm} {\epsfig{file=FIG9-4.eps,bbllx=26pt,bblly=34pt,bburx=517pt,bbury=382pt, width=0.45\textwidth,clip=}}\hspace{0.5cm} {\epsfig{file=FIG9-5.eps,bbllx=26pt,bblly=34pt,bburx=517pt,bbury=389pt, width=0.45\textwidth,clip=}} } \caption{(Color online) The time evolution of the various energy rates at different values of compressibility $H_1$. The energy rates include the rates of compressive energy $\dot E_c$, internal dissipation energy $\dot E_d$, kinetic energy induced by gravity $\dot E_{kg}$, kinetic energy induced by dissipation $\dot E_{kd}$ and kinetic energy induced by pressure $\dot E_{kp}$. It is clear that a larger compressive energy rate is involved in the later stage.
This figure highlights significant differences as compared to the incompressible scenario.} \label{FIG9} \end{figure*} Figure \ref{FIG9} illustrates the time evolution of the various energy rates, $\dot E_c$, $\dot E_d$, $\dot E_{kg}$, $\dot E_{kd}$ and $\dot E_{kp}$, with different compressibilities. We first discuss observations for the second stage of RTI evolution. From Fig. \ref{FIG9} (a) it is clear that with decreasing compressibility, $\dot E_c$ goes gradually back to the case of incompressible flows, where $\dot E_c = 0$. The magnitude of the compressive energy rate $\dot E_c$ increases with time and compressibility. The rate $\dot E_c$ is negative, which means that the fluid volume expands in time. Since the fluid has internal dissipation, the rate of the energy dissipation $\dot E_d$ increases in time and with increasing viscosity [see Fig. \ref{FIG9} (b)]. In this study, the compressibility $H_{1}$ is increased by increasing the gravity acceleration $g$, and consequently the kinetic energy rate by gravity increases with increasing compressibility $H_{1}$ [see Fig. \ref{FIG9} (c)]. Since the volume expansion rate increases with increasing compressibility and time, the rate of kinetic energy by dissipation or viscosity $\dot E_{kd}$ shows similar behaviour [see Fig. \ref{FIG9} (d)]. It is interesting to observe that the rate of kinetic energy by pressure $\dot E_{kp}$ also increases with the compressibility $H_{1}$ [see Fig. \ref{FIG9} (e)]. This behaviour can be understood by considering that an increase in the compressibility $H_{1}$ means an increase in the gravity acceleration $g$. Consequently, the pressure gradient, $\nabla P$, becomes larger, thereby enhancing the rate of kinetic energy by pressure $\dot E_{kp}$ as well [see Eq. (\ref{eq17})]. Now, we come back to the initial stage. Initially, the flow velocity is low, and so is the Mach number, so that the compressibility of the fluid is naturally small.
So, the values of $\dot E_c$, $\dot E_{kg}$ and $\dot E_{kp}$ are also small. The initial state of the system around the interface is thermodynamically unstable. Without any perturbation, the interface molecules are in a mechanically stable but thermodynamically unstable state. In this case, the forces due to the temperature and density gradients tend to thicken the boundary layer and decrease the local Atwood number. If the interface is initially perturbed, most interface molecules, in between the crest and trough, experience a gradient force with a non-zero horizontal component, which tends to flatten the interface. It should be pointed out that in our numerical simulations, the initial state is slightly different from the one given by Eq. (\ref{eq4}), due to the finite lattice spacing, and the free energy of the numerical initial state is higher than that of the state given by Eq. (\ref{eq4}). The system relaxes towards its minimum free energy state and gravitational energy transforms into kinetic energy. Due to viscosity, while approaching this minimum free energy state, the local flow velocity tends to zero, so that the amplitudes of $\dot E_{d}$ and $\dot E_{kd}$, after a quick initial rise, decrease significantly in the longer term. This initial stage of quick increase is indeed very short. The process of RTI can also be divided into two stages and be interpreted from the point of view of energy transformation: (i) In the initial stage, the compressibility provides a stabilizing effect. This is mainly because a higher compressibility $H_{1}$ corresponds to a larger gravity acceleration $g$, corresponding in turn to a larger local density $\rho$ or pressure $P$, and consequently a higher heat conductivity $\kappa$. The heat conduction tends to decrease the local Atwood number and broaden the interfaces of the density and temperature.
(ii) In the later stage, the compressibility has a destabilizing effect, which can be explained as the transformation of the stored compressive energy into kinetic energy. To provide an estimate of the TNE effects resulting from compressibility, we follow the same idea used in Refs. \cite{XuLin2015PRE} and \cite{XuGan2015}, and define a global average non-equilibrium intensity or ``TNE strength'' \begin{eqnarray}\label{eq18} D^{*}=\sqrt{(\bar{\boldsymbol{\Delta}}^*_{2})^2+(\bar{\boldsymbol{\Delta}}^*_{3})^2 +(\bar{\boldsymbol{\Delta}}^*_{3,1})^2+(\bar{\boldsymbol{\Delta}}^*_{4,2})^2}, \end{eqnarray} where $\bar{\boldsymbol{\Delta}}^*_{m,n}$ is the average absolute value of the TNE components. \begin{figure}[!ht] \center { \epsfig{file=FIG10.eps,bbllx=23pt,bblly=33pt,bburx=518pt,bbury=391pt, width=0.5\textwidth,clip=}} \caption{(Color online) The time evolution of the global average TNE strength with different values of compressibility $H_{1}$. The compressibility decreases the global average TNE strength in the first stage and increases it in the later stage. The higher the compressibility, the stronger the global average TNE effects. } \label{FIG10} \end{figure} Figure \ref{FIG10} shows how the compressibility affects the global average $D^{*}$. With increasing compressibility, the global average deviation from thermodynamic equilibrium increases. Since the initial condition is in a thermal non-equilibrium state, the system has a tendency to approach the thermodynamic equilibrium state at first. Therefore, in the first stage, $D^{*}$ shows a decreasing trend. In the later stage, $D^{*}$ is found to grow exponentially. This is because TNE effects are tightly coupled to the interface dynamics, which shows an increasing area (coarsening) and morphological complexity.
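A direct transcription of Eq. (\ref{eq18}) is straightforward. In the sketch below, each $\bar{\boldsymbol{\Delta}}^*$ is computed as the mean absolute value over a tensor's components and the grid, which is one plausible reading of the averaging; the toy constant fields are purely illustrative:

```python
import numpy as np

def bar(components):
    """Average absolute value of a TNE tensor, pooled over its components."""
    return float(np.mean([np.mean(np.abs(c)) for c in components]))

def tne_strength(d2, d3, d31, d42):
    """Global average TNE strength D* of Eq. (18); each argument is the
    tuple of independent components of one non-equilibrium tensor."""
    return float(np.sqrt(bar(d2) ** 2 + bar(d3) ** 2
                         + bar(d31) ** 2 + bar(d42) ** 2))

# Toy fields: constant tensors give an easily checked Pythagorean result
shape = (8, 8)
d2 = (np.full(shape, 3.0),)
d3 = (np.zeros(shape),)
d31 = (np.full(shape, -4.0),)   # sign is irrelevant: absolute values enter
d42 = (np.zeros(shape),)
```

With these toy fields, `tne_strength(d2, d3, d31, d42)` evaluates to 5.0, the quadratic sum of the two non-zero bar values.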
\begin{figure*}[!ht] \center { {\epsfig{file=FIG11.eps,bbllx=19pt,bblly=20pt,bburx=591pt,bbury=400pt, height=0.6\textwidth,width=0.95\textwidth,clip=}} } \caption{(Color online) The average values of various components of TNE quantities versus compressibility at four nondimensional times: $t^*=0.1$, $0.6$, $2.0$ and $2.5$. The figure shows more specific global or mean TNE effects than Fig. \ref{FIG10}. The relative strengths of those dependences on compressibility may change with time.} \label{FIG11} \end{figure*} Figure \ref{FIG11} shows the average value of each component of TNE observables with changing compressibility at four different times $t^*=0.1$, $0.6$, $2.0$ and $2.5$. It is found that: (i) All TNE dynamic modes increase as the compressibility increases, at all times. Since all the observations have a limited accuracy, a dynamic mode is not visible whenever its amplitude or strength is smaller than a critical value, say $10^{-3}$. Therefore, one can observe that more TNE dynamic modes emerge and stand out with increasing compressibility. At the later stage, the higher order terms of the TNE dynamic modes, such as $\bar{\Delta}^*_{(4,2)xy}$, play a more important role. A single TNE dynamic mode is not the only decisive factor for the amplitude of $D^{*}$; (ii) The relative strength among the dynamic modes may change at different stages, for example, $\bar{\Delta}^*_{(3,1)x}$ is less than $\bar{\Delta}^*_{(3,1)y}$ at the first stage, but the order is reversed at the later stage; (iii) The strengths of some dynamic modes are always relatively small, such as $\bar{\Delta}^*_{2xx}$, $\bar{\Delta}^*_{2xy}$ and $\bar{\Delta}^*_{2yy}$. These details are complementary to the above highly synthetic ``TNE strength'' $D^{*}$. \section{Conclusions} The Rayleigh-Taylor instability in compressible flows is studied via a discrete Boltzmann model.
Besides hydrodynamic behaviour, the thermodynamic non-equilibrium effects most relevant to the hydrodynamic behaviour have also been studied in much detail, up to Atwood numbers around $0.9$. It is found that the process of the Rayleigh-Taylor instability in compressible flows can be divided into two stages, exhibiting opposite compressibility effects: in the initial stage, compressibility stabilizes the Rayleigh-Taylor instability, while in the later stage it accelerates it. The physical reasons are as follows. A higher compressibility leads to a stronger gravity acceleration, which corresponds to a higher local pressure and consequently to a higher heat conductivity. In the first stage, the heat conduction tends to decrease the local Atwood number and broaden the interfaces of the density and temperature profiles. In the second stage, part of the compressive energy stored in the fluid is released and transformed into kinetic energy, thereby accelerating the Rayleigh-Taylor instability. The local thermodynamic non-equilibrium indicators provide useful observables to physically track the interfaces. In addition, the global or mean thermodynamic non-equilibrium indicators make it possible to discriminate the two stages of the Rayleigh-Taylor instability. In the first stage, the system slowly evolves towards its equilibrium, while in the later stage, as the interface develops, the system moves away from local equilibrium, especially in the regions near complex interfaces. The above behaviour is enhanced by increasing compressibility, as are the amplitudes of thermodynamic non-equilibrium kinetic modes. Besides a deeper physical insight into the kinetic processes, the methodology and resulting observations may help to formulate more accurate meso and macroscale models for the complex compressible phenomena. \section*{Acknowledgments} The authors thank Prof. Hua Li, Drs. Chuandong Lin, Qing Li, Fangbao Tian, Zhipeng Liu and Yudong Zhang for many helpful discussions.
AX and GZ acknowledge support of Foundation of LCP and National Natural Science Foundation of China (under Grant No. 11475028). HL acknowledges support of National Natural Science Foundation of China (under Grant No. 11301082), China Postdoctoral Science Foundation (under Grant No. 2014M550660) and Natural Science Foundation of Fujian Province (under Grant Nos. 2014J05003, JA13069, JB13020). YG acknowledges support of National Natural Science Foundation of China (under Grant No. 11202003) and Natural Science Foundation of Hebei Province (under Grant Nos. A2013409003, A201500111).
\section{Introduction} The calibrated light sources developed for our SNDICE project \citep{ref1} are based on a direct illumination concept (\citealt{ref1'}) using the new generation of light emitting diodes (LEDs) to reach a high stability of about 10$^{-4}$. This opens the possibility of measuring the sensitivity of high-quality space-based CCD cameras, such as those of the Corot and Kepler telescopes, with good precision. We have tested the calibration of a telescope and a large-field camera by using the images of the SNDICE light source taken by the CFHT telescope equipped with the Megacam camera \citep{ref2}. We placed the limiting precision of the direct illumination calibration of the CFHT telescope at the quantum bound ($\approx$10$^{-6}$), which only depends on the potential improvement of the CCD readout electronics described in Sect.\ref{sec:4}. Our proof-of-concept goals are distinct from those of the more practical study by \cite{ref3}, who used SNDICE to yield an improved photometric calibration of the SNLS experiment of about 10$^{-3}$. A novel feature of the present paper is the comprehensive model of the telescope transmission, including its optical defects. This model, defined in Sect.\ref{sec:21}, combines diffuse and specular light in a common photon wave packet (WP) model. It extends the conventional models called here ``specular models'', where the optical surfaces are mathematically defined and the properties of optical media are represented by continuous reflection and refraction functions, while the primary light propagation is symbolized by ray optics. Our WP model, which parametrizes the whole interference pattern, is validated quantitatively with an exquisite precision. In Sect.\ref{sec:22}, the spatial frequency spectrum of the interference signal is shown to separate the effect of light propagation in free space from that of electronic and optical defects. The former is used as a validation of the model and then taken as a prior.
The latter is used as an ultra-precise control of the optical quality and of the CCD electronic response. Following this spectrum allowed us to monitor, during one- or two-hour runs, the stability of the interference signal at the quantum precision limit. The only deviation found is due to the microscopic motion of the LED source. It is then integrated in the spectral analysis and corrected for. Independently of this analysis of the mirror surface, we provide efficient algorithms for detecting, localizing, and parametrizing the defects of the Megacam camera in Sect.\ref{sec:25}. This is a first step, since there are about 10$^{5}$ such defects to monitor individually during the life of the camera. Their effect on astronomical images is obviously diverse and cannot be represented by a simple pixel-to-pixel correction. The last part of our analysis describes the successive steps of a complete telescope photometry based purely on photon counting. This analysis implicitly uses the spectral properties of the interference pattern found in Sect.\ref{sec:22} and the mitigation of the electronic problems found in Sect.\ref{sec:4}. First, Sect.\ref{sec:51} defines the four operators (pixel combinations) that permit a clean photon counting analysis and introduces their pure Gaussian properties that allow precisely applying the law of large numbers up to the 10$^{13}$ photons contained in a Megacam image. Second, Sect.\ref{sec:52} and Sect.\ref{sec:54} establish the second-order corrections to the pure Gaussian model needed for multinomial statistics and for LED motion checks, respectively. Last, Sect.\ref{sec:55} and Sect.\ref{sec:56} apply these methods to flux and noise estimation, respectively. Combining these two methods, we show that the fluctuation of pixel counts is rigorously proportional to the square root of the flux.
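The square-root scaling stated above is the Poisson/multinomial behaviour the whole photon-counting analysis is built on. A toy Monte Carlo (not the SNDICE pipeline) recovers it over several decades of flux:

```python
import numpy as np

rng = np.random.default_rng(0)

# For photon counting, the rms fluctuation of the counts in a pixel with
# mean flux N is sqrt(N); check this over several decades of flux
for flux in (1e2, 1e4, 1e6):
    counts = rng.poisson(flux, size=200_000)
    assert abs(counts.std() / np.sqrt(flux) - 1.0) < 0.02
```

The same check applied to sums over pixel subsets illustrates why the law of large numbers can be pushed to the full photon count of an image.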
More generally, these methods offer a perspective to replace the paradigms of classical optics and of photometric standards by paradigms relying on fundamental physics. The technical breakthrough behind this progress, beyond the new optical sources and the detectors already mentioned, is clearly the data-processing power, which is essential for the analyses presented in this paper. \section{Using coherent light for calibration} \label{sec:2} Measuring the overall response of a telescope by placing a point illumination source (i.e. a partially coherent source) at the focal distance is attractive because it is expected to yield smooth images, and each pixel of the camera would define a single light ray. Previous attempts \footnote{cf. Stubbs, C. et al. (unpublished)} have met the obstacle also seen by SNDICE (Fig. \ref{fig:fig2}.a), which is a plethora of diffraction patterns that is due to the imperfections in the mirror surface. We can consider these artifacts as a nuisance caused by the partial coherence, but they alter an image exactly as they would for a target object at infinity, as suggested by Fig. \ref{fig:fig1}, based on the classical Fraunhofer diffraction theory (\citealt{ref0}, chapter VIII, fig 8.6). Therefore they need to be taken into account by astronomical calibration. The first goal of this section is to demonstrate the exact correspondence of the light diffracted by the same area of the mirror, either from a point source at infinity or at a focal distance. Diffracted light is expressed by the Fresnel diffraction integral as a convolution product of an aperture function representing the defects of the mirror and the impulse response of the free space propagation from the mirror to the focal plane. This property is used in Sect.\ref{sec:22} to separate, by Fourier analysis, the pure diffracted light from the non-diffracted light. In Sect.\ref{sec:23} we measure the effect of the translation of the point source on the diffracted light.
By joining these two developments, we can compare extended source and point source images. We can also measure the stability of the SNDICE source with high precision. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figure2} \caption{{\bf a) (left)} Wave packet signal (WP) measured in a 1024$\times$1024 pixel$^2$ area. The gray scale covers <WP> $\pm$1.5$\sigma$. {\bf b) (right)} The 1\% of pixels in this area with a sharp WP gradient due to defects of the camera optics ($\|\protect\overrightarrow{\vec{\nabla (WP)}}\| \geq 16 \sigma$). Four camera defects a,b,c,d of different types are circled in the two figures for discussion in Sect.\ref{sec:25}. \label{fig:fig2}} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figure1} \caption{Ray optics and wave packet: in red we plot a star source at infinity imaged at focus F and its diffuse reflection falling in the Strehl ratio area around F; in blue we show a LED source S at focal distance and diffuse reflection interfering with specular reflection (reflected beam MF displaced for clarity into M'F'). \label{fig:fig1}} \end{figure} \subsection{SNDICE geometrical setup and the generation of a CCD image } \label{sec:21} In our setup drawn in Fig. \ref{fig:fig1}, the SNDICE LED source in S is a 0.1 mm$^{2}$ chip situated at a focal distance f=13.5 m (19 m in reality) from the mirror. The SNDICE axis SM is aligned with the telescope axis FO. It pierces the mirror in M within its 1.8 m radius. The one-degree aperture of the SNDICE beam just covers the focal plane (no stray light). The Megacam camera is centered on the focus F of the parabolic mirror. A pixel covers a 13.5$\times$13.5 $\mu m^{2}$ area, that is, a 1.0 $\mu rad^2$ solid angle. The center of curvature C of the mirror is used for geometrical ray tracing. The optics is completed by an image corrector, made of four lenses and one out of five filters. Filters are not used in this study.
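The quoted pixel angular scales follow directly from this geometry; a quick numerical check, using the pixel pitch and focal distance given above:

```python
# One 13.5 um pixel seen from the focal distance f = 13.5 m subtends
# 1.0 microradian on a side, i.e. a 1.0 urad^2 solid angle per pixel
pixel_pitch = 13.5e-6     # pixel width [m]
focal_distance = 13.5     # focal distance [m]

angle = pixel_pitch / focal_distance     # angular size of one pixel [rad]
solid_angle = angle ** 2                 # solid angle [sr]

assert abs(angle - 1.0e-6) < 1e-12       # 1.0 urad per pixel side
assert abs(solid_angle - 1.0e-12) < 1e-18
```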
The optical band-pass provided by a LED spectrum is $\Delta \lambda /\lambda\approx$5\%, a third of the band-pass of a typical astronomical filter. The base concept is to consider the photon transmission through the telescope as a quantum process as well as the photon emission (LED) and the photon absorption (CCD). Telescope calibration establishes the balance sheet between a photon-counting calibrated light source and a photon-counting light detector. The LED emits a thin spherical wave packet (WP). Planar after reflection on the mirror around M, the wave packet collapses when absorbed by the focal plane in F. The WP probability of counting a photon in a given CCD pixel ${\{v\}}$ is \begin{equation} P b \{ \gamma \in S \rightarrow e \in v \} =|\psi|^2 \ast \delta_{\{ v \}} = A_{\{ v \}} \label{eq:1} \end{equation} where |$\psi$(x,y)|$^2$ is the squared modulus of the wave function amplitude and $\delta$(x,y)$_{\{v\}}$ the electron collection efficiency. The total number of photons falling on the channel $k$ is \begin{equation} \underbrace{N^{adu}_{i,j,k,I}}_{N\{v\}} = \underbrace{\phi_I \times \Delta t \times L(T)}_{\Phi_I} \times \underbrace{a_{i,j,k,I} \times b_k}_{A\{v\}} \times \underbrace{\epsilon_ {k'} \times g_{k,I}}_{g\{v\}} + \underbrace{P_ {i,j,k,I}}_{P\{v\}} \quad\quad . \label{eq:2} \end{equation} Assuming a uniform photon emission rate $\phi_I$, the expected total photon count $\Phi_I$ during the exposure $I$ is proportional to the exposure duration $\Delta t_I$, with a temperature correction $L(T_I)$. The photoelectron count in one pixel follows a Poisson law with an expected value $\Phi_I \times A_{\{v\}}$. The counting rates in all individual pixels or any subset inside the complete 3.4$\times$10$^{8}$ pixel set $\{v\}$ follow a multinomial law. The last part of Eq.
(\ref{eq:2}) represents the digitization of the pixel counts, represented ideally by two constants: a gain factor g$_{\{v\}}$ that transforms a number of photons into a number of ADUs (analog to digital unit) and the pedestal P$_{\{v\}}$. These electronic constants are two Gaussian variables whose fluctuations have to be added to the multinomial fluctuations of the photon counts to constitute the global statistical factor studied in Sect.\ref{sec:56}. Both electronic constants are studied specially in the electronic section Sect.\ref{sec:4}, but we state here that they have defects that introduce strong variations (1.5$\times10^{-3}$ RMS) from one image $I$ to the next, depending on the electronic channel $k$. For this reason, these constants are indexed with $I$ and $k$ instead of being considered as long-term constants. We take into account that the gain fluctuation equally affects all the pixels read by one channel $k$ and that the quantum efficiency $\epsilon_k$ is the same for the two channels inside one CCD. For this purpose, Eq. (\ref{eq:2}) introduces the fraction $b_k$ of the total number of photons falling on the channel $k$ and the respective quantum efficiency $\epsilon_k'$. The image matrix $a_{i,j,k,I}$ depends on $I$ because the interference pattern slides (LED jitter), which we extensively study in Sect.\ref{sec:54} and which is normalized by $\sum\limits_{i,j} a_{i,j,k,I} = 1$. \subsection{Wave packet signal and its Fresnel spectrum } \label{sec:22} Within the quantum mechanical framework, each individual photon wave function carries the complete interference pattern, and the image builds up by independently piling up a large number of photoelectrons (10$^{12}$/s) in all pixels. Accordingly, the optical modulation of an image is perfectly represented in Eq. (\ref{eq:1})\&(\ref{eq:2}) by a probability density, constant at a 10$^{-6}$ precision level for hours. 
Before proving it in Sect.\ref{sec:56}, we show here that the wave packet signal conforms to the laws of optics. Our LED light propagates in free space except for the reflection on the mirror surface, which can be represented by a Fresnel integral. (We neglect the diffraction on the optical surfaces of the image corrector optics, which is treated separately in Sect.\ref{sec:25}). We subdivided the mirror into sections covered by some 1024$\times$1024 pixel sub-matrices. The WP signal in each section (e.g., in Fig. \ref{fig:fig2}-a) was then Fourier transformed. The Fresnel integral being a convolution of the Fresnel free space propagation function and a mirror defect distribution, its transform is the product of two terms: the well-known Fresnel diffraction figure, and the transform of the mirror defect distribution. We call the distribution of the modulus the Fresnel spectrum. The quadratic average of all spectra is shown in Fig. \ref{fig:fig3}(left). \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figure3} \caption{Effect of the Fourier transform on a 1024x1024 vignette of the SNDICE image. {\bf left:} applied to the original field; {\bf right:} applied to the 3 PDE transformed fields -$\nabla_x$, $\nabla_y$, $\Delta^2/\Delta x\Delta y$-. The 288 vignettes covering the focal plane are quadratically averaged. The frames are set by |$\nu_x$| and |$\nu_y$|=$\nu_{nyq}$. The horizontal and vertical lines passing through the center are due to electronic noise. Dashed circles mark radial cuts $\nu_1$ and $\nu_2$ in Fig. \ref{fig:fig4_a}. The three points $\beta$, $\gamma$, and $\delta$ mark the three real FFT components averaging same name filters ($\alpha$ is at the center). \label{fig:fig3}} \end{figure} The spectrum is contained in a square defined by spatial frequencies |$\nu_x$| and |$\nu_y$|$\leq \nu_{nyq}$. The Nyquist frequency $\nu_{nyq}$ is 37 mm$^{-1}$, that is, the inverse of twice the pixel width 1/(2$\times$ \SI{13.5}{\micro\metre}).
The spectrum, being the digital Fourier transform (DFT) of a real function, is centrally symmetric. The rotational invariance around the center ($\nu_x$=$\nu_y$=0) is predicted by Fresnel symmetry. We explain the two lines on the $x$ and $y$ axes crossing at the center by residual electronic problems\footnote{after a large reduction by mitigation of main electronics problems} such as pixel-to-pixel (e.g., dead column) or line-to-line (e.g., microphonic noise), respectively. The bright spot at the center is due to specular reflection, which is in this way separated from diffraction. The rotational invariance of the DFT field reduces the amount of empirical data representing the surface state of the mirror by a huge factor. Instead of a two-dimensional spatial frequency plot such as Fig. \ref{fig:fig3}, we can make two one-dimensional spectral curves: one radial frequency distribution (Fig. \ref{fig:fig4_a}), and one angular distribution (Fig. \ref{fig:fig5}). Each sample of the field in the spatial frequency plane $\{\nu_x,\nu_y\}$ is taken as a sample at the radial frequency $\nu_{\rho}$ and the angle $\theta$~: \begin{equation} \nu_{\rho} = (\nu^2_x + \nu^2_y)^{1/2} \qquad \theta = atan(\nu_y / \nu_x) \quad\quad . \label{eq:6} \end{equation} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{figure4_a} \caption{Radial spectra of the sum and the difference of two images (image $v$ is close to $u$; $w$ distant). All are normalized to the 0.7\% rms photon noise. The [$\nu_1, \nu_2$] cut yields the angular plots in Fig. \ref{fig:fig5}. The Nyquist frequency is $\nu_{nyq}$. \label{fig:fig4_a}} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figure5} \caption{Spectral angular distribution of $u$+$v$, normalized by the photon noise and then by subtracting 1. The angular average S/N$\approx$6 equals the radial average inside the [$\nu_1$, $\nu_2$] cut. The electronic noise peaks at $\theta$=0 and $\theta$=$\pm \pi/2$. 
\label{fig:fig5}} \end{figure} The samples can either be integrated in angular and radial bins, or averaged\footnote{take a white-noise CCD image (photon flat field): its |DFT| field is flat. The integrated radial spectrum, proportional to the surface in a given ring, rises linearly with radius and then decreases when the rings are no longer contained in the square. The averaged radial and angular spectra are flat, as seen in Fig. \ref{fig:fig4_a} and Fig. \ref{fig:fig5}.}. We remark that the radial sampling along a diagonal is defined by a spacing divided by $\sqrt{2}$ and that the square pixel sampling filter projected on a diagonal is a triangle with a 13.5$\times\sqrt{2}$ $\mu$m base. The highest radial sampling frequency $\nu_{max}$ is $\sqrt{2} \nu_{nyq}$ ($\nu_{max}$=52.4 mm$^{-1}$). The radial spectrum corresponding to the field in Fig. \ref{fig:fig3} is found in Fig. \ref{fig:fig4_a}. The spectra of two additional images are added to the figure, which allows comparing the radial spectrum of the sum of two images (in black) with that of their difference (blue and red). For a sum there is no effect depending on the choice of the images. In contrast, for a difference there is an effect that is related to the vicinity of the images in time. There is a greater difference between the images $u$ and $w$ taken after waiting for one hour (blue) than between $u$ and $v$ taken within a one-minute delay (red). This effect is explained in Sect.\ref{sec:54} by a progressive drift of the LED position with respect to the optical axis of the telescope. The angular distribution of the sum spectrum, seen in Fig. \ref{fig:fig5}, is computed within the ring $\nu_1<\nu_{\rho}<\nu_2$. The two-dimensional Fourier transform of a real function being centrally symmetric, we need to plot only one half of the unit circle $(-\pi/2<\theta <\pi/2)$.
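The collapse of the two-dimensional spectrum into averaged radial and angular profiles, following Eq. (\ref{eq:6}), can be sketched as follows (assuming numpy; the bin counts and the normalization of frequencies to $\nu_{nyq}$ units are illustrative choices):

```python
import numpy as np

def radial_angular_spectra(spec, n_rad=64, n_ang=64):
    """Average a centered 2D spectrum into radial and angular profiles:
    nu_rho = sqrt(nu_x^2 + nu_y^2), theta = atan(nu_y / nu_x).
    Frequencies are expressed in units of the Nyquist frequency."""
    n = spec.shape[0]
    freqs = np.fft.fftshift(np.fft.fftfreq(n)) * 2.0   # in units of nu_nyq
    nu_x, nu_y = np.meshgrid(freqs, freqs)
    nu_rho = np.hypot(nu_x, nu_y)
    theta = np.arctan2(nu_y, nu_x)
    # Central symmetry of a real-image DFT: fold onto -pi/2 < theta <= pi/2
    theta = np.where(theta > np.pi / 2, theta - np.pi, theta)
    theta = np.where(theta <= -np.pi / 2, theta + np.pi, theta)
    # Bin indices: radius up to nu_max = sqrt(2) nu_nyq, angle over pi
    r_idx = np.minimum((nu_rho / np.sqrt(2) * n_rad).astype(int), n_rad - 1)
    a_idx = np.minimum(((theta + np.pi / 2) / np.pi * n_ang).astype(int),
                       n_ang - 1)
    radial = np.bincount(r_idx.ravel(), spec.ravel(), n_rad) \
        / np.bincount(r_idx.ravel(), minlength=n_rad).clip(1)
    angular = np.bincount(a_idx.ravel(), spec.ravel(), n_ang) \
        / np.bincount(a_idx.ravel(), minlength=n_ang).clip(1)
    return radial, angular
```

With averaged (rather than integrated) bins, a flat input spectrum yields flat radial and angular profiles, matching the white-noise behavior noted in the footnote above.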
As predicted by our model, the distribution is flat, except for the electronic noise, which yields accumulations and peaks at $\theta$=0 and $\theta=\pm \pi/2$ (where line or column frequencies are null). \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figure6} \caption{ {\bf a)} Spectral angular distribution of $u$+$w$ after applying PDE operators, divided by the unity distribution U of Fig. \ref{fig:fig5}. {\bf b)} Spectral angular distribution of $u$-$w$, divided by U (inside the radial cut [$\nu_1$, $\nu_2$]). The uncorrelated random spectra, null after subtraction of unity, are plotted in yellow. Fitted curves are shown in red. \label{fig:fig6}} \end{figure} A Fresnel spectrum is a stable and reproducible characteristic of the status of a section of mirror (2 cm$^2$ (one CCD vignette), 0.1 m$^2$ (one SNDICE image), or 8 m$^2$ (the whole mirror)). A complete mirror scan lasts about two hours. Systematic studies such as aging or color dependence have not been made so far. \subsection{Effect of a transverse LED motion on the Fresnel spectrum} \label{sec:23} Figure \ref{fig:fig1} shows that moving the source S moves its projection M, the center of the illuminated section of the mirror, but keeps it projected on the center of the focal plane. We call $T_x$ and $T_y$ the two translation operators representing a shift of the mirror points reflected on a given pixel by one pixel leftward or upward. Partial derivatives of the WP signal are approximated by the finite differences of the translation operators $T_x$ and $T_y$ and the identity operator U, \begin{equation} \nabla_x =T_x -U ;\, \nabla_y=T_y -U ;\, \Delta^2/\Delta x \Delta y =(T_x -U)(T_y -U) \quad . \label{eq:3} \end{equation} When we apply these operators, commonly named gradient and Hessian partial derivative equation (PDE) filters, to a CCD image, we obtain three rather uniform new images. The Fresnel spectra of these images are seen on the right of the main spectrum in Fig.
\ref{fig:fig3}. Simple mathematics predicts the shape of these images. For instance, the DFT of the translation $T_y$ is the product of the complex DFT by a phase-shift factor $\exp(i\pi \nu_{y} / \nu_{nyq})$. Subtracting unity and taking the modulus gives the observed result: the two-dimensional spectrum of $\nabla_y$ is the whole spectrum multiplied by a $|\sin(\pi \nu_{y} / 2\nu_{nyq})|$ factor. The $\nabla_x$ formula is obtained by exchanging $x$ and $y$, and the $\Delta^2/\Delta x \Delta y$ formula by multiplying the two angular factors. The angular spectra resulting from the application of the PDE filters to the sum of the images are found in Fig. \ref{fig:fig6}.a). They are explained by the factor introduced in the DFT by differentiation. For $\nabla_y$, the factor is $|\sin(\pi \nu_y /2\nu_{nyq})|$. Inside the ring $\nu_1<\nu_{\rho}<\nu_2$ the average value of $\nu_{\rho}$ is $\nu_{nyq}$/2 and $\nu_y = \nu_{\rho} \sin \theta$; therefore the angular factor is $S(\theta)=|\sin((\pi/4)\sin\theta)|$. The $\nabla_y$ spectrum in Fig. \ref{fig:fig6}.a) and the spectrum of the image $u$-$w$ in Fig. \ref{fig:fig6}.b) are both proportional to S($\theta$). A fit yields the respective factors 1.34 and 0.14. For the other pair of images, $u$ and $v$, taken at a one-minute interval, the angular spectrum is almost null. This proves that an LED drift in the y direction is the cause of the small difference between exposures $u$ and $w$. When we apply a proportional rule of thumb, the $u$-$w$ shift distance is a tenth of that of a one-CCD-line shift computed in $\nabla_y$ (13.7 $\mu$m). Hence we estimate a 1.4 $\mu$m LED shift in one hour!
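The modulation that the operators of Eq. (\ref{eq:3}) imprint on the spectrum can be checked numerically. The sketch below (assuming numpy; circular shifts stand in for the real translations so that the DFT identity is exact, and frequencies are in cycles/pixel with $\nu_{nyq}$=0.5, so the exact factor is 2$|\sin(\pi\nu/2\nu_{nyq})|$, i.e., the sine factor quoted above up to normalization):

```python
import numpy as np

# Circular-shift versions of the translation operators of Eq. (3); with
# periodic shifts the DFT identity checked in the test is exact.
def grad_x(a):
    return np.roll(a, -1, axis=1) - a      # nabla_x = T_x - U

def grad_y(a):
    return np.roll(a, -1, axis=0) - a      # nabla_y = T_y - U

def hessian(a):
    return grad_x(grad_y(a))               # Delta^2 / Delta x Delta y

def gradient_factor(n):
    """|exp(2 i pi nu) - 1| = 2 |sin(pi nu)| for the DFT frequencies
    nu (cycles/pixel): the spectral modulation of a one-pixel gradient."""
    nu = np.fft.fftfreq(n)
    return np.abs(np.exp(2j * np.pi * nu) - 1.0)
```

The Hessian filter picks up the product of the two angular factors, as stated above for $\Delta^2/\Delta x \Delta y$.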
\subsection{Orthogonal basis of differential operators: $\alpha, \beta, \gamma$, and $\delta$} \label{sec:24} Similarly we introduce four orthogonal operators $\alpha, \beta, \gamma,$ and $\delta$ which have a crucial role in our image analysis method: \begin{equation} \begin{split} \alpha =(T_x +U)(T_y +U) \quad;\qquad \beta =(T_x -U)(T_y +U) \quad; \\ \gamma =(T_x +U)(T_y -U) \quad;\qquad \delta =(T_x -U)(T_y -U) \quad\quad . \end{split} \label{eq:4} \end{equation} We modified these operators to project the original CCD images $\{a_{i,j}\}$ into lower resolution images (scale 1/2$\times$1/2) by restricting indices to even values. More explicitly, we developed Eq. (\ref{eq:4}) using the pixels $a_{i,j}$ defined in Eq. (\ref{eq:2}): \begin{equation} \begin{bmatrix} \alpha_{m,\ n} \\ \beta_{m,\ n} \\ \gamma_{m,\ n} \\ \delta_{m,\ n} \\ \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 \\ \end{bmatrix} \begin{bmatrix} a_{2m+1,\ 2n+1} \\ a_{2m+1,\ 2n} \\ a_{2m,\ 2n+1} \\ a_{2m,\ 2n} \\ \end{bmatrix} \quad\quad . \label{eq:5} \end{equation} The operator $\alpha =\{ \alpha_{m,\ n}\}$ sums pixels with adjacent even and odd indices. Its spectrum in Fig. \ref{fig:fig4_a} follows the original spectrum ($U$), but stops at half the frequency range. $\beta$ and $\gamma$ are similar to the gradient $\overrightarrow{\nabla}$= $\{\nabla_x, \nabla_y\}$, and $\delta$ to the Hessian $\Delta^2/\Delta x \Delta y$. Taking the four $\alpha, \beta, \gamma,$ and $\delta$ images together, we have an efficient lossless encoding of the original image, represented by the orthogonal matrix of Eq. (\ref{eq:5}). \begin{figure} \centering\includegraphics[width=0.8\linewidth]{figure4_b} \caption{Radial spectra of the sum and difference of two images after the application of four filters ($\alpha, \beta, \gamma, \delta$). \label{fig:fig4_b}} \end{figure} In Fig. 
\ref{fig:fig4_b} we show the radial spectra of $\alpha, \beta, \gamma$, and $\delta$ for image sums and differences. The spectra of the image differences are almost drowned in the noise, except for those of $\beta$ and $\gamma$ for distant images $u$ and $w$. The radial spectra of the $\beta, \gamma$, and $\delta$ operators are cut severely at low frequencies, but higher frequencies are unchanged. The four radial spectra converge at $\nu_{max}'=\nu_{max}/2$. The uncorrelated photon noise spectrum was obtained by simulation, using a Gaussian variable generator\footnote{adjusted in the residual plot in Fig. \ref{fig:fig7}} with an rms equal to 98 adu for each pixel, which corresponds to about 4$\times$10$^4$ photons/pixel. It was processed in the same way as a real image. The resulting radial and angular spectra are flat for the $\alpha, \beta, \gamma$, and $\delta$ operators but not for the PDEs. We tested the hypothesis that the $\delta$ operator applied to a difference of images yields a pure photon noise spectrum, with the highest precision defined by the photon statistics of whole images. For this we fit a flat $\delta$ radial spectrum to the $u$-$v$ and $u$-$w$ images. The histograms of the residuals are shown in Fig. \ref{fig:fig7}. The reference level of 1 corresponds to the approximate level of the noise (98 adu). The dispersion of the radial samples is 0.17 adu (rms) on a mean signal of 17000 adu, that is, $\approx$10$^{-5}$. The number of photons contributing to one sample is the whole content of two images: 56$\times$10$^{12}$ divided by 4 (the number of estimators) and then by 1500 (the number of samples), that is, $\approx$10$^{10}$. This verifies that the dispersion of the radial samples (10$^{-5}$) is consistent with the photon statistical error (10$^{10}$)$^{-1/2}$.
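The encoding of Eq. (\ref{eq:5}) can be exercised on a toy array. The sketch below (assuming numpy; function names are ours) checks that the 4$\times$4 matrix is orthogonal, so the four half-resolution fields encode the original image losslessly, as claimed in Sect.\ref{sec:24}:

```python
import numpy as np

# The half-resolution encoding of Eq. (5): one half of a Hadamard-type
# matrix, hence M @ M.T = I and the encoding is lossless.
M = 0.5 * np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]], dtype=float)

def encode(a):
    """Map an image {a_ij} (even side lengths) to the four half-resolution
    fields alpha, beta, gamma, delta of Eq. (5)."""
    v = np.stack([a[1::2, 1::2], a[1::2, 0::2],
                  a[0::2, 1::2], a[0::2, 0::2]])
    return np.einsum('fg,gmn->fmn', M, v)

def decode(fields):
    """Invert the encoding (M is orthogonal and symmetric: M^-1 = M)."""
    v = np.einsum('fg,gmn->fmn', M, fields)
    ny, nx = fields.shape[1] * 2, fields.shape[2] * 2
    a = np.empty((ny, nx))
    a[1::2, 1::2], a[1::2, 0::2], a[0::2, 1::2], a[0::2, 0::2] = v
    return a
```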
\begin{figure} \centering \includegraphics[width=0.8\linewidth]{figure7} \caption{Residuals of $\delta$($u$-$v$)/noise and $\delta$($u$-$w$)/noise linear fits of radial frequency spectra (noise=98 adu). The precision for each sample is $\approx$10$^{-5}$, and for the whole image the average is $\approx$0.3$\times$10$^{-6}$. \label{fig:fig7}} \end{figure} \subsection{Detection of camera defects} \label{sec:25} The photon propagation from the camera lens to the focal plane is shorter than from the mirror by a factor larger than 15. Therefore its Fresnel spectrum is much sharper. We developed a test on this premise (\citealt{ref4}), using the gradient vector length: \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figure8} \caption{Camera defects a, c, and d circled in Fig. \ref{fig:fig2}. {\bf Top:} the flux map around the defects. {\bf Bottom:} the profile of the adc content drawn along a line passing through the center of the defects. \label{fig:fig8}} \end{figure} \begin{itemize} \item[$\bullet$] 1\% of the pixels are tagged by the ||$\overrightarrow{\nabla}$||$\geq$16$\sigma$ cut (see Fig.\ref{fig:fig2}b), that is, 46,000 per channel. All are connected to an isolated defect (or to dead columns). \item[$\bullet$] There are around 1000 defects per CCD channel, that is, about 10$^5$ for the whole camera. \item[$\bullet$] Many types of defects are found, some with a few tagged pixels and some with hundreds. For instance, the defects a, c, and d, circled in Fig. \ref{fig:fig2}, are examined in Fig. \ref{fig:fig8}: $\underline{a}$ is circular with no absorption, $\underline{c}$ is strongly absorbing with a complex shape, and $\underline{d}$ is slightly absorbing with no interference rings. In addition, $\underline{b}$ is a single absorbing pixel surrounded by 8 pixels at half level, that is, a dead pixel. \item[$\bullet$] The defect distribution is sufficiently sparse to separate individual defects and to build a comprehensive catalog.
\item[$\bullet$] For each tagged pixel, we measure a significant $\overrightarrow{\nabla}$ vector. Therefore a given defect is characterized by a vector field. \end{itemize} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figure9} \caption{$\bf a)$ Gradient field pattern around a defect A is characterized by a regression analysis of angle $\Theta$ (cotg$\Theta$=$\mathrm {\nabla}$.Ox) versus pixel x in each CCD line y. The regression equation is $x=a+(b-y)\,\mathrm{cotg}\,\Theta$. $\bf b)$ Four horizontal slices of six lines each are drawn corresponding to groups of lines in a). The field lines converge on the point A $\{x=a,\, y=b\}$. \label{fig:fig9}} \end{figure} Figure \ref{fig:fig9} shows how the analysis of the vector field transforms the cloud of pixels produced by tagging into field lines and a center of curvature $\underline{A}$. It defines the phase contours piecewise by joining concentric circular arcs. This method is adapted to contours that are more circular at the periphery of a cluster than in the central region, where the center of gravity $\underline{A}$ of the defect lies. A regression analysis fits a common center of curvature A(a,b) for parallel phase contours defined by the property of field lines: $(x-a)\sin\Theta + (y-b)\cos\Theta=0$. \section{High-precision CCD electronics} \label{sec:4} The aim of this paper is to track the precision limit of photometry as it is applied in astronomy. The basic concept is to count photons along the three quantum steps: LED emission, telescope transmission, and CCD absorption. The second step is the only one that uses optics. It has been treated above in Sect.\ref{sec:2}. The remaining analysis in Sect.\ref{sec:5} is pure statistics. However, the precision of the third step, photon counting by Megacam, is currently limited by the instability of the electronics. We observe hourly fluctuations of the gain in a 0.8\% interval and of the pedestal around 0.1 mV (15 adu).
The resulting problems are mitigated for astronomers by the empirical subtraction of the local sky background and by the comparison to local reference stars, but maybe not as well as needed. In Sect.\ref{sec:41} we present the mitigating techniques, developed especially for Sndice images, that reach a 1-30 ppm fluctuation range. Then in Sect.\ref{sec:42} we describe why we are confident that better CCD readout electronics might yield all the precision needed without resorting to mitigation. \subsection{Mitigation of Megacam electronics problems} \label{sec:41} Megacam electronics problems (cf. \citealt{ref8}) are expressed by a variation of the pedestal and gain constants in Eq. (\ref{eq:2}) depending on the image number and by fluctuations during image readout. The pedestal fluctuations, which yield the horizontal and vertical lines in Fig.\ref{fig:fig3}, are controlled using the $\nabla_x$ and $\nabla_y$ filters, which suppress these lines selectively. The gain fluctuations are controlled using the stability of the Sndice light source. This could create a vicious circle, because gain corrections are needed to produce the CCD data and vice versa. To break the loop, we introduce the concept of a flux$\times$efficiency$\times$gain (FEG) scale. It is based on the fact that the fraction of the total number of photons impinging on a half-CCD, noted b$_k$ in Eq. (\ref{eq:2}), is constant within its $\approx$3$\times$10$^{-6}$ statistical fluctuation. The observed fluctuation of the CCD count is due to a common multiplicative factor $\psi_{k,I}$ of the flux, the gain, or the efficiency in Eq. (\ref{eq:2}). We slightly transform this equation by replacing the global flux $\Phi_I$ by the FEG average $\psi_{I}$: \begin{equation} \psi_{I}=<\psi_{k,I}>_{k=1,72}=\Phi_I \times <\epsilon_k \times g_{k,I}>_{k=1,72} \quad\quad .
\label{eq:7} \end{equation} The average of the gains of the 72 channels is an order of magnitude more stable than the gain of one channel ($\approx$0.02\% rms versus $\approx$0.2\% rms). This fluctuation is on the same order of magnitude as the effect of the LED thermal fluctuations (0.1$\ensuremath{^\circ}$C) on the flux $\Phi_I$. Both contribute equally to the FEG fluctuations. Using only ADC counts, we cannot distinguish between them. We determine $\psi_{I}$ at $\approx$3$\times$10$^{-5}$ precision, an order of magnitude better than each of its components $\psi_{k,I}$. The relative gain parameter of each channel is the real one divided by the 72-channel gain average. In practice, we choose a reference image and determine the gains of the other images relative to it. With this FEG scale, we mimic what would be done with ideal electronics: we would check that the gains in each image are compatible with those of the reference image at a 3$\times$10$^{-5}$ level. Then their average could be tested at the next order of precision, that is, 3$\times$10$^{-6}$, which is the limiting precision of the photon statistics in one channel (i.e., one half-CCD). This precision is reached after mitigation in the remaining study, except in Sect.\ref{sec:56}, where the FEG mitigating method is replaced by the use of the real gain of each channel determined by the photon noise in the actual frame. \subsection{Making high-precision CCD electronics} \label{sec:42} We claim that making ideal CCD readout electronics is rather easily feasible with modern technology. We base this claim on our experience with a large electronic system in the H1 experiment \citep{ref7}, calibrated at a 30 ppm level for 15 years, and on an R\&D effort on Megacam electronics (\citealt{ref5} and \citealt{ref6}) reaching a 0.2 e equivalent noise charge. High-precision electronics would open a wide range of applications to the methods developed in this paper.
One example is the preventive maintenance of a telescope using a measurement much more sensitive than the usual scientific requirements (i.e., detecting problems before they hurt). Another example is the creation of photometric standards and the photometric calibration of any instrument (not only telescopes) at the ppm level. This paper also proves that for the low light fluxes of astronomy, cooled CCDs are the best photometric calibrators\footnote{better than the cooled large-area photodiodes used by Sndice, which are in turn better than the NIST warm photodiodes (calibrated in the ill-defined photovoltaic mode)}. \section{Coherent illumination calibration at the quantum precision limit} \label{sec:5} We have regrouped in Sect.\ref{sec:51} the mathematical methods used when comparing CCD images. Readers interested in bare results could skip it. However, this section is needed to understand why the precision of our regression analyses is so good. These methods are used in Sect.\ref{sec:52} and Sect.\ref{sec:55} to measure the single parameter that defines a particular image inside a sequence: the total number of photons emitted by the LED during the exposure. In Sect.\ref{sec:54} we show how the stability of the LED position during a sequence has to be controlled. Finally, in Sect.\ref{sec:56} we check for two image sequences (8.5 and 24 billion pixels, respectively) that the content of each pixel in each image is entirely defined by quantum mechanics and photon statistics. \subsection{Mathematical properties of the statistical distributions of the four-vector $\{{\mathrm \alpha, \beta, \gamma, \delta}\}$} \label{sec:51} Our measurement model is Eq. (\ref{eq:2}). Groups of four pixels are replaced by the four filters\footnote{introduced as differential operators in Sect.\ref{sec:24}, they act as filters on a Fresnel spectrum}, according to Eq. (\ref{eq:5}).
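The change from the pixel basis to the filter basis can be illustrated with a toy Monte Carlo (assuming numpy; the sample size and seed are arbitrary): for uncorrelated pixel noise, the orthogonality of the matrix in Eq. (\ref{eq:5}) makes the four filter outputs uncorrelated, with equal variances:

```python
import numpy as np

# Project uncorrelated pixel noise onto the filter basis of Eq. (5).
# Because the matrix is orthogonal, the covariance of the four outputs is
# sigma^2 times the identity: equal variances, null covariances.
M = 0.5 * np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]], dtype=float)

rng = np.random.default_rng(3)
sigma = 98.0                       # adu, the photon-noise rms used in the text
g = rng.normal(0.0, sigma, size=(4, 400_000))   # four-pixel groups
x = M @ g                          # alpha, beta, gamma, delta samples
cov = np.cov(x)
```

This is the numerical counterpart of the Parseval argument recalled below: the orthogonal change of basis conserves the noise power and distributes it equally among the four filters.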
The uncorrelated noise, which is the quadratic average of the noises coming from the four individual pixels, is the same for each filter. In contrast, the correlated electronic noise and the LED-jitter noise are different. As shown in Sect.\ref{sec:2}, the $\delta$ filter suppresses the pedestal, the correlated gain fluctuations, and the LED jitter at a 10$^{-6}$ level. The $\beta$ and $\gamma$ filters suppress correlated noises down to a few 10$^{-4}$ level. The $\alpha$ filter keeps most correlated noises, which are at a few 10$^{-3}$ level, as well as the LED-jitter noise. (We recall that according to the Parseval theorem, signal and noise power are globally conserved in the Fourier transform and in the orthogonal change from the pixel basis to the filter basis in Eq. (\ref{eq:5})). The next step is to consider the 1.18 million four-vectors inside a given half-CCD $k$ of a given image $I$ as the successive occurrence of four random variables $\alpha_{k,I},~ \beta_{k,I},~ \gamma_{k,I}$, and~ $\delta_{k,I}$, themselves the components of a random four-vector $\Xi_{k,I}$. Equation (\ref{eq:2}) yields that the expected value of $\Xi_{k,I}$ is the result of applying the filters to the WP signal seen by one given half-CCD. The sequence of 1.18 million four-vectors almost perfectly simulates those taken by a multivariate Gaussian variable whose distribution is represented by Eq. (\ref{eq:9}): \begin{equation} \begin{split} &\langle\Xi_{k,I}\rangle = [\Psi_{k,I}\quad 0\quad 0\quad 0]\quad\quad\; \Xi_{k,I}' = \Xi_{k,I} - \langle\Xi_{k,I}\rangle \quad \\ \label{eq:9} &\langle\Xi_{k,I}' \times \Tilde{\Xi}_{k,I}' \rangle =\Psi^2_{k, I} \begin{bmatrix} \sigma^2_{\alpha_{k}} & 0 & 0 & 0 \\ 0 & \sigma^2_{\beta_{k}} & 0 & 0 \\ 0 & 0 & \sigma^2_{\gamma_{k}} & 0 \\ 0 & 0 & 0 & \sigma^2_{\delta_{k}} \\ \end{bmatrix} \quad \sigma^2_{\beta_{k}} = \sigma^2_{\gamma_{k}} \quad\quad . \end{split} \end{equation} The relations in Eq. (\ref{eq:9}) have all been verified.
First, the mean of $\alpha$ has been defined in Eq. (\ref{eq:7}). The three other means are null within a fraction of an adu. This property is explained theoretically using the Fourier analysis of the WP signal as reported in Sect.\ref{sec:2}. Second, the covariance matrices are diagonal due to the algebraic properties of the WP phase contours and because of the rotation invariance. The values of $\sigma_{\alpha}$, $\sigma_{\beta}$, $\sigma_{\gamma}$, and $\sigma_{\delta}$ are directly related to the Fresnel spectra seen in Fig.\ref{fig:fig4_b} ($\sigma_{\alpha}\approx$3.5\%, $\sigma_{\beta}=\sigma_{\gamma}\approx$0.9\%, and $\sigma_{\delta}\approx$0.4\%). They do not depend on the flux of image $I$ and not much on the channel number $k$. Therefore we sometimes dropped the indices $k$ and $I$ in their expression. Moreover, <$\beta$|$\gamma$>=0 and $\sigma_{\beta}=\sigma_{\gamma}$ because of rotation invariance. After associating a 4$\times$4 diagonal Gaussian with each CCD channel, we added to it the diagonal Gaussian noise in Eq. (\ref{eq:10}). \begin{equation} \begin{split} \label{eq:10} &\langle \delta \Xi_{k,I}' \times \delta \Tilde{\Xi}_{k,I}' \rangle = \begin{bmatrix} \varsigma^2_{\alpha_{k,I}} & 0 & 0 & 0 \\ 0 & \varsigma^2_{\beta_{k,I}} & 0 & 0 \\ 0 & 0 & \varsigma^2_{\gamma_{k,I}} & 0 \\ 0 & 0 & 0 & \varsigma^2_{\delta_{k,I}} \\ \end{bmatrix} \quad\quad . \end{split} \end{equation} The 72 four-vector variables $\Xi_{k,I}'+\delta \Xi_{k,I}'$ are also centered Gaussians. Their means are null and their variances, measured independently for each image, are the raw $\Psi\sigma$ flux estimators. They are plotted in Fig.\ref{fig:fig17}.b) for the level ramp and in Fig.\ref{fig:fig19_a} for the flux ramp (after dividing the expression by $\Psi_{k,I}^2$ and averaging all channels). To extract the pure WP signal from the noise, we compared different images two by two. Extending Eq. (\ref{eq:9}) to all pairs of images $I_1$ and $I_2$ leads to Eq. (\ref{eq:11}). 
\begin{align}\label{eq:11} &\langle\Xi_{k,I_1}' \times \Tilde{\Xi}_{k,I_2}' \rangle = \Psi_{k,I_1} \Psi_{k,I_2} \begin{bmatrix} \sigma^2_{\alpha_{k}} & \eta_{x_k} \Delta_{x_{I_1 \rightarrow I_2}} & \eta_{y_k} \Delta_{y_{I_1 \rightarrow I_2}} & 0 \\ \eta_{x_k} \Delta_{x_{I_2 \rightarrow I_1}} & \sigma^2_{\beta_{k}} & 0 & 0 \\ \eta_{y_k} \Delta_{y_{I_2 \rightarrow I_1}} & 0 & \sigma^2_{\gamma_{k}} & 0 \\ 0 & 0 & 0 & \sigma^2_{\delta_{k}} \\ \end{bmatrix} \\ \notag &\Delta_{x_{I_1 \rightarrow I_2}} = -\Delta_{x_{I_2 \rightarrow I_1}} \quad\quad\quad \Delta_{y_{I_1 \rightarrow I_2}} = -\Delta_{y_{I_2 \rightarrow I_1}} \quad\quad . \end{align} The compact matrix form of Eqs. (\ref{eq:9}), (\ref{eq:10}) and (\ref{eq:11}) hides a great complexity. For example, the total number of variables N$_t$ = 4$\times$72$\times$N$_I$ is 20,160 for the sequence of N$_I$=70 images in the flux ramp. This yields 20,160 diagonal terms and 2,812,320 pairs of non-diagonal terms of interest. This is the number of terms that we analyse in Sect.\ref{sec:52}. When we restrict the distribution of these enormous Gaussian variables to two components $x$ and $y$, we obtain a bivariate Gaussian law with a 2$\times$2 covariance matrix C$_{xy}$. The C$_{xy}$ matrix is written conventionally with the two marginal variances $\sigma_x^2$ and $\sigma_y^2$ on the diagonal and a non-diagonal term $\rho \sigma_x \sigma_y$ ($\rho$ being the correlation coefficient). The additivity of covariance matrices allows us to add the noise in Eq. (\ref{eq:10}) to the WP signal of Eq. (\ref{eq:11}).
This yields the following equation: \begin{equation} \begin{split} C_{\alpha_{k,I_1} \alpha_{k,I_2}} = & \underbrace{\sigma^2_{\alpha_{k}} \begin{bmatrix} \Psi^2_{k, I_1} & \Psi_{k, I_1} \Psi_{k, I_2} \\ \Psi_{k, I_1} \Psi_{k, I_2} & \Psi^2_{k, I_2} \\ \end{bmatrix} }_{WP} + \underbrace{\begin{bmatrix} \varsigma^2_{\alpha_{k, I_1}} & 0 \\ 0 & \varsigma^2_{\alpha_{k, I_2}} \\ \end{bmatrix}}_{Noise} \\ = & \begin{bmatrix} \sigma^2_{x} & \rho\sigma_{x}\sigma_{y} \\ \rho\sigma_{x}\sigma_{y} & \sigma^2_{y} \\ \end{bmatrix} \quad\quad . \end{split} \label{eq:12} \end{equation} In Sect.\ref{sec:52} we adopt for the bivariate Gaussian distribution of two $\alpha_{k,I}$ variables the common parametrization of the regression analysis: $x$ is the marginal variable, $y$ the conditional variable, and the three parameters are $\sigma_{x}$, $\sigma_{y/x}$, and the slope $a_{y/x}$. Classical formulas\footnote{Formulas and their application to our problem are found in \citet{ref8}, Appendix C.} that relate the two parametrizations of the regression analysis are used in Sect.\ref{sec:52} to evaluate the difference between the slope $a_{y/x}$ of the regression line and the gain ratio of the WP signal $\Psi_y/\Psi_x$ ($x=\alpha_{k, I_1}$ ; $y=\alpha_{k,I_2}$). This difference $D=2(\sigma_{y/x}/\sigma_{\alpha})^2$ $\approx$1\% is small for the $\alpha$ variables at the reference flux, supporting the choice of a regression estimator for the gain ratio in this case. However, D is large for the other three variables $\beta$, $\gamma$, and $\delta$, imposing another type of noise estimator, the variance of $\Delta \delta$ (in which $\delta$ may be replaced by $\beta$ or $\gamma$): \begin{equation} \Delta \delta_{k,cur} = \delta_{k,cur} - \xi_{k,cur}\delta_{k,ref} \qquad \xi_{k,cur} = \Psi_{k,cur}/\Psi_{k,ref} \quad\quad . \label{eq:17} \end{equation} This linear combination of the current and the reference images eliminates the WP signal on a pixel-by-pixel basis and yields a pure noise variable.
Its mean is null and its variance, using Eq. (\ref{eq:12}), is: \begin{equation} \langle \Delta \delta^2_{k,I} \rangle / \Psi^2_{k,I} = \varsigma^2_{k,I} / \Psi^2_{k,I} + \varsigma^2_{k,ref} / \Psi^2_{k,ref} = S_k(\Phi_I) + S_k(\Phi_{ref}) \quad\quad . \label{eq:18} \end{equation} The identification of the square of $\varsigma_{\delta}/\alpha$ with the so-called statistical factor S($\Phi$) is a key to the analysis of uncorrelated noise in Sect.\ref{sec:56}. In summary, the four-vector Gaussian model yields four mean estimators and a variance matrix (four variances and six covariances) for one image. Three means and four covariances are null. We are left with the mean of $\alpha$ and four variances used in the following as five redundant flux estimators, plus two covariances used as LED motion estimators. For a sequence of images we extract four sequences of noise estimators based on the variance of the flux-weighted difference of two images (one for each filter). \subsection{High-precision flux ratio estimates} \label{sec:52} The algorithm estimating the flux ratio of the two images $I_1$ (reference) and $I_2$ (current) was introduced by \cite{ref12}. Its principle, which is illustrated in Fig.\ref{fig:fig10}, considers that the photon distribution is multinomial (not Gaussian). The ratio of the FEG variables $\alpha_{k,I}$ in a given four-pixel cell matches the ratio of exposure durations, which is about 5/6 in this example. We reconstructed the joint probability distribution as a product of the marginal distribution of the reference variable $\alpha_{k,ref}$ (left) and the conditional probability of the current variable $\alpha_{k,cur}$ (right). The joint distribution has three properties: \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figure10} \caption{ $\bf Left$ the histogram of $\alpha$ in the reference image subdivided in slices 40-adu wide. $\bf Right$ the histogram of the projection of each reference slice in any current image is a Gaussian.
Only one slice for every five is represented. $\bf Top$ (inset) the means of the current slices are fit as a linear function of the means of the reference slices. The distribution of the residuals is shown in Fig.\ref{fig:fig11}.} \label{fig:fig10} \end{figure} a) The marginal distribution is Gaussian: a Gaussian fit\footnote{The truncated Gaussian fit is good within $\pm 2 \sigma_{\alpha}$; 4 out of 72 channels have non-Gaussian tails due to stains on the mirror (see \citealt{ref8}, fig.12).} yields <$\alpha_{k,ref}$>= $\Psi_{k,ref}$ and $\sigma(\alpha_{k,ref}) = \Psi_{k,ref} \sigma_{\alpha}$. b) The regression curve is a straight line (see inset of Fig.\ref{fig:fig10}): \begin{equation} <\alpha_{k,cur} / \alpha_{k,ref}>= \Psi_{k,cur} + a_{y/x}(\alpha_{k,ref}-\Psi_{k,ref}) \quad\quad . \label{eq:13} \end{equation} In each bin of the $\alpha_{k,ref}$ histogram, we fit a Gaussian to the $\alpha_{k,cur}$ distribution and hence obtain a value of the conditional mean $<\alpha_{k,cur}/\alpha_{k,ref}>$ and of the conditional standard deviation $\sigma(\alpha_{k,cur}/\alpha_{k,ref})$. The quality of the fit of Eq. (\ref{eq:13}) is excellent, as shown in Fig.\ref{fig:fig11}-a, where individual error bars are the Gaussian width divided by the square root of the event number, or more conservatively, in Fig.\ref{fig:fig11}-b by the width (0.4 adu rms) of the distribution of residuals. It determines $\Psi_{k,cur}$ with a 0.06 adu (rms) point precision (4$\times$10$^{-6}$). Particular care is taken for such high-precision point measurements involving a small fraction of an adu. They are valid only as representing an average of the digital sampling of a continuous analog variable over a wide ADC range. In this example, where $\alpha_{k,cur}$ is sampled within a 14000$\pm$750 adu interval, the fit yields the slope $a_{y/x}$ with a precision of 1.7$\times$10$^{-4}$.
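A minimal sketch of this regression estimator follows (assuming numpy; the toy data model, bin width, and slice-occupancy cut are illustrative choices, not the actual pipeline):

```python
import numpy as np

def flux_ratio(alpha_ref, alpha_cur, bin_width=40.0):
    """Slice the reference alpha histogram into fixed-width bins, take the
    mean of the current alpha in each slice, fit a straight line through
    the slice means, and evaluate it at Psi_ref to estimate Psi_cur."""
    psi_ref = alpha_ref.mean()
    bins = ((alpha_ref - psi_ref) // bin_width).astype(int)
    centers, means = [], []
    for b in np.unique(bins):
        sel = bins == b
        if sel.sum() < 100:          # skip sparsely populated slices
            continue
        centers.append(alpha_ref[sel].mean())
        means.append(alpha_cur[sel].mean())
    slope, intercept = np.polyfit(centers, means, 1)
    psi_cur = intercept + slope * psi_ref   # conditional mean at Psi_ref
    return psi_cur / psi_ref, slope
```

Note that the fitted line, evaluated at the central reference flux, estimates the flux ratio without being biased by the noise-induced attenuation of the slope $a_{y/x}$.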
\begin{figure} \centering \includegraphics[width=1.0\linewidth]{figure11} \caption{Residuals of the linear fit of Fig.\ref{fig:fig10}: $\bf a)$ as a function of $\alpha_{k,reference}$; $\bf b)$ as a histogram (0.4 adu rms). In red: points included in the fit ($\alpha \in [\langle \alpha \rangle \pm $1.5$\sigma ]$).} \label{fig:fig11} \end{figure} c) The conditional standard deviation $\sigma(\alpha_{k,cur}/\alpha_{k,ref})$ varies as the square root of the flux (because of the multinomial law of photon counts). We fit a polynomial to the data, as shown in Fig.\ref{fig:fig12}-a. Its first-order linear approximation is: \begin{equation} \sigma(\alpha_{k,cur}/\alpha_{k,ref}) = \sigma_{y/x} + b_{y/x} \times (\alpha_{k,ref} - \Psi_{k,ref}) \quad\quad . \label{eq:14} \end{equation} The central value $\varsigma_{k,cur}=\sigma_{y/x}$ takes the place of the constant value defined for a Gaussian. The point precision in this example is excellent (0.02 adu $\approx\Psi_{k,ref}\times$10$^{-6}$). The precision on $b_{y/x}$ is 0.023 adu/adu. This process of extrapolating the flux-dependent quantities f($\Psi$) to the central reference flux $\Psi=\Psi_{k,ref}$, used in Eqs. (\ref{eq:13}) and (\ref{eq:14}), is applied systematically to all other variables. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figure12} \caption{$\bf a)$ Fit of the width of the joint distribution as a function of $\alpha_{k,reference}$ in the $\{\langle \alpha \rangle \pm $1.5 $\sigma \}$ interval. $\bf b)$ Histogram of the residuals of the previous fit, fitted by a Gaussian (0.14 adu rms). \label{fig:fig12}} \end{figure} \subsection{Determining the LED jitter using the $\alpha'/\beta$ and $\alpha'/\gamma$ correlation} \label{sec:54} Two correlation terms, <$\beta$|$\alpha$> or <$\gamma$|$\alpha$>, appear in the covariance matrix of Eq. (\ref{eq:11}), while they are null in the autocovariance matrix of Eq. (\ref{eq:9}). An intuitive explanation of this puzzle is found in \cite{ref8}.
The point of interest here is that the non-diagonal matrix elements, noted $\Psi_{k,I_1}\times\Psi_{k,I_2}\times\eta_{y_k}\times\Delta_{y_{1 \rightarrow 2}}$, are a very sensitive probe of the LED jitter projected on the y axis (idem for x). LED jitter is the only source of noise found in the optical signal in addition to the photon noise. The <$\gamma$|$\alpha$> terms for each CCD channel k (0 $\le$ k $\le$ 71) in the sequence of $N_I$=25 images at constant flux level yield an $N_I\times N_I$ matrix. In Fig.\ref{fig:fig14}, we only keep the last row ($I_1$=24, $I_2$=0 to 24) of the matrix, but we repeat the operation for the 72 channels. The raw data (in blue) are the slopes $a_{\gamma/\alpha}$ of the regression fit in Eq. (\ref{eq:13}). They are ordered by time (that is, by image number $I$) and by electronic channel number $k$. Here $\alpha_{24}$ is the marginal variable and $\gamma_0,...,\gamma_{24}$ are the conditional variables. The reason for taking the reference image from among the last eleven images in Fig.\ref{fig:fig14} is obvious: it belongs to a group of images ($I$=14,...,24) in which LED jitter is minimal. The result of a complementary method is shown in Fig.\ref{fig:fig14}. It is a principal component analysis that fits the 1800 raw data points using 72 $\epsilon_k \eta_k$ and 25 $\Delta y_I$ parameters ($a_{\gamma / \alpha}= \epsilon_k \eta_k \times \Delta y_I$ ; <$\epsilon_k \eta_k$>=1). The index $\epsilon_k$=$\pm$1 is introduced to take into account the up/down orientation of the CCD readout within the focal plane. It explains the characteristic data pattern: negative for the first 36 channels and positive for the last 36, or vice versa. The fit values of the vertical displacement $\Delta y_I$ are drawn in green and the residuals of the fit in black.
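The principal component fit described above can be sketched as a rank-one factorization of the slope matrix. In the sketch below, a plain SVD (a hypothetical stand-in for the actual fitting code, with invented numbers) plays the role of the least-squares fit of the 72 channel coefficients and 25 displacements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical layout: 72 channels x 25 images; the correlation slopes
# factorize as a(k, I) = (eps_k * eta_k) * dy_I (rank-one structure).
n_ch, n_img = 72, 25
eps = np.repeat([-1.0, 1.0], n_ch // 2)         # up/down CCD readout orientation
eta = 1.0 + 0.1 * rng.standard_normal(n_ch)     # per-channel sensitivity
dy = 0.02 * rng.standard_normal(n_img)          # LED jitter per image (invented)
dy[-1] = 0.0                                    # reference image: no shift
a = np.outer(eps * eta, dy) + 5e-4 * rng.standard_normal((n_ch, n_img))

# Rank-one fit by SVD: the best least-squares factorization of the
# 1800 slopes into 72 channel coefficients and 25 displacements.
u, s, vt = np.linalg.svd(a, full_matrices=False)
coef, disp = u[:, 0], s[0] * vt[0]
# Fix the scale/sign degeneracy by normalizing <|eps_k eta_k|> = 1.
scale = np.abs(coef).mean()
coef, disp = coef / scale, disp * scale
if np.sign(coef[-1]) != np.sign(eps[-1] * eta[-1]):
    coef, disp = -coef, -disp
resid = a - np.outer(coef, disp)
print(resid.std())   # close to the injected measurement noise
```

The recovered displacements track the injected jitter, and the residual matrix is left with only the uncorrelated measurement noise, mirroring the black points of Fig.\ref{fig:fig14}.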
The calibration of the $\Delta y$ scale was made using Fig.\ref{fig:fig6}, where the displacement of the LED between image $w$ ($I$=2 and $\Delta y$=0.016) and image $u$ ($I$=17 and $\Delta y$=0.0005) is estimated at 1.4 $\mu$. This yields a calibration ratio of 1\% change of the a$_{\gamma/\alpha}$ slope per micron of LED displacement. The distribution of residuals in the inset of Fig.\ref{fig:fig14} displays a 0.05\% Gaussian width, that is, a sensitivity for the LED position given by one channel equal to 0.05 $\mu$ rms. The average sensitivity for all 72 channels is 0.006 $\mu$, that is, a mean angular position of the LED defined at 0.4 nrad. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figure14} \caption{In blue we show the signal of the $\gamma$ vs. $\alpha$ correlation as a function of channel $k$ and image $I$ (noted $\eta_k \Delta y_I$ , with $\Delta y_I$ fixed to 0 for $I$=24). In black we plot the residual of the fit of this signal with one $\eta_k$ parameter per channel and one $\Delta y_I$ per image. Images $I$=2,17,18 were called $w$, $u$, $v$ in the spectral studies of Sect.\ref{sec:2}. $\Delta y$ is calibrated by comparison with $\Delta y_{w\rightarrow u}$. The inset shows a Gaussian fit of the residuals (1 point/channel/image) with a 5$\times$10$^{-4}$ rms, yielding $\Delta y_{k}$ = 0.05 $\mu$ or <$\Delta y_{k}$> = 0.006 $\mu$. The green line indicates the effect of a 1$\mu$ LED drift. \label{fig:fig14}} \end{figure} We conclude this study by observing that we are fortunate to have rather good mechanical stability of the telescope illumination system, because there was no provision for this effect during the construction of Sndice. We have not yet performed a study of the mechanical stability, but we note that the flux ramp run, which lasted two hours, was affected by no $\delta y$ displacement and only one significant $\delta x$ displacement. This is used in the next section to obtain a full flux ramp unaffected by LED jitter.
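The arithmetic behind these sensitivity figures can be retraced numerically. Note that the lever arm converting displacement into angle is our own inference from the quoted 0.006 $\mu$ and 0.4 nrad, not a number stated in the text:

```python
import math

# Retracing the jitter-calibration arithmetic quoted in the text.
slope_per_micron = 0.01        # 1% slope change per micron of LED motion
resid_width = 5e-4             # Gaussian width of the fit residuals (0.05%)

dy_single = resid_width / slope_per_micron      # per-channel sensitivity (mu)
dy_mean = dy_single / math.sqrt(72)             # 72-channel average (mu)
print(dy_single, dy_mean)                       # 0.05 mu and ~0.006 mu

# 0.006 mu defined at 0.4 nrad implies an effective lever arm (inferred):
lever_arm_m = (dy_mean * 1e-6) / 0.4e-9
print(lever_arm_m)                              # of order 15 m
```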
\subsection{Determining the fluxes for a sequence of images} \label{sec:55} The integrated flux $\Phi_I$ emitted by an LED is a product of LED current, exposure time, and temperature terms (cf. Eq. (\ref{eq:2})). It is measured by the LED electronics. In Fig.\ref{fig:fig17} we represent the trend due to the linear temperature variation of 1$\ensuremath{^\circ}$C and 2$\ensuremath{^\circ}$C per hour (caused by the warming of the CFHT dome after dawn). Under these conditions, the test-bench calibration of Sndice tells us that the precision is limited to a few 10$^{-4}$ and could be improved to a few 10$^{-5}$ by monitoring the LED current and the temperature (\citealt{ref11}, §4)\footnote{There was no monitoring of the LED current and temperature during Megacam data taking.}. In the constant-level run the exposure time was kept constant by means of an electronic LED shutter defined at a 0.3 $\mu$s time resolution. In the flux ramp the exposure time was varied using the Megacam shutter (1 ms resolution). Alternatively, we measured the mean flux absorbed in the CCDs. The linear fit of Eq. (\ref{eq:13}) yields for each channel $k$ a constant term $\Psi_{k,cur}/\Psi_{k,ref}$ and a slope term $a_{y/x}$, with statistical precisions of 4$\times$10$^{-6}$ and 1.7$\times$10$^{-5}$/(0.1$\times\Psi_{k,ref}$), respectively. LED jitter has no effect on the constant term of the fit. The large error bars seen in Fig.\ref{fig:fig17}-b show the spread of the gain fluctuations in the 72-channel data. The mitigation method described in Sect.\ref{sec:41} reduces the gain fluctuations ($\delta g_{k,cur/ref}$ $\approx$1.5$\times$10$^{-3}$ rms) and yields an average FEG flux ratio (black points), \begin{equation} \Psi_{cur}/\Psi_{ref} =\langle \Psi_{k,cur}/\Psi_{k,ref}\rangle_k = (1+ \langle \delta g_{k,cur/ref}\rangle_k) \times \Phi_{cur}/\Phi_{ref} \quad\quad . \label{eq:16} \end{equation} The deviation from the linear trend is 1.8$\times$10$^{-4}$ rms.
It is compatible with the averaging of 72 channels ($\delta g$/$\sqrt{72}$ =1.5$\times$10$^{-3}$/$\sqrt{72}$). Thermal fluctuations of the LED, around 0.1$\ensuremath{^\circ}$C per minute, have comparable effects. \begin{figure} \subfigure{\includegraphics[width=0.8\linewidth]{figure17_a2}} \subfigure{\includegraphics[width=0.8\linewidth]{figure17_b}} \caption{Effect of LED temperature on light flux (warming of the CFHT dome at dawn): {\bf a)} Variable exposure (1 s $<\Delta t<$ 8 s) {\bf b)} Constant exposure ($\Delta t$=8 s): two independent estimators <$\alpha_k$>$\approx$16000 adu (black points) and <$\Psi\sigma_\delta$>$\approx$100 adu (red points) agree within 0.8$\times$10$^{-4}$ rms. Deviation from linearity is 1.8$\times$10$^{-4}$ rms. The precision on <$\Psi\sigma_\delta$> is $\approx$0.008 adu, i.e., 0.6$\times$10$^{-6}$. Error bars cover the gain spread before averaging. The two other estimators <$\Psi\sigma_\beta$> and <$\Psi\sigma_\gamma$> (green and blue) are more sensitive to LED jitter. \label{fig:fig17}} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figure15} \caption{Setting up the relative fluxes within a sequence of 70 images: $\bf 1)$ Three sequences, covered by horizontal lines, are built around three reference images (black, blue, and brown arrows). Gain$\times$flux ratios are determined as in Fig. \ref{fig:fig10} ($I_{cur}$=9, $I_{ref}$=11). $\bf 2)$ The reference image of sequence 2 is measured relative to reference 1 and reference 3 relative to reference 2. By transitivity all fluxes are related. $\bf 3)$ The two overlap regions, 1 over 2 and 2 over 3, yield a set of double determinations. The relative fluxes of all 70 images agree within 3$\times$10$^{-5}$ rms. They give the relations flux vs. exposure time (shaded area: 28 images at a common LED current) and flux vs. LED current (constant exposure time). The only significant LED jitter occurs between images 55 and 56.
\label{fig:fig15}} \end{figure} For the variable exposure run, each of the 70 images in Fig. \ref{fig:fig15} yields a point representing $\Psi_{cur}/\Psi_{ref}$, the ratio of its averaged FEG flux to that of a reference image. Integrated fluxes are varied by two different means: \uline{exposure time} using shutter speed (magenta shade) or \uline{LED current} (plain). As a precaution, because the long periods at low flux destroy the continuity of high-precision data, we took three reference images marked by vertical arrows (one for each peak of flux). To reconnect the results based on different references, we measured the relation between each pair of reference images and checked the transitivity of the flux ratio measurements. The relative flux precision that we obtained at the highest flux is $\approx$3$\times$10$^{-5}$ rms. The mechanical shutter yields the error bars ($\delta$($\Delta$t)= 1 ms) seen in Fig. \ref{fig:fig17}-a. Clearly, the electronic shutter is preferred. The <$\alpha$> flux ratio estimator $\Psi_{cur}/\Psi_{ref}$ measured so far reaches a precision of around 10$^{-4}$ after mitigation of the electronics errors. It is essentially a measurement of the flux of specular light. Four other measurements, $\Psi_{k,I}(\sigma_{\alpha}, \sigma_{\beta}, \sigma_{\gamma}, \sigma_{\delta})$, the square roots of the covariances in Eq. (\ref{eq:9}), yield four completely independent estimates of the flux based on the diffused light ($\approx$10\% of the specular light). The application of these covariance estimators provides a positive test of the WP model with the spectacular precision shown in Fig. \ref{fig:fig17}-b. The $\Psi_{k,I}\sigma_{\delta}$ estimate yields the red points superimposed on the black ones. There is a 0.8$\times$10$^{-4}$ rms agreement between the two types of estimators. The agreement is better than the 1.8$\times$10$^{-4}$ precision resulting from averaging the gains in Eq. (\ref{eq:16}).
This is explained by considering that both types of estimators are based on the same FEG scale and not on the real flux scale. The 0.8$\times$10$^{-4}$ precision on $\sigma_{\delta}$ corresponds to a 0.008 adu precision on the pixel counts. This result, relative to the average pixel content of 16000 adu, entails a remarkable precision on the WP Hessian signal width $\Psi\sigma_{\delta}$ of 0.5$\times$10$^{-6}$ rms, almost at the statistical precision limit of the 10$^{13}$ photons. The two other quantities shown in the figure, $\Psi \sigma_{\beta}$ and $\Psi\sigma_{\gamma}$, yield similar results, but the analysis is complicated by the introduction of the LED jitter noise, which adds quadratically to the WP signal dispersion. The limiting precision for the WP estimators is set by the photon noise. The filtering of the low spatial frequencies suppresses the effect of electronic bugs. \subsection{$\alpha$, $\beta$, $\gamma$, and $\delta$ noise estimators} \label{sec:56} The raw variance of a filter content in Eq. (\ref{eq:12}) is the sum of the WP variance $\Psi_{k,I}\sigma_{\delta}$ and the noise variance $\varsigma_{\delta}$. Figure \ref{fig:fig19_a} reproduces the variances of three filters in relative form (divided by the FEG flux <$\alpha$>). This figure is one half of the consistency check of our model. It shows over a very broad dynamic range that the interference pattern is proportional to the flux (because it is defined by the probability density of the wave packet). The second half of the demonstration is contained in the analysis of the noise variance as a function of flux (Fig.\ref{fig:fig21}), because it demonstrates that in most of the range the noise is dominated by the photon statistics. Without too many technical details, we report that we applied the slicing method and the fits shown in Fig.\ref{fig:fig11} and Fig.\ref{fig:fig12} to the distributions ($\delta_{k,cur}$, $\alpha_{ref}'$) and ($\Delta\delta_{k,cur}$, $\alpha_{ref}'$).
The variances of $\delta_{k,cur}$ and $\Delta\delta_{k,cur}$ (Eq. \ref{eq:17}) yield the estimates for the raw WP signal and for the pure noise, respectively. Both are shown in Fig.\ref{fig:fig19_a} for three filters. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{figure19_a} \caption{ Pure noise $\varsigma_{\delta}$ is extracted by subtracting the raw reference image (with its relative weight) from all other raw images. Then pure WP signals $\Psi \sigma_\beta$, $\Psi \sigma_\gamma$, and $\Psi \sigma_\delta$ are extracted from the raw images by subtracting the noise $\varsigma_{\delta}$. Check: $\sigma_\beta, \sigma_\gamma$, and $\sigma_\delta$ are constant and $\sigma_\beta =\sigma_\gamma$ (superimposed). \label{fig:fig19_a}} \end{figure} We used the constant-level run as a benchmark for the high-precision noise estimators. A good summary of the study is seen in Fig. \ref{fig:fig18}. It represents two versions of the same four ($\alpha$, $\beta$, $\gamma$, and $\delta$) noise variance estimators. For the upper one (Fig.\ref{fig:fig18}-a), mitigation yields only one average adu count per image per filter, proportional to the average adu count of the reference image. A global LED jitter noise correction was applied using the parameter $\Delta y$ from Fig. \ref{fig:fig14} (open circles before and full circles after correction). In the lower one (Fig.\ref{fig:fig18}-b), there are 72 data points per image (one relative noise per channel). Relative noise is not affected by gain fluctuations (they cancel between the numerator and the denominator). Data are represented in the plot by their mean and rms. The precision is sufficient to fit the flux dependence of the noise (proportional to 1/$\sqrt{\Phi_I}$). The comparison of the four filters after LED jitter correction gives the size of the correlated fluctuations among four neighboring pixels. Fully correlated fluctuations such as pedestal or gain fluctuations are seen by the $\varsigma_{\alpha}$ variable.
Their effect is in a 0.5-3 adu range. The line-to-line (or column-to-column) fluctuations sensed by $\varsigma_{\beta}$ (or $\varsigma_{\gamma}$) yield a 0.25 adu effect. The fourth variable $\varsigma_{\delta}$ serves as a pure sample of uncorrelated noise to be used for a fine study of the photon noise over the whole flux range covered by the 70-image flux ramp. \begin{figure} \subfigure{\includegraphics[width=0.9\linewidth]{figure18_a}} \subfigure{\includegraphics[width=0.9\linewidth]{figure18_b}} \caption{{\bf a)} Fluctuations $\varsigma_{\alpha,...,\delta}$ (rms) of $\alpha,...,\delta$ between the last image $I$=25 and any other one $I$=1,...,24 in temporal order (72-channel average). Open circles represent raw data and full circles data corrected for LED jitter. Variable $\alpha$ senses all noise sources; $\beta$ ($\gamma$) suppresses the line (column) correlated electronic noise and the LED jitter along the x (y) axis; $\delta$ suppresses all correlated noises and LED jitter. {\bf b)} Relative fluctuations 2$\times$<$\varsigma_{\beta(,\gamma,\delta)} / \alpha$> are compared to the prediction. The continuous line representing the prediction $\propto$1/$\sqrt{\Phi(t)}$ uses the flux $\Phi$(t) drawn in Fig. \ref{fig:fig17}-b. Error bars are given by the $\Psi_k$ dispersion (k=1,...,72). The individual channel precision is 5$\times$10$^{-6}$; the 72-channel average precision is 0.7$\times$10$^{-6}$. \label{fig:fig18}} \end{figure} The three uncorrelated random processes affecting the WP signal have different flux dependences: the pedestal noise is constant, the gain noise is proportional to the flux, and the photon noise to the square root of the flux. The variances were added to constitute what is classically called the statistical factor (Eq. (\ref{eq:19})).
We could take the LED jitter variance into account in the statistical factor, but we do not need to because the flux ramp is divided into two sequences with no internal LED jitter, \begin{equation} \begin{split} S_k(\Phi) = &\; (\varsigma_k(\Phi)/\Phi)^2 = (A_k + B_k\Phi + C_k \Phi^2)/ \Phi^2\\ S_k(\Xi) = &\; C_k + B_k \Xi + A_{k}\Xi^2 \qquad \Xi = \Phi_{ref} / \Phi \quad\quad . \end{split} \label{eq:19} \end{equation} The link between the variance of the noise variable $\Delta \delta_{k,I}$ in Eq. (\ref{eq:17}) and the statistical factor has been given in Eq. (\ref{eq:18}), which sums the statistical factors of the current image and the reference image. This eliminates not only the WP signal, but also the gain fluctuations of both the current and reference images, which are the root of our electronic problems. The change of variable from the flux $\Phi$ to its inverse $\Xi=\Phi_{ref}/\Phi$ in Eq. (\ref{eq:19}) transforms S($\Phi$) into a second-degree polynomial in $\Xi$. The photo-electron noise is in the $B_k\Xi$ term and the pedestal noise in $A_{k}\Xi^2$. Figure \ref{fig:fig21}-b shows one of the 72 curves representing the S($\Xi_{k,I}$) vs. $\Xi_{k,I}$ data and their fit by a second-degree polynomial over a flux range of two orders of magnitude. The second-degree term is visible only when extending the flux range down, from hundreds to tens of photo-electrons per pixel. For each point of a S$_k(\Xi)$ curve, a gain fluctuation does not alter the ordinate S$_k(\Xi_I)$ but shifts the abscissa $\Xi_I$. The shift of $\Xi_I$ from $\Psi_{ref}$/$\Psi_I$ to $\Phi_{ref}$/$\Phi_I$ is common to all 72 channels of an image. Figure \ref{fig:fig20}-a displays the residuals of the 72 linear fits of S$_k(\Psi_I)$ vs. $\Psi_I$, which contain the common-mode effect of the $\Phi_I - \Psi_I$ shift in addition to the random noise. This effect is statistically significant; therefore, we corrected for it. The correction reduces the dispersion of residuals seen in Fig.
\ref{fig:fig21}-a by a factor of two for all channels. This amounts to replacing the FEG flux $\Psi_I$ by an FE flux $\Phi_I$ (or to correcting the fluctuation of the average gain, assuming that the average efficiency is constant). Figure \ref{fig:fig20}-a represents a continuous drift of the average gain with time, independently of the flux. The overall distribution of final residuals, shown in Fig. \ref{fig:fig21}-a, is an unbiased Gaussian with a 2.7$\times$10$^{-8}$ rms. For channel 72, whose S($\Xi$) vs. $\Xi$ fit is given in Fig. \ref{fig:fig21}-b, this entails a 0.30\% Gaussian width of $\Delta S_k/S_k$. Using the S=($\varsigma_{\delta}/\alpha$)$^2$ relation, $\Delta\varsigma_{\delta}/\alpha$ =0.5$\times$($\Delta$S/S)$\times \varsigma_{\delta}/\alpha$ = 4.5$\times$10$^{-6}$ rms (at the reference flux $\Xi$=1).\footnote{Another way to quote the precision is $\Delta\varsigma_{\delta}/\varsigma_{\delta}$= 0.5$\times$($\Delta$S/S)= 0.15\%.} This is the number expected from pure photo-electron statistical noise, which proves that there is no other unknown or uncorrected systematic fluctuation in the CCD measurement. In addition to photon noise, our random noise model in Eq. (\ref{eq:19}) contains the two auxiliary terms $C_k$ and $A_k$. In our noise estimator of Eq. (\ref{eq:18}), $S_k(\Phi_{ref})$ is a constant added to $C_k$. In practice, we fit a second-degree polynomial $P_2(\Xi)$ to the data and evaluate it at $\Xi$=1. This yielded $P_2$(1)=2$\times S_k(\Phi_{ref})$. The constant $S_k(\Phi_{ref})$ =$P_2$(1)/2 was subtracted from the data. Then we repeated the fit on these reduced data. The new constant term is the real $C_k$ seen in Fig. \ref{fig:fig21}-b. For all channels $C_k$ is null to a good approximation: there is no need to envisage a noise component other than photon statistics in the wide range above 2000 adu. The coefficient $A_k$ includes the Johnson noise of the amplifier and fluctuations of the pedestal, reaching a few adu. We see in Fig.
\ref{fig:fig20}-b that, first, in the 2000-20000 adu range (above the yellow line), the $A_k$ term is too small compared to the signal to be sensed and a linear fit is perfect; then, in the 200-2000 adu range (inside rectangles), $A_k$ is needed and the dispersion of the residuals (error bars) increases as a result of the pedestal fluctuations; finally, in the range below 200 adu (not sampled in the ramp), pedestals should be processed differently. In the present section we emphasized the importance of an accurate $\Phi$ scale for the photon noise ramp. Previously, in Sect.\ref{sec:55}, we developed an accurate $\Psi$ scale needed for the WP signal ramp. This does not set the two scales on the same footing. The $\Phi$ scale is already at its theoretical precision limit of 1.8$\times$10$^{-4}$ because of the photon statistics, while the $\Psi$ scale precision is limited by the poor stability of the electronics, also at 1.8$\times$10$^{-4}$. The $\Psi$ scale could be improved by two orders of magnitude by using high-precision electronics, to reach its photon statistical limit, and both scales could be reconciled at a common value better than 10$^{-5}$. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figure20} \caption{{\bf a)} The bias <$\delta S_k(\Xi_I)$>$_k$ (in blue) drifts continuously with time. It equally affects all 72 residuals of the fit of S($\Psi$) vs. $\Psi$. When corrected, the width of the residual distribution is reduced to its photon statistics value (Fig. \ref{fig:fig21}-a). This correction is equivalent to a modification of the flux scale $\Psi_I$ $\rightarrow$ $\Phi_I$. {\bf b)} The flux ramp sequence $\Psi_I$ (red), taken from Fig. \ref{fig:fig15}, is correlated with the dispersion of biases (error bars in Fig. \ref{fig:fig20}-a). For $\Psi_I$ <2000 adu (black boxes), the dispersion of the pedestals dominates the dispersion of the photon number.
\label{fig:fig20}} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figure21} \caption{{\bf a)} Residuals of the $S_k(\Xi_I)$ vs. $\Xi_I$ fits (k=1,...,72; I=1,...,70). In black we plot the data and a Gaussian fit for $\Psi_I$>2000 adu, with a 2.7$\times$10$^{-8}$ rms width; in blue we show the complete data. {\bf b)} Extrapolation to infinite flux for any channel k yields a negligible value of C$_k$=S$_k$(0). \label{fig:fig21}} \end{figure} \section{Conclusions and perspectives} \label{sec:conclusion} We have shown in this paper that there is no limit to photometry based on the Megacam camera other than the statistical fluctuations of the photon count in any CCD area, set in our case below 1 ppm per exposure for the whole image. The proof, using the properties of some difference operators applied to the photon field, is indirect because of the defects of the electronics. The single photo-electron response is calibrated by statistics, and the integrated flux is measured for each exposure using the total response of the whole detector or a part of it. We might call this type of photometry self-consistent or self-calibrated. If the LED flux or the CCD flux is deduced from electronic readings alone, the precision is limited by the stability and the calibration of the electronics. After optimizing the electronics, this precision limit should be around 20-30 ppm. In practice, with either Sndice or Megacam electronics, the current precision is degraded to 100-200 ppm. We showed how the electronics might be optimized to suppress these practical limitations. We call this type of photometry electrically calibrated. The precision of an optimized electric calibration could be maintained for years. It surpasses the best photometric results obtained using stable stellar sources.
Electric calibration allows comparing different exposures of a varying light source or monitoring the evolution of a detector with a constant LED source, while self-calibrated fluxes are used to compare even more precisely two sources with a common detector or two detectors with a common source (e.g., for calibration transfer). Based on these examples, we can build a large set of direct illumination calibration applications. A third type of calibration, the absolute calibration, is done in SNDICE by a common-practice method using a NIST-calibrated photodiode on a test bench. In this case, we would speak of accuracy instead of precision. We did not discuss accuracy because the absolute calibration procedure mentioned above is not sufficiently reliable. First, it considers neither the angular dependence and map of the detector quantum efficiencies nor the emission pattern of the light sources. Second, in the light of our study of cooled large-area photodiodes for other Sndice publications, we cannot take for granted the accuracy of the photovoltaic quantum efficiency used by NIST as the reference photodiode yield. At this point, we could have concluded the review of the instrumental results obtained from our Sndice-on-Megacam data by observing that we had reached a level of precision better by two orders of magnitude than the best astronomical photometry, and that we measure the effect of diffuse reflection on the mirror, which is not seen with the same precision by any other means. However, it was more fruitful to take a new point of view: to consider the interference patterns that we called the WP signal as a signal to be studied instead of a noise to be hidden (as calibration systems using incoherent extended illumination do). The WP signal represents 10\% of the flux seen in Megacam. It is stable and we measure it at an overall 10$^{-5}$ precision level. Classical signal processing methods have been applied to the WP signal.
They produced simple and useful results. In frequency space, the WP spectrum separates the specular and diffuse components of light. It measures the effect of the photon propagator in free space and yields optical surface quality estimators. It also maps the defects of the optical surfaces individually. Compared with holographic or phase-contrast systems of similar abilities, we obtained a large-scale, high-performance system for free, already built into the camera. In direct space, we have used and developed pixel difference operators (PDE) with extremely useful results. In particular, the extraction of the photon statistical noise is performed by four independent operators that first define the angular position of an LED in the telescope frame at a 0.4 nrad rms precision and then yield four noise estimators at better than 1 ppm. The perspective of entering a new territory of high-precision photometry should be seriously considered. The first steps might easily be to build dedicated photometric systems and to improve commercial components such as LEDs for our purposes. \begin{acknowledgements} This work incorporates a number of important contributions to the subject that must be quoted and for which I express all my gratitude. First of all is the Megacam camera, including its E2V CCDs, which represents an epochal realization. I am grateful to Pierre Borgeaud, who introduced us to the Megacam electronics and helped start the electronic developments described in Claire Juramy's thesis. In this thesis we also found the first proposal of a direct illumination calibration of a telescope with LEDs, which we developed together in a later article. Reynald Pain together with the CFHT scientific and technical team transformed this concept into the SNDICE project. Kyan Schahmaneche led the realization of Sndice and its associated calibration bench in record time, as well as its installation in Hawaii. Kyan and Augustin Guyonnet performed the calibration of Sndice on the bench.
Augustin's thesis started the study of the high-precision methods presented here. Sndice calibration tools were developed and advanced in an article by Nicolas Regnault. He applied these tools with Marc Betoule to the calibration of the SNLS experiment. Last but not least, we benefited from illuminating discussions with Pierre Astier. \end{acknowledgements} \def{\em A\&A}{{\em A\&A}} \defApJ{ApJ} \defApJ Lett.{ApJ Lett.} \defApJ Supp.{ApJ Supp.} \defAJ{AJ} \defPhys. Rev. D{Phys. Rev. D} \bibliographystyle{aa}
\section{Introduction} \label{Sec:1} The behaviour of individuals or groups of people, analyzed by means of mathematical physics and appended by empirical data (observations), is attracting researchers working in various fields of natural science: mathematics, biology, sociology, economics, etc., see e.g. \cite{Buchanan, Barabasi, Lovasz, Galam, Perc, Perc1}. A branch related to sociology is called social physics or sociophysics \cite{Sociophysics, Sociology}. It investigates various phenomena by analogy with astronomical, physical, chemical, and physiological phenomena. Social physics also refers to using big-data analysis and mathematical laws to understand collective effects of human crowds. The basic idea is that data about human activity contain mathematical patterns that are characteristic of social interactions as well. The application of methods of mathematical physics in chemistry, biology, physiology, sociology and related areas is not new. In the present paper I try to advance further by including the innermost motivations of the behaviour of individual persons and/or groups of persons. I adhere to the philosophical principle of dualism, by which life is based on a balance between two competing components, "good" (love, interest, sympathy, empathy, etc.) and "bad" (disgust, hate, antipathy, evil, etc.). Alternatives to dualism are monism, by which only a single category exists, or pluralism, implying a multitude of categories in nature. I choose dualism (Sec. \ref{Sec:Conjugal}) for its universality, simplicity and affinity to the basic laws of physics. The idea that mental and bodily events are coordinated, without causal interaction between them, is known as {\it philosophical parallelism}. It assumes correlation of mental and bodily events, but denies any direct causal relation between mind and body. Accordingly, mental and bodily phenomena are independent, yet inseparable.
Psychophysical parallelism \cite{Chisholm} is a third possible alternative regarding the relation between mind and body, between interaction (e.g., dualism) and one-sided action (e.g., materialism, epiphenomenalism). It is a theory related to dualism, suggesting that although there is correlation between mental and physical events, there is no causal connection. The definition of, and relation between, {\it body and soul} is a delicate issue, not easy to fix rigorously or unambiguously. In any case, I believe that part of the universe, namely its materialistic component, may be conceived, or at least approached, by means of mathematical physics, while the other, spiritual one belongs to theology. I do not enter that area; instead, I try to approach the interface region between the two by using general and flexible physical laws and models of interaction. These models are based on attraction and repulsion, with free parameters adjusted to empirical data, intuition and guesses suggested by great artworks. By the latter I mean the treasury of world art, accumulated in the works of philosophers, artists, writers, poets and musicians. Great artists were able to unveil the past and intuitively reveal the future. A typical problem in specifying the border and transition region between matter and spirit is in understanding emotions (feelings). Physiologists have attempted to find the origin and nature of feeling in experiments with animals. Popular are discussions connected with pleasure and searches for the relevant centres in animals and human beings. Brain stimulation reward (BSR), discovered by James Olds and Peter Milner \cite{Olds}, is a pleasurable phenomenon generated by stimulation of specific brain regions. Profound observations of animals' internal world ({\it ethology}) and mysterious telepathy phenomena can be found in Refs. \cite{Lorenz1, Lorenz2}. Human souls are battlefields between body (physiology) and spirit (divine), the subject of literature masterpieces such as L.
Tolstoy's "Father Sergius" \cite{Tolstoy}. I will not penetrate the "other side"; instead, I try to model the behaviour of human beings based on observations, literature, intuition and common sense. I try to demonstrate that human relations may follow simple laws of attraction and repulsion, appended by observations (empirics). At this point it is appropriate to cite Immanuel Kant \cite{Kant}: "Two things fill the mind with ever new and increasing admiration and awe, the more often and steadily we reflect upon them: the starry heavens above me and the moral law within me. I do not seek or conjecture either of them as if they were veiled obscurities or extravagances beyond the horizon of my vision; I see them before me and connect them immediately with the consciousness of my existence." The aim of the present paper is the animation of world lines and social networks. By "animation" I mean inspiration, i.e. endowing familiar mechanical, statistical or mathematical constructions (world lines and social networks) with footprints of spirit, absent from machines but present in human behaviour. The paper is organized as follows: in Sec. \ref{Sec:Wlines} I introduce world lines, to be combined with networks in Sec. \ref{Sec:WL_Net}. In Sec. \ref{Sec:Binary} I overview various binary systems and their interrelation, with Sec. \ref{Sec:Interrelate} preparing the ground for the central part of the paper, Sec. \ref{Sec:Conjugal}, modelling the dynamics of particular binary systems, namely those of married couples. \section{Worldlines} \label{Sec:Wlines} Worldlines (WL) \cite{WL1}, also called time-space geography, are three-dimensional plots in which two dimensions are spatial (we use flat, Euclidean space), the third coordinate being time. The existence of the fourth, time dimension, apart from the spatial ones, was intuited, well before Lorentz, Poincar\'e and Minkowski, by the great Greeks, followed by Spinoza, Kant, I. Newton and, last but not least, Herbert Wells in his {\it Time Machine}.
Less familiar, if not unknown, are the related ideas of the Russian philosopher Aksionov, available in Refs. \cite{Aksionov, UFN}. When Newton formulated his theory of gravity, he assumed time to be linear, with an unchangeable rate of flow. He assumed space to be absolute, unchanging and Euclidean: `the divine sensorium' \cite{Barrow}. \begin{figure}[H] \centering \includegraphics[scale=0.7]{WL00.pdf} \caption{World lines and branes: from the WL of a point-like object (leftmost), via a two-dimensional sheet (a propagating rubber band, {\it i.e.} a string), to a multi-dimensional brane.} \label{Fig:WL1} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=1.0]{WLine2.pdf} \caption{Worldlines (WL). Leftmost is a dimensionless line; crossing lines (next to the right) may or may not interact. The rightmost object, emerging from the merger of two structureless lines, has finite dimensions; it is called a strip (rubber band), tube or brane.} \label{Fig:WL} \end{figure} Time geography, or time-space geography, is an interdisciplinary merger of spatial and temporal events. Time geography is a framework and visual language in which space and time are basic dimensions for analysing dynamic processes. Time geography was originally developed by geographers, but now it is used in various fields including anthropology, environmental science etc. Since the 1980s, time geography has also been used by researchers in the biological and social sciences, and in interdisciplinary fields. Benjamin Bach and his colleagues \cite{Bach} generalized the space-time cube into a framework for temporal data visualization applicable to all data that can be represented in two spatial dimensions plus time. Worldlines are perfect tools to illustrate biographies \cite{Gamow, Jenk}. Constructing and drawing the worldlines of known persons, based on documents and/or (auto)biographies, combined with their genealogical lines, is an amusing and useful exercise.
\begin{figure}[H] \centering \includegraphics[scale=0.8]{Wline2+.pdf} \caption{Worldlines appended by elements of a "marriage network" (right margin, to be expanded in Sec. \ref{Sec:Hall}). Such worldlines may also evolve into genealogical graphs, alluded to at the bottom and top of the present figure.} \label{Fig:Marriage} \end{figure} A historical example with simplified (straightened) worldlines (WL) of four actors (EBGL) is shown in Fig. \ref{Fig:BGL}, Ref. \cite{Jenk}. \begin{figure}[H] \centering \includegraphics[scale=1.4]{World_lines.pdf} \caption{Straightened WLs of Euclid, J\'anos Bolyai, C.F. Gau{\ss} and N.I. Lobachevsky. The three geniuses who discovered the new, non-Euclidean world, living on the same continent, nearly at the same time, never met \cite{Jenk}.} \label{Fig:BGL} \end{figure} Worldlines (time geography) may be useful to visualise events or, extended/appended by social networks, in analysing data (e.g. in history) and in making predictions (e.g. in sociology) by extrapolation. WLs themselves offer an infinity of options and applications, e.g.: 1) replacing lines with extended objects -- strips, sheets, bands, tubes, branes (NB: a brane is a physical object that generalizes the notion of a point particle to higher dimensions); 2) increasing the number of lines/tubes, up to a continuum (a merger of WLs). With the advent of computational and storage capacities, infinitely large manifolds of WLs, coming close and interacting multiply, evolving towards a continuous, three-dimensional bulk of world history, parametrized by relevant computer codes, may be realized in the near future. A particular extension of world lines are the phase and configuration spaces. In classical mechanics, {\it phase space} is the space of all possible states of a system.
Recall that the state of a mechanical system is determined by the positions $q$ and momenta $p$ of its constituents, where $q$ and $p$ determine the further evolution of the system at any time, provided the laws governing the motion of these objects are known. In configuration space, the parameters that define the configuration of the system are called generalized coordinates, and the vector space defined by these coordinates is called the configuration space. For example, the position of a single particle moving in ordinary Euclidean three-dimensional space is defined as $q=(x,y,z)$ and its configuration space is $R^3$. A particle may be constrained to move within a specified manifold. For example, if the particle is constrained by a rigid bond, free to swing about the origin, it is constrained to lie on a sphere. For $n$ disconnected, non-interacting point-like particles, the configuration space is $R^{3n}$. A word of warning: in the life sciences/sociophysics, the mechanical momentum should be replaced by a relevant variable. In this sense, the predictive power of the phase or configuration space is the same as that of a world line, but it is a convenient and powerful technical tool, especially in the case of a large number of objects. The formalism of the phase and configuration spaces offers huge perspectives in social science studies, provided we know the laws governing human beings or societies. {\it A priori} we do not. In the present paper I try to guess and model these laws and regularities, to be verified empirically. Worldlines and social networks are different. While WLs evolve in time along certain trajectories, as shown {\it e.g.} in Figs. \ref{Fig:WL1}--\ref{Fig:BGL}, networks are static. Time dependence, and more details, such as the "price" of a vertex {\it etc.}, may be introduced in networks. If so, the time dependence becomes hierarchic: "internal" within the network and "external" along the WL, as in Sec. \ref{Sec:Conjugal} and Fig.
\ref{Fig:Embed}, obeying statistics and topology. Still, worldlines and networks have some common features. With genealogy trees included, WLs acquire many features of a social network (mind the arrow of time!), hence one may look for a particular {\it duality} by interchanging time (vertical orientation) and the (horizontal) spatial coordinate ("interaction range"), remembering, however, the uniqueness of the time arrow. In perspective, worldlines may play an important role in descriptive history. By this I mean a detailed panoramic view of the evolution of mankind, including the WLs of individuals and groups, societies etc., as well as their interactions/intersections at various levels and in various forms. The realization of such a huge "bank of world lines" was technically inconceivable in the past, but now, with the advent of huge computation and storage capacities, it may be realized! In the present paper I consider the WLs of a single person or a couple -- the building block of our societies. This will be useful when generalized to more complex systems, their interactions, collective effects etc., all now realizable. \section{Worldlines and networks, arrow of time} \label{Sec:WL_Net} Networks \cite{Neural, Konig, Net} are studied and used in mathematics, computer science, geography and other fields of science. Random networks were proposed by Erd\H{o}s and R\'enyi \cite{Renyi} at the end of the 1950s. The interest was renewed and reinforced after the discovery by Albert and Barab\'asi \cite{Barabasi} of strong heterogeneities.
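The Erd\H{o}s--R\'enyi construction just mentioned is easy to sketch numerically. The following toy implementation is mine, not taken from the cited works: each of the $n(n-1)/2$ possible undirected edges is drawn independently with probability $p$.

```python
import random

def erdos_renyi(n, p, seed=None):
    """Draw a G(n, p) random graph: each of the n*(n-1)/2 possible
    undirected edges is included independently with probability p."""
    rng = random.Random(seed)
    return {(i, j)
            for i in range(n)
            for j in range(i + 1, n)
            if rng.random() < p}

def degree_sequence(n, edges):
    """Degrees of all n nodes; in G(n, p) the mean degree is (n-1)p."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return deg

# Limiting cases: p = 1 gives the complete graph, p = 0 the empty one.
assert len(erdos_renyi(5, 1.0)) == 10
assert erdos_renyi(5, 0.0) == set()
```

The degrees of such a graph fluctuate around $(n-1)p$ with a binomial distribution; the scale-free (Barab\'asi--Albert) networks discussed below differ precisely in that their degree distribution instead develops heavy-tailed hubs.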
\begin{figure}[H] \centering \includegraphics[scale=1.0]{Net_simple.pdf} \caption{The simplest, primitive net: a binary system (see Sec.~\ref{Sec:Conjugal}).} \label{fig:Pol} \end{figure} Network science also studies complex networks such as telecommunication networks, computer networks, biological networks, cognitive and semantic networks, and social networks, considering distinct elements or actors represented by nodes (vertices) and the connections between the elements as links (edges), see {\it e.g.} \cite{Net} and references therein. The field exploits graph theory from mathematics, statistical mechanics from physics, data mining and information visualization from computer science, and social structure from sociology. D\'enes K\"onig \cite{Konig} was among the pioneers. Probabilistic theory in network science was developed in Paul Erd\H{o}s and Alfr\'ed R\'enyi's papers on random graphs \cite{Renyi}. Albert-L\'aszl\'o Barab\'asi and R\'eka Albert \cite{Barabasi} developed the scale-free network, a loosely defined network topology that contains hub vertices with many connections, which grow in such a way as to maintain a constant ratio between their number of connections and that of all other nodes. Network models serve as a tool for understanding interactions within empirical complex networks. The Erd\H{o}s--R\'enyi model \cite{Renyi} is used for generating random graphs in which edges are set between nodes with equal probabilities. It can be used in the probabilistic method to prove the existence of graphs satisfying various properties. The Barab\'asi--Albert (BA) model \cite{Barabasi} is a random network model used to demonstrate preferential attachment, or a "rich-get-richer" effect (see also \cite{Buchanan}). A successful social network model is that of Galam \cite{Galam}, whose work focuses on the dynamics of group decision making and how minority opinions can influence public opinion. Bipolarity ({\it e.g.} Western "democracy" vs.
Eastern "administrative command system") has a parallel with the title of the present paper: while {\it networks} correspond to democracy, {\it worldlines} are hierarchic. The funding of science is an example: the centralized, vertical, hierarchic funding typical of the ex-Soviet Union is opposite to the horizontal system of grants, based on an unbiased peer-referee system (a network!), provided it is free of corruption and conflicts (coincidence, correlation) of interests. Networks are widely used in estimating citation indices, which have become important in scientometrics for deciding on the financial support of a researcher, group or institution. A citation network is a kind of social network that can be represented as a directed graph with nodes representing papers $\{P_1,\dots,P_n\}$ and edges $e(P_i, P_j)$ between two nodes $P_i$ and $P_j$ denoting a citation relationship \cite{Perc1}, when the paper $P_i$ cites the paper $P_j$. The number of citations of scientific articles is becoming one of the most important measures of scientific impact and quality. Hence, some authors try to obtain as many citations as possible for their works by creating corrupt citation cartels, whose members cite each other in order to increase their own numbers of citations. Besides the structure of interactions within networks, of interest is also the interaction between networks. This aspect was studied in Ref. \cite{Perc} and in papers quoted therein. As repeatedly emphasized, we combine ascending (arrow of time!) WLs with horizontal networks. Symmetries with respect to space, $P$, time, $T$, and charge conjugation, $C$, and their combinations play an important role in the microworld. In a fantastic scenario, such symmetries may be used in sociophysics as well. A simple example of such a synthesis is shown in Fig. \ref{Fig:Marriage}, where the horizontal lines (marriage) on the right margin are hinted at. They will be discussed in more detail in Sec. \ref{Sec:Hall}, referring to Hall's marriage theorem.
{\it Music} is a symbiosis of horizontal networks and vertical world lines (evolution with time): while harmony (key, chord, orchestration) is horizontal, melody and rhythm correspond to vertical evolution along the time arrow. The merger of the two produces a {\it symphony}. Plastic art, painting, architecture and photo art are "frozen music". \section{Binary systems (dyads, dipoles, bipartite graphs)} \label{Sec:Binary} In this Section I specify the notion of binary systems. They may involve individuals, families, companies, countries, nations. Binary systems form the basis of further generalizations to "many-body" systems, big numbers, collective phenomena, etc. With dynamical equilibrium in mind, I rely on models of binary interactions known in physics. In the microworld ({\it e.g.} in the "standard theory" of the basic, {\it i.e.} electroweak and strong, interactions), stable systems are formed by two elementary constituents of opposite charge, as in the hydrogen atom, made of an electron and a proton, or in a meson, made of a quark and an antiquark. Quarks are bound by strings forming dipoles in mesons or triangles (or "Mercedes" stars) in baryons. Gravitation is different: all massive objects attract, there is no "antigravity"! The emergence and evolution of the coupling in a binary system is an essential ingredient in our analysis. It depends on many factors: internal motivation, external influence and many more. Three types of motivation may induce two-body correlations: \begin{itemize} \item{\bf Confounding}. A motivation for correlation between the actions of adjacent agents in a social network is external influence from elements in the environment. Mathematically, this means that there is a confounding variable $X$, and both the network $G$ and the set of active individuals $W$ come from distributions correlated with $X$. This is in contrast with the influence model, defined below. \item{\bf Influence}. An obvious explanation for social correlation is social influence.
Mathematically, this can be modelled as follows: first, the graph $G$ is drawn according to some distribution. Then, in each of the time steps $1, \dots, t$, each non-active agent decides whether to become active. The probability of becoming active for each agent $u$ is a function $p(x)$ of the number $x$ of other agents $v$ that have an edge to $u$ and are already active. \item{\bf Homophily}. The third and most obvious tendency of individuals to choose partners is based on similarity of characteristics (homophily, see Sec. \ref{Sec:AppI}). It leads to correlation between the actions of adjacent nodes in a social network. Mathematically, the set $W$ of active nodes is first selected according to some distribution, and then the graph $G$ is picked from a distribution that depends on $W$. \end{itemize} \subsection{Measuring social correlations} \label{Sec:Interrelate} Social correlation is a well-known phenomenon. Formally, this means that for two nodes $u$ and $v$ that are adjacent in $G$, the event that $u$ becomes active is correlated with the event that $v$ becomes active. There are three primary explanations for this phenomenon: homophily, the environment (or confounding factors), and social influence. A logistic function with the logarithm of the number of friends as the explanatory variable provides a good fit for the probability. Therefore, one uses the logistic function with this variable, that is, one estimates the probability $p(a)$ of activation for an agent with $a$ already-active partners as follows: \begin{equation}\label{1} p(a) = \frac{e^{\alpha \ln(a+1)+\beta}}{1 + e^{\alpha \ln(a+1)+\beta}}, \end{equation} where $\alpha$ and $\beta$ are parameters. The parameter $\alpha$ measures social correlation: a large value of $\alpha$ indicates a large degree of correlation. \section{Supply and demand; optimal pairing} \label{Sec:Hall} Optimization of marriages is a popular exercise in network theory, especially as "marriage" may take many forms.
A huge literature exists on the subject, see e.g. \cite{Marriage, Matching, Hall, Dilworth} and earlier references therein. A popular model is based on so-called Hall's marriage theorem \cite{Hall}, a combinatorial result that specifies when distinct elements can be chosen from overlapping finite sets of elements. It is equivalent to several theorems in combinatorics, including that of Dilworth \cite{Dilworth}. The name comes from an application to matchmaking: given a list of potential matches among an equal number of brides and grooms, the theorem gives a necessary and sufficient condition on the list for everyone to be married to an acceptable match. It is also an example of the efficiency of social networks. \subsection{Application to marriage, business partners, etc.} Suppose there is a certain number of women and men wanting to get married to someone of the opposite sex. Suppose that the women each have a list of the men they would like to marry, that every man would like to marry any woman who is happy to marry him, and that each person can have only one spouse. The best known mathematical result is Hall's marriage theorem \cite{Hall}. It says that the men and women can all be paired iff the following marriage condition holds: in any group of women, the total number of men who are acceptable to at least one of the women in the group is greater than or equal to the size of the group. It is clear that this condition is a necessary one. Hall's marriage theorem says that it is also sufficient. This is a wide and popular subject, related to different areas of life: not only marriage but also employment, business, education, social relations. It is also a typical market problem of optimizing supply and demand. Below we illustrate the problem by a simple example of optimizing the employment of four students at four universities. The procedure may be applied in many areas, including, of course, marriage.
Let us have four post-doc candidates, call them Peter, Paul, Juan and Maria, aspiring to the best universities, and assume there are four prominent universities, say those of Padova, Heidelberg, Oxford and Kiev, opening exactly the same number of post-doc positions/scholarships, one position each. The students are not of equal capacity. The universities obviously want the best among the students, while the students do not care too much about the choice. In Fig. \ref{Fig:Hall}, a bipartite graph, the students are at the top and the universities at the bottom. A student and a university are connected if the university wants to have that student. For example, Kiev will invite any student, so it is connected with all four applicants, as shown in Fig. \ref{Fig:Hall}, left panel. \begin{figure}[H] \centering \includegraphics[scale=1.3]{Marriage1.pdf} \caption{Visualising Hall's "marriage theorem" (demand vs. supply).} \label{Fig:Hall} \end{figure} Hall's theorem suggests the following optimal matching: suppose $F$ is a bipartite graph with parts $A$ and $B$. There is a matching covering $A$ iff for every subset $X\subseteq A$, $N(X)\geq|X|$, where $N(X)$ is the number of neighbours of $X$. Let us illustrate Hall's condition in the following way: for a set of $n$ universities, denote by $m$ the number of students that at least one of these universities wants to have. If $m\geq n$ for every such set of universities, then a matching is possible. Otherwise it is not. In Fig. \ref{Fig:Hall}, the middle panel highlights two universities (Padova and Oxford), but only one student, Juan, is wanted by either of them. Thus, since $1<2$, the matching fails. A solution: suppose Padova University's Council invites Peter instead. Then the matching is successful and every student gets a position. Generally speaking, the number of students and positions does not need to be equal.
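The check above can be performed mechanically. The following sketch is mine; the acceptance lists are only partly fixed by the example (the text specifies Kiev's list and the Padova/Oxford conflict over Juan, while Heidelberg's list is assumed for illustration):

```python
from itertools import combinations

def hall_condition(wants):
    """Check Hall's condition on the universities' side: every group
    of k universities must, between them, accept at least k distinct
    students. Returns (True, None) or (False, first failing group)."""
    unis = list(wants)
    for k in range(1, len(unis) + 1):
        for group in combinations(unis, k):
            accepted = set().union(*(wants[u] for u in group))
            if len(accepted) < k:
                return False, group
    return True, None

# Adjacency of the example: Kiev accepts everyone; Padova and Oxford
# both insist on Juan alone (Heidelberg's list is assumed).
wants = {
    "Padova": {"Juan"},
    "Heidelberg": {"Peter", "Maria"},
    "Oxford": {"Juan"},
    "Kiev": {"Peter", "Paul", "Juan", "Maria"},
}
ok, bad = hall_condition(wants)
assert not ok and set(bad) == {"Padova", "Oxford"}  # 1 < 2: no matching

# The suggested fix: Padova invites Peter instead.
wants["Padova"] = {"Peter"}
ok, _ = hall_condition(wants)
assert ok  # now every university can be matched to a distinct student
```

Enumerating all subsets costs $2^n$ checks, which is fine for four universities; for large instances one would instead compute a maximum matching directly (e.g. by augmenting paths), which certifies Hall's condition as a by-product.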
If, for example, there are 10 students and 4 positions, and one wishes to fill every position, one can still use Hall's theorem; however, in this case not every student will be granted a position. As argued by Barab\'asi \cite{Barabasi}, this problem has much in common with optimally matching supply and demand. For example, it applies when a certain number of aspirants apply for a position at a company (university etc.) and that company (university etc.) has a finite number of vacancies to be filled by the best aspirants, see also Ref. \cite{Marriage}. In what follows we will be interested in the relations of a couple of opposite sexes. NB: In general, "sex" refers to the biological differences between males and females, such as the genitalia and genetic differences. "Gender" is more difficult to define, but it can refer to the role of a male or female in society, known as a gender role, or to an individual's concept of themselves, or gender identity. \section{Conjugal life}\label{Sec:Conjugal} In this Section I investigate the creation and evolution of a family, a married couple -- the nucleus of any society. Marriage is usually preceded by a period of "pairing", {\it i.e.} a search for the optimal partner, see Sec. \ref{Sec:Hall}. I consider several scenarios illustrating conjugal life (many more are credible!). The dynamics $V(r,t)$ of the couple's life will be shown as a function of two variables -- the distance between the actors, $r$, and time, $t$ -- as well as their "worldline" (actually, a 2-dimensional curved string/band) embedded in (living in) a 3-dimensional space-time world, see Sec. \ref{Sec:Wlines}. In the following Subsection we apply a successful model of inter-particle interactions known in high-energy physics, based on the so-called Cornell potential \cite{Cornell}. In Subsection \ref{Models} we use several empirical functions to model conjugal life. Many more options are possible, e.g.
those attached to milestones in a couple's life, taken from the treasury of world art and literature, see Appendix B (Marriage quotes, aphorisms). \subsection{Toy model}\label{Models} In physics, the interaction between two charged particles, {\it e.g.} quarks and gluons, is well described by the so-called Cornell potential \cite{Cornell}, balancing attraction at large distances $r$ against repulsion at short distances: \begin{equation} V(r)=ar-b/r, \end{equation} where $a$ and $b$ are parameters. I apply this model to a married couple, extending it by introducing time dependence in $a$ and $b$, different for the partners, $a\rightarrow a_m(t)+a_f(t)$ and $b\rightarrow b_m(t)+b_f(t)$. Also, I add a background term $c(t)$ accounting for any external influence. Thus, the potential becomes \begin{equation} \label{Eq:Cornell} V(r,t)=[a_m(t)+a_f(t)]r-[b_m(t)+b_f(t)]/r+c(t). \end{equation} \begin{figure}[H] \centering \includegraphics[scale=1.2]{Cornell+.pdf} \caption{Conjugal relations following the "Cornell" potential, Eq. (\ref{Eq:Cornell}), with time-independent (for the moment) parameters $a$ and $b$.} \label{Fig:Cornell} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.95]{Cornell++.pdf} \caption{Simple (and optimistic) trends in conjugal relations; the "Cornell model" closely follows a logarithmic rise. Such a monotonic behaviour may be perturbed by small oscillations, as in the "Swedish family" of a 1973 TV miniseries written and directed by Ingmar Bergman, starring Liv Ullmann and Erland Josephson. Their matrimony is a sequence of attraction and repulsion caused by a mixture of common intellectual interests, sex, frustration etc., which may be described by a sinusoid as in Figs. \ref{Fig:Cornell1} and \ref{Fig:Cornell_Sin} (right), depending both on the distance $r$ and time $t$.
Pessimistic scenarios, {\it i.e.} those with degrading or interrupted/ruined relations, are not considered here.} \label{Fig:Cornell1} \end{figure} The interaction between individuals, similarly to physics, is a function of the internal (inherent) properties of the individuals and of their interrelation, both depending on time and relative distance. To be specific, below I concentrate on the basic binary system, that formed by a male and a female. My choice is motivated by: 1) the importance of the family as the basic cell of any society; 2)~similarity with the physical world: attraction between opposite (electric or magnetic) "charges" and repulsion between like charges. The above model is only an approximation to reality: the manifestation of masculine or feminine characters, respectively by men and women, is not as unique as for electric/magnetic charges. Even within traditional sexual relations, men may be endowed with feminine features and vice versa. By this I mean psychology, not physiology (biology). (I avoid the delicate subject of "erroneous" inborn gender and its correction ("repair") by surgical intervention, popular in certain media.) I mean something simple and obvious: opposite characters attract, compensating for something they are short of; usually we avoid/reject what we dislike in ourselves, or what we miss. Passive males usually choose an active female and {\it v.v.}. This "compensation mechanism" in human relations has been the subject of numerous studies, see e.g. \cite{Otto}. The variety of tempers of the individuals (encoded in $a$ and $b$) may be modelled by replacing their sum with \begin{equation}\label{Eq:lambda} A=\lambda a+(1-\lambda)b, \end{equation} where $\lambda$ is a measure of masculinity (or, symmetrically, femininity), varying between $0$ and $1$. This formula accounts for complementarity, important in balancing attraction and repulsion, assuming that $a$ and $b$ correspond to opposite (or at least different) tempers (see \cite{Otto}).
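For concreteness, Eq. (\ref{Eq:Cornell}) and Eq. (\ref{Eq:lambda}) can be evaluated numerically. The minimal sketch below is mine (function names and sample parameter values are assumptions, not fits); it also locates the distance at which the potential vanishes, relevant to the "privacy territory" discussed later:

```python
import math

def V(r, t, a_m, a_f, b_m, b_f, c=lambda t: 0.0):
    """Eq. (2): V(r,t) = [a_m(t)+a_f(t)] r - [b_m(t)+b_f(t)]/r + c(t).
    The partners' tempers enter via the time-dependent a's and b's;
    c(t) is a background term for external influence."""
    return (a_m(t) + a_f(t)) * r - (b_m(t) + b_f(t)) / r + c(t)

def blend(a, b, lam):
    """Eq. (3): A = lam*a + (1-lam)*b, with lam in [0, 1] a measure
    of masculinity (symmetrically, femininity)."""
    return lam * a + (1 - lam) * b

# Sample constant parameters (assumed values, for illustration only).
a = lambda t: 1.0   # linear-term strength
b = lambda t: 2.0   # 1/r-term strength

# With c = 0 and constant parameters, V vanishes at
# r0 = sqrt((b_m + b_f) / (a_m + a_f)).
r0 = math.sqrt((b(0) + b(0)) / (a(0) + a(0)))
assert abs(V(r0, 0.0, a, a, b, b)) < 1e-12
```

Feeding in slowly varying $a_m(t)$, $a_f(t)$, $b_m(t)$, $b_f(t)$ (e.g. sinusoids) reproduces the oscillating surfaces plotted in the figures of this Section.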
As noted, a balanced couple is one where the partners compensate the shortage/excess of their inborn qualities (egoism--altruism, openness--insularity, optimism--pessimism, practicality--dreaminess, etc.). The equilibrium may be regulated by the parameter $\lambda$ in Eq. (\ref{Eq:lambda}). The available literature (see Appendix, Sec. \ref{Sec:Quotes}), folklore and daily observations offer innumerable examples of binary relations as functions of time and space. \begin{figure}[H] \centering \includegraphics[scale=0.35]{Cornell3D_6.pdf} \includegraphics[scale=0.6]{Connell_Sin.pdf} \caption{The two-dimensional plots of Figs. \ref{Fig:Cornell} and \ref{Fig:Cornell1} are generalized to 3D by introducing, apart from distance, also time dependence of the parameters: $a\rightarrow a(t),\ b \rightarrow b(t)$ and $c\rightarrow c(t)$. NB: The right-hand icon also shows mild oscillations with time (cf. Fig. \ref{Fig:Cornell1}).} \label{Fig:Cornell_Sin} \end{figure} \begin{figure}[H] \includegraphics[scale=0.7]{Fig10.pdf} \caption{The rescaled surfaces (bands, strips) of Fig. \ref{Fig:Cornell_Sin}, embedded in an external coordinate system $(x,y,t)$, live there string-like, the string's elements (here, a married couple, labelled $m$ and $f$) interacting continuously, sweeping a world-sheet (brane) in the external $3$-dimensional space-time (cf. Figs. \ref{Fig:WL} and \ref{Fig:Marriage}). This simplified case aims to illustrate the idea.} \label{Fig:Embed} \end{figure} Figs. \ref{Fig:Cornell1}--\ref{Fig:Embed} illustrate scenarios of a couple's life. Note an interesting phenomenon, known in the microworld and related to repulsive forces at small distances $r$, known as the {\it fall to the center} or {\it ultraviolet divergence}. The problem was extensively studied in relativistic quantum field theory and cured by the {\it renormalization technique} \cite{Bog}. A popular review can be found {\it e.g.} in Ref. \cite{Kosyakov}.
In a similar way, two persons (here a married couple) lean towards each other up to a certain distance, corresponding to $V(r)=0$ in Fig. \ref{Fig:Cornell1}. This is understandable: each human being has his or her own innermost "privacy territory", closed to external intervention. This limit is "case-dependent", but it necessarily exists! If violated, i.e. penetrated by an outsider (intruder), the core/soul may be damaged and the individual may lose his/her {\it ego}. This is similar to the materialistic microworld: a particle loses its identity if an external agent penetrates the barrier of critical repulsion, see Ref. \cite{Kosyakov}. \subsection{Units and scales}\label{ssec:Units} In moving from the material ("body") to the spiritual ("soul") world, we face the delicate problem of units, indispensable in the natural sciences. Without going into details, let me mention only three options: classificatory (qualitative) notions, comparative ones, and quantitative ones, also called metric (contrary to the previous two, defined as "topological"). Let us try to be "metric" as far as possible, relying on the dominantly logical, causal rather than chaotic behaviour of people. Causality implies that any effect is preceded (caused) by its cause. On the other hand, any physical action, including mechanical motion, is to a large extent induced by emotions and motivation. In other words, a {\it measurable} action may be related/reduced, in some non-trivial way, to -- or derived from -- the accompanying emotions. The toy model of Subsection \ref{Models} is an attempt to do so: to translate ("materialize") feelings by using a familiar coordinate system. The linear dependence on $r$ may be replaced by more complicated functions, but the above "Cornell" form is a convenient way to demonstrate the idea. By choosing $a,\ b,\ r,\ t$ as variables, we must define their dimensions as well as that of the "potential" $V(a,b,r,t)$.
While the dimensions of $r$ (distance) and $t$ (time) are obvious ({\it e.g.} meters and seconds), the intensity of the "feelings", $a$ and $b$, remains to be defined. A new unit may be introduced. In choosing the scale one may follow the definition of temperature, scaled to the boiling and freezing points. For example, set $100$ degrees as the upper ("boiling") value of $V(r,t)$ (marriage) and $0$ as the "freezing" point, {\it e.g.} associated with separation (divorce etc.). Actually, it is common to characterize feelings using a "thermal" vocabulary, ranging from frozen relations ($0$ degrees), through cold, cool, warm and hot relations, arriving at the boiling point (blow-up) at $100$ degrees. In this section we have set down a framework admitting various dynamical inputs, visualized by simple semi-quantitative examples. Common is the start: a marriage occurs when two individuals decide to connect their world lines. The subsequent evolution may follow different avenues. The predictive power of our formalism depends on the dynamics fed in. The main source of information comes from empirical observations, literature, inspiration, etc. Common and definite are the "initial conditions": people marry when they prefer to live together rather than separately -- attraction dominates over repulsion. \section{Conclusions, perspectives and open questions}\label{Sec:Concl} In this paper I attempted the almost impossible: to combine irrational/indeterministic human behaviour with the rational/deterministic laws of physics. The conclusions of this study are manifold. On the one hand, reproducing the world lines of known people, extended by relevant social networks and genealogy, given the huge amount of "data" in the literature, is more than just amusing entertainment: it is instructive and useful not only for history but also as a check of the method presented in this paper, evolving towards useful applications and predictions.
An interesting next step involves multiply interacting networks fitted to more than two actors. In any case, understanding binary systems is indispensable for progressing towards more complex collective systems, including possible critical phenomena, with phase transitions in social systems. Van der Waals forces and the relevant equation, similarly to the binary systems discussed in Sec. \ref{Sec:Conjugal}, are based on attraction and repulsion between the constituents, offering many possibilities for their application in sociophysics. The simple semi-quantitative examples in that Section (Figs. \ref{Fig:Cornell}--\ref{Fig:Embed}) are meant merely to illustrate the basic ideas. Promising are studies of an increasing number of worldlines/tubes, up to a continuum (a merger of WLs). With the advent of computational and storage capacities, infinitely large manifolds of WLs, coming close and interacting multiply, may evolve towards a continuous $3$-dimensional bulk of world history, parametrized numerically or by phenomenological models. Modern computing and storage capacities offer the prospect of handling the interaction of large numbers of encoded world sheets. In studying the behaviour of a large number of individuals one faces two kinds of hierarchic systems: totally hierarchic (vertical, as in a WL) and completely democratic (horizontal networks) systems. The real world is a mixture of the two. The above-mentioned bipolarity (Western "democracy" vs. Eastern "administrative command system") has a parallel with the title of the present paper: while {\it networks} correspond to democracy, {\it worldlines} are hierarchic. In perspective, worldlines may play an important role in descriptive history. By this I mean a panoramic view of the evolution of society, including the WLs of individuals and groups, societies etc., as well as their interactions/intersections at various levels and in various forms.
Realization of such a huge ``bank of world lines'' was technically incredible in the past, but now, with the advent of huge computation and storage capacities, it may be realized! The fate of an ethnic/linguistic minority in an alien environment may be modelled by a drop of oil in water. Its chances of survival/assimilation (by dissolution) depend on the homogeneity and surface tension of the drop and on the aggressiveness of the surrounding medium. The theory of percolation may mimic contacts and flow across borders. We intend to study these phenomena with applications to familiar examples of big diasporas: Armenian, Russian, Jewish, Chinese, Hungarian, Spanish etc. \section*{Acknowledgements} I thank A. Zhokhin for numerous discussions and Yu. Shtanov for useful remarks. \section*{Appendix A. Types and dimensions of homophily} \label{Sec:AppI} \begin{itemize} \item{\bf Baseline vs. inbreeding.} One distinguishes between baseline homophily and inbreeding homophily. The former is the amount of homophily that would be expected by chance given an existing uneven distribution of people with varying characteristics, and the latter is the amount of homophily over and above this expected value \cite{Homophily}. \item{\bf Status vs. value.} Status homophily and value homophily also differ: in the former, individuals with similar social status characteristics are more likely to associate with each other than by chance, where ``status'' includes ascribed characteristics like race, ethnicity, sex, and age. In contrast, value homophily involves association with others who think in similar ways. \item{\bf Race and ethnicity.} Social networks may be affected by race and ethnicity, which account for the greatest proportion of inbreeding homophily. Smaller groups have lower diversity simply due to the number of members, and this tends to give racial and ethnic minority groups a higher baseline homophily.
\item{\bf Sex/gender.} As to sex and gender, the baseline homophily of networks is relatively low compared to race and ethnicity. Men and women frequently live together, and both are large and equally-sized populations. \item{\bf Age.} Most age homophily is of the baseline type. For example, the larger the age gap between two people, the smaller the chance that the older one is confided in by the younger. \item{\bf Religion.} Homophily based on religion is due to both baseline homophily and inbreeding. \item{\bf Education, occupation and social class.} Parents account for considerable baseline homophily with respect to education, occupation, and social class. \end{itemize} \section*{Appendix B. Marriage quotes and aphorisms} \label{Sec:Quotes} Literature and folklore offer an inexhaustible source to guide empirical world lines, see Sec. \ref{Models}. Below is a short selection of aphorisms. Many more were collected and commented on by Leo Tolstoy, see \cite{Leo}. {\bf Fran\c coise Sagan}: All marriages are successful. Difficulties begin when living together begins. {\bf Leo Tolstoy, ``War and Peace''}: Marriages are made in heaven (Les mariages se font dans les cieux; Die Ehen werden im Himmel geschlossen). {\bf Leo Tolstoy, ``Anna Karenina''}: All happy families resemble one another, each unhappy family is unhappy in its own way. {\bf Marie von Ebner-Eschenbach}: Marriages are made in heaven, but there they do not care that they are successful. {\bf A.I. Kuprin}: Separation for love is the same as the wind for fire: it extinguishes a small love, and inflates a large one even more. {\bf Friedrich Wilhelm Nietzsche}: If couples did not live together, successful marriages would occur more often. {\bf Folklore}: Out of sight, out of mind (Aus den Augen -- aus dem Sinn). Far from eye, far from heart. It's one step from love to hatred. \section*{Appendix C. 
Simple Wolfram Mathematica codes} \label{Wolfrem} For readers' convenience, below we quote several simple Wolfram Mathematica codes used in Subsec. \ref{Models}, Figs. \ref{Fig:Cornell}--\ref{Fig:Embed}. The toy models are intended merely to illustrate the idea, opening the way to more sophisticated applications. {\bf Cornell Conjugal}: \begin{verbatim} a1 = a2 = 1; b1 = b2 = 1; al1 = a1/t; al2 = a2/t; bl1 = b1*t; bl2 = b2*t; cl = 0; Plot3D[{-(al1 + al2)/r + (bl1 + bl2) r + cl}, {r, 0.1, 1} , {t, 0.1, 10}, AxesLabel -> {r, t, V}, BoxRatios -> {1, 1, 1}] \end{verbatim} {\bf World sheet; hierarchic coordinates}: \begin{verbatim} a1 = a2 = 1; b1 = b2 = 1; al1 = a1/t; al2 = a2/t; bl1 = b1*t; bl2 = b2*t; cl = 0; Plot3D[{-(al1 + al2)/r + (bl1 + bl2) r + cl}, {r, 0.1, 1} , {t, 0.1, 10}, AxesLabel -> {r, t, V}, BoxRatios -> {1, 1, 10}] \end{verbatim} \newpage {\bf Cornell3D}: \begin{verbatim} a1 = a2 = 1; b1 = b2 = 1; al1 = a1*t; al2 = a2; bl1 = b1; bl2 = b2/t^(0.1); cl = 0; Plot3D[{-0.1 (al1 + al2) r + 10 (bl1 + bl2)/r + cl}, {r, 2, 5} , {t, 5, 10}, AxesLabel -> {r, t, V}, BoxRatios -> {1, 1, 1}] \end{verbatim} \begin{verbatim} a1 = a2 = 1; b1 = b2 = 1; al1 = a1*t; al2 = a2*Sin[3 t]; bl1 = b1*t; bl2 = b2/t; cl = 0; Plot3D[{-(al1 + 2 al2)/(r + 1) + 0.5 (bl1 + bl2) r + cl}, {r, 1, 10} , {t, 1, 5}, BoxRatios -> {1, 1, 1}, AxesLabel -> {"r", "t", "v"}] c = 0; a = 1; b = 1; f1 = a*r; f2 = -b/r; f3 = f1 + f2 + c; Plot[{f1, f2, f3}, {r, 0, 5}, PlotLegends -> "Expressions"] \end{verbatim} \newpage
\section{\large{Intro: Loop-erasure and random partitioning}}\label{intro} Consider an arbitrary simple undirected weighted graph $G=(V, E, w)$ on $N=|V|$ vertices, where $E=\{e=(x,y): x,y \in V \}$ stands for the edge set and $w: E \rightarrow [0,\infty)$ is a given weight function. We call the Simple Random Walk (SRW) associated to $G$ the continuous-time Markov chain $(X_t)_{t\ge 0}$ with state space $V$ and minus \emph{the graph Laplacian} as infinitesimal generator, i.e., the $N\times N$ matrix: \begin{equation}\label{Laplacian} -\mathcal{L}= \mathcal{A}- \mathcal{D}, \end{equation} where for any $x,y\in [N]:=\{1,2,\dots ,N\}$, $\mathcal A(x,y)=w(x,y)\mathbf{1}_{\{ x\neq y\}}$ is the \emph{weighted adjacency matrix} and $\mathcal D(x,y)=\mathbf{1}_{\{ x=y\}}\sum_{z\in[N]\setminus\{x\}} w(x,z) $ is the \emph{diagonal matrix} guaranteeing that the entries of each row of $\mathcal L$ sum up to $0$. The goal of this paper is to explore the following probability measure on the set of partitions $\mathcal P(V)$ of the vertex set $V$. \begin{definition}[{\bf Loop-erased partitioning}]\label{LEP} For a given graph $G=(V, E, w)$, fix a positive parameter $q>0$. We call \emph{loop-erased} a partition of $V$ into $m\leq N$ blocks sampled according to the following probability measure: \begin{equation}\label{LEPmeas} \mu_q(\Pi_m)= \frac{q^m \times \sum_{F: \Pi(F)=\Pi_m } w(F)}{Z(q)}, \quad\quad \Pi_m\in \mathcal P(V), \end{equation} where the sum is over spanning rooted forests $F$ of $G$, $\Pi(F)$ stands for the partition of $V$ induced by a forest $F$, $w(F):=\prod_{e\in F} w(e)$ is the forest weight, and $Z(q)$ is a normalizing constant. We denote by $\Pi_q$ a random variable in $\mathcal P(V)$ with law $\mu_q$. \end{definition} In the above definition a spanning rooted forest of a graph is a collection of rooted trees spanning its vertex set.
Denoting by $\mathcal F$ the set of spanning rooted forests of $G$, we notice that---due to the Markov tree theorem---the normalizing constant in \cref{LEPmeas} can be expressed as the characteristic polynomial of the matrix $-\mathcal L$ evaluated at $q$, i.e. $$Z(q):=\sum_{F\in \mathcal F }q^{|\Pi(F)|}w(F)=\det[q+\mathcal L].$$ Furthermore, the number of blocks in $\Pi_q$, denoted by $|\Pi_q|$, is distributed as the sum of $N$ independent Bernoulli random variables with success probabilities $\frac{q}{q+\lambda_i}$, for $i\leq N $, with the $\lambda_i$'s being the eigenvalues of $\cL$. We refer the reader to \cite[Prop. 2.1]{AG} for the latter statements. \subsection{Small vs. ``fat'' clusters.} The first factor $q^m$ in \cref{LEPmeas} favors partitions having many small blocks as $q$ grows, while as $q$ vanishes, the measure degenerates into a one-block partition. The second, combinatorial, factor concentrates instead on partitions with a few ``fat'' blocks. Indeed, in the unweighted case this second factor counts how many rooted trees can be arranged in each block. For example, in the simple setup of a complete graph with $w\equiv1$, the measure in \cref{LEP} reduces to \begin{equation}\label{completeLEP} \mu_q(\Pi_m)= \frac{q^m \times \prod_{i=1}^m n_i^{n_i-1}}{Z(q)}, \end{equation} for a partition $\Pi_m=\{B_1,\ldots, B_m\}\in \mathcal P(V)$ constituted of $m$ blocks with sizes $|B_i|=:n_i$, $i\leq m$. \cref{completeLEP} holds true because, by Cayley's formula, there are $n_i^{n_i-2}$ unrooted trees spanning block $B_i$, and since we are dealing with rooted trees, an extra volume factor $n_i$ accounting for the possible roots is needed. This competition between ``many small'' and ``few fat'' blocks depends on the delicate interplay among the tuning parameter $q$, the underlying geometry and the weight function $w$.
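As a small numerical illustration of \cref{completeLEP} (not part of the original argument), one can check on a complete graph with unit weights that summing $q^{m}\prod_{i} n_i^{n_i-1}$ over all set partitions reproduces $\det[q+\mathcal L]=q(q+N)^{N-1}$; the Python sketch below, with an ad-hoc partition enumerator, does exactly this:

```python
import numpy as np

def set_partitions(elems):
    """Recursively enumerate all set partitions of a list (small inputs only)."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for smaller in set_partitions(rest):
        # place `first` in each existing block, or in a new singleton block
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

N, q = 5, 1.7

# partition sum: each block B contributes |B|^{|B|-1} rooted spanning trees
# (Cayley's formula |B|^{|B|-2} times the |B| choices of root)
Z_sum = sum(
    q ** len(P) * np.prod([float(len(B)) ** (len(B) - 1) for B in P])
    for P in set_partitions(list(range(N)))
)

# determinant side: for K_N with unit weights, L = N*I - 11'
L = N * np.eye(N) - np.ones((N, N))
Z_det = np.linalg.det(q * np.eye(N) + L)

print(Z_sum, Z_det, q * (q + N) ** (N - 1))
```

The closed form $q(q+N)^{N-1}$ follows from the eigenvalues of $\mathcal L$ on $\mathcal K_N$: $0$ with multiplicity one and $N$ with multiplicity $N-1$.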
\subsection{Sampling algorithm and Loop-Erased RW (LERW)}\label{Wilson} An attractive feature of this measure is that there exists a simple and efficient\footnote{The averaged running time is given by the sum of the inverses of the eigenvalues of the graph Laplacian, see \cite{M00}.} sampling algorithm, originally due to Wilson \cite{W96} and based on the associated LERW killed at random times. The LERW with killing is the process obtained by running the SRW, erasing cycles as soon as they appear, and stopping the resulting self-avoiding trajectory at an independent time $\tau_q$ with exponential law of parameter $q$. The algorithm can be described as follows: \begin{enumerate} \item\label{1} pick \emph{any arbitrary} vertex in $V$ and run a LERW up to time $\tau_q\overset{d}{\sim}\exp(q).$ Call $\gamma_1$ the obtained self-avoiding trajectory. \item\label{2} pick \emph{any arbitrary} vertex in $V$ that does not belong to $\gamma_1$. Run a LERW until $\min\{\tau_q, \tau_{\gamma_1}\}$, $\tau_{\gamma_1}$ being the first time the SRW hits a vertex in $\gamma_1$. Call $\gamma_2$ the union of $\gamma_1$ and the new self-avoiding trajectory obtained in this step. Notice that if the killing occurs before $\tau_{\gamma_1}$, then $\gamma_2$ is a rooted forest in $G$, else $\gamma_2$ is a rooted tree. \item Iterate step (\ref{2}) with $\gamma_{\ell+1}$ in place of $\gamma_{\ell}$ until exhaustion of the vertex set $V$. \end{enumerate} When this algorithm stops, it produces a \emph{spanning rooted forest} $F\in \mathcal F$, whose roots are the points where the involved LERWs were killed along the algorithm steps. The resulting forest $F$ on $G$ induces the partition $\Pi(F)$ of the vertex set $V$, where each block is identified by the vertices belonging to the same tree. It can be shown that the probability of obtaining a given rooted spanning forest $F$ is proportional to $q$ raised to the number of trees, times the forest weight $w(F)$.
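The steps above translate directly into code. The following Python sketch is our own illustration (not the paper's implementation): it runs the killed walk with the standard last-exit bookkeeping, which performs the cycle erasure implicitly, and returns the induced partition. As a sanity check, on $\mathcal K_8$ with $w\equiv 1$ and $q=2$ the empirical mean of $|\Pi_q|$ should be close to $\sum_i q/(q+\lambda_i)=1+7\cdot\frac{2}{10}=2.4$, by the Bernoulli representation recalled above.

```python
import random

def sample_partition(W, q, rng):
    """Wilson's algorithm with killing rate q > 0.
    W: symmetric N x N weight matrix (diagonal ignored); returns the partition
    of {0,...,N-1} induced by the sampled rooted spanning forest."""
    N = len(W)
    in_forest = [False] * N
    parent = [None] * N          # successor in the tree; None at the roots
    for start in range(N):
        if in_forest[start]:
            continue
        nxt = {}                 # last-exit map: implicit loop erasure
        v = start
        while True:
            nbrs = [(u, W[v][u]) for u in range(N) if u != v and W[v][u] > 0]
            tot = q + sum(wt for _, wt in nbrs)
            r = rng.random() * tot
            if r < q:            # killed: v becomes the root of a new tree
                nxt[v] = None
                break
            r -= q
            for u, wt in nbrs:   # jump to a neighbor proportionally to weight
                r -= wt
                if r <= 0:
                    break
            nxt[v] = u
            if in_forest[u]:     # hit the already-built forest: stop this walk
                break
            v = u
        v = start                # retrace the loop-erased path and freeze it
        while not in_forest[v]:
            in_forest[v] = True
            parent[v] = nxt[v]
            if parent[v] is None:
                break
            v = parent[v]
    def root(v):
        while parent[v] is not None:
            v = parent[v]
        return v
    blocks = {}
    for v in range(N):
        blocks.setdefault(root(v), []).append(v)
    return list(blocks.values())

# sanity check on K_8 with w = 1, q = 2: E|Pi_q| = 1 + 7*2/(2+8) = 2.4
rng = random.Random(0)
W = [[1.0] * 8 for _ in range(8)]
mean_blocks = sum(len(sample_partition(W, 2.0, rng)) for _ in range(2000)) / 2000
print(mean_blocks)
```

Note that the starting vertices are scanned in a fixed order; by the cited arbitrariness of the starting points, any order yields the same forest law.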
It follows that the induced partition is distributed as $\Pi_q$ in \cref{LEP}. We refer the reader to \cite{AG} for the proof of the latter and for more detailed aspects of this algorithm, including dynamical variants. In the sequel we will denote by $\P$ a probability measure on an abstract probability space sufficiently rich to support the randomness required in the steps of this algorithm. \subsection{Partition detecting metastable landscapes.} The sampling algorithm described above shows that the resulting partition tends to cluster into the same block (tree) points that can be visited by the SRW with high probability on time scale $\tau_q$. In this sense the loop-erased partition tends to capture \emph{metastable-like regions} (blocks), namely, regions of points from which it is difficult for the SRW to escape on time scale $1/q$. This makes the probability measure $\mu_q$ interesting for randomized clustering procedures, see in this direction \cite{ACGM1} and \cite[Sec. 5]{ACGM2}. Still, it is not a priori clear how strong and stable this feature of capturing metastable landscapes is, since it heavily depends on the underlying geometry (weighted adjacency matrix) and on the choice of the killing parameter $q$. The goal of this paper is to start making this heuristic precise by analyzing 2-point correlations associated to $\mu_q$ on the simplest informative geometries. \subsection{Two-point correlations} Consider the probability that two points in $V$ belong to different blocks of $\Pi_q$. As we will see, such a 2-point correlation function turns out to be analyzable by means of LERW explorations, and it encodes relevant information on what the loop-erased partition looks like on the underlying graph as a function of the parameters. Here is the formal definition together with an operative characterization.
\begin{definition}[{\bf Pairwise interaction potential}]\label{FIP} For a given $q>0$ and $G$, fix $x,y\in V$; we call \emph{pairwise interaction potential} the following probability: \begin{align}\notag U_q(x,y):=&\P (x \text{ and } y \text{ are in different blocks of } \Pi_q)\\ &=\sum_{\gamma}\P^{LE_q}_x(\Gamma=\gamma)\P_y(\tau_\gamma>\tau_q)\label{LEdec} \end{align} where $\P_x^{LE_q}$ and $\P_x$ stand for the laws of the LERW killed at rate $q$ and of the SRW, respectively, starting from $x\in V$, and the above sum runs over all possible loop-erased paths $\gamma$ starting at $x$. \end{definition} The representation in \cref{LEdec} is a consequence of the sampling algorithm in \cref{Wilson} and it holds true since, remarkably, in steps (\ref{1}) and (\ref{2}) of the algorithm the starting points can be chosen arbitrarily. Furthermore, we notice that, as for any general random partitioning of a vertex set, such an interaction potential defines a distance on the vertex set. This specific metric $U_q(x,y)$ can be interpreted as an affinity measure capturing how densely connected the vertices $x$ and $y$ are in the graph $G$, thus providing further motivation to analyze it. \subsection{Related literature} Several properties of the forest measure associated to the loop-erased partitioning have been derived in the recent works \cite{AG,AG1}. Based on these results, in~\cite[Prop. 6]{ACGM2} and~\cite[Sect. 5.2]{ACGM3}, the authors proposed an approach making use of the loop-erased partition and so-called intertwining dualities to describe the evolution of \emph{local equilibria} of a finite state space Markov chain presenting traps. From a broad perspective, the presence of the first factor in \cref{LEPmeas} shows that this measure has the flavor of the celebrated \emph{Random Cluster Model}, or so-called \emph{FK-percolation}, see e.g. \cite{G09}. Nonetheless, these objects are quite different.
Indeed, our clusters are identified by oriented spanning trees rather than arbitrary undirected subgraphs, and, from a practical viewpoint, unlike for FK-percolation, we do have at our disposal an efficient exact sampling method. As mentioned before, this sampling method based on LERW is originally due to Wilson \cite{W96} and shows that the measure considered herein is intimately related to the well-known \emph{Uniform Spanning Tree} (UST) measure. Actually, the measure on spanning rooted forests mentioned in \cref{Wilson} can be seen as a non-homogeneous variant of the UST measure. Therefore the results presented in the next section are along the lines of the flourishing literature on scaling limits of the UST and LERW, see e.g. \cite{A91,BK05,BP93,G80,K07,LS18,LSW04,Pitman02,S00}. A detailed exact and asymptotic analysis of observables related to Wilson's algorithm on a complete graph has been pursued in \cite{P02}. The derivation of our results is in this spirit, though we deal with the additional randomness given by the presence of the killing parameter, which in turn makes the combinatorics more involved. \subsection{Paper overview} Our main theorems are presented in \cref{results} and identify the pairwise potential and its scaling limits on a complete graph, \cref{proporso}, and on a non-homogeneous complete graph with two communities, \cref{2par2comssintpot,phasetrans}. Some basic consequences for the macroscopic emergent partition on these mean-field models are derived in \cref{macro}. The concluding \cref{proofcomplete,proof2com} are devoted to the proofs for the complete graph and the community model, respectively. \subsection{Basic standard notation} In what follows we will use the following standard asymptotic notation. For given sequences $f(N)$ and $g(N)$, we write: \begin{itemize} \item $f(N)=o(g(N))$ if $\lim_{N\to\infty}\frac{f(N)}{g(N)}=0$. \item $f(N)=O(g(N))$ if $\limsup_{N\to\infty}\frac{f(N)}{g(N)}<\infty$.
\item $f(N)=\omega(g(N))$ if $\lim_{N\to\infty}\frac{f(N)}{g(N)}=\infty$. \item $f(N)=\Omega(g(N))$ if $\liminf_{N\to\infty}\frac{f(N)}{g(N)}>0$. \item $f(N)=\Theta(g(N))$ if $0< \liminf_{N\to\infty}\frac{f(N)}{g(N)}\le \limsup_{N\to\infty}\frac{f(N)}{g(N)}<\infty$. \item $f(N)\sim g(N)$ if $\lim_{N\to\infty}\frac{f(N)}{g(N)}=1$. \end{itemize} For $k\leq n\in \ensuremath{\mathbb{N}}$ we will denote by $(n)_{k}:=n(n-1)(n-2)\cdots(n-k)$ the descending factorial. Furthermore, we denote by $I$ the identity matrix, and by $\mathbf{1}$ and $\mathbf{1}'$, respectively, the row and column vectors of all $1$'s, where the dimensions will be clear from the context. We will write $A^{Tr}$ for the \emph{transpose} of a matrix $A$. \section{\large{Results: Potential and its scaling on mean-field models}}\label{results} Our first result gives the characterization of the pairwise interaction potential in the absence of geometry for finite $N$, and shows that this probability is asymptotically non-degenerate at scale $\sqrt{N}$: \begin{theorem}\label{proporso}{\bf (Mean-field potential and limiting law)} Fix $q>0$ and let $\mathcal{K}_N$ be a complete graph on $N\geq 1$ vertices with constant edge weight $w>0$. Then, for all $x\neq y \in [N]$, \begin{equation}\label{orsoformula} U^{(N)}_q(x,y)=U^{(N)}_q=\sum_{h=1}^{N-1}\frac{q}{q+Nw}\left(\frac{Nw}{q+Nw}\right)^{h-1}\prod_{k=2}^{h}\left(1-\frac{k}{N}\right). \end{equation} Furthermore, if $q=\Theta (wz\sqrt{N})$, for fixed $z>0$ and $w=\Theta(1)$, then \begin{equation}\label{orsolimite}U_{q}:=\lim_{N\to\infty}U^{(N)}_{q}=\sqrt{2\pi}ze^{\frac{z^2}{2}}\P(Z>z),\end{equation} with $Z$ being a standard Gaussian random variable. \end{theorem} Our second result is the analogue of \cref{orsoformula} when every vertex is still accessible from every other, but the edge weights are non-homogeneous and give rise to a community structure. In this sense we will informally refer to this graph as a \emph{mean-field-community} model.
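Before turning to the community model, a quick numerical sanity check of \cref{orsoformula} and \cref{orsolimite} can be instructive (our own illustration; the chosen values $w=1$, $z=1$, $N=10^4$ are arbitrary): with $q=z\sqrt{N}$, the finite-$N$ sum is already close to $\sqrt{2\pi}\,z e^{z^2/2}\P(Z>z)$ for moderate $N$.

```python
import math

def U_N(N, q, w=1.0):
    """Finite-N potential of Eq. (orsoformula): sum over LE-path lengths h."""
    tot, prod = 0.0, 1.0
    for h in range(1, N):
        if h >= 2:
            prod *= 1.0 - h / N   # running product of (1 - k/N), k = 2..h
        tot += (q / (q + N * w)) * (N * w / (q + N * w)) ** (h - 1) * prod
    return tot

def U_limit(z):
    """Gaussian limit: sqrt(2 pi) * z * exp(z^2/2) * P(Z > z)."""
    tail = 0.5 * math.erfc(z / math.sqrt(2.0))
    return math.sqrt(2.0 * math.pi) * z * math.exp(z * z / 2.0) * tail

z, N = 1.0, 10**4
print(U_N(N, z * math.sqrt(N)), U_limit(z))  # both roughly 0.65
```

The gap between the two printed values is of order $N^{-1/2}$, in line with the scale at which the limit in \cref{orsolimite} is taken.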
Formally, for given positive reals $w_1$ and $w_2$, we denote by $\mathcal{K}_{2N}(w_1,w_2)$ the graph $G$ with $V=[2N]$, and $w(e)=w_1$ if $e=(x,y)$ is such that either $x,y\in[N]$ or $x,y\in[2N]\setminus[N]$, and $w(e)=w_2$ otherwise. Thus, the weight $w_1$ measures the pairwise connection intensity within the same community, while $w_2$ measures it between pairs of nodes belonging to different communities. Given the symmetry of the model, we will use the notation $U^{(N)}_{q}(out)$ to refer to the potential $U^{(N)}_{q}(x,y)$ for $x$ and $y$ in different communities. Conversely, we set $U^{(N)}_{q}(in)$ for the potential associated to two nodes belonging to the same community. \begin{theorem}\label{2par2comssintpot}{\bf (Potential for mean-field-community model) } Fix $q, w_1, w_2>0$ and consider a two-community graph $\mathcal{K}_{2N}(w_1,w_2)$. Let $T_q\geq1$ be a geometric random variable with success parameter $$\alpha:=\frac{q}{q+N(w_1+w_2)}$$ and let $\(\tilde X_n\)_{n\in\ensuremath{\mathbb{N}}_0}$ be a discrete-time Markov chain with state space $\{\underline{1},\underline{2}\}$ and transition matrix $$\tilde P=\left(\begin{matrix}p&1-p\\1-p&p \end{matrix}\right),\quad p=\frac{w_1}{w_1+w_2}.$$ Denote by $\ell(t)=\sum_{s<t}\mathbf{1}_{\left\{\tilde X_s=\underline{1} \right\}}$ the corresponding local time in state $\underline{1}$ up to time $t$ and by $\tilde \P_{\underline{1}}$ the corresponding path measure starting from $\underline{1}$.
For $x \in[N]$, set $\star= in$ if $y\in[N]$, and $\star=out$ if $y\in[2N]\setminus[N]$; then \begin{equation} \begin{aligned}\label{g} U^{(N)}_{q}(x,y)= U^{(N)}_{q}(\star) := \sum_{n\geq 1} \P(T_q=n)\sum_{k= 1}^{n} \tilde \P_{\underline{1}}(\ell(n)=k) N^{-n+1}\hat{f}(n,k)\theta(n,k)P^{\dagger}_{\star}(n,k) \end{aligned} \end{equation} where \begin{equation}\label{fandtheta} \hat{f}(n,k)= (N-2)_{k-1}(N-1)_{n-k},\quad\quad \theta(n,k)= \frac{\left(q-\lambda_1(n,k)\right)\left(q-\lambda_2(n,k)\right)}{q(q+2Nw_2)}\end{equation} with, for $i=1,2$, \begin{equation} \lambda_{i}(n,k)=-\frac{1}{2}\left[w_1n+w_2N+(-1)^{i}\sqrt{w_1^2(2k-n)^2+4\left(N-k\right)\left(N-k\right)w_2^2}\right], \end{equation} and \begin{equation}\label{Pmorte} P^{\dagger}_{\star}(n,k)=\frac{ q( q+k_\star(w_1-w_2)+w_2N)} {[q+k w_1][q+(n-k)w_1]+Nw_2(2q+nw_1)+w_2^2[Nn-k(n-k)]} \times \eta_\star\end{equation} with \begin{equation} k_\star:=\begin{cases} k, & \text{ if } \star = out, \\ n- k, & \text{ if } \star = in, \end{cases} \quad \quad\quad\quad \eta_\star=\begin{cases} (N-1)(N-n+k-1), & \text{ if } \star = out, \\ N(N-k-1), & \text{ if } \star = in. \end{cases} \end{equation} \end{theorem} The above theorem says that the pairwise potential can be seen as a double expectation of the function $g_{\star}(n,k)=N^{-n+1} \left(\hat{f}\theta P^{\dagger}_{\star}\right)(n,k)$ in \cref{g} with respect to the geometric time $T_q$ and to the local time of the coarse-grained RW $\{\tilde X_n\}_{n\in\ensuremath{\mathbb{N}}_0}$. As can be seen in the proof, the analysis of this model can in fact be reduced to the study of such a coarse-grained RW jumping between the two ``clumped communities'' up to the independent random time $T_q$. The function $g_{\star}$ is the crucial combinatorial term encoding, in the different parameter regimes, the most likely trajectories for such a stopped two-state macroscopic walk.
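The probabilistic ingredients of the theorem, the geometric time $T_q$ and the local time $\ell(n)$ of the two-state chain $\tilde X$, are easy to tabulate. The Python sketch below (our own illustration, not the paper's code) computes the law $\tilde\P_{\underline 1}(\ell(n)=k)$ by dynamic programming; for $p=\tfrac12$ the states $\tilde X_1,\dots,\tilde X_{n-1}$ are i.i.d.\ uniform, so $\ell(n)-1$ is Binomial$(n-1,\tfrac12)$, which gives an exact cross-check.

```python
from math import comb

def local_time_law(n, p):
    """P_1(l(n) = k): law of the time spent in state 1 among X_0,...,X_{n-1},
    for the two-state chain started at 1 with stay-probability p."""
    dist = {(1, 1): 1.0}          # X_0 = 1 contributes 1 to the local time
    for _ in range(n - 1):
        new = {}
        for (s, k), pr in dist.items():
            for s2 in (1, 2):     # stay with prob. p, switch with prob. 1-p
                pr2 = pr * (p if s2 == s else 1 - p)
                k2 = k + (1 if s2 == 1 else 0)
                new[(s2, k2)] = new.get((s2, k2), 0.0) + pr2
        dist = new
    law = {}
    for (s, k), pr in dist.items():
        law[k] = law.get(k, 0.0) + pr
    return law

law = local_time_law(6, 0.5)
print({k: round(v, 4) for k, v in sorted(law.items())})
# for p = 1/2 this is the law of 1 + Binomial(5, 1/2)
```

Weighting such a table by $\P(T_q=n)$ and by $g_\star(n,k)$ gives a direct, if brute-force, numerical evaluation of \cref{g}.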
\begin{remark}{\bf (Extensions to many communities of arbitrary sizes and weights) } The formula in \cref{g} can be derived also for the general model with an arbitrary number of communities of varying sizes and arbitrary weights within and among communities. The corresponding statement and proof are more involved, but they follow exactly the same scheme as in the equal-size two-community case captured in the above theorem. We refer the reader interested in such an extension to \cite{Q16}. \end{remark} The next theorem gives the scaling limit of the potential computed in \cref{2par2comssintpot}; the resulting scenario is summarized in the phase diagram in \cref{fig:phdiag}. \begin{figure}[h] \includegraphics[width=14cm]{fip-diagram-new.png} \caption{The above diagram describes at a glance the limiting behavior of the interaction potential as captured in \cref{phasetrans}. The \emph{detectability} region corresponds to the regimes where the difference between the \emph{in}- and \emph{out}-potentials is maximal. In this case, indeed, the SRW does not manage to exit its starting community within time scale $1/q$ and hence it is ``confined with high probability to its local universe''. In the \emph{dust} region both the \emph{in}- and \emph{out}-potentials degenerate to 1; it is in fact a regime where the killing rate is sufficiently large (recall from \cref{orsolimite} that $\sqrt{N}$ is the critical scale for the complete graph) to produce ``dust'' as the emerging partition. Finally, the \emph{global mixing} region is the other degenerate regime, where the RW ``mixes globally'' in the sense that it changes community many times within time scale $1/q$, hence losing memory of its starting community. The separating lines correspond to the delicate critical phases where the competition between the above behaviors occurs.
This will become transparent in the proof in \cref{detect}, where such boundaries will require a more detailed asymptotic analysis.} \label{fig:phdiag} \end{figure} \begin{theorem}\label{phasetrans}{\bf (Detectability and phase diagram for two communities) } Under the assumptions of \cref{2par2comssintpot}, set $w_1=1$, $w_2=N^{-\beta}$ and $q=N^\alpha$ for some $\alpha\in\ensuremath{\mathbb{R}},\: \beta\in\ensuremath{\mathbb{R}}^+$. Then: \begin{itemize} \item[\bf{(a)}] if $1-\beta<\alpha=\frac{1}{2}$, $\lim_{N\to\infty}U^{(N)}_q(out)=1$ and $\lim_{N\to\infty}U^{(N)}_q(in)= \varepsilon_0(\beta)\in(0,1)$. \item[\bf{(b)}] if $1-\beta<\alpha<\frac{1}{2}$, $\lim_{N\to\infty}U^{(N)}_q(out)=1$ and $\lim_{N\to\infty}U^{(N)}_q(in)=0$. \item[\bf{(c)}] if $\alpha=1-\beta< \frac{1}{2}$, $\lim_{N\to\infty}U^{(N)}_q(out)=\varepsilon_2(\alpha,\beta)\in(0,1)$ and $\lim_{N\to\infty}U^{(N)}_q(in)=0$. \item[\bf{(d)}] if $\alpha<\min\{\frac{1}{2},1-\beta\}$, $\lim_{N\to\infty}U^{(N)}_q(\star)=0, \star\in\{in,out\}.$ \item[\bf{(e)}] if $\alpha=\frac{1}{2}<1-\beta$, $\lim_{N\to\infty}U^{(N)}_q(\star)= \varepsilon_1(\alpha,\beta)\in(0,1)$, $\star\in\{in,out\}$. \item[\bf{(f)}] if $\alpha>\frac{1}{2}$, $\lim_{N\to\infty}U^{(N)}_q(\star)=1, \star\in\{in,out\}.$ \end{itemize} \end{theorem} \begin{remark}{\bf (Anticommunities for negative $\beta$)} The above theorem is stated for arbitrary $\alpha\in\ensuremath{\mathbb{R}}$ and $\beta>0$. We notice that while for $\beta=0$ we are back to the complete graph with constant weight 1, for $\beta<0$ it would be more appropriate to speak about ``anticommunities'' rather than communities. In fact, in this case, at every step the SRW prefers to change community rather than stay in its original one. Thus, it is somewhat artificial to ask what the loop-erased partitioning captures. This is the reason why the plot in \cref{fig:phdiag} is restricted to $\beta\geq 0$.
However, the theorem still remains valid, and not surprisingly the difference between the \emph{in}- and \emph{out}-potentials turns out to be zero. \end{remark} \begin{remark}{\bf (Community detection)} We notice that this two-point correlation function is sufficient to detect the underlying communities in a sub-region where the ratio of the \emph{in} and \emph{out} weights is bigger than $\sqrt{N}$. This suggests that estimating the probabilities in \cref{FIP} could be a valuable and cheap method to design a community detection algorithm for well-separated regions. Nonetheless, there might be other observables associated to $\Pi_q$ which perform better, in the sense that they can be used for detection beyond regions \textbf{(a)}-\textbf{(c)} in \cref{fig:phdiag}. It is beyond the scope of this paper to explore the practical implications of this loop-erased partitioning in the context of community detection. For this reason we will omit this and similar types of algorithmic considerations. As mentioned in the introduction, the main goal is rather to start understanding analytically the measure $\mu_q$ as a function of the tuning parameter. \end{remark} The last statement below collects some simple consequences, deduced from these two-point correlations, on the macroscopic structure of $\Pi_q$ on such mean-field models. We recall that $|\Pi_q|$ stands for the number of blocks in the random partition $\Pi_q$. \begin{corollary}\label{macro}{\bf (Macroscopic emergent structure)} Under the assumptions of \cref{phasetrans}, the following scenarios hold true. If $\beta>0$, there exists $c>0$ depending only on $\alpha$ and $\beta$ s.t. $$\P\left(|\Pi_q|=cN^{\alpha\wedge1}(1\pm o(1) ) \right)=1-o(1).$$ Moreover: \begin{itemize} \item[\bf{(a)}] if $1-\beta<\alpha=\frac{1}{2}$ then $\textbf{whp}$ there are two blocks of linear size s.t. each block has a fraction $(1-o(1))$ of vertices from the same community.
\item[\bf{(b)}] if $1-\beta<\alpha<\frac{1}{2}$ then $\textbf{whp}$ there are two blocks of size $N(1-o(1))$ s.t. each block has a fraction $(1-o(1))$ of vertices from the same community. \item[\bf{(c)}] if $\alpha=1-\beta< \frac{1}{2}$ then $\textbf{whp}$ there is at least a block of linear size. \item[\bf{(d)}] if $\alpha<\min\{\frac{1}{2},1-\beta\}$ then $\textbf{whp}$ there is one block of size $2N(1-o(1))$. \item[\bf{(e)}] if $\alpha=\frac{1}{2}<1-\beta$ then $\textbf{whp}$ there is at least a block of linear size. \item[\bf{(f)}] if $\alpha>\frac{1}{2}$ then $\textbf{whp}$ blocks of linear size do not exist. \end{itemize} \end{corollary} \section{Proofs of \cref{proporso}: homogeneous complete graph}\label{proofcomplete} \subsection*{Proof of \cref{orsoformula}} For convenience, we consider a discretization of the continuous-time Markov process with generator \begin{equation}\label{def:lap} -\cL=\cA-\cD,\quad\text{ with }\quad \cA=w(\mathbf{1}\mathbf{1}'-I) \quad \text{ and }\quad \cD=(N-1)wI. \end{equation} Set $L=\frac{1}{\alpha}\cL$ with $\alpha=Nw$, so that $L=I-\frac{1}{N}\mathbf{1}\mathbf{1}'$ and the associated transition matrix is given by \begin{equation} P=I-L=\frac{1}{N}\mathbf{1}\mathbf{1}'. \end{equation} If we consider the killing as an absorbing state within the state space of the Markov chain extended from $V$ to $V\bigcup\{\Delta\}$, $\Delta$ denoting this absorbing state, we get the adjacency matrix \begin{equation} \widehat \cA=\left(\begin{matrix} w\mathbf{1}\mathbf{1}'&q\mathbf{1}\\ \mathbf{0}'&0 \end{matrix}\right), \end{equation} and generator \begin{equation} -\widehat \cL= \widehat \cA-\widehat \cD,\qquad \widehat \cD=\left(\begin{matrix} (Nw+q)I&\mathbf{0}\\ \mathbf{0}'&0 \end{matrix}\right). 
\end{equation} We can then normalize it by setting \begin{equation} -\widehat L=-\frac{1}{\alpha+q}\widehat \cL=\left(\begin{matrix} \frac{w}{Nw+q}\mathbf{1}\mathbf{1}'-I&\frac{q}{Nw+q}\mathbf{1}\\ \mathbf{0}'&0 \end{matrix}\right) \end{equation} and get a discrete RW with associated transition matrix given by \begin{equation} \widehat P= I-\widehat L=\left(\begin{matrix} \frac{w}{Nw+q}\mathbf{1}\mathbf{1}'&\frac{q}{Nw+q}\mathbf{1}\\ \mathbf{0}'&1 \end{matrix}\right)=\left(\begin{matrix} (1-p)\frac{1}{N}\mathbf{1}\mathbf{1}'&p\mathbf{1}\\ \mathbf{0}'&1 \end{matrix}\right), \end{equation} where \begin{equation}\label{p} p:=\frac{q}{\alpha + q}. \end{equation} It should be clear that a sample of a LE-path starting at a given vertex can be obtained as the output of the following procedure: \begin{itemize} \item With probability $p$ the discrete process reaches the absorbing state. In particular, we set $T_q$ for a geometric random variable of parameter $p=q/(\alpha+q)$. \item With probability $1-p$ the LERW moves according to the law $P(v,\cdot)$, where $v$ is the last reached node. \item We call $H_n$ the set of vertices covered by the LE-path up to time $n$. Then, if at time $n+1$ the transition $X_n\to X_{n+1}$ takes place and the vertex $X_{n+1}\not\in H_n$, then $ H_{n+1}=H_n\cup\{X_{n+1} \}$. Conditioning on $|H_n|$, the latter event occurs with probability $\frac{N-| H_n|}{N}$. Conversely, if $X_{n+1}\in H_n$, then we remove from $H_n$ all the vertices that have been visited by the LERW since its last visit to $X_{n+1}$. As a consequence the quantity $|H_n|$ decreases. One can then compute that the reductions occur with law \begin{equation} \P\left( |H_{n+1}|=h\:|\:|H_n|\ge h, T_q>n+1 \right)=\frac{1}{N}. \end{equation} \end{itemize} It is easier to study the quantity $|H_{n}|$ by using the following metaphor. We interpret $|H_{n}|$ as the step occupied at time $n$ by a bear moving on a staircase with $N$ steps.
In particular, we will assume that \begin{itemize} \item The bear starts with probability 1 from the first step. \item At each time the bear selects a step of the staircase uniformly at random, including the step he currently stands on. \item If the chosen step is a lower one (or the current one), he moves to that step. \item If he chooses an upper step, then he walks in the upper direction by a single step. \item Before each move, there is a probability $p$ as in \cref{p} that the bear ``falls down''. \end{itemize} Let us next fix $q=0$, that is, $p=0$, so that we can study the bear's dynamics independently of his falling. Setting $Z(n)$ for the position of the bear at time $n\in \ensuremath{\mathbb{N}}$, we get \begin{align} \P(Z(0)=\cdot)=&\left(1,0,0,0,\dots,0\right)\\ \P(Z(1)=\cdot)=&\left(\frac{1}{N},1-\frac{1}{N},0,0,\dots,0\right)\\ \P(Z(2)=\cdot)=&\left(\frac{1}{N},\left(1-\frac{1}{N}\right)\frac{2}{N},\left(1-\frac{1}{N}\right)\left(1-\frac{2}{N}\right),0,\dots,0\right)\\ \P(Z(3)=\cdot)=&\left(\frac{1}{N},\left(1-\frac{1}{N}\right)\frac{2}{N},\left(1-\frac{1}{N}\right)\left(1-\frac{2}{N}\right)\frac{3}{N},\left(1-\frac{1}{N}\right)\left(1-\frac{2}{N}\right)\left(1-\frac{3}{N}\right),\dots,0\right)\\ \P(Z(n)=h)=&\begin{cases} \left(1-\frac{1}{N} \right)\left(1-\frac{2}{N}\right)\cdots \left(1-\frac{h-1}{N} \right)\frac{h}{N}&\text{ if }n\ge h\\ \left(1-\frac{1}{N} \right)\left(1-\frac{2}{N}\right)\cdots \left(1-\frac{h-1}{N} \right)&\text{ if }n=h-1\\ 0&\text{ if }n<h-1. \end{cases} \end{align} The latter implies that at time $n=h$ we have reached the ergodic measure over the first $h$ steps of the staircase, while at time $n=N$ the probability measure is exactly the ergodic one. It is interesting to notice that a simpler expression can be written for the cumulative distribution of the variable $Z(n)$, i.e. 
\begin{equation} \P\left(Z(n)\ge h\right)=\begin{cases} \left(1-\frac{1}{N} \right)\left(1-\frac{2}{N}\right)\cdots \left(1-\frac{h-1}{N} \right)&\text{ if }n\ge h-1\\ 0&\text{ if }n<h -1\\ \end{cases} \end{equation} Next, calling $T^-$ the time immediately before the bear falls, we get \begin{align} \nonumber\P\left(Z(T^-)\ge h \right)=&\P\left(T^-<h-1 \right)\P\left(Z(T^-)\ge h| T^-<h-1 \right)+\P\left(T^-\ge h-1 \right)\P\left(Z(T^-)\ge h| T^-\ge h-1 \right)\\ =&0+(1-p)^{h-1} \left(1-\frac{1}{N} \right)\left(1-\frac{2}{N}\right)\cdots \left(1-\frac{h-1}{N}\right) \end{align} which gives us the distribution of the last step reached by the bear before his fall. Recall that this is equivalent to the length of the original LERW starting on $x\in \cK_{N}$, when the walk is stopped at an exponential time of rate $q$. Hence, we are now left to compute the probability that another walker, starting on $y\not= x$, is killed before it hits the previously sampled LERW. Thanks to the bear metaphor, for the size of the LE-trajectory we get: \begin{equation} \P^{LE_q}_x(|\Gamma|\geq h)=(1-p)^{h-1}\prod_{i=1}^{h-1}\left(1-\frac{i}{N}\right) \end{equation} and by explicit computation, setting $T_\Gamma$ for the first hitting time of the LE-path $\Gamma$, \begin{align*} U^{(N)}_q(x,y)=&\sum_{h=1}^{N-1}\P^{LE_q}_x(|\Gamma|= h)\P_y(T_q<T_\Gamma | |\Gamma|=h)\\ =&\sum_{h=1}^{N-1}\P^{LE_q}_x(|\Gamma|= h)[\P_y(T_q<T_\Gamma| |\Gamma|=h, y\in \Gamma)\P(y\in \Gamma ||\Gamma|=h)\\ &+\P_y(T_q<T_\Gamma||\Gamma|=h, y\notin \Gamma)\P(y\notin \Gamma| |\Gamma|=h)]\\ =&\sum_{h=1}^{N-1}\P^{LE_q}_x(|\Gamma|= h)\left(\frac{q}{q+hw}\right)\frac{N-h}{N-1}\\ =&\sum_{h=1}^{N-1}\P^{LE_q}_x(|\Gamma|\geq h)\left(\frac{q}{q+hw}\right)\frac{N-h}{N-1}-\sum_{h=1}^{N-1}\P^{LE_q}_x(|\Gamma|\geq h+1)\left(\frac{q}{q+hw}\right)\frac{N-h}{N-1}\\ =&\sum_{h=1}^{N-1}\left[\left(\frac{Nw}{q+Nw}\right)^{h-1}\prod_{i=1}^{h-1}\left(1-\frac{i}{N}\right)\right]\left(\frac{q}{q+hw}\right)\frac{N-h}{N-1}+\\
&-\sum_{h=1}^{N-1}\left[\left(\frac{Nw}{q+Nw}\right)^{h}\prod_{i=1}^{h}\left(1-\frac{i}{N}\right)\right]\left(\frac{q}{q+hw}\right)\frac{N-h}{N-1}\\ =&\sum_{h=1}^{N-1}\frac{q}{q+hw}\frac{N-h}{N-1}\left(\frac{Nw}{q+Nw}\right)^{h-1}\prod_{i=1}^{h-1}\left(1-\frac{i}{N}\right)\left[1-\frac{Nw}{Nw+q}\left(\frac{N-h}{N}\right)\right]\\ =&\sum_{h=1}^{N-1}\frac{q}{q+hw}\frac{N-h}{N-1}\left(\frac{Nw}{q+Nw}\right)^{h-1}\prod_{i=1}^{h-1}\left(1-\frac{i}{N}\right)\left(\frac{q+hw}{q+\alpha}\right)\\ =&\sum_{h=1}^{N-1}\frac{q}{q+\alpha}\left(\frac{\alpha}{q+\alpha}\right)^{h-1}\frac{N-h}{N-1}\prod_{i=1}^{h-1}\left(1-\frac{i}{N}\right)\\ =&\sum_{h=1}^{N-1}\frac{q}{q+\alpha}\left(\frac{\alpha}{q+\alpha}\right)^{h-1}\prod_{i=2}^{h}\left(1-\frac{i}{N}\right)\\ =&\sum_{k=0}^{N-2}\frac{q}{q+\alpha}\left(\frac{\alpha}{q+\alpha}\right)^{k}\prod_{i=2}^{k+1}\left(1-\frac{i}{N}\right). \end{align*} \qed \subsection*{Proof of \cref{orsolimite}} Let \begin{equation} \frac{\xi_q}{N}:=\frac{q}{Nw+q} \end{equation} and notice that if $q=x\sqrt{N}$, with $x,w=\Theta(1)$, then \begin{equation} q=\frac{Nw\xi_q}{N-\xi_q}\Longrightarrow q\sim w\xi_q. \end{equation} Call \begin{equation} f(k,N):=\prod_{i=2}^k\left(1-\frac{i}{N}\right), \end{equation} in order to rewrite \begin{align} \begin{split} U^{(N)}_q=&\sum_{k=0}^{N-2}\left(\frac{\xi_q}{N}\right)\left(1-\frac{\xi_q}{N} \right)^{k}\prod_{i=2}^{k+1}\left(1-\frac{i}{N}\right)\\ =&\sum_{k=0}^{N-2}\left(\frac{\xi_q}{N}\right)\left(1-\frac{\xi_q}{N} \right)^{k}f(k+1,N) \end{split} \end{align} and notice that the first factor in each summand is the probability that the geometric random variable $T_q \overset{d}{\sim} Geom\left(\frac{\xi_q}{N}\right)$ assumes the value $k$. Moreover it trivially holds that \begin{equation}\label{fknmin1} f(k+1,N)\le1,\:\:\forall k\in\ensuremath{\mathbb{N}},\qquad f(k+1,N)=0,\:\:\forall k\ge N-1. \end{equation} Hence, \begin{equation}\label{uqmeant} U^{(N)}_q=\ensuremath{\mathbb{E}}[f(T_q+1,N)].
\end{equation} \noindent Let us approximate $\ln f(k+1,N)$ to first order as follows \begin{align}\label{eolo} \begin{split} \ln f(k+1,N)=&\sum_{i=2}^{k+1}\ln\left(1-\frac{i}{N}\right)=-\sum_{i=2}^{k+1}\frac{i}{N}+O\left(\frac{i^2}{N^2}\right)\\ =&-\frac{1}{N}\frac{(k+1)(k+2)-2}{2}+kO\left(\frac{k^2}{N^2}\right)=-\frac{1}{N}\frac{k^2+3k}{2}+O\left(\frac{k^3}{N^2}\right)\\ =&-\frac{k^2}{2N}+O\left(\frac{k}{N}+\frac{k^3}{N^2}\right)=:-\frac{k^2}{2N}+c_N(k). \end{split} \end{align} Next, set $Y\,\overset{d}{\sim}\, exp(x)$ and $Z\,\overset{d}{\sim} \,\mathcal N(0,1)$, notice that $\ensuremath{\mathbb{E}}[e^{-\frac{Y^2}{2}}]=\sqrt{2\pi}xe^{\frac{x^2}{2}}\P(Z>x)$ and that \begin{equation} \lim_{N\to\infty}|\ensuremath{\mathbb{E}}[e^{-\frac{T_q^2}{2N}}]-\ensuremath{\mathbb{E}}[e^{-\frac{Y^2}{2}}]|=0, \end{equation} since $T_q/\sqrt{N}$ converges in distribution to $Y$ as $N$ diverges. In view of the latter together with \cref{uqmeant}, we can estimate \begin{align*} \left|U^{(N)}_q-\sqrt{2\pi}xe^{\frac{x^2}{2}}\P(Z>x) \right|&\leq \left| \ensuremath{\mathbb{E}}[f(T_q+1,N)]-\ensuremath{\mathbb{E}}[e^{-\frac{T_q^2}{2N}}]\right| + o(1) \\ \le & \left| \ensuremath{\mathbb{E}}[f(T_q+1,N)]-\sum_{k=0}^{\lfloor N^\delta\rfloor}\P(T_q=k)e^{-\frac{k^2}{2N}}e^{c_N(k)}\right| \\ &+ \left|\sum_{k=0}^{\lfloor N^\delta\rfloor}\P(T_q=k)e^{-\frac{k^2}{2N}}e^{c_N(k)}-\ensuremath{\mathbb{E}}[e^{-\frac{T_q^2}{2N}}]\right| + o(1) \\\le& \sum_{k=\lfloor N^\delta\rfloor+1}^\infty\P(T_q=k) + \left|\sum_{k=0}^{\lfloor N^\delta\rfloor}\P(T_q=k)e^{-\frac{k^2}{2N}}e^{c_N(k)}-\sum_{k=0}^{\lfloor N^\delta\rfloor}\P(T_q=k)e^{-\frac{k^2}{2N}}\right| + o(1)\\ =& o(1), \end{align*} where the last inequality holds true by choosing any $\delta\in\left(\frac{1}{2},\frac{2}{3}\right)$, which in particular guarantees that $c_N(k)=o(1)$.
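The convergence just established lends itself to a direct numerical check. The following sketch (illustrative only and not part of the proof; it assumes $w=1$, evaluates the sum in \cref{uqmeant} by brute force, and the function names are ours) compares $U^{(N)}_q$ with the limiting value $\sqrt{2\pi}\,x e^{x^2/2}\P(Z>x)$:

```python
import math

def u_q(N, x, w=1.0):
    """Evaluate U_q^{(N)} = E[f(T_q + 1, N)] by direct summation,
    with q = x * sqrt(N) and T_q geometric of parameter xi_q/N = q/(Nw + q)."""
    q = x * math.sqrt(N)
    p = q / (N * w + q)                      # xi_q / N
    geom = p                                 # P(T_q = 0)
    f = 1.0                                  # f(1, N): empty product over i = 2..1
    total = 0.0
    for k in range(int(20 * math.sqrt(N))):  # the geometric tail beyond is negligible
        total += geom * f
        geom *= 1.0 - p                      # P(T_q = k + 1)
        f *= max(0.0, 1.0 - (k + 2) / N)     # f(k + 2, N)
    return total

def limit(x):
    """The limiting value sqrt(2 pi) x e^{x^2/2} P(Z > x), Z standard normal."""
    return math.sqrt(2 * math.pi) * x * math.exp(x * x / 2) * 0.5 * math.erfc(x / math.sqrt(2))

for N in (10**4, 10**6):
    print(N, u_q(N, 1.0), limit(1.0))
```

The discrepancy shrinks as $N$ grows, consistently with the error terms $c_N(k)$ above.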
\qed \section{Proofs for mean-field-communities}\label{proof2com} \subsection{Proof of \cref{2par2comssintpot}} We use here the same line of argument used in the proof of \cref{proporso}. We will consider the process having state space $V=V_1\sqcup V_2$, where $$V_1=\left\{1,\dots, N_1 \right\},\qquad V_2=\left\{N_1+1,\dots,N_1+N_2 \right\},$$ and generator \begin{equation}\label{lap2com} -\cL(x,y)=\begin{cases} w_1&\text{if } x\not=y \text{ and } x,y \text{ in the same community}\\ w_2&\text{if } x\not=y \text{ and } x,y \text{ not in the same community}\\ -(N_1-1)w_1-N_2 w_2&\text{if } x=y \text{ and } x\in V_1\\ -(N_2-1)w_1-N_1 w_2&\text{if } x=y \text{ and } x\in V_2. \end{cases} \end{equation} We will specialize later on the case $N_1=N_2=N$.\\ We now consider a killed LERW $\Gamma$, and we denote by $\Gamma_i$ the set of points of the $i$-th community belonging to $\Gamma$, i.e., \begin{equation} \Gamma_i=\Gamma\cap V_i,\qquad i=1,2. \end{equation} We can write \begin{equation} \P_x^{LE_q}(|\Gamma_1|=k_1, |\Gamma_2|=k_2)=\sum_{\gamma: |\gamma_1|=k_1,|\gamma_2|=k_2} \P_x^{LE_q}(\gamma)\label{marshall}, \end{equation} and we assume, without loss of generality, that $x\in V_1$; then, by conditioning, we get for $ y\neq x$ with $y\in V_j$, $j=1,2$ \begin{equation} U_q^{(N)}(x,y)=\sum_{k_1=1}^{N_1-\mathbf{1}_{j=1}}\sum_{k_2=0}^{N_2-\mathbf{1}_{j=2}}\P^{LE_q}_x(|\Gamma_1|=k_1, |\Gamma_2|=k_2)\cdot \P_y\left(T_q<T_{\Gamma}\big|\Gamma\right),\label{ma} \end{equation} $T_{\Gamma}$ being the hitting time of $\Gamma$. \subsection*{The LERW starting from $x$} A result due to Marchal \cite{M00} provides the following explicit expression for the probability of a loop erased trajectory: \begin{equation}\label{LERWlaw} \P_x^{LE_q}(\Gamma=\gamma)=\prod_{i=1}^{|\gamma|}w(x_{i-1},x_i)\frac{\det_{V\setminus\gamma}{(qI+\mathcal{L})}}{\det{(qI+\mathcal{L})}}. 
\end{equation} By looking closely at the latter formula we distinguish two parts: a product over the weights of the edges of the path and an algebraic part containing the ratio of two determinants which encodes the ``loop-erased'' feature of the process. In particular we notice that the former contains all the details about the trajectory, while the latter only depends on the number of points visited in each community. Let $j_1$ (respectively, $j_2$) be the number of jumps from the first community to the second (from the second to the first, respectively) along the LE-path. We have \begin{equation}\label{2comm} \begin{split} \P_x^{LE_q}(|\Gamma_1|=k_1, &|\Gamma_2|=k_2|x\in V_1,\:y\in V_2)=\\ =&\sum_{\gamma: |\gamma_1|=k_1,|\gamma_2|=k_2} \P_x^{LE_q}(\Gamma=\gamma)\\ =&\binom{N_1-1}{k_1-1}\binom{N_2-1}{k_2}\cdot(k_1-1)!(k_2)!\cdot\sum_{j_{1}=0}^{\min\{k_1,k_2\}}\sum_{j_{2}=j_{1}-1}^{j_{1}}\binom{k_1-1}{j_1-\mathbf{1}_{j_{1}\neq j_{2}}}\binom{k_2-1}{j_{2}-\mathbf{1}_{j_{1}=j_{2}}}\cdot\\ &\cdot w_1^{k_1+k_2-(j_1+j_2)-1}w_2^{j_1+j_2}q\frac{\det_{V\setminus\{1,2,\dots,k_1,N_1+1,N_1+2,\dots,N_1+k_2\}}(qI+\mathcal{L})}{\det(qI+\mathcal{L})} \end{split} \end{equation} where \begin{itemize} \item The first binomial coefficient accounts for the possible choices of the $k_1-1$ points of $\Gamma_1$ other than $x$ (one of the points must be $x$) among the $N_1-1$ points of the first community different from $x$. In the second community we can choose any $k_2$ vertices among the $N_2-1$ vertices of the second community different from $y$. \item The factorials account for the possible orderings of the nodes covered in each community. Notice that the path in the first community must start at $x$. \item We sum over all the possible jumps from the first community to the second, $j_1$, and from the second to the first, $j_2$ (notice that $j_2$ must be equal to $j_1$ or smaller by one).
\item For any choice over the product of the previous three terms we have a path that has probability as given by the Marchal formula. \end{itemize} In the case in which we condition on having both $x$ and $y$ in the same (first, say) community we have \begin{equation}\label{2comm2} \begin{split} \P_x^{LE_q}(|\Gamma_1|=k_1, &|\Gamma_2|=k_2|x\in V_1,\:y\in V_1)=\\ =&\sum_{\gamma: |\gamma_1|=k_1,|\gamma_2|=k_2} \P^{LE_q}_x(\Gamma=\gamma)\\ =&\binom{N_1-2}{k_1-1}\binom{N_2}{k_2}\cdot(k_1-1)!(k_2)!\cdot\sum_{j_{1}=0}^{\min\{k_1,k_2\}}\sum_{j_{2}=j_{1}-1}^{j_{1}}\binom{k_1-1}{j_1-\mathbf{1}_{j_{1}\neq j_{2}}}\binom{k_2-1}{j_{2}-\mathbf{1}_{j_{1}=j_{2}}}\cdot\\ &\cdot w_1^{k_1+k_2-(j_1+j_2)-1}w_2^{j_1+j_2}q\frac{\det_{V\setminus\{1,2,\dots,k_1,N_1+1,N_1+2,\dots,N_1+k_2\}}(qI+\mathcal{L})}{\det(qI+\mathcal{L})}. \end{split} \end{equation} Namely, only the first combinatorial term changes. \subsection*{The ratio of determinants} In our \emph{mean-field} setup, the terms in~\cref{2comm} and~\cref{2comm2} coming from ~\cref{LERWlaw} can be explicitly computed. We consider here the two communities case, i.e. $V=V_1\sqcup V_2$, where the communities possibly have different sizes, $|V_1|=N_1$ and $|V_2|=N_2$. Now, consider the matrix obtained by erasing $k_1$ ($k_2$) rows and corresponding columns in the first community (the second one, respectively) in $-\cL$. We are left with a square matrix made of two square blocks on the diagonal of size $N_1-k_1=:K_1$ (respectively $N_2-k_2=:K_2$). 
We will denote this matrix by \begin{equation} -M= \begin{pmatrix} d_1 & \cdots &w_1 & w_2 & \cdots & w_2 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ w_1 & \cdots & d_1 & w_2 & \cdots &w_2 \\ w_2 & \cdots &w_2 &d_2 &\cdots &w_1 \\ \vdots & \ddots & \vdots &\vdots &\ddots & \vdots \\ w_2 &\cdots & w_2 &w_1 &\cdots & d_2\\ \end{pmatrix}= \begin{pmatrix} A_1 &B \\ B^{Tr} &A_2 \end{pmatrix}, \end{equation} where the elements on the diagonal are given by \begin{equation} d_1=-((N_1-1)w_1 + N_2w_2),\qquad d_2=-((N_2-1)w_1 +N_1w_2). \end{equation} We want to find $K_1+K_2$ solutions of the problem \begin{equation} -Mv=\lambda v \label{eigen1} \end{equation} First, we consider eigenvectors of the form $v=(x_1,x_1,...,x_1,x_2,...,x_2)^{Tr}$, where the upper component has length $K_1$ and the lower one has length $K_2$. If we write \cref{eigen1} explicitly we get the following linear system: \begin{equation}\label{smalllumppro} -\begin{pmatrix} d_1 +(K_1-1)w_1 & K_2w_2 \\ K_1w_2 & d_2 + (K_2-1)w_1 \end{pmatrix} \begin{pmatrix}x_1\\ x_2\end{pmatrix}=\lambda\begin{pmatrix}x_1\\ x_2\end{pmatrix}, \end{equation} from which we get two eigenvalues, which we will refer to as $\lambda_1$ and $\lambda_2$. \\ Then we consider $v=(x_1,x_2,..., x_{K_1},0,...,0)^{Tr}$; with this choice we are left with the system \begin{equation} -\begin{pmatrix} d_1 &\cdots &w_1 \\ \vdots &\ddots &\vdots \\ w_1 &\cdots &d_1 \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_{K_1} \end{pmatrix}=\lambda\begin{pmatrix} x_1 \\ \vdots \\ x_{K_1} \end{pmatrix}, \qquad w_2(x_1+\cdots+x_{K_1})=0\end{equation} and we have to find $K_1-1$ eigenvalues that are associated with eigenvectors orthogonal to constants. By direct computation, $A_1$ has eigenvalue $\lambda_1':=(N_1w_1+N_2w_2)$ with multiplicity $K_1-1$.
With the opposite choice, namely $v=(0,...,0, x_1,..., x_{K_2})^{Tr}$, we get \begin{equation} -\begin{pmatrix} d_2 &\cdots &w_1 \\ \vdots &\ddots &\vdots \\ w_1 &\cdots &d_2 \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_{K_2} \end{pmatrix}=\lambda\begin{pmatrix} x_1 \\ \vdots \\ x_{K_2} \end{pmatrix}, \qquad\qquad w_2(x_1+\cdots+x_{K_2})=0. \end{equation} Namely, there is an eigenvalue $\lambda_2':=(N_2w_1+N_1w_2)$ with multiplicity $K_2-1$. So the spectrum of $M$ is \begin{equation} \text{spec}(M)=(\lambda_1, \lambda_2, \lambda_1',\lambda_2' ) \end{equation} with multiplicity denoted by $\mu_{M}(\cdot)$: \begin{equation} \mu_{M}(\lambda_1)=1,\quad \mu_{M}(\lambda_2)=1,\quad \mu_{M}(\lambda_1')=K_1-1,\quad \mu_{M}(\lambda_2')=K_2-1. \end{equation} Therefore, we can see that the ratio of determinants in \cref{2comm} and \cref{2comm2} can be written explicitly. Indeed, at the denominator we have \begin{equation} \det{(qI+\mathcal{L})}=q(q+(N_1+N_2)w_2)(q+N_1w_1+N_2w_2)^{N_1-1}(q+N_2w_1+N_1w_2)^{N_2-1}, \end{equation} while at the numerator we are left with \begin{equation} \det_{V\setminus\{1,2,\dots,k_1,N_1+1,N_1+2,\dots,N_1+k_2\}}(qI+\mathcal{L})=(q+\lambda_1)(q+\lambda_2)(q+\lambda_1')^{N_1-k_1-1}(q+\lambda_2')^{N_2-k_2-1} \end{equation} where \begin{equation} \lambda_1':=N_1w_1+N_2w_2, \qquad \lambda_2':=N_1w_2+N_2w_1, \end{equation} while $\lambda_{1}$ and $\lambda_{2}$ are the two solutions of the system in \cref{smalllumppro}. In particular, if we specialize to the case $N_1=N_2=N$ we can conclude that the ratio of determinants is given by \begin{equation}\label{def:theta} \theta(k_1,k_2):=\frac{(q-\lambda_1(k_1,k_2))(q-\lambda_2(k_1,k_2))}{q(q+2Nw_2)(q+\alpha)^{k_1+k_2}} \end{equation} where we defined \begin{equation} \alpha:=N(w_1+w_2), \end{equation} and \begin{equation*} \lambda_{i}(k_1,k_2):=-\frac{1}{2}\left[w_1(k_1+k_2)+2Nw_2+(-1)^i\sqrt{w_1^2(k_1-k_2)^2+4\left(N-k_1\right)\left(N-k_2\right)w_2^2}\right],\quad i=1,2.
\end{equation*} \subsection*{The path starting from $y$} Now we have to consider the second path starting from $y$, which determines the root to which $y$ will be connected in the forest generated by the algorithm. The latter corresponds to the second factor in \cref{ma}. Notice that it is sufficient to consider such a path in its simpler fashion, i.e. without erasing the loops, since we are only concerned with the absorption of the walker: either in $\gamma$ or killed at rate $q$. Moreover, we can exploit again the symmetry of the model to reduce it to a Markov chain $\bar X$ with state space $\{\bar 1,\bar 2,\bar 3,\bar 4\}$ corresponding to the sets $\left\{V_1\setminus \gamma_1, V_2\setminus \gamma_2, \gamma_1\sqcup \gamma_2,\Delta \right\}$, where $\Delta$ is again the absorbing state, i.e., the ``state-independent'' exponential killing. We will assume that $$|\gamma_i|=k_i,\qquad |V_i|=N_i,\qquad i=1,2.$$ Hence, the transition matrix we are interested in is given by \begin{equation}\label{smallprocmorte} \bar P:=\left(\begin{matrix} Q&R\\0&I \end{matrix} \right), \end{equation} where \begin{equation} Q:=D^{-1}\left(\begin{matrix} \left(N_1-k_1-1 \right)w_1&\left(N_2-k_2 \right)w_2\\ \left(N_1-k_1 \right)w_2&\left(N_2-k_2-1 \right)w_1 \end{matrix} \right), \end{equation} \begin{equation} D^{-1}:=\left(\begin{matrix}(q+\alpha_1-w_1)^{-1}&0\\0&(q+\alpha_2-w_1)^{-1}\end{matrix}\right),\qquad R:=D^{-1}\left(\begin{matrix} k_1w_1+k_2w_2&q\\ k_1w_2+k_2w_1&q \end{matrix} \right). \end{equation} with \begin{equation} \alpha_1:=N_1w_1+N_2w_2,\qquad \alpha_2:=N_1w_2+N_2w_1. \end{equation} The states represent: \begin{itemize} \item[($\bar 1$)] nodes of the $1^{st}$ community that have \emph{not} been covered by the LE-path started at $x$. \item[($\bar 2$)] nodes of the $2^{nd}$ community that have \emph{not} been covered by the LE-path started at $x$. \item[($\bar 3$)] nodes of \emph{both} communities that have been covered by the LE-path started at $x$.
\item[($\bar 4$)] the absorbing state $\Delta$. \end{itemize} Calling $T_{abs}$ the hitting time of the absorbing set $\left\{\bar 3 ,\bar 4\right\}$, we want to compute the probability that the process $\bar X$ is absorbed in state $\bar 4$ and not in $\bar 3$. In terms of our original process, this means that the process is killed before the hitting of the LE-path starting at $x$. By direct computation \begin{align}\label{pmorte} \begin{split} \P_{\bar 2}(\bar X(T_{abs})=\bar 4)=&\sum_{k=0}^\infty\bar P^k(\bar 2,\bar 1)\frac{q}{q+\alpha_1-w_1}+\sum_{k=0}^\infty\bar P^k(\bar 2,\bar 2)\frac{q}{q+\alpha_2-w_1}\\ =&\left(\sum_{k=0}^\infty Q^k\right)D^{-1}\binom{q}{q}(2)\\ =&(I-Q)^{-1}D^{-1}\binom{q}{q}(2)\\ =:&P^{\dagger}(2). \end{split} \end{align} Notice that the first component of the vector $P^\dagger\in\ensuremath{\mathbb{R}}^2$ corresponds to the \emph{intra-community} case $x,y\in V_i$ for some $i$, i.e., $U^{(N)}_q(in)$, while the second one to the \emph{inter-community} case, namely $U^{(N)}_q(out)$.
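As a sanity check, not part of the proof, the absorption probability in \cref{pmorte} can be validated numerically: the sketch below rebuilds the two transient rows of $\bar P$ directly from the community structure (the parameter values are illustrative), accumulates the mass absorbed in $\bar 4$ by iterating the chain, and compares it with $(I-Q)^{-1}D^{-1}(q,q)^{Tr}$:

```python
def bar_rows(N1, N2, k1, k2, w1, w2, q):
    """Rows of bar P from the transient states (bar 1, bar 2); columns are
    (bar 1, bar 2, bar 3 = LE-path, bar 4 = Delta)."""
    d1 = q + (N1 - 1) * w1 + N2 * w2   # total rate out of a node in V_1 \ gamma_1
    d2 = q + (N2 - 1) * w1 + N1 * w2
    r1 = [(N1 - k1 - 1) * w1 / d1, (N2 - k2) * w2 / d1, (k1 * w1 + k2 * w2) / d1, q / d1]
    r2 = [(N1 - k1) * w2 / d2, (N2 - k2 - 1) * w1 / d2, (k1 * w2 + k2 * w1) / d2, q / d2]
    return r1, r2, d1, d2

def absorbed_in_delta(start, N1, N2, k1, k2, w1, w2, q, steps=5000):
    """P_start(bar X(T_abs) = bar 4), computed by iterating the distribution."""
    r1, r2, _, _ = bar_rows(N1, N2, k1, k2, w1, w2, q)
    p1, p2 = (1.0, 0.0) if start == 1 else (0.0, 1.0)
    dead = 0.0
    for _ in range(steps):
        dead += p1 * r1[3] + p2 * r2[3]
        p1, p2 = p1 * r1[0] + p2 * r2[0], p1 * r1[1] + p2 * r2[1]
    return dead

def p_dagger(N1, N2, k1, k2, w1, w2, q):
    """(I - Q)^{-1} D^{-1} (q, q)^{Tr}, solved as a 2x2 linear system."""
    r1, r2, d1, d2 = bar_rows(N1, N2, k1, k2, w1, w2, q)
    a11, a12, a21, a22 = 1 - r1[0], -r1[1], -r2[0], 1 - r2[1]
    det = a11 * a22 - a12 * a21
    b1, b2 = q / d1, q / d2
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a21 * b1) / det

print(p_dagger(30, 40, 3, 5, 1.2, 0.4, 0.7))
print(absorbed_in_delta(2, 30, 40, 3, 5, 1.2, 0.4, 0.7))
```

The two computations agree up to floating-point precision, for both starting states.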
\newline\newline If we now use the assumption that $N_1=N_2=N$, the steps above allow us to write the following formulas \begin{align}\label{vinter2ss} \begin{split} U^{(N)}_q(out)=&\sum_{k_1=1}^{N}\sum_{k_2=0}^{N-1}\binom{N-1}{k_1-1}\binom{N-1}{k_2}(k_1-1)!(k_2)!\theta(k_1,k_2)P^\dagger(2)\cdot\\ &\cdot\sum_{j_1=0}^{min(k_1,k_2)}\sum_{j_2=j_1-1}^{j_1}\binom{k_1-1}{f_1(j_1,j_2)}\binom{k_2-1}{f_2(j_1,j_2)}w_1^{k_1+k_2-1-j_1-j_2}w_2^{j_1+j_2}q \end{split} \end{align} \begin{align}\label{vintra2ss} \begin{split} U^{(N)}_q(in)=&\sum_{k_1=1}^{N-1}\sum_{k_2=0}^{N}\binom{N-2}{k_1-1}\binom{N}{k_2}(k_1-1)!(k_2)!\theta(k_1,k_2)P^\dagger(1)\cdot\\ &\cdot\sum_{j_1=0}^{min(k_1,k_2)}\sum_{j_2=j_1-1}^{j_1}\binom{k_1-1}{f_1(j_1,j_2)}\binom{k_2-1}{f_2(j_1,j_2)}w_1^{k_1+k_2-1-j_1-j_2}w_2^{j_1+j_2}q \end{split} \end{align} where \begin{equation} f_1(j_1,j_2):=j_1-\mathbf{1}_{\left\{j_1\not=j_2 \right\}},\:\:\:f_2(j_1,j_2):=j_2-\mathbf{1}_{\left\{j_1=j_2 \right\}}, \end{equation} $\theta(k_1,k_2)$ as in \cref{def:theta} and \begin{equation} P^\dagger=\frac{1}{q+\alpha-w_1}(I-Q)^{-1}\binom{q}{q}. \end{equation} By direct computation we see that \begin{equation} P^\dagger=\frac{q}{c}\binom{q+k_2(w_1-w_2)+2w_2N}{q+k_1(w_1-w_2)+2w_2N}. \end{equation} where \begin{equation} c:=(q+k_1w_1)(q+k_2w_1)+Nw_2(2q+(k_1+k_2)w_1)+w_2^2[N(k_1+k_2)-k_1k_2]. \end{equation} \subsection*{Local time interpretation} Now consider the part of the formula concerning the jumps among the two communities of the killed-LE-path starting at $x$, i.e. \begin{equation}\label{431} \sum_{j_1=0}^{min(k_1,k_2)}\sum_{j_2=j_1-1}^{j_1}\binom{k_1-1}{f_1(j_1,j_2)}\binom{k_2-1}{f_2(j_1,j_2)}w_1^{k_1+k_2-1-j_1-j_2}w_2^{j_1+j_2}. 
\end{equation} The latter can be thought of as a function of a Markov Chain $(\tilde X_n)_{n\in\ensuremath{\mathbb{N}}}$ on the state space $\left\{\underline{1},\underline{2}\right\}$, with transition matrix \begin{equation}\label{smallproc} \tilde P=\left(\begin{matrix}p&1-p\\1-p&p \end{matrix}\right),\qquad p=\frac{w_1}{w_1+w_2} \end{equation} where the $\underline{i}$-th state stands for the $i$-th community. Indeed, we can rewrite \cref{431} as \begin{equation*} (w_1+w_2)^{k_1+k_2-1}\sum_{j_1=0}^{min(k_1,k_2)}\sum_{j_2=j_1-1}^{j_1}\binom{k_1-1}{f_1(j_1,j_2)}\binom{k_2-1}{f_2(j_1,j_2)}\left(\frac{w_1}{w_1+w_2}\right)^{k_1+k_2-1-j_1-j_2}\left(\frac{w_2}{w_1+w_2}\right)^{j_1+j_2}= \end{equation*} \begin{equation}\label{local} =(w_1+w_2)^{k_1+k_2-1}\tilde \P_{\underline{1}}(\ell(k_1+k_2)=k_1) \end{equation} with $\ell$ being the local time as in the statement of \cref{2par2comssintpot}. \subsection*{Geometric smoothing} From the previous steps we get the following expression \begin{align} \begin{split} U^{(N)}_q(out)=&\sum_{k_1=1}^{N}\sum_{k_2=0}^{N-1}\left(N-1\right)_{k_1-1}\left(N-1\right)_{k_2}\frac{(q-\lambda_1(k_1,k_2))(q-\lambda_2(k_1,k_2))}{q(q+2Nw_2)(q+\alpha)^{k_1+k_2}}\cdot\\ &\cdot q(w_1+w_2)^{k_1+k_2-1}\tilde \P_{\underline{1}}(\ell(k_1+k_2)=k_1)P^\dagger(2). \end{split} \end{align} Next, we would like to bring out a geometric term, as in the complete and uniform case of \cref{proporso}. Notice that multiplying and dividing by $N^{k_1+k_2-1}$ one obtains \begin{align} \begin{split} U^{(N)}_q(out)=&\sum_{k_1=1}^{N}\sum_{k_2=0}^{N-1}N^{-(k_1+k_2-1)}\left(N-1\right)_{k_1-1}\left(N-1\right)_{k_2}\frac{(q-\lambda_1(k_1,k_2))(q-\lambda_2(k_1,k_2))}{q(q+2Nw_2)}\cdot\\ &\cdot \frac{q}{q+\alpha}\left(\frac{\alpha}{q+\alpha}\right)^{k_1+k_2-1}\tilde \P_{\underline{1}}(\ell(k_1+k_2)=k_1)P^\dagger(2).
\end{split} \end{align} We can then define \begin{equation}\label{xi} \frac{\xi_q}{N}:=\frac{q}{q+\alpha}=\frac{q}{q+N(w_1+w_2)} \end{equation} in order to obtain \begin{align}\label{438} \begin{split} U^{(N)}_q(out)=&\sum_{k_1=1}^{N}\sum_{k_2=0}^{N-1}N^{-(k_1+k_2-1)}\left(N-1\right)_{k_1-1}\left(N-1\right)_{k_2}\frac{(q-\lambda_1(k_1,k_2))(q-\lambda_2(k_1,k_2))}{q(q+2Nw_2)}\cdot\\ &\cdot \P(T_q=k_1+k_2)\tilde \P_{\underline{1}}(\ell(k_1+k_2)=k_1)P^\dagger(2) \end{split} \end{align} and \begin{align}\label{439} \begin{split} U^{(N)}_q(in)=&\sum_{k_1=1}^{N-1}\sum_{k_2=0}^{N}N^{-(k_1+k_2-1)}\left(N-2\right)_{k_1-1}\left(N\right)_{k_2}\frac{(q-\lambda_1(k_1,k_2))(q-\lambda_2(k_1,k_2))}{q(q+2Nw_2)}\cdot\\ &\cdot \P(T_q=k_1+k_2)\tilde \P_{\underline{1}}(\ell(k_1+k_2)=k_1)P^\dagger(1) \end{split} \end{align} where $T_q$ is an independent random variable with law $Geom\left(\frac{\xi_q}{N}\right)$. \subsection*{Conclusions} One can ideally divide the formulas in \cref{438,439} into five terms, namely \begin{enumerate} \item The entropic term \begin{equation} N^{-(k_1+k_2-1)}\left(N-2\right)_{k_1-1}\left(N\right)_{k_2}\qquad\text{ or }\qquad N^{-(k_1+k_2-1)}\left(N-1\right)_{k_1-1}\left(N-1\right)_{k_2} \end{equation} was already present in the complete and uniform case \cref{orsoformula}. Indeed \begin{equation} \prod_{h=2}^k\left(1-\frac{h}{N}\right)=N^{-(k-1)}(N-2)_{k-1}. \end{equation} \item The term related to the spectrum of the size-2 matrix presented in \cref{smalllumppro}, i.e. \begin{equation} \frac{(q-\lambda_1(k_1,k_2))(q-\lambda_2(k_1,k_2))}{q(q+2Nw_2)} \end{equation} which is the same in both the \emph{in}- and \emph{out}-community cases.
It can be rewritten as the ratio between two parabolas in $q$, i.e., \begin{equation} \frac{q^2+[(k_1+k_2)w_1+2Nw_2]q+(w_1+w_2)[(k_1+k_2)Nw_2+k_1k_2(w_1-w_2)]}{q^2+2Nw_2q} \end{equation} \item The term related to the geometric random variable of parameter $\frac{\xi_q}{N}$, which was present also in the case of the uniform graph, \cref{orsoformula}. \item The term related to the local times of the 2-states Markov chain $\tilde P$, in \cref{smallproc}. \item The term related to the absorption probability, i.e., to the quantity $P^\dagger$, see \cref{pmorte}, as a function of the process $\bar{P}$ presented in \cref{smallprocmorte}. \end{enumerate} It is worth noticing that the $P^\dagger$ above is slightly different from the $P^\dagger_\star$ in the statement of \cref{2par2comssintpot} which contains the extra factor $\eta_\star$. At this point by setting \begin{equation} g'_{out}(k_1,k_2):=N^{-(k_1+k_2-1)}\left(N-1 \right)_{k_1-1}\left(N-1 \right)_{k_2}\frac{(q-\lambda_1(k_1,k_2))(q-\lambda_2(k_1,k_2))}{q(q+2Nw_2)}P^{\dagger}(2), \end{equation} \begin{equation} g'_{in}(k_1,k_2):=N^{-(k_1+k_2-1)}\left(N-2 \right)_{k_1-1}\left(N \right)_{k_2}\frac{(q-\lambda_1(k_1,k_2))(q-\lambda_2(k_1,k_2))}{q(q+2Nw_2)}P^{\dagger}(1), \end{equation} we can write \begin{align}\label{446} \begin{split} U^{(N)}_q(out)=&\sum_{k_1=1}^{N}\sum_{k_2=0}^{N-1}g'_{out}(k_1,k_2)\P(T_q=k_1+k_2)\tilde \P_{\underline{1}}(\ell(k_1+k_2)=k_1)\\ =&\sum_{n=1}^{2N}\sum_{k_1+k_2=n}g'_{out}(k_1,k_2)\P(T_q=n)\tilde \P_{\underline{1}}(\ell(n)=k_1), \end{split} \end{align} and \begin{align}\label{447} \begin{split} U^{(N)}_q(in)=&\sum_{k_1=1}^{N-1}\sum_{k_2=0}^{N}g'_{in}(k_1,k_2)\P(T_q=k_1+k_2)\tilde{\P}_{\underline{1}}(\ell(k_1+k_2)=k_1)\\ =&\sum_{n=1}^{2N}\sum_{k_1+k_2=n}g'_{in}(k_1,k_2)\P(T_q=n)\tilde \P_{\underline{1}}(\ell(n)=k_1),\\ \end{split} \end{align} which is equivalent to the statement in \cref{2par2comssintpot}. 
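Finally, the local-time identity \cref{local} used along the way can be double-checked by brute force. The following sketch, with illustrative weights (the helper names are ours), compares the combinatorial sum \cref{431} with a direct dynamic-programming evaluation of $(w_1+w_2)^{n-1}\tilde \P_{\underline{1}}(\ell(n)=k_1)$:

```python
from math import comb

def jump_sum(k1, k2, w1, w2):
    """Combinatorial sum over the numbers of jumps j1 (community 1 -> 2)
    and j2 (community 2 -> 1) along the LE-path, as in the text."""
    def c(m, r):
        return comb(m, r) if 0 <= r <= m else 0
    total = 0.0
    for j1 in range(min(k1, k2) + 1):
        for j2 in (j1 - 1, j1):
            f1 = j1 - (1 if j1 != j2 else 0)
            f2 = j2 - (1 if j1 == j2 else 0)
            total += c(k1 - 1, f1) * c(k2 - 1, f2) \
                     * w1 ** (k1 + k2 - 1 - j1 - j2) * w2 ** (j1 + j2)
    return total

def local_time_prob(n, k1, p):
    """P_1(l(n) = k1): chain of n states started in community 1,
    staying put with probability p and switching with probability 1 - p."""
    dp = {(0, 1): 1.0}            # (current community, visits to community 1 so far)
    for _ in range(n - 1):
        new = {}
        for (s, c0), pr in dp.items():
            for t in (0, 1):
                w = p if t == s else 1 - p
                key = (t, c0 + (1 if t == 0 else 0))
                new[key] = new.get(key, 0.0) + pr * w
        dp = new
    return dp.get((0, k1), 0.0) + dp.get((1, k1), 0.0)

w1, w2 = 1.3, 0.6
for k1, k2 in [(1, 1), (3, 2), (2, 5)]:
    n = k1 + k2
    print(jump_sum(k1, k2, w1, w2),
          (w1 + w2) ** (n - 1) * local_time_prob(n, k1, w1 / (w1 + w2)))
```

The two columns coincide, confirming the identification of the jump sum with the local time of the two-state chain $\tilde P$.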
\qed \subsection{Proof of \cref{phasetrans}}\label{detect} \noindent{ \bf{ Proofs of \bf{(a)} and \bf{(b)}: $1-\beta<\alpha<(=)\frac{1}{2}$ (detectability) } } As expressed in the following lemma, in this regime the SRW is confined to its starting community for its entire lifetime. \begin{lemma}[RW is confined to its community up to dying]\label{lemmageom} Let $1>\alpha>1-\beta$ and, for $x\in [2N]$, consider the event $$E_x:=\{T_q>T_x^{out} \}$$ where $T_x^{out}$ is the first time in which the SRW moves out of the community in which $x$ lies. Then, as $N\to\infty$, $$\P_x(E_x)=o(1).$$ \end{lemma} \begin{proof} Let $Z$ be a r.v. that can assume values in the set $\{Out, In, \Delta\}$ with probabilities: $$\P(Z=Out)= \frac{N^{1-\beta}}{N^\alpha+N+ N^{1-\beta}}=:a_N,$$ $$\P(Z=In)= \frac{N}{N^\alpha+N+ N^{1-\beta}}=:b_N \quad \text{ and } \quad \P(Z=\Delta)=1- (a_N+b_N).$$ Let $(Z_n)_{n\in\ensuremath{\mathbb{N}}}$ be a sequence of i.i.d. r.v.s with the same law as $Z$ and notice that $$ \P(T_q<T_x^{out})=\P\left(\min\{n\ge0\:|\:Z_n=\Delta \} <\min\{n\ge0\:|\:Z_n=Out \}\right).$$ Therefore \begin{align*} \P_x(E_x)=\P_x(T_q>T_x^{out} )=&\sum_{n=1}^\infty\P_x(T_x^{out}=n,T_q>n)\\ =&\sum_{n=1}^\infty b_N^{n-1}a_N\\ =&\frac{a_N}{1-b_N}\sim N^{1-\beta-\alpha}, \end{align*} from which the claim follows. \end{proof} In view of the decomposition in \cref{LEdec} and the above lemma, we can write for any $x\neq y$ \begin{align}\notag U^{(N)}_q(x,y)=&\sum_{\gamma}\P^{LE}_x(\gamma)\[\P_y(T_\gamma>T_q|E_x^c)\P_y(E_x^c)+\P_y(T_\gamma>T_q|E_x)\P_y(E_x)\]\\ \notag =&o(1)+(1-o(1))\sum_{\gamma}\P_x^{LE}(\gamma)\P_y(T_\gamma>T_q|E_x^c)\\ \label{LEdetect}\sim&\sum_{\gamma}\P_x^{LE}(\gamma)\P_y(T_\gamma>T_q|E_x^c). \end{align} Let us first consider $U^{(N)}_q(out)$. In this case, by \cref{lemmageom}, for any $\alpha\leq 1/2$ and uniformly in $\gamma$, we have that \begin{align*} \P_y(T_\gamma<T_q|E_x^c)\le&\P_y(T_y^{out}<T_q|E_x^c)\\ =&\P_y(E_y)\\ =&o(1).
\end{align*} As a consequence $\P_y(T_\gamma>T_q|E_x^c)\geq 1-o(1)$, and by plugging this estimate in \cref{LEdetect}, we get $U^{(N)}_q(out)\to 1$. Concerning $U^{(N)}_q(in)$, one has to notice that, for every LERW $\gamma$ starting from $x$ and ending at the absorbing state, we can consider the event $$E_{\gamma,y}=\{T_y^{out}<\min(T_\gamma,T_q) \}.$$ Once more, uniformly in $\gamma$, we get by \cref{lemmageom} that \begin{align*} \P_y(E_{\gamma,y})\leq \P_y(E_y)=o(1). \end{align*} Thus, for $x,y \in [N]$, by \cref{LEdetect}, we can estimate \begin{align*} U^{(N)}_q(x,y)=&o(1)+(1-o(1))\sum_{\gamma}\P_x^{LE}(\gamma|E_x^c)\P_y(T_\gamma>T_q|E_x^c,E_{\gamma,y}^c) \end{align*} Notice that, under such conditioning, the sum can be read as the probability that two vertices in a complete graph with $N$ vertices end up in two different trees. Therefore, this reduces to \cref{orsolimite}, which in turn gives $U^{(N)}_q(in)\to 0$ for $\alpha<1/2$ and $U^{(N)}_q(in)\to \varepsilon_0(\alpha)$ otherwise. \qed \noindent{\bf{ Proof of {\bf(f)}: $\alpha>\frac{1}{2}$ (high killing region)}} We will only show that $U^{(N)}_q(in)\to 1$; this will suffice since, e.g., by direct computation one can check that $U^{(N)}_q(out)\geq U^{(N)}_q(in)$. Observe first that, since $\alpha>\frac{1}{2}$, the length of the Loop-Erased path $\Gamma$ must be ``small'' with high probability. In particular we can bound \begin{align*} \P^{LE_q}_x\(|\Gamma|>\sqrt{N} \)\le& \P(T_q>\sqrt{N})\\ =&\(1-\frac{N^\alpha}{N+N^{1-\beta}+N^\alpha} \)^{\sqrt{N}}\\ =&o(1), \end{align*} hence \begin{align*} U^{(N)}_q(in)=&o(1)+\sum_{\gamma:\:|\gamma|\le\sqrt{N}}\P_x^{LE_q}(\Gamma=\gamma)\P_y(T_\gamma>T_q)\\ \ge&\sum_{\gamma:\:|\gamma|\le\sqrt{N}}\P_x^{LE_q}(\Gamma=\gamma)\frac{N^\alpha}{\sqrt{N}+N^\alpha}\\ =&1-o(1). \end{align*} \qed We next prove the remaining items in \cref{phasetrans}, for which we will implement a similar strategy, which we now explain.
In all remaining regimes we need to show that $U^{(N)}_q(\star)$, $\star\in\{in,out\}$, either vanishes or stays bounded away from zero. To this aim, we will use the representation in \cref{g}. Depending on the parameter regimes, we will split the sum over $n$ into different pieces to be treated according to the asymptotic behavior of the involved factors. To simplify the exposition we will restrict in what follows to the positive quadrant $\alpha,\beta>0$. We stress however that, as the reader can check, the following estimates hold true and actually converge faster even outside of the positive quadrant. Let us start with a few observations. We notice that $\hat{f}(n,k)\leq 1$ for every choice of $k,N,n$; moreover $\hat{f}(n,k)=0$ if $n\ge N$. Furthermore, for each $N$, \begin{equation}\label{sum1}\sum_{n=1}^{\infty} \P(T_q=n)\sum_{k=1}^{n} \tilde \P_{\underline{1}}(\ell(n)=k)=\sum_{n=1}^{\infty} \P(T_q=n)=1,\end{equation} and while estimating the involved factors, the behavior of the product $\left(\hat{f} \theta P_{\star}^{\dagger}\right)(n,k)$ will be crucial; in general we can observe the following facts. \begin{enumerate}[(A)] \item\label{b} For any $\varepsilon >0$, if $n>N^{1/2+\varepsilon}$, then it follows from \cref{eolo} that $N\mapsto \hat{f}_N$ decays to zero, uniformly in $k$, faster than any polynomial as $N\to \infty$. For such $n$'s, since $N\mapsto \theta_N P^\dagger_{\star}$ is polynomially bounded (uniformly in $n,k$), the contribution in \cref{g} of such terms can be neglected. \item\label{c} Whenever we consider $n$'s for which $ \theta P^\dagger_{\star}=o(1)$, because of \cref{sum1} and the uniform control on $\hat{f}$, the contribution of such terms in \cref{g} can also be neglected.
\item\label{d} For $n$'s for which neither \cref{b} nor \cref{c} hold, we will estimate the asymptotics of such part of the sum by controlling the mass of the geometric time $T_q$ against $\theta P^\dagger_{\star}$, and in the most delicate cases (on the separation lines in \cref{fig:phdiag}), taking into account the behavior of the local time too. \end{enumerate} We are now ready to treat the remaining parameter regimes using such facts. \noindent{ \bf{ Proof of {\bf(d)}: $\alpha<\min\{\frac{1}{2},1-\beta\}$ (changing-communities before dying) } } In this regime, the overall picture resembles the phenomenology of the complete graph. In particular, the SRW will manage to change community before being killed and, up to the killing time scale, it will forget its starting community. Moreover, with high probability a single tree of size $2N(1-o(1))$ will be formed, so that, given any two points $x,y$, they will end up in the same tree with high probability independently of their communities. To prove the claim notice that, uniformly in $n,k$, \begin{equation}\label{Pblue} P^\dagger_{\star}(n,k)\sim \frac{N^{1-\beta+\alpha} + N^{\alpha} k_\star} {2N^{1-\beta+\alpha}+nN^{1-\beta}+k(n-k)}= \frac{N^{1-\beta+\alpha} } {2N^{1-\beta+\alpha}+nN^{1-\beta}+k(n-k)}+ O\left( \frac{1}{N^{1-\beta-\alpha} } \right). \end{equation} As a consequence the asymptotics of $U^{(N)}_q(\star)$ will be independent of $\star$. To show that such a limit is zero we argue as follows.
Within this parameter region: \begin{equation} \theta(n,k)\sim 1+ \frac{nN^{\alpha} + 2k(n-k)} {2N^{1-\beta+\alpha}}, \end{equation} which together with \cref{Pblue} leads to \begin{align}\label{TPblue} \nonumber\theta P^\dagger_{\star}(n,k)=& \frac{N^{1-\beta+\alpha}} {2N^{1-\beta+\alpha}+nN^{1-\beta}+k(n-k)} +\frac{k(n-k)} {2N^{1-\beta+\alpha}+nN^{1-\beta}+k(n-k)} + O\left( \frac{k(n-k)}{N^{2(1-\beta)} }\right) + O\left( \frac{nN^{\alpha}}{N^{2(1-\beta)}} \right)\\ =:&\theta P^\dagger_I(n,k) +\theta P^\dagger_{II}(n,k)+\theta P^\dagger_{III}(n,k)+\theta P^\dagger_{IV}(n,k).\end{align} We can now plug in this asymptotic representation of $\theta P^\dagger_{\star}$ in \cref{g}, and separately treat the four resulting terms. For the first term, namely the sum in \cref{g} with $\theta P^\dagger_I$ in place of $\theta P^\dagger_{\star}$, we split the sum in $n$ into two parts at $N^{\alpha+\varepsilon}$, for small $\varepsilon>0$, and show that they both go to zero by using \cref{d} and \cref{c}, respectively. In fact, with this ``cut'' we see that: \begin{align}\label{formulavai1} (I):= &\sum_{n=1}^{\infty} \P(T_q=n)\sum_{k=1}^{n} \tilde \P_{\underline{1}}(\ell(n)=k) \hat{f}(n,k) \theta P^\dagger_I(n,k)\\ =&\sum_{n<N^{\alpha+\varepsilon}} \P(T_q=n)\sum_{k=1}^{n} \tilde \P_{\underline{1}}(\ell(n)=k)\cdot 1\cdot \Theta(1)+\sum_{n\ge N^{\alpha+\varepsilon}}\P(T_q=n)\sum_{k=0}^{n} \tilde \P_{\underline{1}}(\ell(n)=k)\cdot 1 \cdot o(1)\\ =&\Theta\(\sum_{n<N^{\alpha+\varepsilon}} \P(T_q=n)\)+o(1)=o(1). \end{align} Analogously, for the second term we split the sum over $n$ into two parts at $N^{1/2+\varepsilon}$, with small $\varepsilon>0$.
Using \cref{d} for the first part and \cref{b} for the second one, we see that \begin{align}\label{formulavai2} (II):= &\sum_{n=1}^{\infty} \P(T_q=n)\sum_{k=1}^{n} \tilde \P_{\underline{1}}(\ell(n)=k) \hat{f}(n,k) \theta P^\dagger_{II}(n,k)\\ =&\sum_{n<N^{1/2+\varepsilon}} \P(T_q=n)\sum_{k=1}^{n} \tilde \P_{\underline{1}}(\ell(n)=k)\cdot 1\cdot O(1)+o(1)\\ =&O\(\sum_{n<N^{1/2+\varepsilon}} \P(T_q=n) \)+o(1)\\ =&o(1). \end{align} For the third term we need to split the corresponding sum into three parts at $T_1:=N^{1-\beta-\varepsilon}$ and $T_2:=N^{1/2+\varepsilon}$, which will be controlled by \cref{c}, \cref{d} and \cref{b}, respectively. That is, \begin{align}\label{formulavai3} (III):= &\sum_{n=1}^{\infty} \P(T_q=n)\sum_{k=1}^{n} \tilde \P_{\underline{1}}(\ell(n)=k) \hat{f}(n,k) \theta P^\dagger_{III}(n,k)\\ \le&\sum_{n<T_1} \P(T_q=n)\sum_{k=1}^{n} \tilde \P_{\underline{1}}(\ell(n)=k)\cdot 1 \cdot o(1)+\sum_{n= T_1}^{T_2}\P(T_q=n)\sum_{k=1}^{n} \tilde \P_{\underline{1}}(\ell(n)=k)\cdot 1 \cdot O(N^{-1+2\beta+2\varepsilon})+o(1)\nonumber\\ =&o(1)+O\(N^{\alpha-\beta-\varepsilon}\cdot 1\cdot 1\cdot N^{-1+2\beta+2\varepsilon}\)+o(1)\\ =&o(1). \end{align} Finally, for the last term, we split the sum at $N^{1/2+\varepsilon}$. Indeed we see that: on the one hand, for $n\le N^{1/2+\varepsilon}$, we can use \cref{d} since $$\theta P^\dagger_{IV}(n,k)=O\(N^{\frac{1}{2}+\varepsilon+\alpha-2(1-\beta)} \)\qquad\text{ and }\qquad\P\(T_q\le N^{\frac{1}{2}+\varepsilon}\)=O\(N^{-\frac{1}{2}+\alpha+\varepsilon} \).$$ On the other hand, for $n\geq N^{1/2+\varepsilon}$, we can argue as in \cref{b}.
Hence, \begin{align}\label{formulavai4} (IV):= &\sum_{n=1}^{\infty} \P(T_q=n)\sum_{k=1}^{n} \tilde \P_{\underline{1}}(\ell(n)=k) \hat{f}(n,k) \theta P^\dagger_{IV}(n,k)\\ \le&\sum_{n=1}^{N^{1/2+\varepsilon}} \P(T_q=n)\sum_{k=1}^{n} \tilde \P_{\underline{1}}(\ell(n)=k)\cdot 1\cdot O\(N^{\frac{1}{2}+\varepsilon+\alpha-2(1-\beta)} \)+o(1)\\ =&O\(N^{-\frac{1}{2}+\alpha+\varepsilon} \cdot 1\cdot 1\cdot N^{\frac{1}{2}+\varepsilon+\alpha-2(1-\beta)} \)+o(1)=o(1). \end{align} \qed \noindent{\bf{Proofs of {\bf(c)} and {\bf(e)} (high-entropy separating lines)}} We start by proving {\bf(e)}, i.e. \begin{equation} \text{if } \alpha=\frac{1}{2}<1-\beta\Longrightarrow \exists \varepsilon>0\text{ s.t. }\lim_{N\to\infty}U^{(N)}_q(in)=\lim_{N\to\infty}U^{(N)}_q(out)=\varepsilon. \end{equation} Start by noting that, under our assumptions on $\alpha$ and $\beta$, we have that \begin{equation}\label{thetagiallo} \theta(n,k)\sim\frac{n\sqrt{N}+2N^{\frac{3}{2}-\beta}+2k(n-k)}{2N^{\frac{3}{2}-\beta}}, \end{equation} and \begin{equation}\label{pmortegiallo} P_{\star}^\dagger(n,k)\sim\frac{k_\star\sqrt{N}+N^{\frac{3}{2}-\beta}}{2N^{\frac{3}{2}-\beta}+nN^{1-\beta}+k(n-k)}. \end{equation} We are going to split the sum over $n$ in \cref{g} into three parts: \begin{itemize} \item $n\le N^{\frac{1}{2}-\varepsilon}$. For such $n$'s we have that the product $\theta P_\star^\dagger(n,k)$ is of order $1$. Hence we can neglect this part by using \cref{d} together with the estimate $$\P(T_q\le N^{\frac{1}{2}-\varepsilon})=O\(N^{-\frac{1}{2}+\alpha-\varepsilon} \).$$ \item $ n> N^{\frac{1}{2}+\varepsilon}$. Also this part can be neglected thanks to the argument of \cref{b}. \item $N^{\frac{1}{2}-\varepsilon}<n\le N^{\frac{1}{2}+\varepsilon}$. This is the delicate non-vanishing part. We start by noticing that, due to \cref{thetagiallo} and \cref{pmortegiallo}, the leading term in $\theta P_\star^\dagger$ does not involve $k_\star$, so that ---at first order--- $U^{(N)}_q(in)$ must equal $U^{(N)}_q(out)$.
In order to show that the latter are asymptotically bounded away from zero, we fix $c\in(0,1)$ and consider \begin{align} U^{(N)}_q(\star)\ge&\sum_{n=c\sqrt{N}}^{\sqrt{N}/c}\P(T_q=n)\sum_{k=1}^{n}\tilde \P_{\underline{1}}(\ell(n)=k)\theta(n,k)P^\dagger_\star(n,k)\hat{f}(n,k)\\ \hat f=\Theta(1)\Rightarrow=&\Omega\( \sum_{n=c\sqrt{N}}^{\sqrt{N}/c}\P(T_q=n)\sum_{k=1}^{n}\tilde \P_{\underline{1}}(\ell(n)=k)\theta(n,k) P^\dagger_\star(n,k)\)\\ \theta P^\dagger_\star(n,k)\in\[\frac{1}{2+c^{-1}},\frac{1}{2+c}\]\Rightarrow=&\Omega\(\sum_{n=c\sqrt{N}}^{\sqrt{N}/c}\P(T_q=n) \)=\Omega(1).\label{lessthan1} \end{align} Moreover, thanks to \cref{lessthan1} we can easily deduce that the limit is strictly smaller than $\frac{1}{2}$. \end{itemize} We next conclude by giving the proof of {\bf(c)}, i.e., we are going to show that \begin{equation} \text{if } \alpha=1-\beta<\frac{1}{2}\Longrightarrow \exists \varepsilon>0\:\text{ s.t. }\lim_{N\to\infty}U^{(N)}_q(in)=0\:\text{ while }\lim_{N\to\infty}U^{(N)}_q(out)=\varepsilon. \end{equation} Observe that, under our assumptions on $\alpha$ and $\beta$, we have that \begin{equation} \theta(n,k)\sim\frac{3N^{2\alpha}+nN^\alpha+2k(n-k)}{3N^{2\alpha}}, \end{equation} and \begin{equation} P_{\star}^\dagger(n,k)\sim\frac{N^{2\alpha}+k_\star N^\alpha}{3N^{2\alpha}+2nN^{\alpha}+k(n-k)}, \end{equation} hence, their product behaves asymptotically as \begin{equation}\label{formulavai5} \theta P_{\star}^\dagger(n,k)=\Theta\(1+\frac{k_\star}{N^\alpha}\). \end{equation} To evaluate the asymptotic behavior of $U^{(N)}_q(\star)$, we split the sum over $n$ in \cref{g} into three pieces: \begin{itemize} \item $n\le N^{\alpha+\varepsilon}$: where, thanks to \cref{formulavai5}, we know that $\theta P_{\star}^\dagger(n,k)=O(N^\varepsilon)$.
We argue as in \cref{d}, obtaining \begin{align} \sum_{n\le N^{\alpha+\varepsilon}}\P(T_q=n)\sum_{k=1}^{n}\tilde \P_{\underline{1}}(\ell(n)=k)\theta(n,k)P^\dagger_\star(n,k)\hat{f}(n,k)\le&O\(N^\varepsilon\sum_{n\le N^{\alpha+\varepsilon}}\P(T_q=n)\)\\ =&O\(N^{-1+2\alpha+2\varepsilon} \). \end{align} \item $n> N^{\frac{1}{2}+\varepsilon}$: in this case we can argue as in \cref{b}. \item $N^{\alpha+\varepsilon}<n\le N^{\frac{1}{2}+\varepsilon}$: in this case we have to distinguish between $U^{(N)}_q(in)$ and $U^{(N)}_q(out)$. \end{itemize} Consider first $U^{(N)}_q(in)$. We call $E_n$ the following event concerning the Markov chain $(\tilde X_n)_{n\in\ensuremath{\mathbb{N}}}$: \begin{equation} E_n:=\left\{\text{At least one jump occurs before time $n$}\right\}. \end{equation} Notice that if $N^{\alpha+\varepsilon}<n\le N^{\frac{1}{2}+\varepsilon}$ then the event $E_n^c$ occurs with high probability. Hence, for any choice of $n$ in this range and $k\in[1,n]$ we can write \begin{align} \tilde\P_{\underline{1}}\(\ell(n)=k \)=&\tilde\P_{\underline{1}}(\ell(n)=k|E_n^c)\tilde\P_{\underline{1}}(E_n^c)+\tilde\P_{\underline{1}}(\ell(n)=k|E_n)\tilde\P_{\underline{1}}(E_n) =\delta_{k,n}+o(1), \end{align} $\delta_{k,n}$ being the Kronecker delta. Hence \begin{align} \sum_{n= N^{\alpha+\varepsilon}}^{N^{1/2+\varepsilon}}\P(T_q=n)\sum_{k=1}^{n}\tilde \P_{\underline{1}}(\ell(n)=k)\theta P^\dagger_{in}(n,k)\hat{f}(n,k)=&\Theta\( \sum_{n= N^{\alpha+\varepsilon}}^{N^{1/2+\varepsilon}}\P(T_q=n)\sum_{k=1}^{n}\delta_{k,n}\(\frac{n-k}{N^\alpha}+1\)\)\\ =&\Theta\( \sum_{n= N^{\alpha+\varepsilon}}^{N^{1/2+\varepsilon}}\P(T_q=n)\)=o(1). \end{align} Concerning $U^{(N)}_q(out)$, it is easy to get a lower bound via a soft argument by considering the events \begin{equation} B_x=\left\{\text{The LERW starting at $x$ never changes community} \right\} \end{equation} \begin{equation} B'_y=\left\{\text{The RW starting at $y$ does not change community before dying} \right\}.
\end{equation} Indeed, \begin{align*} U^{(N)}_q(out)\ge&\P\(B_x\)\P\(B'_y\)=\(\frac{N^\alpha}{N^\alpha+N^{1-\beta}}\)^2=\frac{1}{4}. \end{align*} Finally, it remains to show that $U^{(N)}_q(out)$ is asymptotically bounded away from $1$. We consider the further split \begin{align*} U^{(N)}_q(out)\le&o(1)+\sum_{n=N^{\alpha+\varepsilon}}^{\sqrt{N}}\P(T_q=n)\sum_{k=1}^{n}\tilde{\P}_{\underline{1}}(\ell(n)=k)(\hat f\theta P^\dagger_{out})(n,k)+\sum_{n=\sqrt{N}}^{N^{\frac{1}{2}+\varepsilon}}\P(T_q=n)\sum_{k=1}^{n}\tilde{\P}_{\underline{1}}(\ell(n)=k)(\hat f\theta P^\dagger_{out})(n,k). \end{align*} Focusing on the first sum in the latter display, thanks to \cref{formulavai5}, we have that \begin{align*} \sum_{n=N^{\alpha+\varepsilon}}^{\sqrt{N}}\P(T_q=n)\sum_{k=1}^{n}\tilde{\P}_{\underline{1}}(\ell(n)=k)(\hat f\theta P^\dagger_{out})(n,k)\le& \sum_{n= N^{\alpha+\varepsilon}}^{N^{1/2}}\P(T_q=n)\frac{n}{N^\alpha}+\sum_{n= N^{\alpha+\varepsilon}}^{N^{1/2}}\P(T_q=n)\\ =&\frac{1}{N}\sum_{n= N^{\alpha+\varepsilon}}^{N^{1/2}}n\(1-\frac{1}{N^{1-\alpha}} \)^n+o(1)\\ \le&\frac{1}{N}\(\frac{\sqrt{N}(\sqrt{N}+1)}{2} \)\sim\frac{1}{2}. \end{align*} Concerning the second sum, we have \begin{align*} \sum_{n=\sqrt{N}}^{N^{\frac{1}{2}+\varepsilon}}\P(T_q=n)\sum_{k=1}^{n}\tilde{\P}_{\underline{1}}(\ell(n)=k)(\hat f\theta P^\dagger_{out})(n,k)=&O\( \sum_{n=\sqrt{N}}^{N^{\frac{1}{2}+\varepsilon}}\P(T_q=n)\hat f(n,n)\frac{n}{N^\alpha} \)\\ =&O\(\frac{1}{N}\sum_{n=\sqrt{N}}^{N^{\frac{1}{2}+\varepsilon}}ne^{-\frac{n^2}{2N}}\)\\ =&O\(\frac{1}{\sqrt{N}}\sum_{m=1}^{N^{\varepsilon}}me^{-\frac{m^2}{2}}\)\\ =&O\(\frac{N^\varepsilon}{\sqrt{N}}\sum_{m=1}^{\infty}e^{-\frac{m^2}{2}} \)=o(1). \end{align*} \qed\\ \subsection{Proof of \cref{macro}} Let $0=\lambda_0\le \lambda_1\le\dots\le\lambda_{2N-1}$ be the eigenvalues of $\cL$. As shown in \cite[Prop.
2.1]{AG}, the number of blocks of the induced partition, $|\Pi_q| $, is distributed as the sum of $2N$ independent Bernoulli random variables with success probabilities $\frac{q}{q+\lambda_i}$. That is, $$|\Pi_q|\overset{d}{\sim} \sum_{i=0}^{2N-1}X_i^{(q)},\qquad\text{ with}\qquad X_i^{(q)}\overset{d}{\sim} Ber\(\frac{q}{q+\lambda_i} \),\quad i\in\left\{0,\dots,2N-1 \right\}.$$ In the case of the mean-field two-communities model we have $$\lambda_0=0,\qquad\lambda_1=2N^{1-\beta},\qquad\lambda_i=N(1+N^{-\beta}),\quad i\in\left\{2,\dots,2N-1 \right\}.$$ Therefore $$|\Pi_q|\overset{d}{\sim} 1+X+\sum_{i=1}^{2(N-1)}Y_i$$ where $$X\overset{d}{\sim} Ber\(\frac{N^\alpha}{2N^{1-\beta}+N^\alpha} \)\qquad\text{and}\qquad Y_i\overset{d}{\sim} Ber\(\frac{N^\alpha}{N(1+N^{-\beta})+N^\alpha} \),\quad i\in\{1,\dots,2(N-1)\}.$$ Hence $$\ensuremath{\mathbb{E}}\left[|\Pi_q|\right]\sim 1+ \frac{N^\alpha}{N^{1-\beta}+N^\alpha}+\frac{2N^{\alpha+1}}{N^\alpha+N}=\Theta(N^{\alpha\wedge 1}).$$ Moreover, we can prove the concentration result claimed in the first part of the statement by using the multiplicative version of the Chernoff bound on the sum of the $Y_i$'s. Indeed, denoting $$S:=\sum_{i=1}^{2(N-1)}Y_i,$$ we have that $$\P\(\left|S-\ensuremath{\mathbb{E}} [S] \right|\ge \varepsilon\ensuremath{\mathbb{E}} S \)\le 2\exp\(-\frac{\varepsilon^2\ensuremath{\mathbb{E}} S}{3} \),$$ and since $$\ensuremath{\mathbb{E}} [S]\sim\frac{2N^{\alpha+1}}{N^\alpha+N}=\omega(1)$$ we can deduce the concentration of $|\Pi_q|$.\\ Notice also that the second part of the statement is a trivial consequence of the detectability result of \cref{phasetrans}. \qed \section*{Acknowledgments} {Part of this work started during the preparation of the master thesis~\cite{Q16} and the authors are thankful to Diego Garlaschelli for acting as co-supervisor of this thesis project.
A special acknowledgment is devoted to Alexandre Gaudilli\`ere for suggesting the ``bear-strategy'' in the proof of \cref{proporso} and for the many inspiring discussions on the subject. L. Avena was supported by NWO Gravitation Grant 024.002.003-NETWORKS. M. Quattropani was partially supported by the INdAM-GNAMPA Project 2019 ``Markov chains and games on networks''.}
\section{Introduction} The ever-increasing flood of information knowledge workers face in their daily lives is even intensified by technological trends like digital transformation. Thus, their usual multi-tasking craziness \cite{GonzalezMark04}, constantly switching from one context to another, each being associated with different tasks, documents, mails, etc., gets even worse. As a typical consequence, their personal information space, such as file/mail/bookmark folders, is cluttered with information that has become irrelevant. Thus, finding important information gets harder and much of the previously gained knowledge is practically lost. To address these problems, we have been investigating solutions inspired by human forgetting since 2013, starting with the EU project \textit{ForgetIT}\footnote{2013--2016, \url{www.forgetit-project.eu}} and continuing in the \textit{Managed Forgetting} project\footnote{2016--2019, \url{www.spp1921.de/projekte/p4.html.de}}, which is part of the recent priority program on ``Intentional Forgetting in Organizations'' by the German Research Foundation (DFG). Together with other teams of this program, we have already presented an overview of perspectives and challenges of intentional forgetting in artificial intelligence systems in general \cite{TimmStaabSiebers+2018}. In this paper, we complement that survey with a particular focus on knowledge work and information management support. First, we give an overview of solutions we already found in the two aforementioned projects. We especially share experience gained with the prototype of a first forgetful information system (FIS) that we have been using 24/7 in our daily work for the last three years (Section 2). Additionally, we point out which challenges still need to be tackled, give insights on how we intend to address them, or present first solutions or prototypes that are still under development (Section 3). Section 4 concludes this paper and gives an outlook on planned next steps.
\section{Towards Forgetful Information Systems in Practice} In the \textit{ForgetIT} and \textit{Managed Forgetting} projects, we investigated knowledge work and information management support measures inspired by human forgetting. Especially in the second project, our investigation is positioned in the context of a grass-roots Organizational Memory (OM), which relies on the principles of decentralization and self-organization: Effective, dynamic and tailored knowledge management is achieved by knowledge-based assistance and knowledge acquisition in the daily activities of knowledge workers, which in turn also shapes the captured and represented knowledge. Following the \textit{eat-your-own-dogfood} credo, we extended our OM system, which we have been using in daily work for over seven years now, with forgetting mechanisms, which have been in use for the last three years. Before presenting its details, also serving as solutions to the aforementioned problems, we will first give an introduction to the terminological and technical background. \subsection{Managed Forgetting} As an extension of the binary keep-or-delete paradigm, we understand \textit{Managed Forgetting} (MF) \cite{KanhabuaNS13,NiedereeKanhabuaGallo+15,NiedereeMezarisMaus+2018} as an escalating set of measures: from temporal hiding, to condensation, to adaptive synchronization, archiving and deletion. It is a form of intentional forgetting that is completely based on observed evidences: the system learns what to forget and what to focus on in a self-organizing and decentralized way. As a key concept for realizing this form of MF, we have presented \textit{Memory Buoyancy} (MB) \cite{NiedereeKanhabuaGallo+15,NiedereeKanhabuaTran+2018}, which is intended to represent an information item's current value for the user. It follows the metaphor that items which start to lose relevance for the user ``sink away'', while those that are important are pushed closer to the user by their higher buoyancy.
\textit{Information Value Assessment} (IVA) for deciding about the current importance of an information item~is at the core of dynamically determining its MB value. IVA~in the context of MF has been investigated for \mbox{individual} types of resources such as photos \cite{CeroniSolachidisNiederee+2015} as well as in broader terms for the resources on a user's desktop \cite{TranSchwarzNiederee+2016,MausJilekSchwarz2018}. As stated before, this form of MF requires capturing and interpreting evidences in order to work. We chose the \textit{Semantic Desktop}, which will be discussed in the following, to serve this purpose. \subsection{The Semantic Desktop as an ecosystem for Managed Forgetting} \label{sec:SemDesk} \paragraph{The Semantic Desktop \& PIMO.} The \textit{Semantic Desktop} (SemDesk) \cite{sauermann2005semdesk} is especially intended to capture knowledge that emerges from individuals and then spreads into groups like project teams. SemDesk brings \textit{Semantic Web}\footnote{\url{www.w3.org/standards/semanticweb}} technology to users' computing devices using a knowledge representation, i.e. giving resources unique identifiers (URIs) and allowing statements to be made about them, e.g. using RDF\footnote{\url{www.w3.org/RDF}}, resulting in a semantic graph. Information items (files, mails, contacts, events, topics, \ldots) that are separated on the computer (file system, mail client, web browser, \ldots) but are related to each other in a person's mind can thus be semantically represented and interlinked in a machine-understandable way. As soon as such an item is semantically represented, it is called a ``thing'', which describes the item uniquely as a URI complemented by further statements like its type or a reference to the originating resource such as a URL or the message-id of an e-mail.
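The ``thing'' representation just described can be sketched as a tiny in-memory triple store. All URIs, property names, and file paths below are made-up illustrations, not the actual PIMO vocabulary:

```python
# Minimal sketch of representing a desktop resource as a semantic "thing".
# All URIs, property names and paths are illustrative, not the PIMO vocabulary.

class TripleStore:
    """A tiny in-memory RDF-like store: a set of (subject, predicate, object) triples."""
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def objects(self, s, p):
        """All objects o such that the triple (s, p, o) is in the store."""
        return {o for (s2, p2, o) in self.triples if s2 == s and p2 == p}

store = TripleStore()
thing = "pimo:thing/project-report"          # unique identifier (URI) of the thing
store.add(thing, "rdf:type", "pimo:Document")
# grounding reference back to the originating resource on the desktop
store.add(thing, "pimo:groundingOccurrence", "file:///home/user/reports/q3.pdf")
# interlinking with other things in the user's mental model
store.add(thing, "pimo:isRelatedTo", "pimo:thing/project-alpha")

assert "pimo:Document" in store.objects(thing, "rdf:type")
```

A real SemDesk would of course use a proper RDF store and the PIMO ontology; the sketch only illustrates how a thing is grounded in its originating resource and interlinked with other things.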
Capturing a user's mental model as accurately as possible is done in a \textit{Personal Information Model} (PIMO) \cite{SauermannVanElstDengel2007}, which serves as the basis for knowledge representation in SemDesk. Shared parts of multiple PIMOs result in a \textit{Group Information Model} (GIMO) forming the basis for an OM. \paragraph{From Evidence Collection to User Support Measures.} Concerning SemDesk applications, two categories could be observed so~far: newly created semantic applications and plug-ins enhancing traditional, non-semantic ones~\cite{DraganDecker2012}. We recently presented our idea of \textit{Plug-Outs} \cite{JilekSchroederSchwarz+2018}, \textit{headless plug-ins} often just having the rudimentary functionality of \textit{sending out} in-app events to the SemDesk. Complementing these plug-outs with the transparent integration of SemDesk using standard protocols and a sidebar for advanced features \cite{JilekSchroederSchwarz+2018}, we get an environment capable of capturing rich contextual evidences in two ways: implicitly (plug-out and protocol information) as well as explicitly (sidebar usage). These evidences are then processed further, especially in terms of information extraction \cite{JilekSchroederNovik+2018}, to elicit the user's current activity and context. This results in the respective stimulation of the user's PIMO and appropriate MF measures. Currently, we observe the file system, web browsers and email clients. Further tools for process and application observation, especially also using accessibility interfaces \cite{HertlingSchroederJilek+2017}, are under development. A comparative, yet incomplete literature overview of user activity tracking endeavors can be found in \cite{Schmidt2013PhD}. In summary, the whole cycle from evidence collection to user support measures of a \textit{Forgetful} Semantic Desktop is depicted in Figure~\ref{fig:ForgetfulSemDesk}.
\begin{figure} \includegraphics[width=1\columnwidth]{ForgetfulSemDesk_v2} \caption{Evidence collection to user support measures cycle of a Forgetful Semantic Desktop} \label{fig:ForgetfulSemDesk} \end{figure} \subsection{First prototype running 24/7 in practice} \label{sec:Prototype} In \textit{ForgetIT}, we extended our SemDesk prototype \cite{MausSchwarzDengel2013}, which we were already using 24/7 in daily work, with MF features. These new forgetting capabilities were thus directly embedded in daily activities, too, enabling us to continuously test and optimize them in real-world scenarios as well as understand the implications and challenges of a forgetful information system. To illustrate the challenge for SemDesk usage, let us look at one user there. As of July 2018, the semantic graph of his PIMO consists of more than 18,000 things, i.e., topics, tasks, events, persons, organizations, documents, web pages, images, notes, \ldots, of which 2,000 are private ones and the rest either shared by him or other users forming the group's GIMO. There are not only noteworthy or timeless things such as scientific papers or project proposals among them, but also various ephemeral things. For instance, more than 1,000 tasks from over 7 years of daily usage leave their electronic footprints (including connected resources, topics, or notes) in the PIMO. Now, these once relevant tasks bear the potential to serve as a task journal -- if not even as a source for know-how reuse -- but also to congest the semantic graph and search results. Therefore, it is evident that the user would face an information overflow if each and every thing were treated with the same importance as currently required information or important information from the past; the PIMO's usefulness would thus be endangered.
But requiring the deletion of presumably outdated information counteracts the philosophy of our evolving knowledge management system, given its potential for future situations of the individual and the group. \paragraph{Memory Buoyancy Calculation.} The MB calculation in our SemDesk evolved from the insights of the approach in \cite{TranSchwarzNiederee+2016} and finally follows the design principles presented in \cite{MausJilekSchwarz2018}. The basic principles are inspired by human brain activity, applied to the user's mental model as represented in a semantic graph, and by discussions with the team of Prof. Logie (Psychology, University of Edinburgh), who presented their insights in \cite{LogieWoltersNiven2018} (an interdisciplinary approach is also taken in the \textit{Managed Forgetting} project and is the subject of Section~\ref{sec:potential}). Therefore, the MB value drops over time for things that are not stimulated (first a steep decline, then a long tail of slow decline), whereas the value increases for things and their associations that are stimulated. To reflect learning effects, the MB value decreases more slowly for things that are repeatedly stimulated over time. The stimulations are based on evidences derived from user actions (such as view, create, or modify) involving resources represented as things in the semantic graph. The intensity of stimulating a thing and the outreach along the sub-graph it spans is determined by applying a dedicated spreading activation algorithm \cite{Crestani1997} using parameters such as the types of the things, connecting predicates as well as numbers of connections, and several heuristics. This leads to effects such as things like topics, which are not directly accessed but are connected to resources currently in use, being raised in their MB value, thus forming hotspots in the semantic graph resembling the user's current mindset (of those items represented in the~PIMO).
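The interplay of decay, stimulation, and spreading described above can be illustrated with a toy model. The decay rate, boost, spreading factor, and the one-hop-only spreading are simplified assumptions for illustration, not the parameters or the actual algorithm of the system:

```python
# Toy memory buoyancy model: exponential decay per time step, direct
# stimulation on access, and a weaker one-hop spreading of the stimulus
# along edges of the semantic graph. All constants are illustrative
# assumptions, not the real system's parameters.

DECAY = 0.95        # per-step retention factor for unstimulated things
BOOST = 1.0         # stimulation added to a directly accessed thing
SPREAD = 0.3        # fraction of the boost passed on to neighboring things

def step(mb, edges, accessed):
    """Advance one time step: decay all MB values, then stimulate accessed
    things and, with reduced intensity, their neighbors in the graph."""
    new_mb = {t: v * DECAY for t, v in mb.items()}
    for t in accessed:
        new_mb[t] = min(1.0, new_mb[t] + BOOST)
        for n in edges.get(t, []):
            new_mb[n] = min(1.0, new_mb[n] + SPREAD * BOOST)
    return new_mb

# A document is linked to a topic; only the document is ever accessed.
edges = {"doc": ["topic"], "topic": ["doc"], "old_task": []}
mb = {"doc": 0.5, "topic": 0.5, "old_task": 0.5}
for _ in range(20):
    mb = step(mb, edges, accessed=["doc"])

# The topic is raised although never directly accessed; the old task fades.
assert mb["topic"] > mb["old_task"]
```

Even this crude model reproduces the hotspot effect: the topic linked to the accessed document keeps a high MB value, while the unconnected finished task gradually sinks away.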
Apart from tuning the spreading algorithm, most influential on the IVA are the various heuristics applied. These cover assumptions as well as intended beneficial effects for the knowledge work scenario. For instance, applying specific decay curves for dedicated types, e.g. enforcing e-mails to decay faster than presentations, is based on the observation that \mbox{e-mails} are inherently more ephemeral whereas presentations might imply more longevity. Further, upcoming events and connected things are stimulated when approaching their date and, vice versa, decay faster after the event (if not stimulated again). Finished tasks do not get external stimulations any more. This results in MB values for all things in a PIMO, which can then be used for the various forgetting strategies outlined in the following. \paragraph{Temporal Hiding.} A first escalation step in our MF strategy is hiding things which are below a certain threshold in different places where users might be confronted with an overwhelming amount of information items from the semantic graph. Therefore, while browsing the PIMO, in which a thing is usually represented on a single (HTML) page with details (such as start and end date of an event) and all its relations to other things, connected things below a specific MB threshold are hidden from the direct view of the user. This threshold is lower on desktop devices and higher on mobile devices to reduce cognitive load while being mobile, i.e., showing less presumably irrelevant information. A ``show forgotten'' button allows users to show the hidden things as well as to manipulate the viewing threshold. A further experimental feature is hiding things with a low MB value in search result lists, which simply removes those things to be forgotten from the result list. The search page has a threshold slider to lower or raise the currently set threshold for the specific search, allowing users to blend in those forgotten things at the position of their original ranking.
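Threshold-based temporal hiding as described above can be sketched in a few lines; the concrete threshold values and device classes are invented for illustration:

```python
# Sketch of temporal hiding: things below an MB threshold are hidden,
# not deleted, and can be blended back in by lowering the threshold.
# The threshold values and device classes are illustrative assumptions.

THRESHOLDS = {"desktop": 0.2, "mobile": 0.5}   # stricter hiding on mobile

def visible_things(things, device="desktop", override=None):
    """Partition (name, mb) pairs into (shown, hidden) by MB threshold."""
    threshold = override if override is not None else THRESHOLDS[device]
    shown = [t for t in things if t[1] >= threshold]
    hidden = [t for t in things if t[1] < threshold]
    return shown, hidden

things = [("current task", 0.9), ("last week's mail", 0.3), ("2014 workshop", 0.05)]

on_mobile, _ = visible_things(things, device="mobile")
assert [name for name, _ in on_mobile] == ["current task"]

# "show forgotten": setting the threshold to zero blends everything back in
shown, hidden = visible_things(things, override=0.0)
assert hidden == []
```

The key design point, mirrored in the sketch, is that hiding is a pure view operation: the underlying things and their MB values stay untouched, so lowering the threshold restores them at their original positions.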
Considering the snippets used for explaining the content of result list entries (of a search or a proactive information delivery), the MB value is also used to choose the set of things to be shown in the snippet. Here, instead of deciding what to hide, the subset of high-buoyant things out of all annotations of a thing is selected and shown in the result entry as snippets. The assumption is that showing all annotations of a document would overwhelm the user, whereas selecting the high-buoyant ones gives a clue to quickly grasp the relevancy for the user's mindset. \paragraph{Adaptive Synchronization.} The next step in our escalating MF strategy is to move files to be forgotten from the desktop to the cloud and finally to an archive. This still keeps the semantic representation available; only the place of the actual file changes. If access is required, it can be automatically drawn in again. This is an experimental feature embedded in the PIMO cloud synchronization service running on a user's desktop. The current implementation proposes to the user a list of files (which are things) that can be forgotten on the computer; selected files are created as cloud files (if not already there) and then removed locally. Moreover, an extension enables an unsupervised adaptive synchronization of files to a local storage if their MB is above or below a certain threshold (which again can be different depending on the device type). This is useful for files which, e.g., originate from other users and are not yet available on the device. The PIMO's user interface then allows the user to open the local version directly instead of downloading it first. This leads to a set of documents on the device which are relevant for the user's current mindset (and are not elsewhere on the device anyway). \paragraph{Condensation.} Further action can be undertaken if a whole region of the semantic graph has a low MB, such as a long-finished task or project.
Then it is possible to condense this region (consisting of things and connected resources) and just leave a representation of this region for the user. Our PIMO Diary \cite{JilekMausSchwarz+2015,JilekSchwarzMaus+2016} uses condensation to generate, on demand, condensed representations of the user's electronic footprints within a specific period of time. The condensed representation is shown to the user including associations to the contained things. This bears the potential of forgetting the originating things and resources (e.g., by moving them to an archive) while keeping references in the condensation. This explicit removal is not deployed on our PIMO; however, once computed, condensations are kept. \paragraph{Lessons Learned.} The MB calculation has been in use for~over 3 years. Our experience so far is that things really do gradually fade out as their relevancy decreases. And vice versa, related things rise in their MB if they are connected to user activities, although they are not directly accessed, thus forming a user's recent mindset. However, there are also drawbacks: the naive approach to hiding in search has been dropped. An often observed behavior was that users moved the slider to change the threshold to zero (i.e., show everything) if the results were not satisfying, instead of modifying the query first; whereas with lots of results, the slider was ignored. This implies that there is still not enough trust in MF if the results are not as expected. Whether things are raised in their MB although they are not explicitly accessed heavily depends on their connectivity in the graph. Thus, isolated areas might drop although their relevancy is still given. Here, more automated interconnection is required, and we see that if things belong to some context, they are dragged along with the rest. Likewise, raising the MB for a whole area takes some time if the user jumps into a previously neglected area.
Several actions are required to differentiate between visiting by chance and really working in that area again. This also implies considering contexts which can be revisited after a long time, where the user would expect everything to be in place, in analogy to the brain, which is able to quickly reconstruct a scenery. Up to now, we have only discussed the forgetting aspect in our scenario. But from the knowledge management viewpoint, the aspect of long-term information value also plays an important role. In \cite{MausJilekSchwarz2018} we also considered a ``preservation value'' as an orthogonal view on things capturing their long-term importance, although they might only have had a short-term relevancy for the user.\\ From these experiences and projections with other research threads, we identified challenges which will be addressed in Section~\ref{sec:Challenges}. \subsection{Untapped potential of explicated user context and features inspired by Memory Inhibition} \label{sec:potential} \paragraph{Explicated User Context.} From the experiences presented in the last section, we learned that a greater focus on user context can further improve the solutions found so far. We especially assume that users are aware of the concept of context and of what their current context is (at least most of the time) \cite{GomezPerez2009}. In \cite{JilekSchroederSchwarz+2018} we presented a first SemDesk prototype\footnote{demo video at \url{https://pimo.opendfki.de/cSpaces/}} that has context as an explicit element users can work in and interact with. We will go into details in Section \ref{sec:Context}. Additionally, our current information value assessment can be improved by introducing context-sensitive MB values (see \ref{sec:AdvMB}). \paragraph{Memory Inhibition.} In cognitive psychology, the term \textit{Memory Inhibition} describes the temporary suppression of currently irrelevant or misleading information in order to facilitate the processing of relevant information \cite{levy2002inhibitory}.
Cognitive psychology experiments like \cite{TempelFrings2016} revealed that intentionally forgetting about one task can enhance subsequent cognitive performance, such as encoding and recall of word material. A prominent explanation for these benefits is memory inhibition of intentionally forgotten information \cite{Bjork1989}. Inhibition can also help to efficiently switch contexts by mentally segregating irrelevant, inhibited information from currently relevant information \cite{StormStone2015}. We intend to transfer these results to user contexts in knowledge work, assuming that allowing users to intentionally forget about their recently irrelevant contexts increases their performance on the current one. To allow users to intentionally forget, we will implement features inspired by Memory Inhibition, a concept which, to the best of our knowledge, has so far -- if at all -- only been used implicitly in computer science, without explicitly calling it that. In \cite{TempelNiedereeJilek+2018}, we give an overview of Memory Inhibition in cognitive and computer science and especially its potential for the latter. To get an impression of how one could define inhibition in computer science, consider the following example: information items associated with different contexts are activated by Spreading Activation (SA) \cite{Crestani1997}. Then, items of those contexts that have been activated but are irrelevant for the target context will be suppressed (inhibited). So, there is an additional differentiation mechanism for items correctly activated by classic SA with respect to a given target context. Thus, inhibition would be implemented to reduce or overcome interference due to information that is irrelevant for a particular target context. Importantly, if contexts switch again, inhibition of previously irrelevant items is released -- thus inhibiting items always means making items temporarily unavailable.
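The inhibition example above can be sketched as follows; the example graph, the activation and suppression values, and the simple one-hop spreading rule are illustrative assumptions rather than a concrete proposal:

```python
# Sketch of spreading activation (SA) with inhibition: classic SA activates
# items of several contexts; items belonging to activated but currently
# irrelevant contexts are then suppressed, and released on a context switch.
# Graph, weights and the suppression factor are illustrative assumptions.

INHIBITION = 0.1   # residual activation kept by inhibited items

def spread(graph, seeds, decay=0.5):
    """One round of spreading activation from fully activated seed items."""
    activation = {item: 0.0 for item in graph}
    for s in seeds:
        activation[s] = 1.0
    for s in seeds:
        for neighbor in graph[s]:
            activation[neighbor] = max(activation[neighbor], decay)
    return activation

def inhibit(activation, contexts, target):
    """Temporarily suppress activation of items outside the target context."""
    return {item: (a if item in contexts[target] else a * INHIBITION)
            for item, a in activation.items()}

graph = {"budget.xlsx": ["proposal.pdf", "trip-photos"],
         "proposal.pdf": ["budget.xlsx"], "trip-photos": ["budget.xlsx"]}
contexts = {"project": {"budget.xlsx", "proposal.pdf"}, "vacation": {"trip-photos"}}

act = spread(graph, seeds=["budget.xlsx"])
focused = inhibit(act, contexts, target="project")
assert focused["proposal.pdf"] > focused["trip-photos"]   # off-context item suppressed

# On a context switch, inhibition is released: recompute for the new target.
refocused = inhibit(act, contexts, target="vacation")
assert refocused["trip-photos"] > refocused["proposal.pdf"]
```

Inhibited items keep only a small residual activation and are fully restored as soon as the target context changes, matching the ``temporarily unavailable'' semantics described above.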
\section{Challenges}
\label{sec:Challenges}

The last section already gave insights into the challenges we had to tackle to build a beneficial forgetful information system, e.g. establishing continuous user activity tracking or memory buoyancy calculation. In this section, we address challenges that are still open or that we have only partly solved so far. For the latter, we also give insights into how we intend to solve them. Although we focus on a system to support information management and knowledge work, some of the solutions may also be applicable to FIS in other domains.

\subsection{Capturing and efficiently storing metadata, especially contextual information}
\label{sec:Context}

\paragraph{Context.}
We learned from cognitive psychology (see~\ref{sec:potential}) that our previous research on context (e.g. \cite{Schwarz2010,MausSchwarzHaas+2011,JilekMausSchwarz+2015}) can be beneficial for MF: an information item can be very important in one context while being totally irrelevant in another. So, in order to finally decide about an item's relevancy and thus provide beneficial MF measures, we have to take its associated contexts into account. The context model we use is depicted in Figure~\ref{fig:ContextModel}. It is an extension of \cite{Schwarz2005}, which itself is an extension of \cite{Maus2001}. For our use case we added the following aspects: \textit{forgetting} (which parts of a context have been forgotten or condensed), \textit{focus} (which parts of a context are currently in focus; other parts may be temporarily hidden, for example), and \textit{hierarchy} (sub-/super-contexts).
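To make the three added aspects concrete, the extended context model can be sketched as a simple data structure. This is only an illustration; the class and attribute names are ours for this sketch and do not reflect the actual PIMO schema.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """Sketch of the extended context model: elements plus the three
    added aspects -- forgetting, focus, and hierarchy."""
    name: str
    elements: set = field(default_factory=set)    # associated information items
    forgotten: set = field(default_factory=set)   # forgotten or condensed parts
    focus: set = field(default_factory=set)       # parts currently in focus
    parent: "Context | None" = None               # super-context
    children: list = field(default_factory=list)  # sub-contexts

    def add_child(self, child: "Context") -> None:
        child.parent = self
        self.children.append(child)

    def visible(self) -> set:
        """Items shown to the user: the focused parts (if a focus is
        set, otherwise all elements), minus everything forgotten."""
        shown = self.focus if self.focus else self.elements
        return shown - self.forgotten

# Illustrative usage with hypothetical file names:
project = Context("ForgetIT", elements={"proposal.pdf", "budget.xls", "old_notes.txt"})
project.forgotten.add("old_notes.txt")
meeting = Context("KickoffMeeting", elements={"agenda.txt"})
project.add_child(meeting)
```

Representing forgetting and focus as separate sets keeps both reversible: an item can be brought back into view without ever having been deleted, matching the idea of temporary unavailability introduced in Section~\ref{sec:potential}.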
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{ContextModel}
\caption{By adding hierarchical, forgetting and focal aspects (blue), we extend the context model by Schwarz \cite{Schwarz2005}, which itself is an extension (green) of the one by Maus \cite{Maus2001} (black).}
\label{fig:ContextModel}
\end{figure}

\paragraph{Capturing Context.}
In order to take contexts into account, we have to capture them first, which implies the usage of sensors. Currently, we focus entirely on virtual sensors \cite{PereraZaslavskyChristen+2014}, as realized by our plug-outs (see~\ref{sec:SemDesk}), for example. Since the ultimate answer as to which items belong to which contexts is only available in a person's mind, there will probably always be a certain sensing and interpretation gap. Thus, we can only try to minimize it and approximate user contexts as well as possible. The aforementioned idea of treating contexts as explicit elements that users can work in and interact with (see~\ref{sec:potential}) also helps us in this regard: users can casually help in modeling their world by selecting their current context, working with it (e.g., to accomplish a task), adding or removing things, switching it, etc. We will come back to this aspect in Section~\ref{sec:Incentives}.

\paragraph{Storing Context.}
Once we are able to capture contexts, we also have to store them efficiently together with other (meta)data managed by our system, especially the semantic graph (PIMO). Since several support decisions need to be made in less than a second in order not to harm the user experience (system response time), we especially need data structures capable of real-time processing on common computing devices. The same is true for the information extraction methods used: they should be able to operate in (near) real-time.

\paragraph{Privacy Issues.}
Another aspect we have to take into account is the privacy issues that arise immediately when dealing with any kind of user activity tracking.
Since we are capturing possibly very sensitive data, we have to take measures to protect users' privacy. Possible solutions are, for example, allowing users to (temporarily) disable the observation, storing sensitive data only on the user's local device (no server/cloud sync), or only stimulating the semantic graph (``activating'' the respective parts) without permanently storing any details.

\subsection{Continuous, context-sensitive information value assessment}
\label{sec:AdvMB}

We presented our current version of information value assessment, which results in different MB values, in Section~\ref{sec:Prototype}. Doing IVA continuously is already a challenge, since each click of a user may alter the MB of possibly many things (depending on the semantic network's connectivity). One problem of our current solution is that there is only one MB value per resource for each user, without taking contexts into account. But, as stated before, an item's relevancy can vary strongly from one context to another. Thus, we are in the process of advancing our MB calculation to additionally take contexts into account \cite{JilekChwalekSchwarz+2018}. If a resource is not associated with a context, or not accessible by the user, its MB value for this user and context is zero. In cases in which we know that a certain activity (tagging a website, opening a document, etc.) was performed with a certain context selected (e.g. \textit{ForgetIT project proposal}), we thus only have to update a relatively small part of the semantic network compared to the original version, which also leads to performance gains. We call this the \textit{local MB}, since it is only calculated for a certain context. Nevertheless, we also keep a context-free MB value, the \textit{global MB}, summarizing all non-zero local MB values and thus providing overall relevancy information for a resource and a certain user.
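The bookkeeping of local and global MB values can be sketched as follows. This is a minimal illustration, not the calculation from \cite{JilekChwalekSchwarz+2018}; in particular, taking the maximum as the summary of the non-zero local values is an assumption of this sketch.

```python
class MemoryBuoyancy:
    """Sketch of context-sensitive MB bookkeeping: one local MB value
    per (resource, context) pair, plus a derived context-free global MB."""

    def __init__(self):
        # local_mb[resource][context] -> float
        self.local_mb = {}

    def update(self, resource, context, value):
        """Record an activity (tagging a website, opening a document,
        ...) performed with the given context selected: only this
        context's local MB is touched, not the whole network."""
        self.local_mb.setdefault(resource, {})[context] = value

    def local(self, resource, context):
        """Zero if the resource is not associated with the context."""
        return self.local_mb.get(resource, {}).get(context, 0.0)

    def global_mb(self, resource):
        """Summarize all non-zero local MB values; the maximum is one
        plausible aggregation (an assumption of this sketch)."""
        values = [v for v in self.local_mb.get(resource, {}).values() if v > 0]
        return max(values) if values else 0.0

# Illustrative usage with hypothetical names:
mb = MemoryBuoyancy()
mb.update("proposal.pdf", "ForgetIT", 0.9)
mb.update("proposal.pdf", "Teaching", 0.1)
```

Because an update only touches one (resource, context) entry, a user activity performed in a selected context stays cheap, which is exactly the performance gain described above.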
Especially with regard to OM, we additionally introduce a \textit{group MB} that summarizes the global MB values of different users. More details and a first prototypical implementation are presented in \cite{JilekChwalekSchwarz+2018}.

\subsection{Improve user interfaces and support features to enable cognitive offloading}

Have you ever turned your head to read a tilted headline more easily? If so, you have been performing a form of \textit{Cognitive Offloading} \cite{RiskoGilbert2016}: every strategic use of physical actions, like tilting your head or using a calculator, to reduce cognitive demands is defined as cognitive offloading. Only two decades ago, people still learned phone numbers by heart in order to be able to call somebody. Today, you just select the person's name on your (smart)phone and a connection is established (assuming you have once added that person's data to your contact list). In this way, we use several devices as our extended external memory store. Recent research in cognitive psychology shows that such memory offloading can have benefits for subsequent cognitive performance comparable to those discussed for intentional forgetting in~\ref{sec:potential} \cite{StormStone2015,RungeFringsTempel2018}. As shown in \cite{RungeFringsTempel2018}, the possibility to store information externally can even be seen as an implicit cue to intentionally forget the offloaded information, as long as you can rely on the device to continue storing it \cite{StormStone2015}. In our scenario, think of contexts that ``tell'' users what they did the last time they were working in/with them: which documents have been read or written, which open tasks remain to be performed, etc. Such an implementation allows users to rely on the SemDesk as their external memory for all work progress achieved.
The intentional forgetting of externally stored work progress can then benefit subsequent tasks and ease the switch from one context to another. Due to our transparent integration \cite{JilekSchroederSchwarz+2018}, these contexts are also available as folders in the file system. Thus, the metaphor of folders presenting users their ``golden thread'' is close to realization. Our ultimate goal would be to also bring back all applications in the exact state they were in when they were last opened in that certain context. This is hard to achieve, however, due to missing interfaces for resuming applications. In general, we face the challenge of creating user interfaces that fit well with our MF capabilities, so that users are actually able to cognitively offload. If they still have to keep rather unimportant or too many things in mind, or worry about storing things in a way that allows finding them later, cognitive offloading is not possible. It is the system that should note what they did last, ensure that the link to a certain website is stored for next time, bring up certain reminders or documents as soon as they become relevant (again), etc.

\subsection{From word lists to complex knowledge work scenarios}

\subsection{Incentives to maximize users' willingness to contribute}
\label{sec:Incentives}

In Section~\ref{sec:Context}, we gave reasons why we will not be able to have a fully automated system. We will rely on users helping to make aspects of their mental model explicit in their PIMO. Therefore, we should provide incentives that make them actually willing to do so, e.g. if they add a sent e-mail to a certain context, an incoming reply could automatically be associated with that context, too. Furthermore, the investment they have to make should be as low as possible. We thus face the challenge of designing interfaces and functionality so well that a single click or drag operation can already mean a lot, for example.
This goes along with the important aspect that users should immediately have (and see!) benefits from an action such as annotating a web page with a task (which in turn is an explicit ``modelling'' act in the semantic graph, done by the user in the annotation sidebar). From our experience in knowledge work support, it is therefore important to embed the support into the users' daily work and to try to create a context in which information needs can be derived and the required information provided. Hence, it is important to find scenarios and use cases where this support is beneficial for users, such as process work embedded in the e-mail client \cite{LampasonaRostaninMaus12} or a context space for solving tickets \cite{JilekSchwarzMaus+2016}. Here, each action in that environment immediately leads to benefits for the user. A mismatch between investment and resulting benefit may lead to a vicious circle of knowledge management \cite{ProbstRaubRomhardt97}. Our system should support the way towards a ``perfect model'' (i.e. the user's PIMO and mental model being perfectly in sync) by allowing a sequence of tiny activities. The SemDesk ecosystem already takes this direction by crawling information sources such as calendars, providing a sidebar for semantic bookmarking, allowing semantic notes to be written, and offering task management. For tasks, such activities could be: create a task, set deadlines, add notes, web or file links, etc. Thus, the user can decide at any point whether to go another step or stop, whereby both the system and the user benefit from each additional step taken. Especially in the aforementioned multi-tasking craziness, in which users are under high pressure to continue their work and not spend time on seemingly unnecessary steps, they can thus more easily regulate the amount of distraction they are currently willing to accept.
One of our hypotheses is that even if the return on investment of a ``modelling activity'' is quite high, a corresponding user investment that is too high may prevent the activity from being performed at all. In contrast, an interruptible sequence of tiny actions more likely leads to users performing at least some of the ``modelling steps''. Other incentive measures, possibly well applicable in OM scenarios, are discussed in \cite{Lazaruk2012}, for example.

\subsection{Forgetful information systems need to be cautious and trustworthy}

In experiments, interviews and discussions conducted in our forgetting-related projects, we observed the tendency of users, especially experts, to rather mistrust automated system decisions for fear of losing their stuff. As a consequence, we strive to design our forgetful system to be cautious, i.e. rather doing nothing than doing something wrong. By acting this way, we seek to earn the users' trust; they need to be sure that nothing harmful (a data loss) will happen. In particular, this means that if the collected and interpreted evidence does not justify taking a certain support action, the system will refrain from performing it. Our colleagues \cite{SiebersGoebelNiessen+2017} go one step further and have their system take no action without user confirmation, which is what we also did in the past: when the MB of files dropped below a certain threshold and the system selected them for deletion on the user's current device, users were first asked for confirmation. Additionally, since our system is still a research prototype, we have not deleted any files completely so far; there is always a backup in an archive. Nevertheless, we abandoned asking for confirmation since it interrupted users and drew their attention to actually forgotten regions, resulting in the contrary effect of remembering, and counteracted our idea of \mbox{self-organization}.
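The cautious policy just described can be sketched as a simple decision function. The thresholds and action names below are illustrative assumptions for this sketch, not our system's actual parameters.

```python
def cautious_action(mb_value, hide_below=0.3, archive_below=0.1):
    """Decide what to do with a resource given its memory buoyancy (MB).
    Rather do nothing than do something wrong: deletion never happens,
    the strongest action is archiving, so a backup always remains, and
    no confirmation dialog draws attention to the forgotten item."""
    if mb_value < archive_below:
        return "archive"   # move off the current device; backup remains
    if mb_value < hide_below:
        return "hide"      # keep the item but hide it by default
    return "keep"          # evidence does not justify any action
```

The ordering of the checks encodes the cautiousness principle: the less evidence of relevancy, the stronger (but still reversible) the action, and whenever the evidence is ambiguous the system simply refrains from acting.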
In \cite{JilekSchroederSchwarz+2018}, we presented a first prototype of a self-reorganizing SemDesk based on MF features. Folders can also be seen as contexts, and the system is even able to infuse its managed contexts into the file system. Following the aforementioned principle of cautiousness, automatically reorganizing these contexts, e.g. merging or splitting them, should not lead to totally different paths that users have to follow in order to re-find desired information. Since navigating folders (contexts) follows the human intuition of navigating a map \cite{BennBergmanGlazer+2015}, sudden, unexpected changes are potentially harmful here. We could thus restrict our system to only merging parent and child contexts after some time has passed. Greater modifications, like merging contexts of similar but not directly related topic areas, would only be allowed if a lot of time has passed and there is no new evidence indicating that one of the topics is still relevant. In that case, the goal of automatically tidying things up for the user would justify such a merge of similar contexts. \cite{BennBergmanGlazer+2015} also states that Personal Information Management (PIM) applications should take folders (contexts) as given and try to exploit and improve them rather than replace them, an advice we intend to follow as closely as possible.

\subsection{How to gain trust in forgetful search and how to visualize the forgotten?}

One major question we still have to solve is how to get users to trust forgetful search. Consider the example of a user entering keywords into the search field of a forgetful system and no (or seemingly incomplete) results being shown. Several questions could come to that user's mind: \textit{Have I used the ``right'' keywords? Have I really saved the things I am now looking for?
I'm sure I saved it, so why doesn't it show up?} The challenge of establishing trust in forgetful search is closely linked to the question of how to visualize what has actually been forgotten (forgotten in the sense of MF, i.e. only a condensed version remains or something is hidden by default due to low MB). In the scenario just mentioned, we could inform the user that there is currently no search result in the ``active'' part of their data, but something in the forgotten area. This could be accompanied by measures trying to visualize how search results belong to certain areas of the semantic network, e.g. as thematic clusters, as well as how much of the semantic graph is covered by the current search result set. Additionally, as done in the PIMO, if the user enters an exact match of a thing's label, e.g. the full name of a person or project, then it would be justified to directly show actually forgotten items, since the user seems to have remembered something they have not used for a long time. In general, we have to find a balance: MF mechanisms should prevent users from being overwhelmed by the potentially high number of search results, but users still have to find the things they are looking for, especially if they come back to something not accessed for a very long time (where ``accessed'' especially means that a whole topic area of the semantic graph has not been stimulated for a long time; accessing related topics would have raised the MB otherwise).

\subsection{How to evaluate forgetful information systems?}

All challenges mentioned so far share the problem of how to evaluate their possible solutions. Evaluating an FIS like ours is hard for several reasons. First, since we support information management and knowledge work, users' views on their stuff are subjective \cite{Dengel2006}, which restricts evaluation scenarios. Second, there is still no publicly available PIM dataset.
To the best of our knowledge, \cite{AbelaStaffHandschuh2015} is the most recent paper mentioning the plan to release such a dataset ``in the near future'', which was already three years ago. \cite{Gonccalves2011} even argues that if such a dataset were available, it would still lack the semantic information needed to really make use of the data (e.g. whether a term is the name of a project, or whether a mentioned person is a co-worker or a spouse, etc.). Other approaches like \cite{KimCroft2009b} created pseudo desktop collections for their experiments (on information retrieval). These collections neglect important sources like bookmarks or calendar events, as well as structures like the file folder hierarchy, which also carry a lot of semantics. Last but not least, we have the additional aspect of forgetting, which leads us to the hypothesis that participants need to evaluate such systems using their own data. How could people otherwise judge whether things were forgotten correctly if they never knew the data? To solve this problem, we intend to semi-automatically bootstrap the semantic graph (PIMO) of each participant before starting the evaluation of forgetting capabilities.

\section{Conclusion \& Outlook}

In this paper, we gave an overview of information management and knowledge work support measures developed in two forgetting-related projects. In the first one, \textit{ForgetIT}, we enhanced our productively used, SD-based OM system with forgetting capabilities, thus creating one of the first FIS used in practice. From the beginning in 2013, we searched for solutions inspired by findings of cognitive psychology, yielding concepts like MF and MB. In the recent and still ongoing \textit{Managed Forgetting} project, we especially focus on exploiting the yet untapped potential of more explicated user context and support measures inspired by Memory Inhibition.
We presented challenges that arise in the field of FIS, discussed how we tackled some of them, and gave insights into how we intend to solve the still open ones, which will be our focus in the remainder of the project.